Commit d62b532

Use model->gguf_kv for loading the template instead of using the C API. (#10868)
* Bump model_template to 16384 bytes to support larger chat templates.
* Use `model->gguf_kv` for efficiency.
1 parent: 081b29b

File tree

1 file changed: +8 -8 lines


src/llama.cpp

Lines changed: 8 additions & 8 deletions
@@ -22651,15 +22651,15 @@ int32_t llama_chat_apply_template(
     std::string curr_tmpl(tmpl == nullptr ? "" : tmpl);
     if (tmpl == nullptr) {
         GGML_ASSERT(model != nullptr);
-        // load template from model
-        std::vector<char> model_template(2048, 0); // longest known template is about 1200 bytes
-        std::string template_key = "tokenizer.chat_template";
-        int32_t res = llama_model_meta_val_str(model, template_key.c_str(), model_template.data(), model_template.size());
-        if (res < 0) {
+
+        // load template from model, if available
+        const auto & it = model->gguf_kv.find("tokenizer.chat_template");
+        if (it != model->gguf_kv.end() && it->second.size() > 0) {
+            curr_tmpl = it->second;
+        }
+        else {
             // worst case: there is no information about template, we will use chatml by default
-            curr_tmpl = "chatml"; // see llama_chat_apply_template_internal
-        } else {
-            curr_tmpl = std::string(model_template.data(), model_template.size());
+            curr_tmpl = "chatml"; // see llama_chat_apply_template_internal
         }
     }
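For context, a minimal caller-side sketch of the code path this commit changes, assuming the llama.h C API as it stood around this commit (the model-pointer overload of llama_chat_apply_template); the helper name apply_model_template and the buffer size are illustrative, not from the commit. Passing tmpl == nullptr takes the branch rewritten above, so the template is read from the model's tokenizer.chat_template GGUF metadata via model->gguf_kv, with "chatml" as the fallback.

// Minimal sketch, not part of the commit: exercising the changed branch by
// passing tmpl == nullptr so the template comes from the model's metadata.
// Assumes the llama.h C API as of this commit; apply_model_template is a
// hypothetical helper name.
#include <string>
#include <vector>
#include "llama.h"

static std::string apply_model_template(const llama_model * model,
                                        const std::vector<llama_chat_message> & msgs) {
    std::vector<char> buf(16384, 0); // roomier than the old 2048-byte guess
    int32_t n = llama_chat_apply_template(model, /*tmpl=*/nullptr,
                                          msgs.data(), msgs.size(),
                                          /*add_ass=*/true,
                                          buf.data(), (int32_t) buf.size());
    if (n < 0) {
        return ""; // template not handled by llama_chat_apply_template_internal
    }
    if (n > (int32_t) buf.size()) {
        // the return value is the required size; grow the buffer and re-apply
        buf.resize(n);
        n = llama_chat_apply_template(model, nullptr, msgs.data(), msgs.size(),
                                      true, buf.data(), (int32_t) buf.size());
    }
    return std::string(buf.data(), n);
}

Because the template now comes straight out of the model's gguf_kv map as a std::string, the fixed-size intermediate buffer (and its truncation risk for templates longer than 2048 bytes) disappears from this path entirely.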
