Support jinja extra template kwargs (Qwen3 enable_thinking feature), from command line and from client #13196

Open: matteoserva wants to merge 15 commits into master from enable_thinking

Conversation

@matteoserva (Contributor) commented Apr 29, 2025

This PR implements support for setting additional jinja template parameters.
An example is enable_thinking in the Qwen3 models' chat template.

Main features:

  • Setting jinja variables from the command line with --chat_template_kwargs or its matching environment variable
  • Setting variables per request in the OpenAI-compatible API via the chat_template_kwargs parameter
  • Compatibility with the vLLM API, which uses the same parameter name and syntax
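
As a quick illustration of the per-request mechanism, a client can toggle thinking without restarting the server. This is a minimal sketch, assuming the openai Python package pointed at a llama-server from this branch running with --jinja on localhost:8080; extra_body is that client's pass-through for non-standard fields, and the field name matches the vLLM API:

# Minimal sketch: per-request template kwargs via the OpenAI-compatible API.
# Assumes llama-server from this branch, started with --jinja on localhost:8080.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key-required")

# chat_template_kwargs is forwarded into the jinja template context; with the
# Qwen3 template, enable_thinking=False pre-closes the <think> block.
response = client.chat.completions.create(
    model="Qwen/Qwen3-8B",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    extra_body={"chat_template_kwargs": {"enable_thinking": False}},
)
print(response.choices[0].message.content)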

Other info

The official Qwen3 template is still only partially compatible with llama.cpp's jinja engine, so I modified it to use only supported features. The modified templates are here: https://pastebin.com/16ZpCLHk and https://pastebin.com/GGuTbFRc
They should be loaded with: llama-server --jinja --chat-template-file {template_file}

This PR fixes #13160 and #13189.

Test it with:

  • enable_thinking=false. Expected: {"prompt":"\n<|im_start|>user\nGive me a short introduction to large language models.<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n"}
curl http://localhost:8080/apply-template -H "Content-Type: application/json" -d '{
  "model": "Qwen/Qwen3-8B",
  "messages": [
    {"role": "user", "content": "Give me a short introduction to large language models."}
  ],
  "temperature": 0.7,
  "top_p": 0.8,
  "top_k": 20,
  "max_tokens": 8192,
  "presence_penalty": 1.5,
  "chat_template_kwargs": {"enable_thinking": false}
}'
  • enable_thinking=true
curl http://localhost:8080/apply-template -H "Content-Type: application/json" -d '{
  "model": "Qwen/Qwen3-8B",
  "messages": [
    {"role": "user", "content": "Give me a short introduction to large language models."}
  ],
  "temperature": 0.7,
  "top_p": 0.8,
  "top_k": 20,
  "max_tokens": 8192,
  "presence_penalty": 1.5,
  "chat_template_kwargs": {"enable_thinking": true}
}'
  • enable_thinking undefined
curl http://localhost:8080/apply-template -H "Content-Type: application/json" -d '{
  "model": "Qwen/Qwen3-8B",
  "messages": [
    {"role": "user", "content": "Give me a short introduction to large language models."}
  ],
  "temperature": 0.7,
  "top_p": 0.8,
  "top_k": 20,
  "max_tokens": 8192,
  "presence_penalty": 1.5
}'

@matteoserva matteoserva requested a review from ngxson as a code owner April 29, 2025 18:58
@matteoserva matteoserva marked this pull request as draft April 29, 2025 18:58
@rhjdvsgsgks (Contributor)

Can you add chat_template_kwargs as a CLI argument as well?

@matteoserva (Contributor, Author) commented Apr 30, 2025

> Can you add chat_template_kwargs as a CLI argument as well?

I added it. I tested it with this updated command (you might want to check the escaping of the double quotes):
--chat_template_kwargs "{\"enable_thinking\":false}" --jinja --chat-template-file qwen/qwen3_template.txt

@matteoserva matteoserva force-pushed the enable_thinking branch 2 times, most recently from d1861c4 to 01b58b5 Compare April 30, 2025 15:58
@matteoserva matteoserva changed the title [RFC] handling jinja extra template kwargs (Qwen3 enable_thinking feature) Support jinja extra template kwargs (Qwen3 enable_thinking feature), from command line and from client Apr 30, 2025
@matteoserva matteoserva marked this pull request as ready for review April 30, 2025 15:59
@neolee commented May 1, 2025

Very useful for the Qwen3 series. +1 for this feature!

@xiaomi102

The --chat_template_kwargs option doesn't work from the CLI: error: invalid argument: --chat-template-kwargs
git clone --depth=1 -b enable_thinking https://github.com/matteoserva/llama.cpp
[Screenshot 2025-05-09 061336]

@celsowm commented May 9, 2025

@ggerganov is there any reason why this PR has not been accepted and merged yet?

@matteoserva (Contributor, Author)

> The --chat_template_kwargs option doesn't work from the CLI: error: invalid argument: --chat-template-kwargs

This PR is implemented only for llama-server and its webui.

llama-cli has unresolved bugs that prevent me from enabling this feature there.

@xiaomi102

> This PR is implemented only for llama-server and its webui.
>
> llama-cli has unresolved bugs that prevent me from enabling this feature there.

Hope you'll integrate it for the CLI environment soon, thanks!

@matteoserva (Contributor, Author)

> Hope you'll integrate it for the CLI environment soon, thanks!

I'll open a new PR soon for llama-cli. The code is ready, but it's blocked by #13402 and #13404.

@matteoserva matteoserva force-pushed the enable_thinking branch 2 times, most recently from d44d099 to 5b3de5d Compare May 15, 2025 14:09
@celsowm commented May 15, 2025

An enable_thinking checkbox or something like that would be nice in the llama.cpp webui too.

@strawberrymelonpanda (Contributor) commented May 15, 2025

> @ggerganov is there any reason why this PR has not been accepted and merged yet?

@celsowm Lack of eyes on this area would be my guess.

With 438 open PRs (many obsolete), I've come to accept that I'll need to pull in the PRs of interest to me when building.

@neolee commented May 16, 2025

> @ggerganov is there any reason why this PR has not been accepted and merged yet?
>
> @celsowm Lack of eyes on this area would be my guess.
>
> With 438 open PRs (many obsolete), I've come to accept that I'll need to pull in the PRs of interest to me when building.

vLLM and SGLang had this feature the day Qwen3 was released. Meanwhile, many useful enhancement and fix PRs become obsolete just because of merge delays here in the llama.cpp community. Really sad about that.

@ggerganov ggerganov requested a review from ochafik May 16, 2025 05:28
@matteoserva matteoserva requested a review from ggerganov May 16, 2025 06:52
@Neath commented May 16, 2025

This is so necessary when dealing with Qwen3! Can't wait to see this merged and be able to use the latest version with this <3

@ochafik (Collaborator) commented May 25, 2025

  • While being able to pass kwargs will be very useful in general (🎉), I think for the particular case of enable_thinking we will probably want to special-case it, since there are a few thinking models around, some of which force the thinking tag open (a common --disable-thinking flag could close their tags, and set enable_thinking: false for Qwen3).

> Why not accept this PR first to get the general "pass kwargs to chat template" feature, then implement the enable_thinking flag using this feature?

@neolee I think disabling thoughts should be handled in a more dedicated way; the enable_thinking extra arg works for Qwen3 only. I sent out #13771 for review to handle the other supported thinking models.

> I added the support also to common_chat_params_init_generic() in the latest commit. I don't know if it makes sense for the other specialized init functions.

@matteoserva Thanks! I think each and every code path should pass the extra context variables if we want to support generic use cases (although, as I just noted in a review comment, we should be wary of requiring users to set custom key-values for features that would be better handled in a generic, unified way across templates and models; ideally most users won't need to inspect any templates, and the ones who do can trivially create their own template overrides anyway).

@matteoserva (Contributor, Author)

Just to make it explicit, the added value of this PR is:

  • You can set variables per request instead of restarting the server with a different template, as sketched below
  • Compatibility with the vLLM API, so the same client code works for both engines
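
For illustration, here is a rough sketch of the mechanism in Python, with jinja2 standing in for the server-side template renderer. The merge order (per-request values over CLI defaults) and the toy template are assumptions made for the example, not the exact implementation:

import json
from jinja2 import Template

# Server-wide default, as it would come from the command line:
#   --chat_template_kwargs "{\"enable_thinking\":false}"
cli_kwargs = json.loads('{"enable_thinking": false}')

# Per-request value from the request body's chat_template_kwargs field.
request_kwargs = {"enable_thinking": True}

# Toy reduction of the Qwen3 template: when thinking is explicitly disabled,
# an empty <think> block is pre-closed in the prompt, as in the tests above.
template = Template(
    "<|im_start|>user\n{{ content }}<|im_end|>\n<|im_start|>assistant\n"
    "{% if enable_thinking is defined and not enable_thinking %}<think>\n\n</think>\n\n{% endif %}"
)

# Per-request kwargs override the CLI defaults.
context = {"content": "Hello", **cli_kwargs, **request_kwargs}
print(template.render(**context))

With enable_thinking true or left undefined, the template leaves the <think> block open for the model to fill, matching the expected outputs in the test section of the PR description.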

@aviallon (Contributor)

Thanks for this PR @matteoserva, I'm cherry-picking it for my own use :)

@ochafik (Collaborator) left a review

Looks good to me after a few nits are addressed.

@ngxson would you mind giving it a final look and approving it?

@@ -73,6 +74,7 @@ struct common_chat_templates_inputs {
bool parallel_tool_calls = false;
bool extract_reasoning = true;
std::chrono::system_clock::time_point now = std::chrono::system_clock::now();
std::map<std::string, std::string> chat_template_kwargs; // extra kwargs forwarded to the jinja template context
@ochafik (Collaborator) commented on this diff on May 26, 2025

I took the liberty of pushing an update to your branch with support for propagating the extra context to all the chat formats, with slightly simpler code.

Could you please update the PR description to mention the new alternative way to disable thinking globally (#13771; see also the discussion of an upcoming per-request mechanism in #13272), and to add a link to the vLLM API noting that the new param has the same syntax (I only realized that from your last comment :-)).

@matteoserva matteoserva marked this pull request as draft May 27, 2025 07:51
@matteoserva matteoserva marked this pull request as ready for review May 27, 2025 09:13
@matteoserva matteoserva requested a review from ochafik May 27, 2025 09:13
@matteoserva (Contributor, Author)

I applied the requested changes, but the latest rebase was rough. Please review carefully to make sure I didn't make any mistakes (sorry).

@qinxuye commented May 29, 2025

We really need this PR; maintainers, please review it.

Successfully merging this pull request may close these issues:

Misc. bug: Qwen 3.0 "enable_thinking" parameter not working