Mistral reasoning parser fails on startup with ValueError
Hello,
It looks like the mistral reasoning parser was recently added to vLLM (v0.10.0).
I can see it in the code here:
https://github.com/vllm-project/vllm/blob/6d8d0a24c02bfd84d46b3016b865a44f048ae84b/vllm/reasoning/__init__.py
And vLLM confirms it's a valid choice if you provide a wrong value:
vllm serve: error: argument --reasoning-parser: invalid choice: 'mistral_' (choose from 'deepseek_r1', 'glm4_moe', 'granite', 'hunyuan_a13b', 'mistral', 'qwen3')
However, when running the server with the --reasoning-parser mistral flag, the startup fails right after the model loads with this error:

File "/vllm/env/lib/python3.12/site-packages/mistral_common/tokens/tokenizers/tekken.py", line 440, in get_control_token
ERROR 07-28 16:32:50 [core.py:632]     raise ValueError(f"Unknown control token {s}")
ERROR 07-28 16:32:50 [core.py:632] ValueError: Unknown control token SpecialTokens.begin_think
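From the traceback, the failure seems to be that the tokenizer has no begin_think special token registered, so the lookup raises. A simplified sketch of that lookup logic (hypothetical, modeled only on the traceback above, not the actual mistral_common implementation):

```python
# Hypothetical sketch of the failing control-token lookup, based on the
# traceback; NOT the real mistral_common tekken.py code.
class TekkenizerSketch:
    def __init__(self, control_tokens: dict[str, int]):
        # Map of registered special/control tokens to their ids.
        self._control = control_tokens

    def get_control_token(self, s: str) -> int:
        # Raises if the requested control token was never registered,
        # mirroring the ValueError seen in the vLLM startup log.
        if s not in self._control:
            raise ValueError(f"Unknown control token {s}")
        return self._control[s]

# A tokenizer without a "begin think" token registered:
tok = TekkenizerSketch({"<s>": 0, "</s>": 1})
try:
    tok.get_control_token("SpecialTokens.begin_think")
except ValueError as e:
    print(e)  # → Unknown control token SpecialTokens.begin_think
```

If this reading is right, the parser itself may be fine and the problem is the loaded tokenizer config simply not defining the thinking tokens.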
Also, the mistral-common pull request by @juliendenize (https://github.com/mistralai/mistral-common/pull/122) does not mention adding the reasoning parser, even though it appears to be where it was added.
So that makes me wonder: what is the point of adding the mistral reasoning parser if it doesn't seem to run with the only Mistral reasoning model, as far as I know?
This seems like a very useful feature for getting reasoning content under a dedicated field, as it minimizes parsing errors. Other open-source models like Qwen3 have it implemented as well.
Is there a specific model version or dependency I'm missing? Thanks for any clarification.
Hi, are you sure both vLLM and mistral-common are up to date? From the error message, it looks like the latter is not.
vLLM is on the latest version:
pip show vllm
Name: vllm
Version: 0.10.0
and mistral-common as well:
pip show mistral-common
Name: mistral_common
Version: 1.8.3