Requests get stuck when sending long prompts (already solved, but still don't know why)

#18
by uv0xab - opened

I am using an H20 server to deploy the deepseek-r1-awq model. --max-model-len is configured as 16384, and all other vLLM parameters are the same as suggested in the model card.

However, when the prompt is large (usually more than 1000 tokens), the request gets stuck. On the server side, the logs show that inference is still running at around 10 tokens/s, but on the client side there is simply no response. This issue only happens when sending a new request (i.e. it does not break any request that is already streaming a response).

I accidentally solved this problem after being inspired by other discussions in this repo: adding --quantization moe_wna16 made the hangs go away, although I initially added it only as a performance optimization. I have no idea why or how it is related to this problem.
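For reference, here is a minimal sketch of the kind of configuration being described, written against vLLM's offline Python API rather than the server CLI; the model ID and tensor-parallel size are assumptions on my part, not the asker's exact command:

```python
from vllm import LLM, SamplingParams

# Hypothetical reconstruction of the setup described above, using the offline
# API instead of `vllm serve`. Model ID and tensor_parallel_size are assumed.
llm = LLM(
    model="cognitivecomputations/DeepSeek-R1-AWQ",  # assumed model path
    max_model_len=16384,                            # as configured by the asker
    quantization="moe_wna16",                       # the flag that made the hangs disappear
    tensor_parallel_size=8,                         # assumed for an 8-GPU H20 node
)

params = SamplingParams(max_tokens=512, temperature=0.6)
out = llm.generate(["Explain the difference between float16 and bfloat16."], params)
print(out[0].outputs[0].text)
```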

Does anyone have any ideas?

Cognitive Computations org

This may be related to the float16 overflow issue. moe_wna16 supports bfloat16, which doesn't have the overflow issue.
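To illustrate the overflow point with a quick numeric demonstration (not taken from the discussion): float16 tops out around 65504, while bfloat16 keeps float32's exponent range, so a value that saturates to inf in float16 stays finite in bfloat16.

```python
import torch

# float16 overflows just above 65504; bfloat16 trades mantissa bits for
# float32's exponent range, so the same value stays finite.
x = torch.tensor(70000.0)
print(x.to(torch.float16))              # tensor(inf, dtype=torch.float16) -> overflow
print(x.to(torch.bfloat16))             # tensor(70144., dtype=torch.bfloat16) -> finite, lower precision
print(torch.finfo(torch.float16).max)   # 65504.0
print(torch.finfo(torch.bfloat16).max)  # ~3.39e38
```

Whether such an overflow is really what makes the server stop responding here is still speculation, but it would be consistent with only long prompts triggering the problem.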
