| commit_hash (string, length 40) | pr_url (string, length 11-47) | has_lm_eval (bool, 2 classes) | has_performance (bool, 2 classes) | has_serving (bool, 2 classes) | has_general_test (bool, 2 classes) | test_details (string, length 4-563) | timeline_text (string, length 0-155k) | extracted_at (string, length 0-19) |
---|---|---|---|---|---|---|---|---|
baeded25699f9f4851843306f27f685c4d4ee7c5 | https://github.com/vllm-project/vllm/pull/12601 | false | false | false | true | TEST: test, ci, ci |
Collaborator LucasWilkinson commented Jan 31, 2025: Based off of: #12528 that needs to land first. LucasWilkinson and others added 21 commits January 30, 2025 16:57 squashed commits … 27ad92c Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]> fix VLLM_MLA_PERFORM_MATRIX_ABSORPTION=0 … c34e5ca Signed-off-by: Lucas Wilkinson <[email protected]> more cleanups … f2cac91 Signed-off-by: Lucas Wilkinson <[email protected]> Update utils.py … 068e672 Co-authored-by: Michael Goin <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]> Update vllm/attention/backends/mla/utils.py … 31b802c Co-authored-by: Michael Goin <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]> review comments … 634eee6 Signed-off-by: Lucas Wilkinson <[email protected]> renaming for consistency … 7487429 Signed-off-by: Lucas Wilkinson <[email protected]> Update vllm/config.py … d27826d Co-authored-by: Zhuohan Li <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]> review comments … 8bdc14a Signed-off-by: Lucas Wilkinson <[email protected]> review comments … 09d814c Signed-off-by: Lucas Wilkinson <[email protected]> Update vllm/attention/backends/mla/utils.py … 4a46014 Co-authored-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]> disable MLA for v3 for now … 0881475 Signed-off-by: Lucas Wilkinson <[email protected]> fix failing test … 37e39f4 Signed-off-by: Lucas Wilkinson <[email protected]> fix mypy … cfb2d26 Signed-off-by: Lucas Wilkinson <[email protected]> fix mypy … 5afc1bf Signed-off-by: Lucas Wilkinson <[email protected]> add cuda graph support … 54ba87d Signed-off-by: Lucas Wilkinson <[email protected]> ci fix … 31c34bf Signed-off-by: Lucas Wilkinson <[email protected]> Revert "add cuda graph support" … 433322b Signed-off-by: Lucas Wilkinson <[email protected]> Fix TP > 1 cuda graphs … f2b2500 Co-authored-by: Alexander Matveev <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]> cleanup … 2d61054 Co-authored-by: Alexander Matveev <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]> cleanup … 645622c Signed-off-by: Lucas Wilkinson <[email protected]> LucasWilkinson requested review from tlrmchlsmth, WoosukKwon, mgoin, robertgshaw2-redhat, zhuohan123, youkaichao, alexm-redhat, comaniac and njhill as code owners January 31, 2025 04:18. mgoin approved these changes Feb 1, 2025, with a review thread on vllm/model_executor/model_loader/loader.py (resolved). simon-mo and others added 2 commits February 1, 2025 00:56 Update loader.py … 0d66687 Co-authored-by: Michael Goin <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]> format … 5fe1d1d Signed-off-by: Lucas Wilkinson <[email protected]> LucasWilkinson force-pushed the mla-fp8 branch
from 282eec1 to 5fe1d1d on February 1, 2025 00:57. LucasWilkinson added 2 commits February 1, 2025 01:13 reduce split kv amount … 5d5071c Signed-off-by: Lucas Wilkinson <[email protected]> fix none type error … 7ac6f52 Signed-off-by: Lucas Wilkinson <[email protected]> mgoin mentioned this pull request Feb 1, 2025: Disable chunked prefill and/or prefix caching when MLA is enabled #12638 (Closed). ci fix … dc0e2af Signed-off-by: Lucas Wilkinson <[email protected]> LucasWilkinson mentioned this pull request Feb 1, 2025: [Attention] MLA with chunked prefill #12639 (Merged). simon-mo merged commit baeded2 into vllm-project:main Feb 1, 2025, 42 of 44 checks passed. Isotr0py pushed a commit
to Isotr0py/vllm
that referenced
this pull request Feb 2, 2025 [Attention] Deepseek v3 MLA support with FP8 compute ( vllm-project#12601 … c22f65d )
This PR implements Deepseek V3 support by performing matrix absorption of the FP8 weights
---------
Signed-off-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Zhuohan Li <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Alexander Matveev <[email protected]>
Signed-off-by: Isotr0py <[email protected]> srikanthsrnvs pushed a commit
to srikanthsrnvs/vllm
that referenced
this pull request Feb 3, 2025 [Attention] Deepseek v3 MLA support with FP8 compute ( vllm-project#12601 … bb94260 )
This PR implements Deepseek V3 support by performing matrix absorption of the FP8 weights
---------
Signed-off-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Zhuohan Li <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Alexander Matveev <[email protected]>
Signed-off-by: Srikanth Srinivas <[email protected]> Syst3m1cAn0maly mentioned this pull request Feb 3, 2025 [Bug]: MLA Warnings when using FP8 KV cache in v0.7.1 #12680 Closed sahelib25 pushed a commit
to krai/vllm
that referenced
this pull request Feb 3, 2025 [Attention] Deepseek v3 MLA support with FP8 compute ( vllm-project#12601 … 06f14ab )
This PR implements Deepseek V3 support by performing matrix absorption of the FP8 weights
---------
Signed-off-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Zhuohan Li <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Alexander Matveev <[email protected]> xuechendi referenced
this pull request
in yangw1234/habana-vllm-fork Feb 3, 2025 [Attention] Deepseek v3 MLA support with FP8 compute (#12601) … baf04c8 This PR implements Deepseek V3 support by performing matrix absorption of the FP8 weights
---------
Signed-off-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Zhuohan Li <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Alexander Matveev <[email protected]> houseroad mentioned this pull request Feb 4, 2025 DeepSeek: MLA attention pytorch/pytorch#146330 Open NickLucche pushed a commit
to NickLucche/vllm
that referenced
this pull request Feb 7, 2025 [Attention] Deepseek v3 MLA support with FP8 compute ( vllm-project#12601 … 6bb84bb )
This PR implements Deepseek V3 support by performing matrix absorption of the FP8 weights
---------
Signed-off-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Zhuohan Li <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Alexander Matveev <[email protected]> GWS0428 pushed a commit
to GWS0428/VARserve
that referenced
this pull request Feb 12, 2025 [Attention] Deepseek v3 MLA support with FP8 compute ( vllm-project#12601 … bd83b50 )
This PR implements Deepseek V3 support by performing matrix absorption of the FP8 weights
---------
Signed-off-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Zhuohan Li <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Alexander Matveev <[email protected]> gshtras reviewed Feb 14, 2025, commenting on vllm/attention/backends/mla/utils.py:

    def get_scale_group_shapes_for_fp8(layer: LinearBase) -> \
        Tuple[Tuple[int, int], Tuple[int, int]]:
        if isinstance(layer.quant_method, Fp8LinearMethod):
            if layer.quant_method.block_quant is not None:

Collaborator gshtras (Feb 14, 2025): Fp8LinearMethod.block_quant is a boolean, is there meant to be a check for False instead? Member mgoin (Feb 14, 2025): Yes this is a bug, I fixed it here #13181. LucasWilkinson mentioned this pull request Feb 25, 2025: Implement MLA for deepseek v3/r1 #12597 (Closed). yangulei pushed a commit
to yangulei/vllm-fork
that referenced
this pull request Mar 11, 2025 [Attention] Deepseek v3 MLA support with FP8 compute ( vllm-project#12601 … b339458 )
This PR implements Deepseek V3 support by performing matrix absorption of the FP8 weights
---------
Signed-off-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Zhuohan Li <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Alexander Matveev <[email protected]> shreyankg pushed a commit
to shreyankg/vllm
that referenced
this pull request May 3, 2025 [Attention] Deepseek v3 MLA support with FP8 compute ( vllm-project#12601 … 28320d1 )
This PR implements Deepseek V3 support by performing matrix absorption of the FP8 weights
---------
Signed-off-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Zhuohan Li <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Alexander Matveev <[email protected]>
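The commit message repeated by the cherry-picks above describes the change only as "matrix absorption of the FP8 weights". As background, the sketch below illustrates the linear-algebra identity that MLA-style absorption relies on: because each cached key is a linear function of the compressed KV latent, the key up-projection can be folded into the query projection once at load time. All shapes, names, and the plain float32 weights are illustrative assumptions, not vLLM's implementation, which additionally has to deal with the FP8 quantization scales (see the get_scale_group_shapes_for_fp8 discussion further up).

```python
import torch

# Toy illustration of MLA matrix absorption (illustrative shapes and names only).
# MLA caches a compressed KV latent c_kv rather than full per-head keys, and the
# per-head key is k = c_kv @ W_uk. Scores can therefore be computed against the
# latent cache directly once W_uk has been absorbed into the query projection.
d_model, d_latent, d_head = 256, 64, 32

W_q = torch.randn(d_model, d_head) / d_model**0.5     # query projection (one head)
W_uk = torch.randn(d_latent, d_head) / d_latent**0.5  # key up-projection from the latent

h = torch.randn(1, d_model)       # hidden state of the query token
c_kv = torch.randn(8, d_latent)   # cached compressed latents for 8 past tokens

# Naive: up-project every cached latent into a key, then compute scores.
scores_naive = (h @ W_q) @ (c_kv @ W_uk).T

# Absorbed: precompute W_q @ W_uk.T once (e.g. at weight-loading time) and
# score directly against the latent cache, skipping per-token key materialization.
W_q_absorbed = W_q @ W_uk.T                # (d_model, d_latent)
scores_absorbed = (h @ W_q_absorbed) @ c_kv.T

assert torch.allclose(scores_naive, scores_absorbed, rtol=1e-3, atol=1e-4)
```

The same folding applies analogously on the value side (the value up-projection can be absorbed into the output projection), and the "fix VLLM_MLA_PERFORM_MATRIX_ABSORPTION=0" commit in the list above suggests the behaviour is gated behind an environment flag.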
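On the review exchange above about get_scale_group_shapes_for_fp8: since Fp8LinearMethod.block_quant is a boolean, the `is not None` guard was always true, so non-block-quantized FP8 layers would also take the block-quant branch. The stand-ins below only demonstrate that failure mode and the corrected guard; they are hypothetical classes, not vLLM's, and the actual correction is the one mgoin landed in #13181.

```python
from dataclasses import dataclass

# Hypothetical stand-ins, just to show the guard logic discussed in the review.
@dataclass
class Fp8LinearMethod:
    block_quant: bool = False   # a bool: it is never None

@dataclass
class LinearLayer:
    quant_method: Fp8LinearMethod

def uses_block_quant_buggy(layer: LinearLayer) -> bool:
    # Original guard: `is not None` on a bool is always True.
    return layer.quant_method.block_quant is not None

def uses_block_quant_fixed(layer: LinearLayer) -> bool:
    # Intended guard: branch on the flag itself.
    return bool(layer.quant_method.block_quant)

per_tensor_layer = LinearLayer(Fp8LinearMethod(block_quant=False))
assert uses_block_quant_buggy(per_tensor_layer) is True    # wrongly treated as block-quantized
assert uses_block_quant_fixed(per_tensor_layer) is False   # correct
```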
| 2025-09-07 17:46:44 |
fc542144c4477ffec1d3de6fa43e54f8fb5351e8 | https://github.com/vllm-project/vllm/pull/12563 | false | true | false | true | PERF: tok/s, tok/s, optimization | TEST: test, CI, CI |
Contributor xpbowler commented Jan 29, 2025 (edited by github-actions bot): [Guided decoding performance optimization] Sending the guided decoding bitmask in xgrammar to the GPU (self.token_bitmask.to(scores.device)) is a blocking operation that prevents the CPU from pre-launching the sampler kernels. The CPU waits until decode is complete, then copies the bitmask over. This PR changes the operation to async via setting non_blocking=True. (Current) The CPU is blocked on a cudaStreamSynchronize and only pre-empts the sampling kernels after bitmask application. Below is the Nsys profile for one decode phase from Llama 3.1 8B. With the optimization, this is no longer the case. xpbowler requested a review
from mgoin as a code owner January 29, 2025 21:16. github-actions bot commented Jan 29, 2025: 👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these: Add ready label to the PR Enable auto-merge. 🚀 xpbowler force-pushed the main branch
from e91e01a to 99611c5 on January 29, 2025 21:26. mgoin approved these changes Jan 29, 2025. Member mgoin left a comment (edited): This makes sense, thanks! LGTM pending green CI. Showing the profile is great, also showing an e2e speedup (even if small) would be nice. mgoin added the structured-output and ready (ONLY add when PR is ready to merge/full CI is needed) labels Jan 29, 2025. Contributor (author) xpbowler commented Jan 29, 2025, quoting the above: For single request benchmarks with Llama 3.1 8B running on H100, the improvement in tok/s was ~5%: single request, 87.5 tok/s guided unoptimized vs. 92 tok/s guided optimized. mgoin added
the performance (Performance-related issues) label Jan 29, 2025. aarnphm approved these changes Jan 29, 2025. xpbowler force-pushed the main branch
from 9bae63f to b9681d4 on January 30, 2025 15:40. mgoin enabled auto-merge (squash) January 30, 2025 22:16. Ryan N added 3 commits January 31, 2025 20:26 remove blocking bitmask memcpy … 4a3d85f Signed-off-by: Ryan N <[email protected]> re-run ci pipeline … a7914a8 Signed-off-by: Ryan N <[email protected]> pipeline … f8fa0c6 Signed-off-by: Ryan N <[email protected]> auto-merge was automatically disabled January 31, 2025 20:27 (head branch was pushed to by a user without write access). xpbowler force-pushed the main branch
from b11a83f to f8fa0c6 on January 31, 2025 20:27. simon-mo merged commit fc54214 into vllm-project:main Jan 31, 2025, 38 of 44 checks passed. Isotr0py pushed a commit
to Isotr0py/vllm
that referenced
this pull request Feb 2, 2025 [Feature] Fix guided decoding blocking bitmask memcpy ( vllm-project#1… … df7ab19 …2563 )
**[Guided decoding performance optimization]** Sending the guided
decoding bitmask in xgrammar to the GPU
(`self.token_bitmask.to(scores.device)`) is a blocking operation that
prevents the CPU from pre-launching the sampler kernels. The CPU waits
until decode is complete, then copies the bitmask over. This PR changes
the operation to async via setting `non_blocking=True`.
(Current) The CPU is blocked on a `cudaStreamSynchronize` and only
pre-empts the sampling kernels after bitmask application. Below is the
Nsys profile for one decode phase from Llama 3.1 8B.

With the optimization, this is no longer the case:

---------
Signed-off-by: Ryan N <[email protected]>
Signed-off-by: Isotr0py <[email protected]> srikanthsrnvs pushed a commit
to srikanthsrnvs/vllm
that referenced
this pull request Feb 3, 2025 [Feature] Fix guided decoding blocking bitmask memcpy ( vllm-project#1… … d27e55d …2563 )
**[Guided decoding performance optimization]** Sending the guided
decoding bitmask in xgrammar to the GPU
(`self.token_bitmask.to(scores.device)`) is a blocking operation that
prevents the CPU from pre-launching the sampler kernels. The CPU waits
until decode is complete, then copies the bitmask over. This PR changes
the operation to async via setting `non_blocking=True`.
(Current) The CPU is blocked on a `cudaStreamSynchronize` and only
pre-empts the sampling kernels after bitmask application. Below is the
Nsys profile for one decode phase from Llama 3.1 8B.

With the optimization, this is no longer the case:

---------
Signed-off-by: Ryan N <[email protected]>
Signed-off-by: Srikanth Srinivas <[email protected]> sahelib25 pushed a commit
to krai/vllm
that referenced
this pull request Feb 3, 2025 [Feature] Fix guided decoding blocking bitmask memcpy ( vllm-project#1… … 51f5127 …2563 )
**[Guided decoding performance optimization]** Sending the guided
decoding bitmask in xgrammar to the GPU
(`self.token_bitmask.to(scores.device)`) is a blocking operation that
prevents the CPU from pre-launching the sampler kernels. The CPU waits
until decode is complete, then copies the bitmask over. This PR changes
the operation to async via setting `non_blocking=True`.
(Current) The CPU is blocked on a `cudaStreamSynchronize` and only
pre-empts the sampling kernels after bitmask application. Below is the
Nsys profile for one decode phase from Llama 3.1 8B.

With the optimization, this is no longer the case:

---------
Signed-off-by: Ryan N <[email protected]> NickLucche pushed a commit
to NickLucche/vllm
that referenced
this pull request Feb 7, 2025 [Feature] Fix guided decoding blocking bitmask memcpy ( vllm-project#1… … 5c21ca9 …2563 )
**[Guided decoding performance optimization]** Sending the guided
decoding bitmask in xgrammar to the GPU
(`self.token_bitmask.to(scores.device)`) is a blocking operation that
prevents the CPU from pre-launching the sampler kernels. The CPU waits
until decode is complete, then copies the bitmask over. This PR changes
the operation to async via setting `non_blocking=True`.
(Current) The CPU is blocked on a `cudaStreamSynchronize` and only
pre-empts the sampling kernels after bitmask application. Below is the
Nsys profile for one decode phase from Llama 3.1 8B.

With the optimization, this is no longer the case:

---------
Signed-off-by: Ryan N <[email protected]> GWS0428 pushed a commit
to GWS0428/VARserve
that referenced
this pull request Feb 12, 2025 [Feature] Fix guided decoding blocking bitmask memcpy ( vllm-project#1… … bea306f …2563 )
**[Guided decoding performance optimization]** Sending the guided
decoding bitmask in xgrammar to the GPU
(`self.token_bitmask.to(scores.device)`) is a blocking operation that
prevents the CPU from pre-launching the sampler kernels. The CPU waits
until decode is complete, then copies the bitmask over. This PR changes
the operation to async via setting `non_blocking=True`.
(Current) The CPU is blocked on a `cudaStreamSynchronize` and only
pre-empts the sampling kernels after bitmask application. Below is the
Nsys profile for one decode phase from Llama 3.1 8B.

With the optimization, this is no longer the case:

---------
Signed-off-by: Ryan N <[email protected]> shreyankg pushed a commit
to shreyankg/vllm
that referenced
this pull request May 3, 2025 [Feature] Fix guided decoding blocking bitmask memcpy ( vllm-project#1… … 76bd88f …2563 )
**[Guided decoding performance optimization]** Sending the guided
decoding bitmask in xgrammar to the GPU
(`self.token_bitmask.to(scores.device)`) is a blocking operation that
prevents the CPU from pre-launching the sampler kernels. The CPU waits
until decode is complete, then copies the bitmask over. This PR changes
the operation to async via setting `non_blocking=True`.
(Current) The CPU is blocked on a `cudaStreamSynchronize` and only
pre-empts the sampling kernels after bitmask application. Below is the
Nsys profile for one decode phase from Llama 3.1 8B.

With the optimization, this is no longer the case:

---------
Signed-off-by: Ryan N <[email protected]>
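A minimal sketch of the async bitmask transfer this PR describes, under stated assumptions: the helper and the boolean mask below are illustrative stand-ins rather than xgrammar's actual bitmask format or vLLM's sampler code, and for the host-to-device copy to truly overlap with compute the CPU-side tensor generally needs to live in pinned memory.

```python
import torch

def apply_grammar_mask(scores: torch.Tensor, token_mask: torch.Tensor) -> torch.Tensor:
    """Illustrative stand-in for applying a guided-decoding mask to GPU logits.

    `scores` lives on the GPU; `token_mask` is built on the CPU by the grammar
    engine. A default (blocking) .to(device) stalls the CPU until the copy
    completes, so the sampler kernels behind it are not enqueued until then.
    """
    # Async copy: control returns to the CPU immediately, which can keep
    # launching kernels; GPU work is stream-ordered, so the masked_fill below
    # still sees the completed copy.
    device_mask = token_mask.to(scores.device, non_blocking=True)
    return scores.masked_fill(~device_mask, float("-inf"))

if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    scores = torch.randn(1, 32_000, device=device)
    allowed = torch.zeros(1, 32_000, dtype=torch.bool)
    if device == "cuda":
        # Pinned host memory lets the non_blocking transfer actually overlap.
        allowed = allowed.pin_memory()
    allowed[:, :100] = True   # pretend the grammar currently allows 100 tokens
    masked = apply_grammar_mask(scores, allowed)
    print(masked.isfinite().sum().item())  # -> 100
```

Per the timeline above, removing the cudaStreamSynchronize from this path gave roughly a 5% single-request tok/s improvement on Llama 3.1 8B (87.5 to 92 tok/s).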
| 2025-09-07 17:46:50 |
fa63e710c7fbaae3a445f669d3b5ba6b9a4ef412 | https://github.com/vllm-project/vllm/pull/12094 | false | true | true | true | PERF: Throughput, Throughput, Throughput | SERVING: serving, serving, serving | TEST: test, test, test |
Contributor youngkent commented Jan 15, 2025 (edited by github-actions bot): We do some runner bookkeeping CPU operations after each decoding iteration. We could parallelize some bookkeeping work while waiting on the cuda sync; after the cuda sync, we only need to do simple and fast updates. The change should reduce scheduling overhead between decode iterations by ~20% (see the attached GPU traces, before and after the optimization). E2E latency benchmark, ran VLLM_USE_V1=1 python3 benchmarks/benchmark_latency.py --model "/data/users/ktong/llama/llm_8b_oss" --tensor-parallel-size 1 --input_len 1000 --batch_size 32
Output (1-2% e2e latency reduction):
Avg latency: 2.338167402730323 seconds
10% percentile latency: 2.3207896508742123 seconds
25% percentile latency: 2.3264574960339814 seconds
50% percentile latency: 2.3333765944698825 seconds
75% percentile latency: 2.343035737867467 seconds
90% percentile latency: 2.3567665563430635 seconds
99% percentile latency: 2.3934816433605737 seconds
youngkent requested review from WoosukKwon, njhill, ywang96 and comaniac as code owners January 15, 2025 19:16. github-actions bot commented Jan 15, 2025: 👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these: Add ready label to the PR Enable auto-merge. 🚀 mgoin requested a review
from robertgshaw2-redhat January 15, 2025 20:20 youngkent added 3 commits January 15, 2025 12:28 reduce scheduling overhead in model runner after cuda sync … ff21f9e Signed-off-by: Keyun Tong <[email protected]> Fix style … 41dba06 Signed-off-by: Keyun Tong <[email protected]> fix style typo … 9ce3d6e Signed-off-by: Keyun Tong <[email protected]> youngkent force-pushed the main branch
from 4dc567b to 9ce3d6e on January 15, 2025 20:29. youkaichao reviewed Jan 16, 2025, commenting on vllm/v1/outputs.py:

    @@ -8,7 +8,7 @@ class SamplerOutput:
        # [num_reqs]
        sampled_token_ids: List[int]

Member youkaichao (Jan 16, 2025): is this necessary? iirc, @tlrmchlsmth used List[int] because they are cheaper to serialize, and would benefit the tensor parallel case, where we need to pass them across processes. Collaborator tlrmchlsmth (Jan 16, 2025): This is true, I didn't look at how it impacts the non-TP case though. Collaborator robertgshaw2-redhat (Jan 25, 2025, edited): The ModelRunnerOutput is what we serialize for TP, we don't serialize the SamplerOutput directly, so this is not a concern. Collaborator tlrmchlsmth (Jan 25, 2025, edited): Ah, yep that's right -- I did change this line in #9856, but that was just downstream of changing sampled_token_ids to a List in the ModelRunnerOutput. This looks good to me since that's left as-is! Collaborator robertgshaw2-redhat commented Jan 16, 2025: Wow, great idea. I'm going to run some performance analysis on this tomorrow. WoosukKwon reviewed Jan 16, 2025 (vllm/v1/worker/gpu_model_runner.py). njhill reviewed Jan 16, 2025 (vllm/v1/sample/sampler.py). njhill mentioned this pull request Jan 17, 2025: [V1] Logprobs and prompt logprobs support #9880 (Merged). remove outdated comment … 8ca382d Signed-off-by: Keyun Tong <[email protected]> youngkent force-pushed the main branch
from dfd825e to 8ca382d on January 17, 2025 18:15. youkaichao reviewed Jan 18, 2025 (vllm/v1/outputs.py). youkaichao reviewed Jan 18, 2025 (vllm/v1/worker/gpu_model_runner.py). Merge branch 'main' into youngkent/main 1cc6492 WoosukKwon requested a review
from alexm-redhat as a code owner January 25, 2025 22:08. WoosukKwon approved these changes Jan 25, 2025. Collaborator WoosukKwon left a comment: LGTM! Thanks for discovering and fixing this! On vllm/v1/worker/gpu_model_runner.py:

        self.input_batch.req_ids[:num_reqs]), "req_ids contains None"
    req_ids = cast(List[str], self.input_batch.req_ids[:num_reqs])
    # NOTE: GPU -> CPU Sync happens here.

Collaborator WoosukKwon (Jan 25, 2025): Just for the record: if top-p or top-k sampling is used (with the FlashInfer kernel), CPU-GPU synchronization happens inside the sampler at vllm/vllm/v1/sample/ops/topk_topp_sampler.py Lines 193 to 194
in 324960a:

    # NOTE: CPU-GPU synchronization happens here.
    if not success.all():

Collaborator robertgshaw2-redhat (Jan 25, 2025): Do you think we can avoid this in a follow-up PR? Collaborator WoosukKwon (Jan 26, 2025): I don't think we can. This is a fundamental limitation of the kernel (or the algorithm itself). The rejection sampling method cannot 100% guarantee success. WoosukKwon added
the ready (ONLY add when PR is ready to merge/full CI is needed) label Jan 25, 2025. tlrmchlsmth approved these changes Jan 25, 2025. robertgshaw2-redhat reviewed Jan 25, 2025, commenting on vllm/v1/worker/gpu_model_runner.py:

    # NOTE: GPU -> CPU Sync happens here.
    # Move as many CPU operations as possible before this sync point.
    sampled_token_ids = sampler_output.sampled_token_ids.tolist()

Collaborator robertgshaw2-redhat (Jan 25, 2025, edited): It might be faster to do sampler_output.sampled_token_ids.cpu() and then sampler_output.sampled_token_ids[i].item() in the inner loop. Collaborator WoosukKwon (Jan 26, 2025): In my experience, item() took considerable time so should be avoided. mgoin approved these changes Jan 25, 2025. Merge remote-tracking branch 'upstream/main' f35e80b. WoosukKwon merged commit fa63e71 into vllm-project:main Jan 26, 2025, 42 of 44 checks passed. Collaborator WoosukKwon commented Jan 26, 2025: @youngkent Thanks for the PR! This change helps vLLM's performance noticeably. tjtanaa pushed a commit
to EmbeddedLLM/vllm
that referenced
this pull request Jan 28, 2025 [V1][Perf] Reduce scheduling overhead in model runner after cuda sync ( … … 4388fac …vllm-project#12094 )
Signed-off-by: Keyun Tong <[email protected]> rasmith pushed a commit
to rasmith/vllm
that referenced
this pull request Jan 30, 2025 [V1][Perf] Reduce scheduling overhead in model runner after cuda sync ( … … 4a21854 …vllm-project#12094 )
Signed-off-by: Keyun Tong <[email protected]> Isotr0py pushed a commit
to Isotr0py/vllm
that referenced
this pull request Feb 2, 2025 [V1][Perf] Reduce scheduling overhead in model runner after cuda sync ( … … 0442131 …vllm-project#12094 )
Signed-off-by: Keyun Tong <[email protected]>
Signed-off-by: Isotr0py <[email protected]> hongxiayang added a commit
to ROCm/vllm
that referenced
this pull request Feb 3, 2025 [MFM-2025-02-03] Merge Main to llama fp8; With Faster ROCm Paged Atte… … 479b843 …ntion ( #399 )
* [V1] Avoid sending text prompt to core engine ( vllm-project#11963 )
Signed-off-by: Roger Wang <[email protected]>
* [CI/Build] Add markdown linter ( vllm-project#11857 )
Signed-off-by: Rafael Vasquez <[email protected]>
* [Model] Initialize support for Deepseek-VL2 models ( vllm-project#11578 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Hardware][CPU] Multi-LoRA implementation for the CPU backend ( vllm-project#11100 )
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [Hardware][TPU] workaround fix for MoE on TPU ( vllm-project#11764 )
* [V1][Core][1/n] Logging and Metrics ( vllm-project#11962 )
Signed-off-by: [email protected] <[email protected]>
* [Model] Support GGUF models newly added in `transformers` 4.46.0 ( vllm-project#9685 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [V1] [2/n] Logging and Metrics - `OutputProcessor` Abstraction ( vllm-project#11973 )
Signed-off-by: [email protected] <[email protected]>
* [MISC] fix typo in kv transfer send recv test ( vllm-project#11983 )
* [Bug] Fix usage of `.transpose()` and `.view()` consecutively. ( vllm-project#11979 )
* [CI][Spec Decode] fix: broken test for EAGLE model ( vllm-project#11972 )
Signed-off-by: Sungjae Lee <[email protected]>
* [Misc] Fix Deepseek V2 fp8 kv-scale remapping ( vllm-project#11947 )
Signed-off-by: Yida Wu <[email protected]>
* [Misc]Minor Changes about Worker ( vllm-project#11555 )
Signed-off-by: Chenguang Li <[email protected]>
* [platform] add ray_device_key ( vllm-project#11948 )
Signed-off-by: youkaichao <[email protected]>
* Fix Max Token ID for Qwen-VL-Chat ( vllm-project#11980 )
Signed-off-by: Alex-Brooks <[email protected]>
* [Kernel] unified_attention for Attention.forward ( vllm-project#11967 )
Signed-off-by: Chen Zhang <[email protected]>
* [Doc][V1] Update model implementation guide for V1 support ( vllm-project#11998 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Doc] Organise installation documentation into categories and tabs ( vllm-project#11935 )
Signed-off-by: Harry Mellor <[email protected]>
* [platform] add device_control env var ( vllm-project#12009 )
Signed-off-by: youkaichao <[email protected]>
* [Platform] Move get_punica_wrapper() function to Platform ( vllm-project#11516 )
Signed-off-by: Shanshan Shen <[email protected]>
* bugfix: Fix signature mismatch in benchmark's `get_tokenizer` function ( vllm-project#11982 )
Signed-off-by: elijah <[email protected]>
* [Doc] Fix build from source and installation link in README.md ( vllm-project#12013 )
Signed-off-by: Yikun <[email protected]>
* Using list
* [Bugfix] Fix deepseekv3 gate bias error ( vllm-project#12002 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]>
* Revert "[misc] improve memory profiling ( vllm-project#11809 )"
This reverts commit 889e662 .
* Multi-lingual P3L ( #356 )
* Commiting the *multilingual* P3L test.
* Created a *multi-lingual* P3L test.
* Making ruff happy.
* .
* Added a reference to the language-scripture Confluence table.
* Typo fixing.
* Harmonizing naming.
* Fixing comments in the header.
---------
Co-authored-by: Alexei V. Ivanov <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
* Trying to make scales work with compileable attention
* [Docs] Add Sky Computing Lab to project intro ( vllm-project#12019 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [HPU][Bugfix] set_forward_context and CI test execution ( vllm-project#12014 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Doc] Update Quantization Hardware Support Documentation ( vllm-project#12025 )
Signed-off-by: tjtanaa <[email protected]>
Co-authored-by: tjtanaa <[email protected]>
* [HPU][misc] add comments for explanation ( vllm-project#12034 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix various bugs in multi-modal processor ( vllm-project#12031 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Kernel] Revert the API change of Attention.forward ( vllm-project#12038 )
Signed-off-by: Chen Zhang <[email protected]>
* [Platform] Add output for Attention Backend ( vllm-project#11981 )
Signed-off-by: wangxiyuan <[email protected]>
* [Bugfix][Kernel] Give unique name to BlockSparseFlashAttention ( vllm-project#12040 )
Signed-off-by: Chen Zhang <[email protected]>
* Explain where the engine args go when using Docker ( vllm-project#12041 )
Signed-off-by: Harry Mellor <[email protected]>
* Docs lint
* [Doc]: Update the Json Example of the `Engine Arguments` document ( vllm-project#12045 )
* [Misc] Merge bitsandbytes_stacked_params_mapping and packed_modules_mapping ( vllm-project#11924 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Kernel] Support MulAndSilu ( vllm-project#11624 )
Signed-off-by: Jee Jee Li <[email protected]>
* [HPU][Bugfix] Don't use /dev/accel/accel0 for HPU autodetection in setup.py ( vllm-project#12046 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Platform] move current_memory_usage() into platform ( vllm-project#11369 )
Signed-off-by: Shanshan Shen <[email protected]>
* [V1][BugFix] Fix edge case in VLM scheduling ( vllm-project#12065 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Misc] Add multipstep chunked-prefill support for FlashInfer ( vllm-project#10467 )
* [core] Turn off GPU communication overlap for Ray executor ( vllm-project#12051 )
Signed-off-by: Rui Qiao <[email protected]>
* [core] platform agnostic executor via collective_rpc ( vllm-project#11256 )
Signed-off-by: youkaichao <[email protected]>
* [Doc] Update examples to remove SparseAutoModelForCausalLM ( vllm-project#12062 )
Signed-off-by: Kyle Sayers <[email protected]>
* [V1][Prefix Cache] Move the logic of num_computed_tokens into KVCacheManager ( vllm-project#12003 )
* Fix: cases with empty sparsity config ( vllm-project#12057 )
Signed-off-by: Rahul Tuli <[email protected]>
* Type-fix: make execute_model output type optional ( vllm-project#12020 )
* [Platform] Do not raise error if _Backend is not found ( vllm-project#12023 )
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
* [Model]: Support internlm3 ( vllm-project#12037 )
* Misc: allow to use proxy in `HTTPConnection` ( vllm-project#12042 )
Signed-off-by: Yuan Zhou <[email protected]>
* [Misc][Quark] Upstream Quark format to VLLM ( vllm-project#10765 )
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Doc]: Update `OpenAI-Compatible Server` documents ( vllm-project#12082 )
* [Bugfix] use right truncation for non-generative tasks ( vllm-project#12050 )
Signed-off-by: Joe Runde <[email protected]>
* [V1][Core] Autotune encoder cache budget ( vllm-project#11895 )
Signed-off-by: Roger Wang <[email protected]>
* [Bugfix] Fix _get_lora_device for HQQ marlin ( vllm-project#12090 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* Allow hip sources to be directly included when compiling for rocm. ( vllm-project#12087 )
* [Core] Default to using per_token quantization for fp8 when cutlass is supported. ( vllm-project#8651 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: mgoin <[email protected]>
* [Doc] Add documentation for specifying model architecture ( vllm-project#12105 )
* Various cosmetic/comment fixes ( vllm-project#12089 )
Signed-off-by: mgoin <[email protected]>
* [Bugfix] Remove hardcoded `head_size=256` for Deepseek v2 and v3 ( vllm-project#12067 )
Signed-off-by: Isotr0py <[email protected]>
* Support torchrun and SPMD-style offline inference ( vllm-project#12071 )
Signed-off-by: youkaichao <[email protected]>
* [core] LLM.collective_rpc interface and RLHF example ( vllm-project#12084 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix max image feature size for Llava-one-vision ( vllm-project#12104 )
Signed-off-by: Roger Wang <[email protected]>
* Enable user marker for vllm profiling ( #357 )
* Enable user marker for vllm profiling
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
* [misc] Add LoRA kernel micro benchmarks ( vllm-project#11579 )
* [Model] Add support for deepseek-vl2-tiny model ( vllm-project#12068 )
Signed-off-by: Isotr0py <[email protected]>
* Deepseek V3 support ( #364 )
* Changing the hard coded datatype to see if it's enough for the model to work
* Picking the upstrteam moe kernel version
* make upstream fix for v3 also works for rocm v2
* Conditional fnuz dtype
* Requantizing from fn to fnuz
* Requantizing moe as well
* Actually requantizing moe weights
* Conditional requantization and assert on padding in block quant
* Format
---------
Co-authored-by: charlifu <[email protected]>
* [Bugfix] Set enforce_eager automatically for mllama ( vllm-project#12127 )
Signed-off-by: Chen Zhang <[email protected]>
* [Bugfix] Fix a path bug in disaggregated prefill example script. ( vllm-project#12121 )
Signed-off-by: Kuntai Du <[email protected]>
* [CI]add genai-perf benchmark in nightly benchmark ( vllm-project#10704 )
Signed-off-by: Kunshang Ji <[email protected]>
* [Doc] Add instructions on using Podman when SELinux is active ( vllm-project#12136 )
Signed-off-by: Yuan Tang <[email protected]>
* [Bugfix] Fix issues in CPU build Dockerfile ( vllm-project#12135 )
Signed-off-by: Yuan Tang <[email protected]>
* [BugFix] add more `is not None` check in VllmConfig.__post_init__ ( vllm-project#12138 )
Signed-off-by: Chen Zhang <[email protected]>
* [Misc] Add deepseek_vl2 chat template ( vllm-project#12143 )
Signed-off-by: Isotr0py <[email protected]>
* [ROCm][MoE] moe tuning support for rocm ( vllm-project#12049 )
Signed-off-by: Divakar Verma <[email protected]>
* [V1] Move more control of kv cache initialization from model_executor to EngineCore ( vllm-project#11960 )
Signed-off-by: Chen Zhang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
* [Misc][LoRA] Improve the readability of LoRA error messages ( vllm-project#12102 )
Signed-off-by: Jee Jee Li <[email protected]>
* [CI/Build][CPU][Bugfix] Fix CPU CI ( vllm-project#12150 )
Signed-off-by: jiang1.li <[email protected]>
* [core] allow callable in collective_rpc ( vllm-project#12151 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix score api for missing max_model_len validation ( vllm-project#12119 )
Signed-off-by: Wallas Santos <[email protected]>
* [Bugfix] Mistral tokenizer encode accept list of str ( vllm-project#12149 )
Signed-off-by: Kunshang Ji <[email protected]>
* [AMD][FP8] Using MI300 FP8 format on ROCm for block_quant ( vllm-project#12134 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* [torch.compile] disable logging when cache is disabled ( vllm-project#12043 )
Signed-off-by: youkaichao <[email protected]>
* [misc] fix cross-node TP ( vllm-project#12166 )
Signed-off-by: youkaichao <[email protected]>
* [AMD][CI/Build][Bugfix] use pytorch stale wheel ( vllm-project#12172 )
Signed-off-by: hongxyan <[email protected]>
* [core] further polish memory profiling ( vllm-project#12126 )
Signed-off-by: youkaichao <[email protected]>
* [Docs] Fix broken link in SECURITY.md ( vllm-project#12175 )
Signed-off-by: Russell Bryant <[email protected]>
* [Model] Port deepseek-vl2 processor, remove dependency ( vllm-project#12169 )
Signed-off-by: Isotr0py <[email protected]>
* [core] clean up executor class hierarchy between v1 and v0 ( vllm-project#12171 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Support register quantization method out-of-tree ( vllm-project#11969 )
* [V1] Collect env var for usage stats ( vllm-project#12115 )
* [BUGFIX] Move scores to float32 in case of running xgrammar on cpu ( vllm-project#12152 )
Signed-off-by: Michal Adamczyk <[email protected]>
* [Bugfix] Fix multi-modal processors for transformers 4.48 ( vllm-project#12187 )
* [torch.compile] store inductor compiled Python file ( vllm-project#12182 )
Signed-off-by: youkaichao <[email protected]>
* benchmark_serving support --served-model-name param ( vllm-project#12109 )
Signed-off-by: zibai <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Misc] Add BNB support to GLM4-V model ( vllm-project#12184 )
Signed-off-by: Isotr0py <[email protected]>
* [V1] Add V1 support of Qwen2-VL ( vllm-project#12128 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>
* [Model] Support for fairseq2 Llama ( vllm-project#11442 )
Signed-off-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>
* [Bugfix] Fix num_heads value for simple connector when tp enabled ( vllm-project#12074 )
Signed-off-by: Shangming Cai <[email protected]>
* [torch.compile] fix sym_tensor_indices ( vllm-project#12191 )
Signed-off-by: youkaichao <[email protected]>
* Move linting to `pre-commit` ( vllm-project#11975 )
Signed-off-by: Harry Mellor <[email protected]>
* [DOC] Fix typo in docstring and assert message ( vllm-project#12194 )
Signed-off-by: Yuan Tang <[email protected]>
* [DOC] Add missing docstring in LLMEngine.add_request() ( vllm-project#12195 )
Signed-off-by: Yuan Tang <[email protected]>
* [Bugfix] Fix incorrect types in LayerwiseProfileResults ( vllm-project#12196 )
Signed-off-by: Yuan Tang <[email protected]>
* [Model] Add Qwen2 PRM model support ( vllm-project#12202 )
Signed-off-by: Isotr0py <[email protected]>
* [Core] Interface for accessing model from `VllmRunner` ( vllm-project#10353 )
Signed-off-by: DarkLight1337 <[email protected]>
* [misc] add placeholder format.sh ( vllm-project#12206 )
Signed-off-by: youkaichao <[email protected]>
* [CI/Build] Remove dummy CI steps ( vllm-project#12208 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI/Build] Make pre-commit faster ( vllm-project#12212 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model] Upgrade Aria to transformers 4.48 ( vllm-project#12203 )
Signed-off-by: DarkLight1337 <[email protected]>
* [misc] print a message to suggest how to bypass commit hooks ( vllm-project#12217 )
Signed-off-by: youkaichao <[email protected]>
* [core][bugfix] configure env var during import vllm ( vllm-project#12209 )
Signed-off-by: youkaichao <[email protected]>
* [V1] Remove `_get_cache_block_size` ( vllm-project#12214 )
Signed-off-by: Chen Zhang <[email protected]>
* [Misc] Pass `attention` to impl backend ( vllm-project#12218 )
Signed-off-by: wangxiyuan <[email protected]>
* [Bugfix] Fix `HfExampleModels.find_hf_info` ( vllm-project#12223 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI] Pass local python version explicitly to pre-commit mypy.sh ( vllm-project#12224 )
Signed-off-by: Chen Zhang <[email protected]>
* Using ROCm6.3.1 base docker and building hipblas-common ( #366 )
* [Misc] Update CODEOWNERS ( vllm-project#12229 )
* fix: update platform detection for M-series arm based MacBook processors ( vllm-project#12227 )
Signed-off-by: isikhi <[email protected]>
* [misc] add cuda runtime version to usage data ( vllm-project#12190 )
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [bugfix] catch xgrammar unsupported array constraints ( vllm-project#12210 )
Signed-off-by: Jason Cheng <[email protected]>
* [Kernel] optimize moe_align_block_size for cuda graph and large num_experts (e.g. DeepSeek-V3) ( vllm-project#12222 )
Signed-off-by: Jinzhen Lin <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
* Add quantization and guided decoding CODEOWNERS ( vllm-project#12228 )
Signed-off-by: mgoin <[email protected]>
* [AMD][Build] Porting dockerfiles from the ROCm/vllm fork ( vllm-project#11777 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* [BugFix] Fix GGUF tp>1 when vocab_size is not divisible by 64 ( vllm-project#12230 )
Signed-off-by: NickLucche <[email protected]>
* [ci/build] disable failed and flaky tests ( vllm-project#12240 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Rename `MultiModalInputsV2 -> MultiModalInputs` ( vllm-project#12244 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc]Add BNB quantization for PaliGemmaForConditionalGeneration ( vllm-project#12237 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Misc] Remove redundant TypeVar from base model ( vllm-project#12248 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix mm_limits access for merged multi-modal processor ( vllm-project#12252 )
Signed-off-by: DarkLight1337 <[email protected]>
* [torch.compile] transparent compilation with more logging ( vllm-project#12246 )
Signed-off-by: youkaichao <[email protected]>
* [V1][Bugfix] Fix data item ordering in mixed-modality inference ( vllm-project#12259 )
Signed-off-by: Roger Wang <[email protected]>
* Remove pytorch comments for outlines + compressed-tensors ( vllm-project#12260 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Platform] improve platforms getattr ( vllm-project#12264 )
Signed-off-by: Mengqing Cao <[email protected]>
* [ci/build] update nightly torch for gh200 test ( vllm-project#12270 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] fix race condition that leads to wrong order of token returned ( vllm-project#10802 )
Signed-off-by: Jannis Schönleber <[email protected]>
* [Kernel] fix moe_align_block_size error condition ( vllm-project#12239 )
Signed-off-by: Jinzhen Lin <[email protected]>
* [v1][stats][1/n] Add RequestStatsUpdate and RequestStats types ( vllm-project#10907 )
Signed-off-by: rickyx <[email protected]>
* [Bugfix] Multi-sequence broken ( vllm-project#11898 )
Signed-off-by: Andy Lo <[email protected]>
* [Misc] Remove experimental dep from tracing.py ( vllm-project#12007 )
Signed-off-by: Adrian Cole <[email protected]>
* [Misc] Set default backend to SDPA for get_vit_attn_backend ( vllm-project#12235 )
Signed-off-by: wangxiyuan <[email protected]>
* [Core] Free CPU pinned memory on environment cleanup ( vllm-project#10477 )
* Update pre-commit.yml ( #374 )
* Update pre-commit.yml
* Reapplying missing format
* New codespell exclude location
---------
Co-authored-by: Kevin H. Luu <[email protected]>
* [bugfix] moe tuning. rm is_navi() ( vllm-project#12273 )
Signed-off-by: Divakar Verma <[email protected]>
* [BUGFIX] When skip_tokenize_init and multistep are set, execution crashes ( vllm-project#12277 )
Signed-off-by: maleksan85 <[email protected]>
Co-authored-by: maleksan85 <[email protected]>
* [Documentation][AMD] Add information about prebuilt ROCm vLLM docker for perf validation purpose ( vllm-project#12281 )
Signed-off-by: Hongxia Yang <[email protected]>
* [VLM] Simplify post-processing of replacement info ( vllm-project#12269 )
Signed-off-by: DarkLight1337 <[email protected]>
* [ci/lint] Add back default arg for pre-commit ( vllm-project#12279 )
Signed-off-by: kevin <[email protected]>
* [CI] add docker volume prune to neuron CI ( vllm-project#12291 )
Signed-off-by: Liangfu Chen <[email protected]>
* [Ci/Build] Fix mypy errors on main ( vllm-project#12296 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Benchmark] More accurate TPOT calc in `benchmark_serving.py` ( vllm-project#12288 )
Signed-off-by: Nick Hill <[email protected]>
* [core] separate builder init and builder prepare for each batch ( vllm-project#12253 )
Signed-off-by: youkaichao <[email protected]>
* [Build] update requirements of no-device ( vllm-project#12299 )
Signed-off-by: Mengqing Cao <[email protected]>
* [Core] Support fully transparent sleep mode ( vllm-project#11743 )
Signed-off-by: youkaichao <[email protected]>
* [VLM] Avoid unnecessary tokenization ( vllm-project#12310 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model][Bugfix]: correct Aria model output ( vllm-project#12309 )
Signed-off-by: xffxff <[email protected]>
* [Bugfix][VLM] Fix mixed-modality inference backward compatibility for V0 ( vllm-project#12313 )
Signed-off-by: Roger Wang <[email protected]>
* [Doc] Add docs for prompt replacement ( vllm-project#12318 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] Fix the error in the tip for the --lora-modules parameter ( vllm-project#12319 )
Signed-off-by: wangerxiao <[email protected]>
* [Misc] Improve the readability of BNB error messages ( vllm-project#12320 )
Signed-off-by: Jee Jee Li <[email protected]>
* Skip tokenize/detokenize when it is disabled by arg --skip-tokenizer-init ( #367 )
* switching detokenize flag to be False
* detokenize = False for benchmarks
* restoring default in main vllm code for detokenize
* removing extra spaces
* moving detokenize to flag
* adding support for token ids
---------
Co-authored-by: maleksan85 <[email protected]>
* [Bugfix] Fix HPU multiprocessing executor ( vllm-project#12167 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Core] Support `reset_prefix_cache` ( vllm-project#12284 )
* [Frontend][V1] Online serving performance improvements ( vllm-project#12287 )
* [AMD][Quantization] Add TritonScaledMMLinearKernel since int8 is broken for AMD ( vllm-project#12282 )
Signed-off-by: Randall Smith <[email protected]>
* FP8 FA fixes ( #381 )
* FP8 FA fixes
Summary:
Add missing clamp and fix reciprocal scale computation.
* linter
* Returning the use of the proper stream in allreduce ( #382 )
* [Bugfix] Fixing AMD LoRA CI test. ( vllm-project#12329 )
Signed-off-by: Alexei V. Ivanov <[email protected]>
* [Docs] Update FP8 KV Cache documentation ( vllm-project#12238 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Docs] Document vulnerability disclosure process ( vllm-project#12326 )
Signed-off-by: Russell Bryant <[email protected]>
* [V1] Add `uncache_blocks` ( vllm-project#12333 )
* [doc] explain common errors around torch.compile ( vllm-project#12340 )
Signed-off-by: youkaichao <[email protected]>
* [Hardware][Gaudi][BugFix] Fix dataclass error due to triton package update ( vllm-project#12338 )
Signed-off-by: zhenwei <[email protected]>
* [Bugfix] Fix k_proj's bias for whisper self attention ( vllm-project#12342 )
Signed-off-by: Isotr0py <[email protected]>
* [Kernel] Flash Attention 3 Support ( vllm-project#12093 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Doc] Troubleshooting errors during model inspection ( vllm-project#12351 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] Simplify M-RoPE ( vllm-project#12352 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: imkero <[email protected]>
* [Bugfix] Fix broken internvl2 inference with v1 ( vllm-project#12360 )
Signed-off-by: Isotr0py <[email protected]>
* [core] add wake_up doc and some sanity check ( vllm-project#12361 )
Signed-off-by: youkaichao <[email protected]>
* [torch.compile] decouple compile sizes and cudagraph sizes ( vllm-project#12243 )
Signed-off-by: youkaichao <[email protected]>
* [FP8][Kernel] Dynamic kv cache scaling factors computation ( vllm-project#11906 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>
* [TPU] Update TPU CI to use torchxla nightly on 20250122 ( vllm-project#12334 )
Signed-off-by: Siyuan Liu <[email protected]>
* [Docs] Document Phi-4 support ( vllm-project#12362 )
Signed-off-by: Isotr0py <[email protected]>
* [BugFix] Fix parameter names and `process_after_weight_loading` for W4A16 MoE Group Act Order ( vllm-project#11528 )
Signed-off-by: ElizaWszola <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Misc] Fix OpenAI API Compatibility Issues in Benchmark Script ( vllm-project#12357 )
Signed-off-by: Junichi Sato <[email protected]>
* [Docs] Add meetup slides ( vllm-project#12345 )
Signed-off-by: Woosuk Kwon <[email protected]>
* Using pytorch commit past the point when rowwise PR ( pytorch/pytorch#144432 ) was merged ( #384 )
* [Docs] Update spec decode + structured output in compat matrix ( vllm-project#12373 )
Signed-off-by: Russell Bryant <[email protected]>
* [V1][Frontend] Coalesce bunched `RequestOutput`s ( vllm-project#12298 )
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
* Set weights_only=True when using torch.load() ( vllm-project#12366 )
Signed-off-by: Russell Bryant <[email protected]>
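(For context, the pattern this change standardizes on; this is general PyTorch usage rather than a vLLM-specific API, and the file path is a placeholder.)
```python
import torch

# weights_only=True restricts unpickling to tensors and simple containers,
# so a malicious checkpoint cannot execute arbitrary Python code on load.
state_dict = torch.load("checkpoint.pt", map_location="cpu", weights_only=True)
```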
* [Bugfix] Path join when building local path for S3 clone ( vllm-project#12353 )
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
* Update compressed-tensors version ( vllm-project#12367 )
* [V1] Increase default batch size for H100/H200 ( vllm-project#12369 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [perf] fix perf regression from vllm-project#12253 ( vllm-project#12380 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Use VisionArena Dataset for VLM Benchmarking ( vllm-project#12389 )
Signed-off-by: Roger Wang <[email protected]>
* [ci/build] fix wheel size check ( vllm-project#12396 )
Signed-off-by: youkaichao <[email protected]>
* [Hardware][Gaudi][Doc] Add missing step in setup instructions ( vllm-project#12382 )
* [ci/build] sync default value for wheel size ( vllm-project#12398 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Enable proxy support in benchmark script ( vllm-project#12356 )
Signed-off-by: Junichi Sato <[email protected]>
* [Bugfix][Kernel] Fix CUDA 11.8 being broken by FA3 build ( vllm-project#12375 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Applying scales rename to fp8 config ( #387 )
* [Misc] Remove deprecated code ( vllm-project#12383 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix][Kernel] FA3 Fix - RuntimeError: This flash attention build only supports pack_gqa (for build size reasons). ( vllm-project#12405 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Dev-docker Documentation Updates ( #378 )
* Dev-docker Documentation Updates
Minor updates to several sections, with links to other documents where appropriate.
* Fix formatting of GEMM filename
* README cleanup
- Reorder some sections of the README to make them easier to follow
- Improve formatting of bash commands
- Prefer use of huggingface model names instead of hard-coded directories
- Clean up wording
* Expanded sample commands for Latency and Throughput
* Fix markdown links
* Fix pre-commit errors
* Updates from review
Initial updates to incorporate feedback from a review session held with @t-parry
* Update script args to match current recommendations
* Remove recommended max-num-seqs values for now
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
* [Bugfix][Kernel] Fix moe align block issue for mixtral ( vllm-project#12413 )
* [Bugfix] Fix BLIP-2 processing ( vllm-project#12412 )
Signed-off-by: DarkLight1337 <[email protected]>
* [ROCm][MoE] MI300 tuned configs Mixtral-8x(7B,22B) | fp16, fp8 ( vllm-project#12408 )
Signed-off-by: Divakar Verma <[email protected]>
* [Misc] Add FA2 support to ViT MHA layer ( vllm-project#12355 )
Signed-off-by: Isotr0py <[email protected]>
* [TPU][CI] Update torchxla version in requirement-tpu.txt ( vllm-project#12422 )
Signed-off-by: Siyuan Liu <[email protected]>
* [Misc][Bugfix] FA3 support to ViT MHA layer ( vllm-project#12435 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [V1][Perf] Reduce scheduling overhead in model runner after cuda sync ( vllm-project#12094 )
Signed-off-by: Keyun Tong <[email protected]>
* [V1][Bugfix] Fix assertion when mm hashing is turned off ( vllm-project#12439 )
Signed-off-by: Roger Wang <[email protected]>
* [Misc] Revert FA on ViT vllm-project#12355 and vllm-project#12435 ( vllm-project#12445 )
* [Frontend] generation_config.json for maximum tokens ( vllm-project#12242 )
Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
* [Bugfix] Disable w16a16 2of4 sparse CompressedTensors24 ( vllm-project#12417 )
Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: mgoin <[email protected]>
* [Bugfix/CI] Fix broken kernels/test_mha.py ( vllm-project#12450 )
* [Bugfix][Kernel] Fix perf regression caused by PR vllm-project#12405 ( vllm-project#12434 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Build/CI] Fix libcuda.so linkage ( vllm-project#12424 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [Frontend] Rerank API (Jina- and Cohere-compatible API) ( vllm-project#12376 )
Signed-off-by: Kyle Mistele <[email protected]>
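(A hedged sketch of calling the new endpoint. It assumes a server on localhost:8000 serving a reranker model and the Jina-style request/response fields (`query`, `documents`, `results`, `relevance_score`) that this API targets; treat the exact schema as an assumption.)
```python
import requests

resp = requests.post(
    "http://localhost:8000/rerank",
    json={
        "model": "BAAI/bge-reranker-base",
        "query": "What is vLLM?",
        "documents": [
            "vLLM is a high-throughput inference and serving engine for LLMs.",
            "The weather in Lisbon is mild in winter.",
        ],
    },
)
for result in resp.json()["results"]:
    print(result["index"], result["relevance_score"])
```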
* [DOC] Add link to vLLM blog ( vllm-project#12460 )
Signed-off-by: Yuan Tang <[email protected]>
* [V1] Avoid list creation in input preparation ( vllm-project#12457 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Frontend] Support scores endpoint in run_batch ( vllm-project#12430 )
Signed-off-by: Pooya Davoodi <[email protected]>
* [Bugfix] Fix Granite 3.0 MoE model loading ( vllm-project#12446 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix missing seq_start_loc in xformers prefill metadata ( vllm-project#12464 )
Signed-off-by: Isotr0py <[email protected]>
* [V1][Minor] Minor optimizations for update_from_output ( vllm-project#12454 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Bugfix] Fix gpt2 GGUF inference ( vllm-project#12467 )
Signed-off-by: Isotr0py <[email protected]>
* [Build] Only build 9.0a for scaled_mm and sparse kernels ( vllm-project#12339 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [V1][Metrics] Add initial Prometheus logger ( vllm-project#12416 )
Signed-off-by: Mark McLoughlin <[email protected]>
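(Not part of the PR, but a quick way to inspect the counters and gauges the new logger publishes, assuming a server running on localhost:8000.)
```python
import requests

metrics = requests.get("http://localhost:8000/metrics").text
# vLLM's Prometheus metrics are prefixed with "vllm:".
for line in metrics.splitlines():
    if line.startswith("vllm:"):
        print(line)
```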
* [V1][CI/Test] Do basic test for top-p & top-k sampling ( vllm-project#12469 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [FlashInfer] Upgrade to 0.2.0 ( vllm-project#11194 )
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: youkaichao <[email protected]>
* Support FP8 FA from Quark format ( #388 )
* Support FP8 FA from Quark format
* Support FP8 FA from Quark format
* nit: update comment
* Direct call on ROCm
* 20250127 docs update ( #392 )
* updating code blocks
* typo
* updated manifest
* Including feedback
* whitespace
* Deepseek instructions
* hyperlink fix
* hyperlink fix
* updating what is new
* cpx update
* typo
* whitespace
* whitespace
* Faster Custom Paged Attention kernels ( #372 )
* integrate new cpa kernel, update tests and benchmark
* added comments to mfma4 kernel
* further comments for mfma16 kernel
* clang-format
* Lint
* add flag for logits rtz conversion and disable by default
* lint
* [Bugfix]: Fix paged attention unit tests of #372 ( #389 )
* [Bugfix]: fix paged attention tests based on the updated kernels in `csrc/attention/paged_attention_v1.cu`,`csrc/attention/paged_attention_v2.cu` and `csrc/rocm/attention.cu`.
* improve code documentation.
* lint
---------
Co-authored-by: vllmellm <[email protected]>
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: TJian <[email protected]>
Co-authored-by: vllmellm <[email protected]>
* Using a more precise profiling on ROCm to properly account for weights padding ( #394 )
* Update Dockerfile.rocm
* [Bugfix]: include the env variables required for running FastSyncLLM
Signed-off-by: vllmellm <[email protected]>
* fix pre-commit lint
Signed-off-by: vllmellm <[email protected]>
---------
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Rafael Vasquez <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Sungjae Lee <[email protected]>
Signed-off-by: Yida Wu <[email protected]>
Signed-off-by: Chenguang Li <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Alex-Brooks <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Shanshan Shen <[email protected]>
Signed-off-by: elijah <[email protected]>
Signed-off-by: Yikun <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Konrad Zawora <[email protected]>
Signed-off-by: tjtanaa <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: yisheng <[email protected]>
Signed-off-by: Abatom <[email protected]>
Signed-off-by: Liangfu Chen <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Yuan Zhou <[email protected]>
Signed-off-by: Sourashis Roy <[email protected]>
Signed-off-by: Nishidha Panpaliya <[email protected]>
Signed-off-by: Ilya Lavrenov <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: Wallas Santos <[email protected]>
Signed-off-by: jiang1.li <[email protected]>
Signed-off-by: yan ma <[email protected]>
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Maxime Fournioux <[email protected]>
Signed-off-by: Ye Qi <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: Kunshang Ji <[email protected]>
Signed-off-by: Kuntai Du <[email protected]>
Signed-off-by: Ren MinMin <[email protected]>
Signed-off-by: Travis Johnson <[email protected]>
Signed-off-by: Fred Reiss <[email protected]>
Signed-off-by: shaochangxu.scx <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: Kyle Sayers <[email protected]>
Signed-off-by: Rahul Tuli <[email protected]>
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Divakar Verma <[email protected]>
Signed-off-by: Gregory Shtrasberg <[email protected]>
Signed-off-by: hongxyan <[email protected]>
Signed-off-by: Michal Adamczyk <[email protected]>
Signed-off-by: zibai <[email protected]>
Signed-off-by: Martin Gleize <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: isikhi <[email protected]>
Signed-off-by: Jason Cheng <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]>
Signed-off-by: Thomas Parnell <[email protected]>
Signed-off-by: Jannis Schönleber <[email protected]>
Signed-off-by: rickyx <[email protected]>
Signed-off-by: Andy Lo <[email protected]>
Signed-off-by: Adrian Cole <[email protected]>
Signed-off-by: maleksan85 <[email protected]>
Signed-off-by: Hongxia Yang <[email protected]>
Signed-off-by: kevin <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: xffxff <[email protected]>
Signed-off-by: wangerxiao <[email protected]>
Signed-off-by: Alexei V. Ivanov <[email protected]>
Signed-off-by: zhenwei <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Siyuan Liu <[email protected]>
Signed-off-by: ElizaWszola <[email protected]>
Signed-off-by: Junichi Sato <[email protected]>
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
Signed-off-by: Keyun Tong <[email protected]>
Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Kyle Mistele <[email protected]>
Signed-off-by: Pooya Davoodi <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: vllmellm <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Rafael Vasquez <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Akshat Tripathi <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Avshalom Manevich <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Yangcheng Li <[email protected]>
Co-authored-by: Siyuan Li <[email protected]>
Co-authored-by: Sungjae Lee <[email protected]>
Co-authored-by: Concurrensee <[email protected]>
Co-authored-by: Chenguang Li <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Alex Brooks <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Shanshan Shen <[email protected]>
Co-authored-by: elijah <[email protected]>
Co-authored-by: Yikun Jiang <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Steve Luo <[email protected]>
Co-authored-by: mgoin <[email protected]>
Co-authored-by: Alexei-V-Ivanov-AMD <[email protected]>
Co-authored-by: Alexei V. Ivanov <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: Konrad Zawora <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
Co-authored-by: maang-h <[email protected]>
Co-authored-by: YiSheng5 <[email protected]>
Co-authored-by: Zhonghua Deng <[email protected]>
Co-authored-by: Liangfu Chen <[email protected]>
Co-authored-by: XiaobingZhang <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Yuan <[email protected]>
Co-authored-by: jiangjiadi <[email protected]>
Co-authored-by: jiadi.jjd <[email protected]>
Co-authored-by: sroy745 <[email protected]>
Co-authored-by: Jie Fu (傅杰) <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: WangErXiao <[email protected]>
Co-authored-by: Nishidha <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Wallas Henrique <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: Yan Ma <[email protected]>
Co-authored-by: rasmith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: Maxime Fournioux <[email protected]>
Co-authored-by: Guspan Tanadi <[email protected]>
Co-authored-by: Ye (Charlotte) Qi <[email protected]>
Co-authored-by: yeq <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
Co-authored-by: Charles Frye <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: cennn <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: minmin <[email protected]>
Co-authored-by: Ren MinMin <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Fred Reiss <[email protected]>
Co-authored-by: shaochangxu <[email protected]>
Co-authored-by: shaochangxu.scx <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: sixgod <[email protected]>
Co-authored-by: Elfie Guo <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Kyle Sayers <[email protected]>
Co-authored-by: Rahul Tuli <[email protected]>
Co-authored-by: Keyun Tong <[email protected]>
Co-authored-by: RunningLeon <[email protected]>
Co-authored-by: kewang-xlnx <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: tvirolai-amd <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Zhaoyi Li <[email protected]>
Co-authored-by: charlifu <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Hongxia Yang <[email protected]>
Co-authored-by: yancong <[email protected]>
Co-authored-by: Michal Adamczyk <[email protected]>
Co-authored-by: gujing <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: Işık <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Cheng Kuan Yong Jason <[email protected]>
Co-authored-by: Jinzhen Lin <[email protected]>
Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: Jannis Schönleber <[email protected]>
Co-authored-by: Ricky Xu <[email protected]>
Co-authored-by: Andy Lo <[email protected]>
Co-authored-by: Adrian Cole <[email protected]>
Co-authored-by: Jani Monoses <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: maleksan85 <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: zhou fan <[email protected]>
Co-authored-by: ilia-cher <[email protected]>
Co-authored-by: liuzhenwei <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>
Co-authored-by: Siyuan Liu <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Junichi Sato <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: omer-dayan <[email protected]>
Co-authored-by: Mohit Deopujari <[email protected]>
Co-authored-by: Jeremy Arnold <[email protected]>
Co-authored-by: Matthew Hendrey <[email protected]>
Co-authored-by: Kyle Mistele <[email protected]>
Co-authored-by: Pooya Davoodi <[email protected]>
Co-authored-by: Mark McLoughlin <[email protected]>
Co-authored-by: Bowen Wang <[email protected]>
Co-authored-by: Bowen Bao <[email protected]>
Co-authored-by: arakowsk-amd <[email protected]>
Co-authored-by: sanyalington <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: vllmellm <[email protected]>

hongxiayang added a commit to ROCm/vllm that referenced this pull request Feb 5, 2025

[Bug Fix] Missing vllm.envs ( #405 ) … 87b3c56

* [Model] Initialize support for Deepseek-VL2 models ( vllm-project#11578 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Hardware][CPU] Multi-LoRA implementation for the CPU backend ( vllm-project#11100 )
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [Hardware][TPU] workaround fix for MoE on TPU ( vllm-project#11764 )
* [V1][Core][1/n] Logging and Metrics ( vllm-project#11962 )
Signed-off-by: [email protected] <[email protected]>
* [Model] Support GGUF models newly added in `transformers` 4.46.0 ( vllm-project#9685 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [V1] [2/n] Logging and Metrics - `OutputProcessor` Abstraction ( vllm-project#11973 )
Signed-off-by: [email protected] <[email protected]>
* [MISC] fix typo in kv transfer send recv test ( vllm-project#11983 )
* [Bug] Fix usage of `.transpose()` and `.view()` consecutively. ( vllm-project#11979 )
* [CI][Spec Decode] fix: broken test for EAGLE model ( vllm-project#11972 )
Signed-off-by: Sungjae Lee <[email protected]>
* [Misc] Fix Deepseek V2 fp8 kv-scale remapping ( vllm-project#11947 )
Signed-off-by: Yida Wu <[email protected]>
* [Misc]Minor Changes about Worker ( vllm-project#11555 )
Signed-off-by: Chenguang Li <[email protected]>
* [platform] add ray_device_key ( vllm-project#11948 )
Signed-off-by: youkaichao <[email protected]>
* Fix Max Token ID for Qwen-VL-Chat ( vllm-project#11980 )
Signed-off-by: Alex-Brooks <[email protected]>
* [Kernel] unified_attention for Attention.forward ( vllm-project#11967 )
Signed-off-by: Chen Zhang <[email protected]>
* [Doc][V1] Update model implementation guide for V1 support ( vllm-project#11998 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Doc] Organise installation documentation into categories and tabs ( vllm-project#11935 )
Signed-off-by: Harry Mellor <[email protected]>
* [platform] add device_control env var ( vllm-project#12009 )
Signed-off-by: youkaichao <[email protected]>
* [Platform] Move get_punica_wrapper() function to Platform ( vllm-project#11516 )
Signed-off-by: Shanshan Shen <[email protected]>
* bugfix: Fix signature mismatch in benchmark's `get_tokenizer` function ( vllm-project#11982 )
Signed-off-by: elijah <[email protected]>
* [Doc] Fix build from source and installation link in README.md ( vllm-project#12013 )
Signed-off-by: Yikun <[email protected]>
* Using list
* [Bugfix] Fix deepseekv3 gate bias error ( vllm-project#12002 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]>
* Revert "[misc] improve memory profiling ( vllm-project#11809 )"
This reverts commit 889e662 .
* Multi-lingual P3L ( #356 )
* Committing the *multilingual* P3L test.
* Created a *multi-lingual* P3L test.
* Making ruff happy.
* .
* Added a reference to the language-scripture Confluence table.
* Typo fixing.
* Harmonizing naming.
* Fixing comments in the header.
---------
Co-authored-by: Alexei V. Ivanov <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
* Trying to make scales work with compileable attention
* [Docs] Add Sky Computing Lab to project intro ( vllm-project#12019 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [HPU][Bugfix] set_forward_context and CI test execution ( vllm-project#12014 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Doc] Update Quantization Hardware Support Documentation ( vllm-project#12025 )
Signed-off-by: tjtanaa <[email protected]>
Co-authored-by: tjtanaa <[email protected]>
* [HPU][misc] add comments for explanation ( vllm-project#12034 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix various bugs in multi-modal processor ( vllm-project#12031 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Kernel] Revert the API change of Attention.forward ( vllm-project#12038 )
Signed-off-by: Chen Zhang <[email protected]>
* [Platform] Add output for Attention Backend ( vllm-project#11981 )
Signed-off-by: wangxiyuan <[email protected]>
* [Bugfix][Kernel] Give unique name to BlockSparseFlashAttention ( vllm-project#12040 )
Signed-off-by: Chen Zhang <[email protected]>
* Explain where the engine args go when using Docker ( vllm-project#12041 )
Signed-off-by: Harry Mellor <[email protected]>
* Docs lint
* [Doc]: Update the Json Example of the `Engine Arguments` document ( vllm-project#12045 )
* [Misc] Merge bitsandbytes_stacked_params_mapping and packed_modules_mapping ( vllm-project#11924 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Kernel] Support MulAndSilu ( vllm-project#11624 )
Signed-off-by: Jee Jee Li <[email protected]>
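(A plain-PyTorch reference of the gating pattern the fused kernel implements, as I read it: the first half of the last dimension multiplies silu() of the second half. Treat it as a sketch rather than the kernel's exact semantics.)
```python
import torch
import torch.nn.functional as F

def mul_and_silu_ref(x: torch.Tensor) -> torch.Tensor:
    # x: [..., 2 * d] -> output: [..., d]
    d = x.shape[-1] // 2
    return x[..., :d] * F.silu(x[..., d:])

print(mul_and_silu_ref(torch.randn(2, 8)).shape)  # torch.Size([2, 4])
```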
* [HPU][Bugfix] Don't use /dev/accel/accel0 for HPU autodetection in setup.py ( vllm-project#12046 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Platform] move current_memory_usage() into platform ( vllm-project#11369 )
Signed-off-by: Shanshan Shen <[email protected]>
* [V1][BugFix] Fix edge case in VLM scheduling ( vllm-project#12065 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Misc] Add multipstep chunked-prefill support for FlashInfer ( vllm-project#10467 )
* [core] Turn off GPU communication overlap for Ray executor ( vllm-project#12051 )
Signed-off-by: Rui Qiao <[email protected]>
* [core] platform agnostic executor via collective_rpc ( vllm-project#11256 )
Signed-off-by: youkaichao <[email protected]>
* [Doc] Update examples to remove SparseAutoModelForCausalLM ( vllm-project#12062 )
Signed-off-by: Kyle Sayers <[email protected]>
* [V1][Prefix Cache] Move the logic of num_computed_tokens into KVCacheManager ( vllm-project#12003 )
* Fix: cases with empty sparsity config ( vllm-project#12057 )
Signed-off-by: Rahul Tuli <[email protected]>
* Type-fix: make execute_model output type optional ( vllm-project#12020 )
* [Platform] Do not raise error if _Backend is not found ( vllm-project#12023 )
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
* [Model]: Support internlm3 ( vllm-project#12037 )
* Misc: allow to use proxy in `HTTPConnection` ( vllm-project#12042 )
Signed-off-by: Yuan Zhou <[email protected]>
* [Misc][Quark] Upstream Quark format to VLLM ( vllm-project#10765 )
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Doc]: Update `OpenAI-Compatible Server` documents ( vllm-project#12082 )
* [Bugfix] use right truncation for non-generative tasks ( vllm-project#12050 )
Signed-off-by: Joe Runde <[email protected]>
* [V1][Core] Autotune encoder cache budget ( vllm-project#11895 )
Signed-off-by: Roger Wang <[email protected]>
* [Bugfix] Fix _get_lora_device for HQQ marlin ( vllm-project#12090 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* Allow hip sources to be directly included when compiling for rocm. ( vllm-project#12087 )
* [Core] Default to using per_token quantization for fp8 when cutlass is supported. ( vllm-project#8651 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: mgoin <[email protected]>
* [Doc] Add documentation for specifying model architecture ( vllm-project#12105 )
* Various cosmetic/comment fixes ( vllm-project#12089 )
Signed-off-by: mgoin <[email protected]>
* [Bugfix] Remove hardcoded `head_size=256` for Deepseek v2 and v3 ( vllm-project#12067 )
Signed-off-by: Isotr0py <[email protected]>
* Support torchrun and SPMD-style offline inference ( vllm-project#12071 )
Signed-off-by: youkaichao <[email protected]>
* [core] LLM.collective_rpc interface and RLHF example ( vllm-project#12084 )
Signed-off-by: youkaichao <[email protected]>
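(A hedged sketch of the `LLM.collective_rpc` interface added here: run something on every worker and collect the results. The `get_device_name` helper is made up for illustration, and it assumes callables passed this way are invoked with the worker object as their first argument.)
```python
from vllm import LLM

def get_device_name(worker) -> str:
    # Runs inside each worker process; the worker object is passed in.
    import torch
    return torch.cuda.get_device_name()

llm = LLM(model="facebook/opt-125m", tensor_parallel_size=2)
print(llm.collective_rpc(get_device_name))  # one entry per worker
```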
* [Bugfix] Fix max image feature size for Llava-one-vision ( vllm-project#12104 )
Signed-off-by: Roger Wang <[email protected]>
* Enable user marker for vllm profiling ( #357 )
* Enable user marker for vllm profiling
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
* [misc] Add LoRA kernel micro benchmarks ( vllm-project#11579 )
* [Model] Add support for deepseek-vl2-tiny model ( vllm-project#12068 )
Signed-off-by: Isotr0py <[email protected]>
* Deepseek V3 support ( #364 )
* Changing the hard coded datatype to see if it's enough for the model to work
* Picking the upstream moe kernel version
* make upstream fix for v3 also work for rocm v2
* Conditional fnuz dtype
* Requantizing from fn to fnuz
* Requantizing moe as well
* Actually requantizing moe weights
* Conditional requantization and assert on padding in block quant
* Format
---------
Co-authored-by: charlifu <[email protected]>
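(The requantization bullets in the block above boil down to a dtype conversion. Here is a hedged standalone sketch of the idea, not the PR's code: ROCm's `float8_e4m3fnuz` covers roughly half the range of `float8_e4m3fn`, so values are halved and the weight scale doubled to keep dequantized results unchanged.)
```python
import torch

def e4m3fn_to_e4m3fnuz(weight: torch.Tensor, weight_scale: torch.Tensor):
    """Re-encode OCP fp8 (e4m3fn) weights into the fnuz variant used on MI300."""
    # Halve the stored values so they fit the smaller fnuz range...
    weight_fnuz = (weight.to(torch.float32) * 0.5).to(torch.float8_e4m3fnuz)
    # ...and double the scale so dequantization yields the same numbers.
    return weight_fnuz, weight_scale * 2.0
```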
* [Bugfix] Set enforce_eager automatically for mllama ( vllm-project#12127 )
Signed-off-by: Chen Zhang <[email protected]>
* [Bugfix] Fix a path bug in disaggregated prefill example script. ( vllm-project#12121 )
Signed-off-by: Kuntai Du <[email protected]>
* [CI]add genai-perf benchmark in nightly benchmark ( vllm-project#10704 )
Signed-off-by: Kunshang Ji <[email protected]>
* [Doc] Add instructions on using Podman when SELinux is active ( vllm-project#12136 )
Signed-off-by: Yuan Tang <[email protected]>
* [Bugfix] Fix issues in CPU build Dockerfile ( vllm-project#12135 )
Signed-off-by: Yuan Tang <[email protected]>
* [BugFix] add more `is not None` check in VllmConfig.__post_init__ ( vllm-project#12138 )
Signed-off-by: Chen Zhang <[email protected]>
* [Misc] Add deepseek_vl2 chat template ( vllm-project#12143 )
Signed-off-by: Isotr0py <[email protected]>
* [ROCm][MoE] moe tuning support for rocm ( vllm-project#12049 )
Signed-off-by: Divakar Verma <[email protected]>
* [V1] Move more control of kv cache initialization from model_executor to EngineCore ( vllm-project#11960 )
Signed-off-by: Chen Zhang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
* [Misc][LoRA] Improve the readability of LoRA error messages ( vllm-project#12102 )
Signed-off-by: Jee Jee Li <[email protected]>
* [CI/Build][CPU][Bugfix] Fix CPU CI ( vllm-project#12150 )
Signed-off-by: jiang1.li <[email protected]>
* [core] allow callable in collective_rpc ( vllm-project#12151 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix score api for missing max_model_len validation ( vllm-project#12119 )
Signed-off-by: Wallas Santos <[email protected]>
* [Bugfix] Mistral tokenizer encode accept list of str ( vllm-project#12149 )
Signed-off-by: Kunshang Ji <[email protected]>
* [AMD][FP8] Using MI300 FP8 format on ROCm for block_quant ( vllm-project#12134 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* [torch.compile] disable logging when cache is disabled ( vllm-project#12043 )
Signed-off-by: youkaichao <[email protected]>
* [misc] fix cross-node TP ( vllm-project#12166 )
Signed-off-by: youkaichao <[email protected]>
* [AMD][CI/Build][Bugfix] use pytorch stale wheel ( vllm-project#12172 )
Signed-off-by: hongxyan <[email protected]>
* [core] further polish memory profiling ( vllm-project#12126 )
Signed-off-by: youkaichao <[email protected]>
* [Docs] Fix broken link in SECURITY.md ( vllm-project#12175 )
Signed-off-by: Russell Bryant <[email protected]>
* [Model] Port deepseek-vl2 processor, remove dependency ( vllm-project#12169 )
Signed-off-by: Isotr0py <[email protected]>
* [core] clean up executor class hierarchy between v1 and v0 ( vllm-project#12171 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Support register quantization method out-of-tree ( vllm-project#11969 )
* [V1] Collect env var for usage stats ( vllm-project#12115 )
* [BUGFIX] Move scores to float32 in case of running xgrammar on cpu ( vllm-project#12152 )
Signed-off-by: Michal Adamczyk <[email protected]>
* [Bugfix] Fix multi-modal processors for transformers 4.48 ( vllm-project#12187 )
* [torch.compile] store inductor compiled Python file ( vllm-project#12182 )
Signed-off-by: youkaichao <[email protected]>
* benchmark_serving support --served-model-name param ( vllm-project#12109 )
Signed-off-by: zibai <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Misc] Add BNB support to GLM4-V model ( vllm-project#12184 )
Signed-off-by: Isotr0py <[email protected]>
* [V1] Add V1 support of Qwen2-VL ( vllm-project#12128 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>
* [Model] Support for fairseq2 Llama ( vllm-project#11442 )
Signed-off-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>
* [Bugfix] Fix num_heads value for simple connector when tp enabled ( vllm-project#12074 )
Signed-off-by: Shangming Cai <[email protected]>
* [torch.compile] fix sym_tensor_indices ( vllm-project#12191 )
Signed-off-by: youkaichao <[email protected]>
* Move linting to `pre-commit` ( vllm-project#11975 )
Signed-off-by: Harry Mellor <[email protected]>
* [DOC] Fix typo in docstring and assert message ( vllm-project#12194 )
Signed-off-by: Yuan Tang <[email protected]>
* [DOC] Add missing docstring in LLMEngine.add_request() ( vllm-project#12195 )
Signed-off-by: Yuan Tang <[email protected]>
* [Bugfix] Fix incorrect types in LayerwiseProfileResults ( vllm-project#12196 )
Signed-off-by: Yuan Tang <[email protected]>
* [Model] Add Qwen2 PRM model support ( vllm-project#12202 )
Signed-off-by: Isotr0py <[email protected]>
* [Core] Interface for accessing model from `VllmRunner` ( vllm-project#10353 )
Signed-off-by: DarkLight1337 <[email protected]>
* [misc] add placeholder format.sh ( vllm-project#12206 )
Signed-off-by: youkaichao <[email protected]>
* [CI/Build] Remove dummy CI steps ( vllm-project#12208 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI/Build] Make pre-commit faster ( vllm-project#12212 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model] Upgrade Aria to transformers 4.48 ( vllm-project#12203 )
Signed-off-by: DarkLight1337 <[email protected]>
* [misc] print a message to suggest how to bypass commit hooks ( vllm-project#12217 )
Signed-off-by: youkaichao <[email protected]>
* [core][bugfix] configure env var during import vllm ( vllm-project#12209 )
Signed-off-by: youkaichao <[email protected]>
* [V1] Remove `_get_cache_block_size` ( vllm-project#12214 )
Signed-off-by: Chen Zhang <[email protected]>
* [Misc] Pass `attention` to impl backend ( vllm-project#12218 )
Signed-off-by: wangxiyuan <[email protected]>
* [Bugfix] Fix `HfExampleModels.find_hf_info` ( vllm-project#12223 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI] Pass local python version explicitly to pre-commit mypy.sh ( vllm-project#12224 )
Signed-off-by: Chen Zhang <[email protected]>
* Using ROCm6.3.1 base docker and building hipblas-common ( #366 )
* [Misc] Update CODEOWNERS ( vllm-project#12229 )
* fix: update platform detection for M-series arm based MacBook processors ( vllm-project#12227 )
Signed-off-by: isikhi <[email protected]>
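(Not the PR's code, but the standard-library check that M-series detection comes down to.)
```python
import platform

def is_apple_silicon() -> bool:
    # On M-series Macs, platform.machine() reports "arm64" rather than "x86_64".
    return platform.system() == "Darwin" and platform.machine() == "arm64"

print(is_apple_silicon())
```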
* [misc] add cuda runtime version to usage data ( vllm-project#12190 )
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [bugfix] catch xgrammar unsupported array constraints ( vllm-project#12210 )
Signed-off-by: Jason Cheng <[email protected]>
* [Kernel] optimize moe_align_block_size for cuda graph and large num_experts (e.g. DeepSeek-V3) ( vllm-project#12222 )
Signed-off-by: Jinzhen Lin <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
* Add quantization and guided decoding CODEOWNERS ( vllm-project#12228 )
Signed-off-by: mgoin <[email protected]>
* [AMD][Build] Porting dockerfiles from the ROCm/vllm fork ( vllm-project#11777 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* [BugFix] Fix GGUF tp>1 when vocab_size is not divisible by 64 ( vllm-project#12230 )
Signed-off-by: NickLucche <[email protected]>
* [ci/build] disable failed and flaky tests ( vllm-project#12240 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Rename `MultiModalInputsV2 -> MultiModalInputs` ( vllm-project#12244 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc]Add BNB quantization for PaliGemmaForConditionalGeneration ( vllm-project#12237 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Misc] Remove redundant TypeVar from base model ( vllm-project#12248 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix mm_limits access for merged multi-modal processor ( vllm-project#12252 )
Signed-off-by: DarkLight1337 <[email protected]>
* [torch.compile] transparent compilation with more logging ( vllm-project#12246 )
Signed-off-by: youkaichao <[email protected]>
* [V1][Bugfix] Fix data item ordering in mixed-modality inference ( vllm-project#12259 )
Signed-off-by: Roger Wang <[email protected]>
* Remove pytorch comments for outlines + compressed-tensors ( vllm-project#12260 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Platform] improve platforms getattr ( vllm-project#12264 )
Signed-off-by: Mengqing Cao <[email protected]>
* [ci/build] update nightly torch for gh200 test ( vllm-project#12270 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] fix race condition that leads to wrong order of token returned ( vllm-project#10802 )
Signed-off-by: Jannis Schönleber <[email protected]>
* [Kernel] fix moe_align_block_size error condition ( vllm-project#12239 )
Signed-off-by: Jinzhen Lin <[email protected]>
* [v1][stats][1/n] Add RequestStatsUpdate and RequestStats types ( vllm-project#10907 )
Signed-off-by: rickyx <[email protected]>
* [Bugfix] Multi-sequence broken ( vllm-project#11898 )
Signed-off-by: Andy Lo <[email protected]>
* [Misc] Remove experimental dep from tracing.py ( vllm-project#12007 )
Signed-off-by: Adrian Cole <[email protected]>
* [Misc] Set default backend to SDPA for get_vit_attn_backend ( vllm-project#12235 )
Signed-off-by: wangxiyuan <[email protected]>
* [Core] Free CPU pinned memory on environment cleanup ( vllm-project#10477 )
* Update pre-commit.yml ( #374 )
* Update pre-commit.yml
* Reapplying missing format
* New codespell exclude location
---------
Co-authored-by: Kevin H. Luu <[email protected]>
* [bugfix] moe tuning. rm is_navi() ( vllm-project#12273 )
Signed-off-by: Divakar Verma <[email protected]>
* [BUGFIX] When skip_tokenize_init and multistep are set, execution crashes ( vllm-project#12277 )
Signed-off-by: maleksan85 <[email protected]>
Co-authored-by: maleksan85 <[email protected]>
* [Documentation][AMD] Add information about prebuilt ROCm vLLM docker for perf validation purpose ( vllm-project#12281 )
Signed-off-by: Hongxia Yang <[email protected]>
* [VLM] Simplify post-processing of replacement info ( vllm-project#12269 )
Signed-off-by: DarkLight1337 <[email protected]>
* [ci/lint] Add back default arg for pre-commit ( vllm-project#12279 )
Signed-off-by: kevin <[email protected]>
* [CI] add docker volume prune to neuron CI ( vllm-project#12291 )
Signed-off-by: Liangfu Chen <[email protected]>
* [Ci/Build] Fix mypy errors on main ( vllm-project#12296 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Benchmark] More accurate TPOT calc in `benchmark_serving.py` ( vllm-project#12288 )
Signed-off-by: Nick Hill <[email protected]>
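(For readers unfamiliar with the metric, a hedged sketch of the arithmetic behind a more accurate time-per-output-token: the first token's latency is already counted as TTFT, so the average should run over the remaining tokens only. The function name and values are illustrative.)
```python
def tpot_seconds(e2e_latency_s: float, ttft_s: float, num_output_tokens: int) -> float:
    """Time per output token, excluding the first token (covered by TTFT)."""
    assert num_output_tokens > 1, "TPOT is undefined for single-token outputs"
    return (e2e_latency_s - ttft_s) / (num_output_tokens - 1)

# Example: 2.0 s end-to-end, 0.5 s to first token, 31 tokens -> 0.05 s/token
print(tpot_seconds(2.0, 0.5, 31))
```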
* [core] separate builder init and builder prepare for each batch ( vllm-project#12253 )
Signed-off-by: youkaichao <[email protected]>
* [Build] update requirements of no-device ( vllm-project#12299 )
Signed-off-by: Mengqing Cao <[email protected]>
* [Core] Support fully transparent sleep mode ( vllm-project#11743 )
Signed-off-by: youkaichao <[email protected]>
* [VLM] Avoid unnecessary tokenization ( vllm-project#12310 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model][Bugfix]: correct Aria model output ( vllm-project#12309 )
Signed-off-by: xffxff <[email protected]>
* [Bugfix][VLM] Fix mixed-modality inference backward compatibility for V0 ( vllm-project#12313 )
Signed-off-by: Roger Wang <[email protected]>
* [Doc] Add docs for prompt replacement ( vllm-project#12318 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] Fix the error in the tip for the --lora-modules parameter ( vllm-project#12319 )
Signed-off-by: wangerxiao <[email protected]>
* [Misc] Improve the readability of BNB error messages ( vllm-project#12320 )
Signed-off-by: Jee Jee Li <[email protected]>
* Skip tokenize/detokenize when it is disabled by arg --skip-tokenizer-init ( #367 )
* switching detokenize flag to be False
* detokenize = False for benchmarks
* restoring default in main vllm code for detokenize
* removing extra spaces
* moving detokenize to flag
* adding support for token ids
---------
Co-authored-by: maleksan85 <[email protected]>
* [Bugfix] Fix HPU multiprocessing executor ( vllm-project#12167 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Core] Support `reset_prefix_cache` ( vllm-project#12284 )
* [Frontend][V1] Online serving performance improvements ( vllm-project#12287 )
* [AMD][Quantization] Add TritonScaledMMLinearKernel since int8 is broken for AMD ( vllm-project#12282 )
Signed-off-by: Randall Smith <[email protected]>
* FP8 FA fixes ( #381 )
* FP8 FA fixes
Summary:
Add missing clamp and fix reciprocal scale computation.
* linter
* Returning the use of the proper stream in allreduce ( #382 )
* [Bugfix] Fixing AMD LoRA CI test. ( vllm-project#12329 )
Signed-off-by: Alexei V. Ivanov <[email protected]>
* [Docs] Update FP8 KV Cache documentation ( vllm-project#12238 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Docs] Document vulnerability disclosure process ( vllm-project#12326 )
Signed-off-by: Russell Bryant <[email protected]>
* [V1] Add `uncache_blocks` ( vllm-project#12333 )
* [doc] explain common errors around torch.compile ( vllm-project#12340 )
Signed-off-by: youkaichao <[email protected]>
* [Hardware][Gaudi][BugFix] Fix dataclass error due to triton package update ( vllm-project#12338 )
Signed-off-by: zhenwei <[email protected]>
* [Bugfix] Fix k_proj's bias for whisper self attention ( vllm-project#12342 )
Signed-off-by: Isotr0py <[email protected]>
* [Kernel] Flash Attention 3 Support ( vllm-project#12093 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Doc] Troubleshooting errors during model inspection ( vllm-project#12351 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] Simplify M-RoPE ( vllm-project#12352 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: imkero <[email protected]>
* [Bugfix] Fix broken internvl2 inference with v1 ( vllm-project#12360 )
Signed-off-by: Isotr0py <[email protected]>
* [core] add wake_up doc and some sanity check ( vllm-project#12361 )
Signed-off-by: youkaichao <[email protected]>
* [torch.compile] decouple compile sizes and cudagraph sizes ( vllm-project#12243 )
Signed-off-by: youkaichao <[email protected]>
* [FP8][Kernel] Dynamic kv cache scaling factors computation ( vllm-project#11906 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>
* [TPU] Update TPU CI to use torchxla nightly on 20250122 ( vllm-project#12334 )
Signed-off-by: Siyuan Liu <[email protected]>
* [Docs] Document Phi-4 support ( vllm-project#12362 )
Signed-off-by: Isotr0py <[email protected]>
* [BugFix] Fix parameter names and `process_after_weight_loading` for W4A16 MoE Group Act Order ( vllm-project#11528 )
Signed-off-by: ElizaWszola <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Misc] Fix OpenAI API Compatibility Issues in Benchmark Script ( vllm-project#12357 )
Signed-off-by: Junichi Sato <[email protected]>
* [Docs] Add meetup slides ( vllm-project#12345 )
Signed-off-by: Woosuk Kwon <[email protected]>
* Using pytorch commit past the point when rowwise PR ( pytorch/pytorch#144432 ) was merged ( #384 )
* [Docs] Update spec decode + structured output in compat matrix ( vllm-project#12373 )
Signed-off-by: Russell Bryant <[email protected]>
* [V1][Frontend] Coalesce bunched `RequestOutput`s ( vllm-project#12298 )
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
* Set weights_only=True when using torch.load() ( vllm-project#12366 )
Signed-off-by: Russell Bryant <[email protected]>
* [Bugfix] Path join when building local path for S3 clone ( vllm-project#12353 )
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
* Update compressed-tensors version ( vllm-project#12367 )
* [V1] Increase default batch size for H100/H200 ( vllm-project#12369 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [perf] fix perf regression from vllm-project#12253 ( vllm-project#12380 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Use VisionArena Dataset for VLM Benchmarking ( vllm-project#12389 )
Signed-off-by: Roger Wang <[email protected]>
* [ci/build] fix wheel size check ( vllm-project#12396 )
Signed-off-by: youkaichao <[email protected]>
* [Hardware][Gaudi][Doc] Add missing step in setup instructions ( vllm-project#12382 )
* [ci/build] sync default value for wheel size ( vllm-project#12398 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Enable proxy support in benchmark script ( vllm-project#12356 )
Signed-off-by: Junichi Sato <[email protected]>
* [Bugfix][Kernel] Fix CUDA 11.8 being broken by FA3 build ( vllm-project#12375 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Applying scales rename to fp8 config ( #387 )
* [Misc] Remove deprecated code ( vllm-project#12383 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix][Kernel] FA3 Fix - RuntimeError: This flash attention build only supports pack_gqa (for build size reasons). ( vllm-project#12405 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Dev-docker Documentation Updates ( #378 )
* Dev-docker Documentation Updates
Minor updates to several sections, with links to other documents where appropriate.
* Fix formatting of GEMM filename
* README cleanup
- Reorder some sections of the README to make them easier to follow
- Improve formatting of bash commands
- Prefer use of huggingface model names instead of hard-coded directories
- Clean up wording
* Expanded sample commands for Latency and Throughput
* Fix markdown links
* Fix pre-commit errors
* Updates from review
Initial updates to incorporate feedback from a review session held with @t-parry
* Update script args to match current recommendations
* Remove recommended max-num-seqs values for now
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
* [Bugfix][Kernel] Fix moe align block issue for mixtral ( vllm-project#12413 )
* [Bugfix] Fix BLIP-2 processing ( vllm-project#12412 )
Signed-off-by: DarkLight1337 <[email protected]>
* [ROCm][MoE] MI300 tuned configs Mixtral-8x(7B,22B) | fp16, fp8 ( vllm-project#12408 )
Signed-off-by: Divakar Verma <[email protected]>
* [Misc] Add FA2 support to ViT MHA layer ( vllm-project#12355 )
Signed-off-by: Isotr0py <[email protected]>
* [TPU][CI] Update torchxla version in requirement-tpu.txt ( vllm-project#12422 )
Signed-off-by: Siyuan Liu <[email protected]>
* [Misc][Bugfix] FA3 support to ViT MHA layer ( vllm-project#12435 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [V1][Perf] Reduce scheduling overhead in model runner after cuda sync ( vllm-project#12094 )
Signed-off-by: Keyun Tong <[email protected]>
* [V1][Bugfix] Fix assertion when mm hashing is turned off ( vllm-project#12439 )
Signed-off-by: Roger Wang <[email protected]>
* [Misc] Revert FA on ViT vllm-project#12355 and vllm-project#12435 ( vllm-project#12445 )
* [Frontend] generation_config.json for maximum tokens ( vllm-project#12242 )
Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
* [Bugfix] Disable w16a16 2of4 sparse CompressedTensors24 ( vllm-project#12417 )
Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: mgoin <[email protected]>
* [Bugfix/CI] Fix broken kernels/test_mha.py ( vllm-project#12450 )
* [Bugfix][Kernel] Fix perf regression caused by PR vllm-project#12405 ( vllm-project#12434 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Build/CI] Fix libcuda.so linkage ( vllm-project#12424 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [Frontend] Rerank API (Jina- and Cohere-compatible API) ( vllm-project#12376 )
Signed-off-by: Kyle Mistele <[email protected]>
* [DOC] Add link to vLLM blog ( vllm-project#12460 )
Signed-off-by: Yuan Tang <[email protected]>
* [V1] Avoid list creation in input preparation ( vllm-project#12457 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Frontend] Support scores endpoint in run_batch ( vllm-project#12430 )
Signed-off-by: Pooya Davoodi <[email protected]>
* [Bugfix] Fix Granite 3.0 MoE model loading ( vllm-project#12446 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix missing seq_start_loc in xformers prefill metadata ( vllm-project#12464 )
Signed-off-by: Isotr0py <[email protected]>
* [V1][Minor] Minor optimizations for update_from_output ( vllm-project#12454 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Bugfix] Fix gpt2 GGUF inference ( vllm-project#12467 )
Signed-off-by: Isotr0py <[email protected]>
* [Build] Only build 9.0a for scaled_mm and sparse kernels ( vllm-project#12339 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [V1][Metrics] Add initial Prometheus logger ( vllm-project#12416 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [V1][CI/Test] Do basic test for top-p & top-k sampling ( vllm-project#12469 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [FlashInfer] Upgrade to 0.2.0 ( vllm-project#11194 )
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: youkaichao <[email protected]>
* Support FP8 FA from Quark format ( #388 )
* Support FP8 FA from Quark format
* Support FP8 FA from Quark format
* nit: update comment
* Direct call on ROCm
* 20250127 docs update ( #392 )
* updating code blocks
* typo
* updated manifest
* Including feedback
* whitespace
* Deepseek instructions
* hyperlink fix
* hyperlink fix
* updating what is new
* cpx update
* typo
* whitespace
* whitespace
* Faster Custom Paged Attention kernels ( #372 )
* integrate new cpa kernel, update tests and benchmark
* added comments to mfma4 kernel
* further comments for mfma16 kernel
* clang-format
* Lint
* add flag for logits rtz conversion and disable by default
* lint
* [Bugfix]: Fix paged attention unit tests of #372 ( #389 )
* [Bugfix]: fix paged attention tests based on the updated kernels in `csrc/attention/paged_attention_v1.cu`,`csrc/attention/paged_attention_v2.cu` and `csrc/rocm/attention.cu`.
* improve code documentation.
* lint
---------
Co-authored-by: vllmellm <[email protected]>
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: TJian <[email protected]>
Co-authored-by: vllmellm <[email protected]>
* Using a more precise profiling on ROCm to properly account for weights padding ( #394 )
* Update Dockerfile.rocm
* [Bugfix]: include the env variables required for running FastSyncLLM
Signed-off-by: vllmellm <[email protected]>
* fix pre-commit lint
Signed-off-by: vllmellm <[email protected]>
* [Bugfix] included missing environment variable
Signed-off-by: vllmellm <[email protected]>
---------
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Sungjae Lee <[email protected]>
Signed-off-by: Yida Wu <[email protected]>
Signed-off-by: Chenguang Li <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Alex-Brooks <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Shanshan Shen <[email protected]>
Signed-off-by: elijah <[email protected]>
Signed-off-by: Yikun <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Konrad Zawora <[email protected]>
Signed-off-by: tjtanaa <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: yisheng <[email protected]>
Signed-off-by: Abatom <[email protected]>
Signed-off-by: Liangfu Chen <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Yuan Zhou <[email protected]>
Signed-off-by: Sourashis Roy <[email protected]>
Signed-off-by: Nishidha Panpaliya <[email protected]>
Signed-off-by: Ilya Lavrenov <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: Wallas Santos <[email protected]>
Signed-off-by: jiang1.li <[email protected]>
Signed-off-by: yan ma <[email protected]>
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Maxime Fournioux <[email protected]>
Signed-off-by: Ye Qi <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: Kunshang Ji <[email protected]>
Signed-off-by: Kuntai Du <[email protected]>
Signed-off-by: Ren MinMin <[email protected]>
Signed-off-by: Travis Johnson <[email protected]>
Signed-off-by: Fred Reiss <[email protected]>
Signed-off-by: shaochangxu.scx <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: Rafael Vasquez <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: Kyle Sayers <[email protected]>
Signed-off-by: Rahul Tuli <[email protected]>
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Divakar Verma <[email protected]>
Signed-off-by: Gregory Shtrasberg <[email protected]>
Signed-off-by: hongxyan <[email protected]>
Signed-off-by: Michal Adamczyk <[email protected]>
Signed-off-by: zibai <[email protected]>
Signed-off-by: Martin Gleize <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: isikhi <[email protected]>
Signed-off-by: Jason Cheng <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]>
Signed-off-by: Thomas Parnell <[email protected]>
Signed-off-by: Jannis Schönleber <[email protected]>
Signed-off-by: rickyx <[email protected]>
Signed-off-by: Andy Lo <[email protected]>
Signed-off-by: Adrian Cole <[email protected]>
Signed-off-by: maleksan85 <[email protected]>
Signed-off-by: Hongxia Yang <[email protected]>
Signed-off-by: kevin <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: xffxff <[email protected]>
Signed-off-by: wangerxiao <[email protected]>
Signed-off-by: Alexei V. Ivanov <[email protected]>
Signed-off-by: zhenwei <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Siyuan Liu <[email protected]>
Signed-off-by: ElizaWszola <[email protected]>
Signed-off-by: Junichi Sato <[email protected]>
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
Signed-off-by: Keyun Tong <[email protected]>
Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Kyle Mistele <[email protected]>
Signed-off-by: Pooya Davoodi <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: vllmellm <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Akshat Tripathi <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Avshalom Manevich <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Yangcheng Li <[email protected]>
Co-authored-by: Siyuan Li <[email protected]>
Co-authored-by: Sungjae Lee <[email protected]>
Co-authored-by: Concurrensee <[email protected]>
Co-authored-by: Chenguang Li <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Alex Brooks <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Shanshan Shen <[email protected]>
Co-authored-by: elijah <[email protected]>
Co-authored-by: Yikun Jiang <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Steve Luo <[email protected]>
Co-authored-by: mgoin <[email protected]>
Co-authored-by: Alexei-V-Ivanov-AMD <[email protected]>
Co-authored-by: Alexei V. Ivanov <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: Konrad Zawora <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
Co-authored-by: maang-h <[email protected]>
Co-authored-by: YiSheng5 <[email protected]>
Co-authored-by: Zhonghua Deng <[email protected]>
Co-authored-by: Liangfu Chen <[email protected]>
Co-authored-by: XiaobingZhang <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Yuan <[email protected]>
Co-authored-by: jiangjiadi <[email protected]>
Co-authored-by: jiadi.jjd <[email protected]>
Co-authored-by: sroy745 <[email protected]>
Co-authored-by: Jie Fu (傅杰) <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: WangErXiao <[email protected]>
Co-authored-by: Nishidha <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Wallas Henrique <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: Yan Ma <[email protected]>
Co-authored-by: rasmith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: Maxime Fournioux <[email protected]>
Co-authored-by: Guspan Tanadi <[email protected]>
Co-authored-by: Ye (Charlotte) Qi <[email protected]>
Co-authored-by: yeq <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
Co-authored-by: Charles Frye <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: cennn <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: minmin <[email protected]>
Co-authored-by: Ren MinMin <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Fred Reiss <[email protected]>
Co-authored-by: shaochangxu <[email protected]>
Co-authored-by: shaochangxu.scx <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: sixgod <[email protected]>
Co-authored-by: Rafael Vasquez <[email protected]>
Co-authored-by: Elfie Guo <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Kyle Sayers <[email protected]>
Co-authored-by: Rahul Tuli <[email protected]>
Co-authored-by: Keyun Tong <[email protected]>
Co-authored-by: RunningLeon <[email protected]>
Co-authored-by: kewang-xlnx <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: tvirolai-amd <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Zhaoyi Li <[email protected]>
Co-authored-by: charlifu <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Hongxia Yang <[email protected]>
Co-authored-by: yancong <[email protected]>
Co-authored-by: Michal Adamczyk <[email protected]>
Co-authored-by: gujing <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: Işık <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Cheng Kuan Yong Jason <[email protected]>
Co-authored-by: Jinzhen Lin <[email protected]>
Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: Jannis Schönleber <[email protected]>
Co-authored-by: Ricky Xu <[email protected]>
Co-authored-by: Andy Lo <[email protected]>
Co-authored-by: Adrian Cole <[email protected]>
Co-authored-by: Jani Monoses <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: maleksan85 <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: zhou fan <[email protected]>
Co-authored-by: ilia-cher <[email protected]>
Co-authored-by: liuzhenwei <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>
Co-authored-by: Siyuan Liu <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Junichi Sato <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: omer-dayan <[email protected]>
Co-authored-by: Mohit Deopujari <[email protected]>
Co-authored-by: Jeremy Arnold <[email protected]>
Co-authored-by: Matthew Hendrey <[email protected]>
Co-authored-by: Kyle Mistele <[email protected]>
Co-authored-by: Pooya Davoodi <[email protected]>
Co-authored-by: Mark McLoughlin <[email protected]>
Co-authored-by: Bowen Wang <[email protected]>
Co-authored-by: Bowen Bao <[email protected]>
Co-authored-by: arakowsk-amd <[email protected]>
Co-authored-by: sanyalington <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: vllmellm <[email protected]> NickLucche pushed a commit
to NickLucche/vllm
that referenced
this pull request Feb 7, 2025 [V1][Perf] Reduce scheduling overhead in model runner after cuda sync ( … … 42bfed0 …vllm-project#12094 )
Signed-off-by: Keyun Tong <[email protected]>
GWS0428 pushed a commit to GWS0428/VARserve that referenced this pull request Feb 12, 2025: [V1][Perf] Reduce scheduling overhead in model runner after cuda sync ( vllm-project#12094 ), commit 75d4b32
Signed-off-by: Keyun Tong <[email protected]>
hongxiayang added a commit to ROCm/vllm that referenced this pull request Feb 19, 2025: [FEAT] [AITER] Support AITER operators: Fused MoE, Linear, Norm ( #436 ), commit 4c8c86d
* [Doc] Update Quantization Hardware Support Documentation ( vllm-project#12025 )
Signed-off-by: tjtanaa <[email protected]>
Co-authored-by: tjtanaa <[email protected]>
* [HPU][misc] add comments for explanation ( vllm-project#12034 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix various bugs in multi-modal processor ( vllm-project#12031 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Kernel] Revert the API change of Attention.forward ( vllm-project#12038 )
Signed-off-by: Chen Zhang <[email protected]>
* [Platform] Add output for Attention Backend ( vllm-project#11981 )
Signed-off-by: wangxiyuan <[email protected]>
* [Bugfix][Kernel] Give unique name to BlockSparseFlashAttention ( vllm-project#12040 )
Signed-off-by: Chen Zhang <[email protected]>
* Explain where the engine args go when using Docker ( vllm-project#12041 )
Signed-off-by: Harry Mellor <[email protected]>
* Docs lint
* [Doc]: Update the Json Example of the `Engine Arguments` document ( vllm-project#12045 )
* [Misc] Merge bitsandbytes_stacked_params_mapping and packed_modules_mapping ( vllm-project#11924 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Kernel] Support MulAndSilu ( vllm-project#11624 )
Signed-off-by: Jee Jee Li <[email protected]>
* [HPU][Bugfix] Don't use /dev/accel/accel0 for HPU autodetection in setup.py ( vllm-project#12046 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Platform] move current_memory_usage() into platform ( vllm-project#11369 )
Signed-off-by: Shanshan Shen <[email protected]>
* [V1][BugFix] Fix edge case in VLM scheduling ( vllm-project#12065 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Misc] Add multipstep chunked-prefill support for FlashInfer ( vllm-project#10467 )
* [core] Turn off GPU communication overlap for Ray executor ( vllm-project#12051 )
Signed-off-by: Rui Qiao <[email protected]>
* [core] platform agnostic executor via collective_rpc ( vllm-project#11256 )
Signed-off-by: youkaichao <[email protected]>
* [Doc] Update examples to remove SparseAutoModelForCausalLM ( vllm-project#12062 )
Signed-off-by: Kyle Sayers <[email protected]>
* [V1][Prefix Cache] Move the logic of num_computed_tokens into KVCacheManager ( vllm-project#12003 )
* Fix: cases with empty sparsity config ( vllm-project#12057 )
Signed-off-by: Rahul Tuli <[email protected]>
* Type-fix: make execute_model output type optional ( vllm-project#12020 )
* [Platform] Do not raise error if _Backend is not found ( vllm-project#12023 )
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
* [Model]: Support internlm3 ( vllm-project#12037 )
* Misc: allow to use proxy in `HTTPConnection` ( vllm-project#12042 )
Signed-off-by: Yuan Zhou <[email protected]>
* [Misc][Quark] Upstream Quark format to VLLM ( vllm-project#10765 )
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Doc]: Update `OpenAI-Compatible Server` documents ( vllm-project#12082 )
* [Bugfix] use right truncation for non-generative tasks ( vllm-project#12050 )
Signed-off-by: Joe Runde <[email protected]>
* [V1][Core] Autotune encoder cache budget ( vllm-project#11895 )
Signed-off-by: Roger Wang <[email protected]>
* [Bugfix] Fix _get_lora_device for HQQ marlin ( vllm-project#12090 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* Allow hip sources to be directly included when compiling for rocm. ( vllm-project#12087 )
* [Core] Default to using per_token quantization for fp8 when cutlass is supported. ( vllm-project#8651 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: mgoin <[email protected]>
* [Doc] Add documentation for specifying model architecture ( vllm-project#12105 )
* Various cosmetic/comment fixes ( vllm-project#12089 )
Signed-off-by: mgoin <[email protected]>
* [Bugfix] Remove hardcoded `head_size=256` for Deepseek v2 and v3 ( vllm-project#12067 )
Signed-off-by: Isotr0py <[email protected]>
* Support torchrun and SPMD-style offline inference ( vllm-project#12071 )
Signed-off-by: youkaichao <[email protected]>
* [core] LLM.collective_rpc interface and RLHF example ( vllm-project#12084 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix max image feature size for Llava-one-vision ( vllm-project#12104 )
Signed-off-by: Roger Wang <[email protected]>
* Enable user marker for vllm profiling ( #357 )
* Enable user marker for vllm profiling
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
* [misc] Add LoRA kernel micro benchmarks ( vllm-project#11579 )
* [Model] Add support for deepseek-vl2-tiny model ( vllm-project#12068 )
Signed-off-by: Isotr0py <[email protected]>
* Deepseek V3 support ( #364 )
* Changing the hard coded datatype to see if it's enough for the model to work
* Picking the upstream moe kernel version
* make the upstream fix for v3 also work for rocm v2
* Conditional fnuz dtype
* Requantizing from fn to fnuz
* Requantizing moe as well
* Actually requantizing moe weights
* Conditional requantization and assert on padding in block quant
* Format
---------
Co-authored-by: charlifu <[email protected]>
* [Bugfix] Set enforce_eager automatically for mllama ( vllm-project#12127 )
Signed-off-by: Chen Zhang <[email protected]>
* [Bugfix] Fix a path bug in disaggregated prefill example script. ( vllm-project#12121 )
Signed-off-by: Kuntai Du <[email protected]>
* [CI]add genai-perf benchmark in nightly benchmark ( vllm-project#10704 )
Signed-off-by: Kunshang Ji <[email protected]>
* [Doc] Add instructions on using Podman when SELinux is active ( vllm-project#12136 )
Signed-off-by: Yuan Tang <[email protected]>
* [Bugfix] Fix issues in CPU build Dockerfile ( vllm-project#12135 )
Signed-off-by: Yuan Tang <[email protected]>
* [BugFix] add more `is not None` check in VllmConfig.__post_init__ ( vllm-project#12138 )
Signed-off-by: Chen Zhang <[email protected]>
* [Misc] Add deepseek_vl2 chat template ( vllm-project#12143 )
Signed-off-by: Isotr0py <[email protected]>
* [ROCm][MoE] moe tuning support for rocm ( vllm-project#12049 )
Signed-off-by: Divakar Verma <[email protected]>
* [V1] Move more control of kv cache initialization from model_executor to EngineCore ( vllm-project#11960 )
Signed-off-by: Chen Zhang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
* [Misc][LoRA] Improve the readability of LoRA error messages ( vllm-project#12102 )
Signed-off-by: Jee Jee Li <[email protected]>
* [CI/Build][CPU][Bugfix] Fix CPU CI ( vllm-project#12150 )
Signed-off-by: jiang1.li <[email protected]>
* [core] allow callable in collective_rpc ( vllm-project#12151 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix score api for missing max_model_len validation ( vllm-project#12119 )
Signed-off-by: Wallas Santos <[email protected]>
* [Bugfix] Mistral tokenizer encode accept list of str ( vllm-project#12149 )
Signed-off-by: Kunshang Ji <[email protected]>
* [AMD][FP8] Using MI300 FP8 format on ROCm for block_quant ( vllm-project#12134 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* [torch.compile] disable logging when cache is disabled ( vllm-project#12043 )
Signed-off-by: youkaichao <[email protected]>
* [misc] fix cross-node TP ( vllm-project#12166 )
Signed-off-by: youkaichao <[email protected]>
* [AMD][CI/Build][Bugfix] use pytorch stale wheel ( vllm-project#12172 )
Signed-off-by: hongxyan <[email protected]>
* [core] further polish memory profiling ( vllm-project#12126 )
Signed-off-by: youkaichao <[email protected]>
* [Docs] Fix broken link in SECURITY.md ( vllm-project#12175 )
Signed-off-by: Russell Bryant <[email protected]>
* [Model] Port deepseek-vl2 processor, remove dependency ( vllm-project#12169 )
Signed-off-by: Isotr0py <[email protected]>
* [core] clean up executor class hierarchy between v1 and v0 ( vllm-project#12171 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Support register quantization method out-of-tree ( vllm-project#11969 )
* [V1] Collect env var for usage stats ( vllm-project#12115 )
* [BUGFIX] Move scores to float32 in case of running xgrammar on cpu ( vllm-project#12152 )
Signed-off-by: Michal Adamczyk <[email protected]>
* [Bugfix] Fix multi-modal processors for transformers 4.48 ( vllm-project#12187 )
* [torch.compile] store inductor compiled Python file ( vllm-project#12182 )
Signed-off-by: youkaichao <[email protected]>
* benchmark_serving support --served-model-name param ( vllm-project#12109 )
Signed-off-by: zibai <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Misc] Add BNB support to GLM4-V model ( vllm-project#12184 )
Signed-off-by: Isotr0py <[email protected]>
* [V1] Add V1 support of Qwen2-VL ( vllm-project#12128 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>
* [Model] Support for fairseq2 Llama ( vllm-project#11442 )
Signed-off-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>
* [Bugfix] Fix num_heads value for simple connector when tp enabled ( vllm-project#12074 )
Signed-off-by: Shangming Cai <[email protected]>
* [torch.compile] fix sym_tensor_indices ( vllm-project#12191 )
Signed-off-by: youkaichao <[email protected]>
* Move linting to `pre-commit` ( vllm-project#11975 )
Signed-off-by: Harry Mellor <[email protected]>
* [DOC] Fix typo in docstring and assert message ( vllm-project#12194 )
Signed-off-by: Yuan Tang <[email protected]>
* [DOC] Add missing docstring in LLMEngine.add_request() ( vllm-project#12195 )
Signed-off-by: Yuan Tang <[email protected]>
* [Bugfix] Fix incorrect types in LayerwiseProfileResults ( vllm-project#12196 )
Signed-off-by: Yuan Tang <[email protected]>
* [Model] Add Qwen2 PRM model support ( vllm-project#12202 )
Signed-off-by: Isotr0py <[email protected]>
* [Core] Interface for accessing model from `VllmRunner` ( vllm-project#10353 )
Signed-off-by: DarkLight1337 <[email protected]>
* [misc] add placeholder format.sh ( vllm-project#12206 )
Signed-off-by: youkaichao <[email protected]>
* [CI/Build] Remove dummy CI steps ( vllm-project#12208 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI/Build] Make pre-commit faster ( vllm-project#12212 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model] Upgrade Aria to transformers 4.48 ( vllm-project#12203 )
Signed-off-by: DarkLight1337 <[email protected]>
* [misc] print a message to suggest how to bypass commit hooks ( vllm-project#12217 )
Signed-off-by: youkaichao <[email protected]>
* [core][bugfix] configure env var during import vllm ( vllm-project#12209 )
Signed-off-by: youkaichao <[email protected]>
* [V1] Remove `_get_cache_block_size` ( vllm-project#12214 )
Signed-off-by: Chen Zhang <[email protected]>
* [Misc] Pass `attention` to impl backend ( vllm-project#12218 )
Signed-off-by: wangxiyuan <[email protected]>
* [Bugfix] Fix `HfExampleModels.find_hf_info` ( vllm-project#12223 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI] Pass local python version explicitly to pre-commit mypy.sh ( vllm-project#12224 )
Signed-off-by: Chen Zhang <[email protected]>
* Using ROCm6.3.1 base docker and building hipblas-common ( #366 )
* [Misc] Update CODEOWNERS ( vllm-project#12229 )
* fix: update platform detection for M-series arm based MacBook processors ( vllm-project#12227 )
Signed-off-by: isikhi <[email protected]>
* [misc] add cuda runtime version to usage data ( vllm-project#12190 )
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [bugfix] catch xgrammar unsupported array constraints ( vllm-project#12210 )
Signed-off-by: Jason Cheng <[email protected]>
* [Kernel] optimize moe_align_block_size for cuda graph and large num_experts (e.g. DeepSeek-V3) ( vllm-project#12222 )
Signed-off-by: Jinzhen Lin <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
* Add quantization and guided decoding CODEOWNERS ( vllm-project#12228 )
Signed-off-by: mgoin <[email protected]>
* [AMD][Build] Porting dockerfiles from the ROCm/vllm fork ( vllm-project#11777 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* [BugFix] Fix GGUF tp>1 when vocab_size is not divisible by 64 ( vllm-project#12230 )
Signed-off-by: NickLucche <[email protected]>
* [ci/build] disable failed and flaky tests ( vllm-project#12240 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Rename `MultiModalInputsV2 -> MultiModalInputs` ( vllm-project#12244 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc]Add BNB quantization for PaliGemmaForConditionalGeneration ( vllm-project#12237 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Misc] Remove redundant TypeVar from base model ( vllm-project#12248 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix mm_limits access for merged multi-modal processor ( vllm-project#12252 )
Signed-off-by: DarkLight1337 <[email protected]>
* [torch.compile] transparent compilation with more logging ( vllm-project#12246 )
Signed-off-by: youkaichao <[email protected]>
* [V1][Bugfix] Fix data item ordering in mixed-modality inference ( vllm-project#12259 )
Signed-off-by: Roger Wang <[email protected]>
* Remove pytorch comments for outlines + compressed-tensors ( vllm-project#12260 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Platform] improve platforms getattr ( vllm-project#12264 )
Signed-off-by: Mengqing Cao <[email protected]>
* [ci/build] update nightly torch for gh200 test ( vllm-project#12270 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] fix race condition that leads to wrong order of token returned ( vllm-project#10802 )
Signed-off-by: Jannis Schönleber <[email protected]>
* [Kernel] fix moe_align_block_size error condition ( vllm-project#12239 )
Signed-off-by: Jinzhen Lin <[email protected]>
* [v1][stats][1/n] Add RequestStatsUpdate and RequestStats types ( vllm-project#10907 )
Signed-off-by: rickyx <[email protected]>
* [Bugfix] Multi-sequence broken ( vllm-project#11898 )
Signed-off-by: Andy Lo <[email protected]>
* [Misc] Remove experimental dep from tracing.py ( vllm-project#12007 )
Signed-off-by: Adrian Cole <[email protected]>
* [Misc] Set default backend to SDPA for get_vit_attn_backend ( vllm-project#12235 )
Signed-off-by: wangxiyuan <[email protected]>
* [Core] Free CPU pinned memory on environment cleanup ( vllm-project#10477 )
* Update pre-commit.yml ( #374 )
* Update pre-commit.yml
* Reapplying missing format
* New codespell exclude location
---------
Co-authored-by: Kevin H. Luu <[email protected]>
* [bugfix] moe tuning. rm is_navi() ( vllm-project#12273 )
Signed-off-by: Divakar Verma <[email protected]>
* [BUGFIX] When skip_tokenize_init and multistep are set, execution crashes ( vllm-project#12277 )
Signed-off-by: maleksan85 <[email protected]>
Co-authored-by: maleksan85 <[email protected]>
* [Documentation][AMD] Add information about prebuilt ROCm vLLM docker for perf validation purpose ( vllm-project#12281 )
Signed-off-by: Hongxia Yang <[email protected]>
* [VLM] Simplify post-processing of replacement info ( vllm-project#12269 )
Signed-off-by: DarkLight1337 <[email protected]>
* [ci/lint] Add back default arg for pre-commit ( vllm-project#12279 )
Signed-off-by: kevin <[email protected]>
* [CI] add docker volume prune to neuron CI ( vllm-project#12291 )
Signed-off-by: Liangfu Chen <[email protected]>
* [Ci/Build] Fix mypy errors on main ( vllm-project#12296 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Benchmark] More accurate TPOT calc in `benchmark_serving.py` ( vllm-project#12288 )
Signed-off-by: Nick Hill <[email protected]>
* [core] separate builder init and builder prepare for each batch ( vllm-project#12253 )
Signed-off-by: youkaichao <[email protected]>
* [Build] update requirements of no-device ( vllm-project#12299 )
Signed-off-by: Mengqing Cao <[email protected]>
* [Core] Support fully transparent sleep mode ( vllm-project#11743 )
Signed-off-by: youkaichao <[email protected]>
* [VLM] Avoid unnecessary tokenization ( vllm-project#12310 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model][Bugfix]: correct Aria model output ( vllm-project#12309 )
Signed-off-by: xffxff <[email protected]>
* [Bugfix][VLM] Fix mixed-modality inference backward compatibility for V0 ( vllm-project#12313 )
Signed-off-by: Roger Wang <[email protected]>
* [Doc] Add docs for prompt replacement ( vllm-project#12318 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] Fix the error in the tip for the --lora-modules parameter ( vllm-project#12319 )
Signed-off-by: wangerxiao <[email protected]>
* [Misc] Improve the readability of BNB error messages ( vllm-project#12320 )
Signed-off-by: Jee Jee Li <[email protected]>
* Skip tokenize/detokenize when it is disabled by arg --skip-tokenizer-init ( #367 )
* switching detokenize flag to be False
* detokenize = False for benchmarks
* restoring default in main vllm code for detokenize
* removing extra spaces
* moving detokenize to flag
* adding support for token ids
---------
Co-authored-by: maleksan85 <[email protected]>
* [Bugfix] Fix HPU multiprocessing executor ( vllm-project#12167 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Core] Support `reset_prefix_cache` ( vllm-project#12284 )
* [Frontend][V1] Online serving performance improvements ( vllm-project#12287 )
* [AMD][Quantization] Add TritonScaledMMLinearKernel since int8 is broken for AMD ( vllm-project#12282 )
Signed-off-by: Randall Smith <[email protected]>
* FP8 FA fixes ( #381 )
* FP8 FA fixes
Summary:
Add missing clamp and fix reciprocal scale computation.
* linter
* Returning the use of the proper stream in allreduce ( #382 )
* [Bugfix] Fixing AMD LoRA CI test. ( vllm-project#12329 )
Signed-off-by: Alexei V. Ivanov <[email protected]>
* [Docs] Update FP8 KV Cache documentation ( vllm-project#12238 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Docs] Document vulnerability disclosure process ( vllm-project#12326 )
Signed-off-by: Russell Bryant <[email protected]>
* [V1] Add `uncache_blocks` ( vllm-project#12333 )
* [doc] explain common errors around torch.compile ( vllm-project#12340 )
Signed-off-by: youkaichao <[email protected]>
* [Hardware][Gaudi][BugFix] Fix dataclass error due to triton package update ( vllm-project#12338 )
Signed-off-by: zhenwei <[email protected]>
* [Bugfix] Fix k_proj's bias for whisper self attention ( vllm-project#12342 )
Signed-off-by: Isotr0py <[email protected]>
* [Kernel] Flash Attention 3 Support ( vllm-project#12093 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Doc] Troubleshooting errors during model inspection ( vllm-project#12351 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] Simplify M-RoPE ( vllm-project#12352 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: imkero <[email protected]>
* [Bugfix] Fix broken internvl2 inference with v1 ( vllm-project#12360 )
Signed-off-by: Isotr0py <[email protected]>
* [core] add wake_up doc and some sanity check ( vllm-project#12361 )
Signed-off-by: youkaichao <[email protected]>
* [torch.compile] decouple compile sizes and cudagraph sizes ( vllm-project#12243 )
Signed-off-by: youkaichao <[email protected]>
* [FP8][Kernel] Dynamic kv cache scaling factors computation ( vllm-project#11906 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>
* [TPU] Update TPU CI to use torchxla nightly on 20250122 ( vllm-project#12334 )
Signed-off-by: Siyuan Liu <[email protected]>
* [Docs] Document Phi-4 support ( vllm-project#12362 )
Signed-off-by: Isotr0py <[email protected]>
* [BugFix] Fix parameter names and `process_after_weight_loading` for W4A16 MoE Group Act Order ( vllm-project#11528 )
Signed-off-by: ElizaWszola <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Misc] Fix OpenAI API Compatibility Issues in Benchmark Script ( vllm-project#12357 )
Signed-off-by: Junichi Sato <[email protected]>
* [Docs] Add meetup slides ( vllm-project#12345 )
Signed-off-by: Woosuk Kwon <[email protected]>
* Using pytorch commit past the point when rowwise PR ( pytorch/pytorch#144432 ) was merged ( #384 )
* Integrated ater: kvcache pa gemm rmsnorm
* fix pa
* fix
* replace topk softmax
* [Docs] Update spec decode + structured output in compat matrix ( vllm-project#12373 )
Signed-off-by: Russell Bryant <[email protected]>
* replace fp moe kernel with aiter kernel
* [V1][Frontend] Coalesce bunched `RequestOutput`s ( vllm-project#12298 )
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
* Set weights_only=True when using torch.load() ( vllm-project#12366 )
Signed-off-by: Russell Bryant <[email protected]>
* [Bugfix] Path join when building local path for S3 clone ( vllm-project#12353 )
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
* change ater to aiter
* Update compressed-tensors version ( vllm-project#12367 )
* [V1] Increase default batch size for H100/H200 ( vllm-project#12369 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [perf] fix perf regression from vllm-project#12253 ( vllm-project#12380 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Use VisionArena Dataset for VLM Benchmarking ( vllm-project#12389 )
Signed-off-by: Roger Wang <[email protected]>
* [ci/build] fix wheel size check ( vllm-project#12396 )
Signed-off-by: youkaichao <[email protected]>
* [Hardware][Gaudi][Doc] Add missing step in setup instructions ( vllm-project#12382 )
* [ci/build] sync default value for wheel size ( vllm-project#12398 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Enable proxy support in benchmark script ( vllm-project#12356 )
Signed-off-by: Junichi Sato <[email protected]>
* [Bugfix][Kernel] Fix CUDA 11.8 being broken by FA3 build ( vllm-project#12375 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Applying scales rename to fp8 config
* Applying scales rename to fp8 config ( #387 )
* Update Dockerfile.rocm
* [Misc] Remove deprecated code ( vllm-project#12383 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix][Kernel] FA3 Fix - RuntimeError: This flash attention build only supports pack_gqa (for build size reasons). ( vllm-project#12405 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Using aiter moe kernel
* Dev-docker Documentation Updates ( #378 )
* Dev-docker Documentation Updates
Minor updates to several sections, with links to other documents where appropriate.
* Fix formatting of GEMM filename
* README cleanup
- Reorder some sections of the README to make them easier to follow
- Improve formatting of bash commands
- Prefer use of huggingface model names instead of hard-coded directories
- Clean up wording
* Expanded sample commands for Latency and Throughput
* Fix markdown links
* Fix pre-commit errors
* Updates from review
Initial updates to incorporate feedback from a review session held with @t-parry
* Update script args to match current recommendations
* Remove recommended max-num-seqs values for now
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
* [Bugfix][Kernel] Fix moe align block issue for mixtral ( vllm-project#12413 )
* [Bugfix] Fix BLIP-2 processing ( vllm-project#12412 )
Signed-off-by: DarkLight1337 <[email protected]>
* [ROCm][MoE] MI300 tuned configs Mixtral-8x(7B,22B) | fp16, fp8 ( vllm-project#12408 )
Signed-off-by: Divakar Verma <[email protected]>
* [Misc] Add FA2 support to ViT MHA layer ( vllm-project#12355 )
Signed-off-by: Isotr0py <[email protected]>
* [TPU][CI] Update torchxla version in requirement-tpu.txt ( vllm-project#12422 )
Signed-off-by: Siyuan Liu <[email protected]>
* [Misc][Bugfix] FA3 support to ViT MHA layer ( vllm-project#12435 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [V1][Perf] Reduce scheduling overhead in model runner after cuda sync ( vllm-project#12094 )
Signed-off-by: Keyun Tong <[email protected]>
* [V1][Bugfix] Fix assertion when mm hashing is turned off ( vllm-project#12439 )
Signed-off-by: Roger Wang <[email protected]>
* fix pa copy
* pa update
* [Misc] Revert FA on ViT vllm-project#12355 and vllm-project#12435 ( vllm-project#12445 )
* [Frontend] generation_config.json for maximum tokens( vllm-project#12242 )
Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
* [Bugfix] Disable w16a16 2of4 sparse CompressedTensors24 ( vllm-project#12417 )
Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: mgoin <[email protected]>
* add fp16 pa support for aiter
* [Bugfix/CI] Fix broken kernels/test_mha.py ( vllm-project#12450 )
* [Bugfix][Kernel] Fix perf regression caused by PR vllm-project#12405 ( vllm-project#12434 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Build/CI] Fix libcuda.so linkage ( vllm-project#12424 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [Frontend] Rerank API (Jina- and Cohere-compatible API) ( vllm-project#12376 )
Signed-off-by: Kyle Mistele <[email protected]>
* [DOC] Add link to vLLM blog ( vllm-project#12460 )
Signed-off-by: Yuan Tang <[email protected]>
* [V1] Avoid list creation in input preparation ( vllm-project#12457 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Frontend] Support scores endpoint in run_batch ( vllm-project#12430 )
Signed-off-by: Pooya Davoodi <[email protected]>
* [Bugfix] Fix Granite 3.0 MoE model loading ( vllm-project#12446 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix missing seq_start_loc in xformers prefill metadata ( vllm-project#12464 )
Signed-off-by: Isotr0py <[email protected]>
* [V1][Minor] Minor optimizations for update_from_output ( vllm-project#12454 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Bugfix] Fix gpt2 GGUF inference ( vllm-project#12467 )
Signed-off-by: Isotr0py <[email protected]>
* aiter build instructions
* [Build] Only build 9.0a for scaled_mm and sparse kernels ( vllm-project#12339 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Copy to the right path
* [V1][Metrics] Add initial Prometheus logger ( vllm-project#12416 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [V1][CI/Test] Do basic test for top-p & top-k sampling ( vllm-project#12469 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [FlashInfer] Upgrade to 0.2.0 ( vllm-project#11194 )
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: youkaichao <[email protected]>
* Support FP8 FA from Quark format ( #388 )
* Support FP8 FA from Quark format
* Support FP8 FA from Quark format
* nit: update comment
* Direct call on ROCm
* 20250127 docs update ( #392 )
* updating code blocks
* typo
* updated manifest
* Including feedback
* whitespace
* Deepseek instructions
* hyperlink fix
* hyperlink fix
* updating what is new
* cpx update
* typo
* whitespace
* whitespace
* Add env var toggles to disable AITER MoE or PA (both by default on)
* Update accuracy benchmark for batch size > 1
* Add a few more AITER toggles for norm and linear layers
* Faster Custom Paged Attention kernels ( #372 )
* integrate new cpa kernel, update tests and benchmark
* added comments to mfma4 kernel
* further comments for mfma16 kernel
* clang-format
* Lint
* add flag for logits rtz conversion and disable by default
* lint
* [Bugfix]: Fix paged attention unit tests of #372 ( #389 )
* [Bugfix]: fix paged attention tests based on the updated kernels in `csrc/attention/paged_attention_v1.cu`,`csrc/attention/paged_attention_v2.cu` and `csrc/rocm/attention.cu`.
* improve code documentation.
* lint
---------
Co-authored-by: vllmellm <[email protected]>
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: TJian <[email protected]>
Co-authored-by: vllmellm <[email protected]>
* Using a more precise profiling on ROCm to properly account for weights padding ( #394 )
* Public aiter repo
* Fail if aiter build failed silently
* Aiter can only be built on MI300x
* Typo fix
* Aiter PA off by default
* Changes to support updated aiter FP8 PA
* Support FP8 and INT8 KV cache according to ROCm/aiter#90
* add moe weight shuffle for dynamic quant and unquantized path
Signed-off-by: charlifu <[email protected]>
* Use FP16-native PA after support in ROCm/aiter#97
* Fix: Use FP8 per-token quantize if KV cache dtype is FP8
* revert rocm_flash_attn.py line 883
* Don't enable by default to use an RC for main vllm-dev docker
* use ck moe for bf16 and fp16 fused_moe
* Merge remote-tracking branch 'origin/aiter_intergration_final' into merge-aiter-llama-fp8
Signed-off-by: vllmellm <[email protected]>
* [Bugfix] include moe shuffle env variable
Signed-off-by: vllmellm <[email protected]>
---------
Signed-off-by: tjtanaa <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: yisheng <[email protected]>
Signed-off-by: Abatom <[email protected]>
Signed-off-by: Liangfu Chen <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Yuan Zhou <[email protected]>
Signed-off-by: Sourashis Roy <[email protected]>
Signed-off-by: Nishidha Panpaliya <[email protected]>
Signed-off-by: Ilya Lavrenov <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: Wallas Santos <[email protected]>
Signed-off-by: jiang1.li <[email protected]>
Signed-off-by: yan ma <[email protected]>
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: Maxime Fournioux <[email protected]>
Signed-off-by: Ye Qi <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: Kunshang Ji <[email protected]>
Signed-off-by: Kuntai Du <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Ren MinMin <[email protected]>
Signed-off-by: Travis Johnson <[email protected]>
Signed-off-by: Fred Reiss <[email protected]>
Signed-off-by: Sungjae Lee <[email protected]>
Signed-off-by: shaochangxu.scx <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: Rafael Vasquez <[email protected]>
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Yida Wu <[email protected]>
Signed-off-by: Chenguang Li <[email protected]>
Signed-off-by: Alex-Brooks <[email protected]>
Signed-off-by: Shanshan Shen <[email protected]>
Signed-off-by: elijah <[email protected]>
Signed-off-by: Konrad Zawora <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: Kyle Sayers <[email protected]>
Signed-off-by: Rahul Tuli <[email protected]>
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Divakar Verma <[email protected]>
Signed-off-by: Gregory Shtrasberg <[email protected]>
Signed-off-by: hongxyan <[email protected]>
Signed-off-by: Michal Adamczyk <[email protected]>
Signed-off-by: zibai <[email protected]>
Signed-off-by: Martin Gleize <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: isikhi <[email protected]>
Signed-off-by: Jason Cheng <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]>
Signed-off-by: Thomas Parnell <[email protected]>
Signed-off-by: Jannis Schönleber <[email protected]>
Signed-off-by: rickyx <[email protected]>
Signed-off-by: Andy Lo <[email protected]>
Signed-off-by: Adrian Cole <[email protected]>
Signed-off-by: maleksan85 <[email protected]>
Signed-off-by: Hongxia Yang <[email protected]>
Signed-off-by: kevin <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: xffxff <[email protected]>
Signed-off-by: wangerxiao <[email protected]>
Signed-off-by: Alexei V. Ivanov <[email protected]>
Signed-off-by: zhenwei <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Siyuan Liu <[email protected]>
Signed-off-by: ElizaWszola <[email protected]>
Signed-off-by: Junichi Sato <[email protected]>
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
Signed-off-by: Keyun Tong <[email protected]>
Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Kyle Mistele <[email protected]>
Signed-off-by: Pooya Davoodi <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: charlifu <[email protected]>
Signed-off-by: vllmellm <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: maang-h <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: YiSheng5 <[email protected]>
Co-authored-by: Zhonghua Deng <[email protected]>
Co-authored-by: Liangfu Chen <[email protected]>
Co-authored-by: XiaobingZhang <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Yuan <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: jiangjiadi <[email protected]>
Co-authored-by: jiadi.jjd <[email protected]>
Co-authored-by: sroy745 <[email protected]>
Co-authored-by: Jie Fu (傅杰) <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: WangErXiao <[email protected]>
Co-authored-by: Nishidha <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Wallas Henrique <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: Yan Ma <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: rasmith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: Maxime Fournioux <[email protected]>
Co-authored-by: Guspan Tanadi <[email protected]>
Co-authored-by: Ye (Charlotte) Qi <[email protected]>
Co-authored-by: yeq <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
Co-authored-by: Charles Frye <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: cennn <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: minmin <[email protected]>
Co-authored-by: Ren MinMin <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Fred Reiss <[email protected]>
Co-authored-by: Sungjae Lee <[email protected]>
Co-authored-by: shaochangxu <[email protected]>
Co-authored-by: shaochangxu.scx <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: sixgod <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Rafael Vasquez <[email protected]>
Co-authored-by: Akshat Tripathi <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Avshalom Manevich <[email protected]>
Co-authored-by: Yangcheng Li <[email protected]>
Co-authored-by: Siyuan Li <[email protected]>
Co-authored-by: Concurrensee <[email protected]>
Co-authored-by: Chenguang Li <[email protected]>
Co-authored-by: Alex Brooks <[email protected]>
Co-authored-by: Shanshan Shen <[email protected]>
Co-authored-by: elijah <[email protected]>
Co-authored-by: Konrad Zawora <[email protected]>
Co-authored-by: Elfie Guo <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Kyle Sayers <[email protected]>
Co-authored-by: Rahul Tuli <[email protected]>
Co-authored-by: Keyun Tong <[email protected]>
Co-authored-by: RunningLeon <[email protected]>
Co-authored-by: kewang-xlnx <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: tvirolai-amd <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Zhaoyi Li <[email protected]>
Co-authored-by: charlifu <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Hongxia Yang <[email protected]>
Co-authored-by: yancong <[email protected]>
Co-authored-by: Michal Adamczyk <[email protected]>
Co-authored-by: gujing <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: Işık <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Cheng Kuan Yong Jason <[email protected]>
Co-authored-by: Jinzhen Lin <[email protected]>
Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: Jannis Schönleber <[email protected]>
Co-authored-by: Ricky Xu <[email protected]>
Co-authored-by: Andy Lo <[email protected]>
Co-authored-by: Adrian Cole <[email protected]>
Co-authored-by: Jani Monoses <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: maleksan85 <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: zhou fan <[email protected]>
Co-authored-by: ilia-cher <[email protected]>
Co-authored-by: Alexei-V-Ivanov-AMD <[email protected]>
Co-authored-by: liuzhenwei <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>
Co-authored-by: Siyuan Liu <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Junichi Sato <[email protected]>
Co-authored-by: amd-ruitang3 <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: omer-dayan <[email protected]>
Co-authored-by: Mohit Deopujari <[email protected]>
Co-authored-by: Jeremy Arnold <[email protected]>
Co-authored-by: chenjun <[email protected]>
Co-authored-by: ValarLip <[email protected]>
Co-authored-by: Matthew Hendrey <[email protected]>
Co-authored-by: Kyle Mistele <[email protected]>
Co-authored-by: Pooya Davoodi <[email protected]>
Co-authored-by: Mark McLoughlin <[email protected]>
Co-authored-by: Bowen Wang <[email protected]>
Co-authored-by: Bowen Bao <[email protected]>
Co-authored-by: arakowsk-amd <[email protected]>
Co-authored-by: Matthew Wong <[email protected]>
Co-authored-by: sanyalington <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: vllmellm <[email protected]>
Co-authored-by: charlifu <[email protected]>
mzusman pushed a commit to mzusman/vllm that referenced this pull request Mar 12, 2025: [V1][Perf] Reduce scheduling overhead in model runner after cuda sync ( vllm-project#12094 ), commit bdf42bf
Signed-off-by: Keyun Tong <[email protected]>
| 2025-09-07 17:46:54 |
| 6dd94dbe94c1820a1e224cba65efcf0befa97995 | https://github.com/vllm-project/vllm/pull/12380 | false | true | true | true |
| PERF: throughput, Throughput, Throughput | SERVING: serving, serving, serving | TEST: test, test, test |
Member youkaichao commented Jan 24, 2025 (edited by github-actions bot):
When I made PR #12253, I thought self.decode_only = True indicated whether this model is a decoder-only model, and therefore that it is static. However, it turns out this field means whether the current batch is a decode-only batch (so that we can use cudagraph). The bug makes every batch reuse the previous batch's self.decode_only value, which is set to False whenever that batch contains prefill. Moving this line into the prepare function (which is executed for every batch) solves the perf regression (see the sketch after the latency numbers below).
test command: python benchmarks/benchmark_latency.py --model meta-llama/Meta-Llama-3-8B --load-format dummy
main branch:
Avg latency: 1.1250504679279403 seconds
10% percentile latency: 1.1177026848774403 seconds
25% percentile latency: 1.1233553139027208 seconds
50% percentile latency: 1.1258818825008348 seconds
75% percentile latency: 1.127114001486916 seconds
90% percentile latency: 1.1292839918518438 seconds
99% percentile latency: 1.1434868656494654 seconds
after this PR:
Avg latency: 1.0009459006755301 seconds
10% percentile latency: 1.0002478279871867 seconds
25% percentile latency: 1.0005546582397074 seconds
50% percentile latency: 1.001000543939881 seconds
75% percentile latency: 1.0012907102354802 seconds
90% percentile latency: 1.00162893619854 seconds
99% percentile latency: 1.0022530709696003 seconds
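The numbers above amount to roughly an 11% reduction in average latency (from about 1.125 s on main to about 1.001 s with this PR). The pattern behind the regression can be illustrated with a minimal sketch; the class and field names below (BatchBuilder, Request, num_prompt_tokens_remaining) are hypothetical stand-ins rather than vLLM's actual builder code, and only the self.decode_only flag and the per-batch prepare() method are taken from the comment above:

```python
# Minimal sketch of the regression pattern (hypothetical names, not vLLM's real builder).
from dataclasses import dataclass


@dataclass
class Request:
    num_prompt_tokens_remaining: int  # > 0 means the request still needs prefill


class BatchBuilder:
    def __init__(self) -> None:
        # Buggy placement: setting the flag only here lets the value computed for
        # the *previous* batch leak into the next one, so one prefill batch leaves
        # decode_only=False and keeps later decode batches off the cudagraph path.
        self.decode_only = True

    def prepare(self, batch: list[Request]) -> bool:
        # Fixed placement: reset the flag for every batch, then clear it if any
        # request in the batch still has prompt tokens left to prefill.
        self.decode_only = True
        for request in batch:
            if request.num_prompt_tokens_remaining > 0:
                self.decode_only = False
        return self.decode_only


if __name__ == "__main__":
    builder = BatchBuilder()
    print(builder.prepare([Request(num_prompt_tokens_remaining=8)]))  # False: prefill batch
    print(builder.prepare([Request(num_prompt_tokens_remaining=0)]))  # True: decode-only batch
```

With the reset inside prepare(), the second call correctly reports a decode-only batch even though the previous batch contained prefill, which is the behavior the fix restores.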
fix perf … b900f08 Signed-off-by: youkaichao <[email protected]>
github-actions bot commented Jan 24, 2025: 👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these: add the ready label to the PR, or enable auto-merge. 🚀
add comments … 9ace57c Signed-off-by: youkaichao <[email protected]>
comaniac approved these changes Jan 24, 2025
comaniac added the ready label (ONLY add when PR is ready to merge/full CI is needed) Jan 24, 2025
Collaborator yeqcharlotte commented Jan 24, 2025: @youkaichao Thanks for putting up the fix quickly! Confirmed that e2e throughput and latency are back to normal after this PR. 👍 2 (youkaichao and houseroad reacted with thumbs up)
youkaichao merged commit 6dd94db into vllm-project:main Jan 24, 2025; 12 of 18 checks passed
youkaichao deleted the fix_perf branch January 24, 2025 03:34
This was referenced Jan 24, 2025: Revert "[core] separate builder init and builder prepare for each batch" #12377 (Closed); Release v0.7.0 #12365 (Closed)
tjtanaa pushed a commit to EmbeddedLLM/vllm that referenced this pull request Jan 28, 2025: [perf] fix perf regression from vllm-project#12253 ( vllm-project#12380 ), commit 404466b
Signed-off-by: youkaichao <[email protected]>
rasmith pushed a commit to rasmith/vllm that referenced this pull request Jan 30, 2025: [perf] fix perf regression from vllm-project#12253 ( vllm-project#12380 ), commit 924ae96
Signed-off-by: youkaichao <[email protected]>
Isotr0py pushed a commit to Isotr0py/vllm that referenced this pull request Feb 2, 2025: [perf] fix perf regression from vllm-project#12253 ( vllm-project#12380 ), commit 1af1584
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
hongxiayang added a commit to ROCm/vllm that referenced this pull request Feb 3, 2025: [MFM-2025-02-03] Merge Main to llama fp8; With Faster ROCm Paged Attention ( #399 ), commit 479b843
* [V1] Avoid sending text prompt to core engine ( vllm-project#11963 )
Signed-off-by: Roger Wang <[email protected]>
* [CI/Build] Add markdown linter ( vllm-project#11857 )
Signed-off-by: Rafael Vasquez <[email protected]>
* [Model] Initialize support for Deepseek-VL2 models ( vllm-project#11578 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Hardware][CPU] Multi-LoRA implementation for the CPU backend ( vllm-project#11100 )
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [Hardware][TPU] workaround fix for MoE on TPU ( vllm-project#11764 )
* [V1][Core][1/n] Logging and Metrics ( vllm-project#11962 )
Signed-off-by: [email protected] <[email protected]>
* [Model] Support GGUF models newly added in `transformers` 4.46.0 ( vllm-project#9685 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [V1] [2/n] Logging and Metrics - `OutputProcessor` Abstraction ( vllm-project#11973 )
Signed-off-by: [email protected] <[email protected]>
* [MISC] fix typo in kv transfer send recv test ( vllm-project#11983 )
* [Bug] Fix usage of `.transpose()` and `.view()` consecutively. ( vllm-project#11979 )
* [CI][Spec Decode] fix: broken test for EAGLE model ( vllm-project#11972 )
Signed-off-by: Sungjae Lee <[email protected]>
* [Misc] Fix Deepseek V2 fp8 kv-scale remapping ( vllm-project#11947 )
Signed-off-by: Yida Wu <[email protected]>
* [Misc]Minor Changes about Worker ( vllm-project#11555 )
Signed-off-by: Chenguang Li <[email protected]>
* [platform] add ray_device_key ( vllm-project#11948 )
Signed-off-by: youkaichao <[email protected]>
* Fix Max Token ID for Qwen-VL-Chat ( vllm-project#11980 )
Signed-off-by: Alex-Brooks <[email protected]>
* [Kernel] unified_attention for Attention.forward ( vllm-project#11967 )
Signed-off-by: Chen Zhang <[email protected]>
* [Doc][V1] Update model implementation guide for V1 support ( vllm-project#11998 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Doc] Organise installation documentation into categories and tabs ( vllm-project#11935 )
Signed-off-by: Harry Mellor <[email protected]>
* [platform] add device_control env var ( vllm-project#12009 )
Signed-off-by: youkaichao <[email protected]>
* [Platform] Move get_punica_wrapper() function to Platform ( vllm-project#11516 )
Signed-off-by: Shanshan Shen <[email protected]>
* bugfix: Fix signature mismatch in benchmark's `get_tokenizer` function ( vllm-project#11982 )
Signed-off-by: elijah <[email protected]>
* [Doc] Fix build from source and installation link in README.md ( vllm-project#12013 )
Signed-off-by: Yikun <[email protected]>
* Using list
* [Bugfix] Fix deepseekv3 gate bias error ( vllm-project#12002 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]>
* Revert "[misc] improve memory profiling ( vllm-project#11809 )"
This reverts commit 889e662 .
* Multi-lingual P3L ( #356 )
* Commiting the *multilingual* P3L test.
* Created a *multi-lingual* P3L test.
* Making ruff happy.
* .
* Added a reference to the language-scripture Confluence table.
* Typo fixing.
* Harmonizing naming.
* Fixing comments in the header.
---------
Co-authored-by: Alexei V. Ivanov <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
* Trying to make scales work with compileable attention
* [Docs] Add Sky Computing Lab to project intro ( vllm-project#12019 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [HPU][Bugfix] set_forward_context and CI test execution ( vllm-project#12014 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Doc] Update Quantization Hardware Support Documentation ( vllm-project#12025 )
Signed-off-by: tjtanaa <[email protected]>
Co-authored-by: tjtanaa <[email protected]>
* [HPU][misc] add comments for explanation ( vllm-project#12034 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix various bugs in multi-modal processor ( vllm-project#12031 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Kernel] Revert the API change of Attention.forward ( vllm-project#12038 )
Signed-off-by: Chen Zhang <[email protected]>
* [Platform] Add output for Attention Backend ( vllm-project#11981 )
Signed-off-by: wangxiyuan <[email protected]>
* [Bugfix][Kernel] Give unique name to BlockSparseFlashAttention ( vllm-project#12040 )
Signed-off-by: Chen Zhang <[email protected]>
* Explain where the engine args go when using Docker ( vllm-project#12041 )
Signed-off-by: Harry Mellor <[email protected]>
* Docs lint
* [Doc]: Update the Json Example of the `Engine Arguments` document ( vllm-project#12045 )
* [Misc] Merge bitsandbytes_stacked_params_mapping and packed_modules_mapping ( vllm-project#11924 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Kernel] Support MulAndSilu ( vllm-project#11624 )
Signed-off-by: Jee Jee Li <[email protected]>
* [HPU][Bugfix] Don't use /dev/accel/accel0 for HPU autodetection in setup.py ( vllm-project#12046 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Platform] move current_memory_usage() into platform ( vllm-project#11369 )
Signed-off-by: Shanshan Shen <[email protected]>
* [V1][BugFix] Fix edge case in VLM scheduling ( vllm-project#12065 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Misc] Add multistep chunked-prefill support for FlashInfer ( vllm-project#10467 )
* [core] Turn off GPU communication overlap for Ray executor ( vllm-project#12051 )
Signed-off-by: Rui Qiao <[email protected]>
* [core] platform agnostic executor via collective_rpc ( vllm-project#11256 )
Signed-off-by: youkaichao <[email protected]>
* [Doc] Update examples to remove SparseAutoModelForCausalLM ( vllm-project#12062 )
Signed-off-by: Kyle Sayers <[email protected]>
* [V1][Prefix Cache] Move the logic of num_computed_tokens into KVCacheManager ( vllm-project#12003 )
* Fix: cases with empty sparsity config ( vllm-project#12057 )
Signed-off-by: Rahul Tuli <[email protected]>
* Type-fix: make execute_model output type optional ( vllm-project#12020 )
* [Platform] Do not raise error if _Backend is not found ( vllm-project#12023 )
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
* [Model]: Support internlm3 ( vllm-project#12037 )
* Misc: allow to use proxy in `HTTPConnection` ( vllm-project#12042 )
Signed-off-by: Yuan Zhou <[email protected]>
* [Misc][Quark] Upstream Quark format to VLLM ( vllm-project#10765 )
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Doc]: Update `OpenAI-Compatible Server` documents ( vllm-project#12082 )
* [Bugfix] use right truncation for non-generative tasks ( vllm-project#12050 )
Signed-off-by: Joe Runde <[email protected]>
* [V1][Core] Autotune encoder cache budget ( vllm-project#11895 )
Signed-off-by: Roger Wang <[email protected]>
* [Bugfix] Fix _get_lora_device for HQQ marlin ( vllm-project#12090 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* Allow hip sources to be directly included when compiling for rocm. ( vllm-project#12087 )
* [Core] Default to using per_token quantization for fp8 when cutlass is supported. ( vllm-project#8651 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: mgoin <[email protected]>
* [Doc] Add documentation for specifying model architecture ( vllm-project#12105 )
* Various cosmetic/comment fixes ( vllm-project#12089 )
Signed-off-by: mgoin <[email protected]>
* [Bugfix] Remove hardcoded `head_size=256` for Deepseek v2 and v3 ( vllm-project#12067 )
Signed-off-by: Isotr0py <[email protected]>
* Support torchrun and SPMD-style offline inference ( vllm-project#12071 )
Signed-off-by: youkaichao <[email protected]>
* [core] LLM.collective_rpc interface and RLHF example ( vllm-project#12084 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix max image feature size for Llava-one-vision ( vllm-project#12104 )
Signed-off-by: Roger Wang <[email protected]>
* Enable user marker for vllm profiling ( #357 )
* Enable user marker for vllm profiling
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
* [misc] Add LoRA kernel micro benchmarks ( vllm-project#11579 )
* [Model] Add support for deepseek-vl2-tiny model ( vllm-project#12068 )
Signed-off-by: Isotr0py <[email protected]>
* Deepseek V3 support ( #364 )
* Changing the hard coded datatype to see if it's enough for the model to work
* Picking the upstream moe kernel version
* make upstream fix for v3 also work for rocm v2
* Conditional fnuz dtype
* Requantizing from fn to fnuz
* Requantizing moe as well
* Actually requantizing moe weights
* Conditional requantization and assert on padding in block quant
* Format
---------
Co-authored-by: charlifu <[email protected]>
* [Bugfix] Set enforce_eager automatically for mllama ( vllm-project#12127 )
Signed-off-by: Chen Zhang <[email protected]>
* [Bugfix] Fix a path bug in disaggregated prefill example script. ( vllm-project#12121 )
Signed-off-by: Kuntai Du <[email protected]>
* [CI]add genai-perf benchmark in nightly benchmark ( vllm-project#10704 )
Signed-off-by: Kunshang Ji <[email protected]>
* [Doc] Add instructions on using Podman when SELinux is active ( vllm-project#12136 )
Signed-off-by: Yuan Tang <[email protected]>
* [Bugfix] Fix issues in CPU build Dockerfile ( vllm-project#12135 )
Signed-off-by: Yuan Tang <[email protected]>
* [BugFix] add more `is not None` check in VllmConfig.__post_init__ ( vllm-project#12138 )
Signed-off-by: Chen Zhang <[email protected]>
* [Misc] Add deepseek_vl2 chat template ( vllm-project#12143 )
Signed-off-by: Isotr0py <[email protected]>
* [ROCm][MoE] moe tuning support for rocm ( vllm-project#12049 )
Signed-off-by: Divakar Verma <[email protected]>
* [V1] Move more control of kv cache initialization from model_executor to EngineCore ( vllm-project#11960 )
Signed-off-by: Chen Zhang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
* [Misc][LoRA] Improve the readability of LoRA error messages ( vllm-project#12102 )
Signed-off-by: Jee Jee Li <[email protected]>
* [CI/Build][CPU][Bugfix] Fix CPU CI ( vllm-project#12150 )
Signed-off-by: jiang1.li <[email protected]>
* [core] allow callable in collective_rpc ( vllm-project#12151 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix score api for missing max_model_len validation ( vllm-project#12119 )
Signed-off-by: Wallas Santos <[email protected]>
* [Bugfix] Mistral tokenizer encode accept list of str ( vllm-project#12149 )
Signed-off-by: Kunshang Ji <[email protected]>
* [AMD][FP8] Using MI300 FP8 format on ROCm for block_quant ( vllm-project#12134 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* [torch.compile] disable logging when cache is disabled ( vllm-project#12043 )
Signed-off-by: youkaichao <[email protected]>
* [misc] fix cross-node TP ( vllm-project#12166 )
Signed-off-by: youkaichao <[email protected]>
* [AMD][CI/Build][Bugfix] use pytorch stale wheel ( vllm-project#12172 )
Signed-off-by: hongxyan <[email protected]>
* [core] further polish memory profiling ( vllm-project#12126 )
Signed-off-by: youkaichao <[email protected]>
* [Docs] Fix broken link in SECURITY.md ( vllm-project#12175 )
Signed-off-by: Russell Bryant <[email protected]>
* [Model] Port deepseek-vl2 processor, remove dependency ( vllm-project#12169 )
Signed-off-by: Isotr0py <[email protected]>
* [core] clean up executor class hierarchy between v1 and v0 ( vllm-project#12171 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Support register quantization method out-of-tree ( vllm-project#11969 )
* [V1] Collect env var for usage stats ( vllm-project#12115 )
* [BUGFIX] Move scores to float32 in case of running xgrammar on cpu ( vllm-project#12152 )
Signed-off-by: Michal Adamczyk <[email protected]>
* [Bugfix] Fix multi-modal processors for transformers 4.48 ( vllm-project#12187 )
* [torch.compile] store inductor compiled Python file ( vllm-project#12182 )
Signed-off-by: youkaichao <[email protected]>
* benchmark_serving support --served-model-name param ( vllm-project#12109 )
Signed-off-by: zibai <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Misc] Add BNB support to GLM4-V model ( vllm-project#12184 )
Signed-off-by: Isotr0py <[email protected]>
* [V1] Add V1 support of Qwen2-VL ( vllm-project#12128 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>
* [Model] Support for fairseq2 Llama ( vllm-project#11442 )
Signed-off-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>
* [Bugfix] Fix num_heads value for simple connector when tp enabled ( vllm-project#12074 )
Signed-off-by: Shangming Cai <[email protected]>
* [torch.compile] fix sym_tensor_indices ( vllm-project#12191 )
Signed-off-by: youkaichao <[email protected]>
* Move linting to `pre-commit` ( vllm-project#11975 )
Signed-off-by: Harry Mellor <[email protected]>
* [DOC] Fix typo in docstring and assert message ( vllm-project#12194 )
Signed-off-by: Yuan Tang <[email protected]>
* [DOC] Add missing docstring in LLMEngine.add_request() ( vllm-project#12195 )
Signed-off-by: Yuan Tang <[email protected]>
* [Bugfix] Fix incorrect types in LayerwiseProfileResults ( vllm-project#12196 )
Signed-off-by: Yuan Tang <[email protected]>
* [Model] Add Qwen2 PRM model support ( vllm-project#12202 )
Signed-off-by: Isotr0py <[email protected]>
* [Core] Interface for accessing model from `VllmRunner` ( vllm-project#10353 )
Signed-off-by: DarkLight1337 <[email protected]>
* [misc] add placeholder format.sh ( vllm-project#12206 )
Signed-off-by: youkaichao <[email protected]>
* [CI/Build] Remove dummy CI steps ( vllm-project#12208 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI/Build] Make pre-commit faster ( vllm-project#12212 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model] Upgrade Aria to transformers 4.48 ( vllm-project#12203 )
Signed-off-by: DarkLight1337 <[email protected]>
* [misc] print a message to suggest how to bypass commit hooks ( vllm-project#12217 )
Signed-off-by: youkaichao <[email protected]>
* [core][bugfix] configure env var during import vllm ( vllm-project#12209 )
Signed-off-by: youkaichao <[email protected]>
* [V1] Remove `_get_cache_block_size` ( vllm-project#12214 )
Signed-off-by: Chen Zhang <[email protected]>
* [Misc] Pass `attention` to impl backend ( vllm-project#12218 )
Signed-off-by: wangxiyuan <[email protected]>
* [Bugfix] Fix `HfExampleModels.find_hf_info` ( vllm-project#12223 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI] Pass local python version explicitly to pre-commit mypy.sh ( vllm-project#12224 )
Signed-off-by: Chen Zhang <[email protected]>
* Using ROCm6.3.1 base docker and building hipblas-common ( #366 )
* [Misc] Update CODEOWNERS ( vllm-project#12229 )
* fix: update platform detection for M-series arm based MacBook processors ( vllm-project#12227 )
Signed-off-by: isikhi <[email protected]>
* [misc] add cuda runtime version to usage data ( vllm-project#12190 )
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [bugfix] catch xgrammar unsupported array constraints ( vllm-project#12210 )
Signed-off-by: Jason Cheng <[email protected]>
* [Kernel] optimize moe_align_block_size for cuda graph and large num_experts (e.g. DeepSeek-V3) ( vllm-project#12222 )
Signed-off-by: Jinzhen Lin <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
* Add quantization and guided decoding CODEOWNERS ( vllm-project#12228 )
Signed-off-by: mgoin <[email protected]>
* [AMD][Build] Porting dockerfiles from the ROCm/vllm fork ( vllm-project#11777 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* [BugFix] Fix GGUF tp>1 when vocab_size is not divisible by 64 ( vllm-project#12230 )
Signed-off-by: NickLucche <[email protected]>
* [ci/build] disable failed and flaky tests ( vllm-project#12240 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Rename `MultiModalInputsV2 -> MultiModalInputs` ( vllm-project#12244 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc]Add BNB quantization for PaliGemmaForConditionalGeneration ( vllm-project#12237 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Misc] Remove redundant TypeVar from base model ( vllm-project#12248 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix mm_limits access for merged multi-modal processor ( vllm-project#12252 )
Signed-off-by: DarkLight1337 <[email protected]>
* [torch.compile] transparent compilation with more logging ( vllm-project#12246 )
Signed-off-by: youkaichao <[email protected]>
* [V1][Bugfix] Fix data item ordering in mixed-modality inference ( vllm-project#12259 )
Signed-off-by: Roger Wang <[email protected]>
* Remove pytorch comments for outlines + compressed-tensors ( vllm-project#12260 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Platform] improve platforms getattr ( vllm-project#12264 )
Signed-off-by: Mengqing Cao <[email protected]>
* [ci/build] update nightly torch for gh200 test ( vllm-project#12270 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] fix race condition that leads to wrong order of token returned ( vllm-project#10802 )
Signed-off-by: Jannis Schönleber <[email protected]>
* [Kernel] fix moe_align_block_size error condition ( vllm-project#12239 )
Signed-off-by: Jinzhen Lin <[email protected]>
* [v1][stats][1/n] Add RequestStatsUpdate and RequestStats types ( vllm-project#10907 )
Signed-off-by: rickyx <[email protected]>
* [Bugfix] Multi-sequence broken ( vllm-project#11898 )
Signed-off-by: Andy Lo <[email protected]>
* [Misc] Remove experimental dep from tracing.py ( vllm-project#12007 )
Signed-off-by: Adrian Cole <[email protected]>
* [Misc] Set default backend to SDPA for get_vit_attn_backend ( vllm-project#12235 )
Signed-off-by: wangxiyuan <[email protected]>
* [Core] Free CPU pinned memory on environment cleanup ( vllm-project#10477 )
* Update pre-commit.yml ( #374 )
* Update pre-commit.yml
* Reapplying missing format
* New codespell exclude location
---------
Co-authored-by: Kevin H. Luu <[email protected]>
* [bugfix] moe tuning. rm is_navi() ( vllm-project#12273 )
Signed-off-by: Divakar Verma <[email protected]>
* [BUGFIX] When skip_tokenize_init and multistep are set, execution crashes ( vllm-project#12277 )
Signed-off-by: maleksan85 <[email protected]>
Co-authored-by: maleksan85 <[email protected]>
* [Documentation][AMD] Add information about prebuilt ROCm vLLM docker for perf validation purpose ( vllm-project#12281 )
Signed-off-by: Hongxia Yang <[email protected]>
* [VLM] Simplify post-processing of replacement info ( vllm-project#12269 )
Signed-off-by: DarkLight1337 <[email protected]>
* [ci/lint] Add back default arg for pre-commit ( vllm-project#12279 )
Signed-off-by: kevin <[email protected]>
* [CI] add docker volume prune to neuron CI ( vllm-project#12291 )
Signed-off-by: Liangfu Chen <[email protected]>
* [Ci/Build] Fix mypy errors on main ( vllm-project#12296 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Benchmark] More accurate TPOT calc in `benchmark_serving.py` ( vllm-project#12288 )
Signed-off-by: Nick Hill <[email protected]>
* [core] separate builder init and builder prepare for each batch ( vllm-project#12253 )
Signed-off-by: youkaichao <[email protected]>
* [Build] update requirements of no-device ( vllm-project#12299 )
Signed-off-by: Mengqing Cao <[email protected]>
* [Core] Support fully transparent sleep mode ( vllm-project#11743 )
Signed-off-by: youkaichao <[email protected]>
* [VLM] Avoid unnecessary tokenization ( vllm-project#12310 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model][Bugfix]: correct Aria model output ( vllm-project#12309 )
Signed-off-by: xffxff <[email protected]>
* [Bugfix][VLM] Fix mixed-modality inference backward compatibility for V0 ( vllm-project#12313 )
Signed-off-by: Roger Wang <[email protected]>
* [Doc] Add docs for prompt replacement ( vllm-project#12318 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] Fix the error in the tip for the --lora-modules parameter ( vllm-project#12319 )
Signed-off-by: wangerxiao <[email protected]>
* [Misc] Improve the readability of BNB error messages ( vllm-project#12320 )
Signed-off-by: Jee Jee Li <[email protected]>
* Skip tokenize/detokenize when it is disabled by arg --skip-tokenizer-init ( #367 )
* switching detokenize flag to be False
* detokenize = False for benchmarks
* restoring default in main vllm code for detokenize
* removing extra spaces
* moving detokenize to flag
* adding support for token ids
---------
Co-authored-by: maleksan85 <[email protected]>
* [Bugfix] Fix HPU multiprocessing executor ( vllm-project#12167 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Core] Support `reset_prefix_cache` ( vllm-project#12284 )
* [Frontend][V1] Online serving performance improvements ( vllm-project#12287 )
* [AMD][Quantization] Add TritonScaledMMLinearKernel since int8 is broken for AMD ( vllm-project#12282 )
Signed-off-by: Randall Smith <[email protected]>
* FP8 FA fixes ( #381 )
* FP8 FA fixes
Summary:
Add missing clamp and fix reciprocal scale computation.
* linter
* Returning the use of the proper stream in allreduce ( #382 )
* [Bugfix] Fixing AMD LoRA CI test. ( vllm-project#12329 )
Signed-off-by: Alexei V. Ivanov <[email protected]>
* [Docs] Update FP8 KV Cache documentation ( vllm-project#12238 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Docs] Document vulnerability disclosure process ( vllm-project#12326 )
Signed-off-by: Russell Bryant <[email protected]>
* [V1] Add `uncache_blocks` ( vllm-project#12333 )
* [doc] explain common errors around torch.compile ( vllm-project#12340 )
Signed-off-by: youkaichao <[email protected]>
* [Hardware][Gaudi][BugFix] Fix dataclass error due to triton package update ( vllm-project#12338 )
Signed-off-by: zhenwei <[email protected]>
* [Bugfix] Fix k_proj's bias for whisper self attention ( vllm-project#12342 )
Signed-off-by: Isotr0py <[email protected]>
* [Kernel] Flash Attention 3 Support ( vllm-project#12093 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Doc] Troubleshooting errors during model inspection ( vllm-project#12351 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] Simplify M-RoPE ( vllm-project#12352 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: imkero <[email protected]>
* [Bugfix] Fix broken internvl2 inference with v1 ( vllm-project#12360 )
Signed-off-by: Isotr0py <[email protected]>
* [core] add wake_up doc and some sanity check ( vllm-project#12361 )
Signed-off-by: youkaichao <[email protected]>
* [torch.compile] decouple compile sizes and cudagraph sizes ( vllm-project#12243 )
Signed-off-by: youkaichao <[email protected]>
* [FP8][Kernel] Dynamic kv cache scaling factors computation ( vllm-project#11906 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>
* [TPU] Update TPU CI to use torchxla nightly on 20250122 ( vllm-project#12334 )
Signed-off-by: Siyuan Liu <[email protected]>
* [Docs] Document Phi-4 support ( vllm-project#12362 )
Signed-off-by: Isotr0py <[email protected]>
* [BugFix] Fix parameter names and `process_after_weight_loading` for W4A16 MoE Group Act Order ( vllm-project#11528 )
Signed-off-by: ElizaWszola <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Misc] Fix OpenAI API Compatibility Issues in Benchmark Script ( vllm-project#12357 )
Signed-off-by: Junichi Sato <[email protected]>
* [Docs] Add meetup slides ( vllm-project#12345 )
Signed-off-by: Woosuk Kwon <[email protected]>
* Using pytorch commit past the point when rowwise PR ( pytorch/pytorch#144432 ) was merged ( #384 )
* [Docs] Update spec decode + structured output in compat matrix ( vllm-project#12373 )
Signed-off-by: Russell Bryant <[email protected]>
* [V1][Frontend] Coalesce bunched `RequestOutput`s ( vllm-project#12298 )
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
* Set weights_only=True when using torch.load() ( vllm-project#12366 )
Signed-off-by: Russell Bryant <[email protected]>
* [Bugfix] Path join when building local path for S3 clone ( vllm-project#12353 )
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
* Update compressed-tensors version ( vllm-project#12367 )
* [V1] Increase default batch size for H100/H200 ( vllm-project#12369 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [perf] fix perf regression from vllm-project#12253 ( vllm-project#12380 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Use VisionArena Dataset for VLM Benchmarking ( vllm-project#12389 )
Signed-off-by: Roger Wang <[email protected]>
* [ci/build] fix wheel size check ( vllm-project#12396 )
Signed-off-by: youkaichao <[email protected]>
* [Hardware][Gaudi][Doc] Add missing step in setup instructions ( vllm-project#12382 )
* [ci/build] sync default value for wheel size ( vllm-project#12398 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Enable proxy support in benchmark script ( vllm-project#12356 )
Signed-off-by: Junichi Sato <[email protected]>
* [Bugfix][Kernel] Fix CUDA 11.8 being broken by FA3 build ( vllm-project#12375 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Applying scales rename to fp8 config ( #387 )
* [Misc] Remove deprecated code ( vllm-project#12383 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix][Kernel] FA3 Fix - RuntimeError: This flash attention build only supports pack_gqa (for build size reasons). ( vllm-project#12405 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Dev-docker Documentation Updates ( #378 )
* Dev-docker Documentation Updates
Minor updates to several sections, with links to other documents where appropriate.
* Fix formatting of GEMM filename
* README cleanup
- Reorder some sections of the README to make them easier to follow
- Improve formatting of bash commands
- Prefer use of huggingface model names instead of hard-coded directories
- Clean up wording
* Expanded sample commands for Latency and Throughput
* Fix markdown links
* Fix pre-commit errors
* Updates from review
Initial updates to incorporate feedback from a review session held with @t-parry
* Update script args to match current recommendations
* Remove recommended max-num-seqs values for now
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
* [Bugfix][Kernel] Fix moe align block issue for mixtral ( vllm-project#12413 )
* [Bugfix] Fix BLIP-2 processing ( vllm-project#12412 )
Signed-off-by: DarkLight1337 <[email protected]>
* [ROCm][MoE] MI300 tuned configs Mixtral-8x(7B,22B) | fp16, fp8 ( vllm-project#12408 )
Signed-off-by: Divakar Verma <[email protected]>
* [Misc] Add FA2 support to ViT MHA layer ( vllm-project#12355 )
Signed-off-by: Isotr0py <[email protected]>
* [TPU][CI] Update torchxla version in requirement-tpu.txt ( vllm-project#12422 )
Signed-off-by: Siyuan Liu <[email protected]>
* [Misc][Bugfix] FA3 support to ViT MHA layer ( vllm-project#12435 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [V1][Perf] Reduce scheduling overhead in model runner after cuda sync ( vllm-project#12094 )
Signed-off-by: Keyun Tong <[email protected]>
* [V1][Bugfix] Fix assertion when mm hashing is turned off ( vllm-project#12439 )
Signed-off-by: Roger Wang <[email protected]>
* [Misc] Revert FA on ViT vllm-project#12355 and vllm-project#12435 ( vllm-project#12445 )
* [Frontend] generation_config.json for maximum tokens ( vllm-project#12242 )
Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
* [Bugfix] Disable w16a16 2of4 sparse CompressedTensors24 ( vllm-project#12417 )
Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: mgoin <[email protected]>
* [Bugfix/CI] Fix broken kernels/test_mha.py ( vllm-project#12450 )
* [Bugfix][Kernel] Fix perf regression caused by PR vllm-project#12405 ( vllm-project#12434 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Build/CI] Fix libcuda.so linkage ( vllm-project#12424 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [Frontend] Rerank API (Jina- and Cohere-compatible API) ( vllm-project#12376 )
Signed-off-by: Kyle Mistele <[email protected]>
* [DOC] Add link to vLLM blog ( vllm-project#12460 )
Signed-off-by: Yuan Tang <[email protected]>
* [V1] Avoid list creation in input preparation ( vllm-project#12457 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Frontend] Support scores endpoint in run_batch ( vllm-project#12430 )
Signed-off-by: Pooya Davoodi <[email protected]>
* [Bugfix] Fix Granite 3.0 MoE model loading ( vllm-project#12446 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix missing seq_start_loc in xformers prefill metadata ( vllm-project#12464 )
Signed-off-by: Isotr0py <[email protected]>
* [V1][Minor] Minor optimizations for update_from_output ( vllm-project#12454 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Bugfix] Fix gpt2 GGUF inference ( vllm-project#12467 )
Signed-off-by: Isotr0py <[email protected]>
* [Build] Only build 9.0a for scaled_mm and sparse kernels ( vllm-project#12339 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [V1][Metrics] Add initial Prometheus logger ( vllm-project#12416 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [V1][CI/Test] Do basic test for top-p & top-k sampling ( vllm-project#12469 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [FlashInfer] Upgrade to 0.2.0 ( vllm-project#11194 )
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: youkaichao <[email protected]>
* Support FP8 FA from Quark format ( #388 )
* Support FP8 FA from Quark format
* Support FP8 FA from Quark format
* nit: update comment
* Direct call on ROCm
* 20250127 docs update ( #392 )
* updating code blocks
* typo
* updated manifest
* Including feedback
* whitespace
* Deepseek instructions
* hyperlink fix
* hyperlink fix
* updating what is new
* cpx update
* typo
* whitespace
* whitespace
* Faster Custom Paged Attention kernels ( #372 )
* integrate new cpa kernel, update tests and benchmark
* added comments to mfma4 kernel
* further comments for mfma16 kernel
* clang-format
* Lint
* add flag for logits rtz conversion and disable by default
* lint
* [Bugfix]: Fix paged attention unit tests of #372 ( #389 )
* [Bugfix]: fix paged attention tests based on the updated kernels in `csrc/attention/paged_attention_v1.cu`, `csrc/attention/paged_attention_v2.cu` and `csrc/rocm/attention.cu`.
* improve code documentation.
* lint
---------
Co-authored-by: vllmellm <[email protected]>
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: TJian <[email protected]>
Co-authored-by: vllmellm <[email protected]>
* Using a more precise profiling on ROCm to properly account for weights padding ( #394 )
* Update Dockerfile.rocm
* [Bugfix]: include the env variables required for running FastSyncLLM
Signed-off-by: vllmellm <[email protected]>
* fix pre-commit lint
Signed-off-by: vllmellm <[email protected]>
---------
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Rafael Vasquez <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Sungjae Lee <[email protected]>
Signed-off-by: Yida Wu <[email protected]>
Signed-off-by: Chenguang Li <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Alex-Brooks <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Shanshan Shen <[email protected]>
Signed-off-by: elijah <[email protected]>
Signed-off-by: Yikun <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Konrad Zawora <[email protected]>
Signed-off-by: tjtanaa <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: yisheng <[email protected]>
Signed-off-by: Abatom <[email protected]>
Signed-off-by: Liangfu Chen <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Yuan Zhou <[email protected]>
Signed-off-by: Sourashis Roy <[email protected]>
Signed-off-by: Nishidha Panpaliya <[email protected]>
Signed-off-by: Ilya Lavrenov <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: Wallas Santos <[email protected]>
Signed-off-by: jiang1.li <[email protected]>
Signed-off-by: yan ma <[email protected]>
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Maxime Fournioux <[email protected]>
Signed-off-by: Ye Qi <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: Kunshang Ji <[email protected]>
Signed-off-by: Kuntai Du <[email protected]>
Signed-off-by: Ren MinMin <[email protected]>
Signed-off-by: Travis Johnson <[email protected]>
Signed-off-by: Fred Reiss <[email protected]>
Signed-off-by: shaochangxu.scx <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: Kyle Sayers <[email protected]>
Signed-off-by: Rahul Tuli <[email protected]>
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Divakar Verma <[email protected]>
Signed-off-by: Gregory Shtrasberg <[email protected]>
Signed-off-by: hongxyan <[email protected]>
Signed-off-by: Michal Adamczyk <[email protected]>
Signed-off-by: zibai <[email protected]>
Signed-off-by: Martin Gleize <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: isikhi <[email protected]>
Signed-off-by: Jason Cheng <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]>
Signed-off-by: Thomas Parnell <[email protected]>
Signed-off-by: Jannis Schönleber <[email protected]>
Signed-off-by: rickyx <[email protected]>
Signed-off-by: Andy Lo <[email protected]>
Signed-off-by: Adrian Cole <[email protected]>
Signed-off-by: maleksan85 <[email protected]>
Signed-off-by: Hongxia Yang <[email protected]>
Signed-off-by: kevin <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: xffxff <[email protected]>
Signed-off-by: wangerxiao <[email protected]>
Signed-off-by: Alexei V. Ivanov <[email protected]>
Signed-off-by: zhenwei <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Siyuan Liu <[email protected]>
Signed-off-by: ElizaWszola <[email protected]>
Signed-off-by: Junichi Sato <[email protected]>
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
Signed-off-by: Keyun Tong <[email protected]>
Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Kyle Mistele <[email protected]>
Signed-off-by: Pooya Davoodi <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: vllmellm <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Rafael Vasquez <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Akshat Tripathi <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Avshalom Manevich <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Yangcheng Li <[email protected]>
Co-authored-by: Siyuan Li <[email protected]>
Co-authored-by: Sungjae Lee <[email protected]>
Co-authored-by: Concurrensee <[email protected]>
Co-authored-by: Chenguang Li <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Alex Brooks <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Shanshan Shen <[email protected]>
Co-authored-by: elijah <[email protected]>
Co-authored-by: Yikun Jiang <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Steve Luo <[email protected]>
Co-authored-by: mgoin <[email protected]>
Co-authored-by: Alexei-V-Ivanov-AMD <[email protected]>
Co-authored-by: Alexei V. Ivanov <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: Konrad Zawora <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
Co-authored-by: maang-h <[email protected]>
Co-authored-by: YiSheng5 <[email protected]>
Co-authored-by: Zhonghua Deng <[email protected]>
Co-authored-by: Liangfu Chen <[email protected]>
Co-authored-by: XiaobingZhang <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Yuan <[email protected]>
Co-authored-by: jiangjiadi <[email protected]>
Co-authored-by: jiadi.jjd <[email protected]>
Co-authored-by: sroy745 <[email protected]>
Co-authored-by: Jie Fu (傅杰) <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: WangErXiao <[email protected]>
Co-authored-by: Nishidha <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Wallas Henrique <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: Yan Ma <[email protected]>
Co-authored-by: rasmith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: Maxime Fournioux <[email protected]>
Co-authored-by: Guspan Tanadi <[email protected]>
Co-authored-by: Ye (Charlotte) Qi <[email protected]>
Co-authored-by: yeq <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
Co-authored-by: Charles Frye <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: cennn <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: minmin <[email protected]>
Co-authored-by: Ren MinMin <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Fred Reiss <[email protected]>
Co-authored-by: shaochangxu <[email protected]>
Co-authored-by: shaochangxu.scx <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: sixgod <[email protected]>
Co-authored-by: Elfie Guo <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Kyle Sayers <[email protected]>
Co-authored-by: Rahul Tuli <[email protected]>
Co-authored-by: Keyun Tong <[email protected]>
Co-authored-by: RunningLeon <[email protected]>
Co-authored-by: kewang-xlnx <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: tvirolai-amd <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Zhaoyi Li <[email protected]>
Co-authored-by: charlifu <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Hongxia Yang <[email protected]>
Co-authored-by: yancong <[email protected]>
Co-authored-by: Michal Adamczyk <[email protected]>
Co-authored-by: gujing <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: Işık <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Cheng Kuan Yong Jason <[email protected]>
Co-authored-by: Jinzhen Lin <[email protected]>
Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: Jannis Schönleber <[email protected]>
Co-authored-by: Ricky Xu <[email protected]>
Co-authored-by: Andy Lo <[email protected]>
Co-authored-by: Adrian Cole <[email protected]>
Co-authored-by: Jani Monoses <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: maleksan85 <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: zhou fan <[email protected]>
Co-authored-by: ilia-cher <[email protected]>
Co-authored-by: liuzhenwei <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>
Co-authored-by: Siyuan Liu <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Junichi Sato <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: omer-dayan <[email protected]>
Co-authored-by: Mohit Deopujari <[email protected]>
Co-authored-by: Jeremy Arnold <[email protected]>
Co-authored-by: Matthew Hendrey <[email protected]>
Co-authored-by: Kyle Mistele <[email protected]>
Co-authored-by: Pooya Davoodi <[email protected]>
Co-authored-by: Mark McLoughlin <[email protected]>
Co-authored-by: Bowen Wang <[email protected]>
Co-authored-by: Bowen Bao <[email protected]>
Co-authored-by: arakowsk-amd <[email protected]>
Co-authored-by: sanyalington <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: vllmellm <[email protected]>
hongxiayang added a commit to ROCm/vllm that referenced this pull request Feb 5, 2025 [Bug Fix] Missing vllm.envs ( #405 ) … 87b3c56
* [AMD][Quantization] Add TritonScaledMMLinearKernel since int8 is broken for AMD ( vllm-project#12282 )
Signed-off-by: Randall Smith <[email protected]>
* FP8 FA fixes ( #381 )
* FP8 FA fixes
Summary:
Add missing clamp and fix reciprocal scale computation.
* linter
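For readers skimming the log: the "FP8 FA fixes" entry above mentions a missing clamp and a reciprocal-scale fix in the FP8 flash-attention path. As a generic illustration of those two ingredients (not the actual kernel change from this commit; tensor and function names are assumptions), per-tensor FP8 quantization typically clamps into the format's representable range before casting and dequantizes with the matching scale:

```python
import torch

FP8_MAX = torch.finfo(torch.float8_e4m3fn).max  # ~448 for e4m3fn

def quantize_fp8(x: torch.Tensor):
    # Per-tensor scale so the largest magnitude maps to FP8_MAX.
    scale = x.abs().max().clamp(min=1e-12) / FP8_MAX
    # Clamp before casting: values outside the FP8 range would
    # otherwise overflow when converted to float8.
    x_q = (x / scale).clamp(-FP8_MAX, FP8_MAX).to(torch.float8_e4m3fn)
    return x_q, scale

def dequantize_fp8(x_q: torch.Tensor, scale: torch.Tensor):
    # Dequantize with the scale itself; a kernel that is handed the
    # reciprocal (1 / scale) must multiply accordingly -- mixing the
    # two conventions is the kind of bug such a fix addresses.
    return x_q.to(torch.float32) * scale
```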
* Returning the use of the proper stream in allreduce ( #382 )
* [Bugfix] Fixing AMD LoRA CI test. ( vllm-project#12329 )
Signed-off-by: Alexei V. Ivanov <[email protected]>
* [Docs] Update FP8 KV Cache documentation ( vllm-project#12238 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Docs] Document vulnerability disclosure process ( vllm-project#12326 )
Signed-off-by: Russell Bryant <[email protected]>
* [V1] Add `uncache_blocks` ( vllm-project#12333 )
* [doc] explain common errors around torch.compile ( vllm-project#12340 )
Signed-off-by: youkaichao <[email protected]>
* [Hardware][Gaudi][BugFix] Fix dataclass error due to triton package update ( vllm-project#12338 )
Signed-off-by: zhenwei <[email protected]>
* [Bugfix] Fix k_proj's bias for whisper self attention ( vllm-project#12342 )
Signed-off-by: Isotr0py <[email protected]>
* [Kernel] Flash Attention 3 Support ( vllm-project#12093 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Doc] Troubleshooting errors during model inspection ( vllm-project#12351 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] Simplify M-RoPE ( vllm-project#12352 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: imkero <[email protected]>
* [Bugfix] Fix broken internvl2 inference with v1 ( vllm-project#12360 )
Signed-off-by: Isotr0py <[email protected]>
* [core] add wake_up doc and some sanity check ( vllm-project#12361 )
Signed-off-by: youkaichao <[email protected]>
* [torch.compile] decouple compile sizes and cudagraph sizes ( vllm-project#12243 )
Signed-off-by: youkaichao <[email protected]>
* [FP8][Kernel] Dynamic kv cache scaling factors computation ( vllm-project#11906 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>
* [TPU] Update TPU CI to use torchxla nightly on 20250122 ( vllm-project#12334 )
Signed-off-by: Siyuan Liu <[email protected]>
* [Docs] Document Phi-4 support ( vllm-project#12362 )
Signed-off-by: Isotr0py <[email protected]>
* [BugFix] Fix parameter names and `process_after_weight_loading` for W4A16 MoE Group Act Order ( vllm-project#11528 )
Signed-off-by: ElizaWszola <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Misc] Fix OpenAI API Compatibility Issues in Benchmark Script ( vllm-project#12357 )
Signed-off-by: Junichi Sato <[email protected]>
* [Docs] Add meetup slides ( vllm-project#12345 )
Signed-off-by: Woosuk Kwon <[email protected]>
* Using pytorch commit past the point when rowwise PR ( pytorch/pytorch#144432 ) was merged ( #384 )
* [Docs] Update spec decode + structured output in compat matrix ( vllm-project#12373 )
Signed-off-by: Russell Bryant <[email protected]>
* [V1][Frontend] Coalesce bunched `RequestOutput`s ( vllm-project#12298 )
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
* Set weights_only=True when using torch.load() ( vllm-project#12366 )
Signed-off-by: Russell Bryant <[email protected]>
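The `weights_only=True` entry above is about hardening checkpoint loading: it restricts `torch.load()` to plain tensors and primitive containers so untrusted pickle payloads are not executed. A small usage sketch (the checkpoint path is a placeholder, not a file from this repo):

```python
import torch

# Refuse to unpickle arbitrary Python objects from the checkpoint file;
# only tensors and basic container types are deserialized.
state_dict = torch.load("checkpoint.pt", map_location="cpu", weights_only=True)
```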
* [Bugfix] Path join when building local path for S3 clone ( vllm-project#12353 )
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
* Update compressed-tensors version ( vllm-project#12367 )
* [V1] Increase default batch size for H100/H200 ( vllm-project#12369 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [perf] fix perf regression from vllm-project#12253 ( vllm-project#12380 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Use VisionArena Dataset for VLM Benchmarking ( vllm-project#12389 )
Signed-off-by: Roger Wang <[email protected]>
* [ci/build] fix wheel size check ( vllm-project#12396 )
Signed-off-by: youkaichao <[email protected]>
* [Hardware][Gaudi][Doc] Add missing step in setup instructions ( vllm-project#12382 )
* [ci/build] sync default value for wheel size ( vllm-project#12398 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Enable proxy support in benchmark script ( vllm-project#12356 )
Signed-off-by: Junichi Sato <[email protected]>
* [Bugfix][Kernel] Fix CUDA 11.8 being broken by FA3 build ( vllm-project#12375 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Applying scales rename to fp8 config ( #387 )
* [Misc] Remove deprecated code ( vllm-project#12383 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix][Kernel] FA3 Fix - RuntimeError: This flash attention build only supports pack_gqa (for build size reasons). ( vllm-project#12405 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Dev-docker Documentation Updates ( #378 )
* Dev-docker Documentation Updates
Minor updates to several sections, with links to other documents where appropriate.
* Fix formatting of GEMM filename
* README cleanup
- Reorder some sections of the README to make them easier to follow
- Improve formatting of bash commands
- Prefer use of huggingface model names instead of hard-coded directories
- Clean up wording
* Expanded sample commands for Latency and Throughput
* Fix markdown links
* Fix pre-commit errors
* Updates from review
Initial updates to incorporate feedback from a review session held with @t-parry
* Update script args to match current recommendations
* Remove recommended max-num-seqs values for now
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
* [Bugfix][Kernel] Fix moe align block issue for mixtral ( vllm-project#12413 )
* [Bugfix] Fix BLIP-2 processing ( vllm-project#12412 )
Signed-off-by: DarkLight1337 <[email protected]>
* [ROCm][MoE] MI300 tuned configs Mixtral-8x(7B,22B) | fp16, fp8 ( vllm-project#12408 )
Signed-off-by: Divakar Verma <[email protected]>
* [Misc] Add FA2 support to ViT MHA layer ( vllm-project#12355 )
Signed-off-by: Isotr0py <[email protected]>
* [TPU][CI] Update torchxla version in requirement-tpu.txt ( vllm-project#12422 )
Signed-off-by: Siyuan Liu <[email protected]>
* [Misc][Bugfix] FA3 support to ViT MHA layer ( vllm-project#12435 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [V1][Perf] Reduce scheduling overhead in model runner after cuda sync ( vllm-project#12094 )
Signed-off-by: Keyun Tong <[email protected]>
* [V1][Bugfix] Fix assertion when mm hashing is turned off ( vllm-project#12439 )
Signed-off-by: Roger Wang <[email protected]>
* [Misc] Revert FA on ViT vllm-project#12355 and vllm-project#12435 ( vllm-project#12445 )
* [Frontend] generation_config.json for maximum tokens( vllm-project#12242 )
Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
* [Bugfix] Disable w16a16 2of4 sparse CompressedTensors24 ( vllm-project#12417 )
Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: mgoin <[email protected]>
* [Bugfix/CI] Fix broken kernels/test_mha.py ( vllm-project#12450 )
* [Bugfix][Kernel] Fix perf regression caused by PR vllm-project#12405 ( vllm-project#12434 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Build/CI] Fix libcuda.so linkage ( vllm-project#12424 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [Frontend] Rerank API (Jina- and Cohere-compatible API) ( vllm-project#12376 )
Signed-off-by: Kyle Mistele <[email protected]>
* [DOC] Add link to vLLM blog ( vllm-project#12460 )
Signed-off-by: Yuan Tang <[email protected]>
* [V1] Avoid list creation in input preparation ( vllm-project#12457 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Frontend] Support scores endpoint in run_batch ( vllm-project#12430 )
Signed-off-by: Pooya Davoodi <[email protected]>
* [Bugfix] Fix Granite 3.0 MoE model loading ( vllm-project#12446 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix missing seq_start_loc in xformers prefill metadata ( vllm-project#12464 )
Signed-off-by: Isotr0py <[email protected]>
* [V1][Minor] Minor optimizations for update_from_output ( vllm-project#12454 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Bugfix] Fix gpt2 GGUF inference ( vllm-project#12467 )
Signed-off-by: Isotr0py <[email protected]>
* [Build] Only build 9.0a for scaled_mm and sparse kernels ( vllm-project#12339 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [V1][Metrics] Add initial Prometheus logger ( vllm-project#12416 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [V1][CI/Test] Do basic test for top-p & top-k sampling ( vllm-project#12469 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [FlashInfer] Upgrade to 0.2.0 ( vllm-project#11194 )
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: youkaichao <[email protected]>
* Support FP8 FA from Quark format ( #388 )
* Support FP8 FA from Quark format
* Support FP8 FA from Quark format
* nit: update comment
* Direct call on ROCm
* 20250127 docs update ( #392 )
* updating code blocks
* typo
* updated manifest
* Including feedback
* whitespace
* Deepseek instructions
* hyperlink fix
* hyperlink fix
* updating what is new
* cpx update
* typo
* whitespace
* whitespace
* Faster Custom Paged Attention kernels ( #372 )
* integrate new cpa kernel, update tests and benchmark
* added comments to mfma4 kernel
* further comments for mfma16 kernel
* clang-format
* Lint
* add flag for logits rtz conversion and disable by default
* lint
* [Bugfix]: Fix paged attention unit tests of #372 ( #389 )
* [Bugfix]: fix paged attention tests based on the updated kernels in `csrc/attention/paged_attention_v1.cu`,`csrc/attention/paged_attention_v2.cu` and `csrc/rocm/attention.cu`.
* improve code documentation.
* lint
---------
Co-authored-by: vllmellm <[email protected]>
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: TJian <[email protected]>
Co-authored-by: vllmellm <[email protected]>
* Using a more precise profiling on ROCm to properly account for weights padding ( #394 )
* Update Dockerfile.rocm
* [Bugfix]: include the env variables required for running FastSyncLLM
Signed-off-by: vllmellm <[email protected]>
* fix pre-commit lint
Signed-off-by: vllmellm <[email protected]>
* [Bugfix] included missing environment variable
Signed-off-by: vllmellm <[email protected]>
---------
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Sungjae Lee <[email protected]>
Signed-off-by: Yida Wu <[email protected]>
Signed-off-by: Chenguang Li <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Alex-Brooks <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Shanshan Shen <[email protected]>
Signed-off-by: elijah <[email protected]>
Signed-off-by: Yikun <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Konrad Zawora <[email protected]>
Signed-off-by: tjtanaa <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: yisheng <[email protected]>
Signed-off-by: Abatom <[email protected]>
Signed-off-by: Liangfu Chen <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Yuan Zhou <[email protected]>
Signed-off-by: Sourashis Roy <[email protected]>
Signed-off-by: Nishidha Panpaliya <[email protected]>
Signed-off-by: Ilya Lavrenov <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: Wallas Santos <[email protected]>
Signed-off-by: jiang1.li <[email protected]>
Signed-off-by: yan ma <[email protected]>
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Maxime Fournioux <[email protected]>
Signed-off-by: Ye Qi <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: Kunshang Ji <[email protected]>
Signed-off-by: Kuntai Du <[email protected]>
Signed-off-by: Ren MinMin <[email protected]>
Signed-off-by: Travis Johnson <[email protected]>
Signed-off-by: Fred Reiss <[email protected]>
Signed-off-by: shaochangxu.scx <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: Rafael Vasquez <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: Kyle Sayers <[email protected]>
Signed-off-by: Rahul Tuli <[email protected]>
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Divakar Verma <[email protected]>
Signed-off-by: Gregory Shtrasberg <[email protected]>
Signed-off-by: hongxyan <[email protected]>
Signed-off-by: Michal Adamczyk <[email protected]>
Signed-off-by: zibai <[email protected]>
Signed-off-by: Martin Gleize <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: isikhi <[email protected]>
Signed-off-by: Jason Cheng <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]>
Signed-off-by: Thomas Parnell <[email protected]>
Signed-off-by: Jannis Schönleber <[email protected]>
Signed-off-by: rickyx <[email protected]>
Signed-off-by: Andy Lo <[email protected]>
Signed-off-by: Adrian Cole <[email protected]>
Signed-off-by: maleksan85 <[email protected]>
Signed-off-by: Hongxia Yang <[email protected]>
Signed-off-by: kevin <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: xffxff <[email protected]>
Signed-off-by: wangerxiao <[email protected]>
Signed-off-by: Alexei V. Ivanov <[email protected]>
Signed-off-by: zhenwei <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Siyuan Liu <[email protected]>
Signed-off-by: ElizaWszola <[email protected]>
Signed-off-by: Junichi Sato <[email protected]>
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
Signed-off-by: Keyun Tong <[email protected]>
Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Kyle Mistele <[email protected]>
Signed-off-by: Pooya Davoodi <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: vllmellm <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Akshat Tripathi <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Avshalom Manevich <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Yangcheng Li <[email protected]>
Co-authored-by: Siyuan Li <[email protected]>
Co-authored-by: Sungjae Lee <[email protected]>
Co-authored-by: Concurrensee <[email protected]>
Co-authored-by: Chenguang Li <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Alex Brooks <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Shanshan Shen <[email protected]>
Co-authored-by: elijah <[email protected]>
Co-authored-by: Yikun Jiang <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Steve Luo <[email protected]>
Co-authored-by: mgoin <[email protected]>
Co-authored-by: Alexei-V-Ivanov-AMD <[email protected]>
Co-authored-by: Alexei V. Ivanov <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: Konrad Zawora <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
Co-authored-by: maang-h <[email protected]>
Co-authored-by: YiSheng5 <[email protected]>
Co-authored-by: Zhonghua Deng <[email protected]>
Co-authored-by: Liangfu Chen <[email protected]>
Co-authored-by: XiaobingZhang <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Yuan <[email protected]>
Co-authored-by: jiangjiadi <[email protected]>
Co-authored-by: jiadi.jjd <[email protected]>
Co-authored-by: sroy745 <[email protected]>
Co-authored-by: Jie Fu (傅杰) <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: WangErXiao <[email protected]>
Co-authored-by: Nishidha <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Wallas Henrique <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: Yan Ma <[email protected]>
Co-authored-by: rasmith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: Maxime Fournioux <[email protected]>
Co-authored-by: Guspan Tanadi <[email protected]>
Co-authored-by: Ye (Charlotte) Qi <[email protected]>
Co-authored-by: yeq <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
Co-authored-by: Charles Frye <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: cennn <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: minmin <[email protected]>
Co-authored-by: Ren MinMin <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Fred Reiss <[email protected]>
Co-authored-by: shaochangxu <[email protected]>
Co-authored-by: shaochangxu.scx <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: sixgod <[email protected]>
Co-authored-by: Rafael Vasquez <[email protected]>
Co-authored-by: Elfie Guo <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Kyle Sayers <[email protected]>
Co-authored-by: Rahul Tuli <[email protected]>
Co-authored-by: Keyun Tong <[email protected]>
Co-authored-by: RunningLeon <[email protected]>
Co-authored-by: kewang-xlnx <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: tvirolai-amd <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Zhaoyi Li <[email protected]>
Co-authored-by: charlifu <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Hongxia Yang <[email protected]>
Co-authored-by: yancong <[email protected]>
Co-authored-by: Michal Adamczyk <[email protected]>
Co-authored-by: gujing <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: Işık <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Cheng Kuan Yong Jason <[email protected]>
Co-authored-by: Jinzhen Lin <[email protected]>
Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: Jannis Schönleber <[email protected]>
Co-authored-by: Ricky Xu <[email protected]>
Co-authored-by: Andy Lo <[email protected]>
Co-authored-by: Adrian Cole <[email protected]>
Co-authored-by: Jani Monoses <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: maleksan85 <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: zhou fan <[email protected]>
Co-authored-by: ilia-cher <[email protected]>
Co-authored-by: liuzhenwei <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>
Co-authored-by: Siyuan Liu <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Junichi Sato <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: omer-dayan <[email protected]>
Co-authored-by: Mohit Deopujari <[email protected]>
Co-authored-by: Jeremy Arnold <[email protected]>
Co-authored-by: Matthew Hendrey <[email protected]>
Co-authored-by: Kyle Mistele <[email protected]>
Co-authored-by: Pooya Davoodi <[email protected]>
Co-authored-by: Mark McLoughlin <[email protected]>
Co-authored-by: Bowen Wang <[email protected]>
Co-authored-by: Bowen Bao <[email protected]>
Co-authored-by: arakowsk-amd <[email protected]>
Co-authored-by: sanyalington <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: vllmellm <[email protected]>
NickLucche pushed a commit to NickLucche/vllm that referenced this pull request Feb 7, 2025 [perf] fix perf regression from vllm-project#12253 ( vllm-project#12380 ) … da3ba9f Signed-off-by: youkaichao <[email protected]>
GWS0428 pushed a commit to GWS0428/VARserve that referenced this pull request Feb 12, 2025 [perf] fix perf regression from vllm-project#12253 ( vllm-project#12380 ) … 75c53e3 Signed-off-by: youkaichao <[email protected]>
hongxiayang added a commit to ROCm/vllm that referenced this pull request Feb 19, 2025 [FEAT] [AITER] Support AITER operators: Fused MoE, Linear, Norm ( #436 ) … 4c8c86d
* [Doc] Update Quantization Hardware Support Documentation ( vllm-project#12025 )
Signed-off-by: tjtanaa <[email protected]>
Co-authored-by: tjtanaa <[email protected]>
* [HPU][misc] add comments for explanation ( vllm-project#12034 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix various bugs in multi-modal processor ( vllm-project#12031 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Kernel] Revert the API change of Attention.forward ( vllm-project#12038 )
Signed-off-by: Chen Zhang <[email protected]>
* [Platform] Add output for Attention Backend ( vllm-project#11981 )
Signed-off-by: wangxiyuan <[email protected]>
* [Bugfix][Kernel] Give unique name to BlockSparseFlashAttention ( vllm-project#12040 )
Signed-off-by: Chen Zhang <[email protected]>
* Explain where the engine args go when using Docker ( vllm-project#12041 )
Signed-off-by: Harry Mellor <[email protected]>
* Docs lint
* [Doc]: Update the Json Example of the `Engine Arguments` document ( vllm-project#12045 )
* [Misc] Merge bitsandbytes_stacked_params_mapping and packed_modules_mapping ( vllm-project#11924 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Kernel] Support MulAndSilu ( vllm-project#11624 )
Signed-off-by: Jee Jee Li <[email protected]>
* [HPU][Bugfix] Don't use /dev/accel/accel0 for HPU autodetection in setup.py ( vllm-project#12046 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Platform] move current_memory_usage() into platform ( vllm-project#11369 )
Signed-off-by: Shanshan Shen <[email protected]>
* [V1][BugFix] Fix edge case in VLM scheduling ( vllm-project#12065 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Misc] Add multipstep chunked-prefill support for FlashInfer ( vllm-project#10467 )
* [core] Turn off GPU communication overlap for Ray executor ( vllm-project#12051 )
Signed-off-by: Rui Qiao <[email protected]>
* [core] platform agnostic executor via collective_rpc ( vllm-project#11256 )
Signed-off-by: youkaichao <[email protected]>
* [Doc] Update examples to remove SparseAutoModelForCausalLM ( vllm-project#12062 )
Signed-off-by: Kyle Sayers <[email protected]>
* [V1][Prefix Cache] Move the logic of num_computed_tokens into KVCacheManager ( vllm-project#12003 )
* Fix: cases with empty sparsity config ( vllm-project#12057 )
Signed-off-by: Rahul Tuli <[email protected]>
* Type-fix: make execute_model output type optional ( vllm-project#12020 )
* [Platform] Do not raise error if _Backend is not found ( vllm-project#12023 )
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
* [Model]: Support internlm3 ( vllm-project#12037 )
* Misc: allow to use proxy in `HTTPConnection` ( vllm-project#12042 )
Signed-off-by: Yuan Zhou <[email protected]>
* [Misc][Quark] Upstream Quark format to VLLM ( vllm-project#10765 )
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Doc]: Update `OpenAI-Compatible Server` documents ( vllm-project#12082 )
* [Bugfix] use right truncation for non-generative tasks ( vllm-project#12050 )
Signed-off-by: Joe Runde <[email protected]>
* [V1][Core] Autotune encoder cache budget ( vllm-project#11895 )
Signed-off-by: Roger Wang <[email protected]>
* [Bugfix] Fix _get_lora_device for HQQ marlin ( vllm-project#12090 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* Allow hip sources to be directly included when compiling for rocm. ( vllm-project#12087 )
* [Core] Default to using per_token quantization for fp8 when cutlass is supported. ( vllm-project#8651 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: mgoin <[email protected]>
* [Doc] Add documentation for specifying model architecture ( vllm-project#12105 )
* Various cosmetic/comment fixes ( vllm-project#12089 )
Signed-off-by: mgoin <[email protected]>
* [Bugfix] Remove hardcoded `head_size=256` for Deepseek v2 and v3 ( vllm-project#12067 )
Signed-off-by: Isotr0py <[email protected]>
* Support torchrun and SPMD-style offline inference ( vllm-project#12071 )
Signed-off-by: youkaichao <[email protected]>
* [core] LLM.collective_rpc interface and RLHF example ( vllm-project#12084 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix max image feature size for Llava-one-vision ( vllm-project#12104 )
Signed-off-by: Roger Wang <[email protected]>
* Enable user marker for vllm profiling ( #357 )
* Enable user marker for vllm profiling
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
* [misc] Add LoRA kernel micro benchmarks ( vllm-project#11579 )
* [Model] Add support for deepseek-vl2-tiny model ( vllm-project#12068 )
Signed-off-by: Isotr0py <[email protected]>
* Deepseek V3 support ( #364 )
* Changing the hard coded datatype to see if it's enough for the model to work
* Picking the upstream moe kernel version
* make the upstream fix for v3 also work for rocm v2
* Conditional fnuz dtype
* Requantizing from fn to fnuz
* Requantizing moe as well
* Actually requantizing moe weights
* Conditional requantization and assert on padding in block quant
* Format
---------
Co-authored-by: charlifu <[email protected]>
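The "Deepseek V3 support" entry above mentions requantizing FP8 weights from the OCP e4m3fn format to the e4m3fnuz format used on ROCm hardware. As a rough sketch of that idea (illustrative only, not the code from this commit; the function name is an assumption): for an identical bit pattern, e4m3fnuz decodes to half the e4m3fn value, so the dequantization scale is doubled, and the e4m3fn encoding of -0.0 (0x80) is remapped because that pattern decodes as NaN in e4m3fnuz.

```python
import torch

def e4m3fn_to_e4m3fnuz(weight: torch.Tensor, weight_scale: torch.Tensor):
    """Reinterpret OCP e4m3fn weights as e4m3fnuz (sketch).

    Same bits decode to half the value in e4m3fnuz, so the scale is
    doubled to keep the dequantized result unchanged.
    """
    assert weight.dtype == torch.float8_e4m3fn
    as_int8 = weight.view(torch.int8)
    as_int8[as_int8 == -128] = 0          # 0x80: -0.0 in fn, NaN in fnuz
    weight_fnuz = as_int8.view(torch.float8_e4m3fnuz)
    return weight_fnuz, weight_scale * 2.0
```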
* [Bugfix] Set enforce_eager automatically for mllama ( vllm-project#12127 )
Signed-off-by: Chen Zhang <[email protected]>
* [Bugfix] Fix a path bug in disaggregated prefill example script. ( vllm-project#12121 )
Signed-off-by: Kuntai Du <[email protected]>
* [CI]add genai-perf benchmark in nightly benchmark ( vllm-project#10704 )
Signed-off-by: Kunshang Ji <[email protected]>
* [Doc] Add instructions on using Podman when SELinux is active ( vllm-project#12136 )
Signed-off-by: Yuan Tang <[email protected]>
* [Bugfix] Fix issues in CPU build Dockerfile ( vllm-project#12135 )
Signed-off-by: Yuan Tang <[email protected]>
* [BugFix] add more `is not None` check in VllmConfig.__post_init__ ( vllm-project#12138 )
Signed-off-by: Chen Zhang <[email protected]>
* [Misc] Add deepseek_vl2 chat template ( vllm-project#12143 )
Signed-off-by: Isotr0py <[email protected]>
* [ROCm][MoE] moe tuning support for rocm ( vllm-project#12049 )
Signed-off-by: Divakar Verma <[email protected]>
* [V1] Move more control of kv cache initialization from model_executor to EngineCore ( vllm-project#11960 )
Signed-off-by: Chen Zhang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
* [Misc][LoRA] Improve the readability of LoRA error messages ( vllm-project#12102 )
Signed-off-by: Jee Jee Li <[email protected]>
* [CI/Build][CPU][Bugfix] Fix CPU CI ( vllm-project#12150 )
Signed-off-by: jiang1.li <[email protected]>
* [core] allow callable in collective_rpc ( vllm-project#12151 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix score api for missing max_model_len validation ( vllm-project#12119 )
Signed-off-by: Wallas Santos <[email protected]>
* [Bugfix] Mistral tokenizer encode accept list of str ( vllm-project#12149 )
Signed-off-by: Kunshang Ji <[email protected]>
* [AMD][FP8] Using MI300 FP8 format on ROCm for block_quant ( vllm-project#12134 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* [torch.compile] disable logging when cache is disabled ( vllm-project#12043 )
Signed-off-by: youkaichao <[email protected]>
* [misc] fix cross-node TP ( vllm-project#12166 )
Signed-off-by: youkaichao <[email protected]>
* [AMD][CI/Build][Bugfix] use pytorch stale wheel ( vllm-project#12172 )
Signed-off-by: hongxyan <[email protected]>
* [core] further polish memory profiling ( vllm-project#12126 )
Signed-off-by: youkaichao <[email protected]>
* [Docs] Fix broken link in SECURITY.md ( vllm-project#12175 )
Signed-off-by: Russell Bryant <[email protected]>
* [Model] Port deepseek-vl2 processor, remove dependency ( vllm-project#12169 )
Signed-off-by: Isotr0py <[email protected]>
* [core] clean up executor class hierarchy between v1 and v0 ( vllm-project#12171 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Support register quantization method out-of-tree ( vllm-project#11969 )
* [V1] Collect env var for usage stats ( vllm-project#12115 )
* [BUGFIX] Move scores to float32 in case of running xgrammar on cpu ( vllm-project#12152 )
Signed-off-by: Michal Adamczyk <[email protected]>
* [Bugfix] Fix multi-modal processors for transformers 4.48 ( vllm-project#12187 )
* [torch.compile] store inductor compiled Python file ( vllm-project#12182 )
Signed-off-by: youkaichao <[email protected]>
* benchmark_serving support --served-model-name param ( vllm-project#12109 )
Signed-off-by: zibai <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Misc] Add BNB support to GLM4-V model ( vllm-project#12184 )
Signed-off-by: Isotr0py <[email protected]>
* [V1] Add V1 support of Qwen2-VL ( vllm-project#12128 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>
* [Model] Support for fairseq2 Llama ( vllm-project#11442 )
Signed-off-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>
* [Bugfix] Fix num_heads value for simple connector when tp enabled ( vllm-project#12074 )
Signed-off-by: Shangming Cai <[email protected]>
* [torch.compile] fix sym_tensor_indices ( vllm-project#12191 )
Signed-off-by: youkaichao <[email protected]>
* Move linting to `pre-commit` ( vllm-project#11975 )
Signed-off-by: Harry Mellor <[email protected]>
* [DOC] Fix typo in docstring and assert message ( vllm-project#12194 )
Signed-off-by: Yuan Tang <[email protected]>
* [DOC] Add missing docstring in LLMEngine.add_request() ( vllm-project#12195 )
Signed-off-by: Yuan Tang <[email protected]>
* [Bugfix] Fix incorrect types in LayerwiseProfileResults ( vllm-project#12196 )
Signed-off-by: Yuan Tang <[email protected]>
* [Model] Add Qwen2 PRM model support ( vllm-project#12202 )
Signed-off-by: Isotr0py <[email protected]>
* [Core] Interface for accessing model from `VllmRunner` ( vllm-project#10353 )
Signed-off-by: DarkLight1337 <[email protected]>
* [misc] add placeholder format.sh ( vllm-project#12206 )
Signed-off-by: youkaichao <[email protected]>
* [CI/Build] Remove dummy CI steps ( vllm-project#12208 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI/Build] Make pre-commit faster ( vllm-project#12212 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model] Upgrade Aria to transformers 4.48 ( vllm-project#12203 )
Signed-off-by: DarkLight1337 <[email protected]>
* [misc] print a message to suggest how to bypass commit hooks ( vllm-project#12217 )
Signed-off-by: youkaichao <[email protected]>
* [core][bugfix] configure env var during import vllm ( vllm-project#12209 )
Signed-off-by: youkaichao <[email protected]>
* [V1] Remove `_get_cache_block_size` ( vllm-project#12214 )
Signed-off-by: Chen Zhang <[email protected]>
* [Misc] Pass `attention` to impl backend ( vllm-project#12218 )
Signed-off-by: wangxiyuan <[email protected]>
* [Bugfix] Fix `HfExampleModels.find_hf_info` ( vllm-project#12223 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI] Pass local python version explicitly to pre-commit mypy.sh ( vllm-project#12224 )
Signed-off-by: Chen Zhang <[email protected]>
* Using ROCm6.3.1 base docker and building hipblas-common ( #366 )
* [Misc] Update CODEOWNERS ( vllm-project#12229 )
* fix: update platform detection for M-series arm based MacBook processors ( vllm-project#12227 )
Signed-off-by: isikhi <[email protected]>
* [misc] add cuda runtime version to usage data ( vllm-project#12190 )
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [bugfix] catch xgrammar unsupported array constraints ( vllm-project#12210 )
Signed-off-by: Jason Cheng <[email protected]>
* [Kernel] optimize moe_align_block_size for cuda graph and large num_experts (e.g. DeepSeek-V3) ( vllm-project#12222 )
Signed-off-by: Jinzhen Lin <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
* Add quantization and guided decoding CODEOWNERS ( vllm-project#12228 )
Signed-off-by: mgoin <[email protected]>
* [AMD][Build] Porting dockerfiles from the ROCm/vllm fork ( vllm-project#11777 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* [BugFix] Fix GGUF tp>1 when vocab_size is not divisible by 64 ( vllm-project#12230 )
Signed-off-by: NickLucche <[email protected]>
* [ci/build] disable failed and flaky tests ( vllm-project#12240 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Rename `MultiModalInputsV2 -> MultiModalInputs` ( vllm-project#12244 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc]Add BNB quantization for PaliGemmaForConditionalGeneration ( vllm-project#12237 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Misc] Remove redundant TypeVar from base model ( vllm-project#12248 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix mm_limits access for merged multi-modal processor ( vllm-project#12252 )
Signed-off-by: DarkLight1337 <[email protected]>
* [torch.compile] transparent compilation with more logging ( vllm-project#12246 )
Signed-off-by: youkaichao <[email protected]>
* [V1][Bugfix] Fix data item ordering in mixed-modality inference ( vllm-project#12259 )
Signed-off-by: Roger Wang <[email protected]>
* Remove pytorch comments for outlines + compressed-tensors ( vllm-project#12260 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Platform] improve platforms getattr ( vllm-project#12264 )
Signed-off-by: Mengqing Cao <[email protected]>
* [ci/build] update nightly torch for gh200 test ( vllm-project#12270 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] fix race condition that leads to wrong order of token returned ( vllm-project#10802 )
Signed-off-by: Jannis Schönleber <[email protected]>
* [Kernel] fix moe_align_block_size error condition ( vllm-project#12239 )
Signed-off-by: Jinzhen Lin <[email protected]>
* [v1][stats][1/n] Add RequestStatsUpdate and RequestStats types ( vllm-project#10907 )
Signed-off-by: rickyx <[email protected]>
* [Bugfix] Multi-sequence broken ( vllm-project#11898 )
Signed-off-by: Andy Lo <[email protected]>
* [Misc] Remove experimental dep from tracing.py ( vllm-project#12007 )
Signed-off-by: Adrian Cole <[email protected]>
* [Misc] Set default backend to SDPA for get_vit_attn_backend ( vllm-project#12235 )
Signed-off-by: wangxiyuan <[email protected]>
* [Core] Free CPU pinned memory on environment cleanup ( vllm-project#10477 )
* Update pre-commit.yml ( #374 )
* Update pre-commit.yml
* Reapplying missing format
* New codespell exclude location
---------
Co-authored-by: Kevin H. Luu <[email protected]>
* [bugfix] moe tuning. rm is_navi() ( vllm-project#12273 )
Signed-off-by: Divakar Verma <[email protected]>
* [BUGFIX] When skip_tokenize_init and multistep are set, execution crashes ( vllm-project#12277 )
Signed-off-by: maleksan85 <[email protected]>
Co-authored-by: maleksan85 <[email protected]>
* [Documentation][AMD] Add information about prebuilt ROCm vLLM docker for perf validation purpose ( vllm-project#12281 )
Signed-off-by: Hongxia Yang <[email protected]>
* [VLM] Simplify post-processing of replacement info ( vllm-project#12269 )
Signed-off-by: DarkLight1337 <[email protected]>
* [ci/lint] Add back default arg for pre-commit ( vllm-project#12279 )
Signed-off-by: kevin <[email protected]>
* [CI] add docker volume prune to neuron CI ( vllm-project#12291 )
Signed-off-by: Liangfu Chen <[email protected]>
* [Ci/Build] Fix mypy errors on main ( vllm-project#12296 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Benchmark] More accurate TPOT calc in `benchmark_serving.py` ( vllm-project#12288 )
Signed-off-by: Nick Hill <[email protected]>
* [core] separate builder init and builder prepare for each batch ( vllm-project#12253 )
Signed-off-by: youkaichao <[email protected]>
* [Build] update requirements of no-device ( vllm-project#12299 )
Signed-off-by: Mengqing Cao <[email protected]>
* [Core] Support fully transparent sleep mode ( vllm-project#11743 )
Signed-off-by: youkaichao <[email protected]>
* [VLM] Avoid unnecessary tokenization ( vllm-project#12310 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model][Bugfix]: correct Aria model output ( vllm-project#12309 )
Signed-off-by: xffxff <[email protected]>
* [Bugfix][VLM] Fix mixed-modality inference backward compatibility for V0 ( vllm-project#12313 )
Signed-off-by: Roger Wang <[email protected]>
* [Doc] Add docs for prompt replacement ( vllm-project#12318 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] Fix the error in the tip for the --lora-modules parameter ( vllm-project#12319 )
Signed-off-by: wangerxiao <[email protected]>
* [Misc] Improve the readability of BNB error messages ( vllm-project#12320 )
Signed-off-by: Jee Jee Li <[email protected]>
* Skip tokenize/detokenize when it is disabled by arg --skip-tokenizer-init ( #367 )
* switching detokenize flag to be False
* detokenize = False for benchmarks
* restoring default in main vllm code for detokenize
* removing extra spaces
* moving detokenize to flag
* adding support for token ids
---------
Co-authored-by: maleksan85 <[email protected]>
* [Bugfix] Fix HPU multiprocessing executor ( vllm-project#12167 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Core] Support `reset_prefix_cache` ( vllm-project#12284 )
* [Frontend][V1] Online serving performance improvements ( vllm-project#12287 )
* [AMD][Quantization] Add TritonScaledMMLinearKernel since int8 is broken for AMD ( vllm-project#12282 )
Signed-off-by: Randall Smith <[email protected]>
* FP8 FA fixes ( #381 )
* FP8 FA fixes
Summary:
Add missing clamp and fix reciprocal scale computation.
* linter
* Returning the use of the proper stream in allreduce ( #382 )
* [Bugfix] Fixing AMD LoRA CI test. ( vllm-project#12329 )
Signed-off-by: Alexei V. Ivanov <[email protected]>
* [Docs] Update FP8 KV Cache documentation ( vllm-project#12238 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Docs] Document vulnerability disclosure process ( vllm-project#12326 )
Signed-off-by: Russell Bryant <[email protected]>
* [V1] Add `uncache_blocks` ( vllm-project#12333 )
* [doc] explain common errors around torch.compile ( vllm-project#12340 )
Signed-off-by: youkaichao <[email protected]>
* [Hardware][Gaudi][BugFix] Fix dataclass error due to triton package update ( vllm-project#12338 )
Signed-off-by: zhenwei <[email protected]>
* [Bugfix] Fix k_proj's bias for whisper self attention ( vllm-project#12342 )
Signed-off-by: Isotr0py <[email protected]>
* [Kernel] Flash Attention 3 Support ( vllm-project#12093 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Doc] Troubleshooting errors during model inspection ( vllm-project#12351 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] Simplify M-RoPE ( vllm-project#12352 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: imkero <[email protected]>
* [Bugfix] Fix broken internvl2 inference with v1 ( vllm-project#12360 )
Signed-off-by: Isotr0py <[email protected]>
* [core] add wake_up doc and some sanity check ( vllm-project#12361 )
Signed-off-by: youkaichao <[email protected]>
* [torch.compile] decouple compile sizes and cudagraph sizes ( vllm-project#12243 )
Signed-off-by: youkaichao <[email protected]>
* [FP8][Kernel] Dynamic kv cache scaling factors computation ( vllm-project#11906 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>
* [TPU] Update TPU CI to use torchxla nightly on 20250122 ( vllm-project#12334 )
Signed-off-by: Siyuan Liu <[email protected]>
* [Docs] Document Phi-4 support ( vllm-project#12362 )
Signed-off-by: Isotr0py <[email protected]>
* [BugFix] Fix parameter names and `process_after_weight_loading` for W4A16 MoE Group Act Order ( vllm-project#11528 )
Signed-off-by: ElizaWszola <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Misc] Fix OpenAI API Compatibility Issues in Benchmark Script ( vllm-project#12357 )
Signed-off-by: Junichi Sato <[email protected]>
* [Docs] Add meetup slides ( vllm-project#12345 )
Signed-off-by: Woosuk Kwon <[email protected]>
* Using pytorch commit past the point when rowwise PR ( pytorch/pytorch#144432 ) was merged ( #384 )
* Integrated ater: kvcache pa gemm rmsnorm
* fix pa
* fix
* replace topk softmax
* [Docs] Update spec decode + structured output in compat matrix ( vllm-project#12373 )
Signed-off-by: Russell Bryant <[email protected]>
* replace fp moe kernel with aiter kernel
* [V1][Frontend] Coalesce bunched `RequestOutput`s ( vllm-project#12298 )
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
* Set weights_only=True when using torch.load() ( vllm-project#12366 )
Signed-off-by: Russell Bryant <[email protected]>
* [Bugfix] Path join when building local path for S3 clone ( vllm-project#12353 )
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
* change ater to aiter
* Update compressed-tensors version ( vllm-project#12367 )
* [V1] Increase default batch size for H100/H200 ( vllm-project#12369 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [perf] fix perf regression from vllm-project#12253 ( vllm-project#12380 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Use VisionArena Dataset for VLM Benchmarking ( vllm-project#12389 )
Signed-off-by: Roger Wang <[email protected]>
* [ci/build] fix wheel size check ( vllm-project#12396 )
Signed-off-by: youkaichao <[email protected]>
* [Hardware][Gaudi][Doc] Add missing step in setup instructions ( vllm-project#12382 )
* [ci/build] sync default value for wheel size ( vllm-project#12398 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Enable proxy support in benchmark script ( vllm-project#12356 )
Signed-off-by: Junichi Sato <[email protected]>
* [Bugfix][Kernel] Fix CUDA 11.8 being broken by FA3 build ( vllm-project#12375 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Applying scales rename to fp8 config
* Applying scales rename to fp8 config ( #387 )
* Update Dockerfile.rocm
* [Misc] Remove deprecated code ( vllm-project#12383 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix][Kernel] FA3 Fix - RuntimeError: This flash attention build only supports pack_gqa (for build size reasons). ( vllm-project#12405 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Using aiter moe kernel
* Dev-docker Documentation Updates ( #378 )
* Dev-docker Documentation Updates
Minor updates to several sections, with links to other documents where appropriate.
* Fix formatting of GEMM filename
* README cleanup
- Reorder some sections of the README to make them easier to follow
- Improve formatting of bash commands
- Prefer use of huggingface model names instead of hard-coded directories
- Clean up wording
* Expanded sample commands for Latency and Throughput
* Fix markdown links
* Fix pre-commit errors
* Updates from review
Initial updates to incorporate feedback from a review session held with @t-parry
* Update script args to match current recommendations
* Remove recommended max-num-seqs values for now
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
* [Bugfix][Kernel] Fix moe align block issue for mixtral ( vllm-project#12413 )
* [Bugfix] Fix BLIP-2 processing ( vllm-project#12412 )
Signed-off-by: DarkLight1337 <[email protected]>
* [ROCm][MoE] MI300 tuned configs Mixtral-8x(7B,22B) | fp16, fp8 ( vllm-project#12408 )
Signed-off-by: Divakar Verma <[email protected]>
* [Misc] Add FA2 support to ViT MHA layer ( vllm-project#12355 )
Signed-off-by: Isotr0py <[email protected]>
* [TPU][CI] Update torchxla version in requirement-tpu.txt ( vllm-project#12422 )
Signed-off-by: Siyuan Liu <[email protected]>
* [Misc][Bugfix] FA3 support to ViT MHA layer ( vllm-project#12435 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [V1][Perf] Reduce scheduling overhead in model runner after cuda sync ( vllm-project#12094 )
Signed-off-by: Keyun Tong <[email protected]>
* [V1][Bugfix] Fix assertion when mm hashing is turned off ( vllm-project#12439 )
Signed-off-by: Roger Wang <[email protected]>
* fix pa copy
* pa update
* [Misc] Revert FA on ViT vllm-project#12355 and vllm-project#12435 ( vllm-project#12445 )
* [Frontend] generation_config.json for maximum tokens( vllm-project#12242 )
Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
* [Bugfix] Disable w16a16 2of4 sparse CompressedTensors24 ( vllm-project#12417 )
Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: mgoin <[email protected]>
* add fp16 pa support for aiter
* [Bugfix/CI] Fix broken kernels/test_mha.py ( vllm-project#12450 )
* [Bugfix][Kernel] Fix perf regression caused by PR vllm-project#12405 ( vllm-project#12434 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Build/CI] Fix libcuda.so linkage ( vllm-project#12424 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [Frontend] Rerank API (Jina- and Cohere-compatible API) ( vllm-project#12376 )
Signed-off-by: Kyle Mistele <[email protected]>
* [DOC] Add link to vLLM blog ( vllm-project#12460 )
Signed-off-by: Yuan Tang <[email protected]>
* [V1] Avoid list creation in input preparation ( vllm-project#12457 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Frontend] Support scores endpoint in run_batch ( vllm-project#12430 )
Signed-off-by: Pooya Davoodi <[email protected]>
* [Bugfix] Fix Granite 3.0 MoE model loading ( vllm-project#12446 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix missing seq_start_loc in xformers prefill metadata ( vllm-project#12464 )
Signed-off-by: Isotr0py <[email protected]>
* [V1][Minor] Minor optimizations for update_from_output ( vllm-project#12454 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Bugfix] Fix gpt2 GGUF inference ( vllm-project#12467 )
Signed-off-by: Isotr0py <[email protected]>
* aiter build instructions
* [Build] Only build 9.0a for scaled_mm and sparse kernels ( vllm-project#12339 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Copy to the right path
* [V1][Metrics] Add initial Prometheus logger ( vllm-project#12416 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [V1][CI/Test] Do basic test for top-p & top-k sampling ( vllm-project#12469 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [FlashInfer] Upgrade to 0.2.0 ( vllm-project#11194 )
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: youkaichao <[email protected]>
* Support FP8 FA from Quark format ( #388 )
* Support FP8 FA from Quark format
* Support FP8 FA from Quark format
* nit: update comment
* Direct call on ROCm
* 20250127 docs update ( #392 )
* updating code blocks
* typo
* updated manifest
* Including feedback
* whitespace
* Deepseek instructions
* hyperlink fix
* hyperlink fix
* updating what is new
* cpx update
* typo
* whitespace
* whitespace
* Add env var toggles to disable AITER MoE or PA (both by default on)
* Update accuracy benchmark for batch size > 1
* Add a few more AITER toggles for norm and linear layers
* Faster Custom Paged Attention kernels ( #372 )
* integrate new cpa kernel, update tests and benchmark
* added comments to mfma4 kernel
* further comments for mfma16 kernel
* clang-format
* Lint
* add flag for logits rtz conversion and disable by default
* lint
* [Bugfix]: Fix paged attention unit tests of #372 ( #389 )
* [Bugfix]: fix paged attention tests based on the updated kernels in `csrc/attention/paged_attention_v1.cu`,`csrc/attention/paged_attention_v2.cu` and `csrc/rocm/attention.cu`.
* improve code documentation.
* lint
---------
Co-authored-by: vllmellm <[email protected]>
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: TJian <[email protected]>
Co-authored-by: vllmellm <[email protected]>
* Using a more precise profiling on ROCm to properly account for weights padding ( #394 )
* Public aiter repo
* Fail if aiter build failed silently
* Aiter can only be built on MI300x
* Typo fix
* Aiter PA off by default
* Changes to support updated aiter FP8 PA
* Support FP8 and INT8 KV cache according to ROCm/aiter#90
* add moe weight shuffle for dynamic quant and unquantized path
Signed-off-by: charlifu <[email protected]>
* Use FP16-native PA after support in ROCm/aiter#97
* Fix: Use FP8 pertoken quantize if KV cache dtype is FP8
* revert rocm_flash_attn.py line 883
* Don't enable by default to use an RC for main vllm-dev docker
* use ck moe for bf16 and fp16 fused_moe
* Merge remote-tracking branch 'origin/aiter_intergration_final' into merge-aiter-llama-fp8
Signed-off-by: vllmellm <[email protected]>
* [Bugfix] include moe shuffle env variable
Signed-off-by: vllmellm <[email protected]>
---------
Signed-off-by: tjtanaa <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: yisheng <[email protected]>
Signed-off-by: Abatom <[email protected]>
Signed-off-by: Liangfu Chen <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Yuan Zhou <[email protected]>
Signed-off-by: Sourashis Roy <[email protected]>
Signed-off-by: Nishidha Panpaliya <[email protected]>
Signed-off-by: Ilya Lavrenov <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: Wallas Santos <[email protected]>
Signed-off-by: jiang1.li <[email protected]>
Signed-off-by: yan ma <[email protected]>
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: Maxime Fournioux <[email protected]>
Signed-off-by: Ye Qi <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: Kunshang Ji <[email protected]>
Signed-off-by: Kuntai Du <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Ren MinMin <[email protected]>
Signed-off-by: Travis Johnson <[email protected]>
Signed-off-by: Fred Reiss <[email protected]>
Signed-off-by: Sungjae Lee <[email protected]>
Signed-off-by: shaochangxu.scx <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: Rafael Vasquez <[email protected]>
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Yida Wu <[email protected]>
Signed-off-by: Chenguang Li <[email protected]>
Signed-off-by: Alex-Brooks <[email protected]>
Signed-off-by: Shanshan Shen <[email protected]>
Signed-off-by: elijah <[email protected]>
Signed-off-by: Konrad Zawora <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: Kyle Sayers <[email protected]>
Signed-off-by: Rahul Tuli <[email protected]>
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Divakar Verma <[email protected]>
Signed-off-by: Gregory Shtrasberg <[email protected]>
Signed-off-by: hongxyan <[email protected]>
Signed-off-by: Michal Adamczyk <[email protected]>
Signed-off-by: zibai <[email protected]>
Signed-off-by: Martin Gleize <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: isikhi <[email protected]>
Signed-off-by: Jason Cheng <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]>
Signed-off-by: Thomas Parnell <[email protected]>
Signed-off-by: Jannis Schönleber <[email protected]>
Signed-off-by: rickyx <[email protected]>
Signed-off-by: Andy Lo <[email protected]>
Signed-off-by: Adrian Cole <[email protected]>
Signed-off-by: maleksan85 <[email protected]>
Signed-off-by: Hongxia Yang <[email protected]>
Signed-off-by: kevin <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: xffxff <[email protected]>
Signed-off-by: wangerxiao <[email protected]>
Signed-off-by: Alexei V. Ivanov <[email protected]>
Signed-off-by: zhenwei <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Siyuan Liu <[email protected]>
Signed-off-by: ElizaWszola <[email protected]>
Signed-off-by: Junichi Sato <[email protected]>
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
Signed-off-by: Keyun Tong <[email protected]>
Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Kyle Mistele <[email protected]>
Signed-off-by: Pooya Davoodi <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: charlifu <[email protected]>
Signed-off-by: vllmellm <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: maang-h <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: YiSheng5 <[email protected]>
Co-authored-by: Zhonghua Deng <[email protected]>
Co-authored-by: Liangfu Chen <[email protected]>
Co-authored-by: XiaobingZhang <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Yuan <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: jiangjiadi <[email protected]>
Co-authored-by: jiadi.jjd <[email protected]>
Co-authored-by: sroy745 <[email protected]>
Co-authored-by: Jie Fu (傅杰) <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: WangErXiao <[email protected]>
Co-authored-by: Nishidha <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Wallas Henrique <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: Yan Ma <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: rasmith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: Maxime Fournioux <[email protected]>
Co-authored-by: Guspan Tanadi <[email protected]>
Co-authored-by: Ye (Charlotte) Qi <[email protected]>
Co-authored-by: yeq <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
Co-authored-by: Charles Frye <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: cennn <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: minmin <[email protected]>
Co-authored-by: Ren MinMin <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Fred Reiss <[email protected]>
Co-authored-by: Sungjae Lee <[email protected]>
Co-authored-by: shaochangxu <[email protected]>
Co-authored-by: shaochangxu.scx <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: sixgod <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Rafael Vasquez <[email protected]>
Co-authored-by: Akshat Tripathi <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Avshalom Manevich <[email protected]>
Co-authored-by: Yangcheng Li <[email protected]>
Co-authored-by: Siyuan Li <[email protected]>
Co-authored-by: Concurrensee <[email protected]>
Co-authored-by: Chenguang Li <[email protected]>
Co-authored-by: Alex Brooks <[email protected]>
Co-authored-by: Shanshan Shen <[email protected]>
Co-authored-by: elijah <[email protected]>
Co-authored-by: Konrad Zawora <[email protected]>
Co-authored-by: Elfie Guo <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Kyle Sayers <[email protected]>
Co-authored-by: Rahul Tuli <[email protected]>
Co-authored-by: Keyun Tong <[email protected]>
Co-authored-by: RunningLeon <[email protected]>
Co-authored-by: kewang-xlnx <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: tvirolai-amd <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Zhaoyi Li <[email protected]>
Co-authored-by: charlifu <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Hongxia Yang <[email protected]>
Co-authored-by: yancong <[email protected]>
Co-authored-by: Michal Adamczyk <[email protected]>
Co-authored-by: gujing <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: Işık <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Cheng Kuan Yong Jason <[email protected]>
Co-authored-by: Jinzhen Lin <[email protected]>
Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: Jannis Schönleber <[email protected]>
Co-authored-by: Ricky Xu <[email protected]>
Co-authored-by: Andy Lo <[email protected]>
Co-authored-by: Adrian Cole <[email protected]>
Co-authored-by: Jani Monoses <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: maleksan85 <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: zhou fan <[email protected]>
Co-authored-by: ilia-cher <[email protected]>
Co-authored-by: Alexei-V-Ivanov-AMD <[email protected]>
Co-authored-by: liuzhenwei <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>
Co-authored-by: Siyuan Liu <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Junichi Sato <[email protected]>
Co-authored-by: amd-ruitang3 <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: omer-dayan <[email protected]>
Co-authored-by: Mohit Deopujari <[email protected]>
Co-authored-by: Jeremy Arnold <[email protected]>
Co-authored-by: chenjun <[email protected]>
Co-authored-by: ValarLip <[email protected]>
Co-authored-by: Matthew Hendrey <[email protected]>
Co-authored-by: Kyle Mistele <[email protected]>
Co-authored-by: Pooya Davoodi <[email protected]>
Co-authored-by: Mark McLoughlin <[email protected]>
Co-authored-by: Bowen Wang <[email protected]>
Co-authored-by: Bowen Bao <[email protected]>
Co-authored-by: arakowsk-amd <[email protected]>
Co-authored-by: Matthew Wong <[email protected]>
Co-authored-by: sanyalington <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: vllmellm <[email protected]>
Co-authored-by: charlifu <[email protected]> mzusman pushed a commit
to mzusman/vllm
that referenced
this pull request Mar 12, 2025 [perf] fix perf regression from vllm-project#12253 ( vllm-project#12380 ) … 527f2b8 Signed-off-by: youkaichao <[email protected]> Sign up for free to join this conversation on GitHub .
Already have an account? Sign in to comment
|
2025-09-07 17:46:57
|
aea94362c9bdd08ed2b346701bdc09d278e85f66
|
https://github.com/vllm-project/vllm/pull/12287
| true | true | true | true |
LM_EVAL: lm-eval, lm_eval, gsm8k | PERF: TTFT, TTFT, TTFT | SERVING: vllm serve, vllm serve, Serving | TEST: test, test, test
|
Member njhill commented Jan 21, 2025 (edited by github-actions bot): These help in particular with TTFT, ITL variance, and overall throughput.
- Break up output processing (detokenization) to avoid blocking the event loop for too long
- Freeze the heap after startup to reduce GC overhead/pauses (a minimal sketch of the freeze call sequence follows the benchmark results below)
- Optimize a couple of CPU hotspots seen during profiling
Benchmark on A100:
VLLM_USE_V1=1 vllm serve meta-llama/Llama-3.2-1B-Instruct --disable-log-requests --port 8001 --max-num-batched-tokens 8192 --no-enable-prefix-caching --uvicorn-log-level=error
python benchmarks/benchmark_serving.py \
--backend vllm \
--model meta-llama/Llama-3.2-1B-Instruct \
--dataset-name sharegpt \
--dataset-path ShareGPT_V3_unfiltered_cleaned_split.json \
--ignore-eos \
--port 8001 \
--save-result \
--result-dir results \
--result-filename test.json \
--num-prompts 6000 \
--request-rate inf \
    --max-concurrency=400
Before:
============ Serving Benchmark Result ============
Successful requests: 6000
Benchmark duration (s): 94.31
Total input tokens: 1350511
Total generated tokens: 1211959
Request throughput (req/s): 63.62
Output token throughput (tok/s): 12850.45
Total Token throughput (tok/s): 27169.98
---------------Time to First Token----------------
Mean TTFT (ms): 229.23
Median TTFT (ms): 158.08
P99 TTFT (ms): 1050.70
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 30.02
Median TPOT (ms): 29.64
P99 TPOT (ms): 68.90
---------------Inter-token Latency----------------
Mean ITL (ms): 28.77
Median ITL (ms): 23.19
P99 ITL (ms): 386.30
==================================================
After:
============ Serving Benchmark Result ============
Successful requests: 6000
Benchmark duration (s): 88.60
Total input tokens: 1350511
Total generated tokens: 1211959
Request throughput (req/s): 67.72
Output token throughput (tok/s): 13679.34
Total Token throughput (tok/s): 28922.50
---------------Time to First Token----------------
Mean TTFT (ms): 197.34
Median TTFT (ms): 168.03
P99 TTFT (ms): 1059.55
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 28.30
Median TPOT (ms): 27.75
P99 TPOT (ms): 47.38
---------------Inter-token Latency----------------
Mean ITL (ms): 26.64
Median ITL (ms): 24.38
P99 ITL (ms): 65.19
==================================================
❤️ 9 jeejeelee, comaniac, simon-mo, WoosukKwon, ywang96, robertgshaw2-redhat, mgoin, drikster80, and nickandbro reacted with heart emoji 🚀 1 tlrmchlsmth reacted with rocket emoji
njhill requested review from WoosukKwon, robertgshaw2-redhat, ywang96, comaniac and alexm-redhat as code owners January 21, 2025 23:38
github-actions bot commented Jan 21, 2025: 👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these: add the ready label to the PR, or enable auto-merge. 🚀
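To make the heap-freeze item from the PR description concrete, here is a minimal sketch of the call sequence; the same two calls appear in the api_server.py hunk reviewed further down, but the wrapper function name and the idea of calling it from a startup hook are illustrative assumptions rather than vLLM's exact code.

import gc

def freeze_startup_heap() -> None:
    # Collect whatever garbage was produced while importing modules and
    # constructing the engine, then mark the surviving objects as permanent
    # so the cyclic GC's oldest generation no longer scans them. This
    # reduces GC pause times; nothing is ever unfrozen, since these objects
    # live for the lifetime of the process.
    gc.collect()
    gc.freeze()

Calling this once, right after server startup, matches the "mostly static stuff that will be around for the lifetime of the process" reasoning given in the review thread below.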
mergify bot added the frontend label Jan 21, 2025 [Frontend][V1] Online serving performance improvements … 55dd119 These help in particular with TTFT and ITL variance. Overall throughput doesn't change much.
- Break up output processing (detokenization) to avoid blocking the event loop for too long
- Freeze the heap after startup to reduce GC overhead/pauses
- Optimize a couple of CPU hotspots seen during profiling
Signed-off-by: Nick Hill <[email protected]> njhill force-pushed the v1-perf-smoothing branch
from cfc5705 to 55dd119 on January 21, 2025 23:39
njhill commented Jan 22, 2025 on vllm/entrypoints/openai/protocol.py:
@@ -42,23 +42,31 @@ class OpenAIBaseModel(BaseModel): # OpenAI API does allow extra fields model_config = ConfigDict(extra="allow") # Cache class field names field_names: ClassVar[Optional[Set[str]]] = None
njhill: There was noticeable overhead creating this set every time one of these objects is instantiated. 👍 3 mgoin, DarkLight1337, and ywang96 reacted with thumbs up emoji
and on vllm/v1/request.py: def output_token_ids ( self ) -> ConstantList [ int ]: # Prevent directly appending to the output_token_ids since # all_token_ids should also be updated simultaneously. return ConstantList ( self . _output_token_ids )
njhill: Avoid constructing these objects every time the properties are accessed. 👍 2 WoosukKwon and DarkLight1337 reacted with thumbs up emoji
WoosukKwon: Nice catch!
mgoin: I actually thought properties were cached after the first call, nice call
DarkLight1337: That would involve the use of cached_property . 👍 2 mgoin and njhill reacted with thumbs up emoji
(A hypothetical sketch of the field-name caching and property patterns discussed here follows below.)
Parallelize output socket IO on client side … 0e92b61 Signed-off-by: Nick Hill <[email protected]>
robertgshaw2-redhat commented Jan 22, 2025: Wow, the impact on P99 ITL is crazy. 🚀 1 mgoin reacted with rocket emoji
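As referenced above, a minimal, hypothetical sketch of the two patterns being discussed: caching per-class field names in a ClassVar so they are not rebuilt on every instantiation, and the difference between a plain @property and functools.cached_property. The class names, the get_field_names helper, and the pydantic-v2 model_fields access are illustrative assumptions, not vLLM's actual code.

from functools import cached_property
from typing import ClassVar, Optional, Set

from pydantic import BaseModel, ConfigDict

class ExampleRequestModel(BaseModel):
    model_config = ConfigDict(extra="allow")

    # Computed once per *class*, so instantiating many request objects does
    # not rebuild the same set of field names each time.
    field_names: ClassVar[Optional[Set[str]]] = None

    @classmethod
    def get_field_names(cls) -> Set[str]:
        if cls.field_names is None:
            cls.field_names = set(cls.model_fields.keys())
        return cls.field_names

class ExampleRequestState:
    def __init__(self) -> None:
        self._output_token_ids: list = []

    @property
    def output_token_ids(self):
        # A plain @property runs on every access, so hot paths should avoid
        # constructing new wrapper objects here.
        return tuple(self._output_token_ids)

    @cached_property
    def frozen_view(self):
        # cached_property evaluates once and stores the result on the
        # instance; only appropriate when the underlying data never changes
        # after the first access.
        return tuple(self._output_token_ids)

The thread settles on the first two patterns; cached_property is only mentioned as the mechanism that would be needed for true once-per-instance caching.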
robertgshaw2-redhat reviewed Jan 22, 2025 on vllm/entrypoints/openai/api_server.py:
# Mark the startup heap as static so that it's ignored by GC. # Reduces pause times of oldest generation collections. gc.collect() gc.freeze()
robertgshaw2-redhat: Do we need to call unfreeze at some point?
njhill: No, this is mostly static stuff that will be around for the lifetime of the process anyhow. https://www.rippling.com/blog/the-garbage-collector-fights-back
njhill commented Jan 22, 2025: Combining with #12298 and increasing the max output processing chunk size to 256 gets higher throughput at the cost of slightly more latency variance. Since the benchmark I've been running is 400 concurrent requests, the 256 chunk size essentially just means those will be split into two chunks of ~400. If I disable the chunking completely, the throughput increases to 80 req/sec (with the coalescing), but the inter-response latencies become larger and more uneven.
============ Serving Benchmark Result ============
Successful requests: 6000
Benchmark duration (s): 84.70
Total input tokens: 1350511
Total generated tokens: 1211959
Request throughput (req/s): 70.84
Output token throughput (tok/s): 14308.94
Total Token throughput (tok/s): 30253.69
---------------Time to First Token----------------
Mean TTFT (ms): 198.28
Median TTFT (ms): 166.40
P99 TTFT (ms): 1128.75
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 26.76
Median TPOT (ms): 26.05
P99 TPOT (ms): 50.04
---------------Inter-token Latency----------------
Mean ITL (ms): 29.41
Median ITL (ms): 26.83
P99 ITL (ms): 75.34
==================================================
njhill commented Jan 22, 2025: It would probably be good to also make OUTPUT_PROCESSING_CHUNK_SIZE overridable via an env var. 👍 2 mgoin and ywang96 reacted with thumbs up emoji (a hypothetical sketch of such an env-overridable chunking helper follows the lm-eval results below)
mgoin reviewed Jan 22, 2025 on vllm/v1/engine/output_processor.py
ywang96 reviewed Jan 22, 2025 on vllm/v1/engine/async_llm.py
njhill added 2 commits January 22, 2025 08:56: Make max processing chunk size overridable, fix linting … aa7f031 Signed-off-by: Nick Hill <[email protected]>; Merge remote-tracking branch 'refs/remotes/origin/main' into v1-perf-… … e6fc61f …smoothing
mgoin approved these changes Jan 22, 2025 (edited): LGTM! I ran an lm-eval test with gsm8k as a smoke test and got the same result as v0
VLLM_USE_V1=1 vllm serve meta-llama/Llama-3.1-8B-Instruct --disable-log-requests --port 8000 --max-num-batched-tokens 8192 --no-enable-prefix-caching
lm_eval --model local-completions --model_args model=meta-llama/Llama-3.1-8B-Instruct,base_url=http://0.0.0.0:8000/v1/completions,num_concurrent=50,tokenized_requests=False --tasks gsm8k --num_fewshot 5
local-completions (model=meta-llama/Llama-3.1-8B-Instruct,base_url=http://0.0.0.0:8000/v1/completions,num_concurrent=50,tokenized_requests=False), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: 1
|Tasks|Version| Filter |n-shot| Metric | |Value | |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.7718|± |0.0116|
| | |strict-match | 5|exact_match|↑ |0.6983|± |0.0126|
❤️ 1 WoosukKwon reacted with heart emoji
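As referenced above, a hypothetical sketch of an env-overridable output-processing chunk size. The env var name VLLM_OUTPUT_PROCESSING_CHUNK_SIZE, the default of 128, and the chunk_outputs helper are illustrative assumptions and not vLLM's actual implementation; they only show the shape of the change discussed in the thread.

import os
from typing import Iterator, List, Sequence, TypeVar

T = TypeVar("T")

# Assumed env var name and default; the real constant lives in the engine's
# output-processing code and may differ.
OUTPUT_PROCESSING_CHUNK_SIZE = int(
    os.getenv("VLLM_OUTPUT_PROCESSING_CHUNK_SIZE", "128"))

def chunk_outputs(outputs: Sequence[T],
                  chunk_size: int = OUTPUT_PROCESSING_CHUNK_SIZE) -> Iterator[List[T]]:
    # Yield fixed-size slices so detokenizing a large batch of outputs can be
    # interleaved with other asyncio work instead of blocking the event loop
    # in one long call; between chunks the caller yields control, e.g. with
    # await asyncio.sleep(0).
    for start in range(0, len(outputs), chunk_size):
        yield list(outputs[start:start + chunk_size])

With 400 concurrent requests and a chunk size of 256, as in the experiment above, each step's outputs end up split into roughly two chunks, which matches the "two chunks" observation in the thread.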
mgoin added the ready (ONLY add when PR is ready to merge/full CI is needed) label Jan 22, 2025
mergify bot commented Jan 22, 2025: This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @njhill . https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
mergify bot added
the needs-rebase label Jan 22, 2025 Merge remote-tracking branch 'origin/main' into v1-perf-smoothing … eafe7cb # Conflicts:
# vllm/envs.py mergify bot removed
the needs-rebase label Jan 22, 2025 mgoin enabled auto-merge (squash) January 22, 2025 22:18 mgoin merged commit aea9436 into vllm-project : main Jan 22, 2025 51 checks passed njhill deleted the v1-perf-smoothing branch January 22, 2025 23:34 tjtanaa pushed a commit
to EmbeddedLLM/vllm
that referenced
this pull request Jan 28, 2025 [Frontend][V1] Online serving performance improvements ( vllm-project#… … d57c673 …12287 ) rasmith pushed a commit
to rasmith/vllm
that referenced
this pull request Jan 30, 2025 [Frontend][V1] Online serving performance improvements ( vllm-project#… … f9304d2 …12287 ) Isotr0py pushed a commit
to Isotr0py/vllm
that referenced
this pull request Feb 2, 2025 [Frontend][V1] Online serving performance improvements ( vllm-project#… … 1f63490 …12287 )
Signed-off-by: Isotr0py <[email protected]> hongxiayang added a commit
to ROCm/vllm
that referenced
this pull request Feb 3, 2025 [MFM-2025-02-03] Merge Main to llama fp8; With Faster ROCm Paged Atte… … 479b843 …ntion ( #399 )
* [V1] Avoid sending text prompt to core engine ( vllm-project#11963 )
Signed-off-by: Roger Wang <[email protected]>
* [CI/Build] Add markdown linter ( vllm-project#11857 )
Signed-off-by: Rafael Vasquez <[email protected]>
* [Model] Initialize support for Deepseek-VL2 models ( vllm-project#11578 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Hardware][CPU] Multi-LoRA implementation for the CPU backend ( vllm-project#11100 )
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [Hardware][TPU] workaround fix for MoE on TPU ( vllm-project#11764 )
* [V1][Core][1/n] Logging and Metrics ( vllm-project#11962 )
Signed-off-by: [email protected] <[email protected]>
* [Model] Support GGUF models newly added in `transformers` 4.46.0 ( vllm-project#9685 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [V1] [2/n] Logging and Metrics - `OutputProcessor` Abstraction ( vllm-project#11973 )
Signed-off-by: [email protected] <[email protected]>
* [MISC] fix typo in kv transfer send recv test ( vllm-project#11983 )
* [Bug] Fix usage of `.transpose()` and `.view()` consecutively. ( vllm-project#11979 )
* [CI][Spec Decode] fix: broken test for EAGLE model ( vllm-project#11972 )
Signed-off-by: Sungjae Lee <[email protected]>
* [Misc] Fix Deepseek V2 fp8 kv-scale remapping ( vllm-project#11947 )
Signed-off-by: Yida Wu <[email protected]>
* [Misc]Minor Changes about Worker ( vllm-project#11555 )
Signed-off-by: Chenguang Li <[email protected]>
* [platform] add ray_device_key ( vllm-project#11948 )
Signed-off-by: youkaichao <[email protected]>
* Fix Max Token ID for Qwen-VL-Chat ( vllm-project#11980 )
Signed-off-by: Alex-Brooks <[email protected]>
* [Kernel] unified_attention for Attention.forward ( vllm-project#11967 )
Signed-off-by: Chen Zhang <[email protected]>
* [Doc][V1] Update model implementation guide for V1 support ( vllm-project#11998 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Doc] Organise installation documentation into categories and tabs ( vllm-project#11935 )
Signed-off-by: Harry Mellor <[email protected]>
* [platform] add device_control env var ( vllm-project#12009 )
Signed-off-by: youkaichao <[email protected]>
* [Platform] Move get_punica_wrapper() function to Platform ( vllm-project#11516 )
Signed-off-by: Shanshan Shen <[email protected]>
* bugfix: Fix signature mismatch in benchmark's `get_tokenizer` function ( vllm-project#11982 )
Signed-off-by: elijah <[email protected]>
* [Doc] Fix build from source and installation link in README.md ( vllm-project#12013 )
Signed-off-by: Yikun <[email protected]>
* Using list
* [Bugfix] Fix deepseekv3 gate bias error ( vllm-project#12002 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]>
* Revert "[misc] improve memory profiling ( vllm-project#11809 )"
This reverts commit 889e662 .
* Multi-lingual P3L ( #356 )
* Commiting the *multilingual* P3L test.
* Created a *multi-lingual* P3L test.
* Making ruff happy.
* .
* Added a reference to the language-scripture Confluence table.
* Typo fixing.
* Harmonizing naming.
* Fixing comments in the header.
---------
Co-authored-by: Alexei V. Ivanov <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
* Trying to make scales work with compileable attention
* [Docs] Add Sky Computing Lab to project intro ( vllm-project#12019 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [HPU][Bugfix] set_forward_context and CI test execution ( vllm-project#12014 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Doc] Update Quantization Hardware Support Documentation ( vllm-project#12025 )
Signed-off-by: tjtanaa <[email protected]>
Co-authored-by: tjtanaa <[email protected]>
* [HPU][misc] add comments for explanation ( vllm-project#12034 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix various bugs in multi-modal processor ( vllm-project#12031 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Kernel] Revert the API change of Attention.forward ( vllm-project#12038 )
Signed-off-by: Chen Zhang <[email protected]>
* [Platform] Add output for Attention Backend ( vllm-project#11981 )
Signed-off-by: wangxiyuan <[email protected]>
* [Bugfix][Kernel] Give unique name to BlockSparseFlashAttention ( vllm-project#12040 )
Signed-off-by: Chen Zhang <[email protected]>
* Explain where the engine args go when using Docker ( vllm-project#12041 )
Signed-off-by: Harry Mellor <[email protected]>
* Docs lint
* [Doc]: Update the Json Example of the `Engine Arguments` document ( vllm-project#12045 )
* [Misc] Merge bitsandbytes_stacked_params_mapping and packed_modules_mapping ( vllm-project#11924 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Kernel] Support MulAndSilu ( vllm-project#11624 )
Signed-off-by: Jee Jee Li <[email protected]>
* [HPU][Bugfix] Don't use /dev/accel/accel0 for HPU autodetection in setup.py ( vllm-project#12046 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Platform] move current_memory_usage() into platform ( vllm-project#11369 )
Signed-off-by: Shanshan Shen <[email protected]>
* [V1][BugFix] Fix edge case in VLM scheduling ( vllm-project#12065 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Misc] Add multipstep chunked-prefill support for FlashInfer ( vllm-project#10467 )
* [core] Turn off GPU communication overlap for Ray executor ( vllm-project#12051 )
Signed-off-by: Rui Qiao <[email protected]>
* [core] platform agnostic executor via collective_rpc ( vllm-project#11256 )
Signed-off-by: youkaichao <[email protected]>
* [Doc] Update examples to remove SparseAutoModelForCausalLM ( vllm-project#12062 )
Signed-off-by: Kyle Sayers <[email protected]>
* [V1][Prefix Cache] Move the logic of num_computed_tokens into KVCacheManager ( vllm-project#12003 )
* Fix: cases with empty sparsity config ( vllm-project#12057 )
Signed-off-by: Rahul Tuli <[email protected]>
* Type-fix: make execute_model output type optional ( vllm-project#12020 )
* [Platform] Do not raise error if _Backend is not found ( vllm-project#12023 )
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
* [Model]: Support internlm3 ( vllm-project#12037 )
* Misc: allow to use proxy in `HTTPConnection` ( vllm-project#12042 )
Signed-off-by: Yuan Zhou <[email protected]>
* [Misc][Quark] Upstream Quark format to VLLM ( vllm-project#10765 )
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Doc]: Update `OpenAI-Compatible Server` documents ( vllm-project#12082 )
* [Bugfix] use right truncation for non-generative tasks ( vllm-project#12050 )
Signed-off-by: Joe Runde <[email protected]>
* [V1][Core] Autotune encoder cache budget ( vllm-project#11895 )
Signed-off-by: Roger Wang <[email protected]>
* [Bugfix] Fix _get_lora_device for HQQ marlin ( vllm-project#12090 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* Allow hip sources to be directly included when compiling for rocm. ( vllm-project#12087 )
* [Core] Default to using per_token quantization for fp8 when cutlass is supported. ( vllm-project#8651 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: mgoin <[email protected]>
* [Doc] Add documentation for specifying model architecture ( vllm-project#12105 )
* Various cosmetic/comment fixes ( vllm-project#12089 )
Signed-off-by: mgoin <[email protected]>
* [Bugfix] Remove hardcoded `head_size=256` for Deepseek v2 and v3 ( vllm-project#12067 )
Signed-off-by: Isotr0py <[email protected]>
* Support torchrun and SPMD-style offline inference ( vllm-project#12071 )
Signed-off-by: youkaichao <[email protected]>
* [core] LLM.collective_rpc interface and RLHF example ( vllm-project#12084 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix max image feature size for Llava-one-vision ( vllm-project#12104 )
Signed-off-by: Roger Wang <[email protected]>
* Enable user marker for vllm profiling ( #357 )
* Enable user marker for vllm profiling
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
* [misc] Add LoRA kernel micro benchmarks ( vllm-project#11579 )
* [Model] Add support for deepseek-vl2-tiny model ( vllm-project#12068 )
Signed-off-by: Isotr0py <[email protected]>
* Deepseek V3 support ( #364 )
* Changing the hard coded datatype to see if it's enough for the model to work
* Picking the upstrteam moe kernel version
* make upstream fix for v3 also works for rocm v2
* Conditional fnuz dtype
* Requantizing from fn to fnuz
* Requantizing moe as well
* Actually requantizing moe weights
* Conditional requantization and assert on padding in block quant
* Format
---------
Co-authored-by: charlifu <[email protected]>
* [Bugfix] Set enforce_eager automatically for mllama ( vllm-project#12127 )
Signed-off-by: Chen Zhang <[email protected]>
* [Bugfix] Fix a path bug in disaggregated prefill example script. ( vllm-project#12121 )
Signed-off-by: Kuntai Du <[email protected]>
* [CI]add genai-perf benchmark in nightly benchmark ( vllm-project#10704 )
Signed-off-by: Kunshang Ji <[email protected]>
* [Doc] Add instructions on using Podman when SELinux is active ( vllm-project#12136 )
Signed-off-by: Yuan Tang <[email protected]>
* [Bugfix] Fix issues in CPU build Dockerfile ( vllm-project#12135 )
Signed-off-by: Yuan Tang <[email protected]>
* [BugFix] add more `is not None` check in VllmConfig.__post_init__ ( vllm-project#12138 )
Signed-off-by: Chen Zhang <[email protected]>
* [Misc] Add deepseek_vl2 chat template ( vllm-project#12143 )
Signed-off-by: Isotr0py <[email protected]>
* [ROCm][MoE] moe tuning support for rocm ( vllm-project#12049 )
Signed-off-by: Divakar Verma <[email protected]>
* [V1] Move more control of kv cache initialization from model_executor to EngineCore ( vllm-project#11960 )
Signed-off-by: Chen Zhang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
* [Misc][LoRA] Improve the readability of LoRA error messages ( vllm-project#12102 )
Signed-off-by: Jee Jee Li <[email protected]>
* [CI/Build][CPU][Bugfix] Fix CPU CI ( vllm-project#12150 )
Signed-off-by: jiang1.li <[email protected]>
* [core] allow callable in collective_rpc ( vllm-project#12151 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix score api for missing max_model_len validation ( vllm-project#12119 )
Signed-off-by: Wallas Santos <[email protected]>
* [Bugfix] Mistral tokenizer encode accept list of str ( vllm-project#12149 )
Signed-off-by: Kunshang Ji <[email protected]>
* [AMD][FP8] Using MI300 FP8 format on ROCm for block_quant ( vllm-project#12134 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* [torch.compile] disable logging when cache is disabled ( vllm-project#12043 )
Signed-off-by: youkaichao <[email protected]>
* [misc] fix cross-node TP ( vllm-project#12166 )
Signed-off-by: youkaichao <[email protected]>
* [AMD][CI/Build][Bugfix] use pytorch stale wheel ( vllm-project#12172 )
Signed-off-by: hongxyan <[email protected]>
* [core] further polish memory profiling ( vllm-project#12126 )
Signed-off-by: youkaichao <[email protected]>
* [Docs] Fix broken link in SECURITY.md ( vllm-project#12175 )
Signed-off-by: Russell Bryant <[email protected]>
* [Model] Port deepseek-vl2 processor, remove dependency ( vllm-project#12169 )
Signed-off-by: Isotr0py <[email protected]>
* [core] clean up executor class hierarchy between v1 and v0 ( vllm-project#12171 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Support register quantization method out-of-tree ( vllm-project#11969 )
* [V1] Collect env var for usage stats ( vllm-project#12115 )
* [BUGFIX] Move scores to float32 in case of running xgrammar on cpu ( vllm-project#12152 )
Signed-off-by: Michal Adamczyk <[email protected]>
* [Bugfix] Fix multi-modal processors for transformers 4.48 ( vllm-project#12187 )
* [torch.compile] store inductor compiled Python file ( vllm-project#12182 )
Signed-off-by: youkaichao <[email protected]>
* benchmark_serving support --served-model-name param ( vllm-project#12109 )
Signed-off-by: zibai <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Misc] Add BNB support to GLM4-V model ( vllm-project#12184 )
Signed-off-by: Isotr0py <[email protected]>
* [V1] Add V1 support of Qwen2-VL ( vllm-project#12128 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>
* [Model] Support for fairseq2 Llama ( vllm-project#11442 )
Signed-off-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>
* [Bugfix] Fix num_heads value for simple connector when tp enabled ( vllm-project#12074 )
Signed-off-by: Shangming Cai <[email protected]>
* [torch.compile] fix sym_tensor_indices ( vllm-project#12191 )
Signed-off-by: youkaichao <[email protected]>
* Move linting to `pre-commit` ( vllm-project#11975 )
Signed-off-by: Harry Mellor <[email protected]>
* [DOC] Fix typo in docstring and assert message ( vllm-project#12194 )
Signed-off-by: Yuan Tang <[email protected]>
* [DOC] Add missing docstring in LLMEngine.add_request() ( vllm-project#12195 )
Signed-off-by: Yuan Tang <[email protected]>
* [Bugfix] Fix incorrect types in LayerwiseProfileResults ( vllm-project#12196 )
Signed-off-by: Yuan Tang <[email protected]>
* [Model] Add Qwen2 PRM model support ( vllm-project#12202 )
Signed-off-by: Isotr0py <[email protected]>
* [Core] Interface for accessing model from `VllmRunner` ( vllm-project#10353 )
Signed-off-by: DarkLight1337 <[email protected]>
* [misc] add placeholder format.sh ( vllm-project#12206 )
Signed-off-by: youkaichao <[email protected]>
* [CI/Build] Remove dummy CI steps ( vllm-project#12208 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI/Build] Make pre-commit faster ( vllm-project#12212 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model] Upgrade Aria to transformers 4.48 ( vllm-project#12203 )
Signed-off-by: DarkLight1337 <[email protected]>
* [misc] print a message to suggest how to bypass commit hooks ( vllm-project#12217 )
Signed-off-by: youkaichao <[email protected]>
* [core][bugfix] configure env var during import vllm ( vllm-project#12209 )
Signed-off-by: youkaichao <[email protected]>
* [V1] Remove `_get_cache_block_size` ( vllm-project#12214 )
Signed-off-by: Chen Zhang <[email protected]>
* [Misc] Pass `attention` to impl backend ( vllm-project#12218 )
Signed-off-by: wangxiyuan <[email protected]>
* [Bugfix] Fix `HfExampleModels.find_hf_info` ( vllm-project#12223 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI] Pass local python version explicitly to pre-commit mypy.sh ( vllm-project#12224 )
Signed-off-by: Chen Zhang <[email protected]>
* Using ROCm6.3.1 base docker and building hipblas-common ( #366 )
* [Misc] Update CODEOWNERS ( vllm-project#12229 )
* fix: update platform detection for M-series arm based MacBook processors ( vllm-project#12227 )
Signed-off-by: isikhi <[email protected]>
* [misc] add cuda runtime version to usage data ( vllm-project#12190 )
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [bugfix] catch xgrammar unsupported array constraints ( vllm-project#12210 )
Signed-off-by: Jason Cheng <[email protected]>
* [Kernel] optimize moe_align_block_size for cuda graph and large num_experts (e.g. DeepSeek-V3) ( vllm-project#12222 )
Signed-off-by: Jinzhen Lin <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
* Add quantization and guided decoding CODEOWNERS ( vllm-project#12228 )
Signed-off-by: mgoin <[email protected]>
* [AMD][Build] Porting dockerfiles from the ROCm/vllm fork ( vllm-project#11777 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* [BugFix] Fix GGUF tp>1 when vocab_size is not divisible by 64 ( vllm-project#12230 )
Signed-off-by: NickLucche <[email protected]>
* [ci/build] disable failed and flaky tests ( vllm-project#12240 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Rename `MultiModalInputsV2 -> MultiModalInputs` ( vllm-project#12244 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc]Add BNB quantization for PaliGemmaForConditionalGeneration ( vllm-project#12237 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Misc] Remove redundant TypeVar from base model ( vllm-project#12248 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix mm_limits access for merged multi-modal processor ( vllm-project#12252 )
Signed-off-by: DarkLight1337 <[email protected]>
* [torch.compile] transparent compilation with more logging ( vllm-project#12246 )
Signed-off-by: youkaichao <[email protected]>
* [V1][Bugfix] Fix data item ordering in mixed-modality inference ( vllm-project#12259 )
Signed-off-by: Roger Wang <[email protected]>
* Remove pytorch comments for outlines + compressed-tensors ( vllm-project#12260 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Platform] improve platforms getattr ( vllm-project#12264 )
Signed-off-by: Mengqing Cao <[email protected]>
* [ci/build] update nightly torch for gh200 test ( vllm-project#12270 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] fix race condition that leads to wrong order of token returned ( vllm-project#10802 )
Signed-off-by: Jannis Schönleber <[email protected]>
* [Kernel] fix moe_align_block_size error condition ( vllm-project#12239 )
Signed-off-by: Jinzhen Lin <[email protected]>
* [v1][stats][1/n] Add RequestStatsUpdate and RequestStats types ( vllm-project#10907 )
Signed-off-by: rickyx <[email protected]>
* [Bugfix] Multi-sequence broken ( vllm-project#11898 )
Signed-off-by: Andy Lo <[email protected]>
* [Misc] Remove experimental dep from tracing.py ( vllm-project#12007 )
Signed-off-by: Adrian Cole <[email protected]>
* [Misc] Set default backend to SDPA for get_vit_attn_backend ( vllm-project#12235 )
Signed-off-by: wangxiyuan <[email protected]>
* [Core] Free CPU pinned memory on environment cleanup ( vllm-project#10477 )
* Update pre-commit.yml ( #374 )
* Update pre-commit.yml
* Reapplying missing format
* New codespell exclude location
---------
Co-authored-by: Kevin H. Luu <[email protected]>
* [bugfix] moe tuning. rm is_navi() ( vllm-project#12273 )
Signed-off-by: Divakar Verma <[email protected]>
* [BUGFIX] When skip_tokenize_init and multistep are set, execution crashes ( vllm-project#12277 )
Signed-off-by: maleksan85 <[email protected]>
Co-authored-by: maleksan85 <[email protected]>
* [Documentation][AMD] Add information about prebuilt ROCm vLLM docker for perf validation purpose ( vllm-project#12281 )
Signed-off-by: Hongxia Yang <[email protected]>
* [VLM] Simplify post-processing of replacement info ( vllm-project#12269 )
Signed-off-by: DarkLight1337 <[email protected]>
* [ci/lint] Add back default arg for pre-commit ( vllm-project#12279 )
Signed-off-by: kevin <[email protected]>
* [CI] add docker volume prune to neuron CI ( vllm-project#12291 )
Signed-off-by: Liangfu Chen <[email protected]>
* [Ci/Build] Fix mypy errors on main ( vllm-project#12296 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Benchmark] More accurate TPOT calc in `benchmark_serving.py` ( vllm-project#12288 )
Signed-off-by: Nick Hill <[email protected]>
* [core] separate builder init and builder prepare for each batch ( vllm-project#12253 )
Signed-off-by: youkaichao <[email protected]>
* [Build] update requirements of no-device ( vllm-project#12299 )
Signed-off-by: Mengqing Cao <[email protected]>
* [Core] Support fully transparent sleep mode ( vllm-project#11743 )
Signed-off-by: youkaichao <[email protected]>
* [VLM] Avoid unnecessary tokenization ( vllm-project#12310 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model][Bugfix]: correct Aria model output ( vllm-project#12309 )
Signed-off-by: xffxff <[email protected]>
* [Bugfix][VLM] Fix mixed-modality inference backward compatibility for V0 ( vllm-project#12313 )
Signed-off-by: Roger Wang <[email protected]>
* [Doc] Add docs for prompt replacement ( vllm-project#12318 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] Fix the error in the tip for the --lora-modules parameter ( vllm-project#12319 )
Signed-off-by: wangerxiao <[email protected]>
* [Misc] Improve the readability of BNB error messages ( vllm-project#12320 )
Signed-off-by: Jee Jee Li <[email protected]>
* Skip tokenize/detokenize when it is disabled by arg --skip-tokenizer-init ( #367 )
* switching detokenize flag to be False
* detokenize = False for benchmarks
* restoring default in main vllm code for detokenize
* removing extra spaces
* moving detokenize to flag
* adding support for token ids
---------
Co-authored-by: maleksan85 <[email protected]>
* [Bugfix] Fix HPU multiprocessing executor ( vllm-project#12167 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Core] Support `reset_prefix_cache` ( vllm-project#12284 )
* [Frontend][V1] Online serving performance improvements ( vllm-project#12287 )
* [AMD][Quantization] Add TritonScaledMMLinearKernel since int8 is broken for AMD ( vllm-project#12282 )
Signed-off-by: Randall Smith <[email protected]>
* FP8 FA fixes ( #381 )
* FP8 FA fixes
Summary:
Add missing clamp and fix reciprocal scale computation.
* linter
* Returning the use of the proper stream in allreduce ( #382 )
* [Bugfix] Fixing AMD LoRA CI test. ( vllm-project#12329 )
Signed-off-by: Alexei V. Ivanov <[email protected]>
* [Docs] Update FP8 KV Cache documentation ( vllm-project#12238 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Docs] Document vulnerability disclosure process ( vllm-project#12326 )
Signed-off-by: Russell Bryant <[email protected]>
* [V1] Add `uncache_blocks` ( vllm-project#12333 )
* [doc] explain common errors around torch.compile ( vllm-project#12340 )
Signed-off-by: youkaichao <[email protected]>
* [Hardware][Gaudi][BugFix] Fix dataclass error due to triton package update ( vllm-project#12338 )
Signed-off-by: zhenwei <[email protected]>
* [Bugfix] Fix k_proj's bias for whisper self attention ( vllm-project#12342 )
Signed-off-by: Isotr0py <[email protected]>
* [Kernel] Flash Attention 3 Support ( vllm-project#12093 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Doc] Troubleshooting errors during model inspection ( vllm-project#12351 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] Simplify M-RoPE ( vllm-project#12352 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: imkero <[email protected]>
* [Bugfix] Fix broken internvl2 inference with v1 ( vllm-project#12360 )
Signed-off-by: Isotr0py <[email protected]>
* [core] add wake_up doc and some sanity check ( vllm-project#12361 )
Signed-off-by: youkaichao <[email protected]>
* [torch.compile] decouple compile sizes and cudagraph sizes ( vllm-project#12243 )
Signed-off-by: youkaichao <[email protected]>
* [FP8][Kernel] Dynamic kv cache scaling factors computation ( vllm-project#11906 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>
* [TPU] Update TPU CI to use torchxla nightly on 20250122 ( vllm-project#12334 )
Signed-off-by: Siyuan Liu <[email protected]>
* [Docs] Document Phi-4 support ( vllm-project#12362 )
Signed-off-by: Isotr0py <[email protected]>
* [BugFix] Fix parameter names and `process_after_weight_loading` for W4A16 MoE Group Act Order ( vllm-project#11528 )
Signed-off-by: ElizaWszola <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Misc] Fix OpenAI API Compatibility Issues in Benchmark Script ( vllm-project#12357 )
Signed-off-by: Junichi Sato <[email protected]>
* [Docs] Add meetup slides ( vllm-project#12345 )
Signed-off-by: Woosuk Kwon <[email protected]>
* Using pytorch commit past the point when rowwise PR ( pytorch/pytorch#144432 ) was merged ( #384 )
* [Docs] Update spec decode + structured output in compat matrix ( vllm-project#12373 )
Signed-off-by: Russell Bryant <[email protected]>
* [V1][Frontend] Coalesce bunched `RequestOutput`s ( vllm-project#12298 )
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
* Set weights_only=True when using torch.load() ( vllm-project#12366 )
Signed-off-by: Russell Bryant <[email protected]>
* [Bugfix] Path join when building local path for S3 clone ( vllm-project#12353 )
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
* Update compressed-tensors version ( vllm-project#12367 )
* [V1] Increase default batch size for H100/H200 ( vllm-project#12369 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [perf] fix perf regression from vllm-project#12253 ( vllm-project#12380 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Use VisionArena Dataset for VLM Benchmarking ( vllm-project#12389 )
Signed-off-by: Roger Wang <[email protected]>
* [ci/build] fix wheel size check ( vllm-project#12396 )
Signed-off-by: youkaichao <[email protected]>
* [Hardware][Gaudi][Doc] Add missing step in setup instructions ( vllm-project#12382 )
* [ci/build] sync default value for wheel size ( vllm-project#12398 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Enable proxy support in benchmark script ( vllm-project#12356 )
Signed-off-by: Junichi Sato <[email protected]>
* [Bugfix][Kernel] Fix CUDA 11.8 being broken by FA3 build ( vllm-project#12375 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Applying scales rename to fp8 config ( #387 )
* [Misc] Remove deprecated code ( vllm-project#12383 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix][Kernel] FA3 Fix - RuntimeError: This flash attention build only supports pack_gqa (for build size reasons). ( vllm-project#12405 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Dev-docker Documentation Updates ( #378 )
* Dev-docker Documentation Updates
Minor updates to several sections, with links to other documents where appropriate.
* Fix formatting of GEMM filename
* README cleanup
- Reorder some sections of the README to make them easier to follow
- Improve formatting of bash commands
- Prefer use of huggingface model names instead of hard-coded directories
- Clean up wording
* Expanded sample commands for Latency and Throughput
* Fix markdown links
* Fix pre-commit errors
* Updates from review
Initial updates to incorporate feedback from a review session held with @t-parry * Update script args to match current recommendations
* Remove recommended max-num-seqs values for now
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
* [Bugfix][Kernel] Fix moe align block issue for mixtral ( vllm-project#12413 )
* [Bugfix] Fix BLIP-2 processing ( vllm-project#12412 )
Signed-off-by: DarkLight1337 <[email protected]>
* [ROCm][MoE] MI300 tuned configs Mixtral-8x(7B,22B) | fp16, fp8 ( vllm-project#12408 )
Signed-off-by: Divakar Verma <[email protected]>
* [Misc] Add FA2 support to ViT MHA layer ( vllm-project#12355 )
Signed-off-by: Isotr0py <[email protected]>
* [TPU][CI] Update torchxla version in requirement-tpu.txt ( vllm-project#12422 )
Signed-off-by: Siyuan Liu <[email protected]>
* [Misc][Bugfix] FA3 support to ViT MHA layer ( vllm-project#12435 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [V1][Perf] Reduce scheduling overhead in model runner after cuda sync ( vllm-project#12094 )
Signed-off-by: Keyun Tong <[email protected]>
* [V1][Bugfix] Fix assertion when mm hashing is turned off ( vllm-project#12439 )
Signed-off-by: Roger Wang <[email protected]>
* [Misc] Revert FA on ViT vllm-project#12355 and vllm-project#12435 ( vllm-project#12445 )
* [Frontend] generation_config.json for maximum tokens( vllm-project#12242 )
Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
* [Bugfix] Disable w16a16 2of4 sparse CompressedTensors24 ( vllm-project#12417 )
Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: mgoin <[email protected]>
* [Bugfix/CI] Fix broken kernels/test_mha.py ( vllm-project#12450 )
* [Bugfix][Kernel] Fix perf regression caused by PR vllm-project#12405 ( vllm-project#12434 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Build/CI] Fix libcuda.so linkage ( vllm-project#12424 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [Frontend] Rerank API (Jina- and Cohere-compatible API) ( vllm-project#12376 )
Signed-off-by: Kyle Mistele <[email protected]>
* [DOC] Add link to vLLM blog ( vllm-project#12460 )
Signed-off-by: Yuan Tang <[email protected]>
* [V1] Avoid list creation in input preparation ( vllm-project#12457 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Frontend] Support scores endpoint in run_batch ( vllm-project#12430 )
Signed-off-by: Pooya Davoodi <[email protected]>
* [Bugfix] Fix Granite 3.0 MoE model loading ( vllm-project#12446 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix missing seq_start_loc in xformers prefill metadata ( vllm-project#12464 )
Signed-off-by: Isotr0py <[email protected]>
* [V1][Minor] Minor optimizations for update_from_output ( vllm-project#12454 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Bugfix] Fix gpt2 GGUF inference ( vllm-project#12467 )
Signed-off-by: Isotr0py <[email protected]>
* [Build] Only build 9.0a for scaled_mm and sparse kernels ( vllm-project#12339 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [V1][Metrics] Add initial Prometheus logger ( vllm-project#12416 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [V1][CI/Test] Do basic test for top-p & top-k sampling ( vllm-project#12469 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [FlashInfer] Upgrade to 0.2.0 ( vllm-project#11194 )
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: youkaichao <[email protected]>
* Support FP8 FA from Quark format ( #388 )
* Support FP8 FA from Quark format
* Support FP8 FA from Quark format
* nit: update comment
* Direct call on ROCm
* 20250127 docs update ( #392 )
* updating code blocks
* typo
* updated manifest
* Including feedback
* whitespace
* Deepseek instructions
* hyperlink fix
* hyperlink fix
* updating what is new
* cpx update
* typo
* whitespace
* whitespace
* Faster Custom Paged Attention kernels ( #372 )
* integrate new cpa kernel, update tests and benchmark
* added comments to mfma4 kernel
* further comments for mfma16 kernel
* clang-format
* Lint
* add flag for logits rtz conversion and disable by default
* lint
* [Bugfix]: Fix paged attention unit tests of #372 ( #389 )
* [Bugfix]: fix paged attention tests based on the updated kernels in `csrc/attention/paged_attention_v1.cu`, `csrc/attention/paged_attention_v2.cu` and `csrc/rocm/attention.cu`.
* improve code documentation.
* lint
---------
Co-authored-by: vllmellm <[email protected]>
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: TJian <[email protected]>
Co-authored-by: vllmellm <[email protected]>
* Using a more precise profiling on ROCm to properly account for weights padding ( #394 )
* Update Dockerfile.rocm
* [Bugfix]: include the env variables required for running FastSyncLLM
Signed-off-by: vllmellm <[email protected]>
* fix pre-commit lint
Signed-off-by: vllmellm <[email protected]>
---------
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Rafael Vasquez <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Sungjae Lee <[email protected]>
Signed-off-by: Yida Wu <[email protected]>
Signed-off-by: Chenguang Li <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Alex-Brooks <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Shanshan Shen <[email protected]>
Signed-off-by: elijah <[email protected]>
Signed-off-by: Yikun <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Konrad Zawora <[email protected]>
Signed-off-by: tjtanaa <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: yisheng <[email protected]>
Signed-off-by: Abatom <[email protected]>
Signed-off-by: Liangfu Chen <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Yuan Zhou <[email protected]>
Signed-off-by: Sourashis Roy <[email protected]>
Signed-off-by: Nishidha Panpaliya <[email protected]>
Signed-off-by: Ilya Lavrenov <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: Wallas Santos <[email protected]>
Signed-off-by: jiang1.li <[email protected]>
Signed-off-by: yan ma <[email protected]>
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Maxime Fournioux <[email protected]>
Signed-off-by: Ye Qi <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: Kunshang Ji <[email protected]>
Signed-off-by: Kuntai Du <[email protected]>
Signed-off-by: Ren MinMin <[email protected]>
Signed-off-by: Travis Johnson <[email protected]>
Signed-off-by: Fred Reiss <[email protected]>
Signed-off-by: shaochangxu.scx <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: Kyle Sayers <[email protected]>
Signed-off-by: Rahul Tuli <[email protected]>
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Divakar Verma <[email protected]>
Signed-off-by: Gregory Shtrasberg <[email protected]>
Signed-off-by: hongxyan <[email protected]>
Signed-off-by: Michal Adamczyk <[email protected]>
Signed-off-by: zibai <[email protected]>
Signed-off-by: Martin Gleize <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: isikhi <[email protected]>
Signed-off-by: Jason Cheng <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]>
Signed-off-by: Thomas Parnell <[email protected]>
Signed-off-by: Jannis Schönleber <[email protected]>
Signed-off-by: rickyx <[email protected]>
Signed-off-by: Andy Lo <[email protected]>
Signed-off-by: Adrian Cole <[email protected]>
Signed-off-by: maleksan85 <[email protected]>
Signed-off-by: Hongxia Yang <[email protected]>
Signed-off-by: kevin <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: xffxff <[email protected]>
Signed-off-by: wangerxiao <[email protected]>
Signed-off-by: Alexei V. Ivanov <[email protected]>
Signed-off-by: zhenwei <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Siyuan Liu <[email protected]>
Signed-off-by: ElizaWszola <[email protected]>
Signed-off-by: Junichi Sato <[email protected]>
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
Signed-off-by: Keyun Tong <[email protected]>
Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Kyle Mistele <[email protected]>
Signed-off-by: Pooya Davoodi <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: vllmellm <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Rafael Vasquez <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Akshat Tripathi <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Avshalom Manevich <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Yangcheng Li <[email protected]>
Co-authored-by: Siyuan Li <[email protected]>
Co-authored-by: Sungjae Lee <[email protected]>
Co-authored-by: Concurrensee <[email protected]>
Co-authored-by: Chenguang Li <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Alex Brooks <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Shanshan Shen <[email protected]>
Co-authored-by: elijah <[email protected]>
Co-authored-by: Yikun Jiang <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Steve Luo <[email protected]>
Co-authored-by: mgoin <[email protected]>
Co-authored-by: Alexei-V-Ivanov-AMD <[email protected]>
Co-authored-by: Alexei V. Ivanov <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: Konrad Zawora <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
Co-authored-by: maang-h <[email protected]>
Co-authored-by: YiSheng5 <[email protected]>
Co-authored-by: Zhonghua Deng <[email protected]>
Co-authored-by: Liangfu Chen <[email protected]>
Co-authored-by: XiaobingZhang <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Yuan <[email protected]>
Co-authored-by: jiangjiadi <[email protected]>
Co-authored-by: jiadi.jjd <[email protected]>
Co-authored-by: sroy745 <[email protected]>
Co-authored-by: Jie Fu (傅杰) <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: WangErXiao <[email protected]>
Co-authored-by: Nishidha <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Wallas Henrique <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: Yan Ma <[email protected]>
Co-authored-by: rasmith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: Maxime Fournioux <[email protected]>
Co-authored-by: Guspan Tanadi <[email protected]>
Co-authored-by: Ye (Charlotte) Qi <[email protected]>
Co-authored-by: yeq <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
Co-authored-by: Charles Frye <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: cennn <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: minmin <[email protected]>
Co-authored-by: Ren MinMin <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Fred Reiss <[email protected]>
Co-authored-by: shaochangxu <[email protected]>
Co-authored-by: shaochangxu.scx <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: sixgod <[email protected]>
Co-authored-by: Elfie Guo <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Kyle Sayers <[email protected]>
Co-authored-by: Rahul Tuli <[email protected]>
Co-authored-by: Keyun Tong <[email protected]>
Co-authored-by: RunningLeon <[email protected]>
Co-authored-by: kewang-xlnx <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: tvirolai-amd <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Zhaoyi Li <[email protected]>
Co-authored-by: charlifu <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Hongxia Yang <[email protected]>
Co-authored-by: yancong <[email protected]>
Co-authored-by: Michal Adamczyk <[email protected]>
Co-authored-by: gujing <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: Işık <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Cheng Kuan Yong Jason <[email protected]>
Co-authored-by: Jinzhen Lin <[email protected]>
Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: Jannis Schönleber <[email protected]>
Co-authored-by: Ricky Xu <[email protected]>
Co-authored-by: Andy Lo <[email protected]>
Co-authored-by: Adrian Cole <[email protected]>
Co-authored-by: Jani Monoses <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: maleksan85 <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: zhou fan <[email protected]>
Co-authored-by: ilia-cher <[email protected]>
Co-authored-by: liuzhenwei <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>
Co-authored-by: Siyuan Liu <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Junichi Sato <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: omer-dayan <[email protected]>
Co-authored-by: Mohit Deopujari <[email protected]>
Co-authored-by: Jeremy Arnold <[email protected]>
Co-authored-by: Matthew Hendrey <[email protected]>
Co-authored-by: Kyle Mistele <[email protected]>
Co-authored-by: Pooya Davoodi <[email protected]>
Co-authored-by: Mark McLoughlin <[email protected]>
Co-authored-by: Bowen Wang <[email protected]>
Co-authored-by: Bowen Bao <[email protected]>
Co-authored-by: arakowsk-amd <[email protected]>
Co-authored-by: sanyalington <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: vllmellm <[email protected]>
hongxiayang added a commit to ROCm/vllm that referenced this pull request Feb 5, 2025 [Bug Fix] Missing vllm.envs ( #405 ) … 87b3c56
* [Model] Initialize support for Deepseek-VL2 models ( vllm-project#11578 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Hardware][CPU] Multi-LoRA implementation for the CPU backend ( vllm-project#11100 )
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [Hardware][TPU] workaround fix for MoE on TPU ( vllm-project#11764 )
* [V1][Core][1/n] Logging and Metrics ( vllm-project#11962 )
Signed-off-by: [email protected] <[email protected]>
* [Model] Support GGUF models newly added in `transformers` 4.46.0 ( vllm-project#9685 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [V1] [2/n] Logging and Metrics - `OutputProcessor` Abstraction ( vllm-project#11973 )
Signed-off-by: [email protected] <[email protected]>
* [MISC] fix typo in kv transfer send recv test ( vllm-project#11983 )
* [Bug] Fix usage of `.transpose()` and `.view()` consecutively. ( vllm-project#11979 )
* [CI][Spec Decode] fix: broken test for EAGLE model ( vllm-project#11972 )
Signed-off-by: Sungjae Lee <[email protected]>
* [Misc] Fix Deepseek V2 fp8 kv-scale remapping ( vllm-project#11947 )
Signed-off-by: Yida Wu <[email protected]>
* [Misc]Minor Changes about Worker ( vllm-project#11555 )
Signed-off-by: Chenguang Li <[email protected]>
* [platform] add ray_device_key ( vllm-project#11948 )
Signed-off-by: youkaichao <[email protected]>
* Fix Max Token ID for Qwen-VL-Chat ( vllm-project#11980 )
Signed-off-by: Alex-Brooks <[email protected]>
* [Kernel] unified_attention for Attention.forward ( vllm-project#11967 )
Signed-off-by: Chen Zhang <[email protected]>
* [Doc][V1] Update model implementation guide for V1 support ( vllm-project#11998 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Doc] Organise installation documentation into categories and tabs ( vllm-project#11935 )
Signed-off-by: Harry Mellor <[email protected]>
* [platform] add device_control env var ( vllm-project#12009 )
Signed-off-by: youkaichao <[email protected]>
* [Platform] Move get_punica_wrapper() function to Platform ( vllm-project#11516 )
Signed-off-by: Shanshan Shen <[email protected]>
* bugfix: Fix signature mismatch in benchmark's `get_tokenizer` function ( vllm-project#11982 )
Signed-off-by: elijah <[email protected]>
* [Doc] Fix build from source and installation link in README.md ( vllm-project#12013 )
Signed-off-by: Yikun <[email protected]>
* Using list
* [Bugfix] Fix deepseekv3 gate bias error ( vllm-project#12002 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]>
* Revert "[misc] improve memory profiling ( vllm-project#11809 )"
This reverts commit 889e662 .
* Multi-lingual P3L ( #356 )
* Committing the *multilingual* P3L test.
* Created a *multi-lingual* P3L test.
* Making ruff happy.
* .
* Added a reference to the language-scripture Confluence table.
* Typo fixing.
* Harmonizing naming.
* Fixing comments in the header.
---------
Co-authored-by: Alexei V. Ivanov <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
* Trying to make scales work with compileable attention
* [Docs] Add Sky Computing Lab to project intro ( vllm-project#12019 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [HPU][Bugfix] set_forward_context and CI test execution ( vllm-project#12014 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Doc] Update Quantization Hardware Support Documentation ( vllm-project#12025 )
Signed-off-by: tjtanaa <[email protected]>
Co-authored-by: tjtanaa <[email protected]>
* [HPU][misc] add comments for explanation ( vllm-project#12034 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix various bugs in multi-modal processor ( vllm-project#12031 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Kernel] Revert the API change of Attention.forward ( vllm-project#12038 )
Signed-off-by: Chen Zhang <[email protected]>
* [Platform] Add output for Attention Backend ( vllm-project#11981 )
Signed-off-by: wangxiyuan <[email protected]>
* [Bugfix][Kernel] Give unique name to BlockSparseFlashAttention ( vllm-project#12040 )
Signed-off-by: Chen Zhang <[email protected]>
* Explain where the engine args go when using Docker ( vllm-project#12041 )
Signed-off-by: Harry Mellor <[email protected]>
* Docs lint
* [Doc]: Update the Json Example of the `Engine Arguments` document ( vllm-project#12045 )
* [Misc] Merge bitsandbytes_stacked_params_mapping and packed_modules_mapping ( vllm-project#11924 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Kernel] Support MulAndSilu ( vllm-project#11624 )
Signed-off-by: Jee Jee Li <[email protected]>
* [HPU][Bugfix] Don't use /dev/accel/accel0 for HPU autodetection in setup.py ( vllm-project#12046 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Platform] move current_memory_usage() into platform ( vllm-project#11369 )
Signed-off-by: Shanshan Shen <[email protected]>
* [V1][BugFix] Fix edge case in VLM scheduling ( vllm-project#12065 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Misc] Add multipstep chunked-prefill support for FlashInfer ( vllm-project#10467 )
* [core] Turn off GPU communication overlap for Ray executor ( vllm-project#12051 )
Signed-off-by: Rui Qiao <[email protected]>
* [core] platform agnostic executor via collective_rpc ( vllm-project#11256 )
Signed-off-by: youkaichao <[email protected]>
* [Doc] Update examples to remove SparseAutoModelForCausalLM ( vllm-project#12062 )
Signed-off-by: Kyle Sayers <[email protected]>
* [V1][Prefix Cache] Move the logic of num_computed_tokens into KVCacheManager ( vllm-project#12003 )
* Fix: cases with empty sparsity config ( vllm-project#12057 )
Signed-off-by: Rahul Tuli <[email protected]>
* Type-fix: make execute_model output type optional ( vllm-project#12020 )
* [Platform] Do not raise error if _Backend is not found ( vllm-project#12023 )
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
* [Model]: Support internlm3 ( vllm-project#12037 )
* Misc: allow to use proxy in `HTTPConnection` ( vllm-project#12042 )
Signed-off-by: Yuan Zhou <[email protected]>
* [Misc][Quark] Upstream Quark format to VLLM ( vllm-project#10765 )
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Doc]: Update `OpenAI-Compatible Server` documents ( vllm-project#12082 )
* [Bugfix] use right truncation for non-generative tasks ( vllm-project#12050 )
Signed-off-by: Joe Runde <[email protected]>
* [V1][Core] Autotune encoder cache budget ( vllm-project#11895 )
Signed-off-by: Roger Wang <[email protected]>
* [Bugfix] Fix _get_lora_device for HQQ marlin ( vllm-project#12090 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* Allow hip sources to be directly included when compiling for rocm. ( vllm-project#12087 )
* [Core] Default to using per_token quantization for fp8 when cutlass is supported. ( vllm-project#8651 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: mgoin <[email protected]>
* [Doc] Add documentation for specifying model architecture ( vllm-project#12105 )
* Various cosmetic/comment fixes ( vllm-project#12089 )
Signed-off-by: mgoin <[email protected]>
* [Bugfix] Remove hardcoded `head_size=256` for Deepseek v2 and v3 ( vllm-project#12067 )
Signed-off-by: Isotr0py <[email protected]>
* Support torchrun and SPMD-style offline inference ( vllm-project#12071 )
Signed-off-by: youkaichao <[email protected]>
* [core] LLM.collective_rpc interface and RLHF example ( vllm-project#12084 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix max image feature size for Llava-one-vision ( vllm-project#12104 )
Signed-off-by: Roger Wang <[email protected]>
* Enable user marker for vllm profiling ( #357 )
* Enable user marker for vllm profiling
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
* [misc] Add LoRA kernel micro benchmarks ( vllm-project#11579 )
* [Model] Add support for deepseek-vl2-tiny model ( vllm-project#12068 )
Signed-off-by: Isotr0py <[email protected]>
* Deepseek V3 support ( #364 )
* Changing the hard coded datatype to see if it's enough for the model to work
* Picking the upstream moe kernel version
* make upstream fix for v3 also work for rocm v2
* Conditional fnuz dtype
* Requantizing from fn to fnuz
* Requantizing moe as well
* Actually requantizing moe weights
* Conditional requantization and assert on padding in block quant
* Format
---------
Co-authored-by: charlifu <[email protected]>
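The "Requantizing from fn to fnuz" bullets above refer to converting OCP float8_e4m3fn checkpoints into the float8_e4m3fnuz format used natively on MI300-class GPUs, whose maximum representable magnitude is smaller (240 vs 448). Below is a minimal, hedged per-tensor sketch of one way such a conversion can be done; the function name and per-tensor scaling are illustrative assumptions, not the implementation merged in #364.

```python
import torch

def requantize_fn_to_fnuz(weight_fn: torch.Tensor,
                          scale_fn: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Illustrative per-tensor requantization from e4m3fn to e4m3fnuz."""
    # Recover high-precision weights using the original scale.
    w = weight_fn.to(torch.float32) * scale_fn
    # e4m3fnuz tops out at 240 (vs 448 for e4m3fn), so recompute the scale.
    fnuz_max = torch.finfo(torch.float8_e4m3fnuz).max
    # Guard against an all-zero tensor producing a zero scale.
    scale_fnuz = w.abs().amax().clamp(min=1e-12) / fnuz_max
    # Clamp before the cast so out-of-range values do not overflow in FP8.
    w_q = (w / scale_fnuz).clamp(-fnuz_max, fnuz_max).to(torch.float8_e4m3fnuz)
    return w_q, scale_fnuz
```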
* [Bugfix] Set enforce_eager automatically for mllama ( vllm-project#12127 )
Signed-off-by: Chen Zhang <[email protected]>
* [Bugfix] Fix a path bug in disaggregated prefill example script. ( vllm-project#12121 )
Signed-off-by: Kuntai Du <[email protected]>
* [CI]add genai-perf benchmark in nightly benchmark ( vllm-project#10704 )
Signed-off-by: Kunshang Ji <[email protected]>
* [Doc] Add instructions on using Podman when SELinux is active ( vllm-project#12136 )
Signed-off-by: Yuan Tang <[email protected]>
* [Bugfix] Fix issues in CPU build Dockerfile ( vllm-project#12135 )
Signed-off-by: Yuan Tang <[email protected]>
* [BugFix] add more `is not None` check in VllmConfig.__post_init__ ( vllm-project#12138 )
Signed-off-by: Chen Zhang <[email protected]>
* [Misc] Add deepseek_vl2 chat template ( vllm-project#12143 )
Signed-off-by: Isotr0py <[email protected]>
* [ROCm][MoE] moe tuning support for rocm ( vllm-project#12049 )
Signed-off-by: Divakar Verma <[email protected]>
* [V1] Move more control of kv cache initialization from model_executor to EngineCore ( vllm-project#11960 )
Signed-off-by: Chen Zhang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
* [Misc][LoRA] Improve the readability of LoRA error messages ( vllm-project#12102 )
Signed-off-by: Jee Jee Li <[email protected]>
* [CI/Build][CPU][Bugfix] Fix CPU CI ( vllm-project#12150 )
Signed-off-by: jiang1.li <[email protected]>
* [core] allow callable in collective_rpc ( vllm-project#12151 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix score api for missing max_model_len validation ( vllm-project#12119 )
Signed-off-by: Wallas Santos <[email protected]>
* [Bugfix] Mistral tokenizer encode accept list of str ( vllm-project#12149 )
Signed-off-by: Kunshang Ji <[email protected]>
* [AMD][FP8] Using MI300 FP8 format on ROCm for block_quant ( vllm-project#12134 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* [torch.compile] disable logging when cache is disabled ( vllm-project#12043 )
Signed-off-by: youkaichao <[email protected]>
* [misc] fix cross-node TP ( vllm-project#12166 )
Signed-off-by: youkaichao <[email protected]>
* [AMD][CI/Build][Bugfix] use pytorch stale wheel ( vllm-project#12172 )
Signed-off-by: hongxyan <[email protected]>
* [core] further polish memory profiling ( vllm-project#12126 )
Signed-off-by: youkaichao <[email protected]>
* [Docs] Fix broken link in SECURITY.md ( vllm-project#12175 )
Signed-off-by: Russell Bryant <[email protected]>
* [Model] Port deepseek-vl2 processor, remove dependency ( vllm-project#12169 )
Signed-off-by: Isotr0py <[email protected]>
* [core] clean up executor class hierarchy between v1 and v0 ( vllm-project#12171 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Support register quantization method out-of-tree ( vllm-project#11969 )
* [V1] Collect env var for usage stats ( vllm-project#12115 )
* [BUGFIX] Move scores to float32 in case of running xgrammar on cpu ( vllm-project#12152 )
Signed-off-by: Michal Adamczyk <[email protected]>
* [Bugfix] Fix multi-modal processors for transformers 4.48 ( vllm-project#12187 )
* [torch.compile] store inductor compiled Python file ( vllm-project#12182 )
Signed-off-by: youkaichao <[email protected]>
* benchmark_serving support --served-model-name param ( vllm-project#12109 )
Signed-off-by: zibai <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Misc] Add BNB support to GLM4-V model ( vllm-project#12184 )
Signed-off-by: Isotr0py <[email protected]>
* [V1] Add V1 support of Qwen2-VL ( vllm-project#12128 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>
* [Model] Support for fairseq2 Llama ( vllm-project#11442 )
Signed-off-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>
* [Bugfix] Fix num_heads value for simple connector when tp enabled ( vllm-project#12074 )
Signed-off-by: Shangming Cai <[email protected]>
* [torch.compile] fix sym_tensor_indices ( vllm-project#12191 )
Signed-off-by: youkaichao <[email protected]>
* Move linting to `pre-commit` ( vllm-project#11975 )
Signed-off-by: Harry Mellor <[email protected]>
* [DOC] Fix typo in docstring and assert message ( vllm-project#12194 )
Signed-off-by: Yuan Tang <[email protected]>
* [DOC] Add missing docstring in LLMEngine.add_request() ( vllm-project#12195 )
Signed-off-by: Yuan Tang <[email protected]>
* [Bugfix] Fix incorrect types in LayerwiseProfileResults ( vllm-project#12196 )
Signed-off-by: Yuan Tang <[email protected]>
* [Model] Add Qwen2 PRM model support ( vllm-project#12202 )
Signed-off-by: Isotr0py <[email protected]>
* [Core] Interface for accessing model from `VllmRunner` ( vllm-project#10353 )
Signed-off-by: DarkLight1337 <[email protected]>
* [misc] add placeholder format.sh ( vllm-project#12206 )
Signed-off-by: youkaichao <[email protected]>
* [CI/Build] Remove dummy CI steps ( vllm-project#12208 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI/Build] Make pre-commit faster ( vllm-project#12212 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model] Upgrade Aria to transformers 4.48 ( vllm-project#12203 )
Signed-off-by: DarkLight1337 <[email protected]>
* [misc] print a message to suggest how to bypass commit hooks ( vllm-project#12217 )
Signed-off-by: youkaichao <[email protected]>
* [core][bugfix] configure env var during import vllm ( vllm-project#12209 )
Signed-off-by: youkaichao <[email protected]>
* [V1] Remove `_get_cache_block_size` ( vllm-project#12214 )
Signed-off-by: Chen Zhang <[email protected]>
* [Misc] Pass `attention` to impl backend ( vllm-project#12218 )
Signed-off-by: wangxiyuan <[email protected]>
* [Bugfix] Fix `HfExampleModels.find_hf_info` ( vllm-project#12223 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI] Pass local python version explicitly to pre-commit mypy.sh ( vllm-project#12224 )
Signed-off-by: Chen Zhang <[email protected]>
* Using ROCm6.3.1 base docker and building hipblas-common ( #366 )
* [Misc] Update CODEOWNERS ( vllm-project#12229 )
* fix: update platform detection for M-series arm based MacBook processors ( vllm-project#12227 )
Signed-off-by: isikhi <[email protected]>
* [misc] add cuda runtime version to usage data ( vllm-project#12190 )
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [bugfix] catch xgrammar unsupported array constraints ( vllm-project#12210 )
Signed-off-by: Jason Cheng <[email protected]>
* [Kernel] optimize moe_align_block_size for cuda graph and large num_experts (e.g. DeepSeek-V3) ( vllm-project#12222 )
Signed-off-by: Jinzhen Lin <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
* Add quantization and guided decoding CODEOWNERS ( vllm-project#12228 )
Signed-off-by: mgoin <[email protected]>
* [AMD][Build] Porting dockerfiles from the ROCm/vllm fork ( vllm-project#11777 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* [BugFix] Fix GGUF tp>1 when vocab_size is not divisible by 64 ( vllm-project#12230 )
Signed-off-by: NickLucche <[email protected]>
* [ci/build] disable failed and flaky tests ( vllm-project#12240 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Rename `MultiModalInputsV2 -> MultiModalInputs` ( vllm-project#12244 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc]Add BNB quantization for PaliGemmaForConditionalGeneration ( vllm-project#12237 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Misc] Remove redundant TypeVar from base model ( vllm-project#12248 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix mm_limits access for merged multi-modal processor ( vllm-project#12252 )
Signed-off-by: DarkLight1337 <[email protected]>
* [torch.compile] transparent compilation with more logging ( vllm-project#12246 )
Signed-off-by: youkaichao <[email protected]>
* [V1][Bugfix] Fix data item ordering in mixed-modality inference ( vllm-project#12259 )
Signed-off-by: Roger Wang <[email protected]>
* Remove pytorch comments for outlines + compressed-tensors ( vllm-project#12260 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Platform] improve platforms getattr ( vllm-project#12264 )
Signed-off-by: Mengqing Cao <[email protected]>
* [ci/build] update nightly torch for gh200 test ( vllm-project#12270 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] fix race condition that leads to wrong order of token returned ( vllm-project#10802 )
Signed-off-by: Jannis Schönleber <[email protected]>
* [Kernel] fix moe_align_block_size error condition ( vllm-project#12239 )
Signed-off-by: Jinzhen Lin <[email protected]>
* [v1][stats][1/n] Add RequestStatsUpdate and RequestStats types ( vllm-project#10907 )
Signed-off-by: rickyx <[email protected]>
* [Bugfix] Multi-sequence broken ( vllm-project#11898 )
Signed-off-by: Andy Lo <[email protected]>
* [Misc] Remove experimental dep from tracing.py ( vllm-project#12007 )
Signed-off-by: Adrian Cole <[email protected]>
* [Misc] Set default backend to SDPA for get_vit_attn_backend ( vllm-project#12235 )
Signed-off-by: wangxiyuan <[email protected]>
* [Core] Free CPU pinned memory on environment cleanup ( vllm-project#10477 )
* Update pre-commit.yml ( #374 )
* Update pre-commit.yml
* Reapplying missing format
* New codespell exclude location
---------
Co-authored-by: Kevin H. Luu <[email protected]>
* [bugfix] moe tuning. rm is_navi() ( vllm-project#12273 )
Signed-off-by: Divakar Verma <[email protected]>
* [BUGFIX] When skip_tokenize_init and multistep are set, execution crashes ( vllm-project#12277 )
Signed-off-by: maleksan85 <[email protected]>
Co-authored-by: maleksan85 <[email protected]>
* [Documentation][AMD] Add information about prebuilt ROCm vLLM docker for perf validation purpose ( vllm-project#12281 )
Signed-off-by: Hongxia Yang <[email protected]>
* [VLM] Simplify post-processing of replacement info ( vllm-project#12269 )
Signed-off-by: DarkLight1337 <[email protected]>
* [ci/lint] Add back default arg for pre-commit ( vllm-project#12279 )
Signed-off-by: kevin <[email protected]>
* [CI] add docker volume prune to neuron CI ( vllm-project#12291 )
Signed-off-by: Liangfu Chen <[email protected]>
* [Ci/Build] Fix mypy errors on main ( vllm-project#12296 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Benchmark] More accurate TPOT calc in `benchmark_serving.py` ( vllm-project#12288 )
Signed-off-by: Nick Hill <[email protected]>
* [core] separate builder init and builder prepare for each batch ( vllm-project#12253 )
Signed-off-by: youkaichao <[email protected]>
* [Build] update requirements of no-device ( vllm-project#12299 )
Signed-off-by: Mengqing Cao <[email protected]>
* [Core] Support fully transparent sleep mode ( vllm-project#11743 )
Signed-off-by: youkaichao <[email protected]>
* [VLM] Avoid unnecessary tokenization ( vllm-project#12310 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model][Bugfix]: correct Aria model output ( vllm-project#12309 )
Signed-off-by: xffxff <[email protected]>
* [Bugfix][VLM] Fix mixed-modality inference backward compatibility for V0 ( vllm-project#12313 )
Signed-off-by: Roger Wang <[email protected]>
* [Doc] Add docs for prompt replacement ( vllm-project#12318 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] Fix the error in the tip for the --lora-modules parameter ( vllm-project#12319 )
Signed-off-by: wangerxiao <[email protected]>
* [Misc] Improve the readability of BNB error messages ( vllm-project#12320 )
Signed-off-by: Jee Jee Li <[email protected]>
* Skip tokenize/detokenize when it is disabled by arg --skip-tokenizer-init ( #367 )
* switching detokenize flag to be False
* detokenize = False for benchmarks
* restoring default in main vllm code for detokenize
* removing extra spaces
* moving detokenize to flag
* adding support for token ids
---------
Co-authored-by: maleksan85 <[email protected]>
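For readers unfamiliar with the flags referenced in #367 above, the sketch below shows how skipping tokenization/detokenization is typically wired up from the offline API. The model name and token ids are placeholders, and the snippet assumes vLLM's `skip_tokenizer_init` engine argument and the `detokenize` field on `SamplingParams`.

```python
from vllm import LLM, SamplingParams

# With the tokenizer skipped, prompts must be supplied as token ids and
# sampling must not request detokenized text.
llm = LLM(model="facebook/opt-125m", skip_tokenizer_init=True)
params = SamplingParams(max_tokens=32, detokenize=False)

outputs = llm.generate({"prompt_token_ids": [1, 2, 3, 4]}, params)
print(outputs[0].outputs[0].token_ids)  # raw token ids, no decoded text
```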
* [Bugfix] Fix HPU multiprocessing executor ( vllm-project#12167 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Core] Support `reset_prefix_cache` ( vllm-project#12284 )
* [Frontend][V1] Online serving performance improvements ( vllm-project#12287 )
* [AMD][Quantization] Add TritonScaledMMLinearKernel since int8 is broken for AMD ( vllm-project#12282 )
Signed-off-by: Randall Smith <[email protected]>
* FP8 FA fixes ( #381 )
* FP8 FA fixes
Summary:
Add missing clamp and fix reciprocal scale computation.
* linter
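The "missing clamp" and "reciprocal scale" wording in the FP8 FA fix above maps to a common pattern in FP8 attention paths: values are clamped into the FP8-representable range at quantization time, and the kernel consumes the reciprocal of the scale for dequantization. A hypothetical sketch of that pattern (not the actual kernel-side fix from #381):

```python
import torch

def quantize_fp8(x: torch.Tensor, scale: torch.Tensor):
    fp8_max = torch.finfo(torch.float8_e4m3fnuz).max
    # The clamp: out-of-range values would otherwise overflow in FP8.
    x_q = (x / scale).clamp(-fp8_max, fp8_max).to(torch.float8_e4m3fnuz)
    # Attention kernels typically consume 1/scale for dequantization.
    descale = scale.reciprocal()
    return x_q, descale

def dequantize_fp8(x_q: torch.Tensor, descale: torch.Tensor) -> torch.Tensor:
    # Dividing by the reciprocal scale is equivalent to multiplying by scale.
    return x_q.to(torch.float32) / descale
```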
* Returning the use of the proper stream in allreduce ( #382 )
* [Bugfix] Fixing AMD LoRA CI test. ( vllm-project#12329 )
Signed-off-by: Alexei V. Ivanov <[email protected]>
* [Docs] Update FP8 KV Cache documentation ( vllm-project#12238 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Docs] Document vulnerability disclosure process ( vllm-project#12326 )
Signed-off-by: Russell Bryant <[email protected]>
* [V1] Add `uncache_blocks` ( vllm-project#12333 )
* [doc] explain common errors around torch.compile ( vllm-project#12340 )
Signed-off-by: youkaichao <[email protected]>
* [Hardware][Gaudi][BugFix] Fix dataclass error due to triton package update ( vllm-project#12338 )
Signed-off-by: zhenwei <[email protected]>
* [Bugfix] Fix k_proj's bias for whisper self attention ( vllm-project#12342 )
Signed-off-by: Isotr0py <[email protected]>
* [Kernel] Flash Attention 3 Support ( vllm-project#12093 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Doc] Troubleshooting errors during model inspection ( vllm-project#12351 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] Simplify M-RoPE ( vllm-project#12352 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: imkero <[email protected]>
* [Bugfix] Fix broken internvl2 inference with v1 ( vllm-project#12360 )
Signed-off-by: Isotr0py <[email protected]>
* [core] add wake_up doc and some sanity check ( vllm-project#12361 )
Signed-off-by: youkaichao <[email protected]>
* [torch.compile] decouple compile sizes and cudagraph sizes ( vllm-project#12243 )
Signed-off-by: youkaichao <[email protected]>
* [FP8][Kernel] Dynamic kv cache scaling factors computation ( vllm-project#11906 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>
* [TPU] Update TPU CI to use torchxla nightly on 20250122 ( vllm-project#12334 )
Signed-off-by: Siyuan Liu <[email protected]>
* [Docs] Document Phi-4 support ( vllm-project#12362 )
Signed-off-by: Isotr0py <[email protected]>
* [BugFix] Fix parameter names and `process_after_weight_loading` for W4A16 MoE Group Act Order ( vllm-project#11528 )
Signed-off-by: ElizaWszola <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Misc] Fix OpenAI API Compatibility Issues in Benchmark Script ( vllm-project#12357 )
Signed-off-by: Junichi Sato <[email protected]>
* [Docs] Add meetup slides ( vllm-project#12345 )
Signed-off-by: Woosuk Kwon <[email protected]>
* Using pytorch commit past the point when rowwise PR ( pytorch/pytorch#144432 ) was merged ( #384 )
* [Docs] Update spec decode + structured output in compat matrix ( vllm-project#12373 )
Signed-off-by: Russell Bryant <[email protected]>
* [V1][Frontend] Coalesce bunched `RequestOutput`s ( vllm-project#12298 )
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
* Set weights_only=True when using torch.load() ( vllm-project#12366 )
Signed-off-by: Russell Bryant <[email protected]>
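The torch.load() hardening above is worth a one-line illustration: passing `weights_only=True` restricts unpickling to tensors and basic containers, so a malicious checkpoint cannot execute arbitrary code during load (the path below is a placeholder):

```python
import torch

# weights_only=True refuses to unpickle arbitrary Python objects.
state_dict = torch.load("checkpoint.bin", map_location="cpu", weights_only=True)
```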
* [Bugfix] Path join when building local path for S3 clone ( vllm-project#12353 )
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
* Update compressed-tensors version ( vllm-project#12367 )
* [V1] Increase default batch size for H100/H200 ( vllm-project#12369 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [perf] fix perf regression from vllm-project#12253 ( vllm-project#12380 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Use VisionArena Dataset for VLM Benchmarking ( vllm-project#12389 )
Signed-off-by: Roger Wang <[email protected]>
* [ci/build] fix wheel size check ( vllm-project#12396 )
Signed-off-by: youkaichao <[email protected]>
* [Hardware][Gaudi][Doc] Add missing step in setup instructions ( vllm-project#12382 )
* [ci/build] sync default value for wheel size ( vllm-project#12398 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Enable proxy support in benchmark script ( vllm-project#12356 )
Signed-off-by: Junichi Sato <[email protected]>
* [Bugfix][Kernel] Fix CUDA 11.8 being broken by FA3 build ( vllm-project#12375 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Applying scales rename to fp8 config ( #387 )
* [Misc] Remove deprecated code ( vllm-project#12383 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix][Kernel] FA3 Fix - RuntimeError: This flash attention build only supports pack_gqa (for build size reasons). ( vllm-project#12405 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Dev-docker Documentation Updates ( #378 )
* Dev-docker Documentation Updates
Minor updates to several sections, with links to other documents where appropriate.
* Fix formatting of GEMM filename
* README cleanup
- Reorder some sections of the README to make them easier to follow
- Improve formatting of bash commands
- Prefer use of huggingface model names instead of hard-coded directories
- Clean up wording
* Expanded sample commands for Latency and Throughput
* Fix markdown links
* Fix pre-commit errors
* Updates from review
Initial updates to incorporate feedback from a review session held with @t-parry
* Update script args to match current recommendations
* Remove recommended max-num-seqs values for now
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
* [Bugfix][Kernel] Fix moe align block issue for mixtral ( vllm-project#12413 )
* [Bugfix] Fix BLIP-2 processing ( vllm-project#12412 )
Signed-off-by: DarkLight1337 <[email protected]>
* [ROCm][MoE] MI300 tuned configs Mixtral-8x(7B,22B) | fp16, fp8 ( vllm-project#12408 )
Signed-off-by: Divakar Verma <[email protected]>
* [Misc] Add FA2 support to ViT MHA layer ( vllm-project#12355 )
Signed-off-by: Isotr0py <[email protected]>
* [TPU][CI] Update torchxla version in requirement-tpu.txt ( vllm-project#12422 )
Signed-off-by: Siyuan Liu <[email protected]>
* [Misc][Bugfix] FA3 support to ViT MHA layer ( vllm-project#12435 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [V1][Perf] Reduce scheduling overhead in model runner after cuda sync ( vllm-project#12094 )
Signed-off-by: Keyun Tong <[email protected]>
* [V1][Bugfix] Fix assertion when mm hashing is turned off ( vllm-project#12439 )
Signed-off-by: Roger Wang <[email protected]>
* [Misc] Revert FA on ViT vllm-project#12355 and vllm-project#12435 ( vllm-project#12445 )
* [Frontend] generation_config.json for maximum tokens( vllm-project#12242 )
Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
* [Bugfix] Disable w16a16 2of4 sparse CompressedTensors24 ( vllm-project#12417 )
Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: mgoin <[email protected]>
* [Bugfix/CI] Fix broken kernels/test_mha.py ( vllm-project#12450 )
* [Bugfix][Kernel] Fix perf regression caused by PR vllm-project#12405 ( vllm-project#12434 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Build/CI] Fix libcuda.so linkage ( vllm-project#12424 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [Frontend] Rerank API (Jina- and Cohere-compatible API) ( vllm-project#12376 )
Signed-off-by: Kyle Mistele <[email protected]>
* [DOC] Add link to vLLM blog ( vllm-project#12460 )
Signed-off-by: Yuan Tang <[email protected]>
* [V1] Avoid list creation in input preparation ( vllm-project#12457 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Frontend] Support scores endpoint in run_batch ( vllm-project#12430 )
Signed-off-by: Pooya Davoodi <[email protected]>
* [Bugfix] Fix Granite 3.0 MoE model loading ( vllm-project#12446 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix missing seq_start_loc in xformers prefill metadata ( vllm-project#12464 )
Signed-off-by: Isotr0py <[email protected]>
* [V1][Minor] Minor optimizations for update_from_output ( vllm-project#12454 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Bugfix] Fix gpt2 GGUF inference ( vllm-project#12467 )
Signed-off-by: Isotr0py <[email protected]>
* [Build] Only build 9.0a for scaled_mm and sparse kernels ( vllm-project#12339 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [V1][Metrics] Add initial Prometheus logger ( vllm-project#12416 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [V1][CI/Test] Do basic test for top-p & top-k sampling ( vllm-project#12469 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [FlashInfer] Upgrade to 0.2.0 ( vllm-project#11194 )
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: youkaichao <[email protected]>
* Support FP8 FA from Quark format ( #388 )
* Support FP8 FA from Quark format
* Support FP8 FA from Quark format
* nit: update comment
* Direct call on ROCm
* 20250127 docs update ( #392 )
* updating code blocks
* typo
* updated manifest
* Including feedback
* whitespace
* Deepseek instructions
* hyperlink fix
* hyperlink fix
* updating what is new
* cpx update
* typo
* whitespace
* whitespace
* Faster Custom Paged Attention kernels ( #372 )
* integrate new cpa kernel, update tests and benchmark
* added comments to mfma4 kernel
* further comments for mfma16 kernel
* clang-format
* Lint
* add flag for logits rtz conversion and disable by default
* lint
* [Bugfix]: Fix paged attention unit tests of #372 ( #389 )
* [Bugfix]: fix paged attention tests based on the updated kernels in `csrc/attention/paged_attention_v1.cu`, `csrc/attention/paged_attention_v2.cu` and `csrc/rocm/attention.cu`.
* improve code documentation.
* lint
---------
Co-authored-by: vllmellm <[email protected]>
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: TJian <[email protected]>
Co-authored-by: vllmellm <[email protected]>
* Using a more precise profiling on ROCm to properly account for weights padding ( #394 )
* Update Dockerfile.rocm
* [Bugfix]: include the env variables required for running FastSyncLLM
Signed-off-by: vllmellm <[email protected]>
* fix pre-commit lint
Signed-off-by: vllmellm <[email protected]>
* [Bugfix] included missing environment variable
Signed-off-by: vllmellm <[email protected]>
---------
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Sungjae Lee <[email protected]>
Signed-off-by: Yida Wu <[email protected]>
Signed-off-by: Chenguang Li <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Alex-Brooks <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Shanshan Shen <[email protected]>
Signed-off-by: elijah <[email protected]>
Signed-off-by: Yikun <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Konrad Zawora <[email protected]>
Signed-off-by: tjtanaa <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: yisheng <[email protected]>
Signed-off-by: Abatom <[email protected]>
Signed-off-by: Liangfu Chen <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Yuan Zhou <[email protected]>
Signed-off-by: Sourashis Roy <[email protected]>
Signed-off-by: Nishidha Panpaliya <[email protected]>
Signed-off-by: Ilya Lavrenov <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: Wallas Santos <[email protected]>
Signed-off-by: jiang1.li <[email protected]>
Signed-off-by: yan ma <[email protected]>
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Maxime Fournioux <[email protected]>
Signed-off-by: Ye Qi <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: Kunshang Ji <[email protected]>
Signed-off-by: Kuntai Du <[email protected]>
Signed-off-by: Ren MinMin <[email protected]>
Signed-off-by: Travis Johnson <[email protected]>
Signed-off-by: Fred Reiss <[email protected]>
Signed-off-by: shaochangxu.scx <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: Rafael Vasquez <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: Kyle Sayers <[email protected]>
Signed-off-by: Rahul Tuli <[email protected]>
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Divakar Verma <[email protected]>
Signed-off-by: Gregory Shtrasberg <[email protected]>
Signed-off-by: hongxyan <[email protected]>
Signed-off-by: Michal Adamczyk <[email protected]>
Signed-off-by: zibai <[email protected]>
Signed-off-by: Martin Gleize <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: isikhi <[email protected]>
Signed-off-by: Jason Cheng <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]>
Signed-off-by: Thomas Parnell <[email protected]>
Signed-off-by: Jannis Schönleber <[email protected]>
Signed-off-by: rickyx <[email protected]>
Signed-off-by: Andy Lo <[email protected]>
Signed-off-by: Adrian Cole <[email protected]>
Signed-off-by: maleksan85 <[email protected]>
Signed-off-by: Hongxia Yang <[email protected]>
Signed-off-by: kevin <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: xffxff <[email protected]>
Signed-off-by: wangerxiao <[email protected]>
Signed-off-by: Alexei V. Ivanov <[email protected]>
Signed-off-by: zhenwei <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Siyuan Liu <[email protected]>
Signed-off-by: ElizaWszola <[email protected]>
Signed-off-by: Junichi Sato <[email protected]>
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
Signed-off-by: Keyun Tong <[email protected]>
Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Kyle Mistele <[email protected]>
Signed-off-by: Pooya Davoodi <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: vllmellm <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Akshat Tripathi <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Avshalom Manevich <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Yangcheng Li <[email protected]>
Co-authored-by: Siyuan Li <[email protected]>
Co-authored-by: Sungjae Lee <[email protected]>
Co-authored-by: Concurrensee <[email protected]>
Co-authored-by: Chenguang Li <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Alex Brooks <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Shanshan Shen <[email protected]>
Co-authored-by: elijah <[email protected]>
Co-authored-by: Yikun Jiang <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Steve Luo <[email protected]>
Co-authored-by: mgoin <[email protected]>
Co-authored-by: Alexei-V-Ivanov-AMD <[email protected]>
Co-authored-by: Alexei V. Ivanov <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: Konrad Zawora <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
Co-authored-by: maang-h <[email protected]>
Co-authored-by: YiSheng5 <[email protected]>
Co-authored-by: Zhonghua Deng <[email protected]>
Co-authored-by: Liangfu Chen <[email protected]>
Co-authored-by: XiaobingZhang <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Yuan <[email protected]>
Co-authored-by: jiangjiadi <[email protected]>
Co-authored-by: jiadi.jjd <[email protected]>
Co-authored-by: sroy745 <[email protected]>
Co-authored-by: Jie Fu (傅杰) <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: WangErXiao <[email protected]>
Co-authored-by: Nishidha <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Wallas Henrique <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: Yan Ma <[email protected]>
Co-authored-by: rasmith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: Maxime Fournioux <[email protected]>
Co-authored-by: Guspan Tanadi <[email protected]>
Co-authored-by: Ye (Charlotte) Qi <[email protected]>
Co-authored-by: yeq <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
Co-authored-by: Charles Frye <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: cennn <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: minmin <[email protected]>
Co-authored-by: Ren MinMin <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Fred Reiss <[email protected]>
Co-authored-by: shaochangxu <[email protected]>
Co-authored-by: shaochangxu.scx <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: sixgod <[email protected]>
Co-authored-by: Rafael Vasquez <[email protected]>
Co-authored-by: Elfie Guo <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Kyle Sayers <[email protected]>
Co-authored-by: Rahul Tuli <[email protected]>
Co-authored-by: Keyun Tong <[email protected]>
Co-authored-by: RunningLeon <[email protected]>
Co-authored-by: kewang-xlnx <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: tvirolai-amd <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Zhaoyi Li <[email protected]>
Co-authored-by: charlifu <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Hongxia Yang <[email protected]>
Co-authored-by: yancong <[email protected]>
Co-authored-by: Michal Adamczyk <[email protected]>
Co-authored-by: gujing <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: Işık <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Cheng Kuan Yong Jason <[email protected]>
Co-authored-by: Jinzhen Lin <[email protected]>
Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: Jannis Schönleber <[email protected]>
Co-authored-by: Ricky Xu <[email protected]>
Co-authored-by: Andy Lo <[email protected]>
Co-authored-by: Adrian Cole <[email protected]>
Co-authored-by: Jani Monoses <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: maleksan85 <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: zhou fan <[email protected]>
Co-authored-by: ilia-cher <[email protected]>
Co-authored-by: liuzhenwei <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>
Co-authored-by: Siyuan Liu <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Junichi Sato <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: omer-dayan <[email protected]>
Co-authored-by: Mohit Deopujari <[email protected]>
Co-authored-by: Jeremy Arnold <[email protected]>
Co-authored-by: Matthew Hendrey <[email protected]>
Co-authored-by: Kyle Mistele <[email protected]>
Co-authored-by: Pooya Davoodi <[email protected]>
Co-authored-by: Mark McLoughlin <[email protected]>
Co-authored-by: Bowen Wang <[email protected]>
Co-authored-by: Bowen Bao <[email protected]>
Co-authored-by: arakowsk-amd <[email protected]>
Co-authored-by: sanyalington <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: vllmellm <[email protected]>
NickLucche pushed a commit to NickLucche/vllm that referenced this pull request Feb 7, 2025 [Frontend][V1] Online serving performance improvements ( vllm-project#12287 ) … 0048cc4
GWS0428 pushed a commit to GWS0428/VARserve that referenced this pull request Feb 12, 2025 [Frontend][V1] Online serving performance improvements ( vllm-project#12287 ) … a432d0d
hongxiayang added a commit to ROCm/vllm that referenced this pull request Feb 19, 2025 [FEAT] [AITER] Support AITER operators: Fused MoE, Linear, Norm ( #436 ) … 4c8c86d
* [Doc] Update Quantization Hardware Support Documentation ( vllm-project#12025 )
Signed-off-by: tjtanaa <[email protected]>
Co-authored-by: tjtanaa <[email protected]>
* [HPU][misc] add comments for explanation ( vllm-project#12034 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix various bugs in multi-modal processor ( vllm-project#12031 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Kernel] Revert the API change of Attention.forward ( vllm-project#12038 )
Signed-off-by: Chen Zhang <[email protected]>
* [Platform] Add output for Attention Backend ( vllm-project#11981 )
Signed-off-by: wangxiyuan <[email protected]>
* [Bugfix][Kernel] Give unique name to BlockSparseFlashAttention ( vllm-project#12040 )
Signed-off-by: Chen Zhang <[email protected]>
* Explain where the engine args go when using Docker ( vllm-project#12041 )
Signed-off-by: Harry Mellor <[email protected]>
* Docs lint
* [Doc]: Update the Json Example of the `Engine Arguments` document ( vllm-project#12045 )
* [Misc] Merge bitsandbytes_stacked_params_mapping and packed_modules_mapping ( vllm-project#11924 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Kernel] Support MulAndSilu ( vllm-project#11624 )
Signed-off-by: Jee Jee Li <[email protected]>
* [HPU][Bugfix] Don't use /dev/accel/accel0 for HPU autodetection in setup.py ( vllm-project#12046 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Platform] move current_memory_usage() into platform ( vllm-project#11369 )
Signed-off-by: Shanshan Shen <[email protected]>
* [V1][BugFix] Fix edge case in VLM scheduling ( vllm-project#12065 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Misc] Add multipstep chunked-prefill support for FlashInfer ( vllm-project#10467 )
* [core] Turn off GPU communication overlap for Ray executor ( vllm-project#12051 )
Signed-off-by: Rui Qiao <[email protected]>
* [core] platform agnostic executor via collective_rpc ( vllm-project#11256 )
Signed-off-by: youkaichao <[email protected]>
* [Doc] Update examples to remove SparseAutoModelForCausalLM ( vllm-project#12062 )
Signed-off-by: Kyle Sayers <[email protected]>
* [V1][Prefix Cache] Move the logic of num_computed_tokens into KVCacheManager ( vllm-project#12003 )
* Fix: cases with empty sparsity config ( vllm-project#12057 )
Signed-off-by: Rahul Tuli <[email protected]>
* Type-fix: make execute_model output type optional ( vllm-project#12020 )
* [Platform] Do not raise error if _Backend is not found ( vllm-project#12023 )
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
* [Model]: Support internlm3 ( vllm-project#12037 )
* Misc: allow to use proxy in `HTTPConnection` ( vllm-project#12042 )
Signed-off-by: Yuan Zhou <[email protected]>
* [Misc][Quark] Upstream Quark format to VLLM ( vllm-project#10765 )
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Doc]: Update `OpenAI-Compatible Server` documents ( vllm-project#12082 )
* [Bugfix] use right truncation for non-generative tasks ( vllm-project#12050 )
Signed-off-by: Joe Runde <[email protected]>
* [V1][Core] Autotune encoder cache budget ( vllm-project#11895 )
Signed-off-by: Roger Wang <[email protected]>
* [Bugfix] Fix _get_lora_device for HQQ marlin ( vllm-project#12090 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* Allow hip sources to be directly included when compiling for rocm. ( vllm-project#12087 )
* [Core] Default to using per_token quantization for fp8 when cutlass is supported. ( vllm-project#8651 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: mgoin <[email protected]>
* [Doc] Add documentation for specifying model architecture ( vllm-project#12105 )
* Various cosmetic/comment fixes ( vllm-project#12089 )
Signed-off-by: mgoin <[email protected]>
* [Bugfix] Remove hardcoded `head_size=256` for Deepseek v2 and v3 ( vllm-project#12067 )
Signed-off-by: Isotr0py <[email protected]>
* Support torchrun and SPMD-style offline inference ( vllm-project#12071 )
Signed-off-by: youkaichao <[email protected]>
* [core] LLM.collective_rpc interface and RLHF example ( vllm-project#12084 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix max image feature size for Llava-one-vision ( vllm-project#12104 )
Signed-off-by: Roger Wang <[email protected]>
* Enable user marker for vllm profiling ( #357 )
* Enable user marker for vllm profiling
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
* [misc] Add LoRA kernel micro benchmarks ( vllm-project#11579 )
* [Model] Add support for deepseek-vl2-tiny model ( vllm-project#12068 )
Signed-off-by: Isotr0py <[email protected]>
* Deepseek V3 support ( #364 )
* Changing the hard coded datatype to see if it's enough for the model to work
* Picking the upstrteam moe kernel version
* make upstream fix for v3 also works for rocm v2
* Conditional fnuz dtype
* Requantizing from fn to fnuz
* Requantizing moe as well
* Actually requantizing moe weights
* Conditional requantization and assert on padding in block quant
* Format
---------
Co-authored-by: charlifu <[email protected]>
* [Bugfix] Set enforce_eager automatically for mllama ( vllm-project#12127 )
Signed-off-by: Chen Zhang <[email protected]>
* [Bugfix] Fix a path bug in disaggregated prefill example script. ( vllm-project#12121 )
Signed-off-by: Kuntai Du <[email protected]>
* [CI]add genai-perf benchmark in nightly benchmark ( vllm-project#10704 )
Signed-off-by: Kunshang Ji <[email protected]>
* [Doc] Add instructions on using Podman when SELinux is active ( vllm-project#12136 )
Signed-off-by: Yuan Tang <[email protected]>
* [Bugfix] Fix issues in CPU build Dockerfile ( vllm-project#12135 )
Signed-off-by: Yuan Tang <[email protected]>
* [BugFix] add more `is not None` check in VllmConfig.__post_init__ ( vllm-project#12138 )
Signed-off-by: Chen Zhang <[email protected]>
* [Misc] Add deepseek_vl2 chat template ( vllm-project#12143 )
Signed-off-by: Isotr0py <[email protected]>
* [ROCm][MoE] moe tuning support for rocm ( vllm-project#12049 )
Signed-off-by: Divakar Verma <[email protected]>
* [V1] Move more control of kv cache initialization from model_executor to EngineCore ( vllm-project#11960 )
Signed-off-by: Chen Zhang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
* [Misc][LoRA] Improve the readability of LoRA error messages ( vllm-project#12102 )
Signed-off-by: Jee Jee Li <[email protected]>
* [CI/Build][CPU][Bugfix] Fix CPU CI ( vllm-project#12150 )
Signed-off-by: jiang1.li <[email protected]>
* [core] allow callable in collective_rpc ( vllm-project#12151 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix score api for missing max_model_len validation ( vllm-project#12119 )
Signed-off-by: Wallas Santos <[email protected]>
* [Bugfix] Mistral tokenizer encode accept list of str ( vllm-project#12149 )
Signed-off-by: Kunshang Ji <[email protected]>
* [AMD][FP8] Using MI300 FP8 format on ROCm for block_quant ( vllm-project#12134 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* [torch.compile] disable logging when cache is disabled ( vllm-project#12043 )
Signed-off-by: youkaichao <[email protected]>
* [misc] fix cross-node TP ( vllm-project#12166 )
Signed-off-by: youkaichao <[email protected]>
* [AMD][CI/Build][Bugfix] use pytorch stale wheel ( vllm-project#12172 )
Signed-off-by: hongxyan <[email protected]>
* [core] further polish memory profiling ( vllm-project#12126 )
Signed-off-by: youkaichao <[email protected]>
* [Docs] Fix broken link in SECURITY.md ( vllm-project#12175 )
Signed-off-by: Russell Bryant <[email protected]>
* [Model] Port deepseek-vl2 processor, remove dependency ( vllm-project#12169 )
Signed-off-by: Isotr0py <[email protected]>
* [core] clean up executor class hierarchy between v1 and v0 ( vllm-project#12171 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Support register quantization method out-of-tree ( vllm-project#11969 )
* [V1] Collect env var for usage stats ( vllm-project#12115 )
* [BUGFIX] Move scores to float32 in case of running xgrammar on cpu ( vllm-project#12152 )
Signed-off-by: Michal Adamczyk <[email protected]>
* [Bugfix] Fix multi-modal processors for transformers 4.48 ( vllm-project#12187 )
* [torch.compile] store inductor compiled Python file ( vllm-project#12182 )
Signed-off-by: youkaichao <[email protected]>
* benchmark_serving support --served-model-name param ( vllm-project#12109 )
Signed-off-by: zibai <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Misc] Add BNB support to GLM4-V model ( vllm-project#12184 )
Signed-off-by: Isotr0py <[email protected]>
* [V1] Add V1 support of Qwen2-VL ( vllm-project#12128 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>
* [Model] Support for fairseq2 Llama ( vllm-project#11442 )
Signed-off-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>
* [Bugfix] Fix num_heads value for simple connector when tp enabled ( vllm-project#12074 )
Signed-off-by: Shangming Cai <[email protected]>
* [torch.compile] fix sym_tensor_indices ( vllm-project#12191 )
Signed-off-by: youkaichao <[email protected]>
* Move linting to `pre-commit` ( vllm-project#11975 )
Signed-off-by: Harry Mellor <[email protected]>
* [DOC] Fix typo in docstring and assert message ( vllm-project#12194 )
Signed-off-by: Yuan Tang <[email protected]>
* [DOC] Add missing docstring in LLMEngine.add_request() ( vllm-project#12195 )
Signed-off-by: Yuan Tang <[email protected]>
* [Bugfix] Fix incorrect types in LayerwiseProfileResults ( vllm-project#12196 )
Signed-off-by: Yuan Tang <[email protected]>
* [Model] Add Qwen2 PRM model support ( vllm-project#12202 )
Signed-off-by: Isotr0py <[email protected]>
* [Core] Interface for accessing model from `VllmRunner` ( vllm-project#10353 )
Signed-off-by: DarkLight1337 <[email protected]>
* [misc] add placeholder format.sh ( vllm-project#12206 )
Signed-off-by: youkaichao <[email protected]>
* [CI/Build] Remove dummy CI steps ( vllm-project#12208 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI/Build] Make pre-commit faster ( vllm-project#12212 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model] Upgrade Aria to transformers 4.48 ( vllm-project#12203 )
Signed-off-by: DarkLight1337 <[email protected]>
* [misc] print a message to suggest how to bypass commit hooks ( vllm-project#12217 )
Signed-off-by: youkaichao <[email protected]>
* [core][bugfix] configure env var during import vllm ( vllm-project#12209 )
Signed-off-by: youkaichao <[email protected]>
* [V1] Remove `_get_cache_block_size` ( vllm-project#12214 )
Signed-off-by: Chen Zhang <[email protected]>
* [Misc] Pass `attention` to impl backend ( vllm-project#12218 )
Signed-off-by: wangxiyuan <[email protected]>
* [Bugfix] Fix `HfExampleModels.find_hf_info` ( vllm-project#12223 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI] Pass local python version explicitly to pre-commit mypy.sh ( vllm-project#12224 )
Signed-off-by: Chen Zhang <[email protected]>
* Using ROCm6.3.1 base docker and building hipblas-common ( #366 )
* [Misc] Update CODEOWNERS ( vllm-project#12229 )
* fix: update platform detection for M-series arm based MacBook processors ( vllm-project#12227 )
Signed-off-by: isikhi <[email protected]>
* [misc] add cuda runtime version to usage data ( vllm-project#12190 )
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [bugfix] catch xgrammar unsupported array constraints ( vllm-project#12210 )
Signed-off-by: Jason Cheng <[email protected]>
* [Kernel] optimize moe_align_block_size for cuda graph and large num_experts (e.g. DeepSeek-V3) ( vllm-project#12222 )
Signed-off-by: Jinzhen Lin <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
* Add quantization and guided decoding CODEOWNERS ( vllm-project#12228 )
Signed-off-by: mgoin <[email protected]>
* [AMD][Build] Porting dockerfiles from the ROCm/vllm fork ( vllm-project#11777 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* [BugFix] Fix GGUF tp>1 when vocab_size is not divisible by 64 ( vllm-project#12230 )
Signed-off-by: NickLucche <[email protected]>
* [ci/build] disable failed and flaky tests ( vllm-project#12240 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Rename `MultiModalInputsV2 -> MultiModalInputs` ( vllm-project#12244 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc]Add BNB quantization for PaliGemmaForConditionalGeneration ( vllm-project#12237 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Misc] Remove redundant TypeVar from base model ( vllm-project#12248 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix mm_limits access for merged multi-modal processor ( vllm-project#12252 )
Signed-off-by: DarkLight1337 <[email protected]>
* [torch.compile] transparent compilation with more logging ( vllm-project#12246 )
Signed-off-by: youkaichao <[email protected]>
* [V1][Bugfix] Fix data item ordering in mixed-modality inference ( vllm-project#12259 )
Signed-off-by: Roger Wang <[email protected]>
* Remove pytorch comments for outlines + compressed-tensors ( vllm-project#12260 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Platform] improve platforms getattr ( vllm-project#12264 )
Signed-off-by: Mengqing Cao <[email protected]>
* [ci/build] update nightly torch for gh200 test ( vllm-project#12270 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] fix race condition that leads to wrong order of token returned ( vllm-project#10802 )
Signed-off-by: Jannis Schönleber <[email protected]>
* [Kernel] fix moe_align_block_size error condition ( vllm-project#12239 )
Signed-off-by: Jinzhen Lin <[email protected]>
* [v1][stats][1/n] Add RequestStatsUpdate and RequestStats types ( vllm-project#10907 )
Signed-off-by: rickyx <[email protected]>
* [Bugfix] Multi-sequence broken ( vllm-project#11898 )
Signed-off-by: Andy Lo <[email protected]>
* [Misc] Remove experimental dep from tracing.py ( vllm-project#12007 )
Signed-off-by: Adrian Cole <[email protected]>
* [Misc] Set default backend to SDPA for get_vit_attn_backend ( vllm-project#12235 )
Signed-off-by: wangxiyuan <[email protected]>
* [Core] Free CPU pinned memory on environment cleanup ( vllm-project#10477 )
* Update pre-commit.yml ( #374 )
* Update pre-commit.yml
* Reapplying missing format
* New codespell exclude location
---------
Co-authored-by: Kevin H. Luu <[email protected]>
* [bugfix] moe tuning. rm is_navi() ( vllm-project#12273 )
Signed-off-by: Divakar Verma <[email protected]>
* [BUGFIX] When skip_tokenize_init and multistep are set, execution crashes ( vllm-project#12277 )
Signed-off-by: maleksan85 <[email protected]>
Co-authored-by: maleksan85 <[email protected]>
* [Documentation][AMD] Add information about prebuilt ROCm vLLM docker for perf validation purpose ( vllm-project#12281 )
Signed-off-by: Hongxia Yang <[email protected]>
* [VLM] Simplify post-processing of replacement info ( vllm-project#12269 )
Signed-off-by: DarkLight1337 <[email protected]>
* [ci/lint] Add back default arg for pre-commit ( vllm-project#12279 )
Signed-off-by: kevin <[email protected]>
* [CI] add docker volume prune to neuron CI ( vllm-project#12291 )
Signed-off-by: Liangfu Chen <[email protected]>
* [Ci/Build] Fix mypy errors on main ( vllm-project#12296 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Benchmark] More accurate TPOT calc in `benchmark_serving.py` ( vllm-project#12288 )
Signed-off-by: Nick Hill <[email protected]>
* [core] separate builder init and builder prepare for each batch ( vllm-project#12253 )
Signed-off-by: youkaichao <[email protected]>
* [Build] update requirements of no-device ( vllm-project#12299 )
Signed-off-by: Mengqing Cao <[email protected]>
* [Core] Support fully transparent sleep mode ( vllm-project#11743 )
Signed-off-by: youkaichao <[email protected]>
* [VLM] Avoid unnecessary tokenization ( vllm-project#12310 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model][Bugfix]: correct Aria model output ( vllm-project#12309 )
Signed-off-by: xffxff <[email protected]>
* [Bugfix][VLM] Fix mixed-modality inference backward compatibility for V0 ( vllm-project#12313 )
Signed-off-by: Roger Wang <[email protected]>
* [Doc] Add docs for prompt replacement ( vllm-project#12318 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] Fix the error in the tip for the --lora-modules parameter ( vllm-project#12319 )
Signed-off-by: wangerxiao <[email protected]>
* [Misc] Improve the readability of BNB error messages ( vllm-project#12320 )
Signed-off-by: Jee Jee Li <[email protected]>
* Skip tokenize/detokenize when it is disabled by arg --skip-tokenizer-init ( #367 )
* switching detokenize flag to be False
* detokenize = False for benchmarks
* restoring default in main vllm code for detokenize
* removing extra spaces
* moving detokenize to flag
* adding support for token ids
---------
Co-authored-by: maleksan85 <[email protected]>
* [Bugfix] Fix HPU multiprocessing executor ( vllm-project#12167 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Core] Support `reset_prefix_cache` ( vllm-project#12284 )
* [Frontend][V1] Online serving performance improvements ( vllm-project#12287 )
* [AMD][Quantization] Add TritonScaledMMLinearKernel since int8 is broken for AMD ( vllm-project#12282 )
Signed-off-by: Randall Smith <[email protected]>
* FP8 FA fixes ( #381 )
* FP8 FA fixes
Summary:
Add missing clamp and fix reciprocal scale computation.
* linter
* Returning the use of the proper stream in allreduce ( #382 )
* [Bugfix] Fixing AMD LoRA CI test. ( vllm-project#12329 )
Signed-off-by: Alexei V. Ivanov <[email protected]>
* [Docs] Update FP8 KV Cache documentation ( vllm-project#12238 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Docs] Document vulnerability disclosure process ( vllm-project#12326 )
Signed-off-by: Russell Bryant <[email protected]>
* [V1] Add `uncache_blocks` ( vllm-project#12333 )
* [doc] explain common errors around torch.compile ( vllm-project#12340 )
Signed-off-by: youkaichao <[email protected]>
* [Hardware][Gaudi][BugFix] Fix dataclass error due to triton package update ( vllm-project#12338 )
Signed-off-by: zhenwei <[email protected]>
* [Bugfix] Fix k_proj's bias for whisper self attention ( vllm-project#12342 )
Signed-off-by: Isotr0py <[email protected]>
* [Kernel] Flash Attention 3 Support ( vllm-project#12093 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Doc] Troubleshooting errors during model inspection ( vllm-project#12351 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] Simplify M-RoPE ( vllm-project#12352 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: imkero <[email protected]>
* [Bugfix] Fix broken internvl2 inference with v1 ( vllm-project#12360 )
Signed-off-by: Isotr0py <[email protected]>
* [core] add wake_up doc and some sanity check ( vllm-project#12361 )
Signed-off-by: youkaichao <[email protected]>
* [torch.compile] decouple compile sizes and cudagraph sizes ( vllm-project#12243 )
Signed-off-by: youkaichao <[email protected]>
* [FP8][Kernel] Dynamic kv cache scaling factors computation ( vllm-project#11906 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>
* [TPU] Update TPU CI to use torchxla nightly on 20250122 ( vllm-project#12334 )
Signed-off-by: Siyuan Liu <[email protected]>
* [Docs] Document Phi-4 support ( vllm-project#12362 )
Signed-off-by: Isotr0py <[email protected]>
* [BugFix] Fix parameter names and `process_after_weight_loading` for W4A16 MoE Group Act Order ( vllm-project#11528 )
Signed-off-by: ElizaWszola <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Misc] Fix OpenAI API Compatibility Issues in Benchmark Script ( vllm-project#12357 )
Signed-off-by: Junichi Sato <[email protected]>
* [Docs] Add meetup slides ( vllm-project#12345 )
Signed-off-by: Woosuk Kwon <[email protected]>
* Using pytorch commit past the point when rowwise PR ( pytorch/pytorch#144432 ) was merged ( #384 )
* Integrated ater: kvcache pa gemm rmsnorm
* Integrated ater: kvcache pa gemm rmsnorm
* fix pa
* fix
* replace topk softmax
* [Docs] Update spec decode + structured output in compat matrix ( vllm-project#12373 )
Signed-off-by: Russell Bryant <[email protected]>
* replace fp moe kernel with aiter kernel
* [V1][Frontend] Coalesce bunched `RequestOutput`s ( vllm-project#12298 )
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
* Set weights_only=True when using torch.load() ( vllm-project#12366 )
Signed-off-by: Russell Bryant <[email protected]>
* [Bugfix] Path join when building local path for S3 clone ( vllm-project#12353 )
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
* change ater to aiter
* Update compressed-tensors version ( vllm-project#12367 )
* [V1] Increase default batch size for H100/H200 ( vllm-project#12369 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [perf] fix perf regression from vllm-project#12253 ( vllm-project#12380 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Use VisionArena Dataset for VLM Benchmarking ( vllm-project#12389 )
Signed-off-by: Roger Wang <[email protected]>
* [ci/build] fix wheel size check ( vllm-project#12396 )
Signed-off-by: youkaichao <[email protected]>
* [Hardware][Gaudi][Doc] Add missing step in setup instructions ( vllm-project#12382 )
* [ci/build] sync default value for wheel size ( vllm-project#12398 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Enable proxy support in benchmark script ( vllm-project#12356 )
Signed-off-by: Junichi Sato <[email protected]>
* [Bugfix][Kernel] Fix CUDA 11.8 being broken by FA3 build ( vllm-project#12375 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Applying scales rename to fp8 config
* Applying scales rename to fp8 config ( #387 )
* Update Dockerfile.rocm
* [Misc] Remove deprecated code ( vllm-project#12383 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix][Kernel] FA3 Fix - RuntimeError: This flash attention build only supports pack_gqa (for build size reasons). ( vllm-project#12405 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Using aiter moe kernel
* Dev-docker Documentation Updates ( #378 )
* Dev-docker Documentation Updates
Minor updates to several sections, with links to other documents where appropriate.
* Fix formatting of GEMM filename
* README cleanup
- Reorder some sections of the README to make them easier to follow
- Improve formatting of bash commands
- Prefer use of huggingface model names instead of hard-coded directories
- Clean up wording
* Expanded sample commands for Latency and Throughput
* Fix markdown links
* Fix pre-commit errors
* Updates from review
Initial updates to incorporate feedback from a review session held with @t-parry
* Update script args to match current recommendations
* Remove recommended max-num-seqs values for now
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
* [Bugfix][Kernel] Fix moe align block issue for mixtral ( vllm-project#12413 )
* [Bugfix] Fix BLIP-2 processing ( vllm-project#12412 )
Signed-off-by: DarkLight1337 <[email protected]>
* [ROCm][MoE] MI300 tuned configs Mixtral-8x(7B,22B) | fp16, fp8 ( vllm-project#12408 )
Signed-off-by: Divakar Verma <[email protected]>
* [Misc] Add FA2 support to ViT MHA layer ( vllm-project#12355 )
Signed-off-by: Isotr0py <[email protected]>
* [TPU][CI] Update torchxla version in requirement-tpu.txt ( vllm-project#12422 )
Signed-off-by: Siyuan Liu <[email protected]>
* [Misc][Bugfix] FA3 support to ViT MHA layer ( vllm-project#12435 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [V1][Perf] Reduce scheduling overhead in model runner after cuda sync ( vllm-project#12094 )
Signed-off-by: Keyun Tong <[email protected]>
* [V1][Bugfix] Fix assertion when mm hashing is turned off ( vllm-project#12439 )
Signed-off-by: Roger Wang <[email protected]>
* fix pa copy
* pa update
* [Misc] Revert FA on ViT vllm-project#12355 and vllm-project#12435 ( vllm-project#12445 )
* [Frontend] generation_config.json for maximum tokens( vllm-project#12242 )
Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
* [Bugfix] Disable w16a16 2of4 sparse CompressedTensors24 ( vllm-project#12417 )
Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: mgoin <[email protected]>
* add fp16 pa support for aiter
* [Bugfix/CI] Fix broken kernels/test_mha.py ( vllm-project#12450 )
* [Bugfix][Kernel] Fix perf regression caused by PR vllm-project#12405 ( vllm-project#12434 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Build/CI] Fix libcuda.so linkage ( vllm-project#12424 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [Frontend] Rerank API (Jina- and Cohere-compatible API) ( vllm-project#12376 )
Signed-off-by: Kyle Mistele <[email protected]>
* [DOC] Add link to vLLM blog ( vllm-project#12460 )
Signed-off-by: Yuan Tang <[email protected]>
* [V1] Avoid list creation in input preparation ( vllm-project#12457 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Frontend] Support scores endpoint in run_batch ( vllm-project#12430 )
Signed-off-by: Pooya Davoodi <[email protected]>
* [Bugfix] Fix Granite 3.0 MoE model loading ( vllm-project#12446 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix missing seq_start_loc in xformers prefill metadata ( vllm-project#12464 )
Signed-off-by: Isotr0py <[email protected]>
* [V1][Minor] Minor optimizations for update_from_output ( vllm-project#12454 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Bugfix] Fix gpt2 GGUF inference ( vllm-project#12467 )
Signed-off-by: Isotr0py <[email protected]>
* aiter build instructions
* [Build] Only build 9.0a for scaled_mm and sparse kernels ( vllm-project#12339 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Copy to the right path
* [V1][Metrics] Add initial Prometheus logger ( vllm-project#12416 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [V1][CI/Test] Do basic test for top-p & top-k sampling ( vllm-project#12469 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [FlashInfer] Upgrade to 0.2.0 ( vllm-project#11194 )
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: youkaichao <[email protected]>
* Support FP8 FA from Quark format ( #388 )
* Support FP8 FA from Quark format
* Support FP8 FA from Quark format
* nit: update comment
* Direct call on ROCm
* 20250127 docs update ( #392 )
* updating code blocks
* typo
* updated manifest
* Including feedback
* whitespace
* Deepseek instructions
* hyperlink fix
* hyperlink fix
* updating what is new
* cpx update
* typo
* whitespace
* whitespace
* Add env var toggles to disable AITER MoE or PA (both by default on)
* Update accuracy benchmark for batch size > 1
* Add a few more AITER toggles for norm and linear layers
* Faster Custom Paged Attention kernels ( #372 )
* integrate new cpa kernel, update tests and benchmark
* added comments to mfma4 kernel
* further comments for mfma16 kernel
* clang-format
* Lint
* add flag for logits rtz conversion and disable by default
* lint
* [Bugfix]: Fix paged attention unit tests of #372 ( #389 )
* [Bugfix]: fix paged attention tests based on the updated kernels in `csrc/attention/paged_attention_v1.cu`,`csrc/attention/paged_attention_v2.cu` and `csrc/rocm/attention.cu`.
* improve code documentation.
* lint
---------
Co-authored-by: vllmellm <[email protected]>
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: TJian <[email protected]>
Co-authored-by: vllmellm <[email protected]>
* Using a more precise profiling on ROCm to properly account for weights padding ( #394 )
* Public aiter repo
* Fail if aiter build failed silently
* Aiter can only be built on MI300x
* Typo fix
* Aiter PA off by default
* Changes to support updated aiter FP8 PA
* Support FP8 and INT8 KV cache according to ROCm/aiter#90
* add moe weight shuffle for dynamic quant and unquantized path
Signed-off-by: charlifu <[email protected]>
* Use FP16-native PA after support in ROCm/aiter#97
* Fix: Use FP8 pertoken quantize if KV cache dtype is FP8
* revert rocm_flash_attn.py line 883
* Don't enable by default to use an RC for main vllm-dev docker
* use ck moe for bf16 and fp16 fused_moe
* Merge remote-tracking branch 'origin/aiter_intergration_final' into merge-aiter-llama-fp8
Signed-off-by: vllmellm <[email protected]>
* [Bugfix] include moe shuffle env variable
Signed-off-by: vllmellm <[email protected]>
---------
Signed-off-by: tjtanaa <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: yisheng <[email protected]>
Signed-off-by: Abatom <[email protected]>
Signed-off-by: Liangfu Chen <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Yuan Zhou <[email protected]>
Signed-off-by: Sourashis Roy <[email protected]>
Signed-off-by: Nishidha Panpaliya <[email protected]>
Signed-off-by: Ilya Lavrenov <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: Wallas Santos <[email protected]>
Signed-off-by: jiang1.li <[email protected]>
Signed-off-by: yan ma <[email protected]>
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: Maxime Fournioux <[email protected]>
Signed-off-by: Ye Qi <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: Kunshang Ji <[email protected]>
Signed-off-by: Kuntai Du <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Ren MinMin <[email protected]>
Signed-off-by: Travis Johnson <[email protected]>
Signed-off-by: Fred Reiss <[email protected]>
Signed-off-by: Sungjae Lee <[email protected]>
Signed-off-by: shaochangxu.scx <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: Rafael Vasquez <[email protected]>
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Yida Wu <[email protected]>
Signed-off-by: Chenguang Li <[email protected]>
Signed-off-by: Alex-Brooks <[email protected]>
Signed-off-by: Shanshan Shen <[email protected]>
Signed-off-by: elijah <[email protected]>
Signed-off-by: Konrad Zawora <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: Kyle Sayers <[email protected]>
Signed-off-by: Rahul Tuli <[email protected]>
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Divakar Verma <[email protected]>
Signed-off-by: Gregory Shtrasberg <[email protected]>
Signed-off-by: hongxyan <[email protected]>
Signed-off-by: Michal Adamczyk <[email protected]>
Signed-off-by: zibai <[email protected]>
Signed-off-by: Martin Gleize <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: isikhi <[email protected]>
Signed-off-by: Jason Cheng <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]>
Signed-off-by: Thomas Parnell <[email protected]>
Signed-off-by: Jannis Schönleber <[email protected]>
Signed-off-by: rickyx <[email protected]>
Signed-off-by: Andy Lo <[email protected]>
Signed-off-by: Adrian Cole <[email protected]>
Signed-off-by: maleksan85 <[email protected]>
Signed-off-by: Hongxia Yang <[email protected]>
Signed-off-by: kevin <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: xffxff <[email protected]>
Signed-off-by: wangerxiao <[email protected]>
Signed-off-by: Alexei V. Ivanov <[email protected]>
Signed-off-by: zhenwei <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Siyuan Liu <[email protected]>
Signed-off-by: ElizaWszola <[email protected]>
Signed-off-by: Junichi Sato <[email protected]>
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
Signed-off-by: Keyun Tong <[email protected]>
Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Kyle Mistele <[email protected]>
Signed-off-by: Pooya Davoodi <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: charlifu <[email protected]>
Signed-off-by: vllmellm <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: maang-h <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: YiSheng5 <[email protected]>
Co-authored-by: Zhonghua Deng <[email protected]>
Co-authored-by: Liangfu Chen <[email protected]>
Co-authored-by: XiaobingZhang <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Yuan <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: jiangjiadi <[email protected]>
Co-authored-by: jiadi.jjd <[email protected]>
Co-authored-by: sroy745 <[email protected]>
Co-authored-by: Jie Fu (傅杰) <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: WangErXiao <[email protected]>
Co-authored-by: Nishidha <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Wallas Henrique <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: Yan Ma <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: rasmith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: Maxime Fournioux <[email protected]>
Co-authored-by: Guspan Tanadi <[email protected]>
Co-authored-by: Ye (Charlotte) Qi <[email protected]>
Co-authored-by: yeq <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
Co-authored-by: Charles Frye <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: cennn <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: minmin <[email protected]>
Co-authored-by: Ren MinMin <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Fred Reiss <[email protected]>
Co-authored-by: Sungjae Lee <[email protected]>
Co-authored-by: shaochangxu <[email protected]>
Co-authored-by: shaochangxu.scx <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: sixgod <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Rafael Vasquez <[email protected]>
Co-authored-by: Akshat Tripathi <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Avshalom Manevich <[email protected]>
Co-authored-by: Yangcheng Li <[email protected]>
Co-authored-by: Siyuan Li <[email protected]>
Co-authored-by: Concurrensee <[email protected]>
Co-authored-by: Chenguang Li <[email protected]>
Co-authored-by: Alex Brooks <[email protected]>
Co-authored-by: Shanshan Shen <[email protected]>
Co-authored-by: elijah <[email protected]>
Co-authored-by: Konrad Zawora <[email protected]>
Co-authored-by: Elfie Guo <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Kyle Sayers <[email protected]>
Co-authored-by: Rahul Tuli <[email protected]>
Co-authored-by: Keyun Tong <[email protected]>
Co-authored-by: RunningLeon <[email protected]>
Co-authored-by: kewang-xlnx <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: tvirolai-amd <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Zhaoyi Li <[email protected]>
Co-authored-by: charlifu <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Hongxia Yang <[email protected]>
Co-authored-by: yancong <[email protected]>
Co-authored-by: Michal Adamczyk <[email protected]>
Co-authored-by: gujing <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: Işık <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Cheng Kuan Yong Jason <[email protected]>
Co-authored-by: Jinzhen Lin <[email protected]>
Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: Jannis Schönleber <[email protected]>
Co-authored-by: Ricky Xu <[email protected]>
Co-authored-by: Andy Lo <[email protected]>
Co-authored-by: Adrian Cole <[email protected]>
Co-authored-by: Jani Monoses <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: maleksan85 <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: zhou fan <[email protected]>
Co-authored-by: ilia-cher <[email protected]>
Co-authored-by: Alexei-V-Ivanov-AMD <[email protected]>
Co-authored-by: liuzhenwei <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>
Co-authored-by: Siyuan Liu <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Junichi Sato <[email protected]>
Co-authored-by: amd-ruitang3 <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: omer-dayan <[email protected]>
Co-authored-by: Mohit Deopujari <[email protected]>
Co-authored-by: Jeremy Arnold <[email protected]>
Co-authored-by: chenjun <[email protected]>
Co-authored-by: ValarLip <[email protected]>
Co-authored-by: Matthew Hendrey <[email protected]>
Co-authored-by: Kyle Mistele <[email protected]>
Co-authored-by: Pooya Davoodi <[email protected]>
Co-authored-by: Mark McLoughlin <[email protected]>
Co-authored-by: Bowen Wang <[email protected]>
Co-authored-by: Bowen Bao <[email protected]>
Co-authored-by: arakowsk-amd <[email protected]>
Co-authored-by: Matthew Wong <[email protected]>
Co-authored-by: sanyalington <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: vllmellm <[email protected]>
Co-authored-by: charlifu <[email protected]>
mzusman pushed a commit to mzusman/vllm that referenced this pull request Mar 12, 2025: [Frontend][V1] Online serving performance improvements ( vllm-project#12287 ) d7a090a
|
2025-09-07 17:47:04
|
3127e975fb9417d10513e25b80820870f594c627
|
https://github.com/vllm-project/vllm/pull/12212
| false | true | true | true |
PERF: Throughput, Throughput, Throughput | SERVING: serving, serving, serving | TEST: test, test, test
|
Member DarkLight1337 commented Jan 20, 2025 (edited by github-actions bot): Running mypy on all target Python versions takes too long for local development. This PR reserves the manual stage to be run only in pre-commit CI and moves the mypy checks to that stage. Meanwhile, a new commit hook is added that runs mypy only on the current Python version; this hook is assigned to the pre-commit stage, so it runs automatically on local commits. This should make pre-commit take around the same time as the old format.sh. cc @hmellor
Make pre-commit faster … 4d4bfa3 Signed-off-by: DarkLight1337 <[email protected]>
DarkLight1337 requested a review
from youkaichao January 20, 2025 09:25
github-actions bot commented Jan 20, 2025: 👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these: Add ready label to the PR Enable auto-merge. 🚀
mergify bot added
the ci/build label Jan 20, 2025
youkaichao reviewed Jan 20, 2025 on .pre-commit-config.yaml @@ -1,3 +1,6 @@
default_stages:
- pre-commit # Run locally
- manual # Run in CI
Member youkaichao commented Jan 20, 2025: stage name: manual or ci ?
Member Author DarkLight1337 commented Jan 20, 2025: The stage name is hardcoded: https://pre-commit.com/#confining-hooks-to-run-at-certain-stages I don't think we can change the name...
shahedy2276541 commented Jan 29, 2025: mostafa
youkaichao approved these changes Jan 20, 2025 and left a comment: works for me, thanks for the improvement!
youkaichao merged commit 3127e97 into vllm-project:main Jan 20, 2025 (9 of 12 checks passed)
DarkLight1337 deleted the pre-commit-fast branch January 20, 2025 09:39
Member hmellor commented Jan 20, 2025: This is a sensible solution while we are running mypy so many times (60 times across all 4 supported python versions). Once the repo conforms to mypy better, we can revert to running all python versions, which is only 4 runs of mypy (i.e. quicker than running 1 python version today).
kzawora-intel mentioned this pull request Jan 21, 2025: Rebase 2025.01.21 HabanaAI/vllm-fork#714 (Merged)
khluu mentioned this pull request Jan 21, 2025: [ci/lint] Add back default arg for pre-commit #12279 (Merged)
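To make the stage split described above concrete, here is a minimal sketch of the kind of `.pre-commit-config.yaml` layout the PR description and review thread discuss. It is illustrative only: the hook ids, the `tools/mypy.sh` helper script, and its version argument are assumptions rather than the repository's exact configuration.

```yaml
# Minimal sketch, not vLLM's actual config: mypy runs once per supported Python
# version in CI (manual stage) but only against the current interpreter locally.
default_stages:
  - pre-commit   # hooks that run on every local `git commit`
  - manual       # hooks that run only when the manual stage is invoked (CI)
repos:
  - repo: local
    hooks:
      # Local developers get a single mypy pass for their own Python version.
      - id: mypy-local
        name: Run mypy for local Python installation
        entry: tools/mypy.sh            # assumed wrapper script around mypy
        language: system
        types: [python]
        stages: [pre-commit]
      # CI only: one hook per supported Python version, skipped on local commits.
      - id: mypy-3.9
        name: Run mypy for Python 3.9
        entry: tools/mypy.sh 3.9        # assumed: argument selects the target version
        language: system
        types: [python]
        stages: [manual]
      # ...repeated for the other supported Python versions.
```

Under such a layout, a plain `git commit` triggers only the `mypy-local` hook, while CI would run the per-version hooks with `pre-commit run --all-files --hook-stage manual`.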
kzawora-intel added a commit to HabanaAI/vllm-fork that referenced this pull request Jan 28, 2025: Rebase 2025.01.21 ( #714 ) c9db39b
- **[Bugfix] Fix score api for missing max_model_len validation
( vllm-project#12119 )**
- **[Bugfix] Mistral tokenizer encode accept list of str ( vllm-project#12149 )**
- **[AMD][FP8] Using MI300 FP8 format on ROCm for block_quant ( vllm-project#12134 )**
- **[torch.compile] disable logging when cache is disabled ( vllm-project#12043 )**
- **[misc] fix cross-node TP ( vllm-project#12166 )**
- **[AMD][CI/Build][Bugfix] use pytorch stale wheel ( vllm-project#12172 )**
- **[core] further polish memory profiling ( vllm-project#12126 )**
- **[Docs] Fix broken link in SECURITY.md ( vllm-project#12175 )**
- **[Model] Port deepseek-vl2 processor, remove dependency ( vllm-project#12169 )**
- **[core] clean up executor class hierarchy between v1 and v0
( vllm-project#12171 )**
- **[Misc] Support register quantization method out-of-tree ( vllm-project#11969 )**
- **[V1] Collect env var for usage stats ( vllm-project#12115 )**
- **[BUGFIX] Move scores to float32 in case of running xgrammar on cpu
( vllm-project#12152 )**
- **[Bugfix] Fix multi-modal processors for transformers 4.48 ( vllm-project#12187 )**
- **[torch.compile] store inductor compiled Python file ( vllm-project#12182 )**
- **benchmark_serving support --served-model-name param ( vllm-project#12109 )**
- **[Misc] Add BNB support to GLM4-V model ( vllm-project#12184 )**
- **[V1] Add V1 support of Qwen2-VL ( vllm-project#12128 )**
- **[Model] Support for fairseq2 Llama ( vllm-project#11442 )**
- **[Bugfix] Fix num_heads value for simple connector when tp enabled
( vllm-project#12074 )**
- **[torch.compile] fix sym_tensor_indices ( vllm-project#12191 )**
- **Move linting to `pre-commit` ( vllm-project#11975 )**
- **[DOC] Fix typo in docstring and assert message ( vllm-project#12194 )**
- **[DOC] Add missing docstring in LLMEngine.add_request() ( vllm-project#12195 )**
- **[Bugfix] Fix incorrect types in LayerwiseProfileResults ( vllm-project#12196 )**
- **[Model] Add Qwen2 PRM model support ( vllm-project#12202 )**
- **[Core] Interface for accessing model from `VllmRunner` ( vllm-project#10353 )**
- **[misc] add placeholder format.sh ( vllm-project#12206 )**
- **[CI/Build] Remove dummy CI steps ( vllm-project#12208 )**
- **[CI/Build] Make pre-commit faster ( vllm-project#12212 )**
- **[Model] Upgrade Aria to transformers 4.48 ( vllm-project#12203 )**
- **[misc] print a message to suggest how to bypass commit hooks
( vllm-project#12217 )**
- **[core][bugfix] configure env var during import vllm ( vllm-project#12209 )**
- **[V1] Remove `_get_cache_block_size` ( vllm-project#12214 )**
- **[Misc] Pass `attention` to impl backend ( vllm-project#12218 )**
- **[Bugfix] Fix `HfExampleModels.find_hf_info` ( vllm-project#12223 )**
- **[CI] Pass local python version explicitly to pre-commit mypy.sh
( vllm-project#12224 )**
- **[Misc] Update CODEOWNERS ( vllm-project#12229 )**
- **fix: update platform detection for M-series arm based MacBook
processors ( vllm-project#12227 )**
- **[misc] add cuda runtime version to usage data ( vllm-project#12190 )**
- **[bugfix] catch xgrammar unsupported array constraints ( vllm-project#12210 )**
- **[Kernel] optimize moe_align_block_size for cuda graph and large
num_experts (e.g. DeepSeek-V3) ( vllm-project#12222 )**
- **Add quantization and guided decoding CODEOWNERS ( vllm-project#12228 )**
- **[AMD][Build] Porting dockerfiles from the ROCm/vllm fork ( vllm-project#11777 )**
- **[BugFix] Fix GGUF tp>1 when vocab_size is not divisible by 64
( vllm-project#12230 )**
- **[ci/build] disable failed and flaky tests ( vllm-project#12240 )**
- **[Misc] Rename `MultiModalInputsV2 -> MultiModalInputs` ( vllm-project#12244 )**
- **[Misc]Add BNB quantization for PaliGemmaForConditionalGeneration
( vllm-project#12237 )**
- **[Misc] Remove redundant TypeVar from base model ( vllm-project#12248 )**
- **[Bugfix] Fix mm_limits access for merged multi-modal processor
( vllm-project#12252 )**
---------
Signed-off-by: Wallas Santos <[email protected]>
Signed-off-by: Kunshang Ji <[email protected]>
Signed-off-by: Gregory Shtrasberg <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: hongxyan <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Michal Adamczyk <[email protected]>
Signed-off-by: zibai <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Martin Gleize <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: isikhi <[email protected]>
Signed-off-by: Jason Cheng <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: Konrad Zawora <[email protected]>
Co-authored-by: Wallas Henrique <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Hongxia Yang <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: yancong <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Michal Adamczyk <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: gujing <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
Co-authored-by: Işık <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Cheng Kuan Yong Jason <[email protected]>
Co-authored-by: Jinzhen Lin <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
rasmith pushed a commit to rasmith/vllm that referenced this pull request Jan 30, 2025: [CI/Build] Make pre-commit faster ( vllm-project#12212 ) 1a6c0a5 Signed-off-by: DarkLight1337 <[email protected]>
Isotr0py pushed a commit to Isotr0py/vllm that referenced this pull request Feb 2, 2025: [CI/Build] Make pre-commit faster ( vllm-project#12212 ) 241dff2 Signed-off-by: DarkLight1337 <[email protected]> Signed-off-by: Isotr0py <[email protected]>
hongxiayang added a commit to ROCm/vllm that referenced this pull request Feb 3, 2025: [MFM-2025-02-03] Merge Main to llama fp8; With Faster ROCm Paged Attention ( #399 ) 479b843
* [V1] Avoid sending text prompt to core engine ( vllm-project#11963 )
Signed-off-by: Roger Wang <[email protected]>
* [CI/Build] Add markdown linter ( vllm-project#11857 )
Signed-off-by: Rafael Vasquez <[email protected]>
* [Model] Initialize support for Deepseek-VL2 models ( vllm-project#11578 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Hardware][CPU] Multi-LoRA implementation for the CPU backend ( vllm-project#11100 )
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [Hardware][TPU] workaround fix for MoE on TPU ( vllm-project#11764 )
* [V1][Core][1/n] Logging and Metrics ( vllm-project#11962 )
Signed-off-by: [email protected] <[email protected]>
* [Model] Support GGUF models newly added in `transformers` 4.46.0 ( vllm-project#9685 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [V1] [2/n] Logging and Metrics - `OutputProcessor` Abstraction ( vllm-project#11973 )
Signed-off-by: [email protected] <[email protected]>
* [MISC] fix typo in kv transfer send recv test ( vllm-project#11983 )
* [Bug] Fix usage of `.transpose()` and `.view()` consecutively. ( vllm-project#11979 )
* [CI][Spec Decode] fix: broken test for EAGLE model ( vllm-project#11972 )
Signed-off-by: Sungjae Lee <[email protected]>
* [Misc] Fix Deepseek V2 fp8 kv-scale remapping ( vllm-project#11947 )
Signed-off-by: Yida Wu <[email protected]>
* [Misc]Minor Changes about Worker ( vllm-project#11555 )
Signed-off-by: Chenguang Li <[email protected]>
* [platform] add ray_device_key ( vllm-project#11948 )
Signed-off-by: youkaichao <[email protected]>
* Fix Max Token ID for Qwen-VL-Chat ( vllm-project#11980 )
Signed-off-by: Alex-Brooks <[email protected]>
* [Kernel] unified_attention for Attention.forward ( vllm-project#11967 )
Signed-off-by: Chen Zhang <[email protected]>
* [Doc][V1] Update model implementation guide for V1 support ( vllm-project#11998 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Doc] Organise installation documentation into categories and tabs ( vllm-project#11935 )
Signed-off-by: Harry Mellor <[email protected]>
* [platform] add device_control env var ( vllm-project#12009 )
Signed-off-by: youkaichao <[email protected]>
* [Platform] Move get_punica_wrapper() function to Platform ( vllm-project#11516 )
Signed-off-by: Shanshan Shen <[email protected]>
* bugfix: Fix signature mismatch in benchmark's `get_tokenizer` function ( vllm-project#11982 )
Signed-off-by: elijah <[email protected]>
* [Doc] Fix build from source and installation link in README.md ( vllm-project#12013 )
Signed-off-by: Yikun <[email protected]>
* Using list
* [Bugfix] Fix deepseekv3 gate bias error ( vllm-project#12002 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]>
* Revert "[misc] improve memory profiling ( vllm-project#11809 )"
This reverts commit 889e662 .
* Multi-lingual P3L ( #356 )
* Commiting the *multilingual* P3L test.
* Created a *multi-lingual* P3L test.
* Making ruff happy.
* .
* Added a reference to the language-scripture Confluence table.
* Typo fixing.
* Harmonizing naming.
* Fixing comments in the header.
---------
Co-authored-by: Alexei V. Ivanov <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
* Trying to make scales work with compileable attention
* [Docs] Add Sky Computing Lab to project intro ( vllm-project#12019 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [HPU][Bugfix] set_forward_context and CI test execution ( vllm-project#12014 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Doc] Update Quantization Hardware Support Documentation ( vllm-project#12025 )
Signed-off-by: tjtanaa <[email protected]>
Co-authored-by: tjtanaa <[email protected]>
* [HPU][misc] add comments for explanation ( vllm-project#12034 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix various bugs in multi-modal processor ( vllm-project#12031 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Kernel] Revert the API change of Attention.forward ( vllm-project#12038 )
Signed-off-by: Chen Zhang <[email protected]>
* [Platform] Add output for Attention Backend ( vllm-project#11981 )
Signed-off-by: wangxiyuan <[email protected]>
* [Bugfix][Kernel] Give unique name to BlockSparseFlashAttention ( vllm-project#12040 )
Signed-off-by: Chen Zhang <[email protected]>
* Explain where the engine args go when using Docker ( vllm-project#12041 )
Signed-off-by: Harry Mellor <[email protected]>
* Docs lint
* [Doc]: Update the Json Example of the `Engine Arguments` document ( vllm-project#12045 )
* [Misc] Merge bitsandbytes_stacked_params_mapping and packed_modules_mapping ( vllm-project#11924 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Kernel] Support MulAndSilu ( vllm-project#11624 )
Signed-off-by: Jee Jee Li <[email protected]>
* [HPU][Bugfix] Don't use /dev/accel/accel0 for HPU autodetection in setup.py ( vllm-project#12046 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Platform] move current_memory_usage() into platform ( vllm-project#11369 )
Signed-off-by: Shanshan Shen <[email protected]>
* [V1][BugFix] Fix edge case in VLM scheduling ( vllm-project#12065 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Misc] Add multistep chunked-prefill support for FlashInfer ( vllm-project#10467 )
* [core] Turn off GPU communication overlap for Ray executor ( vllm-project#12051 )
Signed-off-by: Rui Qiao <[email protected]>
* [core] platform agnostic executor via collective_rpc ( vllm-project#11256 )
Signed-off-by: youkaichao <[email protected]>
* [Doc] Update examples to remove SparseAutoModelForCausalLM ( vllm-project#12062 )
Signed-off-by: Kyle Sayers <[email protected]>
* [V1][Prefix Cache] Move the logic of num_computed_tokens into KVCacheManager ( vllm-project#12003 )
* Fix: cases with empty sparsity config ( vllm-project#12057 )
Signed-off-by: Rahul Tuli <[email protected]>
* Type-fix: make execute_model output type optional ( vllm-project#12020 )
* [Platform] Do not raise error if _Backend is not found ( vllm-project#12023 )
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
* [Model]: Support internlm3 ( vllm-project#12037 )
* Misc: allow to use proxy in `HTTPConnection` ( vllm-project#12042 )
Signed-off-by: Yuan Zhou <[email protected]>
* [Misc][Quark] Upstream Quark format to VLLM ( vllm-project#10765 )
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Doc]: Update `OpenAI-Compatible Server` documents ( vllm-project#12082 )
* [Bugfix] use right truncation for non-generative tasks ( vllm-project#12050 )
Signed-off-by: Joe Runde <[email protected]>
* [V1][Core] Autotune encoder cache budget ( vllm-project#11895 )
Signed-off-by: Roger Wang <[email protected]>
* [Bugfix] Fix _get_lora_device for HQQ marlin ( vllm-project#12090 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* Allow hip sources to be directly included when compiling for rocm. ( vllm-project#12087 )
* [Core] Default to using per_token quantization for fp8 when cutlass is supported. ( vllm-project#8651 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: mgoin <[email protected]>
* [Doc] Add documentation for specifying model architecture ( vllm-project#12105 )
* Various cosmetic/comment fixes ( vllm-project#12089 )
Signed-off-by: mgoin <[email protected]>
* [Bugfix] Remove hardcoded `head_size=256` for Deepseek v2 and v3 ( vllm-project#12067 )
Signed-off-by: Isotr0py <[email protected]>
* Support torchrun and SPMD-style offline inference ( vllm-project#12071 )
Signed-off-by: youkaichao <[email protected]>
* [core] LLM.collective_rpc interface and RLHF example ( vllm-project#12084 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix max image feature size for Llava-one-vision ( vllm-project#12104 )
Signed-off-by: Roger Wang <[email protected]>
* Enable user marker for vllm profiling ( #357 )
* Enable user marker for vllm profiling
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
* [misc] Add LoRA kernel micro benchmarks ( vllm-project#11579 )
* [Model] Add support for deepseek-vl2-tiny model ( vllm-project#12068 )
Signed-off-by: Isotr0py <[email protected]>
* Deepseek V3 support ( #364 )
* Changing the hard coded datatype to see if it's enough for the model to work
* Picking the upstream moe kernel version
* make upstream fix for v3 also work for rocm v2
* Conditional fnuz dtype
* Requantizing from fn to fnuz
* Requantizing moe as well
* Actually requantizing moe weights
* Conditional requantization and assert on padding in block quant
* Format
---------
Co-authored-by: charlifu <[email protected]>
* [Bugfix] Set enforce_eager automatically for mllama ( vllm-project#12127 )
Signed-off-by: Chen Zhang <[email protected]>
* [Bugfix] Fix a path bug in disaggregated prefill example script. ( vllm-project#12121 )
Signed-off-by: Kuntai Du <[email protected]>
* [CI]add genai-perf benchmark in nightly benchmark ( vllm-project#10704 )
Signed-off-by: Kunshang Ji <[email protected]>
* [Doc] Add instructions on using Podman when SELinux is active ( vllm-project#12136 )
Signed-off-by: Yuan Tang <[email protected]>
* [Bugfix] Fix issues in CPU build Dockerfile ( vllm-project#12135 )
Signed-off-by: Yuan Tang <[email protected]>
* [BugFix] add more `is not None` check in VllmConfig.__post_init__ ( vllm-project#12138 )
Signed-off-by: Chen Zhang <[email protected]>
* [Misc] Add deepseek_vl2 chat template ( vllm-project#12143 )
Signed-off-by: Isotr0py <[email protected]>
* [ROCm][MoE] moe tuning support for rocm ( vllm-project#12049 )
Signed-off-by: Divakar Verma <[email protected]>
* [V1] Move more control of kv cache initialization from model_executor to EngineCore ( vllm-project#11960 )
Signed-off-by: Chen Zhang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
* [Misc][LoRA] Improve the readability of LoRA error messages ( vllm-project#12102 )
Signed-off-by: Jee Jee Li <[email protected]>
* [CI/Build][CPU][Bugfix] Fix CPU CI ( vllm-project#12150 )
Signed-off-by: jiang1.li <[email protected]>
* [core] allow callable in collective_rpc ( vllm-project#12151 )
Signed-off-by: youkaichao <[email protected]>
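The two collective_rpc commits above ( vllm-project#12084 and vllm-project#12151 ) expose a way to invoke a method on every worker from the `LLM` object. A minimal sketch of how that interface might be used is below; the worker-side method name `report_device` is purely illustrative, and the exact signature should be checked against the vLLM version in use.

```python
# Hedged sketch of the LLM.collective_rpc interface (vllm-project#12084 / #12151).
from vllm import LLM

# tensor_parallel_size=2 assumes two visible GPUs; adjust for your setup.
llm = LLM(model="facebook/opt-125m", tensor_parallel_size=2)

# String form: call the named method on every worker and collect one result
# per worker. "report_device" is a hypothetical worker-side helper used only
# for illustration; substitute a method that exists on your worker (or on a
# custom worker extension) in real use.
results = llm.collective_rpc("report_device")
print(results)

# vllm-project#12151 additionally allows passing a callable instead of a
# method name, which is convenient for one-off RPCs (e.g. in RLHF setups).
```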
* [Bugfix] Fix score api for missing max_model_len validation ( vllm-project#12119 )
Signed-off-by: Wallas Santos <[email protected]>
* [Bugfix] Mistral tokenizer encode accept list of str ( vllm-project#12149 )
Signed-off-by: Kunshang Ji <[email protected]>
* [AMD][FP8] Using MI300 FP8 format on ROCm for block_quant ( vllm-project#12134 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* [torch.compile] disable logging when cache is disabled ( vllm-project#12043 )
Signed-off-by: youkaichao <[email protected]>
* [misc] fix cross-node TP ( vllm-project#12166 )
Signed-off-by: youkaichao <[email protected]>
* [AMD][CI/Build][Bugfix] use pytorch stale wheel ( vllm-project#12172 )
Signed-off-by: hongxyan <[email protected]>
* [core] further polish memory profiling ( vllm-project#12126 )
Signed-off-by: youkaichao <[email protected]>
* [Docs] Fix broken link in SECURITY.md ( vllm-project#12175 )
Signed-off-by: Russell Bryant <[email protected]>
* [Model] Port deepseek-vl2 processor, remove dependency ( vllm-project#12169 )
Signed-off-by: Isotr0py <[email protected]>
* [core] clean up executor class hierarchy between v1 and v0 ( vllm-project#12171 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Support register quantization method out-of-tree ( vllm-project#11969 )
* [V1] Collect env var for usage stats ( vllm-project#12115 )
* [BUGFIX] Move scores to float32 in case of running xgrammar on cpu ( vllm-project#12152 )
Signed-off-by: Michal Adamczyk <[email protected]>
* [Bugfix] Fix multi-modal processors for transformers 4.48 ( vllm-project#12187 )
* [torch.compile] store inductor compiled Python file ( vllm-project#12182 )
Signed-off-by: youkaichao <[email protected]>
* benchmark_serving support --served-model-name param ( vllm-project#12109 )
Signed-off-by: zibai <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Misc] Add BNB support to GLM4-V model ( vllm-project#12184 )
Signed-off-by: Isotr0py <[email protected]>
* [V1] Add V1 support of Qwen2-VL ( vllm-project#12128 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>
* [Model] Support for fairseq2 Llama ( vllm-project#11442 )
Signed-off-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>
* [Bugfix] Fix num_heads value for simple connector when tp enabled ( vllm-project#12074 )
Signed-off-by: Shangming Cai <[email protected]>
* [torch.compile] fix sym_tensor_indices ( vllm-project#12191 )
Signed-off-by: youkaichao <[email protected]>
* Move linting to `pre-commit` ( vllm-project#11975 )
Signed-off-by: Harry Mellor <[email protected]>
* [DOC] Fix typo in docstring and assert message ( vllm-project#12194 )
Signed-off-by: Yuan Tang <[email protected]>
* [DOC] Add missing docstring in LLMEngine.add_request() ( vllm-project#12195 )
Signed-off-by: Yuan Tang <[email protected]>
* [Bugfix] Fix incorrect types in LayerwiseProfileResults ( vllm-project#12196 )
Signed-off-by: Yuan Tang <[email protected]>
* [Model] Add Qwen2 PRM model support ( vllm-project#12202 )
Signed-off-by: Isotr0py <[email protected]>
* [Core] Interface for accessing model from `VllmRunner` ( vllm-project#10353 )
Signed-off-by: DarkLight1337 <[email protected]>
* [misc] add placeholder format.sh ( vllm-project#12206 )
Signed-off-by: youkaichao <[email protected]>
* [CI/Build] Remove dummy CI steps ( vllm-project#12208 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI/Build] Make pre-commit faster ( vllm-project#12212 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model] Upgrade Aria to transformers 4.48 ( vllm-project#12203 )
Signed-off-by: DarkLight1337 <[email protected]>
* [misc] print a message to suggest how to bypass commit hooks ( vllm-project#12217 )
Signed-off-by: youkaichao <[email protected]>
* [core][bugfix] configure env var during import vllm ( vllm-project#12209 )
Signed-off-by: youkaichao <[email protected]>
* [V1] Remove `_get_cache_block_size` ( vllm-project#12214 )
Signed-off-by: Chen Zhang <[email protected]>
* [Misc] Pass `attention` to impl backend ( vllm-project#12218 )
Signed-off-by: wangxiyuan <[email protected]>
* [Bugfix] Fix `HfExampleModels.find_hf_info` ( vllm-project#12223 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI] Pass local python version explicitly to pre-commit mypy.sh ( vllm-project#12224 )
Signed-off-by: Chen Zhang <[email protected]>
* Using ROCm6.3.1 base docker and building hipblas-common ( #366 )
* [Misc] Update CODEOWNERS ( vllm-project#12229 )
* fix: update platform detection for M-series arm based MacBook processors ( vllm-project#12227 )
Signed-off-by: isikhi <[email protected]>
* [misc] add cuda runtime version to usage data ( vllm-project#12190 )
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [bugfix] catch xgrammar unsupported array constraints ( vllm-project#12210 )
Signed-off-by: Jason Cheng <[email protected]>
* [Kernel] optimize moe_align_block_size for cuda graph and large num_experts (e.g. DeepSeek-V3) ( vllm-project#12222 )
Signed-off-by: Jinzhen Lin <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
* Add quantization and guided decoding CODEOWNERS ( vllm-project#12228 )
Signed-off-by: mgoin <[email protected]>
* [AMD][Build] Porting dockerfiles from the ROCm/vllm fork ( vllm-project#11777 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* [BugFix] Fix GGUF tp>1 when vocab_size is not divisible by 64 ( vllm-project#12230 )
Signed-off-by: NickLucche <[email protected]>
* [ci/build] disable failed and flaky tests ( vllm-project#12240 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Rename `MultiModalInputsV2 -> MultiModalInputs` ( vllm-project#12244 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc]Add BNB quantization for PaliGemmaForConditionalGeneration ( vllm-project#12237 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Misc] Remove redundant TypeVar from base model ( vllm-project#12248 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix mm_limits access for merged multi-modal processor ( vllm-project#12252 )
Signed-off-by: DarkLight1337 <[email protected]>
* [torch.compile] transparent compilation with more logging ( vllm-project#12246 )
Signed-off-by: youkaichao <[email protected]>
* [V1][Bugfix] Fix data item ordering in mixed-modality inference ( vllm-project#12259 )
Signed-off-by: Roger Wang <[email protected]>
* Remove pytorch comments for outlines + compressed-tensors ( vllm-project#12260 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Platform] improve platforms getattr ( vllm-project#12264 )
Signed-off-by: Mengqing Cao <[email protected]>
* [ci/build] update nightly torch for gh200 test ( vllm-project#12270 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] fix race condition that leads to wrong order of token returned ( vllm-project#10802 )
Signed-off-by: Jannis Schönleber <[email protected]>
* [Kernel] fix moe_align_block_size error condition ( vllm-project#12239 )
Signed-off-by: Jinzhen Lin <[email protected]>
* [v1][stats][1/n] Add RequestStatsUpdate and RequestStats types ( vllm-project#10907 )
Signed-off-by: rickyx <[email protected]>
* [Bugfix] Multi-sequence broken ( vllm-project#11898 )
Signed-off-by: Andy Lo <[email protected]>
* [Misc] Remove experimental dep from tracing.py ( vllm-project#12007 )
Signed-off-by: Adrian Cole <[email protected]>
* [Misc] Set default backend to SDPA for get_vit_attn_backend ( vllm-project#12235 )
Signed-off-by: wangxiyuan <[email protected]>
* [Core] Free CPU pinned memory on environment cleanup ( vllm-project#10477 )
* Update pre-commit.yml ( #374 )
* Update pre-commit.yml
* Reapplying missing format
* New codespell exclude location
---------
Co-authored-by: Kevin H. Luu <[email protected]>
* [bugfix] moe tuning. rm is_navi() ( vllm-project#12273 )
Signed-off-by: Divakar Verma <[email protected]>
* [BUGFIX] When skip_tokenize_init and multistep are set, execution crashes ( vllm-project#12277 )
Signed-off-by: maleksan85 <[email protected]>
Co-authored-by: maleksan85 <[email protected]>
* [Documentation][AMD] Add information about prebuilt ROCm vLLM docker for perf validation purpose ( vllm-project#12281 )
Signed-off-by: Hongxia Yang <[email protected]>
* [VLM] Simplify post-processing of replacement info ( vllm-project#12269 )
Signed-off-by: DarkLight1337 <[email protected]>
* [ci/lint] Add back default arg for pre-commit ( vllm-project#12279 )
Signed-off-by: kevin <[email protected]>
* [CI] add docker volume prune to neuron CI ( vllm-project#12291 )
Signed-off-by: Liangfu Chen <[email protected]>
* [Ci/Build] Fix mypy errors on main ( vllm-project#12296 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Benchmark] More accurate TPOT calc in `benchmark_serving.py` ( vllm-project#12288 )
Signed-off-by: Nick Hill <[email protected]>
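For reference, the quantity being tightened up in the benchmark commit above is per-request time-per-output-token (TPOT). A rough sketch of the usual definition follows; this is an assumption about what the script computes, not a copy of its code.

```python
def time_per_output_token(e2e_latency_s: float, ttft_s: float, output_tokens: int) -> float:
    """Mean time per output token, excluding the first token.

    The first token's cost is already captured by TTFT, so it is removed from
    both the numerator and the token count; a request with a single output
    token has no defined TPOT.
    """
    if output_tokens <= 1:
        return float("nan")
    return (e2e_latency_s - ttft_s) / (output_tokens - 1)

# Example: 2.0 s end-to-end, 0.4 s to first token, 33 output tokens
# -> (2.0 - 0.4) / 32 = 0.05 s per output token.
print(time_per_output_token(2.0, 0.4, 33))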
* [core] separate builder init and builder prepare for each batch ( vllm-project#12253 )
Signed-off-by: youkaichao <[email protected]>
* [Build] update requirements of no-device ( vllm-project#12299 )
Signed-off-by: Mengqing Cao <[email protected]>
* [Core] Support fully transparent sleep mode ( vllm-project#11743 )
Signed-off-by: youkaichao <[email protected]>
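A hedged sketch of the sleep/wake usage enabled by the commit above; the `enable_sleep_mode` flag and the `level` argument are taken from the PR description and may differ between versions.

```python
from vllm import LLM

# Sleep mode is opted into at engine construction time
# (assumption based on vllm-project#11743).
llm = LLM(model="facebook/opt-125m", enable_sleep_mode=True)

out = llm.generate(["Hello"])

# Release GPU memory while the engine is idle, then restore it before the
# next round of generation (useful e.g. when alternating with a trainer).
llm.sleep(level=1)
llm.wake_up()

out = llm.generate(["Hello again"])
```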
* [VLM] Avoid unnecessary tokenization ( vllm-project#12310 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model][Bugfix]: correct Aria model output ( vllm-project#12309 )
Signed-off-by: xffxff <[email protected]>
* [Bugfix][VLM] Fix mixed-modality inference backward compatibility for V0 ( vllm-project#12313 )
Signed-off-by: Roger Wang <[email protected]>
* [Doc] Add docs for prompt replacement ( vllm-project#12318 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] Fix the error in the tip for the --lora-modules parameter ( vllm-project#12319 )
Signed-off-by: wangerxiao <[email protected]>
* [Misc] Improve the readability of BNB error messages ( vllm-project#12320 )
Signed-off-by: Jee Jee Li <[email protected]>
* Skip tokenize/detokenize when it is disabled by arg --skip-tokenizer-init ( #367 )
* switching detokenize flag to be False
* detokenize = False for benchmarks
* restoring default in main vllm code for detokenize
* removing extra spaces
* moving detokenize to flag
* adding support for token ids
---------
Co-authored-by: maleksan85 <[email protected]>
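A minimal sketch of the pattern the detokenize/tokenizer-skip changes above ( #367 ) enable when the caller already works in token-ID space; `TokensPrompt`, `skip_tokenizer_init`, and `detokenize` should be verified against the vLLM version in use.

```python
from vllm import LLM, SamplingParams
from vllm.inputs import TokensPrompt

# The engine never loads a tokenizer; all I/O stays in token-ID space.
llm = LLM(model="facebook/opt-125m", skip_tokenizer_init=True)

# detokenize=False keeps the engine from converting output IDs back to text.
params = SamplingParams(max_tokens=16, detokenize=False)

prompt_ids = [2, 100, 200, 300]  # illustrative token IDs, not a real prompt
outputs = llm.generate([TokensPrompt(prompt_token_ids=prompt_ids)], params)
print(outputs[0].outputs[0].token_ids)
```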
* [Bugfix] Fix HPU multiprocessing executor ( vllm-project#12167 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Core] Support `reset_prefix_cache` ( vllm-project#12284 )
* [Frontend][V1] Online serving performance improvements ( vllm-project#12287 )
* [AMD][Quantization] Add TritonScaledMMLinearKernel since int8 is broken for AMD ( vllm-project#12282 )
Signed-off-by: Randall Smith <[email protected]>
* FP8 FA fixes ( #381 )
* FP8 FA fixes
Summary:
Add missing clamp and fix reciprocal scale computation.
* linter
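The clamp/reciprocal issue named in the FP8 FA commit above boils down to guarding the quantization scale before inverting it. A generic illustration of that idea in PyTorch (not the kernel's actual code):

```python
import torch

FP8_MAX = 448.0  # max magnitude representable by float8_e4m3fn


def fp8_scale_and_reciprocal(x: torch.Tensor, eps: float = 1e-12):
    """Compute a per-tensor FP8 scale and its reciprocal safely.

    Clamping keeps an all-zero (or tiny) input from producing a zero scale,
    which would make the reciprocal used for dequantization inf/NaN.
    """
    amax = x.abs().max().to(torch.float32)
    scale = torch.clamp(amax / FP8_MAX, min=eps)
    return scale, 1.0 / scale


x = torch.randn(8, 128)
scale, recip = fp8_scale_and_reciprocal(x)
x_fp8 = (x * recip).clamp(-FP8_MAX, FP8_MAX).to(torch.float8_e4m3fn)
x_back = x_fp8.to(torch.float32) * scale  # dequantize with the same scale
```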
* Returning the use of the proper stream in allreduce ( #382 )
* [Bugfix] Fixing AMD LoRA CI test. ( vllm-project#12329 )
Signed-off-by: Alexei V. Ivanov <[email protected]>
* [Docs] Update FP8 KV Cache documentation ( vllm-project#12238 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Docs] Document vulnerability disclosure process ( vllm-project#12326 )
Signed-off-by: Russell Bryant <[email protected]>
* [V1] Add `uncache_blocks` ( vllm-project#12333 )
* [doc] explain common errors around torch.compile ( vllm-project#12340 )
Signed-off-by: youkaichao <[email protected]>
* [Hardware][Gaudi][BugFix] Fix dataclass error due to triton package update ( vllm-project#12338 )
Signed-off-by: zhenwei <[email protected]>
* [Bugfix] Fix k_proj's bias for whisper self attention ( vllm-project#12342 )
Signed-off-by: Isotr0py <[email protected]>
* [Kernel] Flash Attention 3 Support ( vllm-project#12093 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Doc] Troubleshooting errors during model inspection ( vllm-project#12351 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] Simplify M-RoPE ( vllm-project#12352 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: imkero <[email protected]>
* [Bugfix] Fix broken internvl2 inference with v1 ( vllm-project#12360 )
Signed-off-by: Isotr0py <[email protected]>
* [core] add wake_up doc and some sanity check ( vllm-project#12361 )
Signed-off-by: youkaichao <[email protected]>
* [torch.compile] decouple compile sizes and cudagraph sizes ( vllm-project#12243 )
Signed-off-by: youkaichao <[email protected]>
* [FP8][Kernel] Dynamic kv cache scaling factors computation ( vllm-project#11906 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>
* [TPU] Update TPU CI to use torchxla nightly on 20250122 ( vllm-project#12334 )
Signed-off-by: Siyuan Liu <[email protected]>
* [Docs] Document Phi-4 support ( vllm-project#12362 )
Signed-off-by: Isotr0py <[email protected]>
* [BugFix] Fix parameter names and `process_after_weight_loading` for W4A16 MoE Group Act Order ( vllm-project#11528 )
Signed-off-by: ElizaWszola <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Misc] Fix OpenAI API Compatibility Issues in Benchmark Script ( vllm-project#12357 )
Signed-off-by: Junichi Sato <[email protected]>
* [Docs] Add meetup slides ( vllm-project#12345 )
Signed-off-by: Woosuk Kwon <[email protected]>
* Using pytorch commit past the point when rowwise PR ( pytorch/pytorch#144432 ) was merged ( #384 )
* [Docs] Update spec decode + structured output in compat matrix ( vllm-project#12373 )
Signed-off-by: Russell Bryant <[email protected]>
* [V1][Frontend] Coalesce bunched `RequestOutput`s ( vllm-project#12298 )
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
* Set weights_only=True when using torch.load() ( vllm-project#12366 )
Signed-off-by: Russell Bryant <[email protected]>
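The torch.load hardening above is the standard safe-deserialization flag; a tiny example (the checkpoint path is a placeholder):

```python
import torch

# weights_only=True restricts unpickling to tensors and primitive containers,
# refusing arbitrary Python objects embedded in a malicious checkpoint.
state_dict = torch.load("model.pt", map_location="cpu", weights_only=True)
```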
* [Bugfix] Path join when building local path for S3 clone ( vllm-project#12353 )
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
* Update compressed-tensors version ( vllm-project#12367 )
* [V1] Increase default batch size for H100/H200 ( vllm-project#12369 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [perf] fix perf regression from vllm-project#12253 ( vllm-project#12380 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Use VisionArena Dataset for VLM Benchmarking ( vllm-project#12389 )
Signed-off-by: Roger Wang <[email protected]>
* [ci/build] fix wheel size check ( vllm-project#12396 )
Signed-off-by: youkaichao <[email protected]>
* [Hardware][Gaudi][Doc] Add missing step in setup instructions ( vllm-project#12382 )
* [ci/build] sync default value for wheel size ( vllm-project#12398 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Enable proxy support in benchmark script ( vllm-project#12356 )
Signed-off-by: Junichi Sato <[email protected]>
* [Bugfix][Kernel] Fix CUDA 11.8 being broken by FA3 build ( vllm-project#12375 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Applying scales rename to fp8 config ( #387 )
* [Misc] Remove deprecated code ( vllm-project#12383 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix][Kernel] FA3 Fix - RuntimeError: This flash attention build only supports pack_gqa (for build size reasons). ( vllm-project#12405 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Dev-docker Documentation Updates ( #378 )
* Dev-docker Documentation Updates
Minor updates to several sections, with links to other documents where appropriate.
* Fix formatting of GEMM filename
* README cleanup
- Reorder some sections of the README to make them easier to follow
- Improve formatting of bash commands
- Prefer use of huggingface model names instead of hard-coded directories
- Clean up wording
* Expanded sample commands for Latency and Throughput
* Fix markdown links
* Fix pre-commit errors
* Updates from review
Initial updates to incorporate feedback from a review session held with @t-parry
* Update script args to match current recommendations
* Remove recommended max-num-seqs values for now
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
* [Bugfix][Kernel] Fix moe align block issue for mixtral ( vllm-project#12413 )
* [Bugfix] Fix BLIP-2 processing ( vllm-project#12412 )
Signed-off-by: DarkLight1337 <[email protected]>
* [ROCm][MoE] MI300 tuned configs Mixtral-8x(7B,22B) | fp16, fp8 ( vllm-project#12408 )
Signed-off-by: Divakar Verma <[email protected]>
* [Misc] Add FA2 support to ViT MHA layer ( vllm-project#12355 )
Signed-off-by: Isotr0py <[email protected]>
* [TPU][CI] Update torchxla version in requirement-tpu.txt ( vllm-project#12422 )
Signed-off-by: Siyuan Liu <[email protected]>
* [Misc][Bugfix] FA3 support to ViT MHA layer ( vllm-project#12435 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [V1][Perf] Reduce scheduling overhead in model runner after cuda sync ( vllm-project#12094 )
Signed-off-by: Keyun Tong <[email protected]>
* [V1][Bugfix] Fix assertion when mm hashing is turned off ( vllm-project#12439 )
Signed-off-by: Roger Wang <[email protected]>
* [Misc] Revert FA on ViT vllm-project#12355 and vllm-project#12435 ( vllm-project#12445 )
* [Frontend] generation_config.json for maximum tokens( vllm-project#12242 )
Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
* [Bugfix] Disable w16a16 2of4 sparse CompressedTensors24 ( vllm-project#12417 )
Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: mgoin <[email protected]>
* [Bugfix/CI] Fix broken kernels/test_mha.py ( vllm-project#12450 )
* [Bugfix][Kernel] Fix perf regression caused by PR vllm-project#12405 ( vllm-project#12434 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Build/CI] Fix libcuda.so linkage ( vllm-project#12424 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [Frontend] Rerank API (Jina- and Cohere-compatible API) ( vllm-project#12376 )
Signed-off-by: Kyle Mistele <[email protected]>
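A hedged example of calling the rerank endpoint added above; the route and payload shape follow the Jina-style API the PR title references, and both should be verified against the server documentation for the exact version deployed.

```python
import requests

# Assumes a vLLM OpenAI-compatible server running a reranker model, e.g.:
#   vllm serve BAAI/bge-reranker-base
resp = requests.post(
    "http://localhost:8000/rerank",  # Jina-compatible route per the PR title
    json={
        "model": "BAAI/bge-reranker-base",
        "query": "What is the capital of France?",
        "documents": [
            "Paris is the capital of France.",
            "The Moon orbits the Earth.",
        ],
    },
)
print(resp.json())  # documents scored by relevance to the query
```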
* [DOC] Add link to vLLM blog ( vllm-project#12460 )
Signed-off-by: Yuan Tang <[email protected]>
* [V1] Avoid list creation in input preparation ( vllm-project#12457 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Frontend] Support scores endpoint in run_batch ( vllm-project#12430 )
Signed-off-by: Pooya Davoodi <[email protected]>
* [Bugfix] Fix Granite 3.0 MoE model loading ( vllm-project#12446 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix missing seq_start_loc in xformers prefill metadata ( vllm-project#12464 )
Signed-off-by: Isotr0py <[email protected]>
* [V1][Minor] Minor optimizations for update_from_output ( vllm-project#12454 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Bugfix] Fix gpt2 GGUF inference ( vllm-project#12467 )
Signed-off-by: Isotr0py <[email protected]>
* [Build] Only build 9.0a for scaled_mm and sparse kernels ( vllm-project#12339 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [V1][Metrics] Add initial Prometheus logger ( vllm-project#12416 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [V1][CI/Test] Do basic test for top-p & top-k sampling ( vllm-project#12469 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [FlashInfer] Upgrade to 0.2.0 ( vllm-project#11194 )
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: youkaichao <[email protected]>
* Support FP8 FA from Quark format ( #388 )
* Support FP8 FA from Quark format
* Support FP8 FA from Quark format
* nit: update comment
* Direct call on ROCm
* 20250127 docs update ( #392 )
* updating code blocks
* typo
* updated manifest
* Including feedback
* whitespace
* Deepseek instructions
* hyperlink fix
* hyperlink fix
* updating what is new
* cpx update
* typo
* whitespace
* whitespace
* Faster Custom Paged Attention kernels ( #372 )
* integrate new cpa kernel, update tests and benchmark
* added comments to mfma4 kernel
* further comments for mfma16 kernel
* clang-format
* Lint
* add flag for logits rtz conversion and disable by default
* lint
* [Bugfix]: Fix paged attention unit tests of #372 ( #389 )
* [Bugfix]: fix paged attention tests based on the updated kernels in `csrc/attention/paged_attention_v1.cu`,`csrc/attention/paged_attention_v2.cu` and `csrc/rocm/attention.cu`.
* improve code documentation.
* lint
---------
Co-authored-by: vllmellm <[email protected]>
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: TJian <[email protected]>
Co-authored-by: vllmellm <[email protected]>
* Using a more precise profiling on ROCm to properly account for weights padding ( #394 )
* Update Dockerfile.rocm
* [Bugfix]: include the env variables required for running FastSyncLLM
Signed-off-by: vllmellm <[email protected]>
* fix pre-commit lint
Signed-off-by: vllmellm <[email protected]>
---------
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Rafael Vasquez <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Sungjae Lee <[email protected]>
Signed-off-by: Yida Wu <[email protected]>
Signed-off-by: Chenguang Li <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Alex-Brooks <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Shanshan Shen <[email protected]>
Signed-off-by: elijah <[email protected]>
Signed-off-by: Yikun <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Konrad Zawora <[email protected]>
Signed-off-by: tjtanaa <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: yisheng <[email protected]>
Signed-off-by: Abatom <[email protected]>
Signed-off-by: Liangfu Chen <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Yuan Zhou <[email protected]>
Signed-off-by: Sourashis Roy <[email protected]>
Signed-off-by: Nishidha Panpaliya <[email protected]>
Signed-off-by: Ilya Lavrenov <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: Wallas Santos <[email protected]>
Signed-off-by: jiang1.li <[email protected]>
Signed-off-by: yan ma <[email protected]>
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Maxime Fournioux <[email protected]>
Signed-off-by: Ye Qi <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: Kunshang Ji <[email protected]>
Signed-off-by: Kuntai Du <[email protected]>
Signed-off-by: Ren MinMin <[email protected]>
Signed-off-by: Travis Johnson <[email protected]>
Signed-off-by: Fred Reiss <[email protected]>
Signed-off-by: shaochangxu.scx <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: Kyle Sayers <[email protected]>
Signed-off-by: Rahul Tuli <[email protected]>
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Divakar Verma <[email protected]>
Signed-off-by: Gregory Shtrasberg <[email protected]>
Signed-off-by: hongxyan <[email protected]>
Signed-off-by: Michal Adamczyk <[email protected]>
Signed-off-by: zibai <[email protected]>
Signed-off-by: Martin Gleize <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: isikhi <[email protected]>
Signed-off-by: Jason Cheng <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]>
Signed-off-by: Thomas Parnell <[email protected]>
Signed-off-by: Jannis Schönleber <[email protected]>
Signed-off-by: rickyx <[email protected]>
Signed-off-by: Andy Lo <[email protected]>
Signed-off-by: Adrian Cole <[email protected]>
Signed-off-by: maleksan85 <[email protected]>
Signed-off-by: Hongxia Yang <[email protected]>
Signed-off-by: kevin <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: xffxff <[email protected]>
Signed-off-by: wangerxiao <[email protected]>
Signed-off-by: Alexei V. Ivanov <[email protected]>
Signed-off-by: zhenwei <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Siyuan Liu <[email protected]>
Signed-off-by: ElizaWszola <[email protected]>
Signed-off-by: Junichi Sato <[email protected]>
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
Signed-off-by: Keyun Tong <[email protected]>
Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Kyle Mistele <[email protected]>
Signed-off-by: Pooya Davoodi <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: vllmellm <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Rafael Vasquez <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Akshat Tripathi <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Avshalom Manevich <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Yangcheng Li <[email protected]>
Co-authored-by: Siyuan Li <[email protected]>
Co-authored-by: Sungjae Lee <[email protected]>
Co-authored-by: Concurrensee <[email protected]>
Co-authored-by: Chenguang Li <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Alex Brooks <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Shanshan Shen <[email protected]>
Co-authored-by: elijah <[email protected]>
Co-authored-by: Yikun Jiang <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Steve Luo <[email protected]>
Co-authored-by: mgoin <[email protected]>
Co-authored-by: Alexei-V-Ivanov-AMD <[email protected]>
Co-authored-by: Alexei V. Ivanov <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: Konrad Zawora <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
Co-authored-by: maang-h <[email protected]>
Co-authored-by: YiSheng5 <[email protected]>
Co-authored-by: Zhonghua Deng <[email protected]>
Co-authored-by: Liangfu Chen <[email protected]>
Co-authored-by: XiaobingZhang <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Yuan <[email protected]>
Co-authored-by: jiangjiadi <[email protected]>
Co-authored-by: jiadi.jjd <[email protected]>
Co-authored-by: sroy745 <[email protected]>
Co-authored-by: Jie Fu (傅杰) <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: WangErXiao <[email protected]>
Co-authored-by: Nishidha <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Wallas Henrique <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: Yan Ma <[email protected]>
Co-authored-by: rasmith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: Maxime Fournioux <[email protected]>
Co-authored-by: Guspan Tanadi <[email protected]>
Co-authored-by: Ye (Charlotte) Qi <[email protected]>
Co-authored-by: yeq <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
Co-authored-by: Charles Frye <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: cennn <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: minmin <[email protected]>
Co-authored-by: Ren MinMin <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Fred Reiss <[email protected]>
Co-authored-by: shaochangxu <[email protected]>
Co-authored-by: shaochangxu.scx <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: sixgod <[email protected]>
Co-authored-by: Elfie Guo <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Kyle Sayers <[email protected]>
Co-authored-by: Rahul Tuli <[email protected]>
Co-authored-by: Keyun Tong <[email protected]>
Co-authored-by: RunningLeon <[email protected]>
Co-authored-by: kewang-xlnx <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: tvirolai-amd <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Zhaoyi Li <[email protected]>
Co-authored-by: charlifu <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Hongxia Yang <[email protected]>
Co-authored-by: yancong <[email protected]>
Co-authored-by: Michal Adamczyk <[email protected]>
Co-authored-by: gujing <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: Işık <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Cheng Kuan Yong Jason <[email protected]>
Co-authored-by: Jinzhen Lin <[email protected]>
Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: Jannis Schönleber <[email protected]>
Co-authored-by: Ricky Xu <[email protected]>
Co-authored-by: Andy Lo <[email protected]>
Co-authored-by: Adrian Cole <[email protected]>
Co-authored-by: Jani Monoses <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: maleksan85 <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: zhou fan <[email protected]>
Co-authored-by: ilia-cher <[email protected]>
Co-authored-by: liuzhenwei <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>
Co-authored-by: Siyuan Liu <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Junichi Sato <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: omer-dayan <[email protected]>
Co-authored-by: Mohit Deopujari <[email protected]>
Co-authored-by: Jeremy Arnold <[email protected]>
Co-authored-by: Matthew Hendrey <[email protected]>
Co-authored-by: Kyle Mistele <[email protected]>
Co-authored-by: Pooya Davoodi <[email protected]>
Co-authored-by: Mark McLoughlin <[email protected]>
Co-authored-by: Bowen Wang <[email protected]>
Co-authored-by: Bowen Bao <[email protected]>
Co-authored-by: arakowsk-amd <[email protected]>
Co-authored-by: sanyalington <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: vllmellm <[email protected]>
hongxiayang added a commit to ROCm/vllm that referenced this pull request Feb 5, 2025
[Bug Fix] Missing vllm.envs ( #405 ) … 87b3c56
* [Model] Initialize support for Deepseek-VL2 models ( vllm-project#11578 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Hardware][CPU] Multi-LoRA implementation for the CPU backend ( vllm-project#11100 )
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [Hardware][TPU] workaround fix for MoE on TPU ( vllm-project#11764 )
* [V1][Core][1/n] Logging and Metrics ( vllm-project#11962 )
Signed-off-by: [email protected] <[email protected]>
* [Model] Support GGUF models newly added in `transformers` 4.46.0 ( vllm-project#9685 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [V1] [2/n] Logging and Metrics - `OutputProcessor` Abstraction ( vllm-project#11973 )
Signed-off-by: [email protected] <[email protected]>
* [MISC] fix typo in kv transfer send recv test ( vllm-project#11983 )
* [Bug] Fix usage of `.transpose()` and `.view()` consecutively. ( vllm-project#11979 )
* [CI][Spec Decode] fix: broken test for EAGLE model ( vllm-project#11972 )
Signed-off-by: Sungjae Lee <[email protected]>
* [Misc] Fix Deepseek V2 fp8 kv-scale remapping ( vllm-project#11947 )
Signed-off-by: Yida Wu <[email protected]>
* [Misc]Minor Changes about Worker ( vllm-project#11555 )
Signed-off-by: Chenguang Li <[email protected]>
* [platform] add ray_device_key ( vllm-project#11948 )
Signed-off-by: youkaichao <[email protected]>
* Fix Max Token ID for Qwen-VL-Chat ( vllm-project#11980 )
Signed-off-by: Alex-Brooks <[email protected]>
* [Kernel] unified_attention for Attention.forward ( vllm-project#11967 )
Signed-off-by: Chen Zhang <[email protected]>
* [Doc][V1] Update model implementation guide for V1 support ( vllm-project#11998 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Doc] Organise installation documentation into categories and tabs ( vllm-project#11935 )
Signed-off-by: Harry Mellor <[email protected]>
* [platform] add device_control env var ( vllm-project#12009 )
Signed-off-by: youkaichao <[email protected]>
* [Platform] Move get_punica_wrapper() function to Platform ( vllm-project#11516 )
Signed-off-by: Shanshan Shen <[email protected]>
* bugfix: Fix signature mismatch in benchmark's `get_tokenizer` function ( vllm-project#11982 )
Signed-off-by: elijah <[email protected]>
* [Doc] Fix build from source and installation link in README.md ( vllm-project#12013 )
Signed-off-by: Yikun <[email protected]>
* Using list
* [Bugfix] Fix deepseekv3 gate bias error ( vllm-project#12002 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]>
* Revert "[misc] improve memory profiling ( vllm-project#11809 )"
This reverts commit 889e662 .
* Multi-lingual P3L ( #356 )
* Committing the *multilingual* P3L test.
* Created a *multi-lingual* P3L test.
* Making ruff happy.
* .
* Added a reference to the language-scripture Confluence table.
* Typo fixing.
* Harmonizing naming.
* Fixing comments in the header.
---------
Co-authored-by: Alexei V. Ivanov <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
* Trying to make scales work with compileable attention
* [Docs] Add Sky Computing Lab to project intro ( vllm-project#12019 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [HPU][Bugfix] set_forward_context and CI test execution ( vllm-project#12014 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Doc] Update Quantization Hardware Support Documentation ( vllm-project#12025 )
Signed-off-by: tjtanaa <[email protected]>
Co-authored-by: tjtanaa <[email protected]>
* [HPU][misc] add comments for explanation ( vllm-project#12034 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix various bugs in multi-modal processor ( vllm-project#12031 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Kernel] Revert the API change of Attention.forward ( vllm-project#12038 )
Signed-off-by: Chen Zhang <[email protected]>
* [Platform] Add output for Attention Backend ( vllm-project#11981 )
Signed-off-by: wangxiyuan <[email protected]>
* [Bugfix][Kernel] Give unique name to BlockSparseFlashAttention ( vllm-project#12040 )
Signed-off-by: Chen Zhang <[email protected]>
* Explain where the engine args go when using Docker ( vllm-project#12041 )
Signed-off-by: Harry Mellor <[email protected]>
* Docs lint
* [Doc]: Update the Json Example of the `Engine Arguments` document ( vllm-project#12045 )
* [Misc] Merge bitsandbytes_stacked_params_mapping and packed_modules_mapping ( vllm-project#11924 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Kernel] Support MulAndSilu ( vllm-project#11624 )
Signed-off-by: Jee Jee Li <[email protected]>
* [HPU][Bugfix] Don't use /dev/accel/accel0 for HPU autodetection in setup.py ( vllm-project#12046 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Platform] move current_memory_usage() into platform ( vllm-project#11369 )
Signed-off-by: Shanshan Shen <[email protected]>
* [V1][BugFix] Fix edge case in VLM scheduling ( vllm-project#12065 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Misc] Add multistep chunked-prefill support for FlashInfer ( vllm-project#10467 )
* [core] Turn off GPU communication overlap for Ray executor ( vllm-project#12051 )
Signed-off-by: Rui Qiao <[email protected]>
* [core] platform agnostic executor via collective_rpc ( vllm-project#11256 )
Signed-off-by: youkaichao <[email protected]>
* [Doc] Update examples to remove SparseAutoModelForCausalLM ( vllm-project#12062 )
Signed-off-by: Kyle Sayers <[email protected]>
* [V1][Prefix Cache] Move the logic of num_computed_tokens into KVCacheManager ( vllm-project#12003 )
* Fix: cases with empty sparsity config ( vllm-project#12057 )
Signed-off-by: Rahul Tuli <[email protected]>
* Type-fix: make execute_model output type optional ( vllm-project#12020 )
* [Platform] Do not raise error if _Backend is not found ( vllm-project#12023 )
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
* [Model]: Support internlm3 ( vllm-project#12037 )
* Misc: allow to use proxy in `HTTPConnection` ( vllm-project#12042 )
Signed-off-by: Yuan Zhou <[email protected]>
* [Misc][Quark] Upstream Quark format to VLLM ( vllm-project#10765 )
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Doc]: Update `OpenAI-Compatible Server` documents ( vllm-project#12082 )
* [Bugfix] use right truncation for non-generative tasks ( vllm-project#12050 )
Signed-off-by: Joe Runde <[email protected]>
* [V1][Core] Autotune encoder cache budget ( vllm-project#11895 )
Signed-off-by: Roger Wang <[email protected]>
* [Bugfix] Fix _get_lora_device for HQQ marlin ( vllm-project#12090 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* Allow hip sources to be directly included when compiling for rocm. ( vllm-project#12087 )
* [Core] Default to using per_token quantization for fp8 when cutlass is supported. ( vllm-project#8651 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: mgoin <[email protected]>
* [Doc] Add documentation for specifying model architecture ( vllm-project#12105 )
* Various cosmetic/comment fixes ( vllm-project#12089 )
Signed-off-by: mgoin <[email protected]>
* [Bugfix] Remove hardcoded `head_size=256` for Deepseek v2 and v3 ( vllm-project#12067 )
Signed-off-by: Isotr0py <[email protected]>
* Support torchrun and SPMD-style offline inference ( vllm-project#12071 )
Signed-off-by: youkaichao <[email protected]>
* [core] LLM.collective_rpc interface and RLHF example ( vllm-project#12084 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix max image feature size for Llava-one-vision ( vllm-project#12104 )
Signed-off-by: Roger Wang <[email protected]>
* Enable user marker for vllm profiling ( #357 )
* Enable user marker for vllm profiling
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
* [misc] Add LoRA kernel micro benchmarks ( vllm-project#11579 )
* [Model] Add support for deepseek-vl2-tiny model ( vllm-project#12068 )
Signed-off-by: Isotr0py <[email protected]>
* Deepseek V3 support ( #364 )
* Changing the hard coded datatype to see if it's enough for the model to work
* Picking the upstream moe kernel version
* make upstream fix for v3 also work for rocm v2
* Conditional fnuz dtype
* Requantizing from fn to fnuz
* Requantizing moe as well
* Actually requantizing moe weights
* Conditional requantization and assert on padding in block quant
* Format
---------
Co-authored-by: charlifu <[email protected]>
* [Bugfix] Set enforce_eager automatically for mllama ( vllm-project#12127 )
Signed-off-by: Chen Zhang <[email protected]>
* [Bugfix] Fix a path bug in disaggregated prefill example script. ( vllm-project#12121 )
Signed-off-by: Kuntai Du <[email protected]>
* [CI]add genai-perf benchmark in nightly benchmark ( vllm-project#10704 )
Signed-off-by: Kunshang Ji <[email protected]>
* [Doc] Add instructions on using Podman when SELinux is active ( vllm-project#12136 )
Signed-off-by: Yuan Tang <[email protected]>
* [Bugfix] Fix issues in CPU build Dockerfile ( vllm-project#12135 )
Signed-off-by: Yuan Tang <[email protected]>
* [BugFix] add more `is not None` check in VllmConfig.__post_init__ ( vllm-project#12138 )
Signed-off-by: Chen Zhang <[email protected]>
* [Misc] Add deepseek_vl2 chat template ( vllm-project#12143 )
Signed-off-by: Isotr0py <[email protected]>
* [ROCm][MoE] moe tuning support for rocm ( vllm-project#12049 )
Signed-off-by: Divakar Verma <[email protected]>
* [V1] Move more control of kv cache initialization from model_executor to EngineCore ( vllm-project#11960 )
Signed-off-by: Chen Zhang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
* [Misc][LoRA] Improve the readability of LoRA error messages ( vllm-project#12102 )
Signed-off-by: Jee Jee Li <[email protected]>
* [CI/Build][CPU][Bugfix] Fix CPU CI ( vllm-project#12150 )
Signed-off-by: jiang1.li <[email protected]>
* [core] allow callable in collective_rpc ( vllm-project#12151 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix score api for missing max_model_len validation ( vllm-project#12119 )
Signed-off-by: Wallas Santos <[email protected]>
* [Bugfix] Mistral tokenizer encode accept list of str ( vllm-project#12149 )
Signed-off-by: Kunshang Ji <[email protected]>
* [AMD][FP8] Using MI300 FP8 format on ROCm for block_quant ( vllm-project#12134 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* [torch.compile] disable logging when cache is disabled ( vllm-project#12043 )
Signed-off-by: youkaichao <[email protected]>
* [misc] fix cross-node TP ( vllm-project#12166 )
Signed-off-by: youkaichao <[email protected]>
* [AMD][CI/Build][Bugfix] use pytorch stale wheel ( vllm-project#12172 )
Signed-off-by: hongxyan <[email protected]>
* [core] further polish memory profiling ( vllm-project#12126 )
Signed-off-by: youkaichao <[email protected]>
* [Docs] Fix broken link in SECURITY.md ( vllm-project#12175 )
Signed-off-by: Russell Bryant <[email protected]>
* [Model] Port deepseek-vl2 processor, remove dependency ( vllm-project#12169 )
Signed-off-by: Isotr0py <[email protected]>
* [core] clean up executor class hierarchy between v1 and v0 ( vllm-project#12171 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Support register quantization method out-of-tree ( vllm-project#11969 )
* [V1] Collect env var for usage stats ( vllm-project#12115 )
* [BUGFIX] Move scores to float32 in case of running xgrammar on cpu ( vllm-project#12152 )
Signed-off-by: Michal Adamczyk <[email protected]>
* [Bugfix] Fix multi-modal processors for transformers 4.48 ( vllm-project#12187 )
* [torch.compile] store inductor compiled Python file ( vllm-project#12182 )
Signed-off-by: youkaichao <[email protected]>
* benchmark_serving support --served-model-name param ( vllm-project#12109 )
Signed-off-by: zibai <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Misc] Add BNB support to GLM4-V model ( vllm-project#12184 )
Signed-off-by: Isotr0py <[email protected]>
* [V1] Add V1 support of Qwen2-VL ( vllm-project#12128 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>
* [Model] Support for fairseq2 Llama ( vllm-project#11442 )
Signed-off-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>
* [Bugfix] Fix num_heads value for simple connector when tp enabled ( vllm-project#12074 )
Signed-off-by: Shangming Cai <[email protected]>
* [torch.compile] fix sym_tensor_indices ( vllm-project#12191 )
Signed-off-by: youkaichao <[email protected]>
* Move linting to `pre-commit` ( vllm-project#11975 )
Signed-off-by: Harry Mellor <[email protected]>
* [DOC] Fix typo in docstring and assert message ( vllm-project#12194 )
Signed-off-by: Yuan Tang <[email protected]>
* [DOC] Add missing docstring in LLMEngine.add_request() ( vllm-project#12195 )
Signed-off-by: Yuan Tang <[email protected]>
* [Bugfix] Fix incorrect types in LayerwiseProfileResults ( vllm-project#12196 )
Signed-off-by: Yuan Tang <[email protected]>
* [Model] Add Qwen2 PRM model support ( vllm-project#12202 )
Signed-off-by: Isotr0py <[email protected]>
* [Core] Interface for accessing model from `VllmRunner` ( vllm-project#10353 )
Signed-off-by: DarkLight1337 <[email protected]>
* [misc] add placeholder format.sh ( vllm-project#12206 )
Signed-off-by: youkaichao <[email protected]>
* [CI/Build] Remove dummy CI steps ( vllm-project#12208 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI/Build] Make pre-commit faster ( vllm-project#12212 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model] Upgrade Aria to transformers 4.48 ( vllm-project#12203 )
Signed-off-by: DarkLight1337 <[email protected]>
* [misc] print a message to suggest how to bypass commit hooks ( vllm-project#12217 )
Signed-off-by: youkaichao <[email protected]>
* [core][bugfix] configure env var during import vllm ( vllm-project#12209 )
Signed-off-by: youkaichao <[email protected]>
* [V1] Remove `_get_cache_block_size` ( vllm-project#12214 )
Signed-off-by: Chen Zhang <[email protected]>
* [Misc] Pass `attention` to impl backend ( vllm-project#12218 )
Signed-off-by: wangxiyuan <[email protected]>
* [Bugfix] Fix `HfExampleModels.find_hf_info` ( vllm-project#12223 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI] Pass local python version explicitly to pre-commit mypy.sh ( vllm-project#12224 )
Signed-off-by: Chen Zhang <[email protected]>
* Using ROCm6.3.1 base docker and building hipblas-common ( #366 )
* [Misc] Update CODEOWNERS ( vllm-project#12229 )
* fix: update platform detection for M-series arm based MacBook processors ( vllm-project#12227 )
Signed-off-by: isikhi <[email protected]>
* [misc] add cuda runtime version to usage data ( vllm-project#12190 )
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [bugfix] catch xgrammar unsupported array constraints ( vllm-project#12210 )
Signed-off-by: Jason Cheng <[email protected]>
* [Kernel] optimize moe_align_block_size for cuda graph and large num_experts (e.g. DeepSeek-V3) ( vllm-project#12222 )
Signed-off-by: Jinzhen Lin <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
* Add quantization and guided decoding CODEOWNERS ( vllm-project#12228 )
Signed-off-by: mgoin <[email protected]>
* [AMD][Build] Porting dockerfiles from the ROCm/vllm fork ( vllm-project#11777 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* [BugFix] Fix GGUF tp>1 when vocab_size is not divisible by 64 ( vllm-project#12230 )
Signed-off-by: NickLucche <[email protected]>
* [ci/build] disable failed and flaky tests ( vllm-project#12240 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Rename `MultiModalInputsV2 -> MultiModalInputs` ( vllm-project#12244 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc]Add BNB quantization for PaliGemmaForConditionalGeneration ( vllm-project#12237 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Misc] Remove redundant TypeVar from base model ( vllm-project#12248 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix mm_limits access for merged multi-modal processor ( vllm-project#12252 )
Signed-off-by: DarkLight1337 <[email protected]>
* [torch.compile] transparent compilation with more logging ( vllm-project#12246 )
Signed-off-by: youkaichao <[email protected]>
* [V1][Bugfix] Fix data item ordering in mixed-modality inference ( vllm-project#12259 )
Signed-off-by: Roger Wang <[email protected]>
* Remove pytorch comments for outlines + compressed-tensors ( vllm-project#12260 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Platform] improve platforms getattr ( vllm-project#12264 )
Signed-off-by: Mengqing Cao <[email protected]>
* [ci/build] update nightly torch for gh200 test ( vllm-project#12270 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] fix race condition that leads to wrong order of token returned ( vllm-project#10802 )
Signed-off-by: Jannis Schönleber <[email protected]>
* [Kernel] fix moe_align_block_size error condition ( vllm-project#12239 )
Signed-off-by: Jinzhen Lin <[email protected]>
* [v1][stats][1/n] Add RequestStatsUpdate and RequestStats types ( vllm-project#10907 )
Signed-off-by: rickyx <[email protected]>
* [Bugfix] Multi-sequence broken ( vllm-project#11898 )
Signed-off-by: Andy Lo <[email protected]>
* [Misc] Remove experimental dep from tracing.py ( vllm-project#12007 )
Signed-off-by: Adrian Cole <[email protected]>
* [Misc] Set default backend to SDPA for get_vit_attn_backend ( vllm-project#12235 )
Signed-off-by: wangxiyuan <[email protected]>
* [Core] Free CPU pinned memory on environment cleanup ( vllm-project#10477 )
* Update pre-commit.yml ( #374 )
* Update pre-commit.yml
* Reapplying missing format
* New codespell exclude location
---------
Co-authored-by: Kevin H. Luu <[email protected]>
* [bugfix] moe tuning. rm is_navi() ( vllm-project#12273 )
Signed-off-by: Divakar Verma <[email protected]>
* [BUGFIX] When skip_tokenize_init and multistep are set, execution crashes ( vllm-project#12277 )
Signed-off-by: maleksan85 <[email protected]>
Co-authored-by: maleksan85 <[email protected]>
* [Documentation][AMD] Add information about prebuilt ROCm vLLM docker for perf validation purpose ( vllm-project#12281 )
Signed-off-by: Hongxia Yang <[email protected]>
* [VLM] Simplify post-processing of replacement info ( vllm-project#12269 )
Signed-off-by: DarkLight1337 <[email protected]>
* [ci/lint] Add back default arg for pre-commit ( vllm-project#12279 )
Signed-off-by: kevin <[email protected]>
* [CI] add docker volume prune to neuron CI ( vllm-project#12291 )
Signed-off-by: Liangfu Chen <[email protected]>
* [Ci/Build] Fix mypy errors on main ( vllm-project#12296 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Benchmark] More accurate TPOT calc in `benchmark_serving.py` ( vllm-project#12288 )
Signed-off-by: Nick Hill <[email protected]>
* [core] separate builder init and builder prepare for each batch ( vllm-project#12253 )
Signed-off-by: youkaichao <[email protected]>
* [Build] update requirements of no-device ( vllm-project#12299 )
Signed-off-by: Mengqing Cao <[email protected]>
* [Core] Support fully transparent sleep mode ( vllm-project#11743 )
Signed-off-by: youkaichao <[email protected]>
* [VLM] Avoid unnecessary tokenization ( vllm-project#12310 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model][Bugfix]: correct Aria model output ( vllm-project#12309 )
Signed-off-by: xffxff <[email protected]>
* [Bugfix][VLM] Fix mixed-modality inference backward compatibility for V0 ( vllm-project#12313 )
Signed-off-by: Roger Wang <[email protected]>
* [Doc] Add docs for prompt replacement ( vllm-project#12318 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] Fix the error in the tip for the --lora-modules parameter ( vllm-project#12319 )
Signed-off-by: wangerxiao <[email protected]>
* [Misc] Improve the readability of BNB error messages ( vllm-project#12320 )
Signed-off-by: Jee Jee Li <[email protected]>
* Skip tokenize/detokenize when it is disabled by arg --skip-tokenizer-init ( #367 )
* switching detokenize flag to be False
* detokenize = False for benchmarks
* restoring default in main vllm code for detokenize
* removing extra spaces
* moving detokenize to flag
* adding support for token ids
---------
Co-authored-by: maleksan85 <[email protected]>
* [Bugfix] Fix HPU multiprocessing executor ( vllm-project#12167 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Core] Support `reset_prefix_cache` ( vllm-project#12284 )
* [Frontend][V1] Online serving performance improvements ( vllm-project#12287 )
* [AMD][Quantization] Add TritonScaledMMLinearKernel since int8 is broken for AMD ( vllm-project#12282 )
Signed-off-by: Randall Smith <[email protected]>
* FP8 FA fixes ( #381 )
* FP8 FA fixes
Summary:
Add missing clamp and fix reciprocal scale computation.
* linter
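For readers skimming this log, a rough sketch of what "missing clamp and reciprocal scale computation" means for FP8 attention inputs is shown below; the tensor names, the helper, and the e4m3 max of 448.0 are illustrative assumptions, not the kernel's actual code.

```python
import torch

# Illustrative-only sketch: quantize an activation toward the FP8 range by
# multiplying with the reciprocal of its scale, then clamp so values that
# exceed the representable range saturate instead of overflowing.
FP8_MAX = 448.0  # torch.finfo(torch.float8_e4m3fn).max

def scale_and_clamp_to_fp8(x: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    inv_scale = 1.0 / scale                              # the "reciprocal scale computation"
    y = torch.clamp(x * inv_scale, -FP8_MAX, FP8_MAX)    # the "missing clamp"
    return y.to(torch.float8_e4m3fn)
```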
* Returning the use of the proper stream in allreduce ( #382 )
* [Bugfix] Fixing AMD LoRA CI test. ( vllm-project#12329 )
Signed-off-by: Alexei V. Ivanov <[email protected]>
* [Docs] Update FP8 KV Cache documentation ( vllm-project#12238 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Docs] Document vulnerability disclosure process ( vllm-project#12326 )
Signed-off-by: Russell Bryant <[email protected]>
* [V1] Add `uncache_blocks` ( vllm-project#12333 )
* [doc] explain common errors around torch.compile ( vllm-project#12340 )
Signed-off-by: youkaichao <[email protected]>
* [Hardware][Gaudi][BugFix] Fix dataclass error due to triton package update ( vllm-project#12338 )
Signed-off-by: zhenwei <[email protected]>
* [Bugfix] Fix k_proj's bias for whisper self attention ( vllm-project#12342 )
Signed-off-by: Isotr0py <[email protected]>
* [Kernel] Flash Attention 3 Support ( vllm-project#12093 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Doc] Troubleshooting errors during model inspection ( vllm-project#12351 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] Simplify M-RoPE ( vllm-project#12352 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: imkero <[email protected]>
* [Bugfix] Fix broken internvl2 inference with v1 ( vllm-project#12360 )
Signed-off-by: Isotr0py <[email protected]>
* [core] add wake_up doc and some sanity check ( vllm-project#12361 )
Signed-off-by: youkaichao <[email protected]>
* [torch.compile] decouple compile sizes and cudagraph sizes ( vllm-project#12243 )
Signed-off-by: youkaichao <[email protected]>
* [FP8][Kernel] Dynamic kv cache scaling factors computation ( vllm-project#11906 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>
* [TPU] Update TPU CI to use torchxla nightly on 20250122 ( vllm-project#12334 )
Signed-off-by: Siyuan Liu <[email protected]>
* [Docs] Document Phi-4 support ( vllm-project#12362 )
Signed-off-by: Isotr0py <[email protected]>
* [BugFix] Fix parameter names and `process_after_weight_loading` for W4A16 MoE Group Act Order ( vllm-project#11528 )
Signed-off-by: ElizaWszola <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Misc] Fix OpenAI API Compatibility Issues in Benchmark Script ( vllm-project#12357 )
Signed-off-by: Junichi Sato <[email protected]>
* [Docs] Add meetup slides ( vllm-project#12345 )
Signed-off-by: Woosuk Kwon <[email protected]>
* Using pytorch commit past the point when rowwise PR ( pytorch/pytorch#144432 ) was merged ( #384 )
* [Docs] Update spec decode + structured output in compat matrix ( vllm-project#12373 )
Signed-off-by: Russell Bryant <[email protected]>
* [V1][Frontend] Coalesce bunched `RequestOutput`s ( vllm-project#12298 )
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
* Set weights_only=True when using torch.load() ( vllm-project#12366 )
Signed-off-by: Russell Bryant <[email protected]>
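As a small aside for anyone reproducing this hardening change, the pattern is simply passing weights_only=True to torch.load(); the file path and round-trip below are made up for illustration and are not part of the PR's diff.

```python
import torch

# Hypothetical round-trip purely for illustration; the change in the log only
# flips the weights_only flag on existing torch.load() call sites.
torch.save({"weight": torch.randn(4, 4)}, "/tmp/example_ckpt.pt")

# weights_only=True restricts unpickling to tensors and basic containers,
# which avoids executing arbitrary pickled code from untrusted checkpoints.
state = torch.load("/tmp/example_ckpt.pt", map_location="cpu", weights_only=True)
print(state["weight"].shape)
```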
* [Bugfix] Path join when building local path for S3 clone ( vllm-project#12353 )
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
* Update compressed-tensors version ( vllm-project#12367 )
* [V1] Increase default batch size for H100/H200 ( vllm-project#12369 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [perf] fix perf regression from vllm-project#12253 ( vllm-project#12380 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Use VisionArena Dataset for VLM Benchmarking ( vllm-project#12389 )
Signed-off-by: Roger Wang <[email protected]>
* [ci/build] fix wheel size check ( vllm-project#12396 )
Signed-off-by: youkaichao <[email protected]>
* [Hardware][Gaudi][Doc] Add missing step in setup instructions ( vllm-project#12382 )
* [ci/build] sync default value for wheel size ( vllm-project#12398 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Enable proxy support in benchmark script ( vllm-project#12356 )
Signed-off-by: Junichi Sato <[email protected]>
* [Bugfix][Kernel] Fix CUDA 11.8 being broken by FA3 build ( vllm-project#12375 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Applying scales rename to fp8 config ( #387 )
* [Misc] Remove deprecated code ( vllm-project#12383 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix][Kernel] FA3 Fix - RuntimeError: This flash attention build only supports pack_gqa (for build size reasons). ( vllm-project#12405 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Dev-docker Documentation Updates ( #378 )
* Dev-docker Documentation Updates
Minor updates to several sections, with links to other documents where appropriate.
* Fix formatting of GEMM filename
* README cleanup
- Reorder some sections of the README to make them easier to follow
- Improve formatting of bash commands
- Prefer use of huggingface model names instead of hard-coded directories
- Clean up wording
* Expanded sample commands for Latency and Throughput
* Fix markdown links
* Fix pre-commit errors
* Updates from review
Initial updates to incorporate feedback from a review session held with @t-parry
* Update script args to match current recommendations
* Remove recommended max-num-seqs values for now
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
* [Bugfix][Kernel] Fix moe align block issue for mixtral ( vllm-project#12413 )
* [Bugfix] Fix BLIP-2 processing ( vllm-project#12412 )
Signed-off-by: DarkLight1337 <[email protected]>
* [ROCm][MoE] MI300 tuned configs Mixtral-8x(7B,22B) | fp16, fp8 ( vllm-project#12408 )
Signed-off-by: Divakar Verma <[email protected]>
* [Misc] Add FA2 support to ViT MHA layer ( vllm-project#12355 )
Signed-off-by: Isotr0py <[email protected]>
* [TPU][CI] Update torchxla version in requirement-tpu.txt ( vllm-project#12422 )
Signed-off-by: Siyuan Liu <[email protected]>
* [Misc][Bugfix] FA3 support to ViT MHA layer ( vllm-project#12435 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [V1][Perf] Reduce scheduling overhead in model runner after cuda sync ( vllm-project#12094 )
Signed-off-by: Keyun Tong <[email protected]>
* [V1][Bugfix] Fix assertion when mm hashing is turned off ( vllm-project#12439 )
Signed-off-by: Roger Wang <[email protected]>
* [Misc] Revert FA on ViT vllm-project#12355 and vllm-project#12435 ( vllm-project#12445 )
* [Frontend] generation_config.json for maximum tokens ( vllm-project#12242 )
Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
* [Bugfix] Disable w16a16 2of4 sparse CompressedTensors24 ( vllm-project#12417 )
Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: mgoin <[email protected]>
* [Bugfix/CI] Fix broken kernels/test_mha.py ( vllm-project#12450 )
* [Bugfix][Kernel] Fix perf regression caused by PR vllm-project#12405 ( vllm-project#12434 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Build/CI] Fix libcuda.so linkage ( vllm-project#12424 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [Frontend] Rerank API (Jina- and Cohere-compatible API) ( vllm-project#12376 )
Signed-off-by: Kyle Mistele <[email protected]>
* [DOC] Add link to vLLM blog ( vllm-project#12460 )
Signed-off-by: Yuan Tang <[email protected]>
* [V1] Avoid list creation in input preparation ( vllm-project#12457 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Frontend] Support scores endpoint in run_batch ( vllm-project#12430 )
Signed-off-by: Pooya Davoodi <[email protected]>
* [Bugfix] Fix Granite 3.0 MoE model loading ( vllm-project#12446 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix missing seq_start_loc in xformers prefill metadata ( vllm-project#12464 )
Signed-off-by: Isotr0py <[email protected]>
* [V1][Minor] Minor optimizations for update_from_output ( vllm-project#12454 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Bugfix] Fix gpt2 GGUF inference ( vllm-project#12467 )
Signed-off-by: Isotr0py <[email protected]>
* [Build] Only build 9.0a for scaled_mm and sparse kernels ( vllm-project#12339 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [V1][Metrics] Add initial Prometheus logger ( vllm-project#12416 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [V1][CI/Test] Do basic test for top-p & top-k sampling ( vllm-project#12469 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [FlashInfer] Upgrade to 0.2.0 ( vllm-project#11194 )
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: youkaichao <[email protected]>
* Support FP8 FA from Quark format ( #388 )
* Support FP8 FA from Quark format
* Support FP8 FA from Quark format
* nit: update comment
* Direct call on ROCm
* 20250127 docs update ( #392 )
* updating code blocks
* typo
* updated manifest
* Including feedback
* whitespace
* Deepseek instructions
* hyperlink fix
* hyperlink fix
* updating what is new
* cpx update
* typo
* whitespace
* whitespace
* Faster Custom Paged Attention kernels ( #372 )
* integrate new cpa kernel, update tests and benchmark
* added comments to mfma4 kernel
* further comments for mfma16 kernel
* clang-format
* Lint
* add flag for logits rtz conversion and disable by default
* lint
* [Bugfix]: Fix paged attention unit tests of #372 ( #389 )
* [Bugfix]: fix paged attention tests based on the updated kernels in `csrc/attention/paged_attention_v1.cu`,`csrc/attention/paged_attention_v2.cu` and `csrc/rocm/attention.cu`.
* improve code documentation.
* lint
---------
Co-authored-by: vllmellm <[email protected]>
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: TJian <[email protected]>
Co-authored-by: vllmellm <[email protected]>
* Using a more precise profiling on ROCm to properly account for weights padding ( #394 )
* Update Dockerfile.rocm
* [Bugfix]: include the env variables required for running FastSyncLLM
Signed-off-by: vllmellm <[email protected]>
* fix pre-commit lint
Signed-off-by: vllmellm <[email protected]>
* [Bugfix] included missing environment variable
Signed-off-by: vllmellm <[email protected]>
---------
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Sungjae Lee <[email protected]>
Signed-off-by: Yida Wu <[email protected]>
Signed-off-by: Chenguang Li <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Alex-Brooks <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Shanshan Shen <[email protected]>
Signed-off-by: elijah <[email protected]>
Signed-off-by: Yikun <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Konrad Zawora <[email protected]>
Signed-off-by: tjtanaa <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: yisheng <[email protected]>
Signed-off-by: Abatom <[email protected]>
Signed-off-by: Liangfu Chen <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Yuan Zhou <[email protected]>
Signed-off-by: Sourashis Roy <[email protected]>
Signed-off-by: Nishidha Panpaliya <[email protected]>
Signed-off-by: Ilya Lavrenov <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: Wallas Santos <[email protected]>
Signed-off-by: jiang1.li <[email protected]>
Signed-off-by: yan ma <[email protected]>
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Maxime Fournioux <[email protected]>
Signed-off-by: Ye Qi <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: Kunshang Ji <[email protected]>
Signed-off-by: Kuntai Du <[email protected]>
Signed-off-by: Ren MinMin <[email protected]>
Signed-off-by: Travis Johnson <[email protected]>
Signed-off-by: Fred Reiss <[email protected]>
Signed-off-by: shaochangxu.scx <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: Rafael Vasquez <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: Kyle Sayers <[email protected]>
Signed-off-by: Rahul Tuli <[email protected]>
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Divakar Verma <[email protected]>
Signed-off-by: Gregory Shtrasberg <[email protected]>
Signed-off-by: hongxyan <[email protected]>
Signed-off-by: Michal Adamczyk <[email protected]>
Signed-off-by: zibai <[email protected]>
Signed-off-by: Martin Gleize <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: isikhi <[email protected]>
Signed-off-by: Jason Cheng <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]>
Signed-off-by: Thomas Parnell <[email protected]>
Signed-off-by: Jannis Schönleber <[email protected]>
Signed-off-by: rickyx <[email protected]>
Signed-off-by: Andy Lo <[email protected]>
Signed-off-by: Adrian Cole <[email protected]>
Signed-off-by: maleksan85 <[email protected]>
Signed-off-by: Hongxia Yang <[email protected]>
Signed-off-by: kevin <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: xffxff <[email protected]>
Signed-off-by: wangerxiao <[email protected]>
Signed-off-by: Alexei V. Ivanov <[email protected]>
Signed-off-by: zhenwei <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Siyuan Liu <[email protected]>
Signed-off-by: ElizaWszola <[email protected]>
Signed-off-by: Junichi Sato <[email protected]>
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
Signed-off-by: Keyun Tong <[email protected]>
Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Kyle Mistele <[email protected]>
Signed-off-by: Pooya Davoodi <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: vllmellm <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Akshat Tripathi <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Avshalom Manevich <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Yangcheng Li <[email protected]>
Co-authored-by: Siyuan Li <[email protected]>
Co-authored-by: Sungjae Lee <[email protected]>
Co-authored-by: Concurrensee <[email protected]>
Co-authored-by: Chenguang Li <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Alex Brooks <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Shanshan Shen <[email protected]>
Co-authored-by: elijah <[email protected]>
Co-authored-by: Yikun Jiang <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Steve Luo <[email protected]>
Co-authored-by: mgoin <[email protected]>
Co-authored-by: Alexei-V-Ivanov-AMD <[email protected]>
Co-authored-by: Alexei V. Ivanov <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: Konrad Zawora <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
Co-authored-by: maang-h <[email protected]>
Co-authored-by: YiSheng5 <[email protected]>
Co-authored-by: Zhonghua Deng <[email protected]>
Co-authored-by: Liangfu Chen <[email protected]>
Co-authored-by: XiaobingZhang <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Yuan <[email protected]>
Co-authored-by: jiangjiadi <[email protected]>
Co-authored-by: jiadi.jjd <[email protected]>
Co-authored-by: sroy745 <[email protected]>
Co-authored-by: Jie Fu (傅杰) <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: WangErXiao <[email protected]>
Co-authored-by: Nishidha <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Wallas Henrique <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: Yan Ma <[email protected]>
Co-authored-by: rasmith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: Maxime Fournioux <[email protected]>
Co-authored-by: Guspan Tanadi <[email protected]>
Co-authored-by: Ye (Charlotte) Qi <[email protected]>
Co-authored-by: yeq <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
Co-authored-by: Charles Frye <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: cennn <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: minmin <[email protected]>
Co-authored-by: Ren MinMin <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Fred Reiss <[email protected]>
Co-authored-by: shaochangxu <[email protected]>
Co-authored-by: shaochangxu.scx <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: sixgod <[email protected]>
Co-authored-by: Rafael Vasquez <[email protected]>
Co-authored-by: Elfie Guo <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Kyle Sayers <[email protected]>
Co-authored-by: Rahul Tuli <[email protected]>
Co-authored-by: Keyun Tong <[email protected]>
Co-authored-by: RunningLeon <[email protected]>
Co-authored-by: kewang-xlnx <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: tvirolai-amd <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Zhaoyi Li <[email protected]>
Co-authored-by: charlifu <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Hongxia Yang <[email protected]>
Co-authored-by: yancong <[email protected]>
Co-authored-by: Michal Adamczyk <[email protected]>
Co-authored-by: gujing <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: Işık <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Cheng Kuan Yong Jason <[email protected]>
Co-authored-by: Jinzhen Lin <[email protected]>
Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: Jannis Schönleber <[email protected]>
Co-authored-by: Ricky Xu <[email protected]>
Co-authored-by: Andy Lo <[email protected]>
Co-authored-by: Adrian Cole <[email protected]>
Co-authored-by: Jani Monoses <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: maleksan85 <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: zhou fan <[email protected]>
Co-authored-by: ilia-cher <[email protected]>
Co-authored-by: liuzhenwei <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>
Co-authored-by: Siyuan Liu <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Junichi Sato <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: omer-dayan <[email protected]>
Co-authored-by: Mohit Deopujari <[email protected]>
Co-authored-by: Jeremy Arnold <[email protected]>
Co-authored-by: Matthew Hendrey <[email protected]>
Co-authored-by: Kyle Mistele <[email protected]>
Co-authored-by: Pooya Davoodi <[email protected]>
Co-authored-by: Mark McLoughlin <[email protected]>
Co-authored-by: Bowen Wang <[email protected]>
Co-authored-by: Bowen Bao <[email protected]>
Co-authored-by: arakowsk-amd <[email protected]>
Co-authored-by: sanyalington <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: vllmellm <[email protected]>
GWS0428 pushed a commit to GWS0428/VARserve that referenced this pull request Feb 12, 2025
[CI/Build] Make pre-commit faster ( vllm-project#12212 ) … 6876c40
Signed-off-by: DarkLight1337 <[email protected]>
hongxiayang added a commit to ROCm/vllm that referenced this pull request Feb 19, 2025
[FEAT] [AITER] Support AITER operators: Fused MoE, Linear, Norm ( #436 ) … 4c8c86d
* [Doc] Update Quantization Hardware Support Documentation ( vllm-project#12025 )
Signed-off-by: tjtanaa <[email protected]>
Co-authored-by: tjtanaa <[email protected]>
* [HPU][misc] add comments for explanation ( vllm-project#12034 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix various bugs in multi-modal processor ( vllm-project#12031 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Kernel] Revert the API change of Attention.forward ( vllm-project#12038 )
Signed-off-by: Chen Zhang <[email protected]>
* [Platform] Add output for Attention Backend ( vllm-project#11981 )
Signed-off-by: wangxiyuan <[email protected]>
* [Bugfix][Kernel] Give unique name to BlockSparseFlashAttention ( vllm-project#12040 )
Signed-off-by: Chen Zhang <[email protected]>
* Explain where the engine args go when using Docker ( vllm-project#12041 )
Signed-off-by: Harry Mellor <[email protected]>
* Docs lint
* [Doc]: Update the Json Example of the `Engine Arguments` document ( vllm-project#12045 )
* [Misc] Merge bitsandbytes_stacked_params_mapping and packed_modules_mapping ( vllm-project#11924 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Kernel] Support MulAndSilu ( vllm-project#11624 )
Signed-off-by: Jee Jee Li <[email protected]>
* [HPU][Bugfix] Don't use /dev/accel/accel0 for HPU autodetection in setup.py ( vllm-project#12046 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Platform] move current_memory_usage() into platform ( vllm-project#11369 )
Signed-off-by: Shanshan Shen <[email protected]>
* [V1][BugFix] Fix edge case in VLM scheduling ( vllm-project#12065 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Misc] Add multipstep chunked-prefill support for FlashInfer ( vllm-project#10467 )
* [core] Turn off GPU communication overlap for Ray executor ( vllm-project#12051 )
Signed-off-by: Rui Qiao <[email protected]>
* [core] platform agnostic executor via collective_rpc ( vllm-project#11256 )
Signed-off-by: youkaichao <[email protected]>
* [Doc] Update examples to remove SparseAutoModelForCausalLM ( vllm-project#12062 )
Signed-off-by: Kyle Sayers <[email protected]>
* [V1][Prefix Cache] Move the logic of num_computed_tokens into KVCacheManager ( vllm-project#12003 )
* Fix: cases with empty sparsity config ( vllm-project#12057 )
Signed-off-by: Rahul Tuli <[email protected]>
* Type-fix: make execute_model output type optional ( vllm-project#12020 )
* [Platform] Do not raise error if _Backend is not found ( vllm-project#12023 )
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
* [Model]: Support internlm3 ( vllm-project#12037 )
* Misc: allow to use proxy in `HTTPConnection` ( vllm-project#12042 )
Signed-off-by: Yuan Zhou <[email protected]>
* [Misc][Quark] Upstream Quark format to VLLM ( vllm-project#10765 )
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Doc]: Update `OpenAI-Compatible Server` documents ( vllm-project#12082 )
* [Bugfix] use right truncation for non-generative tasks ( vllm-project#12050 )
Signed-off-by: Joe Runde <[email protected]>
* [V1][Core] Autotune encoder cache budget ( vllm-project#11895 )
Signed-off-by: Roger Wang <[email protected]>
* [Bugfix] Fix _get_lora_device for HQQ marlin ( vllm-project#12090 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* Allow hip sources to be directly included when compiling for rocm. ( vllm-project#12087 )
* [Core] Default to using per_token quantization for fp8 when cutlass is supported. ( vllm-project#8651 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: mgoin <[email protected]>
* [Doc] Add documentation for specifying model architecture ( vllm-project#12105 )
* Various cosmetic/comment fixes ( vllm-project#12089 )
Signed-off-by: mgoin <[email protected]>
* [Bugfix] Remove hardcoded `head_size=256` for Deepseek v2 and v3 ( vllm-project#12067 )
Signed-off-by: Isotr0py <[email protected]>
* Support torchrun and SPMD-style offline inference ( vllm-project#12071 )
Signed-off-by: youkaichao <[email protected]>
* [core] LLM.collective_rpc interface and RLHF example ( vllm-project#12084 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix max image feature size for Llava-one-vision ( vllm-project#12104 )
Signed-off-by: Roger Wang <[email protected]>
* Enable user marker for vllm profiling ( #357 )
* Enable user marker for vllm profiling
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
* [misc] Add LoRA kernel micro benchmarks ( vllm-project#11579 )
* [Model] Add support for deepseek-vl2-tiny model ( vllm-project#12068 )
Signed-off-by: Isotr0py <[email protected]>
* Deepseek V3 support ( #364 )
* Changing the hard coded datatype to see if it's enough for the model to work
* Picking the upstream moe kernel version
* make upstream fix for v3 also work for rocm v2
* Conditional fnuz dtype
* Requantizing from fn to fnuz
* Requantizing moe as well
* Actually requantizing moe weights
* Conditional requantization and assert on padding in block quant
* Format
---------
Co-authored-by: charlifu <[email protected]>
* [Bugfix] Set enforce_eager automatically for mllama ( vllm-project#12127 )
Signed-off-by: Chen Zhang <[email protected]>
* [Bugfix] Fix a path bug in disaggregated prefill example script. ( vllm-project#12121 )
Signed-off-by: Kuntai Du <[email protected]>
* [CI]add genai-perf benchmark in nightly benchmark ( vllm-project#10704 )
Signed-off-by: Kunshang Ji <[email protected]>
* [Doc] Add instructions on using Podman when SELinux is active ( vllm-project#12136 )
Signed-off-by: Yuan Tang <[email protected]>
* [Bugfix] Fix issues in CPU build Dockerfile ( vllm-project#12135 )
Signed-off-by: Yuan Tang <[email protected]>
* [BugFix] add more `is not None` check in VllmConfig.__post_init__ ( vllm-project#12138 )
Signed-off-by: Chen Zhang <[email protected]>
* [Misc] Add deepseek_vl2 chat template ( vllm-project#12143 )
Signed-off-by: Isotr0py <[email protected]>
* [ROCm][MoE] moe tuning support for rocm ( vllm-project#12049 )
Signed-off-by: Divakar Verma <[email protected]>
* [V1] Move more control of kv cache initialization from model_executor to EngineCore ( vllm-project#11960 )
Signed-off-by: Chen Zhang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
* [Misc][LoRA] Improve the readability of LoRA error messages ( vllm-project#12102 )
Signed-off-by: Jee Jee Li <[email protected]>
* [CI/Build][CPU][Bugfix] Fix CPU CI ( vllm-project#12150 )
Signed-off-by: jiang1.li <[email protected]>
* [core] allow callable in collective_rpc ( vllm-project#12151 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix score api for missing max_model_len validation ( vllm-project#12119 )
Signed-off-by: Wallas Santos <[email protected]>
* [Bugfix] Mistral tokenizer encode accept list of str ( vllm-project#12149 )
Signed-off-by: Kunshang Ji <[email protected]>
* [AMD][FP8] Using MI300 FP8 format on ROCm for block_quant ( vllm-project#12134 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* [torch.compile] disable logging when cache is disabled ( vllm-project#12043 )
Signed-off-by: youkaichao <[email protected]>
* [misc] fix cross-node TP ( vllm-project#12166 )
Signed-off-by: youkaichao <[email protected]>
* [AMD][CI/Build][Bugfix] use pytorch stale wheel ( vllm-project#12172 )
Signed-off-by: hongxyan <[email protected]>
* [core] further polish memory profiling ( vllm-project#12126 )
Signed-off-by: youkaichao <[email protected]>
* [Docs] Fix broken link in SECURITY.md ( vllm-project#12175 )
Signed-off-by: Russell Bryant <[email protected]>
* [Model] Port deepseek-vl2 processor, remove dependency ( vllm-project#12169 )
Signed-off-by: Isotr0py <[email protected]>
* [core] clean up executor class hierarchy between v1 and v0 ( vllm-project#12171 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Support register quantization method out-of-tree ( vllm-project#11969 )
* [V1] Collect env var for usage stats ( vllm-project#12115 )
* [BUGFIX] Move scores to float32 in case of running xgrammar on cpu ( vllm-project#12152 )
Signed-off-by: Michal Adamczyk <[email protected]>
* [Bugfix] Fix multi-modal processors for transformers 4.48 ( vllm-project#12187 )
* [torch.compile] store inductor compiled Python file ( vllm-project#12182 )
Signed-off-by: youkaichao <[email protected]>
* benchmark_serving support --served-model-name param ( vllm-project#12109 )
Signed-off-by: zibai <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Misc] Add BNB support to GLM4-V model ( vllm-project#12184 )
Signed-off-by: Isotr0py <[email protected]>
* [V1] Add V1 support of Qwen2-VL ( vllm-project#12128 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>
* [Model] Support for fairseq2 Llama ( vllm-project#11442 )
Signed-off-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>
* [Bugfix] Fix num_heads value for simple connector when tp enabled ( vllm-project#12074 )
Signed-off-by: Shangming Cai <[email protected]>
* [torch.compile] fix sym_tensor_indices ( vllm-project#12191 )
Signed-off-by: youkaichao <[email protected]>
* Move linting to `pre-commit` ( vllm-project#11975 )
Signed-off-by: Harry Mellor <[email protected]>
* [DOC] Fix typo in docstring and assert message ( vllm-project#12194 )
Signed-off-by: Yuan Tang <[email protected]>
* [DOC] Add missing docstring in LLMEngine.add_request() ( vllm-project#12195 )
Signed-off-by: Yuan Tang <[email protected]>
* [Bugfix] Fix incorrect types in LayerwiseProfileResults ( vllm-project#12196 )
Signed-off-by: Yuan Tang <[email protected]>
* [Model] Add Qwen2 PRM model support ( vllm-project#12202 )
Signed-off-by: Isotr0py <[email protected]>
* [Core] Interface for accessing model from `VllmRunner` ( vllm-project#10353 )
Signed-off-by: DarkLight1337 <[email protected]>
* [misc] add placeholder format.sh ( vllm-project#12206 )
Signed-off-by: youkaichao <[email protected]>
* [CI/Build] Remove dummy CI steps ( vllm-project#12208 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI/Build] Make pre-commit faster ( vllm-project#12212 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model] Upgrade Aria to transformers 4.48 ( vllm-project#12203 )
Signed-off-by: DarkLight1337 <[email protected]>
* [misc] print a message to suggest how to bypass commit hooks ( vllm-project#12217 )
Signed-off-by: youkaichao <[email protected]>
* [core][bugfix] configure env var during import vllm ( vllm-project#12209 )
Signed-off-by: youkaichao <[email protected]>
* [V1] Remove `_get_cache_block_size` ( vllm-project#12214 )
Signed-off-by: Chen Zhang <[email protected]>
* [Misc] Pass `attention` to impl backend ( vllm-project#12218 )
Signed-off-by: wangxiyuan <[email protected]>
* [Bugfix] Fix `HfExampleModels.find_hf_info` ( vllm-project#12223 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI] Pass local python version explicitly to pre-commit mypy.sh ( vllm-project#12224 )
Signed-off-by: Chen Zhang <[email protected]>
* Using ROCm6.3.1 base docker and building hipblas-common ( #366 )
* [Misc] Update CODEOWNERS ( vllm-project#12229 )
* fix: update platform detection for M-series arm based MacBook processors ( vllm-project#12227 )
Signed-off-by: isikhi <[email protected]>
* [misc] add cuda runtime version to usage data ( vllm-project#12190 )
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [bugfix] catch xgrammar unsupported array constraints ( vllm-project#12210 )
Signed-off-by: Jason Cheng <[email protected]>
* [Kernel] optimize moe_align_block_size for cuda graph and large num_experts (e.g. DeepSeek-V3) ( vllm-project#12222 )
Signed-off-by: Jinzhen Lin <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
* Add quantization and guided decoding CODEOWNERS ( vllm-project#12228 )
Signed-off-by: mgoin <[email protected]>
* [AMD][Build] Porting dockerfiles from the ROCm/vllm fork ( vllm-project#11777 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* [BugFix] Fix GGUF tp>1 when vocab_size is not divisible by 64 ( vllm-project#12230 )
Signed-off-by: NickLucche <[email protected]>
* [ci/build] disable failed and flaky tests ( vllm-project#12240 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Rename `MultiModalInputsV2 -> MultiModalInputs` ( vllm-project#12244 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc]Add BNB quantization for PaliGemmaForConditionalGeneration ( vllm-project#12237 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Misc] Remove redundant TypeVar from base model ( vllm-project#12248 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix mm_limits access for merged multi-modal processor ( vllm-project#12252 )
Signed-off-by: DarkLight1337 <[email protected]>
* [torch.compile] transparent compilation with more logging ( vllm-project#12246 )
Signed-off-by: youkaichao <[email protected]>
* [V1][Bugfix] Fix data item ordering in mixed-modality inference ( vllm-project#12259 )
Signed-off-by: Roger Wang <[email protected]>
* Remove pytorch comments for outlines + compressed-tensors ( vllm-project#12260 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Platform] improve platforms getattr ( vllm-project#12264 )
Signed-off-by: Mengqing Cao <[email protected]>
* [ci/build] update nightly torch for gh200 test ( vllm-project#12270 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] fix race condition that leads to wrong order of token returned ( vllm-project#10802 )
Signed-off-by: Jannis Schönleber <[email protected]>
* [Kernel] fix moe_align_block_size error condition ( vllm-project#12239 )
Signed-off-by: Jinzhen Lin <[email protected]>
* [v1][stats][1/n] Add RequestStatsUpdate and RequestStats types ( vllm-project#10907 )
Signed-off-by: rickyx <[email protected]>
* [Bugfix] Multi-sequence broken ( vllm-project#11898 )
Signed-off-by: Andy Lo <[email protected]>
* [Misc] Remove experimental dep from tracing.py ( vllm-project#12007 )
Signed-off-by: Adrian Cole <[email protected]>
* [Misc] Set default backend to SDPA for get_vit_attn_backend ( vllm-project#12235 )
Signed-off-by: wangxiyuan <[email protected]>
* [Core] Free CPU pinned memory on environment cleanup ( vllm-project#10477 )
* Update pre-commit.yml ( #374 )
* Update pre-commit.yml
* Reapplying missing format
* New codespell exclude location
---------
Co-authored-by: Kevin H. Luu <[email protected]>
* [bugfix] moe tuning. rm is_navi() ( vllm-project#12273 )
Signed-off-by: Divakar Verma <[email protected]>
* [BUGFIX] When skip_tokenize_init and multistep are set, execution crashes ( vllm-project#12277 )
Signed-off-by: maleksan85 <[email protected]>
Co-authored-by: maleksan85 <[email protected]>
* [Documentation][AMD] Add information about prebuilt ROCm vLLM docker for perf validation purpose ( vllm-project#12281 )
Signed-off-by: Hongxia Yang <[email protected]>
* [VLM] Simplify post-processing of replacement info ( vllm-project#12269 )
Signed-off-by: DarkLight1337 <[email protected]>
* [ci/lint] Add back default arg for pre-commit ( vllm-project#12279 )
Signed-off-by: kevin <[email protected]>
* [CI] add docker volume prune to neuron CI ( vllm-project#12291 )
Signed-off-by: Liangfu Chen <[email protected]>
* [Ci/Build] Fix mypy errors on main ( vllm-project#12296 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Benchmark] More accurate TPOT calc in `benchmark_serving.py` ( vllm-project#12288 )
Signed-off-by: Nick Hill <[email protected]>
* [core] separate builder init and builder prepare for each batch ( vllm-project#12253 )
Signed-off-by: youkaichao <[email protected]>
* [Build] update requirements of no-device ( vllm-project#12299 )
Signed-off-by: Mengqing Cao <[email protected]>
* [Core] Support fully transparent sleep mode ( vllm-project#11743 )
Signed-off-by: youkaichao <[email protected]>
* [VLM] Avoid unnecessary tokenization ( vllm-project#12310 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model][Bugfix]: correct Aria model output ( vllm-project#12309 )
Signed-off-by: xffxff <[email protected]>
* [Bugfix][VLM] Fix mixed-modality inference backward compatibility for V0 ( vllm-project#12313 )
Signed-off-by: Roger Wang <[email protected]>
* [Doc] Add docs for prompt replacement ( vllm-project#12318 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] Fix the error in the tip for the --lora-modules parameter ( vllm-project#12319 )
Signed-off-by: wangerxiao <[email protected]>
* [Misc] Improve the readability of BNB error messages ( vllm-project#12320 )
Signed-off-by: Jee Jee Li <[email protected]>
* Skip tokenize/detokenize when it is disabled by arg --skip-tokenizer-init ( #367 )
* switching detokenize flag to be False
* detokenize = False for benchmarks
* restoring default in main vllm code for detokenize
* removing extra spaces
* moving detokenize to flag
* adding support for token ids
---------
Co-authored-by: maleksan85 <[email protected]>
* [Bugfix] Fix HPU multiprocessing executor ( vllm-project#12167 )
Signed-off-by: Konrad Zawora <[email protected]>
* [Core] Support `reset_prefix_cache` ( vllm-project#12284 )
* [Frontend][V1] Online serving performance improvements ( vllm-project#12287 )
* [AMD][Quantization] Add TritonScaledMMLinearKernel since int8 is broken for AMD ( vllm-project#12282 )
Signed-off-by: Randall Smith <[email protected]>
* FP8 FA fixes ( #381 )
* FP8 FA fixes
Summary:
Add missing clamp and fix reciprocal scale computation.
* linter
* Returning the use of the proper stream in allreduce ( #382 )
* [Bugfix] Fixing AMD LoRA CI test. ( vllm-project#12329 )
Signed-off-by: Alexei V. Ivanov <[email protected]>
* [Docs] Update FP8 KV Cache documentation ( vllm-project#12238 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Docs] Document vulnerability disclosure process ( vllm-project#12326 )
Signed-off-by: Russell Bryant <[email protected]>
* [V1] Add `uncache_blocks` ( vllm-project#12333 )
* [doc] explain common errors around torch.compile ( vllm-project#12340 )
Signed-off-by: youkaichao <[email protected]>
* [Hardware][Gaudi][BugFix] Fix dataclass error due to triton package update ( vllm-project#12338 )
Signed-off-by: zhenwei <[email protected]>
* [Bugfix] Fix k_proj's bias for whisper self attention ( vllm-project#12342 )
Signed-off-by: Isotr0py <[email protected]>
* [Kernel] Flash Attention 3 Support ( vllm-project#12093 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Doc] Troubleshooting errors during model inspection ( vllm-project#12351 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] Simplify M-RoPE ( vllm-project#12352 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: imkero <[email protected]>
* [Bugfix] Fix broken internvl2 inference with v1 ( vllm-project#12360 )
Signed-off-by: Isotr0py <[email protected]>
* [core] add wake_up doc and some sanity check ( vllm-project#12361 )
Signed-off-by: youkaichao <[email protected]>
* [torch.compile] decouple compile sizes and cudagraph sizes ( vllm-project#12243 )
Signed-off-by: youkaichao <[email protected]>
* [FP8][Kernel] Dynamic kv cache scaling factors computation ( vllm-project#11906 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>
* [TPU] Update TPU CI to use torchxla nightly on 20250122 ( vllm-project#12334 )
Signed-off-by: Siyuan Liu <[email protected]>
* [Docs] Document Phi-4 support ( vllm-project#12362 )
Signed-off-by: Isotr0py <[email protected]>
* [BugFix] Fix parameter names and `process_after_weight_loading` for W4A16 MoE Group Act Order ( vllm-project#11528 )
Signed-off-by: ElizaWszola <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Misc] Fix OpenAI API Compatibility Issues in Benchmark Script ( vllm-project#12357 )
Signed-off-by: Junichi Sato <[email protected]>
* [Docs] Add meetup slides ( vllm-project#12345 )
Signed-off-by: Woosuk Kwon <[email protected]>
* Using pytorch commit past the point when rowwise PR ( pytorch/pytorch#144432 ) was merged ( #384 )
* Integrated ater: kvcache pa gemm rmsnorm
* fix pa
* fix
* replace topk softmax
* [Docs] Update spec decode + structured output in compat matrix ( vllm-project#12373 )
Signed-off-by: Russell Bryant <[email protected]>
* replace fp moe kernel with aiter kernel
* [V1][Frontend] Coalesce bunched `RequestOutput`s ( vllm-project#12298 )
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
* Set weights_only=True when using torch.load() ( vllm-project#12366 )
Signed-off-by: Russell Bryant <[email protected]>
* [Bugfix] Path join when building local path for S3 clone ( vllm-project#12353 )
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
* change ater to aiter
* Update compressed-tensors version ( vllm-project#12367 )
* [V1] Increase default batch size for H100/H200 ( vllm-project#12369 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [perf] fix perf regression from vllm-project#12253 ( vllm-project#12380 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Use VisionArena Dataset for VLM Benchmarking ( vllm-project#12389 )
Signed-off-by: Roger Wang <[email protected]>
* [ci/build] fix wheel size check ( vllm-project#12396 )
Signed-off-by: youkaichao <[email protected]>
* [Hardware][Gaudi][Doc] Add missing step in setup instructions ( vllm-project#12382 )
* [ci/build] sync default value for wheel size ( vllm-project#12398 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Enable proxy support in benchmark script ( vllm-project#12356 )
Signed-off-by: Junichi Sato <[email protected]>
* [Bugfix][Kernel] Fix CUDA 11.8 being broken by FA3 build ( vllm-project#12375 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Applying scales rename to fp8 config
* Applying scales rename to fp8 config ( #387 )
* Update Dockerfile.rocm
* [Misc] Remove deprecated code ( vllm-project#12383 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix][Kernel] FA3 Fix - RuntimeError: This flash attention build only supports pack_gqa (for build size reasons). ( vllm-project#12405 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Using aiter moe kernel
* Dev-docker Documentation Updates ( #378 )
* Dev-docker Documentation Updates
Minor updates to several sections, with links to other documents where appropriate.
* Fix formatting of GEMM filename
* README cleanup
- Reorder some sections of the README to make them easier to follow
- Improve formatting of bash commands
- Prefer use of huggingface model names instead of hard-coded directories
- Clean up wording
* Expanded sample commands for Latency and Throughput
* Fix markdown links
* Fix pre-commit errors
* Updates from review
Initial updates to incorporate feedback from a review session held with @t-parry
* Update script args to match current recommendations
* Remove recommended max-num-seqs values for now
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
* [Bugfix][Kernel] Fix moe align block issue for mixtral ( vllm-project#12413 )
* [Bugfix] Fix BLIP-2 processing ( vllm-project#12412 )
Signed-off-by: DarkLight1337 <[email protected]>
* [ROCm][MoE] MI300 tuned configs Mixtral-8x(7B,22B) | fp16, fp8 ( vllm-project#12408 )
Signed-off-by: Divakar Verma <[email protected]>
* [Misc] Add FA2 support to ViT MHA layer ( vllm-project#12355 )
Signed-off-by: Isotr0py <[email protected]>
* [TPU][CI] Update torchxla version in requirement-tpu.txt ( vllm-project#12422 )
Signed-off-by: Siyuan Liu <[email protected]>
* [Misc][Bugfix] FA3 support to ViT MHA layer ( vllm-project#12435 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [V1][Perf] Reduce scheduling overhead in model runner after cuda sync ( vllm-project#12094 )
Signed-off-by: Keyun Tong <[email protected]>
* [V1][Bugfix] Fix assertion when mm hashing is turned off ( vllm-project#12439 )
Signed-off-by: Roger Wang <[email protected]>
* fix pa copy
* pa update
* [Misc] Revert FA on ViT vllm-project#12355 and vllm-project#12435 ( vllm-project#12445 )
* [Frontend] generation_config.json for maximum tokens ( vllm-project#12242 )
Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
* [Bugfix] Disable w16a16 2of4 sparse CompressedTensors24 ( vllm-project#12417 )
Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: mgoin <[email protected]>
* add fp16 pa support for aiter
* [Bugfix/CI] Fix broken kernels/test_mha.py ( vllm-project#12450 )
* [Bugfix][Kernel] Fix perf regression caused by PR vllm-project#12405 ( vllm-project#12434 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Build/CI] Fix libcuda.so linkage ( vllm-project#12424 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [Frontend] Rerank API (Jina- and Cohere-compatible API) ( vllm-project#12376 )
Signed-off-by: Kyle Mistele <[email protected]>
* [DOC] Add link to vLLM blog ( vllm-project#12460 )
Signed-off-by: Yuan Tang <[email protected]>
* [V1] Avoid list creation in input preparation ( vllm-project#12457 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Frontend] Support scores endpoint in run_batch ( vllm-project#12430 )
Signed-off-by: Pooya Davoodi <[email protected]>
* [Bugfix] Fix Granite 3.0 MoE model loading ( vllm-project#12446 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix missing seq_start_loc in xformers prefill metadata ( vllm-project#12464 )
Signed-off-by: Isotr0py <[email protected]>
* [V1][Minor] Minor optimizations for update_from_output ( vllm-project#12454 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Bugfix] Fix gpt2 GGUF inference ( vllm-project#12467 )
Signed-off-by: Isotr0py <[email protected]>
* aiter build instructions
* [Build] Only build 9.0a for scaled_mm and sparse kernels ( vllm-project#12339 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* Copy to the right path
* [V1][Metrics] Add initial Prometheus logger ( vllm-project#12416 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [V1][CI/Test] Do basic test for top-p & top-k sampling ( vllm-project#12469 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [FlashInfer] Upgrade to 0.2.0 ( vllm-project#11194 )
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: youkaichao <[email protected]>
* Support FP8 FA from Quark format ( #388 )
* Support FP8 FA from Quark format
* Support FP8 FA from Quark format
* nit: update comment
* Direct call on ROCm
* 20250127 docs update ( #392 )
* updating code blocks
* typo
* updated manifest
* Including feedback
* whitespace
* Deepseek instructions
* hyperlink fix
* hyperlink fix
* updating what is new
* cpx update
* typo
* whitespace
* whitespace
* Add env var toggles to disable AITER MoE or PA (both by default on)
* Update accuracy benchmark for batch size > 1
* Add a few more AITER toggles for norm and linear layers
* Faster Custom Paged Attention kernels ( #372 )
* integrate new cpa kernel, update tests and benchmark
* added comments to mfma4 kernel
* further comments for mfma16 kernel
* clang-format
* Lint
* add flag for logits rtz conversion and disable by default
* lint
* [Bugfix]: Fix paged attention unit tests of #372 ( #389 )
* [Bugfix]: fix paged attention tests based on the updated kernels in `csrc/attention/paged_attention_v1.cu`,`csrc/attention/paged_attention_v2.cu` and `csrc/rocm/attention.cu`.
* improve code documentation.
* lint
---------
Co-authored-by: vllmellm <[email protected]>
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: TJian <[email protected]>
Co-authored-by: vllmellm <[email protected]>
* Using a more precise profiling on ROCm to properly account for weights padding ( #394 )
* Public aiter repo
* Fail if aiter build failed silently
* Aiter can only be built on MI300x
* Typo fix
* Aiter PA off by default
* Changes to support updated aiter FP8 PA
* Support FP8 and INT8 KV cache according to ROCm/aiter#90
* add moe weight shuffle for dynamic quant and unquantized path
Signed-off-by: charlifu <[email protected]>
* Use FP16-native PA after support in ROCm/aiter#97
* Fix: Use FP8 per-token quantize if KV cache dtype is FP8
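A minimal sketch of what per-token FP8 quantization of a KV-cache tensor looks like, assuming an e4m3 target and made-up tensor shapes; this is not the aiter kernel itself.

```python
import torch

FP8_MAX = 448.0  # torch.finfo(torch.float8_e4m3fn).max

def per_token_fp8_quant(kv: torch.Tensor):
    # kv: [num_tokens, head_dim]; compute one scale per token row so an
    # outlier in a single token does not squash the precision of the rest.
    scale = kv.abs().amax(dim=-1, keepdim=True).clamp(min=1e-6) / FP8_MAX
    q = torch.clamp(kv / scale, -FP8_MAX, FP8_MAX).to(torch.float8_e4m3fn)
    return q, scale

q, scale = per_token_fp8_quant(torch.randn(8, 128))
```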
* revert rocm_flash_attn.py line 883
* Don't enable by default to use an RC for main vllm-dev docker
* use ck moe for bf16 and fp16 fused_moe
* Merge remote-tracking branch 'origin/aiter_intergration_final' into merge-aiter-llama-fp8
Signed-off-by: vllmellm <[email protected]>
* [Bugfix] include moe shuffle env variable
Signed-off-by: vllmellm <[email protected]>
---------
Signed-off-by: tjtanaa <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: yisheng <[email protected]>
Signed-off-by: Abatom <[email protected]>
Signed-off-by: Liangfu Chen <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Yuan Zhou <[email protected]>
Signed-off-by: Sourashis Roy <[email protected]>
Signed-off-by: Nishidha Panpaliya <[email protected]>
Signed-off-by: Ilya Lavrenov <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: Wallas Santos <[email protected]>
Signed-off-by: jiang1.li <[email protected]>
Signed-off-by: yan ma <[email protected]>
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: Maxime Fournioux <[email protected]>
Signed-off-by: Ye Qi <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: Kunshang Ji <[email protected]>
Signed-off-by: Kuntai Du <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Ren MinMin <[email protected]>
Signed-off-by: Travis Johnson <[email protected]>
Signed-off-by: Fred Reiss <[email protected]>
Signed-off-by: Sungjae Lee <[email protected]>
Signed-off-by: shaochangxu.scx <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: Rafael Vasquez <[email protected]>
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Yida Wu <[email protected]>
Signed-off-by: Chenguang Li <[email protected]>
Signed-off-by: Alex-Brooks <[email protected]>
Signed-off-by: Shanshan Shen <[email protected]>
Signed-off-by: elijah <[email protected]>
Signed-off-by: Konrad Zawora <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: Kyle Sayers <[email protected]>
Signed-off-by: Rahul Tuli <[email protected]>
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Divakar Verma <[email protected]>
Signed-off-by: Gregory Shtrasberg <[email protected]>
Signed-off-by: hongxyan <[email protected]>
Signed-off-by: Michal Adamczyk <[email protected]>
Signed-off-by: zibai <[email protected]>
Signed-off-by: Martin Gleize <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: isikhi <[email protected]>
Signed-off-by: Jason Cheng <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]>
Signed-off-by: Thomas Parnell <[email protected]>
Signed-off-by: Jannis Schönleber <[email protected]>
Signed-off-by: rickyx <[email protected]>
Signed-off-by: Andy Lo <[email protected]>
Signed-off-by: Adrian Cole <[email protected]>
Signed-off-by: maleksan85 <[email protected]>
Signed-off-by: Hongxia Yang <[email protected]>
Signed-off-by: kevin <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: xffxff <[email protected]>
Signed-off-by: wangerxiao <[email protected]>
Signed-off-by: Alexei V. Ivanov <[email protected]>
Signed-off-by: zhenwei <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Siyuan Liu <[email protected]>
Signed-off-by: ElizaWszola <[email protected]>
Signed-off-by: Junichi Sato <[email protected]>
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
Signed-off-by: Keyun Tong <[email protected]>
Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Kyle Mistele <[email protected]>
Signed-off-by: Pooya Davoodi <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: charlifu <[email protected]>
Signed-off-by: vllmellm <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: maang-h <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: YiSheng5 <[email protected]>
Co-authored-by: Zhonghua Deng <[email protected]>
Co-authored-by: Liangfu Chen <[email protected]>
Co-authored-by: XiaobingZhang <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Yuan <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: jiangjiadi <[email protected]>
Co-authored-by: jiadi.jjd <[email protected]>
Co-authored-by: sroy745 <[email protected]>
Co-authored-by: Jie Fu (傅杰) <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: WangErXiao <[email protected]>
Co-authored-by: Nishidha <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Wallas Henrique <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: Yan Ma <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: rasmith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: Maxime Fournioux <[email protected]>
Co-authored-by: Guspan Tanadi <[email protected]>
Co-authored-by: Ye (Charlotte) Qi <[email protected]>
Co-authored-by: yeq <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
Co-authored-by: Charles Frye <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: cennn <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: minmin <[email protected]>
Co-authored-by: Ren MinMin <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Fred Reiss <[email protected]>
Co-authored-by: Sungjae Lee <[email protected]>
Co-authored-by: shaochangxu <[email protected]>
Co-authored-by: shaochangxu.scx <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: sixgod <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Rafael Vasquez <[email protected]>
Co-authored-by: Akshat Tripathi <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Avshalom Manevich <[email protected]>
Co-authored-by: Yangcheng Li <[email protected]>
Co-authored-by: Siyuan Li <[email protected]>
Co-authored-by: Concurrensee <[email protected]>
Co-authored-by: Chenguang Li <[email protected]>
Co-authored-by: Alex Brooks <[email protected]>
Co-authored-by: Shanshan Shen <[email protected]>
Co-authored-by: elijah <[email protected]>
Co-authored-by: Konrad Zawora <[email protected]>
Co-authored-by: Elfie Guo <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Kyle Sayers <[email protected]>
Co-authored-by: Rahul Tuli <[email protected]>
Co-authored-by: Keyun Tong <[email protected]>
Co-authored-by: RunningLeon <[email protected]>
Co-authored-by: kewang-xlnx <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: tvirolai-amd <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Zhaoyi Li <[email protected]>
Co-authored-by: charlifu <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Hongxia Yang <[email protected]>
Co-authored-by: yancong <[email protected]>
Co-authored-by: Michal Adamczyk <[email protected]>
Co-authored-by: gujing <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: Işık <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Cheng Kuan Yong Jason <[email protected]>
Co-authored-by: Jinzhen Lin <[email protected]>
Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: Jannis Schönleber <[email protected]>
Co-authored-by: Ricky Xu <[email protected]>
Co-authored-by: Andy Lo <[email protected]>
Co-authored-by: Adrian Cole <[email protected]>
Co-authored-by: Jani Monoses <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: maleksan85 <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: zhou fan <[email protected]>
Co-authored-by: ilia-cher <[email protected]>
Co-authored-by: Alexei-V-Ivanov-AMD <[email protected]>
Co-authored-by: liuzhenwei <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>
Co-authored-by: Siyuan Liu <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Junichi Sato <[email protected]>
Co-authored-by: amd-ruitang3 <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: omer-dayan <[email protected]>
Co-authored-by: Mohit Deopujari <[email protected]>
Co-authored-by: Jeremy Arnold <[email protected]>
Co-authored-by: chenjun <[email protected]>
Co-authored-by: ValarLip <[email protected]>
Co-authored-by: Matthew Hendrey <[email protected]>
Co-authored-by: Kyle Mistele <[email protected]>
Co-authored-by: Pooya Davoodi <[email protected]>
Co-authored-by: Mark McLoughlin <[email protected]>
Co-authored-by: Bowen Wang <[email protected]>
Co-authored-by: Bowen Bao <[email protected]>
Co-authored-by: arakowsk-amd <[email protected]>
Co-authored-by: Matthew Wong <[email protected]>
Co-authored-by: sanyalington <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: vllmellm <[email protected]>
Co-authored-by: charlifu <[email protected]>
mzusman pushed a commit to mzusman/vllm that referenced this pull request Mar 12, 2025: [CI/Build] Make pre-commit faster ( vllm-project#12212 ) … 5659268 Signed-off-by: DarkLight1337 <[email protected]>
|
2025-09-07 17:47:09
|
310aca88c984983189a57f1b72e3b1dde89fb92f
|
https://github.com/vllm-project/vllm/pull/11870
| false | true | true | true |
PERF: latency, latency, latency | SERVING: Serving, serving, Serving | TEST: test, test, test
|
Member youkaichao commented Jan 9, 2025 • edited by github-actions bot: This fixes the performance regression reported in #11744 (comment). On my local benchmark:
python benchmarks/benchmark_latency.py --model meta-llama/Meta-Llama-3-70B --load-format dummy --enforce-eager -tp 4
main branch:
Avg latency: 2.945735554069203 seconds
10% percentile latency: 2.924619035271462 seconds
25% percentile latency: 2.937671729727299 seconds
50% percentile latency: 2.9460502695292234 seconds
75% percentile latency: 2.955668824230088 seconds
90% percentile latency: 2.9639973257959356 seconds
99% percentile latency: 2.979829666109872 seconds
this PR:
Avg latency: 2.851606635436959 seconds
10% percentile latency: 2.8231707043829375 seconds
25% percentile latency: 2.834942308269092 seconds
50% percentile latency: 2.85484445450129 seconds
75% percentile latency: 2.8674310567148495 seconds
90% percentile latency: 2.872856835933635 seconds
99% percentile latency: 2.875793117735884 seconds
That is roughly a 3% perf difference. Hopefully this fixes the perf regression observed in the benchmark.
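For readers skimming the numbers above: the avg and percentile figures are plain summary statistics over the per-iteration end-to-end latencies collected by the benchmark. The sketch below shows one way such a summary can be computed; it is an illustration only, not the actual benchmarks/benchmark_latency.py code, and the helper name summarize_latencies is made up.

import numpy as np

def summarize_latencies(latencies_s):
    # Return the mean and selected percentiles (in seconds) of per-iteration latencies.
    arr = np.asarray(latencies_s, dtype=float)
    stats = {"avg": float(arr.mean())}
    for p in (10, 25, 50, 75, 90, 99):
        stats[f"p{p}"] = float(np.percentile(arr, p))
    return stats

# Example with a handful of made-up per-iteration latencies (seconds).
print(summarize_latencies([2.92, 2.94, 2.95, 2.96, 2.98]))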
youkaichao added 2 commits January 9, 2025 09:42: fix stream … f5b7d78 (Signed-off-by: youkaichao <[email protected]>); fix code … e16f595 (Signed-off-by: youkaichao <[email protected]>)
youkaichao requested a review from tlrmchlsmth January 9, 2025 02:00
github-actions bot commented Jan 9, 2025: 👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, a small and essential subset of CI tests meant to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these: add the ready label to the PR, or enable auto-merge. 🚀
Member Author youkaichao commented Jan 9, 2025: I find measuring the pure forward time makes more sense, since it is not affected by scheduling, etc.:
VLLM_LOG_BATCHSIZE_INTERVAL=1 python benchmarks/benchmark_latency.py --model meta-llama/Meta-Llama-3-70B --load-format dummy --enforce-eager -tp 4
main branch: Batchsize forward time stats (batchsize, count, median_time(ms)): [(8, 4998, 20.77), (256, 40, 28.99)]
this PR: Batchsize forward time stats (batchsize, count, median_time(ms)): [(8, 5027, 20.45), (256, 40, 28.95)]
The forward time for every step (batchsize 8) drops from 20.77 ms to 20.45 ms.
tlrmchlsmth reviewed Jan 9, 2025 (vllm/utils.py, comment on lines +959 to +970):
prev_set_stream = torch.cuda.set_stream
_current_stream = None

def _patched_set_stream(stream: torch.cuda.Stream) -> None:
    global _current_stream
    _current_stream = stream
    prev_set_stream(stream)

torch.cuda.set_stream = _patched_set_stream
Collaborator tlrmchlsmth Jan 9, 2025: It looks like we're not using set_stream anywhere in the vllm codebase. Could you add a unit test for this to make sure it's exercised?
Collaborator tlrmchlsmth Jan 9, 2025: Here we patch torch.cuda.set_stream to keep track of the current stream directly, so that we can avoid calling torch.cuda.current_stream(). I might be confused about how utils.current_stream() works, though.
Member Author youkaichao Jan 9, 2025: torch.cuda.graph will call it internally to switch streams, so any test case with cudagraph + nccl will exercise this PR's code.
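To make the reviewed hunk above a bit more concrete, here is a self-contained sketch of the stream-caching idea it implements: torch.cuda.set_stream is wrapped so the active stream is remembered in a module-level variable, and a reader helper returns the cached value instead of calling the comparatively expensive torch.cuda.current_stream() on every access. The helper name cached_current_stream and the lazy fallback are illustrative assumptions, not necessarily how vllm/utils.py is written.

from typing import Optional

import torch

# Keep a handle to the original setter before monkey-patching it.
_prev_set_stream = torch.cuda.set_stream
_current_stream: Optional[torch.cuda.Stream] = None

def _patched_set_stream(stream: torch.cuda.Stream) -> None:
    # Remember whichever stream was just made current, then delegate.
    global _current_stream
    _current_stream = stream
    _prev_set_stream(stream)

torch.cuda.set_stream = _patched_set_stream

def cached_current_stream() -> torch.cuda.Stream:
    # Lazy fallback (an assumption of this sketch): query PyTorch only if
    # nothing has been cached yet, e.g. before the first set_stream call.
    global _current_stream
    if _current_stream is None:
        _current_stream = torch.cuda.current_stream()
    return _current_stream

Because torch.cuda.graph switches streams through torch.cuda.set_stream internally, any cudagraph + nccl test path exercises the patched setter, which is the point youkaichao makes in the reply above.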
👍 1 tlrmchlsmth reacted with thumbs up emoji
tlrmchlsmth approved these changes Jan 9, 2025. Collaborator tlrmchlsmth left a comment: Thanks for the fix!
youkaichao enabled auto-merge (squash) January 9, 2025 03:40
github-actions bot added the ready label (ONLY add when PR is ready to merge/full CI is needed) Jan 9, 2025
youkaichao merged commit 310aca8 into vllm-project:main Jan 9, 2025 (71 of 73 checks passed)
youkaichao deleted the fix_current_stream branch January 9, 2025 07:37
gshtras added a commit to ROCm/vllm that referenced this pull request Jan 14, 2025: Merge pull request #358 from ROCm/upstream_merge_25_01_13 … 5976f48
* [Bugfix][V1] Fix molmo text-only inputs ( vllm-project#11676 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Kernel] Move attn_type to Attention.__init__() ( vllm-project#11690 )
Signed-off-by: Chen Zhang <[email protected]>
* [V1] Extend beyond image modality and support mixed-modality inference with Llava-OneVision ( vllm-project#11685 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix LLaVA-NeXT feature size precision error (for real) ( vllm-project#11772 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model] Future-proof Qwen2-Audio multi-modal processor ( vllm-project#11776 )
Signed-off-by: DarkLight1337 <[email protected]>
* [XPU] Make pp group initilized for pipeline-parallelism ( vllm-project#11648 )
Signed-off-by: yisheng <[email protected]>
* [Doc][3/N] Reorganize Serving section ( vllm-project#11766 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Kernel][LoRA]Punica prefill kernels fusion ( vllm-project#11234 )
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: Abatom <[email protected]>
Co-authored-by: Zhonghua Deng <[email protected]>
* [Bugfix] Update attention interface in `Whisper` ( vllm-project#11784 )
Signed-off-by: Roger Wang <[email protected]>
* [CI] Fix neuron CI and run offline tests ( vllm-project#11779 )
Signed-off-by: Liangfu Chen <[email protected]>
* fix init error for MessageQueue when n_local_reader is zero ( vllm-project#11768 )
* [Doc] Create a vulnerability management team ( vllm-project#9925 )
Signed-off-by: Russell Bryant <[email protected]>
* [CI][CPU] adding build number to docker image name ( vllm-project#11788 )
Signed-off-by: Yuan Zhou <[email protected]>
* [V1][Doc] Update V1 support for `LLaVa-NeXT-Video` ( vllm-project#11798 )
Signed-off-by: Roger Wang <[email protected]>
* [Bugfix] Comprehensively test and fix LLaVA-NeXT feature size calculation ( vllm-project#11800 )
Signed-off-by: DarkLight1337 <[email protected]>
* [doc] add doc to explain how to use uv ( vllm-project#11773 )
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [V1] Support audio language models on V1 ( vllm-project#11733 )
Signed-off-by: Roger Wang <[email protected]>
* [doc] update how pip can install nightly wheels ( vllm-project#11806 )
Signed-off-by: youkaichao <[email protected]>
* [Doc] Add note to `gte-Qwen2` models ( vllm-project#11808 )
Signed-off-by: DarkLight1337 <[email protected]>
* [optimization] remove python function call for custom op ( vllm-project#11750 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] update the prefix for qwen2 ( vllm-project#11795 )
Co-authored-by: jiadi.jjd <[email protected]>
* [Doc]Add documentation for using EAGLE in vLLM ( vllm-project#11417 )
Signed-off-by: Sourashis Roy <[email protected]>
* [Bugfix] Significant performance drop on CPUs with --num-scheduler-steps > 1 ( vllm-project#11794 )
* [Doc] Group examples into categories ( vllm-project#11782 )
Signed-off-by: Harry Mellor <[email protected]>
* [Bugfix] Fix image input for Pixtral-HF ( vllm-project#11741 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] sort torch profiler table by kernel timing ( vllm-project#11813 )
* Remove the duplicate imports of MultiModalKwargs and PlaceholderRange… ( vllm-project#11824 )
* Fixed docker build for ppc64le ( vllm-project#11518 )
Signed-off-by: Nishidha Panpaliya <[email protected]>
* [OpenVINO] Fixed Docker.openvino build ( vllm-project#11732 )
Signed-off-by: Ilya Lavrenov <[email protected]>
* [Bugfix] Add checks for LoRA and CPU offload ( vllm-project#11810 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Docs] reorganize sponsorship page ( vllm-project#11639 )
Signed-off-by: simon-mo <[email protected]>
* [Bug] Fix pickling of `ModelConfig` when RunAI Model Streamer is used ( vllm-project#11825 )
Signed-off-by: DarkLight1337 <[email protected]>
* [misc] improve memory profiling ( vllm-project#11809 )
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [doc] update wheels url ( vllm-project#11830 )
Signed-off-by: youkaichao <[email protected]>
* [Docs] Update sponsor name: 'Novita' to 'Novita AI' ( vllm-project#11833 )
* [Hardware][Apple] Native support for macOS Apple Silicon ( vllm-project#11696 )
Signed-off-by: Wallas Santos <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [torch.compile] consider relevant code in compilation cache ( vllm-project#11614 )
Signed-off-by: youkaichao <[email protected]>
* [VLM] Reorganize profiling/processing-related code ( vllm-project#11812 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Move examples into categories ( vllm-project#11840 )
Signed-off-by: Harry Mellor <[email protected]>
* [Doc][4/N] Reorganize API Reference ( vllm-project#11843 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI/Build][Bugfix] Fix CPU CI image clean up ( vllm-project#11836 )
Signed-off-by: jiang1.li <[email protected]>
* [Bugfix][XPU] fix silu_and_mul ( vllm-project#11823 )
Signed-off-by: yan ma <[email protected]>
* [Misc] Move some model utils into vision file ( vllm-project#11848 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Expand Multimodal API Reference ( vllm-project#11852 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc]add some explanations for BlockHashType ( vllm-project#11847 )
* [TPU][Quantization] TPU `W8A8` ( vllm-project#11785 )
Co-authored-by: Woosuk Kwon <[email protected]>
* [Kernel][Triton][AMD] Use block size heuristic for avg 2.8x speedup for int8 models ( vllm-project#11698 )
Signed-off-by: Randall Smith <[email protected]>
* [Docs] Add Google Cloud Meetup ( vllm-project#11864 )
* [CI] Turn on basic correctness tests for V1 ( vllm-project#10864 )
* treat do_lower_case in the same way as the sentence-transformers library ( vllm-project#11815 )
Signed-off-by: Max de Bayser <[email protected]>
* [Doc] Recommend uv and python 3.12 for quickstart guide ( vllm-project#11849 )
Signed-off-by: mgoin <[email protected]>
* [Misc] Move `print_*_once` from utils to logger ( vllm-project#11298 )
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Maxime Fournioux <[email protected]>
Co-authored-by: Maxime Fournioux <[email protected]>
* [Doc] Intended links Python multiprocessing library ( vllm-project#11878 )
* [perf]fix current stream ( vllm-project#11870 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Override dunder methods of placeholder modules ( vllm-project#11882 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] fix beam search input errors and latency benchmark script ( vllm-project#11875 )
Signed-off-by: Ye Qi <[email protected]>
Co-authored-by: yeq <[email protected]>
* [Doc] Add model development API Reference ( vllm-project#11884 )
Signed-off-by: DarkLight1337 <[email protected]>
* [platform] Allow platform specify attention backend ( vllm-project#11609 )
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
* [ci]try to fix flaky multi-step tests ( vllm-project#11894 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Provide correct Pixtral-HF chat template ( vllm-project#11891 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Docs] Add Modal to deployment frameworks ( vllm-project#11907 )
* [Doc][5/N] Move Community and API Reference to the bottom ( vllm-project#11896 )
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
* [VLM] Enable tokenized inputs for merged multi-modal processor ( vllm-project#11900 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Show default pooling method in a table ( vllm-project#11904 )
Signed-off-by: DarkLight1337 <[email protected]>
* [torch.compile] Hide KV cache behind torch.compile boundary ( vllm-project#11677 )
Signed-off-by: Chen Zhang <[email protected]>
* [Bugfix] Validate lora adapters to avoid crashing server ( vllm-project#11727 )
Signed-off-by: Joe Runde <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
* [BUGFIX] Fix `UnspecifiedPlatform` package name ( vllm-project#11916 )
Signed-off-by: Kunshang Ji <[email protected]>
* [ci] fix gh200 tests ( vllm-project#11919 )
Signed-off-by: youkaichao <[email protected]>
* [misc] remove python function call for custom activation op ( vllm-project#11885 )
Co-authored-by: youkaichao <[email protected]>
* [platform] support pytorch custom op pluggable ( vllm-project#11328 )
Signed-off-by: wangxiyuan <[email protected]>
* Replace "online inference" with "online serving" ( vllm-project#11923 )
Signed-off-by: Harry Mellor <[email protected]>
* [ci] Fix sampler tests ( vllm-project#11922 )
Signed-off-by: youkaichao <[email protected]>
* [Doc] [1/N] Initial guide for merged multi-modal processor ( vllm-project#11925 )
Signed-off-by: DarkLight1337 <[email protected]>
* [platform] support custom torch.compile backend key ( vllm-project#11318 )
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: youkaichao <[email protected]>
* [Doc] Rename offline inference examples ( vllm-project#11927 )
Signed-off-by: Harry Mellor <[email protected]>
* [Docs] Fix docstring in `get_ip` function ( vllm-project#11932 )
Signed-off-by: Kuntai Du <[email protected]>
* Doc fix in `benchmark_long_document_qa_throughput.py` ( vllm-project#11933 )
Signed-off-by: Kuntai Du <[email protected]>
* [Hardware][CPU] Support MOE models on x86 CPU ( vllm-project#11831 )
Signed-off-by: jiang1.li <[email protected]>
* [Misc] Clean up debug code in Deepseek-V3 ( vllm-project#11930 )
Signed-off-by: Isotr0py <[email protected]>
* [Misc] Update benchmark_prefix_caching.py fixed example usage ( vllm-project#11920 )
Signed-off-by: Ren MinMin <[email protected]>
Co-authored-by: Ren MinMin <[email protected]>
* [Bugfix] Check that number of images matches number of <|image|> tokens with mllama ( vllm-project#11939 )
Signed-off-by: Travis Johnson <[email protected]>
* [mypy] Fix mypy warnings in api_server.py ( vllm-project#11941 )
Signed-off-by: Fred Reiss <[email protected]>
* [ci] fix broken distributed-tests-4-gpus ( vllm-project#11937 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix][SpecDecode] Adjust Eagle model architecture to align with intended design ( vllm-project#11672 )
Signed-off-by: Sungjae Lee <[email protected]>
* [Bugfix] fused_experts_impl wrong compute type for float32 ( vllm-project#11921 )
Signed-off-by: shaochangxu.scx <[email protected]>
Co-authored-by: shaochangxu.scx <[email protected]>
* [CI/Build] Move model-specific multi-modal processing tests ( vllm-project#11934 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Basic guide for writing unit tests for new models ( vllm-project#11951 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix RobertaModel loading ( vllm-project#11940 )
Signed-off-by: NickLucche <[email protected]>
* [Model] Add cogagent model support vLLM ( vllm-project#11742 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [V1] Avoid sending text prompt to core engine ( vllm-project#11963 )
Signed-off-by: Roger Wang <[email protected]>
* [CI/Build] Add markdown linter ( vllm-project#11857 )
Signed-off-by: Rafael Vasquez <[email protected]>
* [Model] Initialize support for Deepseek-VL2 models ( vllm-project#11578 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Hardware][CPU] Multi-LoRA implementation for the CPU backend ( vllm-project#11100 )
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [Hardware][TPU] workaround fix for MoE on TPU ( vllm-project#11764 )
* [V1][Core][1/n] Logging and Metrics ( vllm-project#11962 )
Signed-off-by: [email protected] <[email protected]>
* [Model] Support GGUF models newly added in `transformers` 4.46.0 ( vllm-project#9685 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [V1] [2/n] Logging and Metrics - `OutputProcessor` Abstraction ( vllm-project#11973 )
Signed-off-by: [email protected] <[email protected]>
* [MISC] fix typo in kv transfer send recv test ( vllm-project#11983 )
* [Bug] Fix usage of `.transpose()` and `.view()` consecutively. ( vllm-project#11979 )
* [CI][Spec Decode] fix: broken test for EAGLE model ( vllm-project#11972 )
Signed-off-by: Sungjae Lee <[email protected]>
* [Misc] Fix Deepseek V2 fp8 kv-scale remapping ( vllm-project#11947 )
Signed-off-by: Yida Wu <[email protected]>
* [Misc]Minor Changes about Worker ( vllm-project#11555 )
Signed-off-by: Chenguang Li <[email protected]>
* [platform] add ray_device_key ( vllm-project#11948 )
Signed-off-by: youkaichao <[email protected]>
* Fix Max Token ID for Qwen-VL-Chat ( vllm-project#11980 )
Signed-off-by: Alex-Brooks <[email protected]>
* [Kernel] unified_attention for Attention.forward ( vllm-project#11967 )
Signed-off-by: Chen Zhang <[email protected]>
* [Doc][V1] Update model implementation guide for V1 support ( vllm-project#11998 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Doc] Organise installation documentation into categories and tabs ( vllm-project#11935 )
Signed-off-by: Harry Mellor <[email protected]>
* [platform] add device_control env var ( vllm-project#12009 )
Signed-off-by: youkaichao <[email protected]>
* [Platform] Move get_punica_wrapper() function to Platform ( vllm-project#11516 )
Signed-off-by: Shanshan Shen <[email protected]>
* bugfix: Fix signature mismatch in benchmark's `get_tokenizer` function ( vllm-project#11982 )
Signed-off-by: elijah <[email protected]>
* Using list
* Revert "[misc] improve memory profiling ( vllm-project#11809 )"
This reverts commit 889e662 .
* Trying to make scales work with compileable attention
* Docs lint
---------
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: yisheng <[email protected]>
Signed-off-by: Abatom <[email protected]>
Signed-off-by: Liangfu Chen <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Yuan Zhou <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Sourashis Roy <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Nishidha Panpaliya <[email protected]>
Signed-off-by: Ilya Lavrenov <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: Wallas Santos <[email protected]>
Signed-off-by: jiang1.li <[email protected]>
Signed-off-by: yan ma <[email protected]>
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: Maxime Fournioux <[email protected]>
Signed-off-by: Ye Qi <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: Kunshang Ji <[email protected]>
Signed-off-by: Kuntai Du <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Ren MinMin <[email protected]>
Signed-off-by: Travis Johnson <[email protected]>
Signed-off-by: Fred Reiss <[email protected]>
Signed-off-by: Sungjae Lee <[email protected]>
Signed-off-by: shaochangxu.scx <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: Rafael Vasquez <[email protected]>
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Yida Wu <[email protected]>
Signed-off-by: Chenguang Li <[email protected]>
Signed-off-by: Alex-Brooks <[email protected]>
Signed-off-by: Shanshan Shen <[email protected]>
Signed-off-by: elijah <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>
Co-authored-by: YiSheng5 <[email protected]>
Co-authored-by: Zhonghua Deng <[email protected]>
Co-authored-by: Liangfu Chen <[email protected]>
Co-authored-by: XiaobingZhang <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Yuan <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: jiangjiadi <[email protected]>
Co-authored-by: jiadi.jjd <[email protected]>
Co-authored-by: sroy745 <[email protected]>
Co-authored-by: Jie Fu (傅杰) <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: WangErXiao <[email protected]>
Co-authored-by: Nishidha <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Wallas Henrique <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: Yan Ma <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: rasmith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: Maxime Fournioux <[email protected]>
Co-authored-by: Guspan Tanadi <[email protected]>
Co-authored-by: Ye (Charlotte) Qi <[email protected]>
Co-authored-by: yeq <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
Co-authored-by: Charles Frye <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: cennn <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: minmin <[email protected]>
Co-authored-by: Ren MinMin <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Fred Reiss <[email protected]>
Co-authored-by: Sungjae Lee <[email protected]>
Co-authored-by: shaochangxu <[email protected]>
Co-authored-by: shaochangxu.scx <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: sixgod <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Rafael Vasquez <[email protected]>
Co-authored-by: Akshat Tripathi <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Avshalom Manevich <[email protected]>
Co-authored-by: Yangcheng Li <[email protected]>
Co-authored-by: Siyuan Li <[email protected]>
Co-authored-by: Concurrensee <[email protected]>
Co-authored-by: Chenguang Li <[email protected]>
Co-authored-by: Alex Brooks <[email protected]>
Co-authored-by: Shanshan Shen <[email protected]>
Co-authored-by: elijah <[email protected]>
hongxiayang pushed a commit to ROCm/vllm that referenced this pull request Jan 15, 2025: [MFM-20250115] Merge from ROCm/main to llama_fp8 ( #360 ) … d9385b4
* [Misc] Move weights mapper ( vllm-project#11443 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Bugfix] Fix issues in CPU build Dockerfile. Fixes vllm-project#9182 ( vllm-project#11435 )
Signed-off-by: Yuan Tang <[email protected]>
* [Model] Automatic conversion of classification and reward models ( vllm-project#11469 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] Unify VLLM_ENABLE_V1_MULTIPROCESSING handling in RayExecutor ( vllm-project#11472 )
* [Misc] Update disaggregation benchmark scripts and test logs ( vllm-project#11456 )
Signed-off-by: Jiaxin Shan <[email protected]>
* [Frontend] Enable decord to load video from base64 ( vllm-project#11492 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Improve GitHub links ( vllm-project#11491 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] Move some multimodal utils to modality-specific modules ( vllm-project#11494 )
Signed-off-by: DarkLight1337 <[email protected]>
* Mypy checking for vllm/compilation ( vllm-project#11496 )
Signed-off-by: lucast2021 <[email protected]>
Co-authored-by: lucast2021 <[email protected]>
* [Misc][LoRA] Fix LoRA weight mapper ( vllm-project#11495 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Doc] Add `QVQ` and `QwQ` to the list of supported models ( vllm-project#11509 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [V1] Adding min tokens/repetition/presence/frequence penalties to V1 sampler ( vllm-project#10681 )
Signed-off-by: Sourashis Roy <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
* [Model] Modify MolmoForCausalLM MLP ( vllm-project#11510 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Misc] Add placeholder module ( vllm-project#11501 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Add video example to openai client for multimodal ( vllm-project#11521 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [1/N] API Server (Remove Proxy) ( vllm-project#11529 )
* [Model] [Quantization] Support deepseek_v3 w8a8 fp8 block-wise quantization ( vllm-project#11523 )
Signed-off-by: mgoin <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: HandH1998 <[email protected]>
* [2/N] API Server: Avoid ulimit footgun ( vllm-project#11530 )
* Deepseek v3 ( vllm-project#11502 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]>
Co-authored-by: robertgshaw2-neuralmagic <[email protected]>
* [Docs] Document Deepseek V3 support ( vllm-project#11535 )
Signed-off-by: simon-mo <[email protected]>
* Update openai_compatible_server.md ( vllm-project#11536 )
Co-authored-by: Simon Mo <[email protected]>
* [V1] Use FlashInfer Sampling Kernel for Top-P & Top-K Sampling ( vllm-project#11394 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [V1] Fix yapf ( vllm-project#11538 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [CI] Fix broken CI ( vllm-project#11543 )
* [misc] fix typing ( vllm-project#11540 )
Signed-off-by: youkaichao <[email protected]>
* [V1][3/N] API Server: Reduce Task Switching + Handle Abort Properly ( vllm-project#11534 )
* [BugFix] Fix quantization for all other methods ( vllm-project#11547 )
* [Platform] Move model arch check to platform ( vllm-project#11503 )
Signed-off-by: Mengqing Cao <[email protected]>
* Update deploying_with_k8s.md with AMD ROCm GPU example ( vllm-project#11465 )
Signed-off-by: Alex He <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Bugfix] Fix TeleChat2ForCausalLM weights mapper ( vllm-project#11546 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Misc] Abstract the logic for reading and writing media content ( vllm-project#11527 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Add xgrammar in doc ( vllm-project#11549 )
Signed-off-by: ccjincong <[email protected]>
* [VLM] Support caching in merged multi-modal processor ( vllm-project#11396 )
Signed-off-by: DarkLight1337 <[email protected]>
* [MODEL] LoRA support for Jamba model ( vllm-project#11209 )
Signed-off-by: Erez Schwartz <[email protected]>
* [Misc]Add BNB quantization for MolmoForCausalLM ( vllm-project#11551 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Misc] Improve BNB loader to handle mixture of sharded and merged weights with same suffix ( vllm-project#11566 )
Signed-off-by: Isotr0py <[email protected]>
* [Bugfix] Fix for ROCM compressed tensor support ( vllm-project#11561 )
* [Doc] Update mllama example based on official doc ( vllm-project#11567 )
Signed-off-by: Chen Zhang <[email protected]>
* [V1] [4/N] API Server: ZMQ/MP Utilities ( vllm-project#11541 )
* [Bugfix] Last token measurement fix ( vllm-project#11376 )
Signed-off-by: rajveerb <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Model] Support InternLM2 Reward models ( vllm-project#11571 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Model] Remove hardcoded image tokens ids from Pixtral ( vllm-project#11582 )
Signed-off-by: Roger Wang <[email protected]>
* [Hardware][AMD]: Replace HIPCC version with more precise ROCm version ( vllm-project#11515 )
Signed-off-by: hjwei <[email protected]>
* [V1][Minor] Set pin_memory=False for token_ids_cpu tensor ( vllm-project#11581 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Doc] Minor documentation fixes ( vllm-project#11580 )
Signed-off-by: DarkLight1337 <[email protected]>
* [bugfix] interleaving sliding window for cohere2 model ( vllm-project#11583 )
Signed-off-by: youkaichao <[email protected]>
* [V1] [5/N] API Server: unify `Detokenizer` and `EngineCore` input ( vllm-project#11545 )
Signed-off-by: [email protected] <[email protected]>
* [Doc] Convert list tables to MyST ( vllm-project#11594 )
Signed-off-by: DarkLight1337 <[email protected]>
* [v1][bugfix] fix cudagraph with inplace buffer assignment ( vllm-project#11596 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] KV cache transfer connector registry ( vllm-project#11481 )
Signed-off-by: KuntaiDu <[email protected]>
* Remove print statement in DeepseekScalingRotaryEmbedding ( vllm-project#11604 )
* [v1] fix compilation cache ( vllm-project#11598 )
Signed-off-by: youkaichao <[email protected]>
* [Docker] bump up neuron sdk v2.21 ( vllm-project#11593 )
Signed-off-by: Liangfu Chen <[email protected]>
* [Build][Kernel] Update CUTLASS to v3.6.0 ( vllm-project#11607 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [CI/Build][CPU] Fix CPU CI by lazy importing triton FP8 kernels ( vllm-project#11618 )
Signed-off-by: jiang1.li <[email protected]>
* [platforms] enable platform plugins ( vllm-project#11602 )
Signed-off-by: youkaichao <[email protected]>
* [VLM] Abstract out multi-modal data parsing in merged processor ( vllm-project#11620 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] [6/N] API Server: Better Shutdown ( vllm-project#11586 )
* [Bugfix] Validate and concatenate image embeddings in MiniCPMVBaseModel ( vllm-project#11631 )
* [benchmark] Remove dependency for H100 benchmark step ( vllm-project#11572 )
* [Model][LoRA]LoRA support added for MolmoForCausalLM ( vllm-project#11439 )
Signed-off-by: Matthias Vogler <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Co-authored-by: Matthias Vogler <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
* [Bugfix] Fix OpenAI parallel sampling when using xgrammar ( vllm-project#11637 )
Signed-off-by: mgoin <[email protected]>
* [Misc][LoRA] Support Rank Stabilized LoRA (RSLoRA) ( vllm-project#6909 )
Signed-off-by: Jee Jee Li <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
* [Bugfix] Move the _touch(computed_blocks) call in the allocate_slots method to after the check for allocating new blocks. ( vllm-project#11565 )
* [V1] Simpify vision block hash for prefix caching by removing offset from hash ( vllm-project#11646 )
* [V1][VLM] V1 support for selected single-image models. ( vllm-project#11632 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [Benchmark] Add benchmark script for CPU offloading ( vllm-project#11533 )
Signed-off-by: ApostaC <[email protected]>
Co-authored-by: KuntaiDu <[email protected]>
* [Bugfix][Refactor] Unify model management in frontend ( vllm-project#11660 )
Signed-off-by: Joe Runde <[email protected]>
* [VLM] Add max-count checking in data parser for single image models ( vllm-project#11661 )
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Misc] Optimize Qwen2-VL LoRA test ( vllm-project#11663 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Misc] Replace space with - in the file names ( vllm-project#11667 )
Signed-off-by: Lu Fang <[email protected]>
* [Doc] Fix typo ( vllm-project#11666 )
Signed-off-by: Kazuhiro Serizawa <[email protected]>
* [V1] Implement Cascade Attention ( vllm-project#11635 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [VLM] Move supported limits and max tokens to merged multi-modal processor ( vllm-project#11669 )
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [VLM][Bugfix] Multi-modal processor compatible with V1 multi-input ( vllm-project#11674 )
Signed-off-by: DarkLight1337 <[email protected]>
* [mypy] Pass type checking in vllm/inputs ( vllm-project#11680 )
Signed-off-by: Tobias Pitters <[email protected]>
* [VLM] Merged multi-modal processor for LLaVA-NeXT ( vllm-project#11682 )
Signed-off-by: DarkLight1337 <[email protected]>
* According to vllm.EngineArgs, the name should be distributed_executor_backend ( vllm-project#11689 )
* [Bugfix] Free cross attention block table for preempted-for-recompute sequence group. ( vllm-project#10013 )
Signed-off-by: Kathy Yu <[email protected]>
* [V1][Minor] Optimize token_ids_cpu copy ( vllm-project#11692 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Bugfix] Change kv scaling factor by param json on nvidia gpu ( vllm-project#11688 )
Signed-off-by: bjmsong <[email protected]>
Co-authored-by: bjmsong <[email protected]>
* Resolve race conditions in Marlin kernel ( vllm-project#11493 )
Signed-off-by: wchen61 <[email protected]>
* [Misc] Minimum requirements for SageMaker compatibility ( vllm-project#11576 )
* Update default max_num_batch_tokens for chunked prefill ( vllm-project#11694 )
* [Bugfix] Check chain_speculative_sampling before calling it ( vllm-project#11673 )
Signed-off-by: Lu Fang <[email protected]>
* [perf-benchmark] Fix dependency for steps in benchmark pipeline ( vllm-project#11710 )
* [Model] Whisper model implementation ( vllm-project#11280 )
Co-authored-by: Aurick Qiao <[email protected]>
* [V1] Simplify Shutdown ( vllm-project#11659 )
* [Bugfix] Fix ColumnParallelLinearWithLoRA slice ( vllm-project#11708 )
Signed-off-by: ZincCat <[email protected]>
* [V1] Improve TP>1 Error Handling + Stack Trace ( vllm-project#11721 )
Co-authored-by: Tyler Michael Smith <[email protected]>
* [Misc]Add BNB quantization for Qwen2VL ( vllm-project#11719 )
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* Update requirements-tpu.txt to support python 3.9 and 3.11 ( vllm-project#11695 )
Signed-off-by: mgoin <[email protected]>
* [V1] Chore: cruft removal ( vllm-project#11724 )
* [V1] log GPU blocks num for MultiprocExecutor ( vllm-project#11656 )
* Update tool_calling.md ( vllm-project#11701 )
* Update bnb.md with example for OpenAI ( vllm-project#11718 )
* [V1] Add `RayExecutor` support for `AsyncLLM` (api server) ( vllm-project#11712 )
* [V1] Add kv cache utils tests. ( vllm-project#11513 )
Signed-off-by: xcnick <[email protected]>
* [Core][Bugfix] Use correct device to initialize GPU data during CUDA-graph-capture ( vllm-project#11233 )
Signed-off-by: Yan Burman <[email protected]>
Signed-off-by: Ido Asraff <[email protected]>
* [VLM] Merged multi-modal processors for LLaVA-NeXT-Video and LLaVA-OneVision ( vllm-project#11717 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix precision error in LLaVA-NeXT ( vllm-project#11735 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model] Remove unnecessary weight initialization logic ( vllm-project#11736 )
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [Bugfix][V1] Fix test_kv_cache_utils.py ( vllm-project#11738 )
Signed-off-by: Jee Jee Li <[email protected]>
* [MISC] Replace c10::optional with std::optional ( vllm-project#11730 )
Signed-off-by: Lu Fang <[email protected]>
* [distributed] remove pynccl's redundant stream ( vllm-project#11744 )
* fix: [doc] fix typo ( vllm-project#11751 )
Co-authored-by: Lancer <[email protected]>
* [Frontend] Improve `StreamingResponse` Exception Handling ( vllm-project#11752 )
* [distributed] remove pynccl's redundant change_state ( vllm-project#11749 )
* [Doc] [1/N] Reorganize Getting Started section ( vllm-project#11645 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Remove block size constraint ( vllm-project#11723 )
* [V1] Add BlockTable class ( vllm-project#11693 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Misc] Fix typo for valid_tool_parses ( vllm-project#11753 )
Signed-off-by: Rui Qiao <[email protected]>
* [V1] Refactor get_executor_cls ( vllm-project#11754 )
* [mypy] Forward pass function type hints in lora ( vllm-project#11740 )
Signed-off-by: lucast2021 <[email protected]>
Co-authored-by: lucast2021 <[email protected]>
* k8s-config: Update the secret to use stringData ( vllm-project#11679 )
Signed-off-by: Suraj Deshmukh <[email protected]>
* [VLM] Separate out profiling-related logic ( vllm-project#11746 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc][2/N] Reorganize Models and Usage sections ( vllm-project#11755 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix max image size for LLaVA-Onevision ( vllm-project#11769 )
Signed-off-by: Roger Wang <[email protected]>
* [doc] explain how to add interleaving sliding window support ( vllm-project#11771 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix][V1] Fix molmo text-only inputs ( vllm-project#11676 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Kernel] Move attn_type to Attention.__init__() ( vllm-project#11690 )
Signed-off-by: Chen Zhang <[email protected]>
* format
* [V1] Extend beyond image modality and support mixed-modality inference with Llava-OneVision ( vllm-project#11685 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>
* deepseek overflow fix ( #349 )
* [Bugfix] Fix LLaVA-NeXT feature size precision error (for real) ( vllm-project#11772 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model] Future-proof Qwen2-Audio multi-modal processor ( vllm-project#11776 )
Signed-off-by: DarkLight1337 <[email protected]>
* [XPU] Make pp group initilized for pipeline-parallelism ( vllm-project#11648 )
Signed-off-by: yisheng <[email protected]>
* [Doc][3/N] Reorganize Serving section ( vllm-project#11766 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Kernel][LoRA]Punica prefill kernels fusion ( vllm-project#11234 )
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: Abatom <[email protected]>
Co-authored-by: Zhonghua Deng <[email protected]>
* [Bugfix] Update attention interface in `Whisper` ( vllm-project#11784 )
Signed-off-by: Roger Wang <[email protected]>
* [CI] Fix neuron CI and run offline tests ( vllm-project#11779 )
Signed-off-by: Liangfu Chen <[email protected]>
* fix init error for MessageQueue when n_local_reader is zero ( vllm-project#11768 )
* [Doc] Create a vulnerability management team ( vllm-project#9925 )
Signed-off-by: Russell Bryant <[email protected]>
* [CI][CPU] adding build number to docker image name ( vllm-project#11788 )
Signed-off-by: Yuan Zhou <[email protected]>
* [V1][Doc] Update V1 support for `LLaVa-NeXT-Video` ( vllm-project#11798 )
Signed-off-by: Roger Wang <[email protected]>
* [Bugfix] Comprehensively test and fix LLaVA-NeXT feature size calculation ( vllm-project#11800 )
Signed-off-by: DarkLight1337 <[email protected]>
* [doc] add doc to explain how to use uv ( vllm-project#11773 )
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [V1] Support audio language models on V1 ( vllm-project#11733 )
Signed-off-by: Roger Wang <[email protected]>
* [doc] update how pip can install nightly wheels ( vllm-project#11806 )
Signed-off-by: youkaichao <[email protected]>
* [Doc] Add note to `gte-Qwen2` models ( vllm-project#11808 )
Signed-off-by: DarkLight1337 <[email protected]>
* [optimization] remove python function call for custom op ( vllm-project#11750 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] update the prefix for qwen2 ( vllm-project#11795 )
Co-authored-by: jiadi.jjd <[email protected]>
* [Doc]Add documentation for using EAGLE in vLLM ( vllm-project#11417 )
Signed-off-by: Sourashis Roy <[email protected]>
* [Bugfix] Significant performance drop on CPUs with --num-scheduler-steps > 1 ( vllm-project#11794 )
* [Doc] Group examples into categories ( vllm-project#11782 )
Signed-off-by: Harry Mellor <[email protected]>
* [Bugfix] Fix image input for Pixtral-HF ( vllm-project#11741 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] sort torch profiler table by kernel timing ( vllm-project#11813 )
* Remove the duplicate imports of MultiModalKwargs and PlaceholderRange… ( vllm-project#11824 )
* Fixed docker build for ppc64le ( vllm-project#11518 )
Signed-off-by: Nishidha Panpaliya <[email protected]>
* [OpenVINO] Fixed Docker.openvino build ( vllm-project#11732 )
Signed-off-by: Ilya Lavrenov <[email protected]>
* [Bugfix] Add checks for LoRA and CPU offload ( vllm-project#11810 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Docs] reorganize sponsorship page ( vllm-project#11639 )
Signed-off-by: simon-mo <[email protected]>
* [Bug] Fix pickling of `ModelConfig` when RunAI Model Streamer is used ( vllm-project#11825 )
Signed-off-by: DarkLight1337 <[email protected]>
* [misc] improve memory profiling ( vllm-project#11809 )
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [doc] update wheels url ( vllm-project#11830 )
Signed-off-by: youkaichao <[email protected]>
* [Docs] Update sponsor name: 'Novita' to 'Novita AI' ( vllm-project#11833 )
* [Hardware][Apple] Native support for macOS Apple Silicon ( vllm-project#11696 )
Signed-off-by: Wallas Santos <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [torch.compile] consider relevant code in compilation cache ( vllm-project#11614 )
Signed-off-by: youkaichao <[email protected]>
* [VLM] Reorganize profiling/processing-related code ( vllm-project#11812 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Move examples into categories ( vllm-project#11840 )
Signed-off-by: Harry Mellor <[email protected]>
* [Doc][4/N] Reorganize API Reference ( vllm-project#11843 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI/Build][Bugfix] Fix CPU CI image clean up ( vllm-project#11836 )
Signed-off-by: jiang1.li <[email protected]>
* [Bugfix][XPU] fix silu_and_mul ( vllm-project#11823 )
Signed-off-by: yan ma <[email protected]>
* [Misc] Move some model utils into vision file ( vllm-project#11848 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Expand Multimodal API Reference ( vllm-project#11852 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc]add some explanations for BlockHashType ( vllm-project#11847 )
* [TPU][Quantization] TPU `W8A8` ( vllm-project#11785 )
Co-authored-by: Woosuk Kwon <[email protected]>
* [Kernel][Triton][AMD] Use block size heuristic for avg 2.8x speedup for int8 models ( vllm-project#11698 )
Signed-off-by: Randall Smith <[email protected]>
* [Docs] Add Google Cloud Meetup ( vllm-project#11864 )
* Revert nccl changes ( #351 )
* Revert "[distributed] remove pynccl's redundant change_state ( vllm-project#11749 )"
This reverts commit 9e764e7 .
* Revert "[distributed] remove pynccl's redundant stream ( vllm-project#11744 )"
This reverts commit 635b897 .
* [CI] Turn on basic correctness tests for V1 ( vllm-project#10864 )
* treat do_lower_case in the same way as the sentence-transformers library ( vllm-project#11815 )
Signed-off-by: Max de Bayser <[email protected]>
* [Doc] Recommend uv and python 3.12 for quickstart guide ( vllm-project#11849 )
Signed-off-by: mgoin <[email protected]>
* [Misc] Move `print_*_once` from utils to logger ( vllm-project#11298 )
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Maxime Fournioux <[email protected]>
Co-authored-by: Maxime Fournioux <[email protected]>
* [Doc] Intended links Python multiprocessing library ( vllm-project#11878 )
* [perf]fix current stream ( vllm-project#11870 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Override dunder methods of placeholder modules ( vllm-project#11882 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] fix beam search input errors and latency benchmark script ( vllm-project#11875 )
Signed-off-by: Ye Qi <[email protected]>
Co-authored-by: yeq <[email protected]>
* [Doc] Add model development API Reference ( vllm-project#11884 )
Signed-off-by: DarkLight1337 <[email protected]>
* [platform] Allow platform specify attention backend ( vllm-project#11609 )
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
* [ci]try to fix flaky multi-step tests ( vllm-project#11894 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Provide correct Pixtral-HF chat template ( vllm-project#11891 )
Signed-off-by: DarkLight1337 <[email protected]>
* fp8 support ( #352 )
Co-authored-by: Yida Wu <[email protected]>
* [Docs] Add Modal to deployment frameworks ( vllm-project#11907 )
* [Doc][5/N] Move Community and API Reference to the bottom ( vllm-project#11896 )
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
* [VLM] Enable tokenized inputs for merged multi-modal processor ( vllm-project#11900 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Show default pooling method in a table ( vllm-project#11904 )
Signed-off-by: DarkLight1337 <[email protected]>
* [torch.compile] Hide KV cache behind torch.compile boundary ( vllm-project#11677 )
Signed-off-by: Chen Zhang <[email protected]>
* [Bugfix] Validate lora adapters to avoid crashing server ( vllm-project#11727 )
Signed-off-by: Joe Runde <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
* [BUGFIX] Fix `UnspecifiedPlatform` package name ( vllm-project#11916 )
Signed-off-by: Kunshang Ji <[email protected]>
* [ci] fix gh200 tests ( vllm-project#11919 )
Signed-off-by: youkaichao <[email protected]>
* [misc] remove python function call for custom activation op ( vllm-project#11885 )
Co-authored-by: youkaichao <[email protected]>
* [platform] support pytorch custom op pluggable ( vllm-project#11328 )
Signed-off-by: wangxiyuan <[email protected]>
* Replace "online inference" with "online serving" ( vllm-project#11923 )
Signed-off-by: Harry Mellor <[email protected]>
* [ci] Fix sampler tests ( vllm-project#11922 )
Signed-off-by: youkaichao <[email protected]>
* [Doc] [1/N] Initial guide for merged multi-modal processor ( vllm-project#11925 )
Signed-off-by: DarkLight1337 <[email protected]>
* [platform] support custom torch.compile backend key ( vllm-project#11318 )
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: youkaichao <[email protected]>
* [Doc] Rename offline inference examples ( vllm-project#11927 )
Signed-off-by: Harry Mellor <[email protected]>
* [Docs] Fix docstring in `get_ip` function ( vllm-project#11932 )
Signed-off-by: Kuntai Du <[email protected]>
* Doc fix in `benchmark_long_document_qa_throughput.py` ( vllm-project#11933 )
Signed-off-by: Kuntai Du <[email protected]>
* [Hardware][CPU] Support MOE models on x86 CPU ( vllm-project#11831 )
Signed-off-by: jiang1.li <[email protected]>
* [Misc] Clean up debug code in Deepseek-V3 ( vllm-project#11930 )
Signed-off-by: Isotr0py <[email protected]>
* [Misc] Update benchmark_prefix_caching.py fixed example usage ( vllm-project#11920 )
Signed-off-by: Ren MinMin <[email protected]>
Co-authored-by: Ren MinMin <[email protected]>
* [Bugfix] Check that number of images matches number of <|image|> tokens with mllama ( vllm-project#11939 )
Signed-off-by: Travis Johnson <[email protected]>
* [mypy] Fix mypy warnings in api_server.py ( vllm-project#11941 )
Signed-off-by: Fred Reiss <[email protected]>
* [ci] fix broken distributed-tests-4-gpus ( vllm-project#11937 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix][SpecDecode] Adjust Eagle model architecture to align with intended design ( vllm-project#11672 )
Signed-off-by: Sungjae Lee <[email protected]>
* [Bugfix] fused_experts_impl wrong compute type for float32 ( vllm-project#11921 )
Signed-off-by: shaochangxu.scx <[email protected]>
Co-authored-by: shaochangxu.scx <[email protected]>
* [CI/Build] Move model-specific multi-modal processing tests ( vllm-project#11934 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Basic guide for writing unit tests for new models ( vllm-project#11951 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix RobertaModel loading ( vllm-project#11940 )
Signed-off-by: NickLucche <[email protected]>
* [Model] Add cogagent model support vLLM ( vllm-project#11742 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [V1] Avoid sending text prompt to core engine ( vllm-project#11963 )
Signed-off-by: Roger Wang <[email protected]>
* [CI/Build] Add markdown linter ( vllm-project#11857 )
Signed-off-by: Rafael Vasquez <[email protected]>
* [Model] Initialize support for Deepseek-VL2 models ( vllm-project#11578 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Hardware][CPU] Multi-LoRA implementation for the CPU backend ( vllm-project#11100 )
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [Hardware][TPU] workaround fix for MoE on TPU ( vllm-project#11764 )
* [V1][Core][1/n] Logging and Metrics ( vllm-project#11962 )
Signed-off-by: [email protected] <[email protected]>
* [Model] Support GGUF models newly added in `transformers` 4.46.0 ( vllm-project#9685 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [V1] [2/n] Logging and Metrics - `OutputProcessor` Abstraction ( vllm-project#11973 )
Signed-off-by: [email protected] <[email protected]>
* [MISC] fix typo in kv transfer send recv test ( vllm-project#11983 )
* [Bug] Fix usage of `.transpose()` and `.view()` consecutively. ( vllm-project#11979 )
* [CI][Spec Decode] fix: broken test for EAGLE model ( vllm-project#11972 )
Signed-off-by: Sungjae Lee <[email protected]>
* [Misc] Fix Deepseek V2 fp8 kv-scale remapping ( vllm-project#11947 )
Signed-off-by: Yida Wu <[email protected]>
* [Misc]Minor Changes about Worker ( vllm-project#11555 )
Signed-off-by: Chenguang Li <[email protected]>
* [platform] add ray_device_key ( vllm-project#11948 )
Signed-off-by: youkaichao <[email protected]>
* Fix Max Token ID for Qwen-VL-Chat ( vllm-project#11980 )
Signed-off-by: Alex-Brooks <[email protected]>
* [Kernel] unified_attention for Attention.forward ( vllm-project#11967 )
Signed-off-by: Chen Zhang <[email protected]>
* [Doc][V1] Update model implementation guide for V1 support ( vllm-project#11998 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Doc] Organise installation documentation into categories and tabs ( vllm-project#11935 )
Signed-off-by: Harry Mellor <[email protected]>
* [platform] add device_control env var ( vllm-project#12009 )
Signed-off-by: youkaichao <[email protected]>
* [Platform] Move get_punica_wrapper() function to Platform ( vllm-project#11516 )
Signed-off-by: Shanshan Shen <[email protected]>
* bugfix: Fix signature mismatch in benchmark's `get_tokenizer` function ( vllm-project#11982 )
Signed-off-by: elijah <[email protected]>
* Using list
* Revert "[misc] improve memory profiling ( vllm-project#11809 )"
This reverts commit 889e662 .
* Multi-lingual P3L ( #356 )
* Commiting the *multilingual* P3L test.
* Created a *multi-lingual* P3L test.
* Making ruff happy.
* .
* Added a reference to the language-scripture Confluence table.
* Typo fixing.
* Harmonizing naming.
* Fixing comments in the header.
---------
Co-authored-by: Alexei V. Ivanov <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
* Trying to make scales work with compileable attention
* Docs lint
* linter formatting bug fixes
* inherit config file updates under fused_moe from main branch.
* match tests for the MOE layers with main.
---------
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Jiaxin Shan <[email protected]>
Signed-off-by: lucast2021 <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Sourashis Roy <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: Alex He <[email protected]>
Signed-off-by: ccjincong <[email protected]>
Signed-off-by: Erez Schwartz <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: rajveerb <[email protected]>
Signed-off-by: hjwei <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: KuntaiDu <[email protected]>
Signed-off-by: Liangfu Chen <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: jiang1.li <[email protected]>
Signed-off-by: Matthias Vogler <[email protected]>
Signed-off-by: ApostaC <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: Lu Fang <[email protected]>
Signed-off-by: Kazuhiro Serizawa <[email protected]>
Signed-off-by: Tobias Pitters <[email protected]>
Signed-off-by: Kathy Yu <[email protected]>
Signed-off-by: bjmsong <[email protected]>
Signed-off-by: wchen61 <[email protected]>
Signed-off-by: ZincCat <[email protected]>
Signed-off-by: xcnick <[email protected]>
Signed-off-by: Yan Burman <[email protected]>
Signed-off-by: Ido Asraff <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: Suraj Deshmukh <[email protected]>
Signed-off-by: yisheng <[email protected]>
Signed-off-by: Abatom <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Yuan Zhou <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Nishidha Panpaliya <[email protected]>
Signed-off-by: Ilya Lavrenov <[email protected]>
Signed-off-by: Wallas Santos <[email protected]>
Signed-off-by: yan ma <[email protected]>
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Maxime Fournioux <[email protected]>
Signed-off-by: Ye Qi <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Kunshang Ji <[email protected]>
Signed-off-by: Kuntai Du <[email protected]>
Signed-off-by: Ren MinMin <[email protected]>
Signed-off-by: Travis Johnson <[email protected]>
Signed-off-by: Fred Reiss <[email protected]>
Signed-off-by: Sungjae Lee <[email protected]>
Signed-off-by: shaochangxu.scx <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: Rafael Vasquez <[email protected]>
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: Yida Wu <[email protected]>
Signed-off-by: Chenguang Li <[email protected]>
Signed-off-by: Alex-Brooks <[email protected]>
Signed-off-by: Shanshan Shen <[email protected]>
Signed-off-by: elijah <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Jiaxin Shan <[email protected]>
Co-authored-by: Lucas Tucker <[email protected]>
Co-authored-by: lucast2021 <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: sroy745 <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: HandH1998 <[email protected]>
Co-authored-by: robertgshaw2-neuralmagic <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
Co-authored-by: AlexHe99 <[email protected]>
Co-authored-by: Chen1022 <[email protected]>
Co-authored-by: ErezSC42 <[email protected]>
Co-authored-by: Selali <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: Rajveer Bachkaniwala <[email protected]>
Co-authored-by: hj-wei <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: Liangfu Chen <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: whyiug <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Matthias Vogler <[email protected]>
Co-authored-by: Matthias Vogler <[email protected]>
Co-authored-by: John Giorgi <[email protected]>
Co-authored-by: sakunkun <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Yihua Cheng <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Lu Fang <[email protected]>
Co-authored-by: Kazuhiro Serizawa <[email protected]>
Co-authored-by: Tobias Pitters <[email protected]>
Co-authored-by: Chunyang Wen <[email protected]>
Co-authored-by: Kathy Yu <[email protected]>
Co-authored-by: bjmsong <[email protected]>
Co-authored-by: bjmsong <[email protected]>
Co-authored-by: wchen61 <[email protected]>
Co-authored-by: Nathan Azrak <[email protected]>
Co-authored-by: Sachin Varghese <[email protected]>
Co-authored-by: Aurick Qiao <[email protected]>
Co-authored-by: Aurick Qiao <[email protected]>
Co-authored-by: ZincCat <[email protected]>
Co-authored-by: WangErXiao <[email protected]>
Co-authored-by: Hust_YangXian <[email protected]>
Co-authored-by: Alberto Ferrer <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: xcnick <[email protected]>
Co-authored-by: Yan Burman <[email protected]>
Co-authored-by: cennn <[email protected]>
Co-authored-by: Lancer <[email protected]>
Co-authored-by: Lancer <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Suraj Deshmukh <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Concurrensee <[email protected]>
Co-authored-by: YiSheng5 <[email protected]>
Co-authored-by: Zhonghua Deng <[email protected]>
Co-authored-by: XiaobingZhang <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Yuan <[email protected]>
Co-authored-by: jiangjiadi <[email protected]>
Co-authored-by: jiadi.jjd <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Jie Fu (傅杰) <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: Nishidha <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Wallas Henrique <[email protected]>
Co-authored-by: Yan Ma <[email protected]>
Co-authored-by: rasmith <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: Maxime Fournioux <[email protected]>
Co-authored-by: Guspan Tanadi <[email protected]>
Co-authored-by: Ye (Charlotte) Qi <[email protected]>
Co-authored-by: yeq <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
Co-authored-by: Yida Wu <[email protected]>
Co-authored-by: Charles Frye <[email protected]>
Co-authored-by: minmin <[email protected]>
Co-authored-by: Ren MinMin <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Fred Reiss <[email protected]>
Co-authored-by: Sungjae Lee <[email protected]>
Co-authored-by: shaochangxu <[email protected]>
Co-authored-by: shaochangxu.scx <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: sixgod <[email protected]>
Co-authored-by: Rafael Vasquez <[email protected]>
Co-authored-by: Akshat Tripathi <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Avshalom Manevich <[email protected]>
Co-authored-by: Yangcheng Li <[email protected]>
Co-authored-by: Siyuan Li <[email protected]>
Co-authored-by: Chenguang Li <[email protected]>
Co-authored-by: Alex Brooks <[email protected]>
Co-authored-by: Shanshan Shen <[email protected]>
Co-authored-by: elijah <[email protected]>
Co-authored-by: Alexei-V-Ivanov-AMD <[email protected]>
Co-authored-by: Alexei V. Ivanov <[email protected]>
Co-authored-by: vllmellm <[email protected]>

rasmith pushed a commit to rasmith/vllm that referenced this pull request Jan 30, 2025: [perf]fix current stream ( vllm-project#11870 ) … 9555dd4
Signed-off-by: youkaichao <[email protected]>
Isotr0py pushed a commit to Isotr0py/vllm that referenced this pull request Feb 2, 2025: [perf]fix current stream ( vllm-project#11870 ) … 2ad182f
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
mzusman pushed a commit to mzusman/vllm that referenced this pull request Mar 12, 2025: [perf]fix current stream ( vllm-project#11870 ) … 9a981e1
Signed-off-by: youkaichao <[email protected]>
|
2025-09-07 17:47:12
|
526de822d501c792b051c864ba873a836d78d5bf
|
https://github.com/vllm-project/vllm/pull/11698
| false | true | true | true |
PERF: latency, latency, TPOT | SERVING: Serving, serving, Serving | TEST: test, test, test
|
Contributor rasmith commented Jan 3, 2025 • edited by github-actions bot

Use the heuristic from scaled_mm_c3x_sm90_int8_dispatch.cuh:116 to choose the block size for triton_scaled_mm instead of always using 32x32x32, for better performance. This results in an average 2.8x speedup. I ran:

python benchmarks/benchmark_latency.py --dtype bfloat16 --enable-chunked-prefill False --load-format dummy --batch-size BS --num-iters-warmup 2 --num-iters 5 --input-len INPUT_LEN --output-len OUTPUT_LEN --model MODEL

where
BS in [ 1, 16, 64 ]
INPUT_LEN in [ 128, 1024, 2048 ]
OUTPUT_LEN in [ 1, 128, 1024 ]
MODEL in [ "Qwen2-7B-Instruct-quantized.w8a8", "Phi-3-medium-128k-instruct-quantized.w8a8", "Meta-Llama-3.1-8B-Instruct-quantized.w8a8", "Mistral-7B-Instruct-v0.3-quantized.w8a8" ]

to get this number. Here are a few samples for Qwen2-7B-Instruct-quantized.w8a8 with dtype = bfloat16:

batch_size | input_len | output_len | avg_latency_old | avg_latency_new | speedup
1          | 128       | 128        | 1.4206          | 0.9828          | 1.4453
1          | 1024      | 1024       | 11.4586         | 7.8414          | 1.4612
64         | 2048      | 128        | 14.2707         | 4.7842          | 2.9828

I uploaded the full CSV file for all of the models and configs: heuristic_speedups.csv

Change defeault block size for triton_scaled_mm to 128 for 4-5x speedup … 5675c6b
Signed-off-by: Randall Smith <[email protected]>

github-actions bot commented Jan 3, 2025
👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI, which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these: add the ready label to the PR, or enable auto-merge. 🚀

Member mgoin commented Jan 3, 2025 • edited
This is an impressive improvement! Could you also show comparisons for equal input len/output len workloads, preferably with low batch size? This could regress the TPOT for small decode batches. It seems there is no tuning for this kernel at the moment, so maybe this could benefit from a simple heuristic for the extreme problem sizes or a few @triton.autotune configs for the block sizes.
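For context, the sweep described above is simply the quoted benchmark_latency.py command run over a grid of batch sizes, input lengths, output lengths, and models. A minimal driver along those lines could look like the sketch below; the grid values are copied from the PR description, while the script itself is an illustrative sketch (not something shipped with the PR) and assumes a vLLM checkout with benchmarks/ available and the listed quantized models reachable locally or via Hugging Face.

```python
# Illustrative sweep driver: loops the benchmark_latency.py command quoted in
# the PR description over the same batch-size / input-len / output-len / model grid.
import itertools
import subprocess

BATCH_SIZES = [1, 16, 64]
INPUT_LENS = [128, 1024, 2048]
OUTPUT_LENS = [1, 128, 1024]
MODELS = [
    "Qwen2-7B-Instruct-quantized.w8a8",
    "Phi-3-medium-128k-instruct-quantized.w8a8",
    "Meta-Llama-3.1-8B-Instruct-quantized.w8a8",
    "Mistral-7B-Instruct-v0.3-quantized.w8a8",
]

for bs, in_len, out_len, model in itertools.product(
    BATCH_SIZES, INPUT_LENS, OUTPUT_LENS, MODELS
):
    cmd = [
        "python", "benchmarks/benchmark_latency.py",
        "--dtype", "bfloat16",
        "--enable-chunked-prefill", "False",
        "--load-format", "dummy",
        "--batch-size", str(bs),
        "--num-iters-warmup", "2",
        "--num-iters", "5",
        "--input-len", str(in_len),
        "--output-len", str(out_len),
        "--model", model,
    ]
    print("Running:", " ".join(cmd))
    # The benchmark reports average latency on stdout; collect it per config
    # to build a table like the one shown above.
    subprocess.run(cmd, check=True)
```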
rasmith added 2 commits January 3, 2025 16:41
Use heuristic based on cutlass_gemm_sm90_int8_dispatch … a45f569
Signed-off-by: Randall Smith <[email protected]>
Use heuristic to pick block size for better performance across input/output/batch sizes … eb8126e
Signed-off-by: Randall Smith <[email protected]>

rasmith changed the title [Kernel][Triton][AMD] Change default block size for triton_scaled_mm to 128 for 3-5x speedup [Kernel][Triton][AMD] Use block size heuristic for avg 2.8x speedup Jan 7, 2025

Contributor Author rasmith commented Jan 7, 2025
@mgoin When just using 128x128x128 it gave better performance for some, but not all. So, I used the heuristic from here: https://github.com/rasmith/vllm/blob/187e32997cdc20bbed5c21d3cef2609ab8ed9080/csrc/quantization/cutlass_w8a8/scaled_mm_c3x_sm90_int8_dispatch.cuh#L116 . I ran across various models and configs and was able to get improvement for all of the configs I tried. Average speedup is ~2.8x.
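To make the discussion concrete: the change replaces the fixed 32x32x32 tiling with a shape-dependent choice, in the spirit of the CUTLASS int8 dispatch linked above. The sketch below is purely illustrative; the thresholds and tile sizes are invented for the example and are not the values used by the PR. As the review comment suggests, an alternative approach would be a handful of @triton.autotune configurations keyed on the problem shape.

```python
# Illustrative sketch only: the general shape of a block-size heuristic for a
# Triton scaled-mm kernel. The thresholds and tile sizes below are made up for
# the example; the PR derives its choices from the CUTLASS dispatch heuristic
# in scaled_mm_c3x_sm90_int8_dispatch.cuh rather than from this code.
def pick_block_sizes(m: int, n: int, k: int) -> tuple[int, int, int]:
    """Return (BLOCK_M, BLOCK_N, BLOCK_K) for an int8 scaled matmul.

    This toy version keys only on m (the token/batch dimension): small decode
    batches keep a small M tile so little work is wasted, while large prefill
    or large-batch shapes get bigger tiles that amortize memory traffic better
    than a fixed 32x32x32 configuration.
    """
    if m <= 16:
        return 16, 128, 128
    if m <= 64:
        return 64, 128, 128
    return 128, 128, 128


# Example: a 64 x 4096 x 4096 problem would map to 64x128x128 tiles here.
print(pick_block_sizes(64, 4096, 4096))
```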
rasmith changed the title [Kernel][Triton][AMD] Use block size heuristic for avg 2.8x speedup [Kernel][Triton][AMD] Use block size heuristic for avg 2.8x speedup for int8 models Jan 7, 2025

mgoin approved these changes Jan 8, 2025
Member mgoin left a comment: Nice work, I appreciate the benchmarking, this is a clear win!

mgoin added the ready (ONLY add when PR is ready to merge/full CI is needed) label Jan 8, 2025
mgoin enabled auto-merge (squash) January 8, 2025 18:57
mgoin merged commit 526de82 into vllm-project:main Jan 8, 2025 (74 checks passed)

gshtras added a commit to ROCm/vllm that referenced this pull request Jan 14, 2025: Merge pull request #358 from ROCm/upstream_merge_25_01_13 … 5976f48
* [Bugfix][V1] Fix molmo text-only inputs ( vllm-project#11676 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Kernel] Move attn_type to Attention.__init__() ( vllm-project#11690 )
Signed-off-by: Chen Zhang <[email protected]>
* [V1] Extend beyond image modality and support mixed-modality inference with Llava-OneVision ( vllm-project#11685 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix LLaVA-NeXT feature size precision error (for real) ( vllm-project#11772 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model] Future-proof Qwen2-Audio multi-modal processor ( vllm-project#11776 )
Signed-off-by: DarkLight1337 <[email protected]>
* [XPU] Make pp group initilized for pipeline-parallelism ( vllm-project#11648 )
Signed-off-by: yisheng <[email protected]>
* [Doc][3/N] Reorganize Serving section ( vllm-project#11766 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Kernel][LoRA]Punica prefill kernels fusion ( vllm-project#11234 )
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: Abatom <[email protected]>
Co-authored-by: Zhonghua Deng <[email protected]>
* [Bugfix] Update attention interface in `Whisper` ( vllm-project#11784 )
Signed-off-by: Roger Wang <[email protected]>
* [CI] Fix neuron CI and run offline tests ( vllm-project#11779 )
Signed-off-by: Liangfu Chen <[email protected]>
* fix init error for MessageQueue when n_local_reader is zero ( vllm-project#11768 )
* [Doc] Create a vulnerability management team ( vllm-project#9925 )
Signed-off-by: Russell Bryant <[email protected]>
* [CI][CPU] adding build number to docker image name ( vllm-project#11788 )
Signed-off-by: Yuan Zhou <[email protected]>
* [V1][Doc] Update V1 support for `LLaVa-NeXT-Video` ( vllm-project#11798 )
Signed-off-by: Roger Wang <[email protected]>
* [Bugfix] Comprehensively test and fix LLaVA-NeXT feature size calculation ( vllm-project#11800 )
Signed-off-by: DarkLight1337 <[email protected]>
* [doc] add doc to explain how to use uv ( vllm-project#11773 )
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [V1] Support audio language models on V1 ( vllm-project#11733 )
Signed-off-by: Roger Wang <[email protected]>
* [doc] update how pip can install nightly wheels ( vllm-project#11806 )
Signed-off-by: youkaichao <[email protected]>
* [Doc] Add note to `gte-Qwen2` models ( vllm-project#11808 )
Signed-off-by: DarkLight1337 <[email protected]>
* [optimization] remove python function call for custom op ( vllm-project#11750 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] update the prefix for qwen2 ( vllm-project#11795 )
Co-authored-by: jiadi.jjd <[email protected]>
* [Doc]Add documentation for using EAGLE in vLLM ( vllm-project#11417 )
Signed-off-by: Sourashis Roy <[email protected]>
* [Bugfix] Significant performance drop on CPUs with --num-scheduler-steps > 1 ( vllm-project#11794 )
* [Doc] Group examples into categories ( vllm-project#11782 )
Signed-off-by: Harry Mellor <[email protected]>
* [Bugfix] Fix image input for Pixtral-HF ( vllm-project#11741 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] sort torch profiler table by kernel timing ( vllm-project#11813 )
* Remove the duplicate imports of MultiModalKwargs and PlaceholderRange… ( vllm-project#11824 )
* Fixed docker build for ppc64le ( vllm-project#11518 )
Signed-off-by: Nishidha Panpaliya <[email protected]>
* [OpenVINO] Fixed Docker.openvino build ( vllm-project#11732 )
Signed-off-by: Ilya Lavrenov <[email protected]>
* [Bugfix] Add checks for LoRA and CPU offload ( vllm-project#11810 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Docs] reorganize sponsorship page ( vllm-project#11639 )
Signed-off-by: simon-mo <[email protected]>
* [Bug] Fix pickling of `ModelConfig` when RunAI Model Streamer is used ( vllm-project#11825 )
Signed-off-by: DarkLight1337 <[email protected]>
* [misc] improve memory profiling ( vllm-project#11809 )
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [doc] update wheels url ( vllm-project#11830 )
Signed-off-by: youkaichao <[email protected]>
* [Docs] Update sponsor name: 'Novita' to 'Novita AI' ( vllm-project#11833 )
* [Hardware][Apple] Native support for macOS Apple Silicon ( vllm-project#11696 )
Signed-off-by: Wallas Santos <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [torch.compile] consider relevant code in compilation cache ( vllm-project#11614 )
Signed-off-by: youkaichao <[email protected]>
* [VLM] Reorganize profiling/processing-related code ( vllm-project#11812 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Move examples into categories ( vllm-project#11840 )
Signed-off-by: Harry Mellor <[email protected]>
* [Doc][4/N] Reorganize API Reference ( vllm-project#11843 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI/Build][Bugfix] Fix CPU CI image clean up ( vllm-project#11836 )
Signed-off-by: jiang1.li <[email protected]>
* [Bugfix][XPU] fix silu_and_mul ( vllm-project#11823 )
Signed-off-by: yan ma <[email protected]>
* [Misc] Move some model utils into vision file ( vllm-project#11848 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Expand Multimodal API Reference ( vllm-project#11852 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc]add some explanations for BlockHashType ( vllm-project#11847 )
* [TPU][Quantization] TPU `W8A8` ( vllm-project#11785 )
Co-authored-by: Woosuk Kwon <[email protected]>
* [Kernel][Triton][AMD] Use block size heuristic for avg 2.8x speedup for int8 models ( vllm-project#11698 )
Signed-off-by: Randall Smith <[email protected]>
* [Docs] Add Google Cloud Meetup ( vllm-project#11864 )
* [CI] Turn on basic correctness tests for V1 ( vllm-project#10864 )
* treat do_lower_case in the same way as the sentence-transformers library ( vllm-project#11815 )
Signed-off-by: Max de Bayser <[email protected]>
* [Doc] Recommend uv and python 3.12 for quickstart guide ( vllm-project#11849 )
Signed-off-by: mgoin <[email protected]>
* [Misc] Move `print_*_once` from utils to logger ( vllm-project#11298 )
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Maxime Fournioux <[email protected]>
Co-authored-by: Maxime Fournioux <[email protected]>
* [Doc] Intended links Python multiprocessing library ( vllm-project#11878 )
* [perf]fix current stream ( vllm-project#11870 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Override dunder methods of placeholder modules ( vllm-project#11882 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] fix beam search input errors and latency benchmark script ( vllm-project#11875 )
Signed-off-by: Ye Qi <[email protected]>
Co-authored-by: yeq <[email protected]>
* [Doc] Add model development API Reference ( vllm-project#11884 )
Signed-off-by: DarkLight1337 <[email protected]>
* [platform] Allow platform specify attention backend ( vllm-project#11609 )
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
* [ci]try to fix flaky multi-step tests ( vllm-project#11894 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Provide correct Pixtral-HF chat template ( vllm-project#11891 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Docs] Add Modal to deployment frameworks ( vllm-project#11907 )
* [Doc][5/N] Move Community and API Reference to the bottom ( vllm-project#11896 )
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
* [VLM] Enable tokenized inputs for merged multi-modal processor ( vllm-project#11900 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Show default pooling method in a table ( vllm-project#11904 )
Signed-off-by: DarkLight1337 <[email protected]>
* [torch.compile] Hide KV cache behind torch.compile boundary ( vllm-project#11677 )
Signed-off-by: Chen Zhang <[email protected]>
* [Bugfix] Validate lora adapters to avoid crashing server ( vllm-project#11727 )
Signed-off-by: Joe Runde <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
* [BUGFIX] Fix `UnspecifiedPlatform` package name ( vllm-project#11916 )
Signed-off-by: Kunshang Ji <[email protected]>
* [ci] fix gh200 tests ( vllm-project#11919 )
Signed-off-by: youkaichao <[email protected]>
* [misc] remove python function call for custom activation op ( vllm-project#11885 )
Co-authored-by: youkaichao <[email protected]>
* [platform] support pytorch custom op pluggable ( vllm-project#11328 )
Signed-off-by: wangxiyuan <[email protected]>
* Replace "online inference" with "online serving" ( vllm-project#11923 )
Signed-off-by: Harry Mellor <[email protected]>
* [ci] Fix sampler tests ( vllm-project#11922 )
Signed-off-by: youkaichao <[email protected]>
* [Doc] [1/N] Initial guide for merged multi-modal processor ( vllm-project#11925 )
Signed-off-by: DarkLight1337 <[email protected]>
* [platform] support custom torch.compile backend key ( vllm-project#11318 )
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: youkaichao <[email protected]>
* [Doc] Rename offline inference examples ( vllm-project#11927 )
Signed-off-by: Harry Mellor <[email protected]>
* [Docs] Fix docstring in `get_ip` function ( vllm-project#11932 )
Signed-off-by: Kuntai Du <[email protected]>
* Doc fix in `benchmark_long_document_qa_throughput.py` ( vllm-project#11933 )
Signed-off-by: Kuntai Du <[email protected]>
* [Hardware][CPU] Support MOE models on x86 CPU ( vllm-project#11831 )
Signed-off-by: jiang1.li <[email protected]>
* [Misc] Clean up debug code in Deepseek-V3 ( vllm-project#11930 )
Signed-off-by: Isotr0py <[email protected]>
* [Misc] Update benchmark_prefix_caching.py fixed example usage ( vllm-project#11920 )
Signed-off-by: Ren MinMin <[email protected]>
Co-authored-by: Ren MinMin <[email protected]>
* [Bugfix] Check that number of images matches number of <|image|> tokens with mllama ( vllm-project#11939 )
Signed-off-by: Travis Johnson <[email protected]>
* [mypy] Fix mypy warnings in api_server.py ( vllm-project#11941 )
Signed-off-by: Fred Reiss <[email protected]>
* [ci] fix broken distributed-tests-4-gpus ( vllm-project#11937 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix][SpecDecode] Adjust Eagle model architecture to align with intended design ( vllm-project#11672 )
Signed-off-by: Sungjae Lee <[email protected]>
* [Bugfix] fused_experts_impl wrong compute type for float32 ( vllm-project#11921 )
Signed-off-by: shaochangxu.scx <[email protected]>
Co-authored-by: shaochangxu.scx <[email protected]>
* [CI/Build] Move model-specific multi-modal processing tests ( vllm-project#11934 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Basic guide for writing unit tests for new models ( vllm-project#11951 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix RobertaModel loading ( vllm-project#11940 )
Signed-off-by: NickLucche <[email protected]>
* [Model] Add cogagent model support vLLM ( vllm-project#11742 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [V1] Avoid sending text prompt to core engine ( vllm-project#11963 )
Signed-off-by: Roger Wang <[email protected]>
* [CI/Build] Add markdown linter ( vllm-project#11857 )
Signed-off-by: Rafael Vasquez <[email protected]>
* [Model] Initialize support for Deepseek-VL2 models ( vllm-project#11578 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Hardware][CPU] Multi-LoRA implementation for the CPU backend ( vllm-project#11100 )
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [Hardware][TPU] workaround fix for MoE on TPU ( vllm-project#11764 )
* [V1][Core][1/n] Logging and Metrics ( vllm-project#11962 )
Signed-off-by: [email protected] <[email protected]>
* [Model] Support GGUF models newly added in `transformers` 4.46.0 ( vllm-project#9685 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [V1] [2/n] Logging and Metrics - `OutputProcessor` Abstraction ( vllm-project#11973 )
Signed-off-by: [email protected] <[email protected]>
* [MISC] fix typo in kv transfer send recv test ( vllm-project#11983 )
* [Bug] Fix usage of `.transpose()` and `.view()` consecutively. ( vllm-project#11979 )
* [CI][Spec Decode] fix: broken test for EAGLE model ( vllm-project#11972 )
Signed-off-by: Sungjae Lee <[email protected]>
* [Misc] Fix Deepseek V2 fp8 kv-scale remapping ( vllm-project#11947 )
Signed-off-by: Yida Wu <[email protected]>
* [Misc]Minor Changes about Worker ( vllm-project#11555 )
Signed-off-by: Chenguang Li <[email protected]>
* [platform] add ray_device_key ( vllm-project#11948 )
Signed-off-by: youkaichao <[email protected]>
* Fix Max Token ID for Qwen-VL-Chat ( vllm-project#11980 )
Signed-off-by: Alex-Brooks <[email protected]>
* [Kernel] unified_attention for Attention.forward ( vllm-project#11967 )
Signed-off-by: Chen Zhang <[email protected]>
* [Doc][V1] Update model implementation guide for V1 support ( vllm-project#11998 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Doc] Organise installation documentation into categories and tabs ( vllm-project#11935 )
Signed-off-by: Harry Mellor <[email protected]>
* [platform] add device_control env var ( vllm-project#12009 )
Signed-off-by: youkaichao <[email protected]>
* [Platform] Move get_punica_wrapper() function to Platform ( vllm-project#11516 )
Signed-off-by: Shanshan Shen <[email protected]>
* bugfix: Fix signature mismatch in benchmark's `get_tokenizer` function ( vllm-project#11982 )
Signed-off-by: elijah <[email protected]>
* Using list
* Revert "[misc] improve memory profiling ( vllm-project#11809 )"
This reverts commit 889e662 .
* Trying to make scales work with compileable attention
* Docs lint
---------
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: yisheng <[email protected]>
Signed-off-by: Abatom <[email protected]>
Signed-off-by: Liangfu Chen <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Yuan Zhou <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Sourashis Roy <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Nishidha Panpaliya <[email protected]>
Signed-off-by: Ilya Lavrenov <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: Wallas Santos <[email protected]>
Signed-off-by: jiang1.li <[email protected]>
Signed-off-by: yan ma <[email protected]>
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: Maxime Fournioux <[email protected]>
Signed-off-by: Ye Qi <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: Kunshang Ji <[email protected]>
Signed-off-by: Kuntai Du <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Ren MinMin <[email protected]>
Signed-off-by: Travis Johnson <[email protected]>
Signed-off-by: Fred Reiss <[email protected]>
Signed-off-by: Sungjae Lee <[email protected]>
Signed-off-by: shaochangxu.scx <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: Rafael Vasquez <[email protected]>
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Yida Wu <[email protected]>
Signed-off-by: Chenguang Li <[email protected]>
Signed-off-by: Alex-Brooks <[email protected]>
Signed-off-by: Shanshan Shen <[email protected]>
Signed-off-by: elijah <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>
Co-authored-by: YiSheng5 <[email protected]>
Co-authored-by: Zhonghua Deng <[email protected]>
Co-authored-by: Liangfu Chen <[email protected]>
Co-authored-by: XiaobingZhang <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Yuan <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: jiangjiadi <[email protected]>
Co-authored-by: jiadi.jjd <[email protected]>
Co-authored-by: sroy745 <[email protected]>
Co-authored-by: Jie Fu (傅杰) <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: WangErXiao <[email protected]>
Co-authored-by: Nishidha <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Wallas Henrique <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: Yan Ma <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: rasmith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: Maxime Fournioux <[email protected]>
Co-authored-by: Guspan Tanadi <[email protected]>
Co-authored-by: Ye (Charlotte) Qi <[email protected]>
Co-authored-by: yeq <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
Co-authored-by: Charles Frye <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: cennn <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: minmin <[email protected]>
Co-authored-by: Ren MinMin <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Fred Reiss <[email protected]>
Co-authored-by: Sungjae Lee <[email protected]>
Co-authored-by: shaochangxu <[email protected]>
Co-authored-by: shaochangxu.scx <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: sixgod <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Rafael Vasquez <[email protected]>
Co-authored-by: Akshat Tripathi <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Avshalom Manevich <[email protected]>
Co-authored-by: Yangcheng Li <[email protected]>
Co-authored-by: Siyuan Li <[email protected]>
Co-authored-by: Concurrensee <[email protected]>
Co-authored-by: Chenguang Li <[email protected]>
Co-authored-by: Alex Brooks <[email protected]>
Co-authored-by: Shanshan Shen <[email protected]>
Co-authored-by: elijah <[email protected]> hongxiayang pushed a commit
to ROCm/vllm
that referenced
this pull request Jan 15, 2025 [MFM-20250115] Merge from ROCm/main to llama_fp8 ( #360 ) … d9385b4 * [Misc] Move weights mapper ( vllm-project#11443 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Bugfix] Fix issues in CPU build Dockerfile. Fixes vllm-project#9182 ( vllm-project#11435 )
Signed-off-by: Yuan Tang <[email protected]>
* [Model] Automatic conversion of classification and reward models ( vllm-project#11469 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] Unify VLLM_ENABLE_V1_MULTIPROCESSING handling in RayExecutor ( vllm-project#11472 )
* [Misc] Update disaggregation benchmark scripts and test logs ( vllm-project#11456 )
Signed-off-by: Jiaxin Shan <[email protected]>
* [Frontend] Enable decord to load video from base64 ( vllm-project#11492 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Improve GitHub links ( vllm-project#11491 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] Move some multimodal utils to modality-specific modules ( vllm-project#11494 )
Signed-off-by: DarkLight1337 <[email protected]>
* Mypy checking for vllm/compilation ( vllm-project#11496 )
Signed-off-by: lucast2021 <[email protected]>
Co-authored-by: lucast2021 <[email protected]>
* [Misc][LoRA] Fix LoRA weight mapper ( vllm-project#11495 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Doc] Add `QVQ` and `QwQ` to the list of supported models ( vllm-project#11509 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [V1] Adding min tokens/repetition/presence/frequence penalties to V1 sampler ( vllm-project#10681 )
Signed-off-by: Sourashis Roy <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
* [Model] Modify MolmoForCausalLM MLP ( vllm-project#11510 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Misc] Add placeholder module ( vllm-project#11501 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Add video example to openai client for multimodal ( vllm-project#11521 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [1/N] API Server (Remove Proxy) ( vllm-project#11529 )
* [Model] [Quantization] Support deepseek_v3 w8a8 fp8 block-wise quantization ( vllm-project#11523 )
Signed-off-by: mgoin <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: HandH1998 <[email protected]>
* [2/N] API Server: Avoid ulimit footgun ( vllm-project#11530 )
* Deepseek v3 ( vllm-project#11502 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]>
Co-authored-by: robertgshaw2-neuralmagic <[email protected]>
* [Docs] Document Deepseek V3 support ( vllm-project#11535 )
Signed-off-by: simon-mo <[email protected]>
* Update openai_compatible_server.md ( vllm-project#11536 )
Co-authored-by: Simon Mo <[email protected]>
* [V1] Use FlashInfer Sampling Kernel for Top-P & Top-K Sampling ( vllm-project#11394 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [V1] Fix yapf ( vllm-project#11538 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [CI] Fix broken CI ( vllm-project#11543 )
* [misc] fix typing ( vllm-project#11540 )
Signed-off-by: youkaichao <[email protected]>
* [V1][3/N] API Server: Reduce Task Switching + Handle Abort Properly ( vllm-project#11534 )
* [BugFix] Fix quantization for all other methods ( vllm-project#11547 )
* [Platform] Move model arch check to platform ( vllm-project#11503 )
Signed-off-by: Mengqing Cao <[email protected]>
* Update deploying_with_k8s.md with AMD ROCm GPU example ( vllm-project#11465 )
Signed-off-by: Alex He <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Bugfix] Fix TeleChat2ForCausalLM weights mapper ( vllm-project#11546 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Misc] Abstract the logic for reading and writing media content ( vllm-project#11527 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Add xgrammar in doc ( vllm-project#11549 )
Signed-off-by: ccjincong <[email protected]>
* [VLM] Support caching in merged multi-modal processor ( vllm-project#11396 )
Signed-off-by: DarkLight1337 <[email protected]>
* [MODEL] LoRA support for Jamba model ( vllm-project#11209 )
Signed-off-by: Erez Schwartz <[email protected]>
* [Misc]Add BNB quantization for MolmoForCausalLM ( vllm-project#11551 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Misc] Improve BNB loader to handle mixture of sharded and merged weights with same suffix ( vllm-project#11566 )
Signed-off-by: Isotr0py <[email protected]>
* [Bugfix] Fix for ROCM compressed tensor support ( vllm-project#11561 )
* [Doc] Update mllama example based on official doc ( vllm-project#11567 )
Signed-off-by: Chen Zhang <[email protected]>
* [V1] [4/N] API Server: ZMQ/MP Utilities ( vllm-project#11541 )
* [Bugfix] Last token measurement fix ( vllm-project#11376 )
Signed-off-by: rajveerb <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Model] Support InternLM2 Reward models ( vllm-project#11571 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Model] Remove hardcoded image tokens ids from Pixtral ( vllm-project#11582 )
Signed-off-by: Roger Wang <[email protected]>
* [Hardware][AMD]: Replace HIPCC version with more precise ROCm version ( vllm-project#11515 )
Signed-off-by: hjwei <[email protected]>
* [V1][Minor] Set pin_memory=False for token_ids_cpu tensor ( vllm-project#11581 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Doc] Minor documentation fixes ( vllm-project#11580 )
Signed-off-by: DarkLight1337 <[email protected]>
* [bugfix] interleaving sliding window for cohere2 model ( vllm-project#11583 )
Signed-off-by: youkaichao <[email protected]>
* [V1] [5/N] API Server: unify `Detokenizer` and `EngineCore` input ( vllm-project#11545 )
Signed-off-by: [email protected] <[email protected]>
* [Doc] Convert list tables to MyST ( vllm-project#11594 )
Signed-off-by: DarkLight1337 <[email protected]>
* [v1][bugfix] fix cudagraph with inplace buffer assignment ( vllm-project#11596 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] KV cache transfer connector registry ( vllm-project#11481 )
Signed-off-by: KuntaiDu <[email protected]>
* Remove print statement in DeepseekScalingRotaryEmbedding ( vllm-project#11604 )
* [v1] fix compilation cache ( vllm-project#11598 )
Signed-off-by: youkaichao <[email protected]>
* [Docker] bump up neuron sdk v2.21 ( vllm-project#11593 )
Signed-off-by: Liangfu Chen <[email protected]>
* [Build][Kernel] Update CUTLASS to v3.6.0 ( vllm-project#11607 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [CI/Build][CPU] Fix CPU CI by lazy importing triton FP8 kernels ( vllm-project#11618 )
Signed-off-by: jiang1.li <[email protected]>
* [platforms] enable platform plugins ( vllm-project#11602 )
Signed-off-by: youkaichao <[email protected]>
* [VLM] Abstract out multi-modal data parsing in merged processor ( vllm-project#11620 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] [6/N] API Server: Better Shutdown ( vllm-project#11586 )
* [Bugfix] Validate and concatenate image embeddings in MiniCPMVBaseModel ( vllm-project#11631 )
* [benchmark] Remove dependency for H100 benchmark step ( vllm-project#11572 )
* [Model][LoRA]LoRA support added for MolmoForCausalLM ( vllm-project#11439 )
Signed-off-by: Matthias Vogler <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Co-authored-by: Matthias Vogler <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
* [Bugfix] Fix OpenAI parallel sampling when using xgrammar ( vllm-project#11637 )
Signed-off-by: mgoin <[email protected]>
* [Misc][LoRA] Support Rank Stabilized LoRA (RSLoRA) ( vllm-project#6909 )
Signed-off-by: Jee Jee Li <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
* [Bugfix] Move the _touch(computed_blocks) call in the allocate_slots method to after the check for allocating new blocks. ( vllm-project#11565 )
* [V1] Simpify vision block hash for prefix caching by removing offset from hash ( vllm-project#11646 )
* [V1][VLM] V1 support for selected single-image models. ( vllm-project#11632 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [Benchmark] Add benchmark script for CPU offloading ( vllm-project#11533 )
Signed-off-by: ApostaC <[email protected]>
Co-authored-by: KuntaiDu <[email protected]>
* [Bugfix][Refactor] Unify model management in frontend ( vllm-project#11660 )
Signed-off-by: Joe Runde <[email protected]>
* [VLM] Add max-count checking in data parser for single image models ( vllm-project#11661 )
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Misc] Optimize Qwen2-VL LoRA test ( vllm-project#11663 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Misc] Replace space with - in the file names ( vllm-project#11667 )
Signed-off-by: Lu Fang <[email protected]>
* [Doc] Fix typo ( vllm-project#11666 )
Signed-off-by: Kazuhiro Serizawa <[email protected]>
* [V1] Implement Cascade Attention ( vllm-project#11635 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [VLM] Move supported limits and max tokens to merged multi-modal processor ( vllm-project#11669 )
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [VLM][Bugfix] Multi-modal processor compatible with V1 multi-input ( vllm-project#11674 )
Signed-off-by: DarkLight1337 <[email protected]>
* [mypy] Pass type checking in vllm/inputs ( vllm-project#11680 )
Signed-off-by: Tobias Pitters <[email protected]>
* [VLM] Merged multi-modal processor for LLaVA-NeXT ( vllm-project#11682 )
Signed-off-by: DarkLight1337 <[email protected]>
* According to vllm.EngineArgs, the name should be distributed_executor_backend ( vllm-project#11689 )
* [Bugfix] Free cross attention block table for preempted-for-recompute sequence group. ( vllm-project#10013 )
Signed-off-by: Kathy Yu <[email protected]>
* [V1][Minor] Optimize token_ids_cpu copy ( vllm-project#11692 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Bugfix] Change kv scaling factor by param json on nvidia gpu ( vllm-project#11688 )
Signed-off-by: bjmsong <[email protected]>
Co-authored-by: bjmsong <[email protected]>
* Resolve race conditions in Marlin kernel ( vllm-project#11493 )
Signed-off-by: wchen61 <[email protected]>
* [Misc] Minimum requirements for SageMaker compatibility ( vllm-project#11576 )
* Update default max_num_batch_tokens for chunked prefill ( vllm-project#11694 )
* [Bugfix] Check chain_speculative_sampling before calling it ( vllm-project#11673 )
Signed-off-by: Lu Fang <[email protected]>
* [perf-benchmark] Fix dependency for steps in benchmark pipeline ( vllm-project#11710 )
* [Model] Whisper model implementation ( vllm-project#11280 )
Co-authored-by: Aurick Qiao <[email protected]>
* [V1] Simplify Shutdown ( vllm-project#11659 )
* [Bugfix] Fix ColumnParallelLinearWithLoRA slice ( vllm-project#11708 )
Signed-off-by: ZincCat <[email protected]>
* [V1] Improve TP>1 Error Handling + Stack Trace ( vllm-project#11721 )
Co-authored-by: Tyler Michael Smith <[email protected]>
* [Misc]Add BNB quantization for Qwen2VL ( vllm-project#11719 )
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* Update requirements-tpu.txt to support python 3.9 and 3.11 ( vllm-project#11695 )
Signed-off-by: mgoin <[email protected]>
* [V1] Chore: cruft removal ( vllm-project#11724 )
* [V1] log GPU blocks num for MultiprocExecutor ( vllm-project#11656 )
* Update tool_calling.md ( vllm-project#11701 )
* Update bnb.md with example for OpenAI ( vllm-project#11718 )
* [V1] Add `RayExecutor` support for `AsyncLLM` (api server) ( vllm-project#11712 )
* [V1] Add kv cache utils tests. ( vllm-project#11513 )
Signed-off-by: xcnick <[email protected]>
* [Core][Bugfix] Use correct device to initialize GPU data during CUDA-graph-capture ( vllm-project#11233 )
Signed-off-by: Yan Burman <[email protected]>
Signed-off-by: Ido Asraff <[email protected]>
* [VLM] Merged multi-modal processors for LLaVA-NeXT-Video and LLaVA-OneVision ( vllm-project#11717 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix precision error in LLaVA-NeXT ( vllm-project#11735 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model] Remove unnecessary weight initialization logic ( vllm-project#11736 )
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [Bugfix][V1] Fix test_kv_cache_utils.py ( vllm-project#11738 )
Signed-off-by: Jee Jee Li <[email protected]>
* [MISC] Replace c10::optional with std::optional ( vllm-project#11730 )
Signed-off-by: Lu Fang <[email protected]>
* [distributed] remove pynccl's redundant stream ( vllm-project#11744 )
* fix: [doc] fix typo ( vllm-project#11751 )
Co-authored-by: Lancer <[email protected]>
* [Frontend] Improve `StreamingResponse` Exception Handling ( vllm-project#11752 )
* [distributed] remove pynccl's redundant change_state ( vllm-project#11749 )
* [Doc] [1/N] Reorganize Getting Started section ( vllm-project#11645 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Remove block size constraint ( vllm-project#11723 )
* [V1] Add BlockTable class ( vllm-project#11693 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Misc] Fix typo for valid_tool_parses ( vllm-project#11753 )
Signed-off-by: Rui Qiao <[email protected]>
* [V1] Refactor get_executor_cls ( vllm-project#11754 )
* [mypy] Forward pass function type hints in lora ( vllm-project#11740 )
Signed-off-by: lucast2021 <[email protected]>
Co-authored-by: lucast2021 <[email protected]>
* k8s-config: Update the secret to use stringData ( vllm-project#11679 )
Signed-off-by: Suraj Deshmukh <[email protected]>
* [VLM] Separate out profiling-related logic ( vllm-project#11746 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc][2/N] Reorganize Models and Usage sections ( vllm-project#11755 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix max image size for LLaVA-Onevision ( vllm-project#11769 )
Signed-off-by: Roger Wang <[email protected]>
* [doc] explain how to add interleaving sliding window support ( vllm-project#11771 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix][V1] Fix molmo text-only inputs ( vllm-project#11676 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Kernel] Move attn_type to Attention.__init__() ( vllm-project#11690 )
Signed-off-by: Chen Zhang <[email protected]>
* format
* [V1] Extend beyond image modality and support mixed-modality inference with Llava-OneVision ( vllm-project#11685 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>
* deepseek overflow fix ( #349 )
* [Bugfix] Fix LLaVA-NeXT feature size precision error (for real) ( vllm-project#11772 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model] Future-proof Qwen2-Audio multi-modal processor ( vllm-project#11776 )
Signed-off-by: DarkLight1337 <[email protected]>
* [XPU] Make pp group initilized for pipeline-parallelism ( vllm-project#11648 )
Signed-off-by: yisheng <[email protected]>
* [Doc][3/N] Reorganize Serving section ( vllm-project#11766 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Kernel][LoRA]Punica prefill kernels fusion ( vllm-project#11234 )
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: Abatom <[email protected]>
Co-authored-by: Zhonghua Deng <[email protected]>
* [Bugfix] Update attention interface in `Whisper` ( vllm-project#11784 )
Signed-off-by: Roger Wang <[email protected]>
* [CI] Fix neuron CI and run offline tests ( vllm-project#11779 )
Signed-off-by: Liangfu Chen <[email protected]>
* fix init error for MessageQueue when n_local_reader is zero ( vllm-project#11768 )
* [Doc] Create a vulnerability management team ( vllm-project#9925 )
Signed-off-by: Russell Bryant <[email protected]>
* [CI][CPU] adding build number to docker image name ( vllm-project#11788 )
Signed-off-by: Yuan Zhou <[email protected]>
* [V1][Doc] Update V1 support for `LLaVa-NeXT-Video` ( vllm-project#11798 )
Signed-off-by: Roger Wang <[email protected]>
* [Bugfix] Comprehensively test and fix LLaVA-NeXT feature size calculation ( vllm-project#11800 )
Signed-off-by: DarkLight1337 <[email protected]>
* [doc] add doc to explain how to use uv ( vllm-project#11773 )
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [V1] Support audio language models on V1 ( vllm-project#11733 )
Signed-off-by: Roger Wang <[email protected]>
* [doc] update how pip can install nightly wheels ( vllm-project#11806 )
Signed-off-by: youkaichao <[email protected]>
* [Doc] Add note to `gte-Qwen2` models ( vllm-project#11808 )
Signed-off-by: DarkLight1337 <[email protected]>
* [optimization] remove python function call for custom op ( vllm-project#11750 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] update the prefix for qwen2 ( vllm-project#11795 )
Co-authored-by: jiadi.jjd <[email protected]>
* [Doc]Add documentation for using EAGLE in vLLM ( vllm-project#11417 )
Signed-off-by: Sourashis Roy <[email protected]>
* [Bugfix] Significant performance drop on CPUs with --num-scheduler-steps > 1 ( vllm-project#11794 )
* [Doc] Group examples into categories ( vllm-project#11782 )
Signed-off-by: Harry Mellor <[email protected]>
* [Bugfix] Fix image input for Pixtral-HF ( vllm-project#11741 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] sort torch profiler table by kernel timing ( vllm-project#11813 )
* Remove the duplicate imports of MultiModalKwargs and PlaceholderRange… ( vllm-project#11824 )
* Fixed docker build for ppc64le ( vllm-project#11518 )
Signed-off-by: Nishidha Panpaliya <[email protected]>
* [OpenVINO] Fixed Docker.openvino build ( vllm-project#11732 )
Signed-off-by: Ilya Lavrenov <[email protected]>
* [Bugfix] Add checks for LoRA and CPU offload ( vllm-project#11810 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Docs] reorganize sponsorship page ( vllm-project#11639 )
Signed-off-by: simon-mo <[email protected]>
* [Bug] Fix pickling of `ModelConfig` when RunAI Model Streamer is used ( vllm-project#11825 )
Signed-off-by: DarkLight1337 <[email protected]>
* [misc] improve memory profiling ( vllm-project#11809 )
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [doc] update wheels url ( vllm-project#11830 )
Signed-off-by: youkaichao <[email protected]>
* [Docs] Update sponsor name: 'Novita' to 'Novita AI' ( vllm-project#11833 )
* [Hardware][Apple] Native support for macOS Apple Silicon ( vllm-project#11696 )
Signed-off-by: Wallas Santos <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [torch.compile] consider relevant code in compilation cache ( vllm-project#11614 )
Signed-off-by: youkaichao <[email protected]>
* [VLM] Reorganize profiling/processing-related code ( vllm-project#11812 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Move examples into categories ( vllm-project#11840 )
Signed-off-by: Harry Mellor <[email protected]>
* [Doc][4/N] Reorganize API Reference ( vllm-project#11843 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI/Build][Bugfix] Fix CPU CI image clean up ( vllm-project#11836 )
Signed-off-by: jiang1.li <[email protected]>
* [Bugfix][XPU] fix silu_and_mul ( vllm-project#11823 )
Signed-off-by: yan ma <[email protected]>
* [Misc] Move some model utils into vision file ( vllm-project#11848 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Expand Multimodal API Reference ( vllm-project#11852 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc]add some explanations for BlockHashType ( vllm-project#11847 )
* [TPU][Quantization] TPU `W8A8` ( vllm-project#11785 )
Co-authored-by: Woosuk Kwon <[email protected]>
* [Kernel][Triton][AMD] Use block size heuristic for avg 2.8x speedup for int8 models ( vllm-project#11698 )
Signed-off-by: Randall Smith <[email protected]>
* [Docs] Add Google Cloud Meetup ( vllm-project#11864 )
* Revert nccl changes ( #351 )
* Revert "[distributed] remove pynccl's redundant change_state ( vllm-project#11749 )"
This reverts commit 9e764e7 .
* Revert "[distributed] remove pynccl's redundant stream ( vllm-project#11744 )"
This reverts commit 635b897 .
* [CI] Turn on basic correctness tests for V1 ( vllm-project#10864 )
* treat do_lower_case in the same way as the sentence-transformers library ( vllm-project#11815 )
Signed-off-by: Max de Bayser <[email protected]>
* [Doc] Recommend uv and python 3.12 for quickstart guide ( vllm-project#11849 )
Signed-off-by: mgoin <[email protected]>
* [Misc] Move `print_*_once` from utils to logger ( vllm-project#11298 )
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Maxime Fournioux <[email protected]>
Co-authored-by: Maxime Fournioux <[email protected]>
* [Doc] Intended links Python multiprocessing library ( vllm-project#11878 )
* [perf]fix current stream ( vllm-project#11870 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Override dunder methods of placeholder modules ( vllm-project#11882 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] fix beam search input errors and latency benchmark script ( vllm-project#11875 )
Signed-off-by: Ye Qi <[email protected]>
Co-authored-by: yeq <[email protected]>
* [Doc] Add model development API Reference ( vllm-project#11884 )
Signed-off-by: DarkLight1337 <[email protected]>
* [platform] Allow platform specify attention backend ( vllm-project#11609 )
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
* [ci]try to fix flaky multi-step tests ( vllm-project#11894 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Provide correct Pixtral-HF chat template ( vllm-project#11891 )
Signed-off-by: DarkLight1337 <[email protected]>
* fp8 support ( #352 )
Co-authored-by: Yida Wu <[email protected]>
* [Docs] Add Modal to deployment frameworks ( vllm-project#11907 )
* [Doc][5/N] Move Community and API Reference to the bottom ( vllm-project#11896 )
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
* [VLM] Enable tokenized inputs for merged multi-modal processor ( vllm-project#11900 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Show default pooling method in a table ( vllm-project#11904 )
Signed-off-by: DarkLight1337 <[email protected]>
* [torch.compile] Hide KV cache behind torch.compile boundary ( vllm-project#11677 )
Signed-off-by: Chen Zhang <[email protected]>
* [Bugfix] Validate lora adapters to avoid crashing server ( vllm-project#11727 )
Signed-off-by: Joe Runde <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
* [BUGFIX] Fix `UnspecifiedPlatform` package name ( vllm-project#11916 )
Signed-off-by: Kunshang Ji <[email protected]>
* [ci] fix gh200 tests ( vllm-project#11919 )
Signed-off-by: youkaichao <[email protected]>
* [misc] remove python function call for custom activation op ( vllm-project#11885 )
Co-authored-by: youkaichao <[email protected]>
* [platform] support pytorch custom op pluggable ( vllm-project#11328 )
Signed-off-by: wangxiyuan <[email protected]>
* Replace "online inference" with "online serving" ( vllm-project#11923 )
Signed-off-by: Harry Mellor <[email protected]>
* [ci] Fix sampler tests ( vllm-project#11922 )
Signed-off-by: youkaichao <[email protected]>
* [Doc] [1/N] Initial guide for merged multi-modal processor ( vllm-project#11925 )
Signed-off-by: DarkLight1337 <[email protected]>
* [platform] support custom torch.compile backend key ( vllm-project#11318 )
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: youkaichao <[email protected]>
* [Doc] Rename offline inference examples ( vllm-project#11927 )
Signed-off-by: Harry Mellor <[email protected]>
* [Docs] Fix docstring in `get_ip` function ( vllm-project#11932 )
Signed-off-by: Kuntai Du <[email protected]>
* Doc fix in `benchmark_long_document_qa_throughput.py` ( vllm-project#11933 )
Signed-off-by: Kuntai Du <[email protected]>
* [Hardware][CPU] Support MOE models on x86 CPU ( vllm-project#11831 )
Signed-off-by: jiang1.li <[email protected]>
* [Misc] Clean up debug code in Deepseek-V3 ( vllm-project#11930 )
Signed-off-by: Isotr0py <[email protected]>
* [Misc] Update benchmark_prefix_caching.py fixed example usage ( vllm-project#11920 )
Signed-off-by: Ren MinMin <[email protected]>
Co-authored-by: Ren MinMin <[email protected]>
* [Bugfix] Check that number of images matches number of <|image|> tokens with mllama ( vllm-project#11939 )
Signed-off-by: Travis Johnson <[email protected]>
* [mypy] Fix mypy warnings in api_server.py ( vllm-project#11941 )
Signed-off-by: Fred Reiss <[email protected]>
* [ci] fix broken distributed-tests-4-gpus ( vllm-project#11937 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix][SpecDecode] Adjust Eagle model architecture to align with intended design ( vllm-project#11672 )
Signed-off-by: Sungjae Lee <[email protected]>
* [Bugfix] fused_experts_impl wrong compute type for float32 ( vllm-project#11921 )
Signed-off-by: shaochangxu.scx <[email protected]>
Co-authored-by: shaochangxu.scx <[email protected]>
* [CI/Build] Move model-specific multi-modal processing tests ( vllm-project#11934 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Basic guide for writing unit tests for new models ( vllm-project#11951 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix RobertaModel loading ( vllm-project#11940 )
Signed-off-by: NickLucche <[email protected]>
* [Model] Add cogagent model support vLLM ( vllm-project#11742 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [V1] Avoid sending text prompt to core engine ( vllm-project#11963 )
Signed-off-by: Roger Wang <[email protected]>
* [CI/Build] Add markdown linter ( vllm-project#11857 )
Signed-off-by: Rafael Vasquez <[email protected]>
* [Model] Initialize support for Deepseek-VL2 models ( vllm-project#11578 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Hardware][CPU] Multi-LoRA implementation for the CPU backend ( vllm-project#11100 )
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [Hardware][TPU] workaround fix for MoE on TPU ( vllm-project#11764 )
* [V1][Core][1/n] Logging and Metrics ( vllm-project#11962 )
Signed-off-by: [email protected] <[email protected]>
* [Model] Support GGUF models newly added in `transformers` 4.46.0 ( vllm-project#9685 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [V1] [2/n] Logging and Metrics - `OutputProcessor` Abstraction ( vllm-project#11973 )
Signed-off-by: [email protected] <[email protected]>
* [MISC] fix typo in kv transfer send recv test ( vllm-project#11983 )
* [Bug] Fix usage of `.transpose()` and `.view()` consecutively. ( vllm-project#11979 )
* [CI][Spec Decode] fix: broken test for EAGLE model ( vllm-project#11972 )
Signed-off-by: Sungjae Lee <[email protected]>
* [Misc] Fix Deepseek V2 fp8 kv-scale remapping ( vllm-project#11947 )
Signed-off-by: Yida Wu <[email protected]>
* [Misc]Minor Changes about Worker ( vllm-project#11555 )
Signed-off-by: Chenguang Li <[email protected]>
* [platform] add ray_device_key ( vllm-project#11948 )
Signed-off-by: youkaichao <[email protected]>
* Fix Max Token ID for Qwen-VL-Chat ( vllm-project#11980 )
Signed-off-by: Alex-Brooks <[email protected]>
* [Kernel] unified_attention for Attention.forward ( vllm-project#11967 )
Signed-off-by: Chen Zhang <[email protected]>
* [Doc][V1] Update model implementation guide for V1 support ( vllm-project#11998 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Doc] Organise installation documentation into categories and tabs ( vllm-project#11935 )
Signed-off-by: Harry Mellor <[email protected]>
* [platform] add device_control env var ( vllm-project#12009 )
Signed-off-by: youkaichao <[email protected]>
* [Platform] Move get_punica_wrapper() function to Platform ( vllm-project#11516 )
Signed-off-by: Shanshan Shen <[email protected]>
* bugfix: Fix signature mismatch in benchmark's `get_tokenizer` function ( vllm-project#11982 )
Signed-off-by: elijah <[email protected]>
* Using list
* Revert "[misc] improve memory profiling ( vllm-project#11809 )"
This reverts commit 889e662 .
* Multi-lingual P3L ( #356 )
* Commiting the *multilingual* P3L test.
* Created a *multi-lingual* P3L test.
* Making ruff happy.
* .
* Added a reference to the language-scripture Confluence table.
* Typo fixing.
* Harmonizing naming.
* Fixing comments in the header.
---------
Co-authored-by: Alexei V. Ivanov <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
* Trying to make scales work with compileable attention
* Docs lint
* linter formatting bug fixes
* inherit config file updates under fused_moe from main branch.
* match tests for the MOE layers with main.
---------
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Jiaxin Shan <[email protected]>
Signed-off-by: lucast2021 <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Sourashis Roy <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: Alex He <[email protected]>
Signed-off-by: ccjincong <[email protected]>
Signed-off-by: Erez Schwartz <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: rajveerb <[email protected]>
Signed-off-by: hjwei <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: KuntaiDu <[email protected]>
Signed-off-by: Liangfu Chen <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: jiang1.li <[email protected]>
Signed-off-by: Matthias Vogler <[email protected]>
Signed-off-by: ApostaC <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: Lu Fang <[email protected]>
Signed-off-by: Kazuhiro Serizawa <[email protected]>
Signed-off-by: Tobias Pitters <[email protected]>
Signed-off-by: Kathy Yu <[email protected]>
Signed-off-by: bjmsong <[email protected]>
Signed-off-by: wchen61 <[email protected]>
Signed-off-by: ZincCat <[email protected]>
Signed-off-by: xcnick <[email protected]>
Signed-off-by: Yan Burman <[email protected]>
Signed-off-by: Ido Asraff <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: Suraj Deshmukh <[email protected]>
Signed-off-by: yisheng <[email protected]>
Signed-off-by: Abatom <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Yuan Zhou <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Nishidha Panpaliya <[email protected]>
Signed-off-by: Ilya Lavrenov <[email protected]>
Signed-off-by: Wallas Santos <[email protected]>
Signed-off-by: yan ma <[email protected]>
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Maxime Fournioux <[email protected]>
Signed-off-by: Ye Qi <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Kunshang Ji <[email protected]>
Signed-off-by: Kuntai Du <[email protected]>
Signed-off-by: Ren MinMin <[email protected]>
Signed-off-by: Travis Johnson <[email protected]>
Signed-off-by: Fred Reiss <[email protected]>
Signed-off-by: Sungjae Lee <[email protected]>
Signed-off-by: shaochangxu.scx <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: Rafael Vasquez <[email protected]>
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: Yida Wu <[email protected]>
Signed-off-by: Chenguang Li <[email protected]>
Signed-off-by: Alex-Brooks <[email protected]>
Signed-off-by: Shanshan Shen <[email protected]>
Signed-off-by: elijah <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Jiaxin Shan <[email protected]>
Co-authored-by: Lucas Tucker <[email protected]>
Co-authored-by: lucast2021 <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: sroy745 <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: HandH1998 <[email protected]>
Co-authored-by: robertgshaw2-neuralmagic <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
Co-authored-by: AlexHe99 <[email protected]>
Co-authored-by: Chen1022 <[email protected]>
Co-authored-by: ErezSC42 <[email protected]>
Co-authored-by: Selali <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: Rajveer Bachkaniwala <[email protected]>
Co-authored-by: hj-wei <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: Liangfu Chen <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: whyiug <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Matthias Vogler <[email protected]>
Co-authored-by: Matthias Vogler <[email protected]>
Co-authored-by: John Giorgi <[email protected]>
Co-authored-by: sakunkun <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Yihua Cheng <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Lu Fang <[email protected]>
Co-authored-by: Kazuhiro Serizawa <[email protected]>
Co-authored-by: Tobias Pitters <[email protected]>
Co-authored-by: Chunyang Wen <[email protected]>
Co-authored-by: Kathy Yu <[email protected]>
Co-authored-by: bjmsong <[email protected]>
Co-authored-by: bjmsong <[email protected]>
Co-authored-by: wchen61 <[email protected]>
Co-authored-by: Nathan Azrak <[email protected]>
Co-authored-by: Sachin Varghese <[email protected]>
Co-authored-by: Aurick Qiao <[email protected]>
Co-authored-by: Aurick Qiao <[email protected]>
Co-authored-by: ZincCat <[email protected]>
Co-authored-by: WangErXiao <[email protected]>
Co-authored-by: Hust_YangXian <[email protected]>
Co-authored-by: Alberto Ferrer <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: xcnick <[email protected]>
Co-authored-by: Yan Burman <[email protected]>
Co-authored-by: cennn <[email protected]>
Co-authored-by: Lancer <[email protected]>
Co-authored-by: Lancer <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Suraj Deshmukh <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Concurrensee <[email protected]>
Co-authored-by: YiSheng5 <[email protected]>
Co-authored-by: Zhonghua Deng <[email protected]>
Co-authored-by: XiaobingZhang <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Yuan <[email protected]>
Co-authored-by: jiangjiadi <[email protected]>
Co-authored-by: jiadi.jjd <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Jie Fu (傅杰) <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: Nishidha <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Wallas Henrique <[email protected]>
Co-authored-by: Yan Ma <[email protected]>
Co-authored-by: rasmith <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: Maxime Fournioux <[email protected]>
Co-authored-by: Guspan Tanadi <[email protected]>
Co-authored-by: Ye (Charlotte) Qi <[email protected]>
Co-authored-by: yeq <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
Co-authored-by: Yida Wu <[email protected]>
Co-authored-by: Charles Frye <[email protected]>
Co-authored-by: minmin <[email protected]>
Co-authored-by: Ren MinMin <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Fred Reiss <[email protected]>
Co-authored-by: Sungjae Lee <[email protected]>
Co-authored-by: shaochangxu <[email protected]>
Co-authored-by: shaochangxu.scx <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: sixgod <[email protected]>
Co-authored-by: Rafael Vasquez <[email protected]>
Co-authored-by: Akshat Tripathi <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Avshalom Manevich <[email protected]>
Co-authored-by: Yangcheng Li <[email protected]>
Co-authored-by: Siyuan Li <[email protected]>
Co-authored-by: Chenguang Li <[email protected]>
Co-authored-by: Alex Brooks <[email protected]>
Co-authored-by: Shanshan Shen <[email protected]>
Co-authored-by: elijah <[email protected]>
Co-authored-by: Alexei-V-Ivanov-AMD <[email protected]>
Co-authored-by: Alexei V. Ivanov <[email protected]>
Co-authored-by: vllmellm <[email protected]>
Isotr0py pushed a commit to Isotr0py/vllm that referenced this pull request Feb 2, 2025: [Kernel][Triton][AMD] Use block size heuristic for avg 2.8x speedup for int8 models ( vllm-project#11698 ) … c4e6079
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
mzusman pushed a commit to mzusman/vllm that referenced this pull request Mar 12, 2025: [Kernel][Triton][AMD] Use block size heuristic for avg 2.8x speedup for int8 models ( vllm-project#11698 ) … 5d97676
Signed-off-by: Randall Smith <[email protected]>
|
2025-09-07 17:47:15
|
b55ed6ef8ab0dce7fb0f79ff292dafdb4d22610c
|
https://github.com/vllm-project/vllm/pull/11692
| false | true | true | true |
PERF: latency, optimization, speedup | SERVING: Serving, serving, API Server | TEST: test, test, test
|
Copy link Collaborator WoosukKwon commented Jan 2, 2025 Currently, we don't consider the actual lengths when copying rows of token_ids_cpu . This small PR optimizes the copy by tracking the actual lengths.
[V1][Minor] Optimize token_ids_cpu copy … 5ecf50a
Signed-off-by: Woosuk Kwon <[email protected]>
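To make the optimization concrete, here is a minimal sketch, assuming a NumPy-backed token_ids_cpu buffer and a separately tracked per-row length. The buffer sizes and the move_row helper are hypothetical names introduced only for illustration; this is not vLLM's actual implementation.

```python
# Minimal sketch (not vLLM's actual code) of the idea described above:
# instead of copying a whole padded row of the token-ID buffer, copy only
# the prefix that actually holds tokens, using a tracked per-row length.
import numpy as np

MAX_NUM_REQS = 4   # hypothetical buffer dimensions, chosen only for illustration
MAX_MODEL_LEN = 8

# Pre-allocated CPU buffer: one row of token IDs per request slot.
token_ids_cpu = np.zeros((MAX_NUM_REQS, MAX_MODEL_LEN), dtype=np.int32)
# Tracked "actual length" of each row.
num_tokens = np.zeros(MAX_NUM_REQS, dtype=np.int32)


def move_row(src: int, dst: int) -> None:
    """Move the tokens of request `src` into slot `dst`.

    Assigning token_ids_cpu[dst] = token_ids_cpu[src] would always touch
    MAX_MODEL_LEN entries; slicing with the tracked length copies only the
    entries that are actually in use.
    """
    n = num_tokens[src]
    token_ids_cpu[dst, :n] = token_ids_cpu[src, :n]
    num_tokens[dst] = n


# Example: the request in slot 2 has produced 3 tokens; compact it into slot 0.
token_ids_cpu[2, :3] = [101, 7592, 2088]
num_tokens[2] = 3
move_row(src=2, dst=0)
print(token_ids_cpu[0, :num_tokens[0]])   # [ 101 7592 2088]
```

The saving comes purely from the slice: a padded row is mostly unused, so copying only the first num_tokens entries avoids touching the tail of every row on each copy.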
WoosukKwon added the ready label (ONLY add when PR is ready to merge/full CI is needed) Jan 2, 2025 and requested review from robertgshaw2-redhat , njhill , ywang96 , comaniac and alexm-redhat as code owners January 2, 2025 16:43
Copy link github-actions bot commented Jan 2, 2025 👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run fastcheck CI, which starts a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these: add the ready label to the PR, or enable auto-merge. 🚀
comaniac approved these changes Jan 2, 2025 and left a comment: LGTM
mgoin approved these changes Jan 2, 2025 and left a comment: Clear improvement
mgoin merged commit b55ed6e into main Jan 2, 2025 (65 of 66 checks passed)
mgoin deleted the v1-token-ids branch January 2, 2025 19:05
hongxiayang pushed a commit to ROCm/vllm that referenced this pull request Jan 15, 2025: [MFM-20250115] Merge from ROCm/main to llama_fp8 ( #360 ) … d9385b4
* [Misc] Move weights mapper ( vllm-project#11443 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Bugfix] Fix issues in CPU build Dockerfile. Fixes vllm-project#9182 ( vllm-project#11435 )
Signed-off-by: Yuan Tang <[email protected]>
* [Model] Automatic conversion of classification and reward models ( vllm-project#11469 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] Unify VLLM_ENABLE_V1_MULTIPROCESSING handling in RayExecutor ( vllm-project#11472 )
* [Misc] Update disaggregation benchmark scripts and test logs ( vllm-project#11456 )
Signed-off-by: Jiaxin Shan <[email protected]>
* [Frontend] Enable decord to load video from base64 ( vllm-project#11492 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Improve GitHub links ( vllm-project#11491 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] Move some multimodal utils to modality-specific modules ( vllm-project#11494 )
Signed-off-by: DarkLight1337 <[email protected]>
* Mypy checking for vllm/compilation ( vllm-project#11496 )
Signed-off-by: lucast2021 <[email protected]>
Co-authored-by: lucast2021 <[email protected]>
* [Misc][LoRA] Fix LoRA weight mapper ( vllm-project#11495 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Doc] Add `QVQ` and `QwQ` to the list of supported models ( vllm-project#11509 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [V1] Adding min tokens/repetition/presence/frequence penalties to V1 sampler ( vllm-project#10681 )
Signed-off-by: Sourashis Roy <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
* [Model] Modify MolmoForCausalLM MLP ( vllm-project#11510 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Misc] Add placeholder module ( vllm-project#11501 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Add video example to openai client for multimodal ( vllm-project#11521 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [1/N] API Server (Remove Proxy) ( vllm-project#11529 )
* [Model] [Quantization] Support deepseek_v3 w8a8 fp8 block-wise quantization ( vllm-project#11523 )
Signed-off-by: mgoin <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: HandH1998 <[email protected]>
* [2/N] API Server: Avoid ulimit footgun ( vllm-project#11530 )
* Deepseek v3 ( vllm-project#11502 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]>
Co-authored-by: robertgshaw2-neuralmagic <[email protected]>
* [Docs] Document Deepseek V3 support ( vllm-project#11535 )
Signed-off-by: simon-mo <[email protected]>
* Update openai_compatible_server.md ( vllm-project#11536 )
Co-authored-by: Simon Mo <[email protected]>
* [V1] Use FlashInfer Sampling Kernel for Top-P & Top-K Sampling ( vllm-project#11394 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [V1] Fix yapf ( vllm-project#11538 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [CI] Fix broken CI ( vllm-project#11543 )
* [misc] fix typing ( vllm-project#11540 )
Signed-off-by: youkaichao <[email protected]>
* [V1][3/N] API Server: Reduce Task Switching + Handle Abort Properly ( vllm-project#11534 )
* [BugFix] Fix quantization for all other methods ( vllm-project#11547 )
* [Platform] Move model arch check to platform ( vllm-project#11503 )
Signed-off-by: Mengqing Cao <[email protected]>
* Update deploying_with_k8s.md with AMD ROCm GPU example ( vllm-project#11465 )
Signed-off-by: Alex He <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Bugfix] Fix TeleChat2ForCausalLM weights mapper ( vllm-project#11546 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Misc] Abstract the logic for reading and writing media content ( vllm-project#11527 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Add xgrammar in doc ( vllm-project#11549 )
Signed-off-by: ccjincong <[email protected]>
* [VLM] Support caching in merged multi-modal processor ( vllm-project#11396 )
Signed-off-by: DarkLight1337 <[email protected]>
* [MODEL] LoRA support for Jamba model ( vllm-project#11209 )
Signed-off-by: Erez Schwartz <[email protected]>
* [Misc]Add BNB quantization for MolmoForCausalLM ( vllm-project#11551 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Misc] Improve BNB loader to handle mixture of sharded and merged weights with same suffix ( vllm-project#11566 )
Signed-off-by: Isotr0py <[email protected]>
* [Bugfix] Fix for ROCM compressed tensor support ( vllm-project#11561 )
* [Doc] Update mllama example based on official doc ( vllm-project#11567 )
Signed-off-by: Chen Zhang <[email protected]>
* [V1] [4/N] API Server: ZMQ/MP Utilities ( vllm-project#11541 )
* [Bugfix] Last token measurement fix ( vllm-project#11376 )
Signed-off-by: rajveerb <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Model] Support InternLM2 Reward models ( vllm-project#11571 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Model] Remove hardcoded image tokens ids from Pixtral ( vllm-project#11582 )
Signed-off-by: Roger Wang <[email protected]>
* [Hardware][AMD]: Replace HIPCC version with more precise ROCm version ( vllm-project#11515 )
Signed-off-by: hjwei <[email protected]>
* [V1][Minor] Set pin_memory=False for token_ids_cpu tensor ( vllm-project#11581 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Doc] Minor documentation fixes ( vllm-project#11580 )
Signed-off-by: DarkLight1337 <[email protected]>
* [bugfix] interleaving sliding window for cohere2 model ( vllm-project#11583 )
Signed-off-by: youkaichao <[email protected]>
* [V1] [5/N] API Server: unify `Detokenizer` and `EngineCore` input ( vllm-project#11545 )
Signed-off-by: [email protected] <[email protected]>
* [Doc] Convert list tables to MyST ( vllm-project#11594 )
Signed-off-by: DarkLight1337 <[email protected]>
* [v1][bugfix] fix cudagraph with inplace buffer assignment ( vllm-project#11596 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] KV cache transfer connector registry ( vllm-project#11481 )
Signed-off-by: KuntaiDu <[email protected]>
* Remove print statement in DeepseekScalingRotaryEmbedding ( vllm-project#11604 )
* [v1] fix compilation cache ( vllm-project#11598 )
Signed-off-by: youkaichao <[email protected]>
* [Docker] bump up neuron sdk v2.21 ( vllm-project#11593 )
Signed-off-by: Liangfu Chen <[email protected]>
* [Build][Kernel] Update CUTLASS to v3.6.0 ( vllm-project#11607 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [CI/Build][CPU] Fix CPU CI by lazy importing triton FP8 kernels ( vllm-project#11618 )
Signed-off-by: jiang1.li <[email protected]>
* [platforms] enable platform plugins ( vllm-project#11602 )
Signed-off-by: youkaichao <[email protected]>
* [VLM] Abstract out multi-modal data parsing in merged processor ( vllm-project#11620 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] [6/N] API Server: Better Shutdown ( vllm-project#11586 )
* [Bugfix] Validate and concatenate image embeddings in MiniCPMVBaseModel ( vllm-project#11631 )
* [benchmark] Remove dependency for H100 benchmark step ( vllm-project#11572 )
* [Model][LoRA]LoRA support added for MolmoForCausalLM ( vllm-project#11439 )
Signed-off-by: Matthias Vogler <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Co-authored-by: Matthias Vogler <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
* [Bugfix] Fix OpenAI parallel sampling when using xgrammar ( vllm-project#11637 )
Signed-off-by: mgoin <[email protected]>
* [Misc][LoRA] Support Rank Stabilized LoRA (RSLoRA) ( vllm-project#6909 )
Signed-off-by: Jee Jee Li <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
* [Bugfix] Move the _touch(computed_blocks) call in the allocate_slots method to after the check for allocating new blocks. ( vllm-project#11565 )
* [V1] Simpify vision block hash for prefix caching by removing offset from hash ( vllm-project#11646 )
* [V1][VLM] V1 support for selected single-image models. ( vllm-project#11632 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [Benchmark] Add benchmark script for CPU offloading ( vllm-project#11533 )
Signed-off-by: ApostaC <[email protected]>
Co-authored-by: KuntaiDu <[email protected]>
* [Bugfix][Refactor] Unify model management in frontend ( vllm-project#11660 )
Signed-off-by: Joe Runde <[email protected]>
* [VLM] Add max-count checking in data parser for single image models ( vllm-project#11661 )
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Misc] Optimize Qwen2-VL LoRA test ( vllm-project#11663 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Misc] Replace space with - in the file names ( vllm-project#11667 )
Signed-off-by: Lu Fang <[email protected]>
* [Doc] Fix typo ( vllm-project#11666 )
Signed-off-by: Kazuhiro Serizawa <[email protected]>
* [V1] Implement Cascade Attention ( vllm-project#11635 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [VLM] Move supported limits and max tokens to merged multi-modal processor ( vllm-project#11669 )
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [VLM][Bugfix] Multi-modal processor compatible with V1 multi-input ( vllm-project#11674 )
Signed-off-by: DarkLight1337 <[email protected]>
* [mypy] Pass type checking in vllm/inputs ( vllm-project#11680 )
Signed-off-by: Tobias Pitters <[email protected]>
* [VLM] Merged multi-modal processor for LLaVA-NeXT ( vllm-project#11682 )
Signed-off-by: DarkLight1337 <[email protected]>
* According to vllm.EngineArgs, the name should be distributed_executor_backend ( vllm-project#11689 )
* [Bugfix] Free cross attention block table for preempted-for-recompute sequence group. ( vllm-project#10013 )
Signed-off-by: Kathy Yu <[email protected]>
* [V1][Minor] Optimize token_ids_cpu copy ( vllm-project#11692 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Bugfix] Change kv scaling factor by param json on nvidia gpu ( vllm-project#11688 )
Signed-off-by: bjmsong <[email protected]>
Co-authored-by: bjmsong <[email protected]>
* Resolve race conditions in Marlin kernel ( vllm-project#11493 )
Signed-off-by: wchen61 <[email protected]>
* [Misc] Minimum requirements for SageMaker compatibility ( vllm-project#11576 )
* Update default max_num_batch_tokens for chunked prefill ( vllm-project#11694 )
* [Bugfix] Check chain_speculative_sampling before calling it ( vllm-project#11673 )
Signed-off-by: Lu Fang <[email protected]>
* [perf-benchmark] Fix dependency for steps in benchmark pipeline ( vllm-project#11710 )
* [Model] Whisper model implementation ( vllm-project#11280 )
Co-authored-by: Aurick Qiao <[email protected]>
* [V1] Simplify Shutdown ( vllm-project#11659 )
* [Bugfix] Fix ColumnParallelLinearWithLoRA slice ( vllm-project#11708 )
Signed-off-by: ZincCat <[email protected]>
* [V1] Improve TP>1 Error Handling + Stack Trace ( vllm-project#11721 )
Co-authored-by: Tyler Michael Smith <[email protected]>
* [Misc]Add BNB quantization for Qwen2VL ( vllm-project#11719 )
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* Update requirements-tpu.txt to support python 3.9 and 3.11 ( vllm-project#11695 )
Signed-off-by: mgoin <[email protected]>
* [V1] Chore: cruft removal ( vllm-project#11724 )
* [V1] log GPU blocks num for MultiprocExecutor ( vllm-project#11656 )
* Update tool_calling.md ( vllm-project#11701 )
* Update bnb.md with example for OpenAI ( vllm-project#11718 )
* [V1] Add `RayExecutor` support for `AsyncLLM` (api server) ( vllm-project#11712 )
* [V1] Add kv cache utils tests. ( vllm-project#11513 )
Signed-off-by: xcnick <[email protected]>
* [Core][Bugfix] Use correct device to initialize GPU data during CUDA-graph-capture ( vllm-project#11233 )
Signed-off-by: Yan Burman <[email protected]>
Signed-off-by: Ido Asraff <[email protected]>
* [VLM] Merged multi-modal processors for LLaVA-NeXT-Video and LLaVA-OneVision ( vllm-project#11717 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix precision error in LLaVA-NeXT ( vllm-project#11735 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model] Remove unnecessary weight initialization logic ( vllm-project#11736 )
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [Bugfix][V1] Fix test_kv_cache_utils.py ( vllm-project#11738 )
Signed-off-by: Jee Jee Li <[email protected]>
* [MISC] Replace c10::optional with std::optional ( vllm-project#11730 )
Signed-off-by: Lu Fang <[email protected]>
* [distributed] remove pynccl's redundant stream ( vllm-project#11744 )
* fix: [doc] fix typo ( vllm-project#11751 )
Co-authored-by: Lancer <[email protected]>
* [Frontend] Improve `StreamingResponse` Exception Handling ( vllm-project#11752 )
* [distributed] remove pynccl's redundant change_state ( vllm-project#11749 )
* [Doc] [1/N] Reorganize Getting Started section ( vllm-project#11645 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Remove block size constraint ( vllm-project#11723 )
* [V1] Add BlockTable class ( vllm-project#11693 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Misc] Fix typo for valid_tool_parses ( vllm-project#11753 )
Signed-off-by: Rui Qiao <[email protected]>
* [V1] Refactor get_executor_cls ( vllm-project#11754 )
* [mypy] Forward pass function type hints in lora ( vllm-project#11740 )
Signed-off-by: lucast2021 <[email protected]>
Co-authored-by: lucast2021 <[email protected]>
* k8s-config: Update the secret to use stringData ( vllm-project#11679 )
Signed-off-by: Suraj Deshmukh <[email protected]>
* [VLM] Separate out profiling-related logic ( vllm-project#11746 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc][2/N] Reorganize Models and Usage sections ( vllm-project#11755 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix max image size for LLaVA-Onevision ( vllm-project#11769 )
Signed-off-by: Roger Wang <[email protected]>
* [doc] explain how to add interleaving sliding window support ( vllm-project#11771 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix][V1] Fix molmo text-only inputs ( vllm-project#11676 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Kernel] Move attn_type to Attention.__init__() ( vllm-project#11690 )
Signed-off-by: Chen Zhang <[email protected]>
* format
* [V1] Extend beyond image modality and support mixed-modality inference with Llava-OneVision ( vllm-project#11685 )
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>
* deepseek overflow fix ( #349 )
* [Bugfix] Fix LLaVA-NeXT feature size precision error (for real) ( vllm-project#11772 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model] Future-proof Qwen2-Audio multi-modal processor ( vllm-project#11776 )
Signed-off-by: DarkLight1337 <[email protected]>
* [XPU] Make pp group initilized for pipeline-parallelism ( vllm-project#11648 )
Signed-off-by: yisheng <[email protected]>
* [Doc][3/N] Reorganize Serving section ( vllm-project#11766 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Kernel][LoRA]Punica prefill kernels fusion ( vllm-project#11234 )
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: Abatom <[email protected]>
Co-authored-by: Zhonghua Deng <[email protected]>
* [Bugfix] Update attention interface in `Whisper` ( vllm-project#11784 )
Signed-off-by: Roger Wang <[email protected]>
* [CI] Fix neuron CI and run offline tests ( vllm-project#11779 )
Signed-off-by: Liangfu Chen <[email protected]>
* fix init error for MessageQueue when n_local_reader is zero ( vllm-project#11768 )
* [Doc] Create a vulnerability management team ( vllm-project#9925 )
Signed-off-by: Russell Bryant <[email protected]>
* [CI][CPU] adding build number to docker image name ( vllm-project#11788 )
Signed-off-by: Yuan Zhou <[email protected]>
* [V1][Doc] Update V1 support for `LLaVa-NeXT-Video` ( vllm-project#11798 )
Signed-off-by: Roger Wang <[email protected]>
* [Bugfix] Comprehensively test and fix LLaVA-NeXT feature size calculation ( vllm-project#11800 )
Signed-off-by: DarkLight1337 <[email protected]>
* [doc] add doc to explain how to use uv ( vllm-project#11773 )
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [V1] Support audio language models on V1 ( vllm-project#11733 )
Signed-off-by: Roger Wang <[email protected]>
* [doc] update how pip can install nightly wheels ( vllm-project#11806 )
Signed-off-by: youkaichao <[email protected]>
* [Doc] Add note to `gte-Qwen2` models ( vllm-project#11808 )
Signed-off-by: DarkLight1337 <[email protected]>
* [optimization] remove python function call for custom op ( vllm-project#11750 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] update the prefix for qwen2 ( vllm-project#11795 )
Co-authored-by: jiadi.jjd <[email protected]>
* [Doc]Add documentation for using EAGLE in vLLM ( vllm-project#11417 )
Signed-off-by: Sourashis Roy <[email protected]>
* [Bugfix] Significant performance drop on CPUs with --num-scheduler-steps > 1 ( vllm-project#11794 )
* [Doc] Group examples into categories ( vllm-project#11782 )
Signed-off-by: Harry Mellor <[email protected]>
* [Bugfix] Fix image input for Pixtral-HF ( vllm-project#11741 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] sort torch profiler table by kernel timing ( vllm-project#11813 )
* Remove the duplicate imports of MultiModalKwargs and PlaceholderRange… ( vllm-project#11824 )
* Fixed docker build for ppc64le ( vllm-project#11518 )
Signed-off-by: Nishidha Panpaliya <[email protected]>
* [OpenVINO] Fixed Docker.openvino build ( vllm-project#11732 )
Signed-off-by: Ilya Lavrenov <[email protected]>
* [Bugfix] Add checks for LoRA and CPU offload ( vllm-project#11810 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Docs] reorganize sponsorship page ( vllm-project#11639 )
Signed-off-by: simon-mo <[email protected]>
* [Bug] Fix pickling of `ModelConfig` when RunAI Model Streamer is used ( vllm-project#11825 )
Signed-off-by: DarkLight1337 <[email protected]>
* [misc] improve memory profiling ( vllm-project#11809 )
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [doc] update wheels url ( vllm-project#11830 )
Signed-off-by: youkaichao <[email protected]>
* [Docs] Update sponsor name: 'Novita' to 'Novita AI' ( vllm-project#11833 )
* [Hardware][Apple] Native support for macOS Apple Silicon ( vllm-project#11696 )
Signed-off-by: Wallas Santos <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [torch.compile] consider relevant code in compilation cache ( vllm-project#11614 )
Signed-off-by: youkaichao <[email protected]>
* [VLM] Reorganize profiling/processing-related code ( vllm-project#11812 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Move examples into categories ( vllm-project#11840 )
Signed-off-by: Harry Mellor <[email protected]>
* [Doc][4/N] Reorganize API Reference ( vllm-project#11843 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI/Build][Bugfix] Fix CPU CI image clean up ( vllm-project#11836 )
Signed-off-by: jiang1.li <[email protected]>
* [Bugfix][XPU] fix silu_and_mul ( vllm-project#11823 )
Signed-off-by: yan ma <[email protected]>
* [Misc] Move some model utils into vision file ( vllm-project#11848 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Expand Multimodal API Reference ( vllm-project#11852 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc]add some explanations for BlockHashType ( vllm-project#11847 )
* [TPU][Quantization] TPU `W8A8` ( vllm-project#11785 )
Co-authored-by: Woosuk Kwon <[email protected]>
* [Kernel][Triton][AMD] Use block size heuristic for avg 2.8x speedup for int8 models ( vllm-project#11698 )
Signed-off-by: Randall Smith <[email protected]>
* [Docs] Add Google Cloud Meetup ( vllm-project#11864 )
* Revert nccl changes ( #351 )
* Revert "[distributed] remove pynccl's redundant change_state ( vllm-project#11749 )"
This reverts commit 9e764e7 .
* Revert "[distributed] remove pynccl's redundant stream ( vllm-project#11744 )"
This reverts commit 635b897 .
* [CI] Turn on basic correctness tests for V1 ( vllm-project#10864 )
* treat do_lower_case in the same way as the sentence-transformers library ( vllm-project#11815 )
Signed-off-by: Max de Bayser <[email protected]>
* [Doc] Recommend uv and python 3.12 for quickstart guide ( vllm-project#11849 )
Signed-off-by: mgoin <[email protected]>
* [Misc] Move `print_*_once` from utils to logger ( vllm-project#11298 )
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Maxime Fournioux <[email protected]>
Co-authored-by: Maxime Fournioux <[email protected]>
* [Doc] Intended links Python multiprocessing library ( vllm-project#11878 )
* [perf]fix current stream ( vllm-project#11870 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Override dunder methods of placeholder modules ( vllm-project#11882 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] fix beam search input errors and latency benchmark script ( vllm-project#11875 )
Signed-off-by: Ye Qi <[email protected]>
Co-authored-by: yeq <[email protected]>
* [Doc] Add model development API Reference ( vllm-project#11884 )
Signed-off-by: DarkLight1337 <[email protected]>
* [platform] Allow platform specify attention backend ( vllm-project#11609 )
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
* [ci]try to fix flaky multi-step tests ( vllm-project#11894 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Provide correct Pixtral-HF chat template ( vllm-project#11891 )
Signed-off-by: DarkLight1337 <[email protected]>
* fp8 support ( #352 )
Co-authored-by: Yida Wu <[email protected]>
* [Docs] Add Modal to deployment frameworks ( vllm-project#11907 )
* [Doc][5/N] Move Community and API Reference to the bottom ( vllm-project#11896 )
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
* [VLM] Enable tokenized inputs for merged multi-modal processor ( vllm-project#11900 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Show default pooling method in a table ( vllm-project#11904 )
Signed-off-by: DarkLight1337 <[email protected]>
* [torch.compile] Hide KV cache behind torch.compile boundary ( vllm-project#11677 )
Signed-off-by: Chen Zhang <[email protected]>
* [Bugfix] Validate lora adapters to avoid crashing server ( vllm-project#11727 )
Signed-off-by: Joe Runde <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
* [BUGFIX] Fix `UnspecifiedPlatform` package name ( vllm-project#11916 )
Signed-off-by: Kunshang Ji <[email protected]>
* [ci] fix gh200 tests ( vllm-project#11919 )
Signed-off-by: youkaichao <[email protected]>
* [misc] remove python function call for custom activation op ( vllm-project#11885 )
Co-authored-by: youkaichao <[email protected]>
* [platform] support pytorch custom op pluggable ( vllm-project#11328 )
Signed-off-by: wangxiyuan <[email protected]>
* Replace "online inference" with "online serving" ( vllm-project#11923 )
Signed-off-by: Harry Mellor <[email protected]>
* [ci] Fix sampler tests ( vllm-project#11922 )
Signed-off-by: youkaichao <[email protected]>
* [Doc] [1/N] Initial guide for merged multi-modal processor ( vllm-project#11925 )
Signed-off-by: DarkLight1337 <[email protected]>
* [platform] support custom torch.compile backend key ( vllm-project#11318 )
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: youkaichao <[email protected]>
* [Doc] Rename offline inference examples ( vllm-project#11927 )
Signed-off-by: Harry Mellor <[email protected]>
* [Docs] Fix docstring in `get_ip` function ( vllm-project#11932 )
Signed-off-by: Kuntai Du <[email protected]>
* Doc fix in `benchmark_long_document_qa_throughput.py` ( vllm-project#11933 )
Signed-off-by: Kuntai Du <[email protected]>
* [Hardware][CPU] Support MOE models on x86 CPU ( vllm-project#11831 )
Signed-off-by: jiang1.li <[email protected]>
* [Misc] Clean up debug code in Deepseek-V3 ( vllm-project#11930 )
Signed-off-by: Isotr0py <[email protected]>
* [Misc] Update benchmark_prefix_caching.py fixed example usage ( vllm-project#11920 )
Signed-off-by: Ren MinMin <[email protected]>
Co-authored-by: Ren MinMin <[email protected]>
* [Bugfix] Check that number of images matches number of <|image|> tokens with mllama ( vllm-project#11939 )
Signed-off-by: Travis Johnson <[email protected]>
* [mypy] Fix mypy warnings in api_server.py ( vllm-project#11941 )
Signed-off-by: Fred Reiss <[email protected]>
* [ci] fix broken distributed-tests-4-gpus ( vllm-project#11937 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix][SpecDecode] Adjust Eagle model architecture to align with intended design ( vllm-project#11672 )
Signed-off-by: Sungjae Lee <[email protected]>
* [Bugfix] fused_experts_impl wrong compute type for float32 ( vllm-project#11921 )
Signed-off-by: shaochangxu.scx <[email protected]>
Co-authored-by: shaochangxu.scx <[email protected]>
* [CI/Build] Move model-specific multi-modal processing tests ( vllm-project#11934 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Basic guide for writing unit tests for new models ( vllm-project#11951 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix RobertaModel loading ( vllm-project#11940 )
Signed-off-by: NickLucche <[email protected]>
* [Model] Add cogagent model support vLLM ( vllm-project#11742 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [V1] Avoid sending text prompt to core engine ( vllm-project#11963 )
Signed-off-by: Roger Wang <[email protected]>
* [CI/Build] Add markdown linter ( vllm-project#11857 )
Signed-off-by: Rafael Vasquez <[email protected]>
* [Model] Initialize support for Deepseek-VL2 models ( vllm-project#11578 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Hardware][CPU] Multi-LoRA implementation for the CPU backend ( vllm-project#11100 )
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [Hardware][TPU] workaround fix for MoE on TPU ( vllm-project#11764 )
* [V1][Core][1/n] Logging and Metrics ( vllm-project#11962 )
Signed-off-by: [email protected] <[email protected]>
* [Model] Support GGUF models newly added in `transformers` 4.46.0 ( vllm-project#9685 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [V1] [2/n] Logging and Metrics - `OutputProcessor` Abstraction ( vllm-project#11973 )
Signed-off-by: [email protected] <[email protected]>
* [MISC] fix typo in kv transfer send recv test ( vllm-project#11983 )
* [Bug] Fix usage of `.transpose()` and `.view()` consecutively. ( vllm-project#11979 )
* [CI][Spec Decode] fix: broken test for EAGLE model ( vllm-project#11972 )
Signed-off-by: Sungjae Lee <[email protected]>
* [Misc] Fix Deepseek V2 fp8 kv-scale remapping ( vllm-project#11947 )
Signed-off-by: Yida Wu <[email protected]>
* [Misc]Minor Changes about Worker ( vllm-project#11555 )
Signed-off-by: Chenguang Li <[email protected]>
* [platform] add ray_device_key ( vllm-project#11948 )
Signed-off-by: youkaichao <[email protected]>
* Fix Max Token ID for Qwen-VL-Chat ( vllm-project#11980 )
Signed-off-by: Alex-Brooks <[email protected]>
* [Kernel] unified_attention for Attention.forward ( vllm-project#11967 )
Signed-off-by: Chen Zhang <[email protected]>
* [Doc][V1] Update model implementation guide for V1 support ( vllm-project#11998 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Doc] Organise installation documentation into categories and tabs ( vllm-project#11935 )
Signed-off-by: Harry Mellor <[email protected]>
* [platform] add device_control env var ( vllm-project#12009 )
Signed-off-by: youkaichao <[email protected]>
* [Platform] Move get_punica_wrapper() function to Platform ( vllm-project#11516 )
Signed-off-by: Shanshan Shen <[email protected]>
* bugfix: Fix signature mismatch in benchmark's `get_tokenizer` function ( vllm-project#11982 )
Signed-off-by: elijah <[email protected]>
* Using list
* Revert "[misc] improve memory profiling ( vllm-project#11809 )"
This reverts commit 889e662 .
* Multi-lingual P3L ( #356 )
* Commiting the *multilingual* P3L test.
* Created a *multi-lingual* P3L test.
* Making ruff happy.
* .
* Added a reference to the language-scripture Confluence table.
* Typo fixing.
* Harmonizing naming.
* Fixing comments in the header.
---------
Co-authored-by: Alexei V. Ivanov <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
* Trying to make scales work with compileable attention
* Docs lint
* linter formatting bug fixes
* inherit config file updates under fused_moe from main branch.
* match tests for the MOE layers with main.
---------
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Jiaxin Shan <[email protected]>
Signed-off-by: lucast2021 <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Sourashis Roy <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: Alex He <[email protected]>
Signed-off-by: ccjincong <[email protected]>
Signed-off-by: Erez Schwartz <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: rajveerb <[email protected]>
Signed-off-by: hjwei <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: KuntaiDu <[email protected]>
Signed-off-by: Liangfu Chen <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: jiang1.li <[email protected]>
Signed-off-by: Matthias Vogler <[email protected]>
Signed-off-by: ApostaC <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: Lu Fang <[email protected]>
Signed-off-by: Kazuhiro Serizawa <[email protected]>
Signed-off-by: Tobias Pitters <[email protected]>
Signed-off-by: Kathy Yu <[email protected]>
Signed-off-by: bjmsong <[email protected]>
Signed-off-by: wchen61 <[email protected]>
Signed-off-by: ZincCat <[email protected]>
Signed-off-by: xcnick <[email protected]>
Signed-off-by: Yan Burman <[email protected]>
Signed-off-by: Ido Asraff <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: Suraj Deshmukh <[email protected]>
Signed-off-by: yisheng <[email protected]>
Signed-off-by: Abatom <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Yuan Zhou <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Nishidha Panpaliya <[email protected]>
Signed-off-by: Ilya Lavrenov <[email protected]>
Signed-off-by: Wallas Santos <[email protected]>
Signed-off-by: yan ma <[email protected]>
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Maxime Fournioux <[email protected]>
Signed-off-by: Ye Qi <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Kunshang Ji <[email protected]>
Signed-off-by: Kuntai Du <[email protected]>
Signed-off-by: Ren MinMin <[email protected]>
Signed-off-by: Travis Johnson <[email protected]>
Signed-off-by: Fred Reiss <[email protected]>
Signed-off-by: Sungjae Lee <[email protected]>
Signed-off-by: shaochangxu.scx <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: Rafael Vasquez <[email protected]>
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: Yida Wu <[email protected]>
Signed-off-by: Chenguang Li <[email protected]>
Signed-off-by: Alex-Brooks <[email protected]>
Signed-off-by: Shanshan Shen <[email protected]>
Signed-off-by: elijah <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Jiaxin Shan <[email protected]>
Co-authored-by: Lucas Tucker <[email protected]>
Co-authored-by: lucast2021 <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: sroy745 <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: HandH1998 <[email protected]>
Co-authored-by: robertgshaw2-neuralmagic <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
Co-authored-by: AlexHe99 <[email protected]>
Co-authored-by: Chen1022 <[email protected]>
Co-authored-by: ErezSC42 <[email protected]>
Co-authored-by: Selali <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: Rajveer Bachkaniwala <[email protected]>
Co-authored-by: hj-wei <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: Liangfu Chen <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: whyiug <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Matthias Vogler <[email protected]>
Co-authored-by: Matthias Vogler <[email protected]>
Co-authored-by: John Giorgi <[email protected]>
Co-authored-by: sakunkun <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Yihua Cheng <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Lu Fang <[email protected]>
Co-authored-by: Kazuhiro Serizawa <[email protected]>
Co-authored-by: Tobias Pitters <[email protected]>
Co-authored-by: Chunyang Wen <[email protected]>
Co-authored-by: Kathy Yu <[email protected]>
Co-authored-by: bjmsong <[email protected]>
Co-authored-by: bjmsong <[email protected]>
Co-authored-by: wchen61 <[email protected]>
Co-authored-by: Nathan Azrak <[email protected]>
Co-authored-by: Sachin Varghese <[email protected]>
Co-authored-by: Aurick Qiao <[email protected]>
Co-authored-by: Aurick Qiao <[email protected]>
Co-authored-by: ZincCat <[email protected]>
Co-authored-by: WangErXiao <[email protected]>
Co-authored-by: Hust_YangXian <[email protected]>
Co-authored-by: Alberto Ferrer <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: xcnick <[email protected]>
Co-authored-by: Yan Burman <[email protected]>
Co-authored-by: cennn <[email protected]>
Co-authored-by: Lancer <[email protected]>
Co-authored-by: Lancer <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Suraj Deshmukh <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Concurrensee <[email protected]>
Co-authored-by: YiSheng5 <[email protected]>
Co-authored-by: Zhonghua Deng <[email protected]>
Co-authored-by: XiaobingZhang <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Yuan <[email protected]>
Co-authored-by: jiangjiadi <[email protected]>
Co-authored-by: jiadi.jjd <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Jie Fu (傅杰) <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: Nishidha <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Wallas Henrique <[email protected]>
Co-authored-by: Yan Ma <[email protected]>
Co-authored-by: rasmith <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: Maxime Fournioux <[email protected]>
Co-authored-by: Guspan Tanadi <[email protected]>
Co-authored-by: Ye (Charlotte) Qi <[email protected]>
Co-authored-by: yeq <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
Co-authored-by: Yida Wu <[email protected]>
Co-authored-by: Charles Frye <[email protected]>
Co-authored-by: minmin <[email protected]>
Co-authored-by: Ren MinMin <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Fred Reiss <[email protected]>
Co-authored-by: Sungjae Lee <[email protected]>
Co-authored-by: shaochangxu <[email protected]>
Co-authored-by: shaochangxu.scx <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: sixgod <[email protected]>
Co-authored-by: Rafael Vasquez <[email protected]>
Co-authored-by: Akshat Tripathi <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Avshalom Manevich <[email protected]>
Co-authored-by: Yangcheng Li <[email protected]>
Co-authored-by: Siyuan Li <[email protected]>
Co-authored-by: Chenguang Li <[email protected]>
Co-authored-by: Alex Brooks <[email protected]>
Co-authored-by: Shanshan Shen <[email protected]>
Co-authored-by: elijah <[email protected]>
Co-authored-by: Alexei-V-Ivanov-AMD <[email protected]>
Co-authored-by: Alexei V. Ivanov <[email protected]>
Co-authored-by: vllmellm <[email protected]> mzusman pushed a commit
to mzusman/vllm
that referenced
this pull request Mar 12, 2025 [V1][Minor] Optimize token_ids_cpu copy ( vllm-project#11692 ) … b6d0272 Signed-off-by: Woosuk Kwon <[email protected]>
|
2025-09-07 17:47:18
|
f26c4aeecba481ce1445be7a998b0b97460a13bb
|
https://github.com/vllm-project/vllm/pull/11275
| false | false | false | true |
TEST: test, CI, CI
|
Copy link Collaborator ruisearch42 commented Dec 18, 2024 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . This PR optimizes ray worker initialization time. In the current code base, ray.get(worker.get_node_ip.remote()) is called for each worker right after we get its handle, and it takes ~3s. This call is expensive because when RayWorkerWrapper.remote() just returns, we get an actor handle, but the actor itself may not be fully initialized yet. At this time, any method call on the actor would need to wait for actor initialization to happen, which can take some time (~3s in this case). And since we are calling ray.get(worker.get_node_ip.remote()) in a serialized manner for each newly created actor handle, this time adds up. For example, when we have TP=4, this would take ~12 seconds. We optimize this by making ray.get(worker.get_node_ip.remote()) calls on all the actor handles after they are created. And since these run in parallel, the total time taken is ~3s. So for TP = 4, this reduces ~9 seconds. I tested the following command: python3 benchmarks/benchmark_latency.py --model meta-llama/Llama-3.1-8B-Instruct --tensor-parallel-size 4 --num-iters-warmup 5 --num-iters 20 --batch-size 8 --input-len 128 --output-len 256 --max-model-len 2048 --no-enable-prefix-caching --distributed-executor-backend ray Without this PR, _init_workers_ray takes ~18 seconds. And with it, it takes ~9 seconds. FIX #10283 Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 jjyao reacted with thumbs up emoji All reactions 👍 1 reaction Copy link github-actions bot commented Dec 18, 2024 👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these: Add ready label to the PR Enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . ruisearch42 assigned comaniac Dec 18, 2024 comaniac approved these changes Dec 18, 2024 View reviewed changes Copy link Collaborator comaniac left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment LGTM Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/executor/ray_gpu_executor.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . ruisearch42 force-pushed the opt_ray_worker_init branch
from dfa2cb8 to 0f453a7 Compare December 18, 2024 01:54 ruisearch42 added
the ready ONLY add when PR is ready to merge/full CI is needed label Dec 18, 2024 ruisearch42 and others added 3 commits December 18, 2024 16:22 [Misc] Optimize ray worker initialization time … 30c4374 Signed-off-by: Rui Qiao <[email protected]> up … 294e710 Signed-off-by: Rui Qiao <[email protected]> Update vllm/executor/ray_gpu_executor.py … 8254b41 Co-authored-by: Cody Yu <[email protected]>
Signed-off-by: Rui Qiao <[email protected]> ruisearch42 force-pushed the opt_ray_worker_init branch
from 0f453a7 to 8254b41 Compare December 18, 2024 16:22 comaniac enabled auto-merge (squash) December 18, 2024 16:28 up … 918f192 Signed-off-by: Rui Qiao <[email protected]> auto-merge was automatically disabled December 18, 2024 16:32 Head branch was pushed to by a user without write access youkaichao approved these changes Dec 19, 2024 View reviewed changes Copy link Member youkaichao left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment thanks for the fix! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 ruisearch42 reacted with thumbs up emoji All reactions 👍 1 reaction Hide details View details youkaichao merged commit f26c4ae into vllm-project : main Dec 19, 2024 54 checks passed Uh oh! There was an error while loading. Please reload this page . youkaichao reviewed Dec 19, 2024 View reviewed changes vllm/executor/ray_gpu_executor.py @@ -179,7 +188,7 @@ def sort_by_driver_then_worker_ip(worker): 3. Finally, if the work is on a node with smaller IP address, it should be placed first. """ ip = ray.get( worker .get_node_ip.remote()) ip = worker_to_ip[ worker ] Copy link Member youkaichao Dec 19, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment @ruisearch42 this one looks concerning to me. we should change the tuple to sort, instead of using worker as the key. see the code from #11256 Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author ruisearch42 Dec 19, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I see. Can you elaborate a bit on the concern? The pattern of using an external dict for sorting is not uncommon. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Member youkaichao Dec 20, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment using an arbitrary python object as a key introduces quite unpredictable behavior and can have silent bugs. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Member youkaichao Dec 20, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment it's not about using an external dict, it's about using the worker object as a dict key, which implicitly calls its __hash__ function. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author ruisearch42 Dec 21, 2024 There was a problem hiding this comment. 
Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I think the default behavior without a custom __hash__ function is to use the object's identity (memory address) as __hash__ and __eq__ , so it's pretty safe unless there is some non-standard user overridden __hash__ and __eq__ ? I think your implementation also makes sense. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions ruisearch42 mentioned this pull request Dec 20, 2024 [Bug]: extremely slow launching time possibly due to calling ray.init() again after it has already been called when launching vllm through ray cluster #11208 Closed 1 task mzusman pushed a commit
to mzusman/vllm
that referenced
this pull request Mar 12, 2025 [Misc] Optimize ray worker initialization time ( vllm-project#11275 ) … 073196d Signed-off-by: Rui Qiao <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
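To make the batching optimization above concrete, here is a minimal sketch of the before/after pattern, assuming a Ray actor class with a get_node_ip method; the executor plumbing and names are simplified for illustration and are not the actual vLLM code:

```python
import ray

def init_workers_serial(worker_cls, num_workers: int):
    # Old pattern: each ray.get blocks until that actor finishes
    # initializing (~3s), so the total cost grows with TP size.
    workers, ips = [], []
    for _ in range(num_workers):
        worker = worker_cls.remote()
        ips.append(ray.get(worker.get_node_ip.remote()))
        workers.append(worker)
    return workers, ips

def init_workers_batched(worker_cls, num_workers: int):
    # New pattern: create all actor handles first, then issue every
    # get_node_ip RPC and wait once. Actor initializations overlap,
    # so the wall-clock cost is roughly a single initialization.
    workers = [worker_cls.remote() for _ in range(num_workers)]
    ip_refs = [worker.get_node_ip.remote() for worker in workers]
    worker_to_ip = dict(zip(workers, ray.get(ip_refs)))
    return workers, worker_to_ip
```

As the review thread notes, the subsequent driver-first sort is safer when driven by an explicit tuple key rather than by using the worker object itself as a dict key, since the latter relies on the default identity-based __hash__.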
|
2025-09-07 17:47:21
|
25ebed2f8ca6d747d63f2be9ede023c561851ac8
|
https://github.com/vllm-project/vllm/pull/11214
| false | false | false | true |
TEST: test, CI, CI
|
Copy link Collaborator WoosukKwon commented Dec 15, 2024 No description provided. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions [V1][Minor] Cache np arange to reduce input preparation overhead … 0e1d13d Signed-off-by: Woosuk Kwon <[email protected]> WoosukKwon added
the ready ONLY add when PR is ready to merge/full CI is needed label Dec 15, 2024 WoosukKwon requested review from robertgshaw2-redhat , njhill , ywang96 , comaniac and alexm-redhat as code owners December 15, 2024 18:57 Copy link github-actions bot commented Dec 15, 2024 👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these: Add ready label to the PR Enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Hide details View details WoosukKwon merged commit 25ebed2 into main Dec 15, 2024 66 checks passed Uh oh! There was an error while loading. Please reload this page . WoosukKwon deleted the v1-arange branch December 15, 2024 21:33 Sign up for free to join this conversation on GitHub .
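The change is described only by its title, but the usual form of this optimization is to allocate one index array up front and slice it per step instead of calling np.arange on every scheduling iteration. A minimal sketch under that assumption; the class and method names are illustrative, not the actual runner code:

```python
import numpy as np

class InputPrep:
    def __init__(self, max_num_tokens: int):
        # Built once at startup; slices of it are reused every step.
        self.arange_np = np.arange(max_num_tokens, dtype=np.int64)

    def token_positions(self, num_tokens: int) -> np.ndarray:
        # Slicing returns a view into the cached buffer, so the per-step
        # cost is constant instead of a fresh allocation.
        return self.arange_np[:num_tokens]
```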
|
2025-09-07 17:47:24
|
886936837ca89e5645bc1f71cc0e1492b65b1590
|
https://github.com/vllm-project/vllm/pull/7209
| false | true | false | true |
PERF: TTFT, TTFT, TTFT | TEST: test, test, test
|
Copy link Contributor llsj14 commented Aug 6, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . FIX #6923 Summary I discovered that the eviction logic with the OrderedDict free_table in Evictor V1 and V2 slows down overall performance (especially TTFT ) when using prefix caching mode. In some scenarios, utilizing prefix caching mode makes the system slower compared to when prefix caching is not used. The evict function is frequently called when allocating a new block, as no block is evicted until the block space is full in prefix caching mode. The eviction logic was slow because free_table is declared as an OrderedDict, which is a linked list, and it tries to find a block with content hash (Evictor V1) or block ID (Evictor V2) in this free_table. Utilizing a priority queue and lazy deletion helps find the block faster. Result Verification As shown in the following output, the block ID and content hash had the same value between the as-is and to-be states (which is expected). With this change, I could make the duration of the evict function much faster. ===============================
evicted_block_id compare: 12010 12010
content_hash_compare: -7334740008364413937 -7334740008364413937
as-is evict duration: 7.0807114243507385 ms
to-be evict duration: 0.012848526239395142 ms
===============================
evicted_block_id compare: 12038 12038
content_hash_compare: -7008894356950570757 -7008894356950570757
as-is evict duration: 7.1028973907232285 ms
to-be evict duration: 0.008581206202507019 ms
=============================== Performance I checked the TTFT performance using llmperf and the Llama3-8B model with an A100 GPU. I benchmarked with 1536 input token length (512 same prefix + 1024 random input) and 512 output token length. By applying this commit, I can make the system faster while utilizing prefix caching. The speed-up metric is calculated based on the performance without prefix caching mode. as-is Model Num Clients Block Manager Prefix Caching TTFT (mean) Speed Up Llama3-8B 16 v2 X 841 ms Llama3-8B 32 v2 X 1441 ms Llama3-8B 64 v2 X 2619 ms Llama3-8B 128 v2 X 4729 ms Llama3-8B 16 v2 O 1962 ms 0.43 (slowed down) Llama3-8B 32 v2 O 8382 ms 0.17 (slowed down) Llama3-8B 64 v2 O 12665 ms 0.21 (slowed down) Llama3-8B 128 v2 O 22439 ms 0.21 (slowed down) to-be Model Num Clients Block Manager Prefix Caching TTFT (mean) Speed Up Llama3-8B 16 v2 O 541 ms 1.55 Llama3-8B 32 v2 O 901 ms 1.60 Llama3-8B 64 v2 O 1563 ms 1.68 Llama3-8B 128 v2 O 2947 ms 1.60 Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 4 robertgshaw2-redhat, appleeji, jeongin601, and MonadKai reacted with thumbs up emoji 🎉 2 jeongin601 and nickandbro reacted with hooray emoji All reactions 👍 4 reactions 🎉 2 reactions Copy link github-actions bot commented Aug 6, 2024 👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which consists a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of default ones by unblocking the steps in your fast-check build on Buildkite UI. Once the PR is approved and ready to go, please make sure to run full CI as it is required to merge (or just use auto-merge). To run full CI, you can do one of these: Comment /ready on the PR Add ready label to the PR Enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Member youkaichao commented Aug 6, 2024 thanks for the contribution! cc @alexm-neuralmagic @cadedaniel for block manager related optimization. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Yard1 reviewed Aug 6, 2024 View reviewed changes vllm/core/evictor_v2.py Outdated def update(self, block_id: int, last_accessed: float): self.free_table[block_id].last_accessed = last_accessed def _cleanup_if_necessary(self): if len(self.priority_queue) > 50 * len(self.free_table): Copy link Collaborator Yard1 Aug 6, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment that 50 constant should be a defined global. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 llsj14 reacted with thumbs up emoji All reactions 👍 1 reaction Copy link Contributor Author llsj14 Aug 7, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment @Yard1 , thank you for your comments. I have fixed the issue and rebased my code. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 
All reactions Copy link Collaborator Yard1 commented Aug 6, 2024 FYI this PR seems to be optimizing the same path #7193 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator cadedaniel commented Aug 6, 2024 At high level these fixes look great, will need evictor folks to review with more detail (sorry for second ping @robertgshaw2-neuralmagic ) ❤️ 1 llsj14 reacted with heart emoji All reactions ❤️ 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator robertgshaw2-redhat commented Aug 7, 2024 At high level these fixes look great, will need evictor folks to review with more detail (sorry for second ping @robertgshaw2-neuralmagic ) Thanks, Alex is going to take a look from out side, since he most recently has been in this codepath optimizing BMv2 ❤️ 2 cadedaniel and llsj14 reacted with heart emoji All reactions ❤️ 2 reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . llsj14 force-pushed the feat/optimize-evict branch
from 8071838 to 95495a7 Compare August 7, 2024 00:05 alexm-redhat reviewed Aug 7, 2024 View reviewed changes Copy link Collaborator alexm-redhat left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Thanks for revealing this bottleneck and fixing it! It is a good idea to use a heap + dict to quickly access an LRU item. Left some minor comments. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 llsj14 reacted with thumbs up emoji All reactions 👍 1 reaction vllm/core/evictor_v2.py Outdated def add(self, block_id: int, content_hash: int, num_hashed_tokens: int, last_accessed: float): self.free_table[block_id] = BlockMetaData(content_hash, num_hashed_tokens, last_accessed) heapq.heappush( self.priority_queue, (last_accessed, -num_hashed_tokens, content_hash, block_id)) Copy link Collaborator alexm-redhat Aug 7, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Nice trick with the -num_hashed_tokens to provide heap sorting. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/core/evictor_v2.py Outdated heapq.heappush( self.priority_queue, (last_accessed, -num_hashed_tokens, content_hash, block_id)) self._cleanup_if_necessary() Copy link Collaborator alexm-redhat Aug 7, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Why it was necessary to delay the cleanup? Did you find it to be too slow? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 llsj14 reacted with thumbs up emoji All reactions 👍 1 reaction Copy link Contributor Author llsj14 Aug 7, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment The reason I applied lazy deletion and event triggered cleanup is that searching specific block and deleting outdated blocks from the heap is O(log n) . Thus, I skip and pop outdated blocks by checking the free_table in eviction operation, and only clean up the priority queue when it consumes too much memory with outdated blocks. Since cleanup itself is O(n log n) , calling the cleanup function every time would make the system too slow. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Contributor Author llsj14 Aug 7, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . 
Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment The ideal scenario is when the cleanup function is not needed, as outdated blocks are naturally popped out during the eviction operation. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Contributor Author llsj14 Aug 7, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment @alexm-neuralmagic, thanks to your comment, I fixed the data type mistake and optimized the performance of the cleanup operation. I used only the free_table and heapify to create a new priority queue, achieving O(n) complexity. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/core/evictor_v2.py Outdated @@ -76,7 +79,8 @@ class LRUEvictor(Evictor): """ def __init__(self): self.free_table: OrderedDict [int, BlockMetaData] = OrderedDict() self.free_table: Dict [int, BlockMetaData] = {} Copy link Collaborator alexm-redhat Aug 7, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Dict is definitely faster here Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/core/evictor_v2.py Outdated from typing import OrderedDict, Tuple from typing import Dict, List, Tuple CLEANUP_THRESHOLD = 50 Copy link Collaborator alexm-redhat Aug 7, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I would make this a static class member, since it is used only inside the scope of the class below. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 llsj14 reacted with thumbs up emoji All reactions 👍 1 reaction Copy link Contributor Author llsj14 Aug 7, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Thank you, I fixed this Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator alexm-redhat commented Aug 7, 2024 btw, I would rename the topic of the PR to "[Performance] ....", since it is not a bugfix All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . llsj14 changed the title [Bugfix][Core] Optimize the performance of evictor v1 and v2 by applying a priority queue and lazy deletion [Performance][Core] Optimize the performance of evictor v1 and v2 by applying a priority queue and lazy deletion Aug 7, 2024 Copy link Contributor Author llsj14 commented Aug 9, 2024 /ready All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . github-actions bot added
the ready ONLY add when PR is ready to merge/full CI is needed label Aug 9, 2024 llsj14 force-pushed the feat/optimize-evict branch
from fd520b2 to 273da1d Compare August 26, 2024 02:41 Copy link Contributor Author llsj14 commented Aug 26, 2024 I rebased codes to resolve the conflict All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . simon-mo requested review from zhuohan123 , youkaichao , comaniac and njhill as code owners November 26, 2024 05:49 Copy link mergify bot commented Nov 26, 2024 This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @llsj14 . https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the needs-rebase label Nov 26, 2024 llsj14 force-pushed the feat/optimize-evict branch
from 273da1d to 5d2bbcc Compare November 29, 2024 03:55 mergify bot removed
the needs-rebase label Nov 29, 2024 llsj14 force-pushed the feat/optimize-evict branch
from 5d2bbcc to a7ee9c4 Compare November 29, 2024 04:24 Copy link Contributor Author llsj14 commented Nov 29, 2024 @alexm-neuralmagic @Yard1 I rebased and tested my code again. I would appreciate your reviews. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . llsj14 force-pushed the feat/optimize-evict branch
from e5eb212 to 7e6b71c Compare December 11, 2024 14:56 Copy link Contributor Author llsj14 commented Dec 11, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . In my local test, the test_eviction_alloc_mixed sometimes passes and sometimes fails. tests/core/block/test_prefix_caching_block.py ................. [ 6%]
............................................................... [ 29%]
............................................................... [ 53%]
............................................................... [ 76%]
............................................................... [100%]
=================== 269 passed, 2 warnings in 6.49s =================== I believe the assertion in this part is not strictly necessary, because all blocks can be candidates for eviction if they have same last accessed time. The key difference is that the previous code search blocks from the beginning of the free table, while my implementation does not. @leiwen83 @cadedaniel @comaniac Could you check whether it would be fine to remove the assertion mentioned above and review my PR please? -> I just changed my code to make the test pass. I prioritized the block_id to select the earlier one under the same conditions. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . llsj14 commented Dec 13, 2024 View reviewed changes vllm/core/evictor.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . llsj14 force-pushed the feat/optimize-evict branch
from e82e821 to 0038286 Compare December 13, 2024 09:13 Copy link Contributor Author llsj14 commented Dec 13, 2024 @comaniac Could you review this PR, please? This PR was previously reviewed, and I have been testing its stability by running it locally for several months. It has also successfully passed unit tests and CI checks. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . comaniac reviewed Dec 13, 2024 View reviewed changes vllm/core/evictor.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/core/evictor.py Outdated Comment on lines 92 to 106 while self.priority_queue: # Lazy deletion algorithm is applied. last_accessed, _, block_id, content_hash = heapq.heappop( self.priority_queue) if (block_id in self.free_table and self.free_table[block_id].last_accessed == last_accessed): self.free_table.pop(block_id) return block_id, content_hash Copy link Collaborator comaniac Dec 13, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I'm a bit worry about this lazy deletion algorithm as it is pretty hard to understand for others and easy to introduce bugs in corner cases. Here are some possible questions people may ask by reading this code: How a block in the heap not in the free table? A related question is why we need to cleanup the heap. How a block in the heap and the free table could have different last access time? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 llsj14 reacted with thumbs up emoji All reactions 👍 1 reaction Copy link Contributor Author llsj14 Dec 14, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment @comaniac Thank you for the valuable feedback. I've added comments regarding the lazy deletion process. I understand your concerns about the lazy deletion algorithm, as it shows O(n log n) time complexity when triggered. However, since outdated entries are also removed through heap pops, I believe cleanup is not an operation that happens frequently. In fact, I also considered using doubly linked list and dictionary for this optimization. While these structures are generally O(1), I think that if the key value changes(like num_hashed_tokens in this code) from being solely based on the last accessed time (which always increases), adding entries could then take O(n) time (to make doubly linked list sorted). That’s why I opted for a priority queue... Nevertheless, I acknowledge the concerns about lazy deletion holding outdated entries. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator comaniac Dec 14, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Yes I used doubly linked list in v1 prefix caching and it works well, but it would be tedious for v0. Sorry, something went wrong. Uh oh! 
There was an error while loading. Please reload this page . 👍 1 llsj14 reacted with thumbs up emoji All reactions 👍 1 reaction Copy link Contributor Author llsj14 Dec 14, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Oh I see. I'll check the v1 implementation later as well. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions comaniac approved these changes Dec 14, 2024 View reviewed changes Copy link Collaborator comaniac left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Otherwise LGTM Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/core/evictor.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . llsj14 and others added 14 commits December 14, 2024 01:59 feat: optimize evictor v2 performance using priority queue and lazy d… … 6a28606 …eletion
Signed-off-by: Sungjae Lee <[email protected]> refactor: make format … 461c8fd Signed-off-by: Sungjae Lee <[email protected]> refactor: use global defined variable for cleanup threshold … ad9bf4a Signed-off-by: Sungjae Lee <[email protected]> refactor: make CLEAN_THRESHOLD as a static class member … a1ef9ec Signed-off-by: Sungjae Lee <[email protected]> refactor: make format … c505a93 Signed-off-by: Sungjae Lee <[email protected]> fix: optimize priority queue cleanup operation … 02e92f7 Signed-off-by: Sungjae Lee <[email protected]> trigger test … 76e4665 Signed-off-by: Sungjae Lee <[email protected]> prioritize block_id in priority queue … 840612a Signed-off-by: Sungjae Lee <[email protected]> make format … add810e Signed-off-by: Sungjae Lee <[email protected]> retrigger test … 1c8c2b8 Signed-off-by: Sungjae Lee <[email protected]> add comment … e1d7d7a Signed-off-by: Sungjae Lee <[email protected]> make format … 0d554e4 Signed-off-by: Sungjae Lee <[email protected]> update comments … b923060 Co-authored-by: Cody Yu <[email protected]>
Signed-off-by: Sungjae Lee <[email protected]> make format … 46798ad Signed-off-by: Sungjae Lee <[email protected]> llsj14 force-pushed the feat/optimize-evict branch
from dd3165c to 46798ad Compare December 14, 2024 01:59 Hide details View details comaniac merged commit 8869368 into vllm-project : main Dec 14, 2024 51 checks passed Uh oh! There was an error while loading. Please reload this page . xiangyuT mentioned this pull request Dec 24, 2024 Refine evictor based on #7209 analytics-zoo/vllm#70 Merged PeaBrane mentioned this pull request May 11, 2025 feat: vllm mock workers, Rusty skeleton ai-dynamo/dynamo#1033 Merged Sign up for free to join this conversation on GitHub .
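Putting the pieces quoted above together (a plain dict as the free table, a heap keyed by access time and negative hashed-token count, lazy deletion of stale entries, and a threshold-triggered rebuild), the approach can be condensed into the following sketch. It is an illustration of the technique, not the exact vLLM source:

```python
import heapq
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class BlockMetaData:
    content_hash: int
    num_hashed_tokens: int
    last_accessed: float

class LRUEvictorSketch:
    # Rebuild the heap once it holds far more entries than live blocks.
    CLEANUP_THRESHOLD = 50

    def __init__(self) -> None:
        self.free_table: Dict[int, BlockMetaData] = {}
        self.priority_queue: List[Tuple[float, int, int, int]] = []

    def add(self, block_id: int, content_hash: int,
            num_hashed_tokens: int, last_accessed: float) -> None:
        self.free_table[block_id] = BlockMetaData(
            content_hash, num_hashed_tokens, last_accessed)
        # Oldest access is evicted first; ties prefer blocks with more
        # hashed tokens (hence the negated count).
        heapq.heappush(self.priority_queue,
                       (last_accessed, -num_hashed_tokens,
                        block_id, content_hash))
        self._cleanup_if_necessary()

    def remove(self, block_id: int) -> None:
        # Lazy deletion: only drop the table entry; the matching heap entry
        # becomes stale and is skipped when it is eventually popped.
        del self.free_table[block_id]

    def evict(self) -> Tuple[int, int]:
        while self.priority_queue:
            last_accessed, _, block_id, content_hash = heapq.heappop(
                self.priority_queue)
            meta = self.free_table.get(block_id)
            if meta is not None and meta.last_accessed == last_accessed:
                del self.free_table[block_id]
                return block_id, content_hash
        raise ValueError("No usable cache memory left")

    def _cleanup_if_necessary(self) -> None:
        if len(self.priority_queue) > (
                self.CLEANUP_THRESHOLD * len(self.free_table)):
            # O(n) rebuild from the live table via heapify, instead of
            # paying for in-heap deletions on every remove().
            self.priority_queue = [
                (m.last_accessed, -m.num_hashed_tokens, bid, m.content_hash)
                for bid, m in self.free_table.items()
            ]
            heapq.heapify(self.priority_queue)
```

This keeps remove() at dictionary cost and pushes the occasional O(n) cleanup to the rare case where stale heap entries accumulate faster than they are popped.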
|
2025-09-07 17:47:28
|
f092153fbe349a9a1742940e3703bfcff6aa0a6d
|
https://github.com/vllm-project/vllm/pull/11111
| false | true | false | true |
PERF: latency | TEST: test, CI, CI
|
Copy link Collaborator WoosukKwon commented Dec 11, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . This PR simplifies the input preparation code further while optimizing it by utilizing more persistent buffers. Creating new tensors can introduce considerable overhead for small-batch inputs, so persistent buffers effectively reduce latency. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions tmp … 73a8b20 Signed-off-by: Woosuk Kwon <[email protected]> Copy link github-actions bot commented Dec 11, 2024 👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these: Add ready label to the PR Enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . WoosukKwon added 2 commits December 11, 2024 11:13 comment … dbac8f5 Signed-off-by: Woosuk Kwon <[email protected]> comment … 734a7b7 Signed-off-by: Woosuk Kwon <[email protected]> WoosukKwon marked this pull request as ready for review December 11, 2024 19:15 WoosukKwon requested review from robertgshaw2-redhat , njhill , ywang96 , comaniac and alexm-redhat as code owners December 11, 2024 19:15 WoosukKwon added
the ready ONLY add when PR is ready to merge/full CI is needed label Dec 11, 2024 Copy link Collaborator alexm-redhat commented Dec 11, 2024 Nice idea! All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . alexm-redhat approved these changes Dec 11, 2024 View reviewed changes vllm/v1/worker/gpu_model_runner.py dtype=torch.int32, device="cpu", pin_memory=self.pin_memory) self.slot_mapping_np = self.slot_mapping_cpu.numpy() Copy link Collaborator alexm-redhat Dec 11, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Does the resulting numpy here shares the memory buffer of the source tensor? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author WoosukKwon Dec 12, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment The returned ndarray and the tensor will share their storage, so changes to the tensor will be reflected in the ndarray and vice versa. Yes. That's the trick here :) Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Hide details View details WoosukKwon merged commit f092153 into main Dec 12, 2024 65 checks passed Uh oh! There was an error while loading. Please reload this page . WoosukKwon deleted the v1-opt-prep branch December 12, 2024 07:14 markmc mentioned this pull request Dec 12, 2024 Enable mypy checking on V1 code #11105 Merged sleepwalker2017 pushed a commit
to sleepwalker2017/vllm
that referenced
this pull request Dec 13, 2024 [V1] Use more persistent buffers to optimize input preparation overhe… … 2e703c8 …ads ( vllm-project#11111 )
Signed-off-by: Woosuk Kwon <[email protected]> Sign up for free to join this conversation on GitHub .
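The diff and the review exchange above describe the whole trick: allocate the CPU-side buffer once (optionally pinned), keep a NumPy view that shares its storage, fill it in place each step, and copy only the used prefix to the GPU. A small illustrative sketch; the buffer size, helper function, and copy call are made up for illustration and are not the actual gpu_model_runner code:

```python
import torch

max_num_tokens = 8192
pin_memory = torch.cuda.is_available()

# Allocated once at startup and reused on every step.
slot_mapping_cpu = torch.zeros(max_num_tokens,
                               dtype=torch.int32,
                               device="cpu",
                               pin_memory=pin_memory)
# .numpy() on a CPU tensor returns a view over the same storage, so cheap
# NumPy writes below are immediately visible to the tensor (and vice versa).
slot_mapping_np = slot_mapping_cpu.numpy()

def prepare_step(slots, device="cuda"):
    num_tokens = len(slots)
    # Fill the persistent buffer in place instead of building a new tensor.
    slot_mapping_np[:num_tokens] = slots
    # Copy only the used prefix; non_blocking is effective because the
    # source buffer is pinned.
    return slot_mapping_cpu[:num_tokens].to(device, non_blocking=True)
```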
|
2025-09-07 17:47:32
|
3b61cb450d899dc423feb264c297d4d18d701678
|
https://github.com/vllm-project/vllm/pull/10989
| false | false | false | true |
TEST: test, CI, CI
|
Copy link Collaborator WoosukKwon commented Dec 8, 2024 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . This PR reduces the CPU ops in V1 flash-attn: two slice ops for key and value by slightly modifying the reshape_and_cache_flash op. Also, it uses kv_cache.unbind(0) instead of kv_cache[0] and kv_cache[1] , to reduce the number of ops. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👀 1 tlrmchlsmth reacted with eyes emoji All reactions 👀 1 reaction WoosukKwon added 6 commits December 4, 2024 21:06 tmp … d34c4a8 Signed-off-by: Woosuk Kwon <[email protected]> minor … 14e2f77 Signed-off-by: Woosuk Kwon <[email protected]> fix … fc025ec Signed-off-by: Woosuk Kwon <[email protected]> Merge branch 'main' into v1-cache-opt 001ad42 minor … 194fa9e Signed-off-by: Woosuk Kwon <[email protected]> comment … 269901d Signed-off-by: Woosuk Kwon <[email protected]> WoosukKwon requested review from robertgshaw2-redhat , njhill , ywang96 , comaniac and alexm-redhat as code owners December 8, 2024 11:02 Copy link github-actions bot commented Dec 8, 2024 👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these: Add ready label to the PR Enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . WoosukKwon added
the ready ONLY add when PR is ready to merge/full CI is needed label Dec 8, 2024 Hide details View details WoosukKwon merged commit 3b61cb4 into main Dec 9, 2024 90 checks passed Uh oh! There was an error while loading. Please reload this page . WoosukKwon deleted the v1-cache-opt branch December 9, 2024 20:38 sleepwalker2017 pushed a commit
to sleepwalker2017/vllm
that referenced
this pull request Dec 13, 2024 [V1] Further reduce CPU overheads in flash-attn ( vllm-project#10989 ) … 0ad90dd Signed-off-by: Woosuk Kwon <[email protected]>
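A toy illustration of the unbind change described above; the cache shape is made up, and only the op pattern matters:

```python
import torch

# Made-up layout: [2, num_blocks, block_size, num_kv_heads, head_size]
kv_cache = torch.zeros(2, 16, 16, 8, 64)

# Two separate indexing ops, each dispatched on the CPU side.
key_cache = kv_cache[0]
value_cache = kv_cache[1]

# A single op that yields views for both tensors at once.
key_cache, value_cache = kv_cache.unbind(0)
assert key_cache.data_ptr() == kv_cache.data_ptr()  # still a view, no copy
```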
|
2025-09-07 17:47:34
|
9323a3153b20d4a2ca7ac04a2784609d6ce656e0
|
https://github.com/vllm-project/vllm/pull/10785
| false | true | true | true |
PERF: TTFT, Throughput, Throughput | SERVING: frontend, frontend | TEST: test, test, test
|
Copy link Collaborator aarnphm commented Nov 29, 2024 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . Add initial support for XGrammar for V0 and makes it the default for grammar and json usage. Written in collaboration with @mgoin I'm using the benchmark scripts from #10557 Results for using XGrammar as backend: Throughput: 0.94 requests/s, 1022.46 total tokens/s, 480.27 output tokens/s Correct rate is 100.0 %
First token latency(msecs):
count 10.000000
mean 4552.206317
std 734.671745
min 3289.774953
25% 3864.269087
50% 5102.686635
75% 5102.717258
max 5114.346570
dtype: float64
Next token latency(msecs):
count 10.000000
mean 11.906452
std 1.409063
min 10.831970
25% 10.837367
50% 10.854235
75% 13.227200
max 14.325024
dtype: float64 Comparing to outlines Throughput: 0.22 requests/s, 241.22 total tokens/s, 113.31 output tokens/s Correct rate is 100.0 %
First token latency(msecs):
count 10.000000
mean 38533.083248
std 35.807892
min 38491.813741
25% 38491.826321
50% 38556.601226
75% 38556.628519
max 38568.547848
dtype: float64
Next token latency(msecs):
count 10.000000
mean 12.955556
std 0.042220
min 12.901755
25% 12.914099
50% 12.953058
75% 12.996646
max 13.003127
dtype: float64 NOTE: Running on A100 80GB, with Llama 3.2 3B with chunked prefill enable and JSON grammar Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 9 zhouyuan, choisioo, xuechendi, ywang96, nickandbro, JakubCerven, saattrupdan, hongqing1986, and suc16 reacted with thumbs up emoji All reactions 👍 9 reactions aarnphm added 3 commits November 29, 2024 22:53 --wip-- … 41c0031 Signed-off-by: Aaron Pham <[email protected]> fix: update workaround for pickling … c17da0b Signed-off-by: Aaron Pham <[email protected]> hack: hmm it is a tuple … b29dfb3 Signed-off-by: Aaron Pham <[email protected]> aarnphm requested review from zhuohan123 , youkaichao , alexm-redhat , comaniac and njhill as code owners November 29, 2024 23:45 Copy link github-actions bot commented Nov 29, 2024 👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these: Add ready label to the PR Enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added documentation Improvements or additions to documentation ci/build labels Nov 29, 2024 revert: bad merge … 1be065b Signed-off-by: Aaron Pham <[email protected]> aarnphm marked this pull request as draft November 29, 2024 23:46 aarnphm added 2 commits November 30, 2024 00:08 fix: correct use apply_token_bitmask interface … ee8e796 Signed-off-by: Aaron Pham <[email protected]> fix: correctness for prefill … cef4201 Signed-off-by: Aaron Pham <[email protected]> aarnphm marked this pull request as ready for review November 30, 2024 00:16 aarnphm added 3 commits November 30, 2024 00:23 fix: lint error … 919e5f8 Signed-off-by: Aaron Pham <[email protected]> fix: annotations … 4d6585b Signed-off-by: Aaron Pham <[email protected]> fix: format … 5d2a43c Signed-off-by: Aaron Pham <[email protected]> Ubospica reviewed Nov 30, 2024 View reviewed changes Copy link Ubospica left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Thanks for your contribution to integrating XGrammar into vLLM! It overall looks good, but there are some minor points to enhance parallelism. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/model_executor/guided_decoding/__init__.py Outdated guided_params: GuidedDecodingParams, tokenizer ) -> Optional[ LogitsProcessor ] : guided_params: GuidedDecodingParams, tokenizer: PreTrainedTokenizer, model_config: ModelConfig ) -> LogitsProcessor | None : # CFG grammar not supported by LMFE, so we use outlines instead if guided_params.backend == 'outlines' or guided_params.grammar: Copy link Ubospica Nov 30, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . 
There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment XGrammar can also do grammar decoding and accelerate it. The grammar formats for XGrammar and Outlines are different. XGrammar uses GBNF format, while Outlines uses lark grammar. That might be documented. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author aarnphm Dec 1, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment i see, I will add this difference into the docs Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author aarnphm Dec 1, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I think we should just remove the grammar check here. If user send grammar they should also specify the backend (probably better to document the cartesian product of the combinations) Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 Ubospica reacted with thumbs up emoji All reactions 👍 1 reaction vllm/model_executor/guided_decoding/xgrammar_decoding.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/model_executor/guided_decoding/xgrammar_decoding.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . joennlae added a commit
to 44ai-labs/vllm
that referenced
this pull request Dec 1, 2024 [Core] add xgrammar as guided generation provider … d326148 Essentially a cleaned-up version of this `pr`: vllm-project#10785 Especially since `outlines` is rather slow and the new version is tough
to integrate, as they do not focus on being pickleable, which is a key
feature for us using the multiprocessing engine: dottxt-ai/outlines-core#99 I assume more and more will change over to `xgrammar`.
This is a minimum implementation. https://arxiv.org/pdf/2411.15100 Signed-off-by: Jannis Schönleber <[email protected]> joennlae mentioned this pull request Dec 1, 2024 [Core] add xgrammar as guided generation provider #10803 Closed aarnphm and others added 3 commits November 30, 2024 20:54 chore: remove grammar mode branch with outlines … 3770400 Signed-off-by: Aaron Pham <[email protected]> Add caching for tokenizer data and grammar compiler … 865e2a3 Signed-off-by: mgoin <[email protected]> Merge branch 'feat/xgrammar' of https://github.com/aarnphm/vllm into … … e5684e2 …feat/xgrammar Copy link Member mgoin commented Dec 1, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . Updated this PR with caches for the tokenizer data and the grammar compiler to avoid constructing these data structures for each request. It isn't pretty but it boosts throughput by about 1.4x. I need to perform more profiling but we are limited by the required-serialization architecture that we currently have. We plan to move the FSM initialization out of the frontend to both simplify the implementation and speed up TTFT. Setup: Llama-3.1-8B-Instruct, 1xH100 Command: python benchmark_guided.py --model meta-llama/Llama-3.1-8B-Instruct --dataset xgrammar_bench --async-engine --output-len 512 --num-prompts 20 --enable-chunked-prefill --guided-decoding-ratio 1 Before: Throughput: 1.46 requests/s, 1189.12 total tokens/s, 748.00 output tokens/s Correct rate is 95.0 %
First token latency(msecs):
count 20.000000
mean 7180.142369
std 1212.973158
min 4644.173431
25% 7012.610644
50% 7578.541221
75% 8079.524654
max 8092.886029
dtype: float64
Next token latency(msecs):
count 20.000000
mean 12.662371
std 2.336552
min 10.942158
25% 10.942283
50% 11.864077
75% 12.990130
max 17.550802
dtype: float64 After: Throughput: 2.12 requests/s, 1726.67 total tokens/s, 1086.13 output tokens/s Correct rate is 95.0 %
First token latency(msecs):
count 20.000000
mean 3254.682581
std 290.516334
min 2869.083916
25% 2869.120228
50% 3449.280638
75% 3477.460549
max 3477.504314
dtype: float64
Next token latency(msecs):
count 20.000000
mean 12.054585
std 0.550868
min 11.643879
25% 11.643967
50% 11.674903
75% 12.786106
max 12.786302
dtype: float64 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . joennlae added a commit
to 44ai-labs/vllm
that referenced
this pull request Dec 1, 2024 [Core] add xgrammar as guided generation provider … caf4289 Essentially a cleaned-up version of this `pr`: vllm-project#10785 Especially since `outlines` is rather slow and the new version is tough
to integrate, as they do not focus on being pickleable, which is a key
feature for us using the multiprocessing engine: dottxt-ai/outlines-core#99 I assume more and more will change over to `xgrammar`.
This is a minimum implementation. https://arxiv.org/pdf/2411.15100 Signed-off-by: Jannis Schönleber <[email protected]> dongxiaolong mentioned this pull request Dec 2, 2024 [Feature]: Integrate with XGrammar for zero-overhead structured generation in LLM inference. #10660 Closed 1 task Copy link Member mgoin commented Dec 2, 2024 @Ubospica do you know when XGrammar can support regex? This would help with covering existing use cases All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 7 hidden items Load more… mgoin added 3 commits December 2, 2024 22:54 Fix tests and support json_object … 8962301 Signed-off-by: mgoin <[email protected]> Fix test 8d3c671 Merge branch 'main' into feat/xgrammar 9f97093 mgoin requested review from DarkLight1337 , robertgshaw2-redhat and simon-mo as code owners December 2, 2024 22:56 mergify bot added
the frontend label Dec 2, 2024 simon-mo changed the title [Core][Performance] Add XGrammar support for guided decoding [Core][Performance] Add XGrammar support for guided decoding and set it as default Dec 3, 2024 simon-mo previously approved these changes Dec 3, 2024 View reviewed changes vllm/entrypoints/llm.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . simon-mo dismissed
their stale review December 3, 2024 01:41 if isinstance(params, Sequence) else copy.copy(params), is actually a blocking review. We can only introduce it if it is not perf regression. Move copy down into guided decoding case … 975e040 Signed-off-by: mgoin <[email protected]> Copy link Member mgoin commented Dec 3, 2024 Thanks for review @simon-mo I moved the copy into a specific if sampling_params.guided_decoding is not None case - ready for re-review All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . aarnphm added 2 commits December 2, 2024 22:11 chore: fix coallesce type … 59221e6 Signed-off-by: Aaron Pham <[email protected]> chore: add notes for performance … 5f49734 Signed-off-by: Aaron Pham <[email protected]> aarnphm force-pushed the feat/xgrammar branch
from 4ee464a to 5f49734 Compare December 3, 2024 03:16 simon-mo approved these changes Dec 3, 2024 View reviewed changes Hide details View details DarkLight1337 merged commit 9323a31 into vllm-project : main Dec 3, 2024 73 checks passed Uh oh! There was an error while loading. Please reload this page . Copy link Member hmellor commented Dec 3, 2024 The new dependency in this PR appears to have broken installation on ARM 8.373 ERROR: Could not find a version that satisfies the requirement xgrammar (from versions: none)
8.419 ERROR: No matching distribution found for xgrammar
------
Dockerfile.arm:37
--------------------
36 |
37 | >>> RUN --mount=type=cache,target=/root/.cache/pip \
38 | >>> --mount=type=bind,src=requirements-common.txt,target=requirements-common.txt \
39 | >>> --mount=type=bind,src=requirements-cpu.txt,target=requirements-cpu.txt \
40 | >>> pip install -v -r requirements-cpu.txt
41 |
--------------------
ERROR: failed to solve: process "/bin/sh -c pip install -v -r requirements-cpu.txt" did not complete successfully: exit code: 1 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Member mgoin commented Dec 3, 2024 Thanks for reporting @hmellor indeed it seems there isn't a manylinux arm wheel available https://pypi.org/project/xgrammar/#files I'll work on a patch fix All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mgoin mentioned this pull request Dec 3, 2024 [Bugfix] Only require XGrammar on x86 #10865 Merged Copy link stefanobranco commented Dec 3, 2024 Obviously super cool to see new integrations, but it does seem a bit hasty to me to immediately change the default? The implementation with outlines core should be able to close the gap after all, and this one does not support regex yet. Or is xgrammar just objectively better? All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor joennlae commented Dec 3, 2024 I second this opinion. Currently, the same behaviour cannot be expected from 'grammar`. I added a simple PR with some rudimentary regex + integer range support ( mlc-ai/xgrammar#106 ). I can attest that it is much faster, especially if one uses dynamic schemas. However, we should use outlines as the default, as it supports more cases for now, and the change is not breaking for many. I introduced it as an option in my closed PR ( #10803 ). But I forgot it when I discussed it with @mgoin . 👍 1 robcaulk reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Member mgoin commented Dec 3, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . Hi @stefanobranco and @joennlae thanks for raising your concern. Our primary concern is immediately improving structured output performance where it is easy to do so while maintaining the same behavior. With xgrammar as the default in supported cases, we still fallback to outlines in several cases covered here https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/guided_decoding/__init__.py#L18-L48 Please let me know if a case isn't being accounted for that is affecting your usage. We do not want to change external behavior. We have several integration tests that I have been using to create these rules, but more test points are certainly welcome! We have several fast-followup items to reduce the special cases around using xgrammar and improving performance even further in V0. We are also working on enabling outlines>=0.1.8 support with the devs of that project. Then of course we will enable the usage of structured output in V1. I hope this is helpful context and we will work on making a public roadmap for longer term goals. Please join the #feat-structured-output channel in slack if you want to have more direct discussion with the people working on this. 👍 2 stefanobranco and joennlae reacted with thumbs up emoji All reactions 👍 2 reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mgoin mentioned this pull request Dec 4, 2024 [Bugfix] Fallback to outlines for complex json schemas #10899 Merged Copy link Ubospica commented Dec 5, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . 
Thanks @stefanobranco , @joennlae , @mgoin for the great feedback. The initial release of XGrammar focuses on performance across grammar and JSON schema. We want the system to be holistically designed for zero-overhead structured output, which aligns with what many users need. Now that the initial release has landed, we are working full steam to enable full support for JSON schema and regex. Please feel free to open new issues on XGrammar to give us feedback. Our general mission is to bring flexible, zero-overhead structured generation everywhere, and we are excited to work with the community to achieve that mission together; contributions and collaborations toward better, zero-overhead structured output for everyone are very welcome. 👍 3 Swipe4057, saattrupdan, and WangErXiao reacted with thumbs up emoji sleepwalker2017 pushed a commit
to sleepwalker2017/vllm
that referenced
this pull request Dec 13, 2024 [Core][Performance] Add XGrammar support for guided decoding and set … … edebf1d …it as default ( vllm-project#10785 )
Signed-off-by: Aaron Pham <[email protected]>
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]> Copy link ktrapeznikov commented Dec 19, 2024 will this support models that use mistral tokenizers? All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor robcaulk commented Feb 14, 2025 @joennlae Pointed out correctly that changing the default value from outlines to xgrammar was a breaking change. This should have been highlighted in the release notes as a breaking change. @mgoin you had the foresight to avoid changing behavior, but unfortunately, this change did change the behavior. The issue now is that the quality of output from xgrammar is not as high. It does not conform to Literal definitions in the schema. Outlines does. This broke quite a bit of our pipeline - as we require Literals. We will define outlines explicitly now to avoid the shortcoming of xgrammar, but I highly recommend to the maintainers ( @simon-mo ) that any breaking changes be properly highlighted in release notes in the future. 👍 2 aastroza and simon-mo reacted with thumbs up emoji All reactions 👍 2 reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . aarnphm deleted the feat/xgrammar branch March 19, 2025 11:02 Sign up for free to join this conversation on GitHub .
|
2025-09-07 17:47:38
|
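A note on the xgrammar record above: much of the discussion is about when vLLM can use xgrammar and when it must fall back to outlines (regex requests, lark-format grammars, and later complex JSON schemas per #10899). The sketch below only illustrates that dispatch idea; `GuidedParams` and `pick_backend` are invented names for this example, and the real, more detailed rules live in vllm/model_executor/guided_decoding/__init__.py.

```python
# Hedged sketch of backend selection with an outlines fallback.
# Not vLLM's actual code; names and conditions are illustrative only.
from dataclasses import dataclass
from typing import Optional


@dataclass
class GuidedParams:
    json: Optional[str] = None
    regex: Optional[str] = None
    grammar: Optional[str] = None   # GBNF text for xgrammar, lark text for outlines
    backend: Optional[str] = None   # explicit user override


def pick_backend(params: GuidedParams) -> str:
    if params.backend:               # an explicit choice always wins
        return params.backend
    if params.regex is not None:     # early xgrammar releases had no regex support
        return "outlines"
    if params.grammar is not None:   # grammar text may be lark, which xgrammar cannot parse
        return "outlines"
    return "xgrammar"                # default for plain JSON-schema requests


if __name__ == "__main__":
    print(pick_backend(GuidedParams(json='{"type": "object"}')))  # -> xgrammar
    print(pick_backend(GuidedParams(regex=r"\d+")))               # -> outlines
```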
98f47f2a4032f8c395268de80858c64ffcfc60fa
|
https://github.com/vllm-project/vllm/pull/10733
| false | true | false | true |
PERF: latency, latency, latency | TEST: test, test, CI
|
Copy link Collaborator WoosukKwon commented Nov 28, 2024 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . With piece-wise CUDA graphs, we have to make sure that the attention custom op causes minimal CPU overheads. This PR made a few changes to optimize the CPU overheads in the FlashAttention custom op: We directly use torch.ops.vllm_flash_attn_c.varlen_fwd rather than flash_attn_varlen_func , since FlashAttnFunc which inherits torch.autograd.Function causes unnecessary overheads. We move the reshapes and shape check logics to outside of the custom op, so that they can be done at the CUDA graph capture time. Results of python benchmarks/benchmark_latency.py (opt-125m) on a single H100 GPU: V1 main: 227 ms V1 this PR: 192 ms V0 + 8-step: 130 ms Next step: further reduce the unnecessary CPU ops inside the FlashAttention op. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 mgoin reacted with thumbs up emoji All reactions 👍 1 reaction WoosukKwon requested review from robertgshaw2-redhat , njhill , ywang96 , comaniac and alexm-redhat as code owners November 28, 2024 04:05 Copy link github-actions bot commented Nov 28, 2024 👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these: Add ready label to the PR Enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . WoosukKwon added
the ready ONLY add when PR is ready to merge/full CI is needed label Nov 28, 2024 Copy link Member youkaichao commented Nov 28, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . well, I think I forgot to update the v1 flash attention file, after #10558 , you don't need the torch.ops.vllm.unified_v1_flash_attention call. nvm All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . youkaichao reviewed Nov 28, 2024 View reviewed changes vllm/v1/attention/backends/flash_attn.py Outdated @@ -203,23 +209,31 @@ def unified_v1_flash_attention( v_scale, ) attn_output = flash_attn_varlen_func( Copy link Member youkaichao Nov 28, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment can you also update the corresponding v0 code? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions tlrmchlsmth approved these changes Nov 28, 2024 View reviewed changes Copy link Collaborator tlrmchlsmth left a comment • edited Loading Uh oh! There was an error while loading. Please reload this page . There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Looking at profile results on #9856 , this saves about 60µs off of the CPU time spent in each flash attention call (approx 300µs -> 240µs) Thanks! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 🚀 2 mgoin and WoosukKwon reacted with rocket emoji All reactions 🚀 2 reactions mgoin approved these changes Nov 28, 2024 View reviewed changes Copy link Member mgoin left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment LGTM with Kaichao's comment, thanks for quickly improving this. The failing test is due to neuralmagic/Phi-3-medium-128k-instruct-quantized.w4a16 and unrelated Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Re … 456980b Signed-off-by: Woosuk Kwon <[email protected]> WoosukKwon force-pushed the v1-flash-opt branch
from e4f8b06 to 456980b Compare November 28, 2024 16:45 Copy link Collaborator Author WoosukKwon commented Nov 28, 2024 @youkaichao @mgoin As we merged vllm-project/flash-attention#30 , we don't have to directly use torch.ops.vllm_flash_attn_c.varlen_fwd . We can just use flash_attn_varlen_func as we currently do. Both V0 and V1 already gets the benefits after vllm-project/flash-attention#30 . 👀 2 mgoin and youkaichao reacted with eyes emoji All reactions 👀 2 reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator Author WoosukKwon commented Nov 28, 2024 One weird phenomenon I found is that V1 has a spike in latency: Avg latency: 0.20093455887205589 seconds
10% percentile latency: 0.1931818482640665 seconds
25% percentile latency: 0.19354040725738741 seconds
50% percentile latency: 0.19391279752017 seconds
75% percentile latency: 0.19426249974640086 seconds
90% percentile latency: 0.1961068181961309 seconds
99% percentile latency: 0.3368887884780999 seconds This is highly reproducible on my dev machine. Can this be because of Python gc or something like that? All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Hide details View details WoosukKwon merged commit 98f47f2 into main Nov 28, 2024 15 of 18 checks passed Uh oh! There was an error while loading. Please reload this page . WoosukKwon deleted the v1-flash-opt branch November 28, 2024 17:01 Copy link Collaborator robertgshaw2-redhat commented Nov 29, 2024 One weird phenomenon I found is that V1 has a spike in latency: Avg latency: 0.20093455887205589 seconds
10% percentile latency: 0.1931818482640665 seconds
25% percentile latency: 0.19354040725738741 seconds
50% percentile latency: 0.19391279752017 seconds
75% percentile latency: 0.19426249974640086 seconds
90% percentile latency: 0.1961068181961309 seconds
99% percentile latency: 0.3368887884780999 seconds This is highly reproducible on my dev machine. Can this be because of Python gc or something like that? It’s probably the prefix caching … All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator comaniac commented Nov 29, 2024 Hmm but benchmark_latency.py does sample each prompts separately: https://github.com/vllm-project/vllm/blob/main/benchmarks/benchmark_latency.py#L36 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator comaniac commented Nov 29, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . Hmm but benchmark_latency.py does sample each prompts separately: https://github.com/vllm-project/vllm/blob/main/benchmarks/benchmark_latency.py#L36 Just found that it has a warmup phase. It's still possible due to prefix caching if all prompts are cached then. Suggest to explicitly disable prefix caching to double check. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator Author WoosukKwon commented Dec 1, 2024 @comaniac @robertgshaw2-neuralmagic You're right. The latency becomes stable when prefix caching is turned off. Avg latency: 0.1945609479948568 seconds
10% percentile latency: 0.19310778125654907 seconds
25% percentile latency: 0.19390572598786093 seconds
50% percentile latency: 0.19475348049309105 seconds
75% percentile latency: 0.195164829317946 seconds
90% percentile latency: 0.19570096801035106 seconds
99% percentile latency: 0.1962820820847992 seconds All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . afeldman-nm pushed a commit
to neuralmagic/vllm
that referenced
this pull request Dec 2, 2024 [V1] Optimize the CPU overheads in FlashAttention custom op ( vllm-pro… … bc6637c …ject#10733 )
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Andrew Feldman <[email protected]> sleepwalker2017 pushed a commit
to sleepwalker2017/vllm
that referenced
this pull request Dec 13, 2024 [V1] Optimize the CPU overheads in FlashAttention custom op ( vllm-pro… … 17b4a20 …ject#10733 )
Signed-off-by: Woosuk Kwon <[email protected]> anko-intel pushed a commit
to HabanaAI/vllm-fork
that referenced
this pull request Feb 12, 2025 [V1] Optimize the CPU overheads in FlashAttention custom op ( vllm-pro… … 34de378 …ject#10733 )
Signed-off-by: Woosuk Kwon <[email protected]> Sign up for free to join this conversation on GitHub .
|
2025-09-07 17:47:41
|
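The FlashAttention record above traces the CPU cost to two things: routing every call through a `torch.autograd.Function` wrapper and redoing reshapes/shape checks on each step. The toy below only illustrates that distinction; `_AttnAutogradWrapper` and `attn_direct` are made-up names, and `scaled_dot_product_attention` stands in for the varlen FlashAttention kernel.

```python
# Toy illustration (assumptions noted above), runnable on CPU with stock PyTorch.
import torch
import torch.nn.functional as F


class _AttnAutogradWrapper(torch.autograd.Function):
    """Stand-in for FlashAttnFunc: autograd bookkeeping that inference never needs."""

    @staticmethod
    def forward(ctx, q, k, v):
        return F.scaled_dot_product_attention(q, k, v)

    @staticmethod
    def backward(ctx, grad_out):  # never reached while serving
        raise NotImplementedError


def attn_via_wrapper(q, k, v):
    return _AttnAutogradWrapper.apply(q, k, v)


def attn_direct(q, k, v):
    # Calling the kernel entry point directly skips the Function dispatch layer.
    return F.scaled_dot_product_attention(q, k, v)


if __name__ == "__main__":
    q = k = v = torch.randn(1, 8, 128, 64)
    # Shape checks hoisted out of the per-step path, mirroring the idea of
    # doing them once at CUDA graph capture time.
    assert q.shape == k.shape == v.shape
    print(torch.allclose(attn_via_wrapper(q, k, v), attn_direct(q, k, v)))
```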
8c1e77fb585c4f42783a3d88c1efc7c9e15fd89f
|
https://github.com/vllm-project/vllm/pull/10742
| false | false | false | true |
TEST: test, test, test
|
Copy link Collaborator WoosukKwon commented Nov 28, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . Upgrades to vllm-project/flash-attention#30 , which will help reduce CPU overheads in launching the kernels. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions WoosukKwon added
the ready ONLY add when PR is ready to merge/full CI is needed label Nov 28, 2024 WoosukKwon requested a review
from tlrmchlsmth as a code owner November 28, 2024 09:41 Copy link github-actions bot commented Nov 28, 2024 👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these: Add ready label to the PR Enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the ci/build label Nov 28, 2024 WoosukKwon mentioned this pull request Nov 28, 2024 Clean up API & Bypass torch.autograd.Function vllm-project/flash-attention#30 Merged Copy link mergify bot commented Nov 28, 2024 This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @WoosukKwon . https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the needs-rebase label Nov 28, 2024 fix … 892cdce Signed-off-by: Woosuk Kwon <[email protected]> WoosukKwon force-pushed the test-fa branch
from 99c45ad to 892cdce Compare November 28, 2024 10:28 WoosukKwon changed the title test [Kernel] Update vllm-flash-attn version Nov 28, 2024 mergify bot removed
the needs-rebase label Nov 28, 2024 Update … 677ceb2 Signed-off-by: Woosuk Kwon <[email protected]> Hide details View details WoosukKwon merged commit 8c1e77f into main Nov 28, 2024 9 of 14 checks passed Uh oh! There was an error while loading. Please reload this page . WoosukKwon deleted the test-fa branch November 28, 2024 16:31 afeldman-nm pushed a commit
to neuralmagic/vllm
that referenced
this pull request Dec 2, 2024 [Kernel] Update vllm-flash-attn version to reduce CPU overheads ( vllm… … 1362dac …-project#10742 )
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Andrew Feldman <[email protected]> sleepwalker2017 pushed a commit
to sleepwalker2017/vllm
that referenced
this pull request Dec 13, 2024 [Kernel] Update vllm-flash-attn version to reduce CPU overheads ( vllm… … 5496147 …-project#10742 )
Signed-off-by: Woosuk Kwon <[email protected]> anko-intel pushed a commit
to HabanaAI/vllm-fork
that referenced
this pull request Feb 12, 2025 [Kernel] Update vllm-flash-attn version to reduce CPU overheads ( vllm… … c71b17d …-project#10742 )
Signed-off-by: Woosuk Kwon <[email protected]> Sign up for free to join this conversation on GitHub .
|
2025-09-07 17:47:44
|
b2e0ad3b598ed0e022cdbd678a20821d411873c2
|
https://github.com/vllm-project/vllm/pull/10339
| false | true | false | true |
PERF: profile, profile, profiling | TEST: test, CI, CI
|
Copy link Collaborator andoorve commented Nov 14, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . Maintaining multiple names here will cause both to be refcounted which increases the peak memory. This will manifest as more blocks on top of each other in the memory profile: This change will increase the number of available blocks as a result of profiling especially with longer context lengths. I will follow up with a more detailed investigation in another PR/Issue that discusses this in more depth. However, creating this PR as well now as this is more or less a well-contained low-risk change. Can add to more models as well once we review this. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions andoorve marked this pull request as ready for review November 14, 2024 18:38 Copy link github-actions bot commented Nov 14, 2024 👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these: Add ready label to the PR Enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . andoorve requested review from DarkLight1337 and youkaichao November 14, 2024 18:38 [Perf] Reduce peak memory usage … 5625ebe Maintaining multiple names here will cause both to be refcounted which increases the peak memory. This will manifest as more blocks on top of each other in the memory profile.
Signed-off-by: andoorve <[email protected]> andoorve force-pushed the llama-memory branch
from 358dd7e to 5625ebe Compare November 14, 2024 18:44 Copy link Member mgoin commented Nov 14, 2024 Great idea! We could apply this to many other models ❤️ 1 andoorve reacted with heart emoji All reactions ❤️ 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . andoorve requested a review
from mgoin November 14, 2024 20:12 mgoin added
the ready ONLY add when PR is ready to merge/full CI is needed label Nov 14, 2024 DarkLight1337 enabled auto-merge (squash) November 14, 2024 23:38 DarkLight1337 approved these changes Nov 14, 2024 View reviewed changes youkaichao reviewed Nov 15, 2024 View reviewed changes vllm/model_executor/models/llama.py @@ -90,8 +90,8 @@ def __init__( self.act_fn = SiluAndMul() def forward(self, x): gate_up, _ = self.gate_up_proj(x) Copy link Member youkaichao Nov 15, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I think torch.compile can do something similar, without renaming variables. to keep the original semantic, maybe adding del x would be more intuitive. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author andoorve Nov 15, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I think torch.compile can do something similar, without renaming variables. Yes, it can completely alleviate this problem, even when we consider cross-function refcounting which I'll cover in my investigation write-up. to keep the original semantic, maybe adding del x would be more intuitive. I think you might mean in this case del gate_up ? Yes indeed we can add del s and make the variable names more descriptive. I just kept it as x to avoid adding extra del s and be similar to style of the rest of the function. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Hide details View details DarkLight1337 merged commit b2e0ad3 into vllm-project : main Nov 15, 2024 63 checks passed Uh oh! There was an error while loading. Please reload this page . andoorve deleted the llama-memory branch November 15, 2024 00:56 andoorve mentioned this pull request Nov 20, 2024 [DNM][Discussion] Example to decrease live tensors for activation memory. #10473 Closed sleepwalker2017 pushed a commit
to sleepwalker2017/vllm
that referenced
this pull request Dec 13, 2024 [Perf] Reduce peak memory usage of llama ( vllm-project#10339 ) … d26d246 Signed-off-by: andoorve <[email protected]> Sign up for free to join this conversation on GitHub .
|
2025-09-07 17:47:46
|
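The llama.py diff quoted in the record above is about tensor lifetimes: binding the projection output to a new name (`gate_up`) keeps both it and the input `x` alive, while reusing `x` (or `del`-ing the intermediate) lets the earlier activation be freed before the next allocation. A self-contained toy version of the pattern follows; `ToyMLP` with plain `nn.Linear`/`nn.SiLU` is a stand-in, not vLLM's `LlamaMLP`.

```python
# Toy MLP showing the name-reuse trick from the PR above (illustrative only).
import torch
import torch.nn as nn


class ToyMLP(nn.Module):
    def __init__(self, hidden: int, intermediate: int):
        super().__init__()
        self.gate_up_proj = nn.Linear(hidden, 2 * intermediate)
        self.act_fn = nn.SiLU()
        self.down_proj = nn.Linear(2 * intermediate, hidden)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Before: `gate_up = self.gate_up_proj(x)` kept both `x` and `gate_up`
        # referenced for the rest of the function. Reassigning to `x` drops the
        # input activation as soon as the projection output exists, lowering
        # peak activation memory during profiling.
        x = self.gate_up_proj(x)
        x = self.act_fn(x)
        x = self.down_proj(x)
        return x


if __name__ == "__main__":
    with torch.inference_mode():  # no autograd graph holding intermediates alive
        print(ToyMLP(hidden=64, intermediate=128)(torch.randn(4, 64)).shape)
```

As the review exchange notes, `torch.compile` can achieve the same effect automatically, and an explicit `del gate_up` is an equivalent, more readable alternative.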
81ede99ca44a5b3518932a07ea4a76a719e7416e
|
https://github.com/vllm-project/vllm/pull/8704
| false | true | true | true |
PERF: speedup | SERVING: API server, OpenAI API server, Frontend | TEST: test, test, test
|
Copy link Collaborator KuntaiDu commented Sep 22, 2024 This PR deprecates block manager v1 and makes block manager v2 the default to simplify the code path. This is supported by this benchmark , where block manager v2 is <2% slower than block manager v1 on Llama 8B when no prefix hit, and has significant speedup upon full prefix hit. Summary of changes: Leave --use-v2-block-manager in the EngineArgs for compatibility Remove use_v2_block_manager flag in all tests and configs (except during initialization), so that the value change of use-v2-block-manager has no effect on vLLM behavior. BEFORE SUBMITTING, PLEASE READ THE CHECKLIST BELOW AND FILL IN THE DESCRIPTION ABOVE PR Checklist (Click to Expand) Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process. PR Title and Classification Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following: [Bugfix] for bug fixes. [CI/Build] for build or continuous integration improvements. [Doc] for documentation fixes and improvements. [Model] for adding a new model or improving an existing model. Model name should appear in the title. [Frontend] For changes on the vLLM frontend (e.g., OpenAI API server, LLM class, etc.) [Kernel] for changes affecting CUDA kernels or other compute kernels. [Core] for changes in the core vLLM logic (e.g., LLMEngine , AsyncLLMEngine , Scheduler , etc.) [Hardware][Vendor] for hardware-specific changes. Vendor name should appear in the prefix (e.g., [Hardware][AMD] ). [Misc] for PRs that do not fit the above categories. Please use this sparingly. Note: If the PR spans more than one category, please include all relevant prefixes. Code Quality The PR need to meet the following code quality standards: We adhere to Google Python style guide and Google C++ style guide . Pass all linter checks. Please use format.sh to format your code. The code need to be well-documented to ensure future contributors can easily understand the code. Include sufficient tests to ensure the project to stay correct and robust. This includes both unit tests and integration tests. Please add documentation to docs/source/ if the PR modifies the user-facing behaviors of vLLM. It helps vLLM user understand and utilize the new features or changes. Adding or changing kernels Each custom kernel needs a schema and one or more implementations to be registered with PyTorch. Make sure custom ops are registered following PyTorch guidelines: Custom C++ and CUDA Operators and The Custom Operators Manual Custom operations that return Tensors require meta-functions. Meta-functions should be implemented and registered in python so that dynamic dims can be handled automatically. See above documents for a description of meta-functions. Use torch.libary.opcheck() to test the function registration and meta-function for any registered ops. See tests/kernels for examples. When changing the C++ signature of an existing op, the schema must be updated to reflect the changes. If a new custom type is needed, see the following document: Custom Class Support in PT2 . Notes for Large Changes Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. 
Otherwise, we will tag it with rfc-required and might not go through the PR. What to Expect for the Reviews The goal of the vLLM team is to be a transparent reviewing machine . We would like to make the review process transparent and efficient and make sure no contributor feel confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process: After the PR is submitted, the PR will be assigned to a reviewer. Every reviewer will pick up the PRs based on their expertise and availability. After the PR is assigned, the reviewer will provide status update every 2-3 days. If the PR is not reviewed within 7 days, please feel free to ping the reviewer or the vLLM team. After the review, the reviewer will put an action-required label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR. Please respond to all comments within a reasonable time frame. If a comment isn't clear or you disagree with a suggestion, feel free to ask for clarification or discuss the suggestion. Thank You Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 🎉 1 cadedaniel reacted with hooray emoji All reactions 🎉 1 reaction KuntaiDu added 7 commits September 20, 2024 05:09 remove block_manager_v1 and rename block_manager_v2 to block_manager 53cac04 remove block manager v2 related args f199d95 move the version name of block manager from v2 to main da0f9e3 remove flags that set use-v2-block-manager 59ee8fb remove v2 block manager d12ced7 remove warnings with blockmanagerv1 3203112 remove block manager v2 45d35ba Copy link github-actions bot commented Sep 22, 2024 👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these: Add ready label to the PR Enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator comaniac commented Sep 22, 2024 FYI: @sroy745 has #8678 verifying the functional correctness. Could you folks coordinate on this? All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator Author KuntaiDu commented Sep 22, 2024 Sure! This PR will be a draft PR until @sroy745 verifies all the tests. I will also talk to @sroy745 and see if I can help. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . KuntaiDu marked this pull request as draft September 22, 2024 06:55 Copy link Collaborator comaniac commented Sep 22, 2024 Sure! This PR will be a draft PR until @sroy745 verifies all the tests. I will also talk to @sroy745 and see if I can help. Thanks! 
@sroy745 has identified some failed tests and is fixing them. We could have a tracking issue and work together on fixing them. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator sroy745 commented Sep 22, 2024 Sure! This PR will be a draft PR until @sroy745 verifies all the tests. I will also talk to @sroy745 and see if I can help. Thanks! @sroy745 has identified some failed tests and is fixing them. We could have a tracking issue and work together on fixing them. I filed #8718 to track the unit test failures. I am currently looking at the test_scheduler.py failures. 👍 2 comaniac and KuntaiDu reacted with thumbs up emoji All reactions 👍 2 reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . KuntaiDu added 3 commits September 25, 2024 16:07 Merge branch 'main' into kuntai-remove-blockmngerv1 ba12509 remove use_v2_block_manager in Speculative decoding config 479104c make format checker happy 17ccfd6 KuntaiDu added
the ready ONLY add when PR is ready to merge/full CI is needed label Sep 25, 2024 Copy link Collaborator Author KuntaiDu commented Sep 25, 2024 Add ready to trigger full set of CI and see which test fails All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator sroy745 commented Sep 26, 2024 fyi I have one pr in flight #8824 which fixes the last of the know test failures that I found earlier. 👍 1 KuntaiDu reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . KuntaiDu added 6 commits September 26, 2024 01:59 remove v2 block manager flag c97fdac bug fix: change BlockManager to MainBlockManager 25584f7 fix wrong parameters in test, and remove the check for blockmanagerv1 1149e40 remove best_of 2 --- beam search is deprecated now a0e9e36 make ruff happy 243a8bd make yapf happy c95e720 KuntaiDu marked this pull request as ready for review September 26, 2024 03:07 KuntaiDu added 3 commits September 26, 2024 03:08 empty change to trigger CI 4afa3a3 ok 95231af Merge branch 'vllm-project:main' into kuntai-remove-blockmngerv1 46410be Isotr0py mentioned this pull request Sep 29, 2024 [Core][VLM] Add support for prefix caching for multi-modal models #8348 Closed 52 hidden items Load more… KuntaiDu added 4 commits October 16, 2024 06:51 make format checker happy 2e5f091 Make yapf happy ccf9362 Remove the corresponding test for "CachedBlockAllocator", which is on… … fe7ea69 …ly for block manager v1. Make ruff happy 3b7005b KuntaiDu requested a review
from comaniac October 16, 2024 19:53 Copy link Collaborator Author KuntaiDu commented Oct 16, 2024 @comaniac I fixed merge conflicts and removed some unnecessary flags and functions for block manager v1. PTAL All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . comaniac approved these changes Oct 16, 2024 View reviewed changes Copy link Collaborator comaniac left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Otherwise LGTM. Also cc @sroy745 for review. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions .buildkite/test-pipeline.yaml Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . tests/core/block/e2e/test_correctness.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . tests/core/block/e2e/test_correctness.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . tests/core/block/e2e/test_correctness.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/engine/arg_utils.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . sroy745 reviewed Oct 17, 2024 View reviewed changes Copy link Collaborator sroy745 left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Thanks for the pr!! LGTM. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions tests/core/block/e2e/test_correctness.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . tests/core/block/e2e/test_correctness.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/engine/arg_utils.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . KuntaiDu added 5 commits October 17, 2024 02:59 adjust test name and doc string to avoid reusing v1 and v2 in tes… … 178c260 …t name remove "v2" in the test name 4ae3567 Adjust docstrings for --use-v2-block-manager 70be1de further adjust the doc string --- use "block manager v1" and "block m… … 755fec3 …anager v2" in engine args doc string as it is more familiar for people. Merge branch 'main' into kuntai-remove-blockmngerv1 405f415 Hide details View details KuntaiDu merged commit 81ede99 into vllm-project : main Oct 17, 2024 77 checks passed Uh oh! There was an error while loading. Please reload this page . 
KuntaiDu deleted the kuntai-remove-blockmngerv1 branch October 17, 2024 16:38 KuntaiDu restored the kuntai-remove-blockmngerv1 branch October 17, 2024 16:43 KuntaiDu deleted the kuntai-remove-blockmngerv1 branch October 17, 2024 16:43 DarkLight1337 mentioned this pull request Oct 17, 2024 [Misc] Remove commit id file #9470 Merged KuntaiDu mentioned this pull request Oct 22, 2024 [Core] Remove evictor_v1 #9572 Merged saienduri mentioned this pull request Oct 24, 2024 update block_manager usage in setup_cython ROCm/vllm#243 Merged Alvant pushed a commit
to compressa-ai/vllm
that referenced
this pull request Oct 26, 2024 [Core] Deprecating block manager v1 and make block manager v2 default ( … … 7cd2f07 …vllm-project#8704 )
Removing the block manager v1. This is the initial piece of prefix-caching-centric design. In order to achieve prefix-caching-centric design, we need to simplify the code path so that we only use v2 block manager (which has much higher performance on prefix caching).
Signed-off-by: Alvant <[email protected]> garg-amit pushed a commit
to garg-amit/vllm
that referenced
this pull request Oct 28, 2024 [Core] Deprecating block manager v1 and make block manager v2 default ( … … fdd67ee …vllm-project#8704 )
Removing the block manager v1. This is the initial piece of prefix-caching-centric design. In order to achieve prefix-caching-centric design, we need to simplify the code path so that we only use v2 block manager (which has much higher performance on prefix caching).
Signed-off-by: Amit Garg <[email protected]> FerdinandZhong pushed a commit
to FerdinandZhong/vllm
that referenced
this pull request Oct 29, 2024 [Core] Deprecating block manager v1 and make block manager v2 default ( … … c086d36 …vllm-project#8704 )
Removing the block manager v1. This is the initial piece of prefix-caching-centric design. In order to achieve prefix-caching-centric design, we need to simplify the code path so that we only use v2 block manager (which has much higher performance on prefix caching).
Signed-off-by: qishuai <[email protected]> sumitd2 pushed a commit
to sumitd2/vllm
that referenced
this pull request Nov 14, 2024 [Core] Deprecating block manager v1 and make block manager v2 default ( … … 8e864ff …vllm-project#8704 )
Removing the block manager v1. This is the initial piece of prefix-caching-centric design. In order to achieve prefix-caching-centric design, we need to simplify the code path so that we only use v2 block manager (which has much higher performance on prefix caching).
Signed-off-by: Sumit Dubey <[email protected]> LeiWang1999 pushed a commit
to LeiWang1999/vllm-bitblas
that referenced
this pull request Mar 26, 2025 [Core] Deprecating block manager v1 and make block manager v2 default ( … … f09498c …vllm-project#8704 )
Removing the block manager v1. This is the initial piece of prefix-caching-centric design. In order to achieve prefix-caching-centric design, we need to simplify the code path so that we only use v2 block manager (which has much higher performance on prefix caching).
Signed-off-by: LeiWang1999 <[email protected]> Sign up for free to join this conversation on GitHub .
|
2025-09-07 17:47:50
|
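One concrete detail in the block-manager record above is keeping `--use-v2-block-manager` accepted for compatibility while making its value a no-op. A minimal argparse sketch of that deprecation pattern follows; only the flag name is taken from the PR, and vLLM's real `EngineArgs` is not reproduced here.

```python
# Minimal deprecated-flag pattern; the flag name comes from the PR, the rest is illustrative.
import argparse
import warnings

parser = argparse.ArgumentParser()
parser.add_argument(
    "--use-v2-block-manager",
    action="store_true",
    help="[DEPRECATED] Block manager v2 is the only implementation now; "
    "the flag is accepted for compatibility and ignored.",
)
args = parser.parse_args(["--use-v2-block-manager"])
if args.use_v2_block_manager:
    warnings.warn("--use-v2-block-manager is deprecated and has no effect.",
                  DeprecationWarning, stacklevel=2)
```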
83450458339b07765b0e72a822e5fe93eeaf5258
|
https://github.com/vllm-project/vllm/pull/9333
| false | true | false | true |
PERF: latency, latency, latency | TEST: test, CI, CI
|
Copy link Collaborator LiuXiaoxuanPKU commented Oct 14, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . After benchmarking the performance of ngram in vllm, it seems that the proposal time is longer than expected. The main reason is that there are (1) CPU <-> GPU communication when building the ngram lookup table. (2) Building the ngram contains many small kernels (duration < 5 microseconds) as show below: Zoom in the propose time: The PR tries to (1) perform lookup operation on CPU. (2) trigger CPU <-> GPU communication only when there is a match in lookup. Some performance numbers on a single H100: input_len: 550, output_len: 150 I changed the prompt to try different system efficiency (which might include the number of CPU <-> GPU sync). System efficiency propose time before this PR propose time after this PR end2end latency before this PR end2end latency after this PR 0.31 4.4ms 2.2ms 6.4s 5.6s 0.63 3.3ms 1.5ms 3.8s 3.2s 0.80 2.6 ms 1.5ms 3.0s 2.6s input_len: 2048, output_len: 150 System efficiency propose time before this PR propose time after this PR end2end latency before this PR end2end latency after this PR 0.30 6.00ms 4.54ms 9.83s 9.25s 0.63 2.90ms 2.70ms 5.84s 5.45s Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 3 comaniac, trianxy, and exceedzhang reacted with thumbs up emoji 🚀 2 mgoin and trianxy reacted with rocket emoji All reactions 👍 3 reactions 🚀 2 reactions LiuXiaoxuanPKU added 2 commits October 13, 2024 22:45 lookup on cpu 7d631fb remove comments cc8e7a6 Copy link github-actions bot commented Oct 14, 2024 👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these: Add ready label to the PR Enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . format 083897a comaniac approved these changes Oct 14, 2024 View reviewed changes Copy link Collaborator comaniac left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment LGTM! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator comaniac commented Oct 14, 2024 btw does this approach still have speedup if the prompt length is much longer? I'm just thinking about the trade off between CPU-GPU sync overhead and (maybe) slower CPU computation. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator Author LiuXiaoxuanPKU commented Oct 14, 2024 btw does this approach still have speedup if the prompt length is much longer? I'm just thinking about the trade off between CPU-GPU sync overhead and (maybe) slower CPU computation. 
Yeah will do more benchmarks here All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mgoin approved these changes Oct 14, 2024 View reviewed changes Copy link Member mgoin left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I think this is worth considering just for the aspect of simplicity. It could even make sense to write a CPU kernel in C++ instead of trying to do it on GPU Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions mgoin added
the ready ONLY add when PR is ready to merge/full CI is needed label Oct 14, 2024 Copy link Collaborator Author LiuXiaoxuanPKU commented Oct 16, 2024 Will change the PR so that we can change the device based on the sequence length. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . switch device based on seq_len 44ae31d Hide details View details mgoin merged commit 8345045 into vllm-project : main Oct 16, 2024 53 checks passed Uh oh! There was an error while loading. Please reload this page . Alvant pushed a commit
to compressa-ai/vllm
that referenced
this pull request Oct 26, 2024 [Performance][Spec Decode] Optimize ngram lookup performance ( vllm-pr… … 86678fd …oject#9333 )
Signed-off-by: Alvant <[email protected]> garg-amit pushed a commit
to garg-amit/vllm
that referenced
this pull request Oct 28, 2024 [Performance][Spec Decode] Optimize ngram lookup performance ( vllm-pr… … 10d88b1 …oject#9333 )
Signed-off-by: Amit Garg <[email protected]> FerdinandZhong pushed a commit
to FerdinandZhong/vllm
that referenced
this pull request Oct 29, 2024 [Performance][Spec Decode] Optimize ngram lookup performance ( vllm-pr… … b55f889 …oject#9333 )
Signed-off-by: qishuai <[email protected]> sumitd2 pushed a commit
to sumitd2/vllm
that referenced
this pull request Nov 14, 2024 [Performance][Spec Decode] Optimize ngram lookup performance ( vllm-pr… … 1e9f47e …oject#9333 )
Signed-off-by: Sumit Dubey <[email protected]> LeiWang1999 pushed a commit
to LeiWang1999/vllm-bitblas
that referenced
this pull request Mar 26, 2025 [Performance][Spec Decode] Optimize ngram lookup performance ( vllm-pr… … 2a3ec7b …oject#9333 )
Signed-off-by: LeiWang1999 <[email protected]> Sign up for free to join this conversation on GitHub .
Already have an account? Sign in to comment
|
2025-09-07 17:47:54
|
6d646d08a2e0e73e83e313a5ae470c1f9e4f200e
|
https://github.com/vllm-project/vllm/pull/8050
| false | true | true | true |
PERF: TTFT, TTFT, TTFT | SERVING: Serving, Serving | TEST: test, CI, CI
|
Copy link Collaborator alexm-redhat commented Aug 31, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . This PR optimizes the async + multi-step further by implementing a "fully" async behavior between the postprocessor and the multi-step execution. Before that, the async was done only for the previous decode steps of the multi-step, where in this PR, the async is done on all previous steps of decode, including the last step of decode (that generates results), and also on the previous prompt executions. For Llama3 8B on H100 with ShareGPT dataset, performance improves by about ~28% vs current main with multi-step + async. Here are the new results for this benchmark, the TPOT of multi-step is 44.48ms and for multi-step + async is 32.38ms, which is 37% improvement (before that @KuntaiDu reported improvement < 10%) Multi-step, no-async, Llama3 8B on H100 with ShareGPT ============ Serving Benchmark Result ============
Successful requests: 500
Benchmark duration (s): 18.82
Total input tokens: 100895
Total generated tokens: 100377
Request throughput (req/s): 26.57
Input token throughput (tok/s): 5361.68
Output token throughput (tok/s): 5334.15
---------------Time to First Token----------------
Mean TTFT (ms): 2991.94
Median TTFT (ms): 2314.58
P99 TTFT (ms): 8385.04
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 44.48
Median TPOT (ms): 31.98
P99 TPOT (ms): 199.97
---------------Inter-token Latency----------------
Mean ITL (ms): 272.29
Median ITL (ms): 244.50
P99 ITL (ms): 1175.28
================================================== Multi-step + async, Llama3 8B on H100 with ShareGPT ============ Serving Benchmark Result ============
Successful requests: 500
Benchmark duration (s): 16.04
Total input tokens: 100895
Total generated tokens: 100403
Request throughput (req/s): 31.18
Input token throughput (tok/s): 6291.68
Output token throughput (tok/s): 6261.00
---------------Time to First Token----------------
Mean TTFT (ms): 2896.11
Median TTFT (ms): 2157.79
P99 TTFT (ms): 7457.77
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 32.38
Median TPOT (ms): 24.64
P99 TPOT (ms): 149.36
---------------Inter-token Latency----------------
Mean ITL (ms): 217.58
Median ITL (ms): 201.78
P99 ITL (ms): 999.50
==================================================

TODO: clean up the PR; verify all tests pass.

github-actions bot commented Aug 31, 2024 with the standard CI reminder (PRs only trigger fastcheck CI by default; comment /ready, add the ready label, or enable auto-merge to run full CI).

alexm-redhat commented Aug 31, 2024 (edited): cc @robertgshaw2-neuralmagic @WoosukKwon @megha95 @KuntaiDu @comaniac @SolitaryThinker @njhill

alexm-redhat commented Aug 31, 2024 (edited): The PR is still in rough shape, since I just made it finally work after fixing some complicated race conditions. I will work on cleaning it up tomorrow.

robertgshaw2-redhat (Collaborator) commented Aug 31, 2024: nice job alex

alexm-redhat commented Aug 31, 2024: /ready

github-actions bot added the ready label (only added when the PR is ready to merge / full CI is needed) Aug 31, 2024.

alexm-redhat commented Aug 31, 2024: The PR is ready for review.

alexm-redhat added 2 commits August 31, 2024 20:18: "Optimize async + multi-step by making async fully async with respect to all operations" (dafa498) and "format" (ca993c7), and force-pushed the async_multi_step_opt branch from e269cc7 to ca993c7 (August 31, 2024 20:41).

alexm-redhat commented Aug 31, 2024: rebased over Andy's logprobs changes, all works.

Commit: cleanup (f054d70). alexm-redhat changed the title from [Performance][Core] Optimize Async + Multi-step to [Core] Optimize Async + Multi-step on Sep 1, 2024, and added 3 commits September 1, 2024 01:38: fix tests (98a55d7), ping (4474b12), Improve asyncio queues append of request outputs (904006a).

KuntaiDu (Collaborator) commented Sep 2, 2024: Nice job Alex! I am rerunning the benchmark using your PR; thank you for the great work!

comaniac approved these changes Sep 3, 2024: LGTM, only nits. Review comments were left on vllm/engine/llm_engine.py, vllm/engine/async_llm_engine.py, and vllm/engine/output_processor/multi_step.py (all resolved).

Commits: Cody's review comments (3a8726a), More Cody's comments (997c525). comaniac enabled auto-merge (squash) September 3, 2024 16:34; auto-merge was automatically disabled at 16:55 because the head branch was pushed to by a user without write access, and comaniac re-enabled it at 17:20. SolitaryThinker approved these changes Sep 3, 2024.

megha95 (Contributor) reviewed Sep 3, 2024 on tests/multi_step/test_correctness_async_llm.py (the hunk raising max_wait_seconds to 3 * 240 in test_multi_step): why was this change needed?

alexm-redhat replied Sep 3, 2024: It was increased originally for multi-step tests, but I think it was still sensitive, so I had one instance where I hit a timeout. Increasing it more made the test stable.

comaniac merged commit 6d646d0 into vllm-project:main Sep 3, 2024 (39 checks passed).

Alvant referenced this pull request in compressa-ai/vllm, Oct 26, 2024: [Core] Optimize Async + Multi-step (vllm-project#8050) (4284212), Signed-off-by: Alvant <[email protected]>.

WhoisZihan reviewed Nov 1, 2024 (edited) on vllm/worker/multi_step_model_runner.py, on the hunk in _async_process_outputs that calls output_proc_callback() before the per-output loop: Why do we need this extra output callback before we call it for each cached output below?

LeiWang1999 referenced this pull request in LeiWang1999/vllm-bitblas, Mar 26, 2025: [Core] Optimize Async + Multi-step (vllm-project#8050) (5f4e3ee), Signed-off-by: LeiWang1999 <[email protected]>.
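As a reader's aside, here is a minimal sketch of the "fully async" output processing described at the top of this thread: overlap pythonization/detokenization of the previous step's outputs with the next model step. This is an illustrative sketch only; engine_loop, run_model_step, and process_outputs are assumed names, not vLLM's actual APIs.

import asyncio

async def engine_loop(scheduler, executor, process_outputs, output_queue: asyncio.Queue):
    # Kick off the next model step first, then process the previous step's
    # raw outputs while the GPU is busy, instead of serializing the two.
    prev_outputs = None
    while scheduler.has_unfinished_requests():
        batch = scheduler.schedule()
        step = asyncio.create_task(executor.run_model_step(batch))
        if prev_outputs is not None:
            await output_queue.put(process_outputs(prev_outputs))
        prev_outputs = await step
    if prev_outputs is not None:
        await output_queue.put(process_outputs(prev_outputs))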
|
2025-09-07 17:47:57
|
6e36f4fa6ce64619b9ea94c88a157f5783a63a65
|
https://github.com/vllm-project/vllm/pull/7874
| false | true | true | true |
PERF: throughput, throughput, throughput | SERVING: serving, API server, OpenAI API server | TEST: test, test, test
|
noooop (Contributor) commented Aug 26, 2024 (edited):

SUMMARY: In vLLM 0.5.4, enable_chunked_prefill throughput is slightly lower than in 0.5.3~0.5.0. Prioritizing prefill causes and aggravates system thrashing.

FIX #7592. By definition, "By default, vLLM scheduler prioritizes prefills ... Once chunked prefill is enabled, the policy is changed to prioritize decode requests." The easiest fix is to sort the running queue. This keeps chunked prefill performance untouched, and everyone is happy.

github-actions bot commented Aug 26, 2024 with the standard CI reminder (fastcheck by default; comment /ready or add the ready label for full CI).

noooop commented Aug 26, 2024: @youkaichao

youkaichao (Member) commented Aug 26, 2024: thanks for the contribution! please fix the format issue.

youkaichao commented Aug 26, 2024: I don't get it though, why this would affect chunked prefill so much 👀

comaniac (Collaborator) commented Aug 26, 2024: Thanks for the fix! I have the same question as Kaichao. Why does sorting running requests by their arrival time impact the throughput significantly?

noooop commented Aug 27, 2024 (edited): Putting definitions and conventions aside, let's discuss the pros and cons of chunked prefill prioritizing prefill versus prioritizing decode.

1. GPU memory limitations (GPU cache block limitations). When GPU memory is sufficient, or max_num_batched_tokens and max_num_seqs are within a reasonable range, prioritizing prefill lets as many requests as possible enter decode mode, can even make the entire batch decode-only, and triggers CUDA graph optimization to improve throughput. But CUDA graphs are particularly effective only for small models, when using tensor parallelism, and when the batch is smaller than 256 (_BATCH_SIZES_TO_CAPTURE[-1]), so the scenarios that favor prioritizing prefill are hard to satisfy. In real deployments GPU memory is often limited, or max_num_batched_tokens and max_num_seqs are set too large, and preemption inevitably occurs. Prioritizing decode finishes running requests as soon as possible and releases GPU memory, while prioritizing prefill increases the number of requests running at the same time and thus the chance of preemption. When preemption does happen, prioritizing decode means requests still in the prefill phase get preempted, which is relatively cheap; prioritizing prefill means requests in the decode phase get preempted, which is expensive. In short, when GPU memory is limited, prioritizing prefill is a disaster; this is what I encountered.

2. User satisfaction. Prioritizing decode, as mentioned in the documentation, "improves ITL and generation decode because decode requests are prioritized."

Why does sorting matter? An example, with max_num_seqs = max_num_batched_tokens = 256 and input_len = output_len = 511:

init: request 0: num_computed_tokens 0, num_uncomputed_tokens 511; request 1: num_computed_tokens 0, num_uncomputed_tokens 511
step 1: scheduled [0]: request 0: 256 computed, 255 uncomputed; request 1: 0 computed, 511 uncomputed
step 2: scheduled [0, 1]: request 0: 511 computed, 1 uncomputed (about to enter decode mode); request 1: 1 computed, 510 uncomputed
step 3, prioritizing prefill (0.5.4~0.5.5): scheduled [1] (why not let request 0 decode?): request 0: 511 computed, 1 uncomputed; request 1: 257 computed, 254 uncomputed
step 3, prioritizing decode (0.5.0~0.5.3): scheduled [0, 1]: request 0: 512 computed, 1 uncomputed; request 1: 256 computed, 255 uncomputed

Sorting matters.

noooop commented Aug 27, 2024: By the way, prioritizing prefill and prioritizing decode give exactly opposite orders of the running queue. But you can't just reverse running_queue; you would need to modify every self.running.extend, or, as I said, the easiest fix is to sort the running queue.

noooop commented Aug 27, 2024 (edited): One more point: it is normal performance tuning to set max_num_batched_tokens and max_num_seqs slightly larger (slightly triggering preemption) to increase parallelism and improve throughput. But prioritizing prefill causes and aggravates system thrashing.

youkaichao commented Aug 27, 2024 (edited): LGTM to add the sorting to get back to the behavior of 0.5.3. Please fix the format.

noooop force-pushed the main branch
from 408b727 to dd12bc8 Compare August 27, 2024 02:42

noooop commented Aug 27, 2024: This is my first code submission to vLLM. Is there anything else I need to do?

youkaichao commented Aug 27, 2024: as long as it does not break any tests, we can merge it.

noooop commented Aug 27, 2024: Thanks

rkooo567 (Collaborator) commented Aug 27, 2024 (edited): @noooop I think the issue is that after the refactoring we should have changed the order of these lines to guarantee the ordering. Before the refactoring the order was guaranteed because we always sorted. Now we should more carefully extend the queue to preserve the right order. https://github.com/vllm-project/vllm/blob/ed6f002d3340888142cb67c13a37c060b51fa889/vllm/core/scheduler.py#L1029C1-L1029C72 I think if we change the order to be extend(swapped_in.decode)
extend(swapped_in.prefill)
extend(running.decode)
extend(running.prefill)
extend(new_prefill)

The same behavior is preserved. Can you test it? Note: without sorting, it may be difficult to always guarantee the right ordering when preemption happens, but I think that's the tradeoff.

rkooo567 commented Aug 27, 2024 (edited): more specifically, change these lines

self.running.extend([s.seq_group for s in prefills.seq_groups])
self.running.extend(
    [s.seq_group for s in running_scheduled.decode_seq_groups])
self.running.extend(
    [s.seq_group for s in running_scheduled.prefill_seq_groups])
self.running.extend(
    [s.seq_group for s in swapped_in.decode_seq_groups])
self.running.extend(
    [s.seq_group for s in swapped_in.prefill_seq_groups])

to

self.running.extend(
    [s.seq_group for s in swapped_in.decode_seq_groups])
self.running.extend(
    [s.seq_group for s in swapped_in.prefill_seq_groups])
self.running.extend(
    [s.seq_group for s in running_scheduled.decode_seq_groups])
self.running.extend(
    [s.seq_group for s in running_scheduled.prefill_seq_groups])
self.running.extend([s.seq_group for s in prefills.seq_groups])

Can you try testing it and see if it works?

noooop commented Aug 28, 2024: @rkooo567 We need to maintain the priority order of the queue. There are at least four methods to choose from; we can pick the best one by efficiency, readability, ease of use, scalability, and perhaps minimal modification. (1) Sort when dequeuing: slightly inefficient, but nobody can break it; easy to use, easy to read, and a minimal modification. (2) Use a PriorityQueue: a very good option; we need priority ordering, so use a priority queue. The following methods are not recommended: (3) manually maintain queue order on enqueue, with online checks: maybe efficient, but the order-maintaining code ends up everywhere and is difficult to use, read, and modify; (4) manually maintain queue order on enqueue, without checks: no one will be able to modify this code in the future. The performance bottleneck is in the GPU; I think there won't be much performance difference between sorting, a PriorityQueue, or even manually maintaining queue order on enqueue.

comaniac commented Aug 28, 2024, replying to "The performance bottleneck is in the GPU ...": This may not be true, especially for online serving, where we are talking about a few milliseconds of ITL. In fact, Python overheads like these are the main performance bottleneck. We now even need to pre-allocate and reuse Python objects, use array.array, or add a branch for edge cases (e.g., do not call sum or count when there's only one element in a list). The easiest way to verify whether this sort creates non-negligible overhead is to run a performance benchmark.

rkooo567 commented Aug 28, 2024 (edited), replying to "Sorting when dequeue ...": Also, to be clear, we used this implementation originally for exactly this reason, but vLLM currently has Python overhead, and that's why we removed the sorting logic, which requires a repetitive queue copy. Often the model forward only takes 10-20 ms, so having 2-3 ms of overhead in the scheduler is critical in this kind of scenario. (If we eventually support an async scheduler, we can probably come back to this implementation.) I think manual sorting is the best workaround. I am not opposed to a priority queue as well, if it turns out it has no perf impact.

noooop commented Aug 28, 2024: I understand that strict orderliness is not necessary. I'm testing to see if certain queues may need to be reversed.
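For context, here is a minimal sketch of the queue-sorting alternative noooop originally proposed (the PR ultimately went with the manually reordered extend() calls above). This is illustrative only; the arrival_time attribute name is an assumption and may not match vLLM's actual SequenceGroup fields.

from collections import deque

def restore_fcfs_order(running: deque) -> deque:
    # Re-sort the scheduler's running queue by arrival time so requests that
    # arrived earlier (typically already in the decode phase) are scheduled
    # before newly admitted prefills, matching the 0.5.3 behavior.
    return deque(sorted(running, key=lambda seq_group: seq_group.arrival_time))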
noooop commented Aug 28, 2024: Actually, I was implementing an async scheduler and stumbled upon this bug.

noooop force-pushed the main branch from dd12bc8 to 5245f4f (August 28, 2024 05:41).

noooop commented Aug 28, 2024: It works; in fact I like this tradeoff. My own manual sorting method required too many changes, so I gave up on it.

comaniac approved these changes Aug 28, 2024: LGTM. Thanks.

noooop commented Aug 28, 2024: I compared this manual ordering with 0.5.3 and with sorting on 1,000 requests; the scheduling sequence is exactly the same.

rkooo567 approved these changes Aug 28, 2024 and commented: Awesome to hear that! By the way, I don't know if the basic correctness test failure is related. Can you try merging the latest master?

noooop commented Aug 28, 2024: ok

noooop force-pushed the main branch from 370de52 to 90885c2 (August 28, 2024 06:46).

noooop commented Aug 28, 2024: By the way, I was implementing an async scheduler. During this process I did a large modularization and added dynamic workflows to vLLM. I don't know if you want to see it: https://github.com/noooop/light-vllm

noooop commented Aug 28, 2024: I don't know why the test failed. This PR is too simple to break anything, or the test is set up based on the wrong scheduling method.

noooop reopened this Aug 30, 2024 and commented: merged the latest master. Can anyone help me with the test?

noooop commented Aug 30, 2024: @jon-chuang Can you give me some suggestions to pass the test? Can I delete test 7? How? I can't find example.txt.

jon-chuang (Contributor) commented Aug 30, 2024: For this test, try making NUM_LOGPROBS contingent on the fp8 dtype and set it to 8 for e5m2 and something higher (16 or 32) for e4m3.

jon-chuang commented Aug 30, 2024 (edited): If you can't fix it this way, you can mark the specific test parameters that fail (model type, dtype) as pytest.mark.skip("flakey test, see: #XXX"), or create an issue and link to that, and I can fix it in another PR.

noooop mentioned this pull request Aug 31, 2024: [Bug]: flakey test found in #7874 (#8051, closed).

noooop commented Aug 31, 2024 (edited): "# We use float32 for probabilities and log probabilities." Float32 precision in the Sampler is high enough. Can NUM_LOGPROBS be enlarged to achieve the original testing purpose? I chose to skip this test and let the experts solve it. @jon-chuang #8051

noooop force-pushed the main branch from 95aab1c to 57dc722 (August 31, 2024 04:33), then from 57dc722 to a05dd0b (04:36, commit "flakey test, see: vllm-project#7874 vllm-project#8051", ad5f1db), then from a05dd0b to ad5f1db (04:48).

noooop commented Aug 31, 2024: @youkaichao @rkooo567 @comaniac Is it ready to launch?

noooop commented Aug 31, 2024: /ready

github-actions bot added the ready label Aug 31, 2024.

youkaichao commented Sep 2, 2024: thanks for the contribution! I triggered the test again; as long as the tests pass, we can merge it.

youkaichao merged commit 6e36f4f into vllm-project:main Sep 2, 2024 (45 of 47 checks passed).

Referenced by downstream commits: gongdao123 in bartsolutions/vllm (Oct 18, 2024, 100fcc9), Alvant in compressa-ai/vllm (Oct 26, 2024, 5a69ab1, Signed-off-by: Alvant <[email protected]>), and LeiWang1999 in LeiWang1999/vllm-bitblas (Mar 26, 2025, 9e6de1c, Signed-off-by: LeiWang1999 <[email protected]>): "improve chunked prefill performance; [Bugfix] Fix vllm-project#7592 vllm 0.5.4 enable_chunked_prefill throughput is slightly lower than 0.5.3~0.5.0. (vllm-project#7874)".
|
2025-09-07 17:48:01
|
ce6bf3a2cff4860c5661cac2280e0a28bedb6440
|
https://github.com/vllm-project/vllm/pull/7898
| false | true | false | true |
PERF: Throughput, Throughput, throughput | TEST: test, test, CI
|
youkaichao (Member) commented Aug 27, 2024: We have two types of runtime overhead on TPU: (1) Dynamo guard evaluation overhead, which chooses which code to run, and (2) torch_xla overhead, which converts function inputs to XLA inputs. We can remove the first one by adding one dispatcher layer above Dynamo. I did a systematic measurement this time and found that pure XLA execution takes 7 ms per decoding step; with both overheads (the current main branch) it takes 8.2 ms per decoding step; with the Dynamo overhead removed (this PR) it takes 8.0 ms per decoding step. It turns out the XLA overhead is the main overhead, but I think it is still worthwhile to get rid of the Dynamo overhead before we remove the XLA overhead.

Commit: custom dispatch (248d4db).

github-actions bot commented Aug 27, 2024 with the standard CI reminder (fastcheck by default; comment /ready or add the ready label for full CI).

Commit: refine (8f4ed39).

youkaichao commented Aug 27, 2024: NOTE: my test code is still https://github.com/vllm-project/vllm/blob/main/examples/offline_inference_tpu.py

youkaichao commented Aug 27, 2024: this looks surprisingly effective. I ran python benchmarks/benchmark_throughput.py --input-len 256 --output-len 256 --model google/gemma-2b. main: Throughput: 16.70 requests/s, 8549.39 tokens/s; this PR: Throughput: 17.39 requests/s, 8902.73 tokens/s. That counts as a 4% throughput improvement.

youkaichao added 17 commits August 27, 2024 00:57: add wrapper (2d8b20a), update (9f752fd), add wrapper test (4be616a), fix (026a525), update wrapper (7a1dd38), separate tests (1f0f148), add tests (7531186), update tests (31e9e7b), multi wrappers (ace38e2), use wrapper (31a9e06), fix (0a349f5), fix (12cb164), more explanation (f483660), add tests (ec52afc), add package (fabce9a), update tests (b9fff4c), add tests (f5019fc), and requested a review from WoosukKwon (August 27, 2024 18:12). Commit: add init (e3692ba).

WoosukKwon approved these changes Aug 28, 2024, with review comments on vllm/worker/tpu_model_runner.py and vllm/compilation/wrapper.py (resolved).

youkaichao and others added 3 commits August 28, 2024 15:26: Update vllm/worker/tpu_model_runner.py (746036c, co-authored by Woosuk Kwon), Merge branch 'main' into custom_dispatch (80ce2bd), fix args (a0bac86). youkaichao enabled and then disabled auto-merge on August 28, 2024 23:09; github-actions bot added the ready label Aug 28, 2024.

youkaichao merged commit ce6bf3a into vllm-project:main Aug 28, 2024 (26 of 31 checks passed) and deleted the custom_dispatch branch. youkaichao mentioned this pull request Aug 28, 2024: [torch.compile] remove reset (#7975, merged).

Referenced by downstream commits: Alvant in compressa-ai/vllm (Oct 26, 2024, 7da14a0, Signed-off-by: Alvant <[email protected]>) and LeiWang1999 in LeiWang1999/vllm-bitblas (Mar 26, 2025, 74301d6, Signed-off-by: LeiWang1999 <[email protected]>): "[torch.compile] avoid Dynamo guard evaluation overhead (vllm-project#7898), Co-authored-by: Woosuk Kwon <[email protected]>".
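To make the "dispatcher above Dynamo" idea from the top of this thread concrete, here is an illustrative sketch. It is not the code from this PR (the actual change dispatches to already-compiled code objects directly); it only shows the idea of routing each call through a small Python-level dispatch table keyed by a coarse property such as batch size, so that steady-state calls do not depend on Dynamo re-deciding which variant to run. The dispatch key choice is an assumption.

import torch

class CompiledVariantDispatcher:
    # Compile each variant once per dispatch key, then route later calls
    # straight to the stored callable.
    def __init__(self, fn):
        self._fn = fn
        self._variants = {}

    def __call__(self, x: torch.Tensor) -> torch.Tensor:
        key = x.shape[0]  # e.g. padded batch size as the dispatch key
        compiled = self._variants.get(key)
        if compiled is None:
            compiled = torch.compile(self._fn)
            self._variants[key] = compiled
        return compiled(x)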
|
2025-09-07 17:48:04
|
e3580537a41a46b0f3cd750b86b633c1857a8c90
|
https://github.com/vllm-project/vllm/pull/7753
| false | true | false | true |
PERF: Throughput, Throughput, tok/s | TEST: test, test, test
|
comaniac (Collaborator) commented Aug 21, 2024 (edited):

Reference PRs: #6144, #6819. @sighingnow and @Juelianqvq are co-authors of this PR. This PR supports enabling prefix caching and chunked prefill together. Different from the reference PRs, it simplifies the logic for dealing with partial blocks (thanks to @rkooo567 for the suggestion). The execution flow:

1. In the scheduler, when determining the new tokens to be scheduled and both chunked prefill and prefix caching are enabled: if all uncomputed tokens can be scheduled (i.e., this is the last chunk of the prompt), schedule them all; otherwise, always schedule a number of tokens divisible by the block size. For example, if the remaining budget is 133 tokens and the block size is 16, we only schedule (133 // 16) * 16 = 128 tokens. Although this wastes some token budget, it makes the following process straightforward (see the short sketch after this comment).
2. In prepare input, if all scheduled tokens are cached, we only compute the last block. Note that we cannot skip all blocks at the moment because the model runner doesn't support that case; currently, when the block manager determines prefix cache blocks, it also skips the last block for the same reason (e.g., https://github.com/vllm-project/vllm/blob/main/vllm/core/block/prefix_caching_block.py#L556). This can be improved in the future if we move prefix caching into the scheduler so that this case no longer happens. Since we guarantee the scheduled tokens are divisible by the block size, we don't need to consider partial blocks in prepare input.

A test case for functional correctness is also added.

Throughput benchmarking results. Model: neuralmagic/Meta-Llama-3-8B-Instruct-FP8; GPU: 1xL4; number of requests: 600; average prompt length: 637 (shared prefix ~180, cache hit rate ~20%); max output length: 200; block manager v1; chunked prefill size 2048.

| Branch | ChunkedPrefill | PrefixCaching | Elapsed Time (s) | Throughput (tok/s) |
|--------|----------------|---------------|------------------|--------------------|
| main   | x              | v             | 154.37           | 3631.2             |
| main   | v              | x             | 173.84           | 3215.1             |
| PR     | x              | v             | 155.88           | 3596.2             |
| PR     | v              | x             | 174.18           | 3298.8             |
| PR     | v              | v             | 142.81           | 3929.7             |

cc @rkooo567

github-actions bot commented Aug 21, 2024 with the standard CI reminder (fastcheck by default; comment /ready or add the ready label for full CI).

rkooo567 (Collaborator) commented Aug 21, 2024: result seems very good!!
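A minimal sketch of the token-budget rule described in step 1 of the PR description above (the helper name is assumed; this is not vLLM's actual scheduler code):

def chunk_tokens_to_schedule(remaining_prompt_tokens: int,
                             token_budget: int,
                             block_size: int) -> int:
    # If the whole remaining prompt fits in the budget (the last chunk),
    # schedule it all; otherwise round the budget down to a multiple of
    # the block size so no partially filled block is produced.
    if remaining_prompt_tokens <= token_budget:
        return remaining_prompt_tokens
    return (token_budget // block_size) * block_size

# Example from the description: a budget of 133 tokens with block size 16
# schedules (133 // 16) * 16 = 128 tokens.
assert chunk_tokens_to_schedule(500, 133, 16) == 128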
sighingnow (Collaborator) commented Aug 21, 2024 (edited): Hi @comaniac @rkooo567, I would like you folks to notice my last commit on #6144 (a043643). Without it, this PR is still incorrect, and the error can be reproduced with even a single request: request 1 of length 120, chunked prefill enabled, prefix caching enabled, max_num_batched_tokens = 64, max_num_seqs = 64. You will find that with this PR, in the first round tokens[0:64] are prefilled, in the second round tokens[96:119] are prefilled, and the tokens between 64 and 96 are skipped. This is because num_computed_blocks is incorrectly updated to the whole block table for the prompt tokens, rather than only the tokens that were prefilled in the first round.

comaniac commented Aug 21, 2024: IIUC, this PR already guarantees every sequence will have at least one block to compute even if it fully hits the cache, so it shouldn't trigger the issue you mentioned? If I missed anything, can you modify the unit test added in this PR so that the problem can be exposed and tested?

sighingnow commented Aug 21, 2024: It is not about a full match. In the case above there is only one request, the prefill is split into [0:64] and [64:120], and the second part is treated as prefix-matched because computed_block_nums is updated to [0,1,2,3,4,5,6,7] after the first chunked prefill.

sighingnow commented Aug 21, 2024: The test case in this PR didn't fail only because max_num_batched_tokens (14) is smaller than the block size (16). Try a larger value like 64.

comaniac commented Aug 21, 2024: The size 14 is used to test an invalid size; the actual size being tested in this case is 16. Meanwhile, I tried 16, 32, and 64, and none of them failed.

sighingnow commented Aug 21, 2024 (edited): With max_num_batched_tokens=64 you need a sequence length of at least 64 + 2 * block_size to reproduce the problem; 41 is not enough. max_num_batched_tokens=16/32 cannot reproduce the issue either, since the second block is guaranteed to be recomputed in this PR.

comaniac commented Aug 21, 2024: OK, I could reproduce the issue you pointed out. It actually only happens with block manager v1, as block manager v2 doesn't use this mechanism to mark computed blocks. This may also explain the too-good speedup I got. I'll apply your fix in this PR and try to make the test cover this case.

comaniac commented Aug 21, 2024: @sighingnow I applied your commit with some modifications. The test is also changed so that it will fail without fixing the issue in block manager v1. PTAL.

sighingnow commented Aug 22, 2024 (edited): Thanks! LGTM.

rkooo567 reviewed Aug 22, 2024: Looks good. One question: should we just make the scheduler handle prefix caching + chunked prefill correctly and simplify the logic in model_runner? Review comments were left on vllm/core/block_manager_v1.py and vllm/core/scheduler.py. On the hunk raising ValueError("When enabling chunked prefill and prefix caching, max_num_batched_tokens (chunk size) must be dividable by block size, but got ..."), rkooo567 asked (Aug 22, 2024): can you also print the chunk size and block size along with budget.token_budget % block_size?

comaniac replied Aug 23, 2024: It now looks like: ValueError: When enabling chunked prefill and prefix caching, max_num_batched_tokens (chunk size) must be dividable by block size, but got chunk_size (30) % block_size (16) = 14. (Further review comments on vllm/core/scheduler.py and vllm/worker/model_runner.py were resolved.)

sighingnow commented Aug 22, 2024: Will the fix for the v2 block manager be addressed by this PR as well? The behavior of v2-block-manager looks quite strange, and I'm wondering if #7619 is related.

comaniac commented Aug 22, 2024: I have a fix locally, but it will be a separate PR.

JaheimLee commented Aug 22, 2024: Is it for the flash-attn backend only, or for all backends?

comaniac commented Aug 22, 2024: I've tested flash-attn and FlashInfer, so at least these two backends work. Need to test xformers later.

Juelianqvq (Contributor) commented Aug 23, 2024: @comaniac https://github.com/vllm-project/vllm/blob/main/vllm/attention/backends/flashinfer.py#L360 Is this really supported here?

comaniac commented Aug 23, 2024: Yeah, I noticed that too, so I'm not fully sure what's going on. Will find some time tomorrow for it.

comaniac commented Aug 23, 2024: Updates: more tests are added. Chunked prefill only supports the flash attention backend for now; my local test passed because it didn't schedule prefill and decode in the same batch. However, there shouldn't be a blocker for FlashInfer to support chunked prefill, so we should add that support in a follow-up PR.

sighingnow commented Aug 24, 2024: May I ask why you chose to recompute the whole block when it is fully matched? Recomputing only the last token is enough, requires no changes in the scheduler, and would be a bit more efficient.

comaniac commented Aug 24, 2024: You're right, it would be a bit more efficient to compute only the last token. Meanwhile, I found that it might not be that hard to deal with prefix matching in the scheduler so that this case never happens in the model runner. I'll give it a try.

comaniac and others added 6 commits August 26, 2024 11:25: done (d893717), test (1f16ece), Add co-authors (94315d4, Co-authored-by: Tao He <[email protected]>
and Co-authored-by: Juelianqvq <[email protected]>), final (1daa758), fix (79563bf), clean up (f1e9548). comaniac added 2 commits August 26, 2024 11:26: comments and tests (d57951f), compute last (324fcec), and force-pushed the prefix-cache-chunked-prefill branch from b305e0d to 324fcec (August 26, 2024 19:59).

comaniac commented Aug 26, 2024 (edited): @sighingnow changed to re-compute only the last token, PTAL. @rkooo567 I've tried to move prefix caching into the scheduler and it's actually easy for the default scheduler. For chunked prefill, we have to refactor the scheduler (e.g., .schedule(), ._schedule_prefill(), .get_new_tokens()) and the block manager (e.g., .can_allocate()). Since we have to be careful with this refactor and it can be decoupled from this PR, I'll put it in a follow-up PR tracked by #7883.

comaniac added the ready label Aug 26, 2024.

rkooo567 approved these changes Aug 28, 2024: Generally looks good. I'd also like to add a warning if the block size is big and prefix caching + chunked prefill are enabled (because it can waste a lot of tokens). Maybe if block_size > 32 we can print a warning?

On tests/core/test_block_manager.py (the new test_mark_blocks_as_computed_with_prefix_cache_and_chunked_prefill test), rkooo567 asked (Aug 28, 2024): do we have a corresponding test for v2? comaniac replied: We don't need to test v2 because v2 automatically marks touched blocks as computed.

On vllm/core/scheduler.py (the hunk computing block_size = self.cache_config.block_size; reminder = budget.token_budget % block_size; if reminder != 0:), rkooo567 asked (Aug 28, 2024): by the way, should we raise this exception at engine start time instead and just add an assert here? comaniac replied: I feel we can just raise here for now, because this constraint should be removable once we refactor the scheduler to consider prefix caching.

comaniac commented Aug 28, 2024: Sure, I'll add the warning in a follow-up PR.
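A minimal sketch of the startup-time check and warning discussed in this review (illustrative only; names and the warning threshold are assumptions, not vLLM's actual implementation):

def check_chunked_prefill_with_prefix_caching(max_num_batched_tokens: int,
                                              block_size: int) -> None:
    # The chunk size must be a multiple of the block size so a chunk never
    # ends in a partially filled block for the prefix cache.
    remainder = max_num_batched_tokens % block_size
    if remainder != 0:
        raise ValueError(
            "When enabling chunked prefill and prefix caching, "
            f"max_num_batched_tokens ({max_num_batched_tokens}) must be "
            f"divisible by block size ({block_size}); remainder {remainder}")
    if block_size > 32:
        # Large blocks waste more of the token budget when the scheduler
        # rounds each chunk down to a block-size multiple.
        print(f"Warning: block_size={block_size} may waste token budget "
              "when chunked prefill and prefix caching are combined")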
comaniac merged commit e358053 into vllm-project:main Aug 28, 2024 (54 checks passed) and deleted the prefix-cache-chunked-prefill branch (August 28, 2024 07:36).

Juelianqvq commented Aug 28, 2024: Since this PR has been merged, both #6144 and #6819 can be closed. Are you willing to add me and @sighingnow as co-authors? @comaniac

comaniac commented Aug 28, 2024: Ah, I intended to do that. I actually put you two as co-authors in one commit of this PR and thought it would work when the PR was merged, but somehow it didn't... let me figure out how to fix that. Also cc @simon-mo.

kushanam referenced this pull request in kushanam/vllm, Aug 28, 2024 (commits 1fcd098 and 2497d44): [Performance] Enable chunked prefill and prefix caching together (vllm-project#7753).

sighingnow commented Aug 29, 2024 (edited): To whom it may concern: after this PR there are still occasional crashes when prefix caching and chunked prefill are enabled at the same time on NVIDIA GPUs (inside the flash_attn_varlen_func function in the prefix-enabled attention branch). I investigated the kernel input, found nothing wrong, and cannot reproduce it when running the kernel standalone with the pickled saved inputs. I think there are still overflow bugs inside vllm-flash-attention; setting block_size to 256 fixed the issue and the crash disappeared under high pressure.

flozi00 mentioned this pull request Sep 3, 2024: [WIP] Multi Step Chunked Prefill - Prefill Steps (#8001, closed).

comaniac added a commit to comaniac/vllm that referenced this pull request Sep 3, 2024: Add co-authors of vllm-project#7753 (f13313c), Co-authored-by: Tao He <[email protected]>
Co-authored-by: Juelianqvq <[email protected]> comaniac mentioned this pull request Sep 3, 2024 [Performance] Enable chunked prefill and prefix caching together #8120 Merged Copy link ashgold commented Sep 3, 2024 To whom it may concern: after this PR there are still occasional crashes when prefix caching and chunked prefill are enabled at the same time on Nvidia GPUs (inside the flash_attn_varlen_func function in the prefix-enabled attention branch). I investigated the kernel input and find nothing wrong and cannot reproduce it when run the kernel standalone with the pickle saved outputs. I think there are still overflow bugs inside vllm-flash-attention, set the block_size to 256 could fix the issue and the crash disappeared under high pressure. This looks like a serious bug that needs to be fixed before it can go to production. Thanks for sharing the workaround solution as well. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Member hmellor commented Sep 10, 2024 If you are using a model with max_model_len > 32K (i.e. Llama 3.1) then chunked prefill is enabled by default. However, this PR leaves the and not self.enable_prefix_caching condition in this automatic enabling of chunked prefill. This means that a user relying on the automatic enabling of chunked prefill might not notice it becoming disabled when they enable prefix caching. vllm/vllm/engine/arg_utils.py Lines 866 to 891
in da1a844
        if self.enable_chunked_prefill is None:
            # If not explicitly set, enable chunked prefill by default for
            # long context (> 32K) models. This is to avoid OOM errors in the
            # initial memory profiling phase.
            if use_long_context:
                is_gpu = device_config.device_type == "cuda"
                use_sliding_window = (model_config.get_sliding_window()
                                      is not None)
                use_spec_decode = self.speculative_model is not None
                has_seqlen_agnostic_layers = (
                    model_config.contains_seqlen_agnostic_layers(
                        parallel_config))
                if (is_gpu and not use_sliding_window and not use_spec_decode
                        and not self.enable_lora
                        and not self.enable_prompt_adapter
                        and not self.enable_prefix_caching
                        and not has_seqlen_agnostic_layers):
                    self.enable_chunked_prefill = True
                    logger.warning(
                        "Chunked prefill is enabled by default for models with "
                        "max_model_len > 32K. Currently, chunked prefill might "
                        "not work with some features or models. If you "
                        "encounter any issues, please disable chunked prefill "
                        "by setting --enable-chunked-prefill=False.")
        if self.enable_chunked_prefill is None:
            self.enable_chunked_prefill = False
cc @comaniac Copy link Collaborator Author comaniac commented Sep 10, 2024 Good point. I'll file another PR to fix it. comaniac mentioned this pull request Sep 10, 2024 [MISC] Keep chunked prefill enabled by default with long context when prefix caching is enabled #8342 Merged Alvant pushed a commit
to compressa-ai/vllm
that referenced
this pull request Oct 26, 2024 [Performance] Enable chunked prefill and prefix caching together ( vll… … 4b6fa2b …m-project#7753 )
Signed-off-by: Alvant <[email protected]> LeiWang1999 pushed a commit
to LeiWang1999/vllm-bitblas
that referenced
this pull request Mar 26, 2025 [Performance] Enable chunked prefill and prefix caching together ( vll… … 49603e3 …m-project#7753 )
Signed-off-by: LeiWang1999 <[email protected]> Sign up for free to join this conversation on GitHub .
Already have an account? Sign in to comment
|
2025-09-07 17:48:09
|
2deb029d115dadd012ce5ea70487a207cb025493
|
https://github.com/vllm-project/vllm/pull/7822
| false | true | false | true |
PERF: throughput | TEST: test, test, CI
|
Copy link Collaborator comaniac commented Aug 23, 2024 • edited Closes #7619 Per the investigation in #7619 , the root cause of block manager v2's low throughput with prefix caching is that block manager v2 doesn't mark prefix-cache-hit blocks as computed right after scheduling a batch. Specifically, the life cycle of a prefix cache block is as follows:
1. The block is allocated by the first sequence of a batch. At this moment it is added to "cached blocks" but is not marked as computed; otherwise the remaining sequences in the same batch would skip the computation of this block and produce incorrect output.
2. When that batch of sequences is finished (prefill + decode), the blocks are freed and added to the evictor.
3. When a sequence in a following batch allocates the same block, the block is reactivated from the evictor and marked as computed.
Here is a simple illustration; note that we assume each sequence is in a different batch. seq 1: [allocate-block-uncomputed] -- [prefill] --[decode1] -- ... -- [decodeN] -- [free-block]
seq 2: [allocate-block-uncomputed] -- ...
...
seq N: [allocate-block-computed] -- ... Meanwhile, block manager v1 marks the block as computed right after the prefill is scheduled: seq 1: [allocate-block-uncomputed] -- [prefill] --[decode1] -- ... -- [decodeN] -- [free-block]
seq 2: [allocate-block-computed] -- ...
... This PR fixes this issue by marking allocated blocks as touched, and let scheduler mark them as computed to achieve the same behavior of block manager v1. Benchmark on L4 Command python3 benchmarks/benchmark_prefix_caching.py \
--model neuralmagic/Meta-Llama-3-8B-Instruct-FP8 \
--output-len 200 \
--enable-prefix-caching \
[--use-v2-block-manager]
Branch  Block Manager  Warmup (s)  Processed (s)
main    v1             14.5        13.4
main    v2             23.6        13.4
PR      v1             14.5        13.3
PR      v2             14.4        13.3
cc @cadedaniel @rkooo567 @Yard1 Copy link github-actions bot commented Aug 23, 2024 👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which consists a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of default ones by unblocking the steps in your fast-check build on Buildkite UI. Once the PR is approved and ready to go, please make sure to run full CI as it is required to merge (or just use auto-merge). To run full CI, you can do one of these: Comment /ready on the PR Add ready label to the PR Enable auto-merge. 🚀 Yard1 reviewed Aug 23, 2024 View reviewed changes Copy link Collaborator Yard1 left a comment LGTM, some comments vllm/core/block/prefix_caching_block.py Outdated Show resolved vllm/core/block/prefix_caching_block.py Show resolved vllm/core/block/prefix_caching_block.py Outdated Show resolved comaniac added
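To illustrate the fix summarized in the PR description above — blocks are recorded as "touched" at allocation time and only flipped to "computed" by the scheduler once the batch has been scheduled — here is a minimal sketch. These are not vLLM's real classes; the class and method names are hypothetical.

from typing import List, Set

class TouchedBlockTracker:
    """Minimal sketch (not vLLM's actual block manager) of the touched ->
    computed bookkeeping this PR describes."""

    def __init__(self) -> None:
        self._touched: Set[int] = set()
        self._computed: Set[int] = set()

    def on_allocate(self, block_ids: List[int]) -> None:
        # Called while building a batch; do NOT mark computed yet, otherwise
        # other sequences in the same batch would skip real computation.
        self._touched.update(b for b in block_ids if b not in self._computed)

    def mark_touched_as_computed(self) -> None:
        # Called by the scheduler right after the batch is scheduled,
        # mirroring block manager v1's behavior.
        self._computed.update(self._touched)
        self._touched.clear()

    def is_computed(self, block_id: int) -> bool:
        return block_id in self._computed

tracker = TouchedBlockTracker()
tracker.on_allocate([0, 1, 2])      # first sequence of the batch
assert not tracker.is_computed(0)   # still uncomputed within the same batch
tracker.mark_touched_as_computed()  # after scheduling
assert tracker.is_computed(0)       # the next batch sees a prefix-cache hit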
the ready ONLY add when PR is ready to merge/full CI is needed label Aug 23, 2024 comaniac added 3 commits August 26, 2024 09:29 done w/o test 5daf36c add test 6d8a610 use set 020ac13 comaniac force-pushed the fix-v2-prefix-cache branch
from fd9c7c7 to 020ac13 Compare August 26, 2024 16:30 Yard1 approved these changes Aug 26, 2024 View reviewed changes Hide details View details comaniac merged commit 2deb029 into vllm-project : main Aug 26, 2024 42 checks passed Uh oh! There was an error while loading. Please reload this page . comaniac deleted the fix-v2-prefix-cache branch August 26, 2024 18:24 Alvant pushed a commit
to compressa-ai/vllm
that referenced
this pull request Oct 26, 2024 [Performance][BlockManagerV2] Mark prefix cache block as computed aft… … ed30706 …er schedule ( vllm-project#7822 )
Signed-off-by: Alvant <[email protected]> LeiWang1999 pushed a commit
to LeiWang1999/vllm-bitblas
that referenced
this pull request Mar 26, 2025 [Performance][BlockManagerV2] Mark prefix cache block as computed aft… … 9e9c3a0 …er schedule ( vllm-project#7822 )
Signed-off-by: LeiWang1999 <[email protected]> Sign up for free to join this conversation on GitHub .
Already have an account? Sign in to comment
|
2025-09-07 17:48:12
|
fc7b8d1eefcbe837a56b7c080509417fe5167e6c
|
https://github.com/vllm-project/vllm/pull/7364
| false | false | false | true |
TEST: CI, CI, CI
|
Copy link Collaborator alexm-redhat commented Aug 9, 2024 This PR is a followup for #7162 to address leftover review comments and add some more small improvements. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 youkaichao reacted with thumbs up emoji All reactions 👍 1 reaction review comments from Kaichao and hengxinCheung acb7235 Copy link github-actions bot commented Aug 9, 2024 👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which consists a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of default ones by unblocking the steps in your fast-check build on Buildkite UI. Once the PR is approved and ready to go, please make sure to run full CI as it is required to merge (or just use auto-merge). To run full CI, you can do one of these: Comment /ready on the PR Add ready label to the PR Enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . alexm-redhat mentioned this pull request Aug 9, 2024 [Performance] Optimize e2e overheads: Reduce python allocations #7162 Merged njhill reviewed Aug 9, 2024 View reviewed changes vllm/core/block_manager_v1.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . Nick's comment 6297040 njhill approved these changes Aug 9, 2024 View reviewed changes Copy link Collaborator Author alexm-redhat commented Aug 9, 2024 /ready All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . github-actions bot added
the ready ONLY add when PR is ready to merge/full CI is needed label Aug 9, 2024 comaniac enabled auto-merge (squash) August 9, 2024 15:47 Hide details View details comaniac merged commit fc7b8d1 into vllm-project : main Aug 9, 2024 58 of 60 checks passed Uh oh! There was an error while loading. Please reload this page . Alvant pushed a commit
to compressa-ai/vllm
that referenced
this pull request Oct 26, 2024 [Performance] e2e overheads reduction: Small followup diff ( vllm-proj… … a1ff013 …ect#7364 )
Signed-off-by: Alvant <[email protected]> LeiWang1999 pushed a commit
to LeiWang1999/vllm-bitblas
that referenced
this pull request Mar 26, 2025 [Performance] e2e overheads reduction: Small followup diff ( vllm-proj… … 87c9e4c …ect#7364 )
Signed-off-by: LeiWang1999 <[email protected]> Sign up for free to join this conversation on GitHub .
Already have an account? Sign in to comment
|
2025-09-07 17:48:14
|
660470e5a36b8e52083615ad7c85e9b4fd4c72ce
|
https://github.com/vllm-project/vllm/pull/7193
| true | false | false | true |
LM_EVAL: mmlu, mmlu | TEST: Test, test, CI
|
Copy link Contributor xiaobochen123 commented Aug 6, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . Using the AutoPrefixCache, the block_manager_v2 performs worse than v1. llama-3.1-8b, H800 Test 3510 cases from mmlu dataset llm = LLM(
model=path,
tensor_parallel_size=1,
trust_remote_code=True,
gpu_memory_utilization=0.8,
max_num_seqs=512,
enable_prefix_caching=True,
use_v2_block_manager=XXXX,
)
sampling_params = SamplingParams(temperature=1.0, max_tokens=1)
mmlu_dataset = [...] # 3510 cases from mmlu
outputs = llm.generate(
sampling_params=sampling_params,
prompt_token_ids=mmlu_dataset,
) The self.free_table in evictor_v2::LRUEvictor is an OrderedDict, which remembers the order in which keys were first inserted, so entries with larger timestamps end up at the back. The reason v2 is slower than v1 is that v2 traverses the whole free_table in evict(). v2 also has an 'update' operation, which breaks that insertion order. We can therefore move a block to the end of the OrderedDict when it is updated, which keeps the entry with the lowest timestamp at the front. 👍 4 youkaichao, jon-chuang, mgoin, and shixianc reacted with thumbs up emoji Copy link github-actions bot commented Aug 6, 2024 👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which consists a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of default ones by unblocking the steps in your fast-check build on Buildkite UI. Once the PR is approved and ready to go, please make sure to run full CI as it is required to merge (or just use auto-merge). To run full CI, you can do one of these: Comment /ready on the PR Add ready label to the PR Enable auto-merge. 🚀 Copy link Member youkaichao commented Aug 6, 2024 thanks for the contribution! cc @cadedaniel @zhuohan123 xiaobochen123 force-pushed the opt_evictor branch
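A minimal sketch of the OrderedDict trick described above follows. It is not the real vllm.core.evictor_v2.LRUEvictor (which also breaks timestamp ties); it only shows how move_to_end keeps the oldest entry at the front so evict() becomes O(1) instead of a full scan.

from collections import OrderedDict

class LRUEvictorSketch:
    """Sketch only: keep the block with the smallest last-access time at the
    front of free_table so evict() can pop it directly."""

    def __init__(self) -> None:
        self.free_table: "OrderedDict[int, float]" = OrderedDict()

    def add(self, block_id: int, last_accessed: float) -> None:
        self.free_table[block_id] = last_accessed

    def update(self, block_id: int, last_accessed: float) -> None:
        # Refreshing the timestamp would otherwise break insertion order, so
        # move the block to the end; the oldest entry stays at the front.
        self.free_table[block_id] = last_accessed
        self.free_table.move_to_end(block_id)

    def evict(self) -> int:
        # O(1) instead of traversing the whole free_table.
        block_id, _ = self.free_table.popitem(last=False)
        return block_id

ev = LRUEvictorSketch()
ev.add(1, last_accessed=1.0)
ev.add(2, last_accessed=2.0)
ev.update(1, last_accessed=3.0)  # block 1 becomes the most recently used
assert ev.evict() == 2           # block 2 now has the lowest timestamp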
from 52379a2 to 8f387b2 Compare August 6, 2024 08:04 opt evictor-v2 performance 0856f66 xiaobochen123 force-pushed the opt_evictor branch
from 8f387b2 to 0856f66 Compare August 6, 2024 08:19 Yard1 mentioned this pull request Aug 6, 2024 [Performance][Core] Optimize the performance of evictor v1 and v2 by applying a priority queue and lazy deletion #7209 Merged cadedaniel approved these changes Aug 6, 2024 View reviewed changes Copy link Collaborator cadedaniel commented Aug 6, 2024 Looks good to me, although the NeuralMagic folks have better understanding of the prefix caching paths. cc @robertgshaw2-neuralmagic All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Member youkaichao commented Aug 6, 2024 Looks pretty reasonable to me, and the test also passed. I will go ahead to merge this. thanks again @xiaobochen123 for the contribution! All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Hide details View details youkaichao merged commit 660470e into vllm-project : main Aug 6, 2024 28 checks passed Uh oh! There was an error while loading. Please reload this page . comaniac mentioned this pull request Aug 16, 2024 [MISC] Add prefix cache hit rate to metrics #7606 Merged Alvant pushed a commit
to compressa-ai/vllm
that referenced
this pull request Oct 26, 2024 [Core] Optimize evictor-v2 performance ( vllm-project#7193 ) … 1ed56fb Signed-off-by: Alvant <[email protected]> LeiWang1999 pushed a commit
to LeiWang1999/vllm-bitblas
that referenced
this pull request Mar 26, 2025 [Core] Optimize evictor-v2 performance ( vllm-project#7193 ) … ba80305 Signed-off-by: LeiWang1999 <[email protected]> Sign up for free to join this conversation on GitHub .
Already have an account? Sign in to comment
|
2025-09-07 17:48:19
|
6ce01f30667bbae33f112152e07a3b66b841078f
|
https://github.com/vllm-project/vllm/pull/7051
| false | true | false | true |
PERF: Throughput, Throughput | TEST: CI, CI, CI
|
Copy link Collaborator WoosukKwon commented Aug 1, 2024 • edited This PR optimizes the overhead of seq_group.get_seqs() , which was reported by @youkaichao . The solution is simple: we maintain seqs: List[Sequence] in addition to seqs_dict: Dict[int, Sequence] , and use seqs for all get_seqs calls. This leads to a small performance boost (Llama 3 8B, 1xH100):
Before: Throughput: 23.98 requests/s, 9914.65 tokens/s
After: Throughput: 24.52 requests/s, 10138.92 tokens/s
🚀 2 njhill and mgoin reacted with rocket emoji WoosukKwon added 2 commits August 1, 2024 16:13 [Performance] Optimize get_seqs 6aae340 yapf 1f5b63d WoosukKwon requested a review
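A loose sketch of the idea described in this PR — a plain list kept next to the id-to-sequence dict so the hot get_seqs() path does not rebuild a list from dict values — follows. SequenceGroupSketch and its methods are hypothetical, not vLLM's real SequenceGroup.

from typing import Dict, List

class Sequence:
    def __init__(self, seq_id: int) -> None:
        self.seq_id = seq_id

class SequenceGroupSketch:
    """Sketch only: keep seqs (list) alongside seqs_dict (id -> sequence)."""

    def __init__(self, seqs: List[Sequence]) -> None:
        self.seqs: List[Sequence] = seqs
        self.seqs_dict: Dict[int, Sequence] = {s.seq_id: s for s in seqs}

    def get_seqs(self) -> List[Sequence]:
        return self.seqs  # no list(dict.values()) allocation per call

    @property
    def first_seq(self) -> Sequence:
        # Per the review discussion below, seqs[0] is cheap enough that a
        # cached _first_seq reference is unnecessary.
        return self.seqs[0]

    def remove(self, seq_id: int) -> None:
        seq = self.seqs_dict.pop(seq_id)
        self.seqs.remove(seq)

group = SequenceGroupSketch([Sequence(0), Sequence(1)])
assert group.get_seqs() is group.get_seqs()  # same list object, no copies
assert group.first_seq.seq_id == 0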
from youkaichao August 1, 2024 23:19 Copy link github-actions bot commented Aug 1, 2024 👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which consists a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of default ones by unblocking the steps in your fast-check build on Buildkite UI. Once the PR is approved and ready to go, please make sure to run full CI as it is required to merge (or just use auto-merge). To run full CI, you can do one of these: Comment /ready on the PR Add ready label to the PR Enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . WoosukKwon added
the ready ONLY add when PR is ready to merge/full CI is needed label Aug 1, 2024 njhill approved these changes Aug 1, 2024 View reviewed changes Copy link Member njhill left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment lgtm! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/sequence.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/sequence.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . Address review 4d3d3b9 youkaichao reviewed Aug 2, 2024 View reviewed changes vllm/sequence.py @@ -458,25 +459,24 @@ def __init__( self.prompt_adapter_request = prompt_adapter_request self.encoder_seq = encoder_seq self.trace_headers = trace_headers self._first_seq = next(iter(self.seqs_dict.values())) Copy link Member youkaichao Aug 2, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I think you can still keep self._first_seq = seqs[0] , and use it to replace self.seqs[0] Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author WoosukKwon Aug 2, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I think it doesn't hurt much to use seqs[0] without caching it? _first_seq was introduced to avoid the overhead of retrieving a value from the dictionary. I believe the overhead of seqs[0] will be negligible even if it's Python. Also, since the sequence can be removed, I feel more comfortable with self.seqs[0] than caching the sequence. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions youkaichao reviewed Aug 2, 2024 View reviewed changes Copy link Member youkaichao left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Glad to see it helps performance. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Hide details View details WoosukKwon merged commit 6ce01f3 into main Aug 2, 2024 60 of 63 checks passed Uh oh! There was an error while loading. Please reload this page . WoosukKwon deleted the optimize-get-seqs branch August 2, 2024 01:29 youkaichao mentioned this pull request Aug 4, 2024 [Performance]: From SequenceGroup-native code to Sequence-native code #7116 Closed dtrifiro mentioned this pull request Aug 5, 2024 Sync with [email protected] opendatahub-io/vllm#120 Closed mawong-amd mentioned this pull request Sep 3, 2024 Reconcile merge differences [fix Custom All Reduce; remove Torchrun & Cython] ROCm/vllm#163 Closed Alvant pushed a commit
to compressa-ai/vllm
that referenced
this pull request Oct 26, 2024 [Performance] Optimize get_seqs ( vllm-project#7051 ) … a02da52 Signed-off-by: Alvant <[email protected]> LeiWang1999 pushed a commit
to LeiWang1999/vllm-bitblas
that referenced
this pull request Mar 26, 2025 [Performance] Optimize get_seqs ( vllm-project#7051 ) … 2f46dfc Signed-off-by: LeiWang1999 <[email protected]> Sign up for free to join this conversation on GitHub .
Already have an account? Sign in to comment
|
2025-09-07 17:48:23
|
89a84b0bb7b30706a02836234a94493ea8f780bf
|
https://github.com/vllm-project/vllm/pull/6779
| false | true | true | true |
PERF: throughput, latency, optimization | SERVING: API server, OpenAI API server, Frontend | TEST: test, test, CI
|
Copy link Contributor peng1999 commented Jul 25, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . Using array.array in SequenceData greatly improves performance of make_tensor_with_pad in Sampler. Micro-benchmark using 1024 input length and 2048 batch size shows a great latency improvment (79ms to 22ms): Before: After: End-to-end test on qwen-1.5-0.5b model also shows improvement on throughput: main: Processed prompts: 100%|███| 2048/2048 [01:22<00:00, 24.76it/s, est. speed input: 25352.26 toks/s, output: 3165.44 toks/s] This PR: Processed prompts: 100%|███| 2048/2048 [01:09<00:00, 29.44it/s, est. speed input: 30150.97 toks/s, output: 3764.60 toks/s] BEFORE SUBMITTING, PLEASE READ THE CHECKLIST BELOW AND FILL IN THE DESCRIPTION ABOVE PR Checklist (Click to Expand) Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process. PR Title and Classification Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following: [Bugfix] for bug fixes. [CI/Build] for build or continuous integration improvements. [Doc] for documentation fixes and improvements. [Model] for adding a new model or improving an existing model. Model name should appear in the title. [Frontend] For changes on the vLLM frontend (e.g., OpenAI API server, LLM class, etc.) [Kernel] for changes affecting CUDA kernels or other compute kernels. [Core] for changes in the core vLLM logic (e.g., LLMEngine , AsyncLLMEngine , Scheduler , etc.) [Hardware][Vendor] for hardware-specific changes. Vendor name should appear in the prefix (e.g., [Hardware][AMD] ). [Misc] for PRs that do not fit the above categories. Please use this sparingly. Note: If the PR spans more than one category, please include all relevant prefixes. Code Quality The PR need to meet the following code quality standards: We adhere to Google Python style guide and Google C++ style guide . Pass all linter checks. Please use format.sh to format your code. The code need to be well-documented to ensure future contributors can easily understand the code. Include sufficient tests to ensure the project to stay correct and robust. This includes both unit tests and integration tests. Please add documentation to docs/source/ if the PR modifies the user-facing behaviors of vLLM. It helps vLLM user understand and utilize the new features or changes. Notes for Large Changes Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with rfc-required and might not go through the PR. What to Expect for the Reviews The goal of the vLLM team is to be a transparent reviewing machine . We would like to make the review process transparent and efficient and make sure no contributor feel confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process: After the PR is submitted, the PR will be assigned to a reviewer. Every reviewer will pick up the PRs based on their expertise and availability. After the PR is assigned, the reviewer will provide status update every 2-3 days. 
If the PR is not reviewed within 7 days, please feel free to ping the reviewer or the vLLM team. After the review, the reviewer will put an action-required label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR. Please respond to all comments within a reasonable time frame. If a comment isn't clear or you disagree with a suggestion, feel free to ask for clarification or discuss the suggestion. Thank You Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 🚀 8 casper-hansen, mgoin, robertgshaw2-redhat, LiuXiaoxuanPKU, comaniac, akai-shuuichi, Xu-Chen, and Shang-QY reacted with rocket emoji All reactions 🚀 8 reactions Use array to speedup padding d2ab931 Copy link github-actions bot commented Jul 25, 2024 👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which consists a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of default ones by unblocking the steps in your fast-check build on Buildkite UI. Once the PR is approved and ready to go, please make sure to run full CI as it is required to merge (or just use auto-merge). To run full CI, you can do one of these: Comment /ready on the PR Add ready label to the PR Enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . reformat code d9c591e peng1999 changed the title Use array to speedup padding [Core] Use array to speedup padding Jul 25, 2024 Copy link Contributor casper-hansen commented Jul 25, 2024 Nice to see an 18% speedup from this optimization. Is it mainly for small models? All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mgoin reviewed Jul 25, 2024 View reviewed changes vllm/model_executor/sampling_metadata.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . mgoin added
the ready ONLY add when PR is ready to merge/full CI is needed label Jul 25, 2024 Copy link Contributor Author peng1999 commented Jul 25, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . Is it mainly for small models? Yes. This PR is for small models and large batch sizes. The from_sampling_metadata function, optimized by this PR, primarily runs on the CPU and is independent of logists. Therefore, it can overlap with the GPU work of model inference. It will only be on the critical path if its execution time exceeds that of model inference, which occurs with smaller models. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mgoin approved these changes Jul 25, 2024 View reviewed changes Copy link Member mgoin left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment LGTM any concerns @youkaichao ? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Contributor daquexian commented Jul 25, 2024 Great PR! Would you mind sharing what tool you used to get this image, is it nsight system? All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . youkaichao reviewed Jul 25, 2024 View reviewed changes vllm/sequence.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . Copy link Contributor Author peng1999 commented Jul 26, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . Great PR! Would you mind sharing what tool you used to get this image, is it nsight system? Yes. The blue spans are recorded using NVTX. 👍 1 daquexian reacted with thumbs up emoji ❤️ 1 daquexian reacted with heart emoji All reactions 👍 1 reaction ❤️ 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . youkaichao approved these changes Jul 26, 2024 View reviewed changes Copy link Member youkaichao left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Thanks for the great job! Please merge the latest main to pass the tests. I once tried to replace the whole prompt/output tokens to numpy array, but it involves changing too much code, so I gave it up due to limited bandwidth. It's good to see this speedup with a self-contained change. cc @alexm-neuralmagic if you are planning to change the underlying data structure in block managers. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Merge remote-tracking branch 'upstream/main' into opt-array fb63840 Hide details View details youkaichao merged commit 89a84b0 into vllm-project : main Jul 26, 2024 72 checks passed Uh oh! There was an error while loading. Please reload this page . peng1999 deleted the opt-array branch July 30, 2024 09:58 dtrifiro mentioned this pull request Aug 5, 2024 Sync with [email protected] opendatahub-io/vllm#120 Closed Alvant pushed a commit
to compressa-ai/vllm
that referenced
this pull request Oct 26, 2024 [Core] Use array to speedup padding ( vllm-project#6779 ) … 62afef0 Signed-off-by: Alvant <[email protected]> LeiWang1999 pushed a commit
to LeiWang1999/vllm-bitblas
that referenced
this pull request Mar 26, 2025 [Core] Use array to speedup padding ( vllm-project#6779 ) … 3f840ef Signed-off-by: LeiWang1999 <[email protected]> Sign up for free to join this conversation on GitHub .
Already have an account? Sign in to comment
|
2025-09-07 17:48:26
|
9ed82e7074a18e25680ab106fc846364ad97bc00
|
https://github.com/vllm-project/vllm/pull/6520
| false | true | true | true |
PERF: profiling | SERVING: API server, OpenAI API server, Frontend | TEST: test, test, test
|
Copy link Collaborator Yard1 commented Jul 17, 2024 Small performance improvements in different components, discovered during profiling. Look at commit list for details! PR Checklist (Click to Expand) Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process. PR Title and Classification Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following: [Bugfix] for bug fixes. [CI/Build] for build or continuous integration improvements. [Doc] for documentation fixes and improvements. [Model] for adding a new model or improving an existing model. Model name should appear in the title. [Frontend] For changes on the vLLM frontend (e.g., OpenAI API server, LLM class, etc.) [Kernel] for changes affecting CUDA kernels or other compute kernels. [Core] for changes in the core vLLM logic (e.g., LLMEngine , AsyncLLMEngine , Scheduler , etc.) [Hardware][Vendor] for hardware-specific changes. Vendor name should appear in the prefix (e.g., [Hardware][AMD] ). [Misc] for PRs that do not fit the above categories. Please use this sparingly. Note: If the PR spans more than one category, please include all relevant prefixes. Code Quality The PR need to meet the following code quality standards: We adhere to Google Python style guide and Google C++ style guide . Pass all linter checks. Please use format.sh to format your code. The code need to be well-documented to ensure future contributors can easily understand the code. Include sufficient tests to ensure the project to stay correct and robust. This includes both unit tests and integration tests. Please add documentation to docs/source/ if the PR modifies the user-facing behaviors of vLLM. It helps vLLM user understand and utilize the new features or changes. Notes for Large Changes Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with rfc-required and might not go through the PR. What to Expect for the Reviews The goal of the vLLM team is to be a transparent reviewing machine . We would like to make the review process transparent and efficient and make sure no contributor feel confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process: After the PR is submitted, the PR will be assigned to a reviewer. Every reviewer will pick up the PRs based on their expertise and availability. After the PR is assigned, the reviewer will provide status update every 2-3 days. If the PR is not reviewed within 7 days, please feel free to ping the reviewer or the vLLM team. After the review, the reviewer will put an action-required label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR. Please respond to all comments within a reasonable time frame. If a comment isn't clear or you disagree with a suggestion, feel free to ask for clarification or discuss the suggestion. Thank You Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone! Sorry, something went wrong. Uh oh! 
There was an error while loading. Please reload this page . All reactions Yard1 added 4 commits July 17, 2024 14:11 Cache importlib in ModelRegistry c5e350b Fast return for get_common_computed_block_ids f269738 chunk_list into an iterator a36da80 Cache _first_seq in SequenceGroup 47ce44f Copy link github-actions bot commented Jul 17, 2024 👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only trigger fastcheck CI to run, which consists only a small and essential subset of tests to quickly catch errors with the flexibility to run extra individual tests on top (you can do this by unblocking test steps in the Buildkite run). Full CI run is still required to merge this PR so once the PR is ready to go, please make sure to run it. If you need all test signals in between PR commits, you can trigger full CI as well. To run full CI, you can do one of these: Comment /ready on the PR Add ready label to the PR Enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Yard1 requested a review
from njhill July 17, 2024 21:15 Copy link Collaborator Author Yard1 commented Jul 17, 2024 /ready All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . github-actions bot added
the ready ONLY add when PR is ready to merge/full CI is needed label Jul 17, 2024 Yard1 added 3 commits July 17, 2024 14:16 Lint 69d73a3 Lint bbef0e1 Lint 34c30df comaniac approved these changes Jul 17, 2024 View reviewed changes Copy link Collaborator comaniac left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment LGTM Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Yard1 enabled auto-merge (squash) July 17, 2024 21:52 rkooo567 approved these changes Jul 17, 2024 View reviewed changes Yard1 added 4 commits July 17, 2024 15:06 Fix test 6b45138 Fix f27f653 Lint 31e4c76 Fix dd897db cadedaniel approved these changes Jul 18, 2024 View reviewed changes Copy link Collaborator cadedaniel left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Some test failure Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Member DarkLight1337 commented Jul 18, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . It looks like you are calling list after chunk_list in each case. Wouldn't that defeat the point of making it a generator function? Edit: Never mind, I see it being used in https://github.com/vllm-project/vllm/blob/main/vllm/core/block/block_table.py#L265 To make the code a bit cleaner (by reducing the number of list calls), I suggest adding a new generator function iter_chunk_list to be used in for loops (e.g. in the above case), while keeping the existing semantics of chunk_list . All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mgoin approved these changes Jul 18, 2024 View reviewed changes Fix ebf4794 Yard1 disabled auto-merge July 18, 2024 20:13 Yard1 enabled auto-merge (squash) July 18, 2024 20:13 Yard1 added 2 commits July 18, 2024 20:17 Merge branch 'upstream_main' into small_improvements 5eddc37 Merge branch 'upstream_main' into small_improvements e39f05c simon-mo disabled auto-merge July 19, 2024 19:10 simon-mo merged commit 9ed82e7 into main Jul 19, 2024 Yard1 deleted the small_improvements branch July 19, 2024 22:17 xjpang pushed a commit
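For the "chunk_list into an iterator" item in the commit list above, here is a minimal sketch of a generator-based chunker and the list() trade-off raised in the review comment; the exact vLLM signature may differ.

from typing import Iterator, List, TypeVar

T = TypeVar("T")

def chunk_list(lst: List[T], chunk_size: int) -> Iterator[List[T]]:
    """Sketch: yield chunks lazily instead of building a list of lists."""
    for i in range(0, len(lst), chunk_size):
        yield lst[i:i + chunk_size]

# Callers that stream over chunks never materialize them all at once ...
for block_token_ids in chunk_list(list(range(10)), 4):
    print(block_token_ids)

# ... while callers that need indexing wrap it in list() explicitly, which is
# the trade-off discussed in the review comment above.
chunks = list(chunk_list(list(range(10)), 4))
assert chunks == [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]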
to xjpang/vllm
that referenced
this pull request Jul 24, 2024 [Misc] Small perf improvements ( vllm-project#6520 ) 2660a29 mawong-amd mentioned this pull request Sep 3, 2024 Reconcile merge differences [fix Custom All Reduce; remove Torchrun & Cython] ROCm/vllm#163 Closed Alvant pushed a commit
to compressa-ai/vllm
that referenced
this pull request Oct 26, 2024 [Misc] Small perf improvements ( vllm-project#6520 ) … b1401bc Signed-off-by: Alvant <[email protected]> LeiWang1999 pushed a commit
to LeiWang1999/vllm-bitblas
that referenced
this pull request Mar 26, 2025 [Misc] Small perf improvements ( vllm-project#6520 ) … b0b4998 Signed-off-by: LeiWang1999 <[email protected]> Sign up for free to join this conversation on GitHub .
Already have an account? Sign in to comment
|
2025-09-07 17:48:29
|
3476ed0809ec91a3457da0cb90543133a4f4b519
|
https://github.com/vllm-project/vllm/pull/5602
| true | true | true | true |
LM_EVAL: LM-Eval | PERF: itl, benchmark serving, optimization | SERVING: serving, serving, api server | TEST: test, test, test
|
Copy link Collaborator alexm-redhat commented Jun 17, 2024 • edited This PR optimizes block_manager_v2 Python logic to make it comparable to block_manager_v1. The goal is to enable block_manager_v2 by default as part of the spec decode project. The issues optimized are:
- Python Block object allocations/deallocations are expensive on the hot path of iterative batching, so a block pool is used to cache Block objects (see the sketch after this summary).
- Any string/list duplication should be avoided, especially for token id lists.
- Modified the Prefix Caching Block/Allocator to avoid any full traversals of block_ids by using dynamic/incremental computations.
- Redid the way "access all blocks" updates timestamps by deferring the actual updates to free(..) of sequences.
Here is the initial performance comparison for both standard and prefix-cache enabled runs: ❤️ 1 cadedaniel reacted with heart emoji 🚀 5 cadedaniel, mgoin, robertgshaw2-redhat, zhuohan123, and CatherineSue reacted with rocket emoji robertgshaw2-redhat requested a review
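A minimal sketch of the block-pool idea from the first bullet above follows. Block and BlockPool here are hypothetical stand-ins, not vLLM's real classes; the point is recycling Python objects instead of allocating and garbage-collecting them on the hot scheduling path.

from typing import List, Optional

class Block:
    """Plain data object standing in for vLLM's Block (sketch only)."""

    def __init__(self) -> None:
        self.block_id: Optional[int] = None
        self.token_ids: List[int] = []

    def reset(self, block_id: Optional[int], token_ids: List[int]) -> None:
        self.block_id = block_id
        self.token_ids = token_ids

class BlockPool:
    """Sketch: cache Block objects and hand them back out on allocation."""

    def __init__(self, pool_size: int) -> None:
        self._free: List[Block] = [Block() for _ in range(pool_size)]

    def acquire(self, block_id: Optional[int], token_ids: List[int]) -> Block:
        block = self._free.pop() if self._free else Block()
        block.reset(block_id, token_ids)
        return block

    def release(self, block: Block) -> None:
        block.reset(None, [])
        self._free.append(block)

pool = BlockPool(pool_size=2)
b = pool.acquire(block_id=7, token_ids=[1, 2, 3])
pool.release(b)                      # object goes back to the pool
assert pool.acquire(None, []) is b   # and is reused, not reallocated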
from cadedaniel June 17, 2024 14:58 alexm-redhat marked this pull request as draft June 17, 2024 15:02 cadedaniel reviewed Jun 18, 2024 View reviewed changes vllm/sequence.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/core/block_manager_v2.py block_ids = self.block_tables[seq.seq_id].physical_block_ids assert all(b is not None for b in block_ids) Copy link Collaborator cadedaniel Jun 17, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment can we keep these in for correctness? can have a flag strict_mode which checks these only in testing / not in production Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author alexm-redhat Jun 19, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I have added "assert block_id is not None" checks into BlockList so the invariant of "assert all(b is not None for b in block_ids)" is always kept. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator cadedaniel Jun 25, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment awesome Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/core/block/naive_block.py Outdated block_size: int, block_id: Optional[int] = None): # Please keep sync with the __init__() # (Calling __init__() directly raises linter errors) Copy link Collaborator cadedaniel Jun 17, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Can we ignore the linter error instead of duplicating code ? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author alexm-redhat Jun 19, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment This actually works! Thanks for the suggestion Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/core/block/block_table.py Outdated if block_token_ids: blocks.extend( self._allocator.allocate_immutable_group( Copy link Collaborator cadedaniel Jun 17, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment nit: we can name it allocate_immutable_blocks to reduce new concepts. can also rename the bs=1 path to be allocate_immutable_block so contrast is clear. Sorry, something went wrong. Uh oh! 
There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author alexm-redhat Jun 19, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Good idea, renamed the functions as you proposed. In addition renamed allocate_mutable => allocate_mutable_block Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/core/block/block_table.py Outdated Comment on lines 143 to 196 blocks = self. _blocks [self._num_full_slots // self._block_size:] blocks = self. blocks [self._num_full_slots // self._block_size:] Copy link Collaborator cadedaniel Jun 17, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment is this working? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author alexm-redhat Jun 19, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Yeah, this invokes the property blocks(..) and it returns self._blocks.list() Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator cadedaniel Jun 21, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment oh gotcha Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions 3 hidden conversations Load more… vllm/core/block/naive_block.py Outdated token_ids=token_ids, block_size=block_size, block_id=physical_block_id) block.block_pool_id = block_pool_id Copy link Collaborator cadedaniel Jun 17, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment can we avoid extending the block API for this optimization? we can keep a mapping of object address to block pool id in this class Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author alexm-redhat Jun 19, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Yeah, just replaced with simple class member Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . ❤️ 1 cadedaniel reacted with heart emoji All reactions ❤️ 1 reaction vllm/core/block/naive_block.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/core/block/naive_block.py Show resolved Hide resolved Uh oh! There was an error while loading. 
Please reload this page . vllm/core/block/naive_block.py Outdated assert block.block_id is not None self._free_block_id(block.block_id) block.block_id = None def free(self, block: Block) -> None: Copy link Collaborator cadedaniel Jun 17, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment nit: for readability, have this invoke free_block_id instead of _free_block_id Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author alexm-redhat Jun 19, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Good catch, modified to invoke free_block_id directly Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/core/block/cpu_gpu_block_allocator.py Outdated @@ -149,6 +169,17 @@ def allocate_immutable(self, prev_block: Optional[Block], return self._allocators[device].allocate_immutable( prev_block, token_ids) def free_block_id(self, block: Block) -> None: Copy link Collaborator cadedaniel Jun 18, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I ran out of time to review today. Can you help me understand why we need a new API for this // if there's no way to combine free_block and free_block_id ? ideally we have one way of freeing Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author alexm-redhat Jun 19, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment The issue is that inside cow_block_if_not_appendable(..) (in common.py) we decrement ref count for the block_id for this block, and then in the caller, we reuse the same block object while assigning to its block_id the newly allocated block id (self._block_id = (self._allocator.cow_block_if_not_appendable(..)). Same happens in prefix caching inside _free_block_id_for_block(..) when we promote a naive block to the immutable (prefix block) => we call return self._hashless_allocator.free_block_id(block), and at the caller reuse the same block object. Without the block pool a free() was simply setting block.block_id = None, but with block pool, free(..) is actually releasing the block itself, so the second free_block_id() is behaving more similar to block.block_id = None Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author alexm-redhat Jun 19, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . 
Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I will try to restructure the code a bit, so that we don't have the free_block_id. Will keep you posted about this issue. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator cadedaniel Jun 19, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Sounds good. It sounds like my original design should have had more thought on the distinction between Python block objects and block ids themselves. It's OK if we have some suboptimality given that, but also hope you're able to find a simple solution :) Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author alexm-redhat Jun 20, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I was able to refactor the code so that only free() is used at all places. I think it is a good change since it forces an explicit free/alloc calls for block objects, and this avoids potential memory leaks (due to previous separation between the block_id and block - currently they are "more fused"). The main things I needed to change is CoW and promote_to_immutable (in prefix-caching). The change moves these two functions to the allocator level (outside of the block itself), since these functions free-and-reallocate a new block, which needs to be updated in the associated lists in block_table.py. To make this cleaner, I added a function in block_table.py that is called "append_token_ids_and_update_allocator". In addition, I redid the free() procedure of prefix-caching since it was a bit complicated, by separating the two main cases there: (1) immutable/promoted block and (2) mutable/hashless block. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author alexm-redhat Jun 20, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I have verified performance it is even a little better now. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author alexm-redhat Jun 20, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Also, I have squashed the relevant commits to "refactor code so that only free() is used" so it will be easier to see the changes I did only for this change. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 cadedaniel reacted with thumbs up emoji All reactions 👍 1 reaction Copy link Collaborator cadedaniel commented Jun 18, 2024 Great work btw! thanks! 
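To make the refactor described above easier to picture — copy-on-write handled at the allocator level, with free() as the single release path — here is a loose, hypothetical sketch. Only the name cow_block_if_not_appendable comes from the discussion; the class, signatures, and bodies are assumptions, not vLLM's real implementation.

from typing import Dict, List

class CowAllocatorSketch:
    """Sketch only: when a block is shared, appending triggers a
    free-then-reallocate inside the allocator, and the block table simply
    records whichever block id comes back."""

    def __init__(self, num_blocks: int) -> None:
        self._free_ids: List[int] = list(range(num_blocks))
        self._refcount: Dict[int, int] = {}

    def allocate(self) -> int:
        block_id = self._free_ids.pop()
        self._refcount[block_id] = 1
        return block_id

    def fork(self, block_id: int) -> int:
        self._refcount[block_id] += 1
        return block_id

    def free(self, block_id: int) -> None:
        self._refcount[block_id] -= 1
        if self._refcount[block_id] == 0:
            del self._refcount[block_id]
            self._free_ids.append(block_id)

    def cow_block_if_not_appendable(self, block_id: int) -> int:
        if self._refcount[block_id] == 1:
            return block_id          # exclusive owner: append in place
        self.free(block_id)          # drop our reference to the shared block
        return self.allocate()       # caller copies data into the new block

alloc = CowAllocatorSketch(num_blocks=4)
shared = alloc.allocate()
alloc.fork(shared)                       # a second sequence shares the block
new_id = alloc.cow_block_if_not_appendable(shared)
assert new_id != shared                  # the writer got its own block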
Collaborator Author alexm-redhat commented Jun 18, 2024: Updated the PR with performance fixes for prefix-caching block_manager_v2. The table above is updated with new numbers for both the standard run and the prefix-cache-enabled run.

Collaborator Author alexm-redhat commented Jun 18, 2024: Will start addressing review comments and cleaning up the PR.

Yard1 reviewed Jun 18, 2024 (vllm/core/block/block_table.py, outdated; resolved).

hibukipanim commented Jun 19, 2024: As the PR touches prefix caching and prepares v2-block-manager to become the default, I was curious to see if the PR might resolve this correctness issue: #5543 (comment). You might be interested to know that when running with this branch (commit c1f650fa7f162eb48763d8eeb70081986379f7e1) with --enable-prefix-caching --use-v2-block-manager, the snippet in the linked issue crashes the server with (framework dispatch frames trimmed):

    ERROR 06-19 07:45:58 async_llm_engine.py:45] Engine background task failed
    ERROR 06-19 07:45:58 async_llm_engine.py:45] Traceback (most recent call last):
    ERROR 06-19 07:45:58 async_llm_engine.py:45]   File "/workspace/nm-vllm/vllm/engine/async_llm_engine.py", line 40, in _raise_exception_on_finish
    ERROR 06-19 07:45:58 async_llm_engine.py:45]     task.result()
    ERROR 06-19 07:45:58 async_llm_engine.py:45]   File "/workspace/nm-vllm/vllm/engine/async_llm_engine.py", line 521, in run_engine_loop
    ERROR 06-19 07:45:58 async_llm_engine.py:45]     has_requests_in_progress = await asyncio.wait_for(
    ERROR 06-19 07:45:58 async_llm_engine.py:45]   File "/usr/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
    ERROR 06-19 07:45:58 async_llm_engine.py:45]     return fut.result()
    ERROR 06-19 07:45:58 async_llm_engine.py:45]   File "/workspace/nm-vllm/vllm/engine/async_llm_engine.py", line 495, in engine_step
    ERROR 06-19 07:45:58 async_llm_engine.py:45]     request_outputs = await self.engine.step_async()
    ERROR 06-19 07:45:58 async_llm_engine.py:45]   File "/workspace/nm-vllm/vllm/engine/async_llm_engine.py", line 226, in step_async
    ERROR 06-19 07:45:58 async_llm_engine.py:45]     output = await self.model_executor.execute_model_async(
    ERROR 06-19 07:45:58 async_llm_engine.py:45]   File "/workspace/nm-vllm/vllm/executor/gpu_executor.py", line 117, in execute_model_async
    ERROR 06-19 07:45:58 async_llm_engine.py:45]     output = await make_async(self.driver_worker.execute_model
    ERROR 06-19 07:45:58 async_llm_engine.py:45]   File "/workspace/nm-vllm/vllm/worker/worker.py", line 272, in execute_model
    ERROR 06-19 07:45:58 async_llm_engine.py:45]     output = self.model_runner.execute_model(seq_group_metadata_list,
    ERROR 06-19 07:45:58 async_llm_engine.py:45]   File "/workspace/nm-vllm/vllm/worker/model_runner.py", line 736, in execute_model
    ERROR 06-19 07:45:58 async_llm_engine.py:45]     hidden_states = model_executable(
    ERROR 06-19 07:45:58 async_llm_engine.py:45]   File "/workspace/nm-vllm/vllm/model_executor/models/llama.py", line 371, in forward
    ERROR 06-19 07:45:58 async_llm_engine.py:45]     hidden_states = self.model(input_ids, positions, kv_caches,
    ERROR 06-19 07:45:58 async_llm_engine.py:45]   File "/workspace/nm-vllm/vllm/model_executor/models/llama.py", line 288, in forward
    ERROR 06-19 07:45:58 async_llm_engine.py:45]     hidden_states, residual = layer(
    ERROR 06-19 07:45:58 async_llm_engine.py:45]   File "/workspace/nm-vllm/vllm/model_executor/models/llama.py", line 227, in forward
    ERROR 06-19 07:45:58 async_llm_engine.py:45]     hidden_states = self.self_attn(
    ERROR 06-19 07:45:58 async_llm_engine.py:45]   File "/workspace/nm-vllm/vllm/model_executor/models/llama.py", line 161, in forward
    ERROR 06-19 07:45:58 async_llm_engine.py:45]     attn_output = self.attn(q, k, v, kv_cache, attn_metadata)
    ERROR 06-19 07:45:58 async_llm_engine.py:45]   File "/workspace/nm-vllm/vllm/attention/layer.py", line 89, in forward
    ERROR 06-19 07:45:58 async_llm_engine.py:45]     return self.impl.forward(query, key, value, kv_cache, attn_metadata,
    ERROR 06-19 07:45:58 async_llm_engine.py:45]   File "/workspace/nm-vllm/vllm/attention/backends/flash_attn.py", line 338, in forward
    ERROR 06-19 07:45:58 async_llm_engine.py:45]     flash_attn_varlen_func(
    ERROR 06-19 07:45:58 async_llm_engine.py:45]   File "/usr/local/lib/python3.10/dist-packages/vllm_flash_attn/flash_attn_interface.py", line 1099, in flash_attn_varlen_func
    ERROR 06-19 07:45:58 async_llm_engine.py:45]     return FlashAttnVarlenFunc.apply(
    ERROR 06-19 07:45:58 async_llm_engine.py:45]   File "/usr/local/lib/python3.10/dist-packages/vllm_flash_attn/flash_attn_interface.py", line 596, in forward
    ERROR 06-19 07:45:58 async_llm_engine.py:45]     out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = _flash_attn_varlen_forward(
    ERROR 06-19 07:45:58 async_llm_engine.py:45]   File "/usr/local/lib/python3.10/dist-packages/vllm_flash_attn/flash_attn_interface.py", line 88, in _flash_attn_varlen_forward
    ERROR 06-19 07:45:58 async_llm_engine.py:45]     out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = flash_attn_cuda.varlen_fwd(
    ERROR 06-19 07:45:58 async_llm_engine.py:45] RuntimeError: out must have shape (total_q, num_heads, head_size_og)

    Exception in callback functools.partial(<function _raise_exception_on_finish at 0x7f2bc22f4160>, error_callback=<bound method AsyncLLMEngine._error_callback of <vllm.engine.async_llm_engine.AsyncLLMEngine object at 0x7f2bb73e0910>>)
    Traceback (most recent call last):
      ...
    RuntimeError: out must have shape (total_q, num_heads, head_size_og)

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "uvloop/cbhandles.pyx", line 63, in uvloop.loop.Handle._run
      File "/workspace/nm-vllm/vllm/engine/async_llm_engine.py", line 47, in _raise_exception_on_finish
        raise AsyncEngineDeadError(
    vllm.engine.async_llm_engine.AsyncEngineDeadError: Task finished unexpectedly. This should never happen! Please open an issue on Github. See stack trace above for the actual cause.

    INFO 06-19 07:45:58 async_llm_engine.py:158] Aborted request cmpl-4ce91102896f49d598ec6313f9629a10-0.
    INFO:     172.17.0.1:47640 - "POST /v1/completions HTTP/1.1" 500 Internal Server Error
    ERROR:    Exception in ASGI application

Collaborator Author alexm-redhat commented Jun 19, 2024: @hibukipanim thanks for pointing out this issue, I will check.

alexm-redhat marked this pull request as ready for review June 19, 2024 19:03.

alexm-redhat force-pushed the block_manager_v2_perf branch
2 times, most recently from 0148b6e to e08d643 (June 20, 2024 21:34).

Yard1 reviewed Jun 21, 2024, on vllm/sequence.py (outdated):

    @property
    def prompt_token_ids(self) -> List[int]:
        return self._prompt_token_ids

Collaborator Yard1 (Jun 21, 2024): I think we should return a tuple/shallow copy so that this, and also output_token_ids, doesn't get modified by mistake (and thus bypass _update_cached_all_tokens).

Collaborator cadedaniel (Jun 25, 2024): Yeah, what happens if someone modifies the prompt token ids / output token ids list?

Collaborator Author alexm-redhat (Jun 27, 2024): Good catch, changed the return types to be tuples.

Collaborator Author alexm-redhat (Jun 28, 2024): I have changed the approach here to protect accesses to prompt_token_ids and output_token_ids. Now it uses a class MonitoredList that records a timestamp of the last update, and based on that the cached all-tokens list is refreshed. I did it this way to avoid changing all usages of the prompt/output token ids due to the tuple change, and it also avoids unnecessary list => tuple copies, which are expensive.

Collaborator Author alexm-redhat (Jun 29, 2024, edited): @Yard1 found out that there is actually an issue with the deserialization with Ray, so I have removed this and made the prompt/output token_ids accessors return tuples. It introduces a conversion of the output_token_ids to a tuple, but it does not seem to be bad and the performance is still good. To make it work, I have propagated the tuple type upward through the vLLM software stack, since we don't expect seq_data users to use these accessors to change data (only via the append_token() function).
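To make the accessor change concrete, here is a minimal sketch of the pattern the thread converges on. This is an illustrative assumption rather than the actual vllm/sequence.py code: the class name SequenceDataSketch and the get_all_token_ids helper are hypothetical, while prompt_token_ids, output_token_ids, and append_token_id mirror the names discussed above.

```python
# Illustrative sketch (not the actual vLLM implementation): expose token ids as
# tuples so callers cannot mutate them in place, and keep a cached concatenation
# that is only refreshed through the sanctioned append path.
from typing import List, Tuple


class SequenceDataSketch:
    def __init__(self, prompt_token_ids: List[int]) -> None:
        self._prompt_token_ids: List[int] = list(prompt_token_ids)
        self._output_token_ids: List[int] = []
        self._cached_all_token_ids: List[int] = list(prompt_token_ids)

    @property
    def prompt_token_ids(self) -> Tuple[int, ...]:
        # Returning a tuple prevents in-place mutation, which would silently
        # bypass the cached all-tokens list.
        return tuple(self._prompt_token_ids)

    @property
    def output_token_ids(self) -> Tuple[int, ...]:
        return tuple(self._output_token_ids)

    def append_token_id(self, token_id: int) -> None:
        # The only sanctioned mutation path; it keeps the cache consistent.
        self._output_token_ids.append(token_id)
        self._cached_all_token_ids.append(token_id)

    def get_all_token_ids(self) -> List[int]:
        return self._cached_all_token_ids
```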
Collaborator cadedaniel commented Jun 21, 2024: ok, looking.

cadedaniel reviewed Jun 26, 2024 and left a comment: Most comments are nits. The big question is the design change around CoW/promotion (I think it's actually a bad design change). Let's schedule some time to go over this synchronously, as I think it will be faster than back and forth.

On examples/offline_inference.py (lines 14 to 16, outdated):

    llm = LLM(model="facebook/opt-125m")
    llm = LLM(model="facebook/opt-125m",
              use_v2_block_manager=True,
              enable_prefix_caching=True)

Collaborator cadedaniel (Jun 20, 2024): Let's leave this out for now.
Collaborator Author alexm-redhat (Jun 26, 2024): Good catch, removed.

On tests/core/block/test_prefix_caching_block.py:

    first_chain = TestPrefixCachingBlockAllocator.create_immutable_chain(
        block_size=block_size,
        token_ids=token_ids,
        allocator=allocator,
    )
    # mark all blocks in first chain as computed
    allocator.mark_blocks_as_computed(blocks)

Collaborator cadedaniel (Jun 21, 2024): TODO(cade) see why this API is no longer required.

On vllm/core/block/block_table.py (outdated):

    from vllm.utils import Device, cdiv, chunk_list

    # This class is an optimization to allow fast-access to physical block ids

Collaborator cadedaniel (Jun 21, 2024): Let's write this as a docstring.
Collaborator cadedaniel (Jun 21, 2024): Suggest also writing how it achieves the optimization (you can write docstrings for individual functions, but it's more tedious).
Collaborator Author alexm-redhat (Jun 27, 2024): Added.

On vllm/core/block/block_table.py (class BlockList, outdated):

Collaborator cadedaniel (Jun 21, 2024): nit: it would be great to have basic unit tests for this helper.
Collaborator cadedaniel (Jun 21, 2024): nit: I have a preference for putting helper methods/functions below the main class of the file, so the file can be read top-down.
Collaborator Author alexm-redhat (Jun 27, 2024): Moved to block/common.py.

On vllm/core/block_manager_v2.py (lines 103 to 104, outdated):

    self._cached_computed_seq_blocks: Dict[SeqId, List[int]] = {}
    self._seq_last_access: Dict[SeqId, float] = {}

Collaborator cadedaniel (Jun 25, 2024): What's the motivation for raising these to the BlockManager level? We should keep things simple at this layer unless there's good reason not to.
Collaborator Author alexm-redhat (Jun 27, 2024): There was a significant overhead in these function calls, since they traversed the full block lists.

Collaborator cadedaniel (Jun 28, 2024): Can we modify the API so that it allows caching the result and we don't have to traverse the full block lists? Two downsides: (1) we expose more complexity in this layer than is necessary (this is tech debt we can live with, if it's too hard); (2) we make it harder for other block managers to use prefix caching (we may have a block manager which specializes for another type, e.g. the newer models which use sliding window plus normal attention).

Collaborator Author alexm-redhat (Jun 30, 2024): This is a good idea. I have refactored this logic out into two classes, ComputedBlocksTracker and LastAccessBlocksTracker, so it will be easier to port the logic to other places.

On vllm/core/block_manager_v2.py (lines 239 to 240, outdated):

    # TODO: Ask Cade how it may be possible to have
    # allocated block id inside the evictor

Collaborator cadedaniel (Jun 25, 2024): Let's go over this.

On vllm/core/block_manager_v2.py:

    block_ids = self.block_tables[seq.seq_id].physical_block_ids
    assert all(b is not None for b in block_ids)

Collaborator cadedaniel (Jun 25, 2024): Awesome.

On vllm/core/block_manager_v2.py (outdated):

    def get_and_update_computed_block_ids(self, seqs):

Collaborator cadedaniel (Jun 25, 2024): Docstring / typing.
Collaborator Author alexm-redhat (Jun 27, 2024): Added.

alexm-redhat commented Jun 27, 2024 and left a comment (edited): Updated the PR with addressed review comments from Cade and Yard1. I have moved the CoW and promotion functionality back into the block and ensured that there is no new _free_block_id() interface, to minimize interface changes. Also, I moved the code around a bit inside the prefix-caching allocator to make it more readable and easier to maintain. Verified that performance is still good, for both standard and prefix-cached runs. TODO: fixing tests now.

On vllm/core/block/naive_block.py (lines 12 to 19, outdated):

    # Used to pre-allocate block objects, in order to avoid excessive python
    # object allocations/deallocations.
    # The pool starts from "pool_size" objects and will increase to more objects
    # if necessary
    #
    # Note that multiple block objects may point to the same physical block id,
    # which is why this pool is needed, so that it will be easier to support
    # prefix caching and more complicated sharing of physical blocks.

Collaborator Author alexm-redhat (Jun 26, 2024): Added a docstring and moved the BlockPool class to block/common.py.

On vllm/core/block/prefix_caching_block.py (class BlockTracker):

Collaborator Author alexm-redhat (Jun 26, 2024): Added.
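The pooling idea described in the naive_block.py comment above can be sketched roughly as follows. This is a hypothetical illustration, not vLLM's BlockPool: the constructor arguments, the growth policy in increase_pool, and the init_block/free_block helpers are assumptions made for the example.

```python
# Rough sketch (hypothetical) of a block-object pool: Block objects are
# pre-allocated and recycled so that allocating/freeing blocks does not churn
# Python object allocations. Multiple Block objects may refer to the same
# physical block id, so pooling the objects themselves is what matters here.
from typing import List, Optional


class Block:
    def __init__(self) -> None:
        self.block_id: Optional[int] = None
        self.token_ids: List[int] = []


class BlockPoolSketch:
    def __init__(self, pool_size: int) -> None:
        self._pool: List[Block] = [Block() for _ in range(pool_size)]

    def increase_pool(self) -> None:
        # Grow the pool (by at least one object) when it runs dry.
        self._pool.extend(Block() for _ in range(max(1, len(self._pool))))

    def init_block(self, block_id: Optional[int], token_ids: List[int]) -> Block:
        if not self._pool:
            self.increase_pool()
        block = self._pool.pop()
        block.block_id = block_id
        block.token_ids = list(token_ids)
        return block

    def free_block(self, block: Block) -> None:
        # Return the object (not the physical block) to the pool for reuse.
        block.block_id = None
        block.token_ids = []
        self._pool.append(block)
```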
On vllm/core/block/block_table.py (outdated):

    return self._block_ids

    def append_token_ids_and_update_allocator(

Collaborator Author alexm-redhat (Jun 27, 2024): Removed this function in favor of moving this logic back into the block class.

On vllm/core/block/block_table.py (outdated):

    block: Block, token_ids: List[int],
    allocator: DeviceAwareBlockAllocator) -> Block:
        new_block = allocator.cow_block_if_not_appendable(block)
        if new_block:

Collaborator Author alexm-redhat (Jun 27, 2024): Removed.

On vllm/core/block/prefix_caching_block.py:

    elif block_id in self.evictor:
        self.evictor.update(block_id, now)
    else:
        raise ValueError(
            "Mark block as accessed which is not belonged to GPU")

    def mark_blocks_as_computed(self, block_ids: List[int]) -> None:
        """Mark blocks as computed, used in prefix caching."""
        raise NotImplementedError("Marking as computed is incremental")

Collaborator Author alexm-redhat (Jun 27, 2024): For prefix caching, a block is "computed" when it is full, so it is possible to use block.content_hash as the indicator of computed vs. not computed, without the scheduler needing to state it explicitly. That is why the original implementation was not doing anything for this case, and this function was never called. I simply replaced the code with an exception just to make sure it is indeed not used.

On vllm/core/block/prefix_caching_block.py (outdated):

    self._update_num_token_ids()

    def _update_num_token_ids(self):

Collaborator Author alexm-redhat (Jun 27, 2024): Added.

Member DarkLight1337 commented Jun 28, 2024: To speed up the CI queue for #5905, I've cancelled the distributed tests for the latest CI run in this PR since they won't pass anyway until #5905 has been merged. Please merge main into your branch after that happens so that the CI can pass once again.

cadedaniel reviewed Jun 28, 2024, on vllm/core/block/block_table.py (outdated):

    self._num_full_slots = len(token_ids)

    def update(self, blocks):

Collaborator cadedaniel (Jun 28, 2024): nit: typing.
Collaborator Author alexm-redhat (Jun 30, 2024): Added.
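As a rough illustration of the content_hash-based notion of "computed" that alexm-redhat describes above, a prefix-caching block can derive its hash only once it is full, chaining it with the previous block's hash. The sketch below is an assumption for illustration only; the helper name and the use of Python's built-in hash are not vLLM's implementation, although the early-return conditions mirror the snippets quoted in this review.

```python
# Illustrative sketch: a block's content hash exists only when the block is
# full, so "has a content hash" can double as "is computed" for prefix caching.
from typing import List, Optional


def compute_content_hash(is_first_block: bool,
                         prev_block_hash: Optional[int],
                         token_ids: List[int],
                         block_size: int) -> Optional[int]:
    if len(token_ids) < block_size:
        # Not full yet, hence not computed and not cacheable.
        return None
    if prev_block_hash is None and not is_first_block:
        # The predecessor is not computed yet, so neither is this block.
        return None
    # Chain with the predecessor so that equal prefixes map to equal hashes.
    return hash((is_first_block, prev_block_hash, tuple(token_ids)))
```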
On vllm/core/block/prefix_caching_block.py (mark_blocks_as_computed, see above):

Collaborator cadedaniel (Jun 28, 2024): Sounds good. Let's delete the API?

On vllm/core/block/common.py:

    allocator=self._allocator,
    block_id=None))

    def increase_pool(self):

Collaborator cadedaniel (Jun 28, 2024): nit: docstrings on public methods.
Collaborator Author alexm-redhat (Jun 30, 2024): mark_blocks_as_computed is still used in block_manager_v1; added a docstring.

On vllm/core/block/cpu_gpu_block_allocator.py (lines 298 to 328, outdated):

    raise NotImplementedError
    device = Device.GPU
    return self._allocators[device].promote_to_immutable_block(block)

Collaborator cadedaniel (Jun 28, 2024): Do we need this implementation and cow_block_if_not_appendable? Technically, vLLM does not support modification of block content for CPU-based allocators.
Collaborator cadedaniel (Jun 28, 2024): I assume this method is only invoked when appending tokens.
Collaborator Author alexm-redhat (Jun 30, 2024): Yeah.
Collaborator cadedaniel (Jul 1, 2024): Add some comment on when it's used? (I think they should be removed, but it seems I miss a case.)
Collaborator Author alexm-redhat (Jul 1, 2024): You're actually right: this is the CPU-GPU allocator, so it is not doing the actual CoW or promotion, since that is done only by the specific Naive or Prefix allocators, and they have these functions defined via the base class BlockAllocator. Good catch!

On vllm/core/block/cpu_gpu_block_allocator.py (lines 376 to 379, outdated):

    if self._proxy.token_ids:
        return len(self._proxy.token_ids)
    else:
        return 0

Collaborator cadedaniel (Jun 28, 2024): Did you see my comment about token_ids being Optional? It adds more complexity to the API and leaks abstraction details here and in other places that need to check if it's None before deciding behavior. If we want a no-op token-id list for the undefined blocks, we can have a class which implements List and always returns 0 for len and raises NotImplemented for anything that writes. That way we don't have Optional or branches checking for it everywhere.
Collaborator Author alexm-redhat (Jun 29, 2024): I was able to remove the Optional from token_ids. Now it is the same as before.
Yard1 reviewed Jun 28, 2024, on vllm/core/block_manager_v2.py (lines 315 to 326, outdated):

    self._cached_computed_seq_blocks[seq_id] = computed_block_ids
    else:
        computed_block_ids = self._cached_computed_seq_blocks[seq_id]
        if len(computed_block_ids) < len(block_ids):
            # Incremental init for seq_id => Look only at the new blocks
            computed_block_ids = self.block_allocator.get_computed_block_ids(  # noqa: E501
                computed_block_ids, block_ids)
            self._cached_computed_seq_blocks[seq_id] = computed_block_ids
        else:
            # Cache HIT
            assert len(computed_block_ids) == len(block_ids)

Collaborator Yard1 (Jun 28, 2024): This will still result in constant recomputation in the worst case. I think we can do the following:

- After the first run, if len(computed_block_ids) != len(block_ids), we know that we will never add any extra blocks to computed_block_ids (since we'd have a gap otherwise). Therefore, we should save that as a boolean in the cache alongside the computed block ids.
- In subsequent runs, if the seq_id is present in the cache but the boolean is False, we just return the cached computed block ids without calling get_computed_block_ids. Otherwise, if the boolean is True, we call get_computed_block_ids for the new blocks and save the result in the cache, together with the len(computed_block_ids) == len(block_ids) boolean.

Let me know if this makes sense? I may be missing something here.

Collaborator Yard1 (Jun 28, 2024, edited): Here's the suggested change:

    def _get_and_update_computed_block_ids(self, seqs):
        """Handles caching of per-sequence computed block ids.
        When a sequence appears for the first time, it traverses all of the
        blocks and detects the prefix of blocks that is computed. On the
        subsequent times, it only traverses the new blocks that were added
        and updates the already recorded prefix of blocks with the newly
        computed blocks.
        """
        ret = []
        for seq in seqs:
            seq_id = seq.seq_id

            # Get block ids of this sequence, while not considering the
            # last block
            block_ids = self.block_tables[seq_id].physical_block_ids[:-1]

            # Here we cache the detection of computed_block_ids for seq_id.
            # Since computed_block_ids form a prefix of block_ids,
            # the first time we see seq_id, we detect computed_block_ids
            # fully and store them in the cache. In the next times we see
            # seq_id, we detect computed_block_ids incrementally, by looking
            # only at the new blocks that come after the cached
            # computed_block_ids
            if seq_id not in self._cached_computed_seq_blocks:
                # First time init for seq_id => Detect fully
                computed_block_ids = self.block_allocator.get_computed_block_ids(  # noqa: E501
                    [], block_ids)
                self._cached_computed_seq_blocks[seq_id] = (
                    computed_block_ids,
                    len(computed_block_ids) >= len(block_ids) - 1)
            else:
                computed_block_ids, should_continue_adding = \
                    self._cached_computed_seq_blocks[seq_id]
                if should_continue_adding:
                    if len(computed_block_ids) < len(block_ids):
                        # Incremental init for seq_id => Look only at the
                        # new blocks
                        computed_block_ids = self.block_allocator.get_computed_block_ids(  # noqa: E501
                            computed_block_ids, block_ids)
                        self._cached_computed_seq_blocks[seq_id] = (
                            computed_block_ids,
                            len(computed_block_ids) >= len(block_ids) - 1)
                    else:
                        # Cache HIT
                        assert len(computed_block_ids) == len(block_ids)

            ret.append(computed_block_ids)
        return ret

Collaborator Author alexm-redhat (Jun 29, 2024): @Yard1 and I discussed this in more detail and this is a really good suggestion that should help with performance. Will add this to the algorithm.

Collaborator Author alexm-redhat (Jun 30, 2024): @Yard1 Added your idea. All works.

alexm-redhat force-pushed the block_manager_v2_perf branch
from 0cd4aae to ac9cbdc (June 30, 2024 11:50).

Collaborator Author alexm-redhat commented Jun 30, 2024: @cadedaniel @Yard1 I have addressed the review comments; the PR is ready for a pass.

alexm-redhat added 9 commits July 1, 2024 00:02:
- Optimize block_manager_v2 so it becomes the default (007b32d)
- cleanups (ea94e85)
- refactor code so that only free() is used (e21c410)
- prefix_caching: refactor self._blocks to tracked blocks (b5872d2)
- format (54d76ba)
- cpu bug fix (0aecdb2)
- fixes (d649055)
- fixes (92550b0)
- fix immutable promotion (4100268)

alexm-redhat added 14 commits July 1, 2024 00:02:
- Refactor back token_ids based on Cade comments. (b74d834)
- use tuples for seq_data prompt/output token_ids (179542b)
- sync (7c0ce65)
- fix (4dd957e)
- fix tests (325226f)
- fix tests (29e9683)
- add Antoni's idea for improving caching of computed block ids by using the gap detection (c36f353)
- Based on Cade comment, refactored the seq last_access and cached computed blocks dicts to be encapsulated inside classes instead of simply embedded in block_manager_v2 (d0b2ef9)
- cleanup (bd65468)
- Cade's comments (3064208)
- fix test (2236d5e)
- fix fork_seq (4ea6938)
- ping (82b31e8)
- ping2 (3f1c2a1)

alexm-redhat force-pushed the block_manager_v2_perf branch
from 6854308 to 3f1c2a1 Compare July 1, 2024 00:03 cadedaniel mentioned this pull request Jul 1, 2024 [misc][optimization] optimize data structure in allocator #5968 Closed cadedaniel reviewed Jul 1, 2024 View reviewed changes Copy link Collaborator cadedaniel left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment small comments only, let's go! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/core/block/block_table.py Outdated block.append_token_ids(token_block) self._blocks[idx] = block # Refresh the cached block_id Copy link Collaborator cadedaniel Jul 1, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment is this still necessary? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author alexm-redhat Jul 1, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I redid the code so it is hidden inside the BlockList (by adding append_token_ids(block_idx, tokens) api func) Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 🎉 1 cadedaniel reacted with hooray emoji All reactions 🎉 1 reaction vllm/core/block/block_table.py Outdated Comment on lines 301 to 303 cur_token_ids = block.token_ids if cur_token_ids is not None: token_ids.extend(cur_token_ids) Copy link Collaborator cadedaniel Jul 1, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Remove check now that it can't be None? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author alexm-redhat Jul 1, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Good catch! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/core/block/block_table.py Outdated Comment on lines 308 to 309 if not self._is_allocated: return 0 Copy link Collaborator cadedaniel Jul 1, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment nit: I think we don't need this branch anymore. if it's not allocated, self.blocks will be empty Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author alexm-redhat Jul 1, 2024 There was a problem hiding this comment. 
Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Nice, removed Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/core/block/common.py Comment on lines +129 to +130 assert src_block_id is not None assert trg_block_id is not None Copy link Collaborator cadedaniel Jul 1, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment nit: a little weird that we check a non-Optional is not None. but my guess it's due to python typing weakness... can ignore Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author alexm-redhat Jul 1, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I changed the type to Optional[BlockId], I think it makes more sense Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 cadedaniel reacted with thumbs up emoji All reactions 👍 1 reaction vllm/core/block/cpu_gpu_block_allocator.py Outdated Comment on lines 298 to 328 raise NotImplementedError device = Device.GPU return self._allocators[device].promote_to_immutable_block(block) Copy link Collaborator cadedaniel Jul 1, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment add some comment when it's used? (I think they should be removed but seems I miss a case) Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions 1 hidden conversation Load more… vllm/core/block/interfaces.py Outdated pass @abstractmethod def promote_to_immutable_block(self, block: Block) -> BlockId: """NOTE: This should not be used besides Block""" Copy link Collaborator cadedaniel Jul 1, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment suggest keeping the NOTE in Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author alexm-redhat Jul 1, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Added Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/core/block/prefix_caching_block.py Comment on lines +315 to +321 """Decrements the refcount of the block. The block may be in two possible states: (1) immutable/cached or (2) mutable/hashless. In the first case, the refcount is decremented directly and the block may be possibly added to the evictor. In other case, hashless allocator free(..) 
vllm/core/block/prefix_caching_block.py (def content_hash, @@ -658,6 +801,7 @@), on "if prev_block_hash is None and not is_first_block: return None; assert len(self.token_ids) > 0":
cadedaniel (Jul 1, 2024): nit: do we need this assert given "if not self.is_full"?
alexm-redhat (Jul 1, 2024): You're right, removed.
vllm/core/block/prefix_caching_block.py (lines 850-851), on "Note that currently, for a given sequence, we also skip the last block id for caching purposes, to avoid caching of a full sequence":
cadedaniel (Jul 1, 2024): does this work with lookahead scheduling (where potentially >1 block is modified in a single step)? Don't have to fix now, but in the future we want speculative decoding x prefix caching to work.
alexm-redhat (Jul 1, 2024): I think it should work, since the blocks that are used for appending or speculative tokens won't be marked as computed, so they won't go into the common cache prefix.
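To make the "skip the last block id" note concrete, a small self-contained sketch (the helper name and block layout are hypothetical, not vLLM code): only fully filled blocks are eligible to be marked computed, and the sequence's last block is excluded so a full sequence is never cached end to end.

```python
from typing import List


def blocks_to_mark_computed(seq_block_ids: List[int],
                            num_tokens: int,
                            block_size: int) -> List[int]:
    """Return the block ids that may be marked as computed (cached).

    Only fully filled blocks are eligible, and the sequence's last block id
    is always skipped so the full sequence is never cached end to end.
    """
    num_full_blocks = num_tokens // block_size
    candidates = seq_block_ids[:num_full_blocks]
    return [bid for bid in candidates if bid != seq_block_ids[-1]]


# Example: 3 allocated blocks, 40 tokens, block_size=16 -> blocks 10 and 11
# are full, block 12 is partial (and is the last block, so skipped anyway).
assert blocks_to_mark_computed([10, 11, 12], 40, 16) == [10, 11]
```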
vllm/core/block/prefix_caching_block.py (lines 918-921), on "class LastAccessBlocksTracker: \"\"\"Manages the last access time of the tracked sequences, in order to allow an efficient update of allocator's block last access times\"\"\"":
cadedaniel (Jul 1, 2024): ❤️
alexm-redhat added 2 commits July 1, 2024 15:03: Cade's comments 2ff442d; more Cade commants 3322f8c
alexm-redhat commented Jul 1, 2024: @cadedaniel fixed the nits, thanks for catching these issues!
cadedaniel approved these changes Jul 2, 2024: Thanks for the excellent contribution!
cadedaniel merged commit 3476ed0 into vllm-project:main Jul 2, 2024
kzawora-intel added a commit
to HabanaAI/vllm-fork
that referenced
this pull request Jul 2, 2024 habana_main rebase ( #71 ) … 5e1a565 * [Hardware][Intel] Optimize CPU backend and add more performance tips ( vllm-project#4971 )
Co-authored-by: Jianan Gu <[email protected]>
* [Docs] Add 4th meetup slides ( vllm-project#5509 )
* [Misc] Add vLLM version getter to utils ( vllm-project#5098 )
* [CI/Build] Simplify OpenAI server setup in tests ( vllm-project#5100 )
* [Doc] Update LLaVA docs ( vllm-project#5437 )
Co-authored-by: Roger Wang <[email protected]>
* [Kernel] Factor out epilogues from cutlass kernels ( vllm-project#5391 )
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: zifeitong <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
* [MISC] Remove FP8 warning ( vllm-project#5472 )
Co-authored-by: Philipp Moritz <[email protected]>
* Seperate dev requirements into lint and test ( vllm-project#5474 )
* Revert "[Core] Remove unnecessary copies in flash attn backend" ( vllm-project#5478 )
* [misc] fix format.sh ( vllm-project#5511 )
* [CI/Build] Disable test_fp8.py ( vllm-project#5508 )
* [Kernel] Disable CUTLASS kernels for fp8 ( vllm-project#5505 )
* Add `cuda_device_count_stateless` ( vllm-project#5473 )
* [Hardware][Intel] Support CPU inference with AVX2 ISA ( vllm-project#5452 )
* [Misc] Fix arg names in quantizer script ( vllm-project#5507 )
* bump version to v0.5.0.post1 ( vllm-project#5522 )
* [CI/Build][Misc] Add CI that benchmarks vllm performance on those PRs with `perf-benchmarks` label ( vllm-project#5073 )
Co-authored-by: simon-mo <[email protected]>
* [CI/Build] Disable LLaVA-NeXT CPU test ( vllm-project#5529 )
* [Kernel] Fix CUTLASS 3.x custom broadcast load epilogue ( vllm-project#5516 )
* [Misc] Fix arg names ( vllm-project#5524 )
* [ Misc ] Rs/compressed tensors cleanup ( vllm-project#5432 )
Co-authored-by: mgoin <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
* [Kernel] Suppress mma.sp warning on CUDA 12.5 and later ( vllm-project#5401 )
* [mis] fix flaky test of test_cuda_device_count_stateless ( vllm-project#5546 )
* [Core] Remove duplicate processing in async engine ( vllm-project#5525 )
* [misc][distributed] fix benign error in `is_in_the_same_node` ( vllm-project#5512 )
* [Docs] Add ZhenFund as a Sponsor ( vllm-project#5548 )
* [Doc] Update documentation on Tensorizer ( vllm-project#5471 )
* [Bugfix] Enable loading FP8 checkpoints for gpt_bigcode models ( vllm-project#5460 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Bugfix] Fix typo in Pallas backend ( vllm-project#5558 )
* [Core][Distributed] improve p2p cache generation ( vllm-project#5528 )
* Add ccache to amd ( vllm-project#5555 )
* [Core][Bugfix]: fix prefix caching for blockv2 ( vllm-project#5364 )
Signed-off-by: Lei Wen <[email protected]>
Co-authored-by: Lei Wen <[email protected]>
* [mypy] Enable type checking for test directory ( vllm-project#5017 )
* [CI/Build] Test both text and token IDs in batched OpenAI Completions API ( vllm-project#5568 )
* [misc] Do not allow to use lora with chunked prefill. ( vllm-project#5538 )
Co-authored-by: Cyrus Leung <[email protected]>
* add gptq_marlin test for bug report vllm-project#5088 ( vllm-project#5145 )
* [BugFix] Don't start a Ray cluster when not using Ray ( vllm-project#5570 )
* [Fix] Correct OpenAI batch response format ( vllm-project#5554 )
* Add basic correctness 2 GPU tests to 4 GPU pipeline ( vllm-project#5518 )
* [CI][BugFix] Flip is_quant_method_supported condition ( vllm-project#5577 )
* [build][misc] limit numpy version ( vllm-project#5582 )
* [Doc] add debugging tips for crash and multi-node debugging ( vllm-project#5581 )
* Fix w8a8 benchmark and add Llama-3-8B ( vllm-project#5562 )
* [Model] Rename Phi3 rope scaling type ( vllm-project#5595 )
* Correct alignment in the seq_len diagram. ( vllm-project#5592 )
Co-authored-by: Liqian Chen <[email protected]>
* [Kernel] `compressed-tensors` marlin 24 support ( vllm-project#5435 )
* [Misc] use AutoTokenizer for benchmark serving when vLLM not installed ( vllm-project#5588 )
* [Hardware][Intel GPU] Add Intel GPU(XPU) inference backend ( vllm-project#3814 )
Co-authored-by: Jiang Li <[email protected]>
Co-authored-by: Abhilash Majumder <[email protected]>
Co-authored-by: Abhilash Majumder <[email protected]>
* [CI/BUILD] Support non-AVX512 vLLM building and testing ( vllm-project#5574 )
* [CI] the readability of benchmarking and prepare for dashboard ( vllm-project#5571 )
[CI] Improve the readability of performance benchmarking results and prepare for upcoming performance dashboard ( vllm-project#5571 )
* [bugfix][distributed] fix 16 gpus local rank arrangement ( vllm-project#5604 )
* [Optimization] use a pool to reuse LogicalTokenBlock.token_ids ( vllm-project#5584 )
* [Bugfix] Fix KV head calculation for MPT models when using GQA ( vllm-project#5142 )
* [Fix] Use utf-8 encoding in entrypoints/openai/run_batch.py ( vllm-project#5606 )
* [Speculative Decoding 1/2 ] Add typical acceptance sampling as one of the sampling techniques in the verifier ( vllm-project#5131 )
* [Model] Initialize Phi-3-vision support ( vllm-project#4986 )
* [Kernel] Add punica dimensions for Granite 13b ( vllm-project#5559 )
Signed-off-by: Joe Runde <[email protected]>
* [misc][typo] fix typo ( vllm-project#5620 )
* [Misc] Fix typo ( vllm-project#5618 )
* [CI] Avoid naming different metrics with the same name in performance benchmark ( vllm-project#5615 )
* [bugfix][distributed] improve p2p capability test ( vllm-project#5612 )
[bugfix][distributed] do not error if two processes do not agree on p2p capability ( vllm-project#5612 )
* [Misc] Remove import from transformers logging ( vllm-project#5625 )
* [CI/Build][Misc] Update Pytest Marker for VLMs ( vllm-project#5623 )
* [ci] Deprecate original CI template ( vllm-project#5624 )
Signed-off-by: kevin <[email protected]>
* [Misc] Add OpenTelemetry support ( vllm-project#4687 )
This PR adds basic support for OpenTelemetry distributed tracing.
It includes changes to enable tracing functionality and improve monitoring capabilities.
I've also added a markdown with print-screens to guide users how to use this feature. You can find it here
* [Misc] Add channel-wise quantization support for w8a8 dynamic per token activation quantization ( vllm-project#5542 )
* [ci] Setup Release pipeline and build release wheels with cache ( vllm-project#5610 )
Signed-off-by: kevin <[email protected]>
* [Model] LoRA support added for command-r ( vllm-project#5178 )
* [Bugfix] Fix for inconsistent behaviour related to sampling and repetition penalties ( vllm-project#5639 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Doc] Added cerebrium as Integration option ( vllm-project#5553 )
* [Bugfix] Fix CUDA version check for mma warning suppression ( vllm-project#5642 )
* [Bugfix] Fix w8a8 benchmarks for int8 case ( vllm-project#5643 )
* [Bugfix] Fix Phi-3 Long RoPE scaling implementation ( vllm-project#5628 )
* [Bugfix] Added test for sampling repetition penalty bug. ( vllm-project#5659 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Bugfix][CI/Build][AMD][ROCm]Fixed the cmake build bug which generate garbage on certain devices ( vllm-project#5641 )
* [misc][distributed] use 127.0.0.1 for single-node ( vllm-project#5619 )
* [Model] Add FP8 kv cache for Qwen2 ( vllm-project#5656 )
* [Bugfix] Fix sampling_params passed incorrectly in Phi3v example ( vllm-project#5684 )
* [Misc]Add param max-model-len in benchmark_latency.py ( vllm-project#5629 )
* [CI/Build] Add tqdm to dependencies ( vllm-project#5680 )
* [ci] Add A100 queue into AWS CI template ( vllm-project#5648 )
Signed-off-by: kevin <[email protected]>
* [Frontend][Bugfix] Fix preemption_mode -> preemption-mode for CLI arg in arg_utils.py ( vllm-project#5688 )
* [ci][distributed] add tests for custom allreduce ( vllm-project#5689 )
* [Bugfix] AsyncLLMEngine hangs with asyncio.run ( vllm-project#5654 )
* [Doc] Update docker references ( vllm-project#5614 )
Signed-off-by: Rafael Vasquez <[email protected]>
* [Misc] Add per channel support for static activation quantization; update w8a8 schemes to share base classes ( vllm-project#5650 )
* [ci] Limit num gpus if specified for A100 ( vllm-project#5694 )
Signed-off-by: kevin <[email protected]>
* [Misc] Improve conftest ( vllm-project#5681 )
* [Bugfix][Doc] FIx Duplicate Explicit Target Name Errors ( vllm-project#5703 )
* [Kernel] Update Cutlass int8 kernel configs for SM90 ( vllm-project#5514 )
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* [Model] Port over CLIPVisionModel for VLMs ( vllm-project#5591 )
* [Kernel] Update Cutlass int8 kernel configs for SM80 ( vllm-project#5275 )
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* [Bugfix] Fix the CUDA version check for FP8 support in the CUTLASS kernels ( vllm-project#5715 )
* [Frontend] Add FlexibleArgumentParser to support both underscore and dash in names ( vllm-project#5718 )
* [distributed][misc] use fork by default for mp ( vllm-project#5669 )
* [Model] MLPSpeculator speculative decoding support ( vllm-project#4947 )
Signed-off-by: Thomas Parnell <[email protected]>
Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: Davis Wertheimer <[email protected]>
* [Kernel] Add punica dimension for Qwen2 LoRA ( vllm-project#5441 )
* [BugFix] Fix test_phi3v.py ( vllm-project#5725 )
* [Bugfix] Add fully sharded layer for QKVParallelLinearWithLora ( vllm-project#5665 )
Co-authored-by: Antoni Baum <[email protected]>
* [Core][Distributed] add shm broadcast ( vllm-project#5399 )
Co-authored-by: Cody Yu <[email protected]>
* [Kernel][CPU] Add Quick `gelu` to CPU ( vllm-project#5717 )
* [Doc] Documentation on supported hardware for quantization methods ( vllm-project#5745 )
* [BugFix] exclude version 1.15.0 for modelscope ( vllm-project#5668 )
* [ci][test] fix ca test in main ( vllm-project#5746 )
* [LoRA] Add support for pinning lora adapters in the LRU cache ( vllm-project#5603 )
* [CI][Hardware][Intel GPU] add Intel GPU(XPU) ci pipeline ( vllm-project#5616 )
* [Model] Support Qwen-VL and Qwen-VL-Chat models with text-only inputs ( vllm-project#5710 )
Co-authored-by: Roger Wang <[email protected]>
* [Misc] Remove vllm-project#4789 workaround left in vllm/entrypoints/openai/run_batch.py ( vllm-project#5756 )
* [Bugfix] Fix pin_lora error in TPU executor ( vllm-project#5760 )
* [Docs][TPU] Add installation tip for TPU ( vllm-project#5761 )
* [core][distributed] improve shared memory broadcast ( vllm-project#5754 )
* [BugFix] [Kernel] Add Cutlass2x fallback kernels ( vllm-project#5744 )
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* [Distributed] Add send and recv helpers ( vllm-project#5719 )
* [Bugfix] Add phi3v resize for dynamic shape and fix torchvision requirement ( vllm-project#5772 )
* [doc][faq] add warning to download models for every nodes ( vllm-project#5783 )
* post-rebase api adjustments
* [Doc] Add "Suggest edit" button to doc pages ( vllm-project#5789 )
* [Doc] Add Phi-3-medium to list of supported models ( vllm-project#5788 )
* [Bugfix] Fix FlexibleArgumentParser replaces _ with - for actual args ( vllm-project#5795 )
* [ci] Remove aws template ( vllm-project#5757 )
Signed-off-by: kevin <[email protected]>
* [Doc] Add notice about breaking changes to VLMs ( vllm-project#5818 )
* [Speculative Decoding] Support draft model on different tensor-parallel size than target model ( vllm-project#5414 )
* add pin_lora to habana components
* add WA for model loader
* fix api mismatches with ray
* tensor parallel fixes
* workers cpu alignment fix
* [Misc] Remove useless code in cpu_worker ( vllm-project#5824 )
* prefill/decode metadata fixes
* [Core] Add fault tolerance for `RayTokenizerGroupPool` ( vllm-project#5748 )
* re-enable attn metadata trimming
* worker_use_ray fix
* [doc][distributed] add both gloo and nccl tests ( vllm-project#5834 )
* [CI/Build] Add unit testing for FlexibleArgumentParser ( vllm-project#5798 )
* [Misc] Update `w4a16` `compressed-tensors` support to include `w8a16` ( vllm-project#5794 )
* [Hardware][TPU] Refactor TPU backend ( vllm-project#5831 )
* [Hardware][AMD][CI/Build][Doc] Upgrade to ROCm 6.1, Dockerfile improvements, test fixes ( vllm-project#5422 )
* [Hardware][TPU] Raise errors for unsupported sampling params ( vllm-project#5850 )
* [CI/Build] Add E2E tests for MLPSpeculator ( vllm-project#5791 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Bugfix] Fix assertion in NeuronExecutor ( vllm-project#5841 )
* [Core] Refactor Worker and ModelRunner to consolidate control plane communication ( vllm-project#5408 )
Signed-off-by: Stephanie Wang <[email protected]>
Signed-off-by: Stephanie <[email protected]>
Co-authored-by: Stephanie <[email protected]>
* [Misc][Doc] Add Example of using OpenAI Server with VLM ( vllm-project#5832 )
* [bugfix][distributed] fix shm broadcast when the queue size is full ( vllm-project#5801 )
* [Bugfix] Fix embedding to support 2D inputs ( vllm-project#5829 )
* [Bugfix][TPU] Fix KV cache size calculation ( vllm-project#5860 )
* [CI/Build] Refactor image test assets ( vllm-project#5821 )
* [Kernel] Adding bias epilogue support for `cutlass_scaled_mm` ( vllm-project#5560 )
Co-authored-by: Chih-Chieh-Yang <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
* [Frontend] Add tokenize/detokenize endpoints ( vllm-project#5054 )
* [Hardware][TPU] Support parallel sampling & Swapping ( vllm-project#5855 )
* [Bugfix][TPU] Fix CPU cache allocation ( vllm-project#5869 )
* Support CPU inference with VSX PowerPC ISA ( vllm-project#5652 )
* [doc] update usage of env var to avoid conflict ( vllm-project#5873 )
* [Misc] Add example for LLaVA-NeXT ( vllm-project#5879 )
* [BugFix] Fix cuda graph for MLPSpeculator ( vllm-project#5875 )
Co-authored-by: Abhinav Goyal <[email protected]>
* [Doc] Add note about context length in Phi-3-Vision example ( vllm-project#5887 )
* [VLM][Bugfix] Make sure that `multi_modal_kwargs` is broadcasted properly ( vllm-project#5880 )
Signed-off-by: Xiaowei Jiang <[email protected]>
* [Model] Add base class for LoRA-supported models ( vllm-project#5018 )
* [Bugfix] Fix img_sizes Parsing in Phi3-Vision ( vllm-project#5888 )
* [CI/Build] [1/3] Reorganize entrypoints tests ( vllm-project#5526 )
* add collective crash WA
* add comment to the weird mark_step
* [Model][Bugfix] Implicit model flags and reenable Phi-3-Vision ( vllm-project#5896 )
* [doc][misc] add note for Kubernetes users ( vllm-project#5916 )
* [BugFix] Fix `MLPSpeculator` handling of `num_speculative_tokens` ( vllm-project#5876 )
* [BugFix] Fix `min_tokens` behaviour for multiple eos tokens ( vllm-project#5849 )
* [CI/Build] Fix Args for `_get_logits_warper` in Sampler Test ( vllm-project#5922 )
* [Model] Add Gemma 2 ( vllm-project#5908 )
* [core][misc] remove logical block ( vllm-project#5882 )
* [Kernel][ROCm][AMD] fused_moe Triton configs v2 for mi300X ( vllm-project#5932 )
* [Hardware][TPU] Optimize KV cache swapping ( vllm-project#5878 )
* [VLM][BugFix] Make sure that `multi_modal_kwargs` can broadcast properly with ring buffer. ( vllm-project#5905 )
Signed-off-by: Xiaowei Jiang <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Bugfix][Hardware][Intel CPU] Fix unpassed multi_modal_kwargs for CPU runner ( vllm-project#5956 )
* [Core] Registry for processing model inputs ( vllm-project#5214 )
Co-authored-by: ywang96 <[email protected]>
* Unmark fused_moe config json file as executable ( vllm-project#5960 )
* [Hardware][Intel] OpenVINO vLLM backend ( vllm-project#5379 )
* [Bugfix] Better error message for MLPSpeculator when `num_speculative_tokens` is set too high ( vllm-project#5894 )
Signed-off-by: Thomas Parnell <[email protected]>
* [CI/Build] [2/3] Reorganize entrypoints tests ( vllm-project#5904 )
* [Distributed] Make it clear that % should not be in tensor dict keys. ( vllm-project#5927 )
Signed-off-by: Xiaowei Jiang <[email protected]>
* [Spec Decode] Introduce DraftModelRunner ( vllm-project#5799 )
* [Bugfix] Fix compute datatype for cutlass 3.x epilogues ( vllm-project#5931 )
* [ Misc ] Remove `fp8_shard_indexer` from Col/Row Parallel Linear (Simplify Weight Loading) ( vllm-project#5928 )
Co-authored-by: Robert Shaw <rshaw@neuralmagic>
* [ Bugfix ] Enabling Loading Models With Fused QKV/MLP on Disk with FP8 ( vllm-project#5921 )
Co-authored-by: Robert Shaw <rshaw@neuralmagic>
* Support Deepseek-V2 ( vllm-project#4650 )
Co-authored-by: Philipp Moritz <[email protected]>
* [Bugfix] Only add `Attention.kv_scale` if kv cache quantization is enabled ( vllm-project#5936 )
* Unmark more files as executable ( vllm-project#5962 )
* [Bugfix] Fix Engine Failing After Invalid Request - AsyncEngineDeadError ( vllm-project#5963 )
Co-authored-by: Robert Shaw <rshaw@neuralmagic>
* [Kernel] Flashinfer for prefill & decode, with Cudagraph support for decode ( vllm-project#4628 )
Co-authored-by: LiuXiaoxuanPKU <[email protected]>, bong-furiosa <[email protected]>
* [Bugfix][TPU] Fix TPU sampler output ( vllm-project#5978 )
* [Bugfix][TPU] Fix pad slot id ( vllm-project#5977 )
* [Bugfix] fix missing last itl in openai completions benchmark ( vllm-project#5926 )
* [Misc] Extend vLLM Metrics logging API ( vllm-project#5925 )
Co-authored-by: Antoni Baum <[email protected]>
* [Kernel] Add punica dimensions for Granite 3b and 8b ( vllm-project#5930 )
Signed-off-by: Joe Runde <[email protected]>
* [Bugfix] Fix precisions in Gemma 1 ( vllm-project#5913 )
* [Misc] Update Phi-3-Vision Example ( vllm-project#5981 )
Co-authored-by: Cyrus Leung <[email protected]>
* [Bugfix] Support `eos_token_id` from `config.json` ( vllm-project#5954 )
* [Core] Optimize `SequenceStatus.is_finished` by switching to IntEnum ( vllm-project#5974 )
* [Kernel] Raise an exception in MoE kernel if the batch size is larger then 65k ( vllm-project#5939 )
* [ CI/Build ] Added E2E Test For Compressed Tensors ( vllm-project#5839 )
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Robert Shaw <rshaw@neuralmagic>
* [CI/Build] Add TP test for vision models ( vllm-project#5892 )
* [ CI/Build ] LM Eval Harness Based CI Testing ( vllm-project#5838 )
Co-authored-by: Robert Shaw <rshaw@neuralmagic>
* [Bugfix][CI/Build][Hardware][AMD] Install matching torchvision to fix AMD tests ( vllm-project#5949 )
* [CI/Build] Temporarily Remove Phi3-Vision from TP Test ( vllm-project#5989 )
* [CI/Build] Reuse code for checking output consistency ( vllm-project#5988 )
* [CI/Build] [3/3] Reorganize entrypoints tests ( vllm-project#5966 )
* [ci][distributed] fix device count call
[ci][distributed] fix some cuda init that makes it necessary to use spawn ( vllm-project#5991 )
* [Frontend]: Support base64 embedding ( vllm-project#5935 )
Co-authored-by: Cyrus Leung <[email protected]>
* [Lora] Use safetensor keys instead of adapter_config.json to find unexpected modules. ( vllm-project#5909 )
Co-authored-by: sang <[email protected]>
* [ CI ] Temporarily Disable Large LM-Eval Tests ( vllm-project#6005 )
Co-authored-by: [email protected] <rshaw@neuralmagic>
* [Misc] Fix `get_min_capability` ( vllm-project#5971 )
* [ Misc ] Refactor w8a8 to use `process_weights_after_load` (Simplify Weight Loading) ( vllm-project#5940 )
Co-authored-by: Robert Shaw <rshaw@neuralmagic>
* [misc][cuda] use nvml to avoid accidentally cuda initialization ( vllm-project#6007 )
* [Speculative Decoding 2/2 ] Integrate typical acceptance sampler into Spec Decode Worker ( vllm-project#5348 )
* Revert test changes
* cleanup
* llm engine cleanup
* utils.py cleanup
* custom ops refactor
* move xops to ops
* remove vllm/hpu/attn_bias.py
* whitespace fix
* revert accidental changes in rmsnorm
* Fix hpugraph hashing
* add trim_attn_metadata comment
* fix prompt bucketing:
* [ CI ] Re-enable Large Model LM Eval ( vllm-project#6031 )
* [doc][misc] remove deprecated api server in doc ( vllm-project#6037 )
* [Misc] update benchmark backend for scalellm ( vllm-project#6018 )
* [doc][misc] further lower visibility of simple api server ( vllm-project#6041 )
Co-authored-by: Simon Mo <[email protected]>
* [Bugfix] Use RayActorError for older versions of Ray in RayTokenizerGroupPool ( vllm-project#6039 )
* [Bugfix] adding chunking mechanism to fused_moe to handle large inputs ( vllm-project#6029 )
* add FAQ doc under 'serving' ( vllm-project#5946 )
* [Bugfix][Doc] Fix Doc Formatting ( vllm-project#6048 )
* [Bugfix] Add explicit `end_forward` calls to flashinfer ( vllm-project#6044 )
* [BugFix] Ensure worker model loop is always stopped at the right time ( vllm-project#5987 )
* [Frontend] Relax api url assertion for openai benchmarking ( vllm-project#6046 )
* [Model] Changes to MLPSpeculator to support tie_weights and input_scale ( vllm-project#5965 )
Signed-off-by: Thomas Parnell <[email protected]>
Co-authored-by: Joshua Rosenkranz <[email protected]>
* [Core] Optimize block_manager_v2 vs block_manager_v1 (to make V2 default) ( vllm-project#5602 )
* [Frontend] Add template related params to request ( vllm-project#5709 )
* [VLM] Remove `image_input_type` from VLM config ( vllm-project#5852 )
Signed-off-by: Xiaowei Jiang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Doc] Reinstate doc dependencies ( vllm-project#6061 )
* guard model loader wa for hpu
---------
Signed-off-by: Thomas Parnell <[email protected]>
Signed-off-by: Lei Wen <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: kevin <[email protected]>
Signed-off-by: Rafael Vasquez <[email protected]>
Signed-off-by: Stephanie Wang <[email protected]>
Signed-off-by: Stephanie <[email protected]>
Signed-off-by: Xiaowei Jiang <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: Jianan Gu <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: zifeitong <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Philipp Moritz <[email protected]>
Co-authored-by: Antoni Baum <[email protected]>
Co-authored-by: Jie Fu (傅杰) <[email protected]>
Co-authored-by: Allen.Dou <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
Co-authored-by: Sanger Steel <[email protected]>
Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: leiwen83 <[email protected]>
Co-authored-by: Lei Wen <[email protected]>
Co-authored-by: SangBin Cho <[email protected]>
Co-authored-by: Alexander Matveev <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: Amit Garg <[email protected]>
Co-authored-by: Charles Riggins <[email protected]>
Co-authored-by: Liqian Chen <[email protected]>
Co-authored-by: zhyncs <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: Abhilash Majumder <[email protected]>
Co-authored-by: Abhilash Majumder <[email protected]>
Co-authored-by: Bruce Fontaine <[email protected]>
Co-authored-by: zifeitong <[email protected]>
Co-authored-by: sroy745 <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Chang Su <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Ronen Schaffer <[email protected]>
Co-authored-by: sergey-tinkoff <[email protected]>
Co-authored-by: milo157 <[email protected]>
Co-authored-by: Shukant Pal <[email protected]>
Co-authored-by: Hongxia Yang <[email protected]>
Co-authored-by: DearPlanet <[email protected]>
Co-authored-by: Rafael Vasquez <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Joshua Rosenkranz <[email protected]>
Co-authored-by: Davis Wertheimer <[email protected]>
Co-authored-by: Jinzhen Lin <[email protected]>
Co-authored-by: Jee Li <[email protected]>
Co-authored-by: rohithkrn <[email protected]>
Co-authored-by: Murali Andoorveedu <[email protected]>
Co-authored-by: Woo-Yeon Lee <[email protected]>
Co-authored-by: Matt Wong <[email protected]>
Co-authored-by: aws-patlange <[email protected]>
Co-authored-by: Stephanie Wang <[email protected]>
Co-authored-by: Stephanie <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
Co-authored-by: Chih-Chieh-Yang <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: sasha0552 <[email protected]>
Co-authored-by: Chip Kerchner <[email protected]>
Co-authored-by: Abhinav Goyal <[email protected]>
Co-authored-by: xwjiang2010 <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Robert Shaw <rshaw@neuralmagic>
Co-authored-by: wangding zeng <[email protected]>
Co-authored-by: Lily Liu <[email protected]>
Co-authored-by: LiuXiaoxuanPKU <[email protected]>, bong-furiosa <[email protected]>
Co-authored-by: mcalman <[email protected]>
Co-authored-by: William Lin <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: llmpros <[email protected]>
Co-authored-by: sang <[email protected]>
Co-authored-by: Avshalom Manevich <[email protected]>
Co-authored-by: James Whedbee <[email protected]>
Co-authored-by: Joshua Rosenkranz <[email protected]>
Co-authored-by: danieljannai21 <[email protected]>
CatherineSue reviewed Jul 2, 2024, vllm/core/block/prefix_caching_block.py, on the changed imports:
from os.path import commonprefix
from typing import Dict, FrozenSet, Iterable, List, Optional, Tuple
from vllm.core.block.common import (CopyOnWriteTracker, get_all_blocks_recursively)
from vllm.core.block.interfaces import Block, BlockAllocator, BlockId, Device
from vllm.core.block.naive_block import NaiveBlock, NaiveBlockAllocator
from vllm.core.block.naive_block import (BlockPool, NaiveBlock,
CatherineSue (Jul 2, 2024): qq: Why import BlockPool from vllm.core.block.naive_block instead of vllm.core.block.common?
alexm-redhat (Jul 2, 2024): Forgot to change it. It was originally in naive_block.
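Since the question above is only about where BlockPool should be imported from, here is a hedged sketch of what such a pool does, loosely following the "use a pool to reuse LogicalTokenBlock.token_ids" optimization mentioned in the commit list earlier; the fields and methods are illustrative assumptions, not the real vllm.core.block BlockPool API. Block objects (and their token_ids lists) come from a free list instead of being constructed on every allocation.

```python
from typing import List, Optional


class PooledBlock:
    """Illustrative block object; only the fields the pool needs."""

    def __init__(self) -> None:
        self.block_id: Optional[int] = None
        self.token_ids: List[int] = []
        self.pool_index: int = -1  # where to return this object on free

    def reset(self, block_id: int) -> None:
        self.block_id = block_id
        self.token_ids.clear()


class BlockPool:
    """Sketch of a free-list pool that recycles block objects instead of
    constructing new ones (and new token_ids lists) on every allocation."""

    def __init__(self, pool_size: int) -> None:
        self._pool = [PooledBlock() for _ in range(pool_size)]
        self._free_indices = list(range(pool_size))

    def init_block(self, block_id: int) -> PooledBlock:
        if not self._free_indices:
            # Grow on demand rather than failing the allocation.
            self._pool.append(PooledBlock())
            self._free_indices.append(len(self._pool) - 1)
        idx = self._free_indices.pop()
        block = self._pool[idx]
        block.pool_index = idx
        block.reset(block_id)
        return block

    def free_block(self, block: PooledBlock) -> None:
        self._free_indices.append(block.pool_index)
```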
prashantgupta24 pushed a commit to opendatahub-io/vllm
that referenced
this pull request Jul 3, 2024 [Core] Optimize block_manager_v2 vs block_manager_v1 (to make V2 defa… … 549c660 …ult) ( vllm-project#5602 ) robertgshaw2-redhat pushed a commit
to neuralmagic/nm-vllm
that referenced
this pull request Jul 7, 2024 [Core] Optimize block_manager_v2 vs block_manager_v1 (to make V2 defa… … 77f588c …ult) ( vllm-project#5602 ) xjpang pushed a commit
to xjpang/vllm
that referenced
this pull request Jul 8, 2024 [Core] Optimize block_manager_v2 vs block_manager_v1 (to make V2 defa… … efecae2 …ult) ( vllm-project#5602 ) xjpang pushed a commit
to xjpang/vllm
that referenced
this pull request Jul 24, 2024 [Core] Optimize block_manager_v2 vs block_manager_v1 (to make V2 defa… … efceec4 …ult) ( vllm-project#5602 ) Alvant pushed a commit
to compressa-ai/vllm
that referenced
this pull request Oct 26, 2024 [Core] Optimize block_manager_v2 vs block_manager_v1 (to make V2 defa… … 4795da5 …ult) ( vllm-project#5602 )
Signed-off-by: Alvant <[email protected]> LeiWang1999 pushed a commit
to LeiWang1999/vllm-bitblas
that referenced
this pull request Mar 26, 2025 [Core] Optimize block_manager_v2 vs block_manager_v1 (to make V2 defa… … a93ba05 …ult) ( vllm-project#5602 )
Signed-off-by: LeiWang1999 <[email protected]>
|
2025-09-07 17:48:40
|
7c01f706418d593b3cf23d2ec9110dca7151c539
|
https://github.com/vllm-project/vllm/pull/5974
| true | true | true | true |
LM_EVAL: LM-Eval | PERF: itl, benchmark serving, Optimization | SERVING: serving, serving, API server | TEST: test, test, test
|
Yard1 commented Jun 28, 2024: This is a small performance tweak: we call SequenceStatus.is_finished very often, and each time we used to create a list. By switching to an IntEnum, we can do a simple greater-than comparison, speeding things up.
PR Checklist (Click to Expand): standard vLLM contribution template covering PR title classification, code quality standards, notes for large changes, and what to expect from the review process.
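A minimal sketch of the change described above; the member names and their ordering are assumptions for illustration rather than the exact vLLM enum. The point is that once every finished state compares greater than every in-progress state, is_finished becomes a single integer comparison instead of building a list of finished states on each call.

```python
import enum


class SequenceStatus(enum.IntEnum):
    """Sketch: statuses ordered so every 'finished' state compares greater
    than every in-progress state."""
    WAITING = 0
    RUNNING = 1
    SWAPPED = 2
    # All finished states come last.
    FINISHED_STOPPED = 3
    FINISHED_LENGTH_CAPPED = 4
    FINISHED_ABORTED = 5
    FINISHED_IGNORED = 6

    @staticmethod
    def is_finished(status: "SequenceStatus") -> bool:
        # Before: membership test against a freshly built list of finished
        # states on every call. After: one integer comparison.
        return status > SequenceStatus.SWAPPED


assert not SequenceStatus.is_finished(SequenceStatus.RUNNING)
assert SequenceStatus.is_finished(SequenceStatus.FINISHED_ABORTED)
```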
Optimize SequenceStatus.is_finished by switching to IntEnum 2df4810
youkaichao approved these changes Jun 28, 2024: LGTM
Yard1 enabled auto-merge (squash) June 28, 2024 23:42
Yard1 merged commit 7c01f70 into main Jun 29, 2024
robertgshaw2-redhat pushed a commit
to neuralmagic/nm-vllm
that referenced
this pull request Jul 1, 2024 [Core] Optimize SequenceStatus.is_finished by switching to IntEnum ( v… … 270105d …llm-project#5974 ) prashantgupta24 pushed a commit
to opendatahub-io/vllm
that referenced
this pull request Jul 1, 2024 [Core] Optimize SequenceStatus.is_finished by switching to IntEnum ( v… … 01789ba …llm-project#5974 ) prashantgupta24 pushed a commit
to opendatahub-io/vllm
that referenced
this pull request Jul 1, 2024 [Core] Optimize SequenceStatus.is_finished by switching to IntEnum ( v… … 4951f09 …llm-project#5974 ) kzawora-intel added a commit
to HabanaAI/vllm-fork
that referenced
this pull request Jul 2, 2024 habana_main rebase ( #71 ) … 5e1a565 * [Hardware][Intel] Optimize CPU backend and add more performance tips ( vllm-project#4971 )
Co-authored-by: Jianan Gu <[email protected]>
* [Docs] Add 4th meetup slides ( vllm-project#5509 )
* [Misc] Add vLLM version getter to utils ( vllm-project#5098 )
* [CI/Build] Simplify OpenAI server setup in tests ( vllm-project#5100 )
* [Doc] Update LLaVA docs ( vllm-project#5437 )
Co-authored-by: Roger Wang <[email protected]>
* [Kernel] Factor out epilogues from cutlass kernels ( vllm-project#5391 )
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: zifeitong <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
* [MISC] Remove FP8 warning ( vllm-project#5472 )
Co-authored-by: Philipp Moritz <[email protected]>
* Seperate dev requirements into lint and test ( vllm-project#5474 )
* Revert "[Core] Remove unnecessary copies in flash attn backend" ( vllm-project#5478 )
* [misc] fix format.sh ( vllm-project#5511 )
* [CI/Build] Disable test_fp8.py ( vllm-project#5508 )
* [Kernel] Disable CUTLASS kernels for fp8 ( vllm-project#5505 )
* Add `cuda_device_count_stateless` ( vllm-project#5473 )
* [Hardware][Intel] Support CPU inference with AVX2 ISA ( vllm-project#5452 )
* [Misc] Fix arg names in quantizer script ( vllm-project#5507 )
* bump version to v0.5.0.post1 ( vllm-project#5522 )
* [CI/Build][Misc] Add CI that benchmarks vllm performance on those PRs with `perf-benchmarks` label ( vllm-project#5073 )
Co-authored-by: simon-mo <[email protected]>
* [CI/Build] Disable LLaVA-NeXT CPU test ( vllm-project#5529 )
* [Kernel] Fix CUTLASS 3.x custom broadcast load epilogue ( vllm-project#5516 )
* [Misc] Fix arg names ( vllm-project#5524 )
* [ Misc ] Rs/compressed tensors cleanup ( vllm-project#5432 )
Co-authored-by: mgoin <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
* [Kernel] Suppress mma.sp warning on CUDA 12.5 and later ( vllm-project#5401 )
* [mis] fix flaky test of test_cuda_device_count_stateless ( vllm-project#5546 )
* [Core] Remove duplicate processing in async engine ( vllm-project#5525 )
* [misc][distributed] fix benign error in `is_in_the_same_node` ( vllm-project#5512 )
* [Docs] Add ZhenFund as a Sponsor ( vllm-project#5548 )
* [Doc] Update documentation on Tensorizer ( vllm-project#5471 )
* [Bugfix] Enable loading FP8 checkpoints for gpt_bigcode models ( vllm-project#5460 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Bugfix] Fix typo in Pallas backend ( vllm-project#5558 )
* [Core][Distributed] improve p2p cache generation ( vllm-project#5528 )
* Add ccache to amd ( vllm-project#5555 )
* [Core][Bugfix]: fix prefix caching for blockv2 ( vllm-project#5364 )
Signed-off-by: Lei Wen <[email protected]>
Co-authored-by: Lei Wen <[email protected]>
* [mypy] Enable type checking for test directory ( vllm-project#5017 )
* [CI/Build] Test both text and token IDs in batched OpenAI Completions API ( vllm-project#5568 )
* [misc] Do not allow to use lora with chunked prefill. ( vllm-project#5538 )
Co-authored-by: Cyrus Leung <[email protected]>
* add gptq_marlin test for bug report vllm-project#5088 ( vllm-project#5145 )
* [BugFix] Don't start a Ray cluster when not using Ray ( vllm-project#5570 )
* [Fix] Correct OpenAI batch response format ( vllm-project#5554 )
* Add basic correctness 2 GPU tests to 4 GPU pipeline ( vllm-project#5518 )
* [CI][BugFix] Flip is_quant_method_supported condition ( vllm-project#5577 )
* [build][misc] limit numpy version ( vllm-project#5582 )
* [Doc] add debugging tips for crash and multi-node debugging ( vllm-project#5581 )
* Fix w8a8 benchmark and add Llama-3-8B ( vllm-project#5562 )
* [Model] Rename Phi3 rope scaling type ( vllm-project#5595 )
* Correct alignment in the seq_len diagram. ( vllm-project#5592 )
Co-authored-by: Liqian Chen <[email protected]>
* [Kernel] `compressed-tensors` marlin 24 support ( vllm-project#5435 )
* [Misc] use AutoTokenizer for benchmark serving when vLLM not installed ( vllm-project#5588 )
* [Hardware][Intel GPU] Add Intel GPU(XPU) inference backend ( vllm-project#3814 )
Co-authored-by: Jiang Li <[email protected]>
Co-authored-by: Abhilash Majumder <[email protected]>
Co-authored-by: Abhilash Majumder <[email protected]>
* [CI/BUILD] Support non-AVX512 vLLM building and testing ( vllm-project#5574 )
* [CI] the readability of benchmarking and prepare for dashboard ( vllm-project#5571 )
[CI] Improve the readability of performance benchmarking results and prepare for upcoming performance dashboard ( vllm-project#5571 )
* [bugfix][distributed] fix 16 gpus local rank arrangement ( vllm-project#5604 )
* [Optimization] use a pool to reuse LogicalTokenBlock.token_ids ( vllm-project#5584 )
* [Bugfix] Fix KV head calculation for MPT models when using GQA ( vllm-project#5142 )
* [Fix] Use utf-8 encoding in entrypoints/openai/run_batch.py ( vllm-project#5606 )
* [Speculative Decoding 1/2 ] Add typical acceptance sampling as one of the sampling techniques in the verifier ( vllm-project#5131 )
* [Model] Initialize Phi-3-vision support ( vllm-project#4986 )
* [Kernel] Add punica dimensions for Granite 13b ( vllm-project#5559 )
Signed-off-by: Joe Runde <[email protected]>
* [misc][typo] fix typo ( vllm-project#5620 )
* [Misc] Fix typo ( vllm-project#5618 )
* [CI] Avoid naming different metrics with the same name in performance benchmark ( vllm-project#5615 )
* [bugfix][distributed] improve p2p capability test ( vllm-project#5612 )
[bugfix][distributed] do not error if two processes do not agree on p2p capability ( vllm-project#5612 )
* [Misc] Remove import from transformers logging ( vllm-project#5625 )
* [CI/Build][Misc] Update Pytest Marker for VLMs ( vllm-project#5623 )
* [ci] Deprecate original CI template ( vllm-project#5624 )
Signed-off-by: kevin <[email protected]>
* [Misc] Add OpenTelemetry support ( vllm-project#4687 )
This PR adds basic support for OpenTelemetry distributed tracing.
It includes changes to enable tracing functionality and improve monitoring capabilities.
I've also added a markdown with print-screens to guide users how to use this feature. You can find it here
* [Misc] Add channel-wise quantization support for w8a8 dynamic per token activation quantization ( vllm-project#5542 )
* [ci] Setup Release pipeline and build release wheels with cache ( vllm-project#5610 )
Signed-off-by: kevin <[email protected]>
* [Model] LoRA support added for command-r ( vllm-project#5178 )
* [Bugfix] Fix for inconsistent behaviour related to sampling and repetition penalties ( vllm-project#5639 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Doc] Added cerebrium as Integration option ( vllm-project#5553 )
* [Bugfix] Fix CUDA version check for mma warning suppression ( vllm-project#5642 )
* [Bugfix] Fix w8a8 benchmarks for int8 case ( vllm-project#5643 )
* [Bugfix] Fix Phi-3 Long RoPE scaling implementation ( vllm-project#5628 )
* [Bugfix] Added test for sampling repetition penalty bug. ( vllm-project#5659 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Bugfix][CI/Build][AMD][ROCm]Fixed the cmake build bug which generate garbage on certain devices ( vllm-project#5641 )
* [misc][distributed] use 127.0.0.1 for single-node ( vllm-project#5619 )
* [Model] Add FP8 kv cache for Qwen2 ( vllm-project#5656 )
* [Bugfix] Fix sampling_params passed incorrectly in Phi3v example ( vllm-project#5684 )
* [Misc]Add param max-model-len in benchmark_latency.py ( vllm-project#5629 )
* [CI/Build] Add tqdm to dependencies ( vllm-project#5680 )
* [ci] Add A100 queue into AWS CI template ( vllm-project#5648 )
Signed-off-by: kevin <[email protected]>
* [Frontend][Bugfix] Fix preemption_mode -> preemption-mode for CLI arg in arg_utils.py ( vllm-project#5688 )
* [ci][distributed] add tests for custom allreduce ( vllm-project#5689 )
* [Bugfix] AsyncLLMEngine hangs with asyncio.run ( vllm-project#5654 )
* [Doc] Update docker references ( vllm-project#5614 )
Signed-off-by: Rafael Vasquez <[email protected]>
* [Misc] Add per channel support for static activation quantization; update w8a8 schemes to share base classes ( vllm-project#5650 )
* [ci] Limit num gpus if specified for A100 ( vllm-project#5694 )
Signed-off-by: kevin <[email protected]>
* [Misc] Improve conftest ( vllm-project#5681 )
* [Bugfix][Doc] FIx Duplicate Explicit Target Name Errors ( vllm-project#5703 )
* [Kernel] Update Cutlass int8 kernel configs for SM90 ( vllm-project#5514 )
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* [Model] Port over CLIPVisionModel for VLMs ( vllm-project#5591 )
* [Kernel] Update Cutlass int8 kernel configs for SM80 ( vllm-project#5275 )
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* [Bugfix] Fix the CUDA version check for FP8 support in the CUTLASS kernels ( vllm-project#5715 )
* [Frontend] Add FlexibleArgumentParser to support both underscore and dash in names ( vllm-project#5718 )
* [distributed][misc] use fork by default for mp ( vllm-project#5669 )
* [Model] MLPSpeculator speculative decoding support ( vllm-project#4947 )
Signed-off-by: Thomas Parnell <[email protected]>
Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: Davis Wertheimer <[email protected]>
* [Kernel] Add punica dimension for Qwen2 LoRA ( vllm-project#5441 )
* [BugFix] Fix test_phi3v.py ( vllm-project#5725 )
* [Bugfix] Add fully sharded layer for QKVParallelLinearWithLora ( vllm-project#5665 )
Co-authored-by: Antoni Baum <[email protected]>
* [Core][Distributed] add shm broadcast ( vllm-project#5399 )
Co-authored-by: Cody Yu <[email protected]>
* [Kernel][CPU] Add Quick `gelu` to CPU ( vllm-project#5717 )
* [Doc] Documentation on supported hardware for quantization methods ( vllm-project#5745 )
* [BugFix] exclude version 1.15.0 for modelscope ( vllm-project#5668 )
* [ci][test] fix ca test in main ( vllm-project#5746 )
* [LoRA] Add support for pinning lora adapters in the LRU cache ( vllm-project#5603 )
* [CI][Hardware][Intel GPU] add Intel GPU(XPU) ci pipeline ( vllm-project#5616 )
* [Model] Support Qwen-VL and Qwen-VL-Chat models with text-only inputs ( vllm-project#5710 )
Co-authored-by: Roger Wang <[email protected]>
* [Misc] Remove vllm-project#4789 workaround left in vllm/entrypoints/openai/run_batch.py ( vllm-project#5756 )
* [Bugfix] Fix pin_lora error in TPU executor ( vllm-project#5760 )
* [Docs][TPU] Add installation tip for TPU ( vllm-project#5761 )
* [core][distributed] improve shared memory broadcast ( vllm-project#5754 )
* [BugFix] [Kernel] Add Cutlass2x fallback kernels ( vllm-project#5744 )
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* [Distributed] Add send and recv helpers ( vllm-project#5719 )
* [Bugfix] Add phi3v resize for dynamic shape and fix torchvision requirement ( vllm-project#5772 )
* [doc][faq] add warning to download models for every nodes ( vllm-project#5783 )
* post-rebase api adjustments
* [Doc] Add "Suggest edit" button to doc pages ( vllm-project#5789 )
* [Doc] Add Phi-3-medium to list of supported models ( vllm-project#5788 )
* [Bugfix] Fix FlexibleArgumentParser replaces _ with - for actual args ( vllm-project#5795 )
* [ci] Remove aws template ( vllm-project#5757 )
Signed-off-by: kevin <[email protected]>
* [Doc] Add notice about breaking changes to VLMs ( vllm-project#5818 )
* [Speculative Decoding] Support draft model on different tensor-parallel size than target model ( vllm-project#5414 )
* add pin_lora to habana components
* add WA for model loader
* fix api mismatches with ray
* tensor parallel fixes
* workers cpu alignment fix
* [Misc] Remove useless code in cpu_worker ( vllm-project#5824 )
* prefill/decode metadata fixes
* [Core] Add fault tolerance for `RayTokenizerGroupPool` ( vllm-project#5748 )
* re-enable attn metadata trimming
* worker_use_ray fix
* [doc][distributed] add both gloo and nccl tests ( vllm-project#5834 )
* [CI/Build] Add unit testing for FlexibleArgumentParser ( vllm-project#5798 )
* [Misc] Update `w4a16` `compressed-tensors` support to include `w8a16` ( vllm-project#5794 )
* [Hardware][TPU] Refactor TPU backend ( vllm-project#5831 )
* [Hardware][AMD][CI/Build][Doc] Upgrade to ROCm 6.1, Dockerfile improvements, test fixes ( vllm-project#5422 )
* [Hardware][TPU] Raise errors for unsupported sampling params ( vllm-project#5850 )
* [CI/Build] Add E2E tests for MLPSpeculator ( vllm-project#5791 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Bugfix] Fix assertion in NeuronExecutor ( vllm-project#5841 )
* [Core] Refactor Worker and ModelRunner to consolidate control plane communication ( vllm-project#5408 )
Signed-off-by: Stephanie Wang <[email protected]>
Signed-off-by: Stephanie <[email protected]>
Co-authored-by: Stephanie <[email protected]>
* [Misc][Doc] Add Example of using OpenAI Server with VLM ( vllm-project#5832 )
* [bugfix][distributed] fix shm broadcast when the queue size is full ( vllm-project#5801 )
* [Bugfix] Fix embedding to support 2D inputs ( vllm-project#5829 )
* [Bugfix][TPU] Fix KV cache size calculation ( vllm-project#5860 )
* [CI/Build] Refactor image test assets ( vllm-project#5821 )
* [Kernel] Adding bias epilogue support for `cutlass_scaled_mm` ( vllm-project#5560 )
Co-authored-by: Chih-Chieh-Yang <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
* [Frontend] Add tokenize/detokenize endpoints ( vllm-project#5054 )
* [Hardware][TPU] Support parallel sampling & Swapping ( vllm-project#5855 )
* [Bugfix][TPU] Fix CPU cache allocation ( vllm-project#5869 )
* Support CPU inference with VSX PowerPC ISA ( vllm-project#5652 )
* [doc] update usage of env var to avoid conflict ( vllm-project#5873 )
* [Misc] Add example for LLaVA-NeXT ( vllm-project#5879 )
* [BugFix] Fix cuda graph for MLPSpeculator ( vllm-project#5875 )
Co-authored-by: Abhinav Goyal <[email protected]>
* [Doc] Add note about context length in Phi-3-Vision example ( vllm-project#5887 )
* [VLM][Bugfix] Make sure that `multi_modal_kwargs` is broadcasted properly ( vllm-project#5880 )
Signed-off-by: Xiaowei Jiang <[email protected]>
* [Model] Add base class for LoRA-supported models ( vllm-project#5018 )
* [Bugfix] Fix img_sizes Parsing in Phi3-Vision ( vllm-project#5888 )
* [CI/Build] [1/3] Reorganize entrypoints tests ( vllm-project#5526 )
* add collective crash WA
* add comment to the weird mark_step
* [Model][Bugfix] Implicit model flags and reenable Phi-3-Vision ( vllm-project#5896 )
* [doc][misc] add note for Kubernetes users ( vllm-project#5916 )
* [BugFix] Fix `MLPSpeculator` handling of `num_speculative_tokens` ( vllm-project#5876 )
* [BugFix] Fix `min_tokens` behaviour for multiple eos tokens ( vllm-project#5849 )
* [CI/Build] Fix Args for `_get_logits_warper` in Sampler Test ( vllm-project#5922 )
* [Model] Add Gemma 2 ( vllm-project#5908 )
* [core][misc] remove logical block ( vllm-project#5882 )
* [Kernel][ROCm][AMD] fused_moe Triton configs v2 for mi300X ( vllm-project#5932 )
* [Hardware][TPU] Optimize KV cache swapping ( vllm-project#5878 )
* [VLM][BugFix] Make sure that `multi_modal_kwargs` can broadcast properly with ring buffer. ( vllm-project#5905 )
Signed-off-by: Xiaowei Jiang <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Bugfix][Hardware][Intel CPU] Fix unpassed multi_modal_kwargs for CPU runner ( vllm-project#5956 )
* [Core] Registry for processing model inputs ( vllm-project#5214 )
Co-authored-by: ywang96 <[email protected]>
* Unmark fused_moe config json file as executable ( vllm-project#5960 )
* [Hardware][Intel] OpenVINO vLLM backend ( vllm-project#5379 )
* [Bugfix] Better error message for MLPSpeculator when `num_speculative_tokens` is set too high ( vllm-project#5894 )
Signed-off-by: Thomas Parnell <[email protected]>
* [CI/Build] [2/3] Reorganize entrypoints tests ( vllm-project#5904 )
* [Distributed] Make it clear that % should not be in tensor dict keys. ( vllm-project#5927 )
Signed-off-by: Xiaowei Jiang <[email protected]>
* [Spec Decode] Introduce DraftModelRunner ( vllm-project#5799 )
* [Bugfix] Fix compute datatype for cutlass 3.x epilogues ( vllm-project#5931 )
* [ Misc ] Remove `fp8_shard_indexer` from Col/Row Parallel Linear (Simplify Weight Loading) ( vllm-project#5928 )
Co-authored-by: Robert Shaw <rshaw@neuralmagic>
* [ Bugfix ] Enabling Loading Models With Fused QKV/MLP on Disk with FP8 ( vllm-project#5921 )
Co-authored-by: Robert Shaw <rshaw@neuralmagic>
* Support Deepseek-V2 ( vllm-project#4650 )
Co-authored-by: Philipp Moritz <[email protected]>
* [Bugfix] Only add `Attention.kv_scale` if kv cache quantization is enabled ( vllm-project#5936 )
* Unmark more files as executable ( vllm-project#5962 )
* [Bugfix] Fix Engine Failing After Invalid Request - AsyncEngineDeadError ( vllm-project#5963 )
Co-authored-by: Robert Shaw <rshaw@neuralmagic>
* [Kernel] Flashinfer for prefill & decode, with Cudagraph support for decode ( vllm-project#4628 )
Co-authored-by: LiuXiaoxuanPKU <[email protected]>, bong-furiosa <[email protected]>
* [Bugfix][TPU] Fix TPU sampler output ( vllm-project#5978 )
* [Bugfix][TPU] Fix pad slot id ( vllm-project#5977 )
* [Bugfix] fix missing last itl in openai completions benchmark ( vllm-project#5926 )
* [Misc] Extend vLLM Metrics logging API ( vllm-project#5925 )
Co-authored-by: Antoni Baum <[email protected]>
* [Kernel] Add punica dimensions for Granite 3b and 8b ( vllm-project#5930 )
Signed-off-by: Joe Runde <[email protected]>
* [Bugfix] Fix precisions in Gemma 1 ( vllm-project#5913 )
* [Misc] Update Phi-3-Vision Example ( vllm-project#5981 )
Co-authored-by: Cyrus Leung <[email protected]>
* [Bugfix] Support `eos_token_id` from `config.json` ( vllm-project#5954 )
* [Core] Optimize `SequenceStatus.is_finished` by switching to IntEnum ( vllm-project#5974 )
* [Kernel] Raise an exception in MoE kernel if the batch size is larger then 65k ( vllm-project#5939 )
* [ CI/Build ] Added E2E Test For Compressed Tensors ( vllm-project#5839 )
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Robert Shaw <rshaw@neuralmagic>
* [CI/Build] Add TP test for vision models ( vllm-project#5892 )
* [ CI/Build ] LM Eval Harness Based CI Testing ( vllm-project#5838 )
Co-authored-by: Robert Shaw <rshaw@neuralmagic>
* [Bugfix][CI/Build][Hardware][AMD] Install matching torchvision to fix AMD tests ( vllm-project#5949 )
* [CI/Build] Temporarily Remove Phi3-Vision from TP Test ( vllm-project#5989 )
* [CI/Build] Reuse code for checking output consistency ( vllm-project#5988 )
* [CI/Build] [3/3] Reorganize entrypoints tests ( vllm-project#5966 )
* [ci][distributed] fix device count call
[ci][distributed] fix some cuda init that makes it necessary to use spawn ( vllm-project#5991 )
* [Frontend]: Support base64 embedding ( vllm-project#5935 )
Co-authored-by: Cyrus Leung <[email protected]>
* [Lora] Use safetensor keys instead of adapter_config.json to find unexpected modules. ( vllm-project#5909 )
Co-authored-by: sang <[email protected]>
* [ CI ] Temporarily Disable Large LM-Eval Tests ( vllm-project#6005 )
Co-authored-by: [email protected] <rshaw@neuralmagic>
* [Misc] Fix `get_min_capability` ( vllm-project#5971 )
* [ Misc ] Refactor w8a8 to use `process_weights_after_load` (Simplify Weight Loading) ( vllm-project#5940 )
Co-authored-by: Robert Shaw <rshaw@neuralmagic>
* [misc][cuda] use nvml to avoid accidentally cuda initialization ( vllm-project#6007 )
* [Speculative Decoding 2/2 ] Integrate typical acceptance sampler into Spec Decode Worker ( vllm-project#5348 )
* Revert test changes
* cleanup
* llm engine cleanup
* utils.py cleanup
* custom ops refactor
* move xops to ops
* remove vllm/hpu/attn_bias.py
* whitespace fix
* revert accidental changes in rmsnorm
* Fix hpugraph hashing
* add trim_attn_metadata comment
* fix prompt bucketing:
* [ CI ] Re-enable Large Model LM Eval ( vllm-project#6031 )
* [doc][misc] remove deprecated api server in doc ( vllm-project#6037 )
* [Misc] update benchmark backend for scalellm ( vllm-project#6018 )
* [doc][misc] further lower visibility of simple api server ( vllm-project#6041 )
Co-authored-by: Simon Mo <[email protected]>
* [Bugfix] Use RayActorError for older versions of Ray in RayTokenizerGroupPool ( vllm-project#6039 )
* [Bugfix] adding chunking mechanism to fused_moe to handle large inputs ( vllm-project#6029 )
* add FAQ doc under 'serving' ( vllm-project#5946 )
* [Bugfix][Doc] Fix Doc Formatting ( vllm-project#6048 )
* [Bugfix] Add explicit `end_forward` calls to flashinfer ( vllm-project#6044 )
* [BugFix] Ensure worker model loop is always stopped at the right time ( vllm-project#5987 )
* [Frontend] Relax api url assertion for openai benchmarking ( vllm-project#6046 )
* [Model] Changes to MLPSpeculator to support tie_weights and input_scale ( vllm-project#5965 )
Signed-off-by: Thomas Parnell <[email protected]>
Co-authored-by: Joshua Rosenkranz <[email protected]>
* [Core] Optimize block_manager_v2 vs block_manager_v1 (to make V2 default) ( vllm-project#5602 )
* [Frontend] Add template related params to request ( vllm-project#5709 )
* [VLM] Remove `image_input_type` from VLM config ( vllm-project#5852 )
Signed-off-by: Xiaowei Jiang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Doc] Reinstate doc dependencies ( vllm-project#6061 )
* guard model loader wa for hpu
---------
Signed-off-by: Thomas Parnell <[email protected]>
Signed-off-by: Lei Wen <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: kevin <[email protected]>
Signed-off-by: Rafael Vasquez <[email protected]>
Signed-off-by: Stephanie Wang <[email protected]>
Signed-off-by: Stephanie <[email protected]>
Signed-off-by: Xiaowei Jiang <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: Jianan Gu <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: zifeitong <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Philipp Moritz <[email protected]>
Co-authored-by: Antoni Baum <[email protected]>
Co-authored-by: Jie Fu (傅杰) <[email protected]>
Co-authored-by: Allen.Dou <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
Co-authored-by: Sanger Steel <[email protected]>
Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: leiwen83 <[email protected]>
Co-authored-by: Lei Wen <[email protected]>
Co-authored-by: SangBin Cho <[email protected]>
Co-authored-by: Alexander Matveev <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: Amit Garg <[email protected]>
Co-authored-by: Charles Riggins <[email protected]>
Co-authored-by: Liqian Chen <[email protected]>
Co-authored-by: zhyncs <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: Abhilash Majumder <[email protected]>
Co-authored-by: Abhilash Majumder <[email protected]>
Co-authored-by: Bruce Fontaine <[email protected]>
Co-authored-by: zifeitong <[email protected]>
Co-authored-by: sroy745 <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Chang Su <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Ronen Schaffer <[email protected]>
Co-authored-by: sergey-tinkoff <[email protected]>
Co-authored-by: milo157 <[email protected]>
Co-authored-by: Shukant Pal <[email protected]>
Co-authored-by: Hongxia Yang <[email protected]>
Co-authored-by: DearPlanet <[email protected]>
Co-authored-by: Rafael Vasquez <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Joshua Rosenkranz <[email protected]>
Co-authored-by: Davis Wertheimer <[email protected]>
Co-authored-by: Jinzhen Lin <[email protected]>
Co-authored-by: Jee Li <[email protected]>
Co-authored-by: rohithkrn <[email protected]>
Co-authored-by: Murali Andoorveedu <[email protected]>
Co-authored-by: Woo-Yeon Lee <[email protected]>
Co-authored-by: Matt Wong <[email protected]>
Co-authored-by: aws-patlange <[email protected]>
Co-authored-by: Stephanie Wang <[email protected]>
Co-authored-by: Stephanie <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
Co-authored-by: Chih-Chieh-Yang <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: sasha0552 <[email protected]>
Co-authored-by: Chip Kerchner <[email protected]>
Co-authored-by: Abhinav Goyal <[email protected]>
Co-authored-by: xwjiang2010 <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Robert Shaw <rshaw@neuralmagic>
Co-authored-by: wangding zeng <[email protected]>
Co-authored-by: Lily Liu <[email protected]>
Co-authored-by: LiuXiaoxuanPKU <[email protected]>, bong-furiosa <[email protected]>
Co-authored-by: mcalman <[email protected]>
Co-authored-by: William Lin <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: llmpros <[email protected]>
Co-authored-by: sang <[email protected]>
Co-authored-by: Avshalom Manevich <[email protected]>
Co-authored-by: James Whedbee <[email protected]>
Co-authored-by: Joshua Rosenkranz <[email protected]>
Co-authored-by: danieljannai21 <[email protected]> xjpang pushed a commit
to xjpang/vllm
that referenced
this pull request Jul 8, 2024 [Core] Optimize SequenceStatus.is_finished by switching to IntEnum ( v… … 0fd7504 …llm-project#5974 ) xjpang pushed a commit
to xjpang/vllm
that referenced
this pull request Jul 24, 2024 [Core] Optimize SequenceStatus.is_finished by switching to IntEnum ( v… … faa80a2 …llm-project#5974 ) Alvant pushed a commit
to compressa-ai/vllm
that referenced
this pull request Oct 26, 2024 [Core] Optimize SequenceStatus.is_finished by switching to IntEnum ( v… … c349e81 …llm-project#5974 )
Signed-off-by: Alvant <[email protected]> simon-mo deleted the sequence_status_tweak branch October 28, 2024 16:51
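The IntEnum change referenced throughout this thread (vllm-project#5974, the sequence_status_tweak branch) replaces membership checks against a collection of enum members with a single integer comparison. A minimal sketch of the idea — hypothetical status names and ordering, not vLLM's actual SequenceStatus definition:

from enum import IntEnum

class SequenceStatus(IntEnum):
    # Hypothetical ordering: every finished state compares greater than
    # every non-finished state, so is_finished is one integer compare.
    WAITING = 0
    RUNNING = 1
    SWAPPED = 2
    FINISHED_STOPPED = 3
    FINISHED_LENGTH_CAPPED = 4
    FINISHED_ABORTED = 5
    FINISHED_IGNORED = 6

    @staticmethod
    def is_finished(status: "SequenceStatus") -> bool:
        return status >= SequenceStatus.FINISHED_STOPPED

assert not SequenceStatus.is_finished(SequenceStatus.RUNNING)
assert SequenceStatus.is_finished(SequenceStatus.FINISHED_ABORTED)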
|
2025-09-07 17:48:43
|
80aa7e91fcd547a7a1396f71b9bdce18e5c92245
|
https://github.com/vllm-project/vllm/pull/4971
| true | true | true | true |
LM_EVAL: LM-Eval | PERF: TTFT, itl, benchmark serving | SERVING: serving, serving, API server | TEST: test, test, test
|
Copy link Member bigPYJ1151 commented May 22, 2024 This PR optimized CPU backend performance and added more performance tips. Optimized input shape of torch_sdpa to use fast code path for better TTFT (~40% reduction). Added tip and example to use TCMalloc, it will significantly improve the performance. Initially integrated Paged attention from Intel Extension for PyTorch. Updated related doc. PR Checklist (Click to Expand) Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process. PR Title and Classification Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following: [Bugfix] for bug fixes. [CI/Build] for build or continuous integration improvements. [Doc] for documentation fixes and improvements. [Model] for adding a new model or improving an existing model. Model name should appear in the title. [Frontend] For changes on the vLLM frontend (e.g., OpenAI API server, LLM class, etc.) [Kernel] for changes affecting CUDA kernels or other compute kernels. [Core] for changes in the core vLLM logic (e.g., LLMEngine , AsyncLLMEngine , Scheduler , etc.) [Hardware][Vendor] for hardware-specific changes. Vendor name should appear in the prefix (e.g., [Hardware][AMD] ). [Misc] for PRs that do not fit the above categories. Please use this sparingly. Note: If the PR spans more than one category, please include all relevant prefixes. Code Quality The PR need to meet the following code quality standards: We adhere to Google Python style guide and Google C++ style guide . Pass all linter checks. Please use format.sh to format your code. The code need to be well-documented to ensure future contributors can easily understand the code. Include sufficient tests to ensure the project to stay correct and robust. This includes both unit tests and integration tests. Please add documentation to docs/source/ if the PR modifies the user-facing behaviors of vLLM. It helps vLLM user understand and utilize the new features or changes. Notes for Large Changes Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with rfc-required and might not go through the PR. What to Expect for the Reviews The goal of the vLLM team is to be a transparent reviewing machine . We would like to make the review process transparent and efficient and make sure no contributor feel confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process: After the PR is submitted, the PR will be assigned to a reviewer. Every reviewer will pick up the PRs based on their expertise and availability. After the PR is assigned, the reviewer will provide status update every 2-3 days. If the PR is not reviewed within 7 days, please feel free to ping the reviewer or the vLLM team. After the review, the reviewer will put an action-required label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR. Please respond to all comments within a reasonable time frame. 
If a comment isn't clear or you disagree with a suggestion, feel free to ask for clarification or discuss the suggestion. Thank You Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 6 zhouyuan, jikunshang, DamonFool, WoosukKwon, AllenDou, and ivanbaldo reacted with thumbs up emoji ❤️ 1 ivanbaldo reacted with heart emoji All reactions 👍 6 reactions ❤️ 1 reaction bigPYJ1151 mentioned this pull request May 22, 2024 [RFC] Initial Support for CPUs #3654 Closed 4 tasks bigPYJ1151 force-pushed the ipex branch
from 382dd8a to 49924d7 Compare May 24, 2024 00:14 zhouyuan mentioned this pull request May 29, 2024 [CI/BUILD] enable intel queue for longer CPU tests #4113 Merged liangan1 mentioned this pull request May 31, 2024 [RFC] Speedup vLLM inference with Intel@ Extension for PyTorch* #2526 Closed bigPYJ1151 force-pushed the ipex branch
2 times, most recently
from 7acb607 to e7b7bb7 Compare June 4, 2024 06:07 bigPYJ1151 and others added 19 commits June 7, 2024 05:15 Add IPEX Paged Att. 980de13 Fix 648d4c0 Fix env cc00133 Refactor QKV shape in torch_sdpa to use fast code path. … 5e8b064 Co-authored-by: Jianan Gu <[email protected]> Refine 686a41b Update doc 706d14e Update docker image. 1647c27 Fix doc afe6262 trigger 76d319a trigger 62708ef fix f822617 Fix 5fffea9 Fix 0cda257 update b00a5a9 Fix … b88142a Fix
Fix
trigger Revert "Fix" … fea13c9 This reverts commit 58c036ad079bab6d4a7beccae735c096e2818e37. Revert "Revert "Fix"" … ce00ff0 This reverts commit 3861c15e282062c8c5165ce01aa93972280ca92a. Update IPEX 5779f70 update 3930932 bigPYJ1151 force-pushed the ipex branch
from e7b7bb7 to 3930932 Compare June 7, 2024 05:54 WoosukKwon added
the x86-cpu Related to Intel & AMD CPU label Jun 8, 2024 zhouyuan mentioned this pull request Jun 13, 2024 [Hardware][Intel] Support CPU inference with AVX2 ISA #5452 Merged
Contributor zhouyuan commented Jun 13, 2024: @WoosukKwon Hi, gentle ping, could you please help to take a look on this patch when available? This patch has a big optimization for CPU backend thanks, -yuan
update torch 6c77c9e WoosukKwon self-assigned this Jun 13, 2024 WoosukKwon approved these changes Jun 13, 2024
Collaborator WoosukKwon left a comment: @bigPYJ1151 LGTM! Thanks for the PR and sorry for the delay. Left minor comments.
vllm/attention/backends/torch_sdpa.py Comment on lines +17 to +18: except ImportError: from vllm.attention.ops.paged_attn import PagedAttention
Collaborator WoosukKwon Jun 13, 2024: Can't we simply require users to use IPEX? In which case do we have to use the PagedAttention kernel in vLLM?
Member Author bigPYJ1151 Jun 13, 2024: Yes, after the APIs in IPEX become stable we will add IPEX to the requirements so the users can use it directly. We want to leave the native kernel here to evaluate some latest features (e.g., 8bit KV cache) before the IPEX supports them and public release.
Collaborator WoosukKwon Jun 13, 2024: I see. Thanks!
README.md Outdated
vllm/attention/ops/ipex_attn.py
Update README.md … bdf030a Co-authored-by: Woosuk Kwon <[email protected]>
WoosukKwon merged commit 80aa7e9 into vllm-project : main Jun 13, 2024
Contributor zhouyuan commented Jun 14, 2024: @WoosukKwon thank you for the review and merge, much appreciated! thanks, -yuan
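For context, the fallback being reviewed above — prefer the IPEX-backed paged-attention ops, drop back to vLLM's native CPU kernels when IPEX is absent — amounts to an optional-import guard. A rough sketch, assuming the IPEX ops live in the vllm/attention/ops/ipex_attn.py module referenced in the review; only the except branch is quoted verbatim from the PR:

try:
    # Prefer the Paged Attention implementation backed by Intel
    # Extension for PyTorch when it is importable.
    from vllm.attention.ops.ipex_attn import PagedAttention
    _USE_IPEX = True
except ImportError:
    # Otherwise fall back to vLLM's native CPU paged-attention kernels,
    # kept around to trial features (e.g. 8-bit KV cache) before IPEX
    # supports them.
    from vllm.attention.ops.paged_attn import PagedAttention
    _USE_IPEX = False

Either way, the rest of the torch_sdpa backend uses PagedAttention through the same interface; only the kernel dispatch differs.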
Contributor DamonFool commented Jun 14, 2024: Hi @bigPYJ1151 , I tested the IPEX but seems no performance gain on CPU. Could you please tell us how can we test for the performance boost? Thanks.
robertgshaw2-redhat pushed a commit
to neuralmagic/nm-vllm
that referenced
this pull request Jun 16, 2024 [Hardware][Intel] Optimize CPU backend and add more performance tips ( v… … 45e1f25 …llm-project#4971 )
Co-authored-by: Jianan Gu <[email protected]> joerunde pushed a commit
to joerunde/vllm
that referenced
this pull request Jun 17, 2024 [Hardware][Intel] Optimize CPU backend and add more performance tips ( v… … b51b458 …llm-project#4971 )
Co-authored-by: Jianan Gu <[email protected]> xjpang pushed a commit
to xjpang/vllm
that referenced
this pull request Jun 27, 2024 [Hardware][Intel] Optimize CPU backend and add more performance tips ( v… … 5e1e448 …llm-project#4971 )
Co-authored-by: Jianan Gu <[email protected]> kzawora-intel added a commit
to HabanaAI/vllm-fork
that referenced
this pull request Jul 2, 2024 habana_main rebase ( #71 ) … 5e1a565 * [Hardware][Intel] Optimize CPU backend and add more performance tips ( vllm-project#4971 )
Co-authored-by: Jianan Gu <[email protected]>
* [Docs] Add 4th meetup slides ( vllm-project#5509 )
* [Misc] Add vLLM version getter to utils ( vllm-project#5098 )
* [CI/Build] Simplify OpenAI server setup in tests ( vllm-project#5100 )
* [Doc] Update LLaVA docs ( vllm-project#5437 )
Co-authored-by: Roger Wang <[email protected]>
* [Kernel] Factor out epilogues from cutlass kernels ( vllm-project#5391 )
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: zifeitong <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
* [MISC] Remove FP8 warning ( vllm-project#5472 )
Co-authored-by: Philipp Moritz <[email protected]>
* Separate dev requirements into lint and test ( vllm-project#5474 )
* Revert "[Core] Remove unnecessary copies in flash attn backend" ( vllm-project#5478 )
* [misc] fix format.sh ( vllm-project#5511 )
* [CI/Build] Disable test_fp8.py ( vllm-project#5508 )
* [Kernel] Disable CUTLASS kernels for fp8 ( vllm-project#5505 )
* Add `cuda_device_count_stateless` ( vllm-project#5473 )
* [Hardware][Intel] Support CPU inference with AVX2 ISA ( vllm-project#5452 )
* [Misc] Fix arg names in quantizer script ( vllm-project#5507 )
* bump version to v0.5.0.post1 ( vllm-project#5522 )
* [CI/Build][Misc] Add CI that benchmarks vllm performance on those PRs with `perf-benchmarks` label ( vllm-project#5073 )
Co-authored-by: simon-mo <[email protected]>
* [CI/Build] Disable LLaVA-NeXT CPU test ( vllm-project#5529 )
* [Kernel] Fix CUTLASS 3.x custom broadcast load epilogue ( vllm-project#5516 )
* [Misc] Fix arg names ( vllm-project#5524 )
* [ Misc ] Rs/compressed tensors cleanup ( vllm-project#5432 )
Co-authored-by: mgoin <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
* [Kernel] Suppress mma.sp warning on CUDA 12.5 and later ( vllm-project#5401 )
* [mis] fix flaky test of test_cuda_device_count_stateless ( vllm-project#5546 )
* [Core] Remove duplicate processing in async engine ( vllm-project#5525 )
* [misc][distributed] fix benign error in `is_in_the_same_node` ( vllm-project#5512 )
* [Docs] Add ZhenFund as a Sponsor ( vllm-project#5548 )
* [Doc] Update documentation on Tensorizer ( vllm-project#5471 )
* [Bugfix] Enable loading FP8 checkpoints for gpt_bigcode models ( vllm-project#5460 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Bugfix] Fix typo in Pallas backend ( vllm-project#5558 )
* [Core][Distributed] improve p2p cache generation ( vllm-project#5528 )
* Add ccache to amd ( vllm-project#5555 )
* [Core][Bugfix]: fix prefix caching for blockv2 ( vllm-project#5364 )
Signed-off-by: Lei Wen <[email protected]>
Co-authored-by: Lei Wen <[email protected]>
* [mypy] Enable type checking for test directory ( vllm-project#5017 )
* [CI/Build] Test both text and token IDs in batched OpenAI Completions API ( vllm-project#5568 )
* [misc] Do not allow to use lora with chunked prefill. ( vllm-project#5538 )
Co-authored-by: Cyrus Leung <[email protected]>
* add gptq_marlin test for bug report vllm-project#5088 ( vllm-project#5145 )
* [BugFix] Don't start a Ray cluster when not using Ray ( vllm-project#5570 )
* [Fix] Correct OpenAI batch response format ( vllm-project#5554 )
* Add basic correctness 2 GPU tests to 4 GPU pipeline ( vllm-project#5518 )
* [CI][BugFix] Flip is_quant_method_supported condition ( vllm-project#5577 )
* [build][misc] limit numpy version ( vllm-project#5582 )
* [Doc] add debugging tips for crash and multi-node debugging ( vllm-project#5581 )
* Fix w8a8 benchmark and add Llama-3-8B ( vllm-project#5562 )
* [Model] Rename Phi3 rope scaling type ( vllm-project#5595 )
* Correct alignment in the seq_len diagram. ( vllm-project#5592 )
Co-authored-by: Liqian Chen <[email protected]>
* [Kernel] `compressed-tensors` marlin 24 support ( vllm-project#5435 )
* [Misc] use AutoTokenizer for benchmark serving when vLLM not installed ( vllm-project#5588 )
* [Hardware][Intel GPU] Add Intel GPU(XPU) inference backend ( vllm-project#3814 )
Co-authored-by: Jiang Li <[email protected]>
Co-authored-by: Abhilash Majumder <[email protected]>
Co-authored-by: Abhilash Majumder <[email protected]>
* [CI/BUILD] Support non-AVX512 vLLM building and testing ( vllm-project#5574 )
* [CI] the readability of benchmarking and prepare for dashboard ( vllm-project#5571 )
[CI] Improve the readability of performance benchmarking results and prepare for upcoming performance dashboard ( vllm-project#5571 )
* [bugfix][distributed] fix 16 gpus local rank arrangement ( vllm-project#5604 )
* [Optimization] use a pool to reuse LogicalTokenBlock.token_ids ( vllm-project#5584 )
* [Bugfix] Fix KV head calculation for MPT models when using GQA ( vllm-project#5142 )
* [Fix] Use utf-8 encoding in entrypoints/openai/run_batch.py ( vllm-project#5606 )
* [Speculative Decoding 1/2 ] Add typical acceptance sampling as one of the sampling techniques in the verifier ( vllm-project#5131 )
* [Model] Initialize Phi-3-vision support ( vllm-project#4986 )
* [Kernel] Add punica dimensions for Granite 13b ( vllm-project#5559 )
Signed-off-by: Joe Runde <[email protected]>
* [misc][typo] fix typo ( vllm-project#5620 )
* [Misc] Fix typo ( vllm-project#5618 )
* [CI] Avoid naming different metrics with the same name in performance benchmark ( vllm-project#5615 )
* [bugfix][distributed] improve p2p capability test ( vllm-project#5612 )
[bugfix][distributed] do not error if two processes do not agree on p2p capability ( vllm-project#5612 )
* [Misc] Remove import from transformers logging ( vllm-project#5625 )
* [CI/Build][Misc] Update Pytest Marker for VLMs ( vllm-project#5623 )
* [ci] Deprecate original CI template ( vllm-project#5624 )
Signed-off-by: kevin <[email protected]>
* [Misc] Add OpenTelemetry support ( vllm-project#4687 )
This PR adds basic support for OpenTelemetry distributed tracing.
It includes changes to enable tracing functionality and improve monitoring capabilities.
I've also added a markdown with print-screens to guide users how to use this feature. You can find it here
* [Misc] Add channel-wise quantization support for w8a8 dynamic per token activation quantization ( vllm-project#5542 )
* [ci] Setup Release pipeline and build release wheels with cache ( vllm-project#5610 )
Signed-off-by: kevin <[email protected]>
* [Model] LoRA support added for command-r ( vllm-project#5178 )
* [Bugfix] Fix for inconsistent behaviour related to sampling and repetition penalties ( vllm-project#5639 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Doc] Added cerebrium as Integration option ( vllm-project#5553 )
* [Bugfix] Fix CUDA version check for mma warning suppression ( vllm-project#5642 )
* [Bugfix] Fix w8a8 benchmarks for int8 case ( vllm-project#5643 )
* [Bugfix] Fix Phi-3 Long RoPE scaling implementation ( vllm-project#5628 )
* [Bugfix] Added test for sampling repetition penalty bug. ( vllm-project#5659 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Bugfix][CI/Build][AMD][ROCm]Fixed the cmake build bug which generate garbage on certain devices ( vllm-project#5641 )
* [misc][distributed] use 127.0.0.1 for single-node ( vllm-project#5619 )
* [Model] Add FP8 kv cache for Qwen2 ( vllm-project#5656 )
* [Bugfix] Fix sampling_params passed incorrectly in Phi3v example ( vllm-project#5684 )
* [Misc]Add param max-model-len in benchmark_latency.py ( vllm-project#5629 )
* [CI/Build] Add tqdm to dependencies ( vllm-project#5680 )
* [ci] Add A100 queue into AWS CI template ( vllm-project#5648 )
Signed-off-by: kevin <[email protected]>
* [Frontend][Bugfix] Fix preemption_mode -> preemption-mode for CLI arg in arg_utils.py ( vllm-project#5688 )
* [ci][distributed] add tests for custom allreduce ( vllm-project#5689 )
* [Bugfix] AsyncLLMEngine hangs with asyncio.run ( vllm-project#5654 )
* [Doc] Update docker references ( vllm-project#5614 )
Signed-off-by: Rafael Vasquez <[email protected]>
* [Misc] Add per channel support for static activation quantization; update w8a8 schemes to share base classes ( vllm-project#5650 )
* [ci] Limit num gpus if specified for A100 ( vllm-project#5694 )
Signed-off-by: kevin <[email protected]>
* [Misc] Improve conftest ( vllm-project#5681 )
* [Bugfix][Doc] FIx Duplicate Explicit Target Name Errors ( vllm-project#5703 )
* [Kernel] Update Cutlass int8 kernel configs for SM90 ( vllm-project#5514 )
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* [Model] Port over CLIPVisionModel for VLMs ( vllm-project#5591 )
* [Kernel] Update Cutlass int8 kernel configs for SM80 ( vllm-project#5275 )
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* [Bugfix] Fix the CUDA version check for FP8 support in the CUTLASS kernels ( vllm-project#5715 )
* [Frontend] Add FlexibleArgumentParser to support both underscore and dash in names ( vllm-project#5718 )
* [distributed][misc] use fork by default for mp ( vllm-project#5669 )
* [Model] MLPSpeculator speculative decoding support ( vllm-project#4947 )
Signed-off-by: Thomas Parnell <[email protected]>
Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: Davis Wertheimer <[email protected]>
* [Kernel] Add punica dimension for Qwen2 LoRA ( vllm-project#5441 )
* [BugFix] Fix test_phi3v.py ( vllm-project#5725 )
* [Bugfix] Add fully sharded layer for QKVParallelLinearWithLora ( vllm-project#5665 )
Co-authored-by: Antoni Baum <[email protected]>
* [Core][Distributed] add shm broadcast ( vllm-project#5399 )
Co-authored-by: Cody Yu <[email protected]>
* [Kernel][CPU] Add Quick `gelu` to CPU ( vllm-project#5717 )
* [Doc] Documentation on supported hardware for quantization methods ( vllm-project#5745 )
* [BugFix] exclude version 1.15.0 for modelscope ( vllm-project#5668 )
* [ci][test] fix ca test in main ( vllm-project#5746 )
* [LoRA] Add support for pinning lora adapters in the LRU cache ( vllm-project#5603 )
* [CI][Hardware][Intel GPU] add Intel GPU(XPU) ci pipeline ( vllm-project#5616 )
* [Model] Support Qwen-VL and Qwen-VL-Chat models with text-only inputs ( vllm-project#5710 )
Co-authored-by: Roger Wang <[email protected]>
* [Misc] Remove vllm-project#4789 workaround left in vllm/entrypoints/openai/run_batch.py ( vllm-project#5756 )
* [Bugfix] Fix pin_lora error in TPU executor ( vllm-project#5760 )
* [Docs][TPU] Add installation tip for TPU ( vllm-project#5761 )
* [core][distributed] improve shared memory broadcast ( vllm-project#5754 )
* [BugFix] [Kernel] Add Cutlass2x fallback kernels ( vllm-project#5744 )
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* [Distributed] Add send and recv helpers ( vllm-project#5719 )
* [Bugfix] Add phi3v resize for dynamic shape and fix torchvision requirement ( vllm-project#5772 )
* [doc][faq] add warning to download models for every nodes ( vllm-project#5783 )
* post-rebase api adjustments
* [Doc] Add "Suggest edit" button to doc pages ( vllm-project#5789 )
* [Doc] Add Phi-3-medium to list of supported models ( vllm-project#5788 )
* [Bugfix] Fix FlexibleArgumentParser replaces _ with - for actual args ( vllm-project#5795 )
* [ci] Remove aws template ( vllm-project#5757 )
Signed-off-by: kevin <[email protected]>
* [Doc] Add notice about breaking changes to VLMs ( vllm-project#5818 )
* [Speculative Decoding] Support draft model on different tensor-parallel size than target model ( vllm-project#5414 )
* add pin_lora to habana components
* add WA for model loader
* fix api mismatches with ray
* tensor parallel fixes
* workers cpu alignment fix
* [Misc] Remove useless code in cpu_worker ( vllm-project#5824 )
* prefill/decode metadata fixes
* [Core] Add fault tolerance for `RayTokenizerGroupPool` ( vllm-project#5748 )
* re-enable attn metadata trimming
* worker_use_ray fix
* [doc][distributed] add both gloo and nccl tests ( vllm-project#5834 )
* [CI/Build] Add unit testing for FlexibleArgumentParser ( vllm-project#5798 )
* [Misc] Update `w4a16` `compressed-tensors` support to include `w8a16` ( vllm-project#5794 )
* [Hardware][TPU] Refactor TPU backend ( vllm-project#5831 )
* [Hardware][AMD][CI/Build][Doc] Upgrade to ROCm 6.1, Dockerfile improvements, test fixes ( vllm-project#5422 )
* [Hardware][TPU] Raise errors for unsupported sampling params ( vllm-project#5850 )
* [CI/Build] Add E2E tests for MLPSpeculator ( vllm-project#5791 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Bugfix] Fix assertion in NeuronExecutor ( vllm-project#5841 )
* [Core] Refactor Worker and ModelRunner to consolidate control plane communication ( vllm-project#5408 )
Signed-off-by: Stephanie Wang <[email protected]>
Signed-off-by: Stephanie <[email protected]>
Co-authored-by: Stephanie <[email protected]>
* [Misc][Doc] Add Example of using OpenAI Server with VLM ( vllm-project#5832 )
* [bugfix][distributed] fix shm broadcast when the queue size is full ( vllm-project#5801 )
* [Bugfix] Fix embedding to support 2D inputs ( vllm-project#5829 )
* [Bugfix][TPU] Fix KV cache size calculation ( vllm-project#5860 )
* [CI/Build] Refactor image test assets ( vllm-project#5821 )
* [Kernel] Adding bias epilogue support for `cutlass_scaled_mm` ( vllm-project#5560 )
Co-authored-by: Chih-Chieh-Yang <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
* [Frontend] Add tokenize/detokenize endpoints ( vllm-project#5054 )
* [Hardware][TPU] Support parallel sampling & Swapping ( vllm-project#5855 )
* [Bugfix][TPU] Fix CPU cache allocation ( vllm-project#5869 )
* Support CPU inference with VSX PowerPC ISA ( vllm-project#5652 )
* [doc] update usage of env var to avoid conflict ( vllm-project#5873 )
* [Misc] Add example for LLaVA-NeXT ( vllm-project#5879 )
* [BugFix] Fix cuda graph for MLPSpeculator ( vllm-project#5875 )
Co-authored-by: Abhinav Goyal <[email protected]>
* [Doc] Add note about context length in Phi-3-Vision example ( vllm-project#5887 )
* [VLM][Bugfix] Make sure that `multi_modal_kwargs` is broadcasted properly ( vllm-project#5880 )
Signed-off-by: Xiaowei Jiang <[email protected]>
* [Model] Add base class for LoRA-supported models ( vllm-project#5018 )
* [Bugfix] Fix img_sizes Parsing in Phi3-Vision ( vllm-project#5888 )
* [CI/Build] [1/3] Reorganize entrypoints tests ( vllm-project#5526 )
* add collective crash WA
* add comment to the weird mark_step
* [Model][Bugfix] Implicit model flags and reenable Phi-3-Vision ( vllm-project#5896 )
* [doc][misc] add note for Kubernetes users ( vllm-project#5916 )
* [BugFix] Fix `MLPSpeculator` handling of `num_speculative_tokens` ( vllm-project#5876 )
* [BugFix] Fix `min_tokens` behaviour for multiple eos tokens ( vllm-project#5849 )
* [CI/Build] Fix Args for `_get_logits_warper` in Sampler Test ( vllm-project#5922 )
* [Model] Add Gemma 2 ( vllm-project#5908 )
* [core][misc] remove logical block ( vllm-project#5882 )
* [Kernel][ROCm][AMD] fused_moe Triton configs v2 for mi300X ( vllm-project#5932 )
* [Hardware][TPU] Optimize KV cache swapping ( vllm-project#5878 )
* [VLM][BugFix] Make sure that `multi_modal_kwargs` can broadcast properly with ring buffer. ( vllm-project#5905 )
Signed-off-by: Xiaowei Jiang <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Bugfix][Hardware][Intel CPU] Fix unpassed multi_modal_kwargs for CPU runner ( vllm-project#5956 )
* [Core] Registry for processing model inputs ( vllm-project#5214 )
Co-authored-by: ywang96 <[email protected]>
* Unmark fused_moe config json file as executable ( vllm-project#5960 )
* [Hardware][Intel] OpenVINO vLLM backend ( vllm-project#5379 )
* [Bugfix] Better error message for MLPSpeculator when `num_speculative_tokens` is set too high ( vllm-project#5894 )
Signed-off-by: Thomas Parnell <[email protected]>
* [CI/Build] [2/3] Reorganize entrypoints tests ( vllm-project#5904 )
* [Distributed] Make it clear that % should not be in tensor dict keys. ( vllm-project#5927 )
Signed-off-by: Xiaowei Jiang <[email protected]>
* [Spec Decode] Introduce DraftModelRunner ( vllm-project#5799 )
* [Bugfix] Fix compute datatype for cutlass 3.x epilogues ( vllm-project#5931 )
* [ Misc ] Remove `fp8_shard_indexer` from Col/Row Parallel Linear (Simplify Weight Loading) ( vllm-project#5928 )
Co-authored-by: Robert Shaw <rshaw@neuralmagic>
* [ Bugfix ] Enabling Loading Models With Fused QKV/MLP on Disk with FP8 ( vllm-project#5921 )
Co-authored-by: Robert Shaw <rshaw@neuralmagic>
* Support Deepseek-V2 ( vllm-project#4650 )
Co-authored-by: Philipp Moritz <[email protected]>
* [Bugfix] Only add `Attention.kv_scale` if kv cache quantization is enabled ( vllm-project#5936 )
* Unmark more files as executable ( vllm-project#5962 )
* [Bugfix] Fix Engine Failing After Invalid Request - AsyncEngineDeadError ( vllm-project#5963 )
Co-authored-by: Robert Shaw <rshaw@neuralmagic>
* [Kernel] Flashinfer for prefill & decode, with Cudagraph support for decode ( vllm-project#4628 )
Co-authored-by: LiuXiaoxuanPKU <[email protected]>, bong-furiosa <[email protected]>
* [Bugfix][TPU] Fix TPU sampler output ( vllm-project#5978 )
* [Bugfix][TPU] Fix pad slot id ( vllm-project#5977 )
* [Bugfix] fix missing last itl in openai completions benchmark ( vllm-project#5926 )
* [Misc] Extend vLLM Metrics logging API ( vllm-project#5925 )
Co-authored-by: Antoni Baum <[email protected]>
* [Kernel] Add punica dimensions for Granite 3b and 8b ( vllm-project#5930 )
Signed-off-by: Joe Runde <[email protected]>
* [Bugfix] Fix precisions in Gemma 1 ( vllm-project#5913 )
* [Misc] Update Phi-3-Vision Example ( vllm-project#5981 )
Co-authored-by: Cyrus Leung <[email protected]>
* [Bugfix] Support `eos_token_id` from `config.json` ( vllm-project#5954 )
* [Core] Optimize `SequenceStatus.is_finished` by switching to IntEnum ( vllm-project#5974 )
* [Kernel] Raise an exception in MoE kernel if the batch size is larger than 65k ( vllm-project#5939 )
* [ CI/Build ] Added E2E Test For Compressed Tensors ( vllm-project#5839 )
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Robert Shaw <rshaw@neuralmagic>
* [CI/Build] Add TP test for vision models ( vllm-project#5892 )
* [ CI/Build ] LM Eval Harness Based CI Testing ( vllm-project#5838 )
Co-authored-by: Robert Shaw <rshaw@neuralmagic>
* [Bugfix][CI/Build][Hardware][AMD] Install matching torchvision to fix AMD tests ( vllm-project#5949 )
* [CI/Build] Temporarily Remove Phi3-Vision from TP Test ( vllm-project#5989 )
* [CI/Build] Reuse code for checking output consistency ( vllm-project#5988 )
* [CI/Build] [3/3] Reorganize entrypoints tests ( vllm-project#5966 )
* [ci][distributed] fix device count call
[ci][distributed] fix some cuda init that makes it necessary to use spawn ( vllm-project#5991 )
* [Frontend]: Support base64 embedding ( vllm-project#5935 )
Co-authored-by: Cyrus Leung <[email protected]>
* [Lora] Use safetensor keys instead of adapter_config.json to find unexpected modules. ( vllm-project#5909 )
Co-authored-by: sang <[email protected]>
* [ CI ] Temporarily Disable Large LM-Eval Tests ( vllm-project#6005 )
Co-authored-by: [email protected] <rshaw@neuralmagic>
* [Misc] Fix `get_min_capability` ( vllm-project#5971 )
* [ Misc ] Refactor w8a8 to use `process_weights_after_load` (Simplify Weight Loading) ( vllm-project#5940 )
Co-authored-by: Robert Shaw <rshaw@neuralmagic>
* [misc][cuda] use nvml to avoid accidentally cuda initialization ( vllm-project#6007 )
* [Speculative Decoding 2/2 ] Integrate typical acceptance sampler into Spec Decode Worker ( vllm-project#5348 )
* Revert test changes
* cleanup
* llm engine cleanup
* utils.py cleanup
* custom ops refactor
* move xops to ops
* remove vllm/hpu/attn_bias.py
* whitespace fix
* revert accidental changes in rmsnorm
* Fix hpugraph hashing
* add trim_attn_metadata comment
* fix prompt bucketing:
* [ CI ] Re-enable Large Model LM Eval ( vllm-project#6031 )
* [doc][misc] remove deprecated api server in doc ( vllm-project#6037 )
* [Misc] update benchmark backend for scalellm ( vllm-project#6018 )
* [doc][misc] further lower visibility of simple api server ( vllm-project#6041 )
Co-authored-by: Simon Mo <[email protected]>
* [Bugfix] Use RayActorError for older versions of Ray in RayTokenizerGroupPool ( vllm-project#6039 )
* [Bugfix] adding chunking mechanism to fused_moe to handle large inputs ( vllm-project#6029 )
* add FAQ doc under 'serving' ( vllm-project#5946 )
* [Bugfix][Doc] Fix Doc Formatting ( vllm-project#6048 )
* [Bugfix] Add explicit `end_forward` calls to flashinfer ( vllm-project#6044 )
* [BugFix] Ensure worker model loop is always stopped at the right time ( vllm-project#5987 )
* [Frontend] Relax api url assertion for openai benchmarking ( vllm-project#6046 )
* [Model] Changes to MLPSpeculator to support tie_weights and input_scale ( vllm-project#5965 )
Signed-off-by: Thomas Parnell <[email protected]>
Co-authored-by: Joshua Rosenkranz <[email protected]>
* [Core] Optimize block_manager_v2 vs block_manager_v1 (to make V2 default) ( vllm-project#5602 )
* [Frontend] Add template related params to request ( vllm-project#5709 )
* [VLM] Remove `image_input_type` from VLM config ( vllm-project#5852 )
Signed-off-by: Xiaowei Jiang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Doc] Reinstate doc dependencies ( vllm-project#6061 )
* guard model loader wa for hpu
---------
Signed-off-by: Thomas Parnell <[email protected]>
Signed-off-by: Lei Wen <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: kevin <[email protected]>
Signed-off-by: Rafael Vasquez <[email protected]>
Signed-off-by: Stephanie Wang <[email protected]>
Signed-off-by: Stephanie <[email protected]>
Signed-off-by: Xiaowei Jiang <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: Jianan Gu <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: zifeitong <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Philipp Moritz <[email protected]>
Co-authored-by: Antoni Baum <[email protected]>
Co-authored-by: Jie Fu (傅杰) <[email protected]>
Co-authored-by: Allen.Dou <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
Co-authored-by: Sanger Steel <[email protected]>
Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: leiwen83 <[email protected]>
Co-authored-by: Lei Wen <[email protected]>
Co-authored-by: SangBin Cho <[email protected]>
Co-authored-by: Alexander Matveev <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: Amit Garg <[email protected]>
Co-authored-by: Charles Riggins <[email protected]>
Co-authored-by: Liqian Chen <[email protected]>
Co-authored-by: zhyncs <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: Abhilash Majumder <[email protected]>
Co-authored-by: Abhilash Majumder <[email protected]>
Co-authored-by: Bruce Fontaine <[email protected]>
Co-authored-by: zifeitong <[email protected]>
Co-authored-by: sroy745 <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Chang Su <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Ronen Schaffer <[email protected]>
Co-authored-by: sergey-tinkoff <[email protected]>
Co-authored-by: milo157 <[email protected]>
Co-authored-by: Shukant Pal <[email protected]>
Co-authored-by: Hongxia Yang <[email protected]>
Co-authored-by: DearPlanet <[email protected]>
Co-authored-by: Rafael Vasquez <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Joshua Rosenkranz <[email protected]>
Co-authored-by: Davis Wertheimer <[email protected]>
Co-authored-by: Jinzhen Lin <[email protected]>
Co-authored-by: Jee Li <[email protected]>
Co-authored-by: rohithkrn <[email protected]>
Co-authored-by: Murali Andoorveedu <[email protected]>
Co-authored-by: Woo-Yeon Lee <[email protected]>
Co-authored-by: Matt Wong <[email protected]>
Co-authored-by: aws-patlange <[email protected]>
Co-authored-by: Stephanie Wang <[email protected]>
Co-authored-by: Stephanie <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
Co-authored-by: Chih-Chieh-Yang <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: sasha0552 <[email protected]>
Co-authored-by: Chip Kerchner <[email protected]>
Co-authored-by: Abhinav Goyal <[email protected]>
Co-authored-by: xwjiang2010 <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Robert Shaw <rshaw@neuralmagic>
Co-authored-by: wangding zeng <[email protected]>
Co-authored-by: Lily Liu <[email protected]>
Co-authored-by: LiuXiaoxuanPKU <[email protected]>, bong-furiosa <[email protected]>
Co-authored-by: mcalman <[email protected]>
Co-authored-by: William Lin <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: llmpros <[email protected]>
Co-authored-by: sang <[email protected]>
Co-authored-by: Avshalom Manevich <[email protected]>
Co-authored-by: James Whedbee <[email protected]>
Co-authored-by: Joshua Rosenkranz <[email protected]>
Co-authored-by: danieljannai21 <[email protected]> xjpang pushed a commit
to xjpang/vllm
that referenced
this pull request Jul 8, 2024 [Hardware][Intel] Optimize CPU backend and add more performance tips ( v… … 233bf00 …llm-project#4971 )
Co-authored-by: Jianan Gu <[email protected]> xjpang pushed a commit
to xjpang/vllm
that referenced
this pull request Jul 24, 2024 [Hardware][Intel] Optimize CPU backend and add more performance tips ( v… … 610215e …llm-project#4971 )
Co-authored-by: Jianan Gu <[email protected]> awangzy mentioned this pull request Mar 11, 2025 [Doc]: Does vllm CPU backend support Intel AMX? #14603 Open 1 task
ivanbaldo commented Jul 30, 2025: So with this, AVX2-only CPUs are supported? Can it be used with the images at public.ecr.aws/q9t5s3a7/vllm-cpu-release-repo:v0.10.0?
|
2025-09-07 17:48:46
|
8d75fe48ca5f46b7af0f5201d8500b9604eed769
|
https://github.com/vllm-project/vllm/pull/5183
| false | true | true | true |
PERF: TTFT, TTFT, qps | SERVING: API server, OpenAI API server, Frontend | TEST: test, test, testing
|
Copy link Collaborator tlrmchlsmth commented Jun 1, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . Switching from torch._scaled_mm to vLLM's cutlass fp8 kernels when supported as we are seeing 5-15% improvement in e2e performance on neuralmagic/Meta-Llama-3-8B-Instruct-FP8 see https://docs.google.com/spreadsheets/d/1GiAnmzyGHgZ6zL_LDSTm35Bdrt4A8AaFEurDlISYYA4/ for some quick e2e benchmarks and #5144 for comparisons across different GEMM sizes. PR Checklist (Click to Expand) Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process. PR Title and Classification Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following: [Bugfix] for bug fixes. [CI/Build] for build or continuous integration improvements. [Doc] for documentation fixes and improvements. [Model] for adding a new model or improving an existing model. Model name should appear in the title. [Frontend] For changes on the vLLM frontend (e.g., OpenAI API server, LLM class, etc.) [Kernel] for changes affecting CUDA kernels or other compute kernels. [Core] for changes in the core vLLM logic (e.g., LLMEngine , AsyncLLMEngine , Scheduler , etc.) [Hardware][Vendor] for hardware-specific changes. Vendor name should appear in the prefix (e.g., [Hardware][AMD] ). [Misc] for PRs that do not fit the above categories. Please use this sparingly. Note: If the PR spans more than one category, please include all relevant prefixes. Code Quality The PR need to meet the following code quality standards: We adhere to Google Python style guide and Google C++ style guide . Pass all linter checks. Please use format.sh to format your code. The code need to be well-documented to ensure future contributors can easily understand the code. Include sufficient tests to ensure the project to stay correct and robust. This includes both unit tests and integration tests. Please add documentation to docs/source/ if the PR modifies the user-facing behaviors of vLLM. It helps vLLM user understand and utilize the new features or changes. Notes for Large Changes Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with rfc-required and might not go through the PR. What to Expect for the Reviews The goal of the vLLM team is to be a transparent reviewing machine . We would like to make the review process transparent and efficient and make sure no contributor feel confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process: After the PR is submitted, the PR will be assigned to a reviewer. Every reviewer will pick up the PRs based on their expertise and availability. After the PR is assigned, the reviewer will provide status update every 2-3 days. If the PR is not reviewed within 7 days, please feel free to ping the reviewer or the vLLM team. After the review, the reviewer will put an action-required label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR. Please respond to all comments within a reasonable time frame. 
If a comment isn't clear or you disagree with a suggestion, feel free to ask for clarification or discuss the suggestion. Thank You Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 comaniac reacted with thumbs up emoji All reactions 👍 1 reaction Switch fp8 layers to use the cutlass kernels b6809fa Copy link Collaborator robertgshaw2-redhat commented Jun 1, 2024 @tlrmchlsmth models: https://huggingface.co/nm-testing/Meta-Llama-3-70B-Instruct-FP8 https://huggingface.co/neuralmagic/Meta-Llama-3-8B-Instruct-FP8 https://huggingface.co/nm-testing/Meta-Llama-3-8B-Instruct-FP8-KV << with Quantized KV Cache 👀 1 tlrmchlsmth reacted with eyes emoji All reactions 👀 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . comaniac reviewed Jun 1, 2024 View reviewed changes vllm/model_executor/layers/quantization/fp8.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator Author tlrmchlsmth commented Jun 1, 2024 Just ran a quick sanity check for correctness. Output looks good on all three. I tried tensor_parallel_size=2 as well for the 70B model, and that looks good All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Narrow output 2e93b71 robertgshaw2-redhat reviewed Jun 1, 2024 View reviewed changes vllm/model_executor/layers/quantization/fp8.py Outdated return torch.narrow(output, 0, 0, x.shape[0]) # We use the CUTLASS kernels by default but they don't support bias yet if bias is None: Copy link Collaborator robertgshaw2-redhat Jun 1, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Do we also do a branch if we are on ada lovelace and CUDA 12.1? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author tlrmchlsmth Jun 1, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment We will need to if on CUDA < 12.4. We also need a branch if on CUDA 11.8. @comaniac do you know if torch._scaled_mm is supported in that case? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator comaniac Jun 1, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I only know that it only supports SM89+. We can try to call this op with torch+cu118 to test out. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author tlrmchlsmth Jun 1, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . There was a problem hiding this comment. 
Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment The cutlass kernels need at least SM89 as well, for the record. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator comaniac Jun 1, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Yeah that makes sense. Older architectures don't have native FP8 so we can't get speedup from them, which seems not necessary to be covered. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator robertgshaw2-redhat Jun 1, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Note: we already have a mechanism for determining if a LinearMethod can run on a specific cuda arch. The LinearMethod exposes get_min_capability which is called during model loading. https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/layers/quantization/fp8.py#L46 Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👀 1 tlrmchlsmth reacted with eyes emoji All reactions 👀 1 reaction Copy link Collaborator pcmoritz commented Jun 1, 2024 Did you run benchmarks to compare the end-to-end performance? ITL for different qps All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator robertgshaw2-redhat commented Jun 1, 2024 Did you run benchmarks to compare the end-to-end performance? ITL for different qps Not yet. But obviously need this before we merge 👍 3 tlrmchlsmth, pcmoritz, and comaniac reacted with thumbs up emoji All reactions 👍 3 reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator comaniac commented Jun 1, 2024 I'll do a benchmark on Monday anyways. btw it'd be great if this PR is rebased onto the latest main that includes all required changes (it's likely the case already I suppose All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator Author tlrmchlsmth commented Jun 1, 2024 btw it'd be great if this PR is rebased onto the latest main that includes all required changes (it's likely the case already I suppose It's on a very recent main (from this morning) so it's good to use as is. In particular both #5144 and #5137 were needed for the switchover and they are both in. 👍 1 comaniac reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Merge branch 'upstream-main' into tms/use_cutlass_4_fp8 33085d9 robertgshaw2-redhat reviewed Jun 3, 2024 View reviewed changes vllm/_custom_ops.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator comaniac commented Jun 6, 2024 @tlrmchlsmth @robertgshaw2-neuralmagic per offline discussion, this PR should be ok to go at least for now? 
All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator Author tlrmchlsmth commented Jun 6, 2024 Yeah, let's get it landed. It needs to check a few more cases for falling back to scaled_mm. I'll get to that today and then mark it ready for review 👍 1 comaniac reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . tlrmchlsmth added 2 commits June 6, 2024 20:46 Merge branch 'upstream-main' into tms/use_cutlass_4_fp8 43e5bd1 guard against calling cutlass when not supported 81f5372 robertgshaw2-redhat reviewed Jun 6, 2024 View reviewed changes vllm/model_executor/layers/quantization/fp8.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . tlrmchlsmth added 2 commits June 6, 2024 21:59 format 1fe0468 check support during __init__ 2d77ca5 tlrmchlsmth marked this pull request as ready for review June 6, 2024 22:05 tlrmchlsmth added 2 commits June 6, 2024 22:14 Make that function standalone a1ffa09 format e894b21 robertgshaw2-redhat approved these changes Jun 6, 2024 View reviewed changes Copy link Collaborator robertgshaw2-redhat left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment LGTM :) Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions pcmoritz approved these changes Jun 7, 2024 View reviewed changes Copy link Collaborator pcmoritz left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Great work and thanks for adding the benchmarks :) Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions comaniac approved these changes Jun 7, 2024 View reviewed changes Copy link Collaborator comaniac left a comment • edited Loading Uh oh! There was an error while loading. Please reload this page . There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment LGTM. Thanks! I also did some benchmarks with this PR. Note that all results are in TP=4 on H100 and with chunked prefill enabled (this is just my own requirement). Prompts are 550 tokens, decoding 150 tokens. Model QPS scaled_mm-ITL cutlass-ITL scaled_mm-TTFT cutlass-TTFT Llama-3-70B 1 17.3 16.3 68.7 68.7 Llama-3-70B 4 22.7 21.2 72.3 72.6 Llama-3-70B 8 35.9 33.6 83.1 81.2 Mixtral-8x7B 1 9.1 8.9 43.1 40.7 Mixtral-8x7B 4 11.4 10.7 42.6 38.4 Mixtral-8x7B 8 15.6 14.3 43.4 42.8 Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 pcmoritz reacted with thumbs up emoji All reactions 👍 1 reaction pcmoritz enabled auto-merge (squash) June 7, 2024 00:32 pcmoritz merged commit 8d75fe4 into vllm-project : main Jun 7, 2024 cli99 mentioned this pull request Jun 7, 2024 [Bug Fix] Fix the support check for FP8 CUTLASS #5352 Merged Copy link Contributor cli99 commented Jun 7, 2024 @tlrmchlsmth Awesome work! 
Was trying this but ran into a problem when checking the cutlass fp8 support. Made a fix that works in my case in #5352 . All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . pcmoritz pushed a commit
that referenced
this pull request Jun 8, 2024 [Bug Fix] Fix the support check for FP8 CUTLASS ( #5352 ) … e69ded7 Bug description:
With torch 2.4.0.dev20240603+cu121,
cutlass_fp8_supported outputs False, and the (capability, version) before the comparison is (90, 11111111112)
This PR fixes the support check for FP8 CUTLASS ( cutlass_fp8_supported) which was introduced in #5183 . dtrifiro pushed a commit
to opendatahub-io/vllm
that referenced
this pull request Jun 10, 2024 [Kernel] Switch fp8 layers to use the CUTLASS kernels ( vllm-project#5183 … 80ec81e )
Switching from torch._scaled_mm to vLLM's cutlass fp8 kernels when supported as we are seeing 5-15% improvement in e2e performance on neuralmagic/Meta-Llama-3-8B-Instruct-FP8
see https://docs.google.com/spreadsheets/d/1GiAnmzyGHgZ6zL_LDSTm35Bdrt4A8AaFEurDlISYYA4/ for some quick e2e benchmarks and vllm-project#5144 for comparisons across different GEMM sizes. dtrifiro pushed a commit
to opendatahub-io/vllm
that referenced
this pull request Jun 10, 2024 [Bug Fix] Fix the support check for FP8 CUTLASS ( vllm-project#5352 ) … 978a73a Bug description:
With torch 2.4.0.dev20240603+cu121,
cutlass_fp8_supported outputs False, and the (capability, version) before the comparison is (90, 11111111112)
This PR fixes the support check for FP8 CUTLASS ( cutlass_fp8_supported) which was introduced in vllm-project#5183 . robertgshaw2-redhat pushed a commit
to neuralmagic/nm-vllm
that referenced
this pull request Jun 11, 2024 [Kernel] Switch fp8 layers to use the CUTLASS kernels ( vllm-project#5183 … ed99ec9 )
Switching from torch._scaled_mm to vLLM's cutlass fp8 kernels when supported as we are seeing 5-15% improvement in e2e performance on neuralmagic/Meta-Llama-3-8B-Instruct-FP8
see https://docs.google.com/spreadsheets/d/1GiAnmzyGHgZ6zL_LDSTm35Bdrt4A8AaFEurDlISYYA4/ for some quick e2e benchmarks and vllm-project#5144 for comparisons across different GEMM sizes. robertgshaw2-redhat pushed a commit
to neuralmagic/nm-vllm
that referenced
this pull request Jun 11, 2024 [Bug Fix] Fix the support check for FP8 CUTLASS ( vllm-project#5352 ) … e349c2d Bug description:
With torch 2.4.0.dev20240603+cu121,
cutlass_fp8_supported outputs False, and the (capability, version) before the comparison is (90, 11111111112)
This PR fixes the support check for FP8 CUTLASS ( cutlass_fp8_supported) which was introduced in vllm-project#5183 . joerunde pushed a commit
to IBM/vllm
that referenced
this pull request Jun 13, 2024 [Bug Fix] Fix the support check for FP8 CUTLASS (#5352) … e0c6dc7 Bug description:
With torch 2.4.0.dev20240603+cu121,
cutlass_fp8_supported outputs False, and the (capability, version) before the comparison is (90, 11111111112)
This PR fixes the support check for FP8 CUTLASS ( cutlass_fp8_supported) which was introduced in vllm-project/vllm#5183 . tlrmchlsmth deleted the tms/use_cutlass_4_fp8 branch June 14, 2024 17:20 joerunde pushed a commit
to joerunde/vllm
that referenced
this pull request Jun 17, 2024 [Kernel] Switch fp8 layers to use the CUTLASS kernels ( vllm-project#5183 … df50941 )
Switching from torch._scaled_mm to vLLM's cutlass fp8 kernels when supported as we are seeing 5-15% improvement in e2e performance on neuralmagic/Meta-Llama-3-8B-Instruct-FP8
see https://docs.google.com/spreadsheets/d/1GiAnmzyGHgZ6zL_LDSTm35Bdrt4A8AaFEurDlISYYA4/ for some quick e2e benchmarks and vllm-project#5144 for comparisons across different GEMM sizes. xjpang pushed a commit
to xjpang/vllm
that referenced
this pull request Jun 27, 2024 [Kernel] Switch fp8 layers to use the CUTLASS kernels ( vllm-project#5183 … 2e9ab5b )
Switching from torch._scaled_mm to vLLM's cutlass fp8 kernels when supported as we are seeing 5-15% improvement in e2e performance on neuralmagic/Meta-Llama-3-8B-Instruct-FP8
see https://docs.google.com/spreadsheets/d/1GiAnmzyGHgZ6zL_LDSTm35Bdrt4A8AaFEurDlISYYA4/ for some quick e2e benchmarks and vllm-project#5144 for comparisons across different GEMM sizes. xjpang pushed a commit
to xjpang/vllm
that referenced
this pull request Jun 27, 2024 [Bug Fix] Fix the support check for FP8 CUTLASS ( vllm-project#5352 ) … ecdf6ef Bug description:
With torch 2.4.0.dev20240603+cu121,
cutlass_fp8_supported outputs False, and the (capability, version) before the comparison is (90, 11111111112)
This PR fixes the support check for FP8 CUTLASS ( cutlass_fp8_supported) which was introduced in vllm-project#5183 . xjpang pushed a commit
to xjpang/vllm
that referenced
this pull request Jul 8, 2024 [Kernel] Switch fp8 layers to use the CUTLASS kernels ( vllm-project#5183 … 08faea8 )
Switching from torch._scaled_mm to vLLM's cutlass fp8 kernels when supported as we are seeing 5-15% improvement in e2e performance on neuralmagic/Meta-Llama-3-8B-Instruct-FP8
see https://docs.google.com/spreadsheets/d/1GiAnmzyGHgZ6zL_LDSTm35Bdrt4A8AaFEurDlISYYA4/ for some quick e2e benchmarks and vllm-project#5144 for comparisons across different GEMM sizes. xjpang pushed a commit
to xjpang/vllm
that referenced
this pull request Jul 8, 2024 [Bug Fix] Fix the support check for FP8 CUTLASS ( vllm-project#5352 ) … eb6d8a6 Bug description:
With torch 2.4.0.dev20240603+cu121,
cutlass_fp8_supported outputs False, and the (capability, version) before the comparison is (90, 11111111112)
This PR fixes the support check for FP8 CUTLASS ( cutlass_fp8_supported) which was introduced in vllm-project#5183 . xjpang pushed a commit
to xjpang/vllm
that referenced
this pull request Jul 24, 2024 [Kernel] Switch fp8 layers to use the CUTLASS kernels ( vllm-project#5183 … e9a71eb )
Switching from torch._scaled_mm to vLLM's cutlass fp8 kernels when supported as we are seeing 5-15% improvement in e2e performance on neuralmagic/Meta-Llama-3-8B-Instruct-FP8
see https://docs.google.com/spreadsheets/d/1GiAnmzyGHgZ6zL_LDSTm35Bdrt4A8AaFEurDlISYYA4/ for some quick e2e benchmarks and vllm-project#5144 for comparisons across different GEMM sizes. xjpang pushed a commit
to xjpang/vllm
that referenced
this pull request Jul 24, 2024 [Bug Fix] Fix the support check for FP8 CUTLASS ( vllm-project#5352 ) … c975075 Bug description:
With torch 2.4.0.dev20240603+cu121,
cutlass_fp8_supported outputs False, and the (capability, version) before the comparison is (90, 11111111112)
This PR fixes the support check for FP8 CUTLASS ( cutlass_fp8_supported) which was introduced in vllm-project#5183 . Sign up for free to join this conversation on GitHub .
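For reference, a minimal sketch of the kind of capability gate discussed in this thread (get_min_capability / cutlass_fp8_supported): the CUTLASS fp8 kernels need at least SM89, so the layer should fall back to the torch._scaled_mm path on older GPUs. The #5352 bug above additionally involved a malformed version number in the comparison; this sketch only shows the compute-capability half, and the function names are illustrative rather than vLLM's actual API.

import torch


def fp8_cutlass_supported(min_capability: int = 89) -> bool:
    # min_capability is encoded as major * 10 + minor (89 == SM89), mirroring
    # the integer convention quantization configs use for get_min_capability.
    if not torch.cuda.is_available():
        return False
    major, minor = torch.cuda.get_device_capability()
    return major * 10 + minor >= min_capability


def scaled_fp8_gemm(a, b, scale_a, scale_b, cutlass_gemm, fallback_gemm):
    # Dispatch to the CUTLASS kernel only when the hardware supports it;
    # otherwise use the torch._scaled_mm-based fallback path.
    if fp8_cutlass_supported():
        return cutlass_gemm(a, b, scale_a, scale_b)
    return fallback_gemm(a, b, scale_a, scale_b)

Checking support once during layer __init__ (as done in the PR) keeps this off the per-forward hot path.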
|
2025-09-07 17:48:50
|
8bc68e198c4c90ddc2e54fa76eb81c2c714bb1cd
|
https://github.com/vllm-project/vllm/pull/4208
| false | false | true | true |
SERVING: API server, OpenAI API server, Frontend | TEST: test, Test, test
|
Copy link Collaborator sangstar commented Apr 19, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . Automatically detect vLLM-tensorized model, update tensorizer to version 2.9.0 This PR accomplishes several things: Updates docstrings to account for tensorizer refactor in [Core] Refactor model loading code #4097 in the tensorize_vllm_examples.py example script, and slight corrections to the docstrings of the new, refactored functions. Allows models to be automatically inferred as a vLLM-tensorized model . Accomplishes this by placing a meta-tensor "footprint" in the serialized model, and removing it at runtime. vllm_tensorized as an arg has been removed. Updates tensorizer to the full release of 2.9.0. PR Checklist (Click to Expand) Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process. PR Title and Classification Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following: [Bugfix] for bug fixes. [CI/Build] for build or continuous integration improvements. [Doc] for documentation fixes and improvements. [Model] for adding a new model or improving an existing model. Model name should appear in the title. [Frontend] For changes on the vLLM frontend (e.g., OpenAI API server, LLM class, etc.) [Kernel] for changes affecting CUDA kernels or other compute kernels. [Core] for changes in the core vLLM logic (e.g., LLMEngine , AsyncLLMEngine , Scheduler , etc.) [Hardware][Vendor] for hardware-specific changes. Vendor name should appear in the prefix (e.g., [Hardware][AMD] ). [Misc] for PRs that do not fit the above categories. Please use this sparingly. Note: If the PR spans more than one category, please include all relevant prefixes. Code Quality The PR need to meet the following code quality standards: We adhere to Google Python style guide and Google C++ style guide . Pass all linter checks. Please use format.sh to format your code. The code need to be well-documented to ensure future contributors can easily understand the code. Include sufficient tests to ensure the project to stay correct and robust. This includes both unit tests and integration tests. Please add documentation to docs/source/ if the PR modifies the user-facing behaviors of vLLM. It helps vLLM user understand and utilize the new features or changes. Notes for Large Changes Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with rfc-required and might not go through the PR. What to Expect for the Reviews The goal of the vLLM team is to be a transparent reviewing machine . We would like to make the review process transparent and efficient and make sure no contributor feel confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process: After the PR is submitted, the PR will be assigned to a reviewer. Every reviewer will pick up the PRs based on their expertise and availability. After the PR is assigned, the reviewer will provide status update every 2-3 days. If the PR is not reviewed within 7 days, please feel free to ping the reviewer or the vLLM team. 
After the review, the reviewer will put an action-required label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR. Please respond to all comments within a reasonable time frame. If a comment isn't clear or you disagree with a suggestion, feel free to ask for clarification or discuss the suggestion. Thank You Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions sangstar added 15 commits April 18, 2024 16:32 perf: Update tensorizer versions to new release 97131c0 perf: Update tensorizer versions to new release 1ba6bc5 docs: Remove unnecessary comma 5e58d6f refactor: (WIP) Allow detection of vLLM-tensorized model … 62006f9 WIP because the codes needs to be cleaned up, and the current work
refactoring the example script into importable functions from
`tensorizer.py` is still in progress, which will allow for better
forward compatibility and better testing. tests: Add testing for vLLM-tensorized model has same output cbeb2cb tests: Fix redundant variables a80b5ce perf: Update example script, add logging for deserialization 1486dcd tests: Get tests to pass e019350 docs: Update docs to reflect accurate function descriptions 31a5076 Run yapf and ruff d68f128 chore: Remove todo 287bfbb chore: Fix yapf formatting f3393bd chore: Disable yapf from interfering with isort for testing script 04c78bf chore: Disable yapf at testing script import block 9658a1a fix: Instantiate load partials only when tensorizer imported 96af687 Copy link Collaborator Author sangstar commented Apr 22, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . @Yard1 @ywang96 Some QoL improvements for tensorizer and some corrected docstrings (as per the great refactor from @Yard1 ), and an update for tensorizer as version 1.9.0 is officially released. No longer need to specify if a model is vLLM-tensorized beforehand, as I've implemented a way for this to be inferred implicitly by registering a meta tensor into the model during serialization with a vllm-tensorized-marker and removing it during deserialization. 🚀 1 Yard1 reacted with rocket emoji All reactions 🚀 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . sangstar added 5 commits April 22, 2024 14:29 Merge remote-tracking branch 'upstream/main' into sangstar/tensorizer… … 5890ded …-update
# Conflicts:
# docs/source/models/engine_args.rst perf: Update and streamline docs on tensorizing a vLLM model b702901 docs: Correct docstring, add tensorizer docs link for more info 43a298a docs: Fix S3_ENDPOINT_URL naming 2a61b9a docs: Additionally fix S3_ENDPOINT_URL naming on example script 2b2012a Copy link Collaborator Author sangstar commented Apr 29, 2024 Further made some improvements with documentation. Important fixes explaining how to use tensorizer with the refactored changes (as the example script predates the refactor) so hoping to get eyes on this! Cheers :D @ywang96 @Yard1 👍 1 ywang96 reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . sangstar added 5 commits April 29, 2024 10:17 tests: Add tensorize_vllm_model.py to Examples Test for regression a1b5971 Merge remote-tracking branch 'upstream/main' into sangstar/tensorizer… … 6e7bfae …-update Run yapf and ruff, update docs 77817d1 perf: Force serialization and deserialization test in example script 19495cf fix: Not double-initiating model for deserialize case in example 1fe66be sangstar mentioned this pull request May 3, 2024 [Frontend] [Core] feat: Add model loading using tensorizer #3476 Merged Copy link Member ywang96 commented May 4, 2024 Will take a look once I have some bandwidth - thanks for the continuous contribution to vLLM! ❤️ 1 sangstar reacted with heart emoji All reactions ❤️ 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . ywang96 self-assigned this May 4, 2024 sangstar added 2 commits May 6, 2024 09:28 Merge remote-tracking branch 'upstream/main' into sangstar/tensorizer… … 449753c …-update
# Conflicts:
# requirements-dev.txt
# setup.py
# tests/tensorizer_loader/tensorize_vllm_model_for_testing.py chore: Update initializing env 9c2f7f8 bbrowning reviewed May 9, 2024 View reviewed changes examples/tensorize_vllm_model.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . ywang96 reviewed May 12, 2024 View reviewed changes Copy link Member ywang96 left a comment • edited Loading Uh oh! There was an error while loading. Please reload this page . There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Thank you @sangstar for the continuous contribution! I left some questions. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . ❤️ 1 sangstar reacted with heart emoji All reactions ❤️ 1 reaction examples/tensorize_vllm_model.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/model_executor/model_loader/tensorizer.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/model_executor/model_loader/loader.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . chore: Reallow vllm_tensorized parameter, envs fix 246f636 sangstar requested a review
from ywang96 May 12, 2024 13:02 sangstar added 3 commits May 12, 2024 09:09 Merge remote-tracking branch 'refs/remotes/upstream/main' into sangst… … a86ab10 …ar/tensorizer-update chore: Install tensorizer for Examples Test 829e24b style: Remove trailing whitespace 7271ea2 Copy link Collaborator Author sangstar commented May 12, 2024 @ywang96 Resolved comments! Let me know if anything else is needed. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . sangstar added 2 commits May 13, 2024 13:56 Merge remote-tracking branch 'upstream/main' into sangstar/tensorizer… … ac7341e …-update
# Conflicts:
# vllm/model_executor/model_loader/loader.py Run yapf and ruff 0abbe10 ywang96 reviewed May 13, 2024 View reviewed changes vllm/model_executor/model_loader/tensorizer.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . sangstar requested a review
from ywang96 May 13, 2024 19:48 Copy link Collaborator Author sangstar commented May 13, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . @ywang96 Resolved comments! All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . ywang96 approved these changes May 13, 2024 View reviewed changes Copy link Member ywang96 left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment 🚀 LGTM! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 🚀 1 sangstar reacted with rocket emoji All reactions 🚀 1 reaction Copy link Collaborator Author sangstar commented May 13, 2024 @ywang96 Checks passed and ready to merge! 😄 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . ywang96 merged commit 8bc68e1 into vllm-project : main May 13, 2024 sangstar deleted the sangstar/tensorizer-update branch May 14, 2024 14:04 robertgshaw2-redhat pushed a commit
to neuralmagic/nm-vllm
that referenced
this pull request May 19, 2024 [Frontend] [Core] perf: Automatically detect vLLM-tensorized model, u… … 7dd2e73 …pdate `tensorizer` to version 2.9.0 ( vllm-project#4208 ) dtrifiro pushed a commit
to dtrifiro/vllm
that referenced
this pull request May 21, 2024 [Frontend] [Core] perf: Automatically detect vLLM-tensorized model, u… … 64d2fdc …pdate `tensorizer` to version 2.9.0 ( vllm-project#4208 ) sangstar mentioned this pull request Jun 13, 2024 [Doc] Update documentation on Tensorizer #5471 Merged sangstar mentioned this pull request Jun 20, 2025 [Frontend] [Core] Integrate Tensorizer in to S3 loading machinery, allow passing arbitrary arguments during save/load #19619 Merged Sign up for free to join this conversation on GitHub .
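A rough sketch of the marker idea described in this PR, assuming a plain state_dict rather than tensorizer's real serializer/deserializer API; the key name below is made up for illustration.

import torch

_VLLM_MARKER_KEY = "vllm_tensorized_marker"  # hypothetical key name


def add_vllm_marker(state_dict: dict) -> dict:
    # A zero-element meta tensor adds no real data but is easy to detect later.
    state_dict[_VLLM_MARKER_KEY] = torch.empty(0, device="meta")
    return state_dict


def is_vllm_tensorized(state_dict: dict) -> bool:
    # Lets the loader infer the format instead of requiring a vllm_tensorized flag.
    return _VLLM_MARKER_KEY in state_dict


def strip_vllm_marker(state_dict: dict) -> dict:
    # Remove the footprint at load time so only real weights reach the model.
    state_dict.pop(_VLLM_MARKER_KEY, None)
    return state_dict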
|
2025-09-07 17:48:54
|
379da6dcb5f5d062d0452b2fc23291e5113dcf04
|
https://github.com/vllm-project/vllm/pull/4691
| false | true | true | true |
PERF: qps, qps, qps | SERVING: API server, OpenAI API server, Frontend | TEST: test, testing, CI
|
Copy link Collaborator pcmoritz commented May 8, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . This PR improves the FP8 performance of linear layers, which had been lacking before ( #4118 (comment) and #4118 (comment) ). We noticed that CUBLASLt can find a better algorithm if the first dimension of the matrix is at least 16. So this PR enlarges matrices appropriately during quantization. This improves FP8 performance and removes the performance regression vs. FP16, in many cases exceeding FP16 performance. Here are benchmarks on llama3 70b (ITL numbers for 1000 input and 50 output tokens at fixed qps and at TP 4), all FP8 measurements are for dynamic quantization: qps = 1: 24 ms (FP8, this PR), 32 ms (FP8, previous main), 26 ms (FP16)
qps = 2: 26 ms (FP8, this PR), 34ms (FP8, previous main), 28 ms (FP16)
qps = 4: 33 ms (FP8, this PR), 44 ms (FP8, previous main), 36 ms (FP16)
qps = 6: 46 ms (FP8, this PR), 56 ms (FP8, previous main), 54 ms (FP16)
qps = 8: 85 ms (FP8, this PR), 85 ms (FP8, previous main), 138 ms (FP16) PR Checklist (Click to Expand) Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process. PR Title and Classification Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following: [Bugfix] for bug fixes. [CI/Build] for build or continuous integration improvements. [Doc] for documentation fixes and improvements. [Model] for adding a new model or improving an existing model. Model name should appear in the title. [Frontend] For changes on the vLLM frontend (e.g., OpenAI API server, LLM class, etc.) [Kernel] for changes affecting CUDA kernels or other compute kernels. [Core] for changes in the core vLLM logic (e.g., LLMEngine , AsyncLLMEngine , Scheduler , etc.) [Hardware][Vendor] for hardware-specific changes. Vendor name should appear in the prefix (e.g., [Hardware][AMD] ). [Misc] for PRs that do not fit the above categories. Please use this sparingly. Note: If the PR spans more than one category, please include all relevant prefixes. Code Quality The PR need to meet the following code quality standards: We adhere to Google Python style guide and Google C++ style guide . Pass all linter checks. Please use format.sh to format your code. The code need to be well-documented to ensure future contributors can easily understand the code. Include sufficient tests to ensure the project to stay correct and robust. This includes both unit tests and integration tests. Please add documentation to docs/source/ if the PR modifies the user-facing behaviors of vLLM. It helps vLLM user understand and utilize the new features or changes. Notes for Large Changes Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with rfc-required and might not go through the PR. What to Expect for the Reviews The goal of the vLLM team is to be a transparent reviewing machine . We would like to make the review process transparent and efficient and make sure no contributor feel confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process: After the PR is submitted, the PR will be assigned to a reviewer. Every reviewer will pick up the PRs based on their expertise and availability. After the PR is assigned, the reviewer will provide status update every 2-3 days. If the PR is not reviewed within 7 days, please feel free to ping the reviewer or the vLLM team. After the review, the reviewer will put an action-required label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR. Please respond to all comments within a reasonable time frame. If a comment isn't clear or you disagree with a suggestion, feel free to ask for clarification or discuss the suggestion. Thank You Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 
👍 3 comaniac, AnyISalIn, and mgoin reacted with thumbs up emoji All reactions 👍 3 reactions pcmoritz added 9 commits May 8, 2024 13:04 Initial commit d6b8e14 fix 3b77b56 adapt fp8 matmul code to use batch_dim_padding 5a0f28b Merge branch 'fp8-gemm-performance' of github.com:pcmoritz/vllm-publi… … 91f544f …c into fp8-gemm-performance add docstring b435641 format 6178aa3 yapf 99ef55f comments 8373dad format be94800 pcmoritz requested review from comaniac and robertgshaw2-redhat May 8, 2024 21:45 tlrmchlsmth approved these changes May 8, 2024 View reviewed changes vllm/model_executor/layers/quantization/fp8.py Outdated Comment on lines 236 to 240 batch_dim_padding=32) # Fused GEMM_DQ # Fused GEMM_DQ -- note we padded the input above because # torch._scaled_mm is more performant for matrices with # batch dimension at least 32. Copy link Collaborator tlrmchlsmth May 8, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment What is the perf effect when padding to 32 vs 16? (I ask because here it's 32 and in the PR description it's 16) Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author pcmoritz May 8, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment So when I write my own wrappers for CUBLASLt, I'm getting the following error when calling the cublasLtMatmulAlgoGetHeuristic with FP8: [2024-05-08 19:00:00][cublasLt][1533][Info][cublasLtMatmulAlgoGetHeuristic] Unsupported M dimension for FP8 matrix multiplication. M must be divisible by 16. Got 2. (this is with highest logging CUBLASLT_LOG_LEVEL=5 ) -- that's why I wrote 16 in the description. For the setting we are using however, 32 is actually the best setting -- I tried them both and with 16 it is much closer to what it was previously. It is however possible that this will change in the future (e.g. once we use FP8 outputs I think things will change). Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 tlrmchlsmth reacted with thumbs up emoji All reactions 👍 1 reaction Copy link Collaborator Author pcmoritz May 8, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I clarified this in the description now -- I wrote 16 since I didn't want to bias people for the future :) Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 3 tlrmchlsmth, mgoin, and comaniac reacted with thumbs up emoji All reactions 👍 3 reactions Copy link Contributor courage17340 May 9, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Hello, I have two questions: I never saw the M must be divisible by 16 error when testing cublasLt. In fact, I can perform 1 x 1024 x 16 matmul with torch._scaled_mm . 
But it seems that there are some constraints on N, cublasLt requires N % 8 == 0 , while torch requires N % 16 == 0 . I guess your error is also on N, because cublasLt api is col major and we pass N as M to it when using row major tensors. In my experiment, matmul is slower when M is in range [1, 16], and is faster in range [17, 32], so maybe 17 is a better choice instead of 32? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author pcmoritz May 9, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Thanks for the suggestion, let me try if 17 is better than 32 :) Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author pcmoritz May 9, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I found the performance of 17 to be exactly the same as the performance of 32 , so I'll switch to 17 since it uses less memory. Thanks for the suggestion @courage17340 :) Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions robertgshaw2-redhat approved these changes May 8, 2024 View reviewed changes Copy link Collaborator robertgshaw2-redhat commented May 8, 2024 kinda wild - I suspect we will be able to improve performance significantly with our kernels All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . robertgshaw2-redhat enabled auto-merge (squash) May 8, 2024 22:18 mgoin approved these changes May 8, 2024 View reviewed changes Copy link Member mgoin left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment This makes sense! :) Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions comaniac approved these changes May 9, 2024 View reviewed changes Copy link Collaborator comaniac left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Great! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions pcmoritz added 5 commits May 8, 2024 18:27 rerun ci 6b87e6f rerun ci 6a4e533 rerun ci af7b9f7 rerun ci 4cc1991 Merge branch 'main' into fp8-gemm-performance 3462ba7 pcmoritz disabled auto-merge May 9, 2024 20:26 pcmoritz added 2 commits May 9, 2024 13:47 use 17 instead of 32 9bafda5 rerun ci 3439917 pcmoritz merged commit 379da6d into vllm-project : main May 9, 2024 robertgshaw2-redhat pushed a commit
to neuralmagic/nm-vllm
that referenced
this pull request May 19, 2024 [Kernel] [FP8] Improve FP8 linear layer performance ( vllm-project#4691 ) … 56c100c This PR improves the FP8 performance of linear layers, which had been lacking before ( vllm-project#4118 (comment) and vllm-project#4118 (comment)).
We noticed that CUBLASLt can find a better algorithm if the first dimension of the matrix is greater than 16. So this PR enlarges matrices appropriately during quantization. This improves FP8 performance and removes the performance regression vs. FP16, in many cases exceeding FP16 performance.
Here are benchmarks on llama3 70b (ITL numbers for 1000 input and 50 output tokens at fixed qps and at TP 4), all FP8 measurements are for dynamic quantization:
qps = 1: 24 ms (FP8, this PR), 32 ms (FP8, previous main), 26 ms (FP16)
qps = 2: 26 ms (FP8, this PR), 34ms (FP8, previous main), 28 ms (FP16)
qps = 4: 33 ms (FP8, this PR), 44 ms (FP8, previous main), 36 ms (FP16)
qps = 6: 46 ms (FP8, this PR), 56 ms (FP8, previous main), 54 ms (FP16)
qps = 8: 85 ms (FP8, this PR), 85 ms (FP8, previous main), 138 ms (FP16) dtrifiro pushed a commit
to dtrifiro/vllm
that referenced
this pull request May 21, 2024 [Kernel] [FP8] Improve FP8 linear layer performance ( vllm-project#4691 ) … d7e6b3f This PR improves the FP8 performance of linear layers, which had been lacking before ( vllm-project#4118 (comment) and vllm-project#4118 (comment)).
We noticed that CUBLASLt can find a better algorithm if the first dimension of the matrix is greater than 16. So this PR enlarges matrices appropriately during quantization. This improves FP8 performance and removes the performance regression vs. FP16, in many cases exceeding FP16 performance.
Here are benchmarks on llama3 70b (ITL numbers for 1000 input and 50 output tokens at fixed qps and at TP 4), all FP8 measurements are for dynamic quantization:
qps = 1: 24 ms (FP8, this PR), 32 ms (FP8, previous main), 26 ms (FP16)
qps = 2: 26 ms (FP8, this PR), 34ms (FP8, previous main), 28 ms (FP16)
qps = 4: 33 ms (FP8, this PR), 44 ms (FP8, previous main), 36 ms (FP16)
qps = 6: 46 ms (FP8, this PR), 56 ms (FP8, previous main), 54 ms (FP16)
qps = 8: 85 ms (FP8, this PR), 85 ms (FP8, previous main), 138 ms (FP16) Sign up for free to join this conversation on GitHub .
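A hedged sketch of the padding trick from this PR: enlarge the batch dimension of the quantized activation to at least 17 rows before the scaled matmul, then slice the padding back off. The matmul is passed in as a callable because torch._scaled_mm's exact signature has varied across torch releases; only the pad/slice logic is the point here.

import torch


def padded_scaled_mm(qinput: torch.Tensor, do_scaled_mm, min_rows: int = 17):
    # Pad the batch dimension up to min_rows so cuBLASLt can pick a faster
    # algorithm, run the matmul, then drop the padded rows from the output.
    num_rows = qinput.shape[0]
    if num_rows >= min_rows:
        return do_scaled_mm(qinput)
    padded = qinput.new_zeros((min_rows, qinput.shape[1]))
    padded[:num_rows] = qinput
    out = do_scaled_mm(padded)
    return out[:num_rows]

Per the thread above, 17 was chosen over 32 because the measured performance was identical while using less memory.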
|
2025-09-07 17:48:57
|
d7740ea4dcee4ab75d7d6eef723f33cae957b288
|
https://github.com/vllm-project/vllm/pull/4594
| false | true | true | true |
PERF: Throughput, Throughput, Throughput | SERVING: API server, OpenAI API server, Frontend | TEST: test, CI, continuous integration
|
Copy link Collaborator rkooo567 commented May 4, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . get_logprobs happen after sampling, which is the point where GPU <> CPU sync happens. It means overhead from get_logprobs are going to be applied to e2e overhead. I found get_logprobs is pretty inefficient at large batch size, which could be pretty common. On batch size 256, get_logprobs take about 5~6ms. This optimizes the get_logprobs. After this, I found the overhead becomes 2.1ms for get_logprobs. There are 2 optimizations Use non blocking device transfer and call it at the right timing where it can overlap with gpu ops Preselect indices and call tolist() instead of repetitively calling .item (which is much slower) Throughput benchmark (--input-len 256 --output-len 256)
Before: Throughput: 23.84 requests/s, 12208.54 tokens/s
After: Throughput: 25.77 requests/s, 13196.11 tokens/s PR Checklist (Click to Expand) Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process. PR Title and Classification Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following: [Bugfix] for bug fixes. [CI/Build] for build or continuous integration improvements. [Doc] for documentation fixes and improvements. [Model] for adding a new model or improving an existing model. Model name should appear in the title. [Frontend] For changes on the vLLM frontend (e.g., OpenAI API server, LLM class, etc.) [Kernel] for changes affecting CUDA kernels or other compute kernels. [Core] for changes in the core vLLM logic (e.g., LLMEngine , AsyncLLMEngine , Scheduler , etc.) [Hardware][Vendor] for hardware-specific changes. Vendor name should appear in the prefix (e.g., [Hardware][AMD] ). [Misc] for PRs that do not fit the above categories. Please use this sparingly. Note: If the PR spans more than one category, please include all relevant prefixes. Code Quality The PR need to meet the following code quality standards: We adhere to Google Python style guide and Google C++ style guide . Pass all linter checks. Please use format.sh to format your code. The code need to be well-documented to ensure future contributors can easily understand the code. Include sufficient tests to ensure the project to stay correct and robust. This includes both unit tests and integration tests. Please add documentation to docs/source/ if the PR modifies the user-facing behaviors of vLLM. It helps vLLM user understand and utilize the new features or changes. Notes for Large Changes Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with rfc-required and might not go through the PR. What to Expect for the Reviews The goal of the vLLM team is to be a transparent reviewing machine . We would like to make the review process transparent and efficient and make sure no contributor feel confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process: After the PR is submitted, the PR will be assigned to a reviewer. Every reviewer will pick up the PRs based on their expertise and availability. After the PR is assigned, the reviewer will provide status update every 2-3 days. If the PR is not reviewed within 7 days, please feel free to ping the reviewer or the vLLM team. After the review, the reviewer will put an action-required label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR. Please respond to all comments within a reasonable time frame. If a comment isn't clear or you disagree with a suggestion, feel free to ask for clarification or discuss the suggestion. Thank You Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 
All reactions rkooo567 added 4 commits May 3, 2024 20:49 working 30d6fe4 Merge branch 'main' into logprob-opt 65f9dde . 9205244 done 8ad363e rkooo567 changed the title [WIP] Optimize sampler get_logprobs [Core] Optimize sampler get_logprobs May 7, 2024 rkooo567 commented May 7, 2024 View reviewed changes vllm/model_executor/layers/sampler.py Outdated @@ -769,27 +769,24 @@ def _get_logprobs( selected_logprobs = logprobs[[ query_indices_gpu, next_token_ids_gpu, ]] ]] .to('cpu', non_blocking=True) Copy link Collaborator Author rkooo567 May 4, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment this can overlap device transfer with torch.topk Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions done 2177a7a Yard1 approved these changes May 7, 2024 View reviewed changes Copy link Collaborator Yard1 left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment LGTM Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . ❤️ 1 rkooo567 reacted with heart emoji All reactions ❤️ 1 reaction Copy link Collaborator Author rkooo567 commented May 7, 2024 thanks for the quick review @Yard1 ! All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . rkooo567 added 2 commits May 6, 2024 22:44 done d384dae . 2b2035a Qubitium reviewed May 7, 2024 View reviewed changes vllm/model_executor/layers/sampler.py Outdated # Find prompt/sample logprobs. prompt_logprobs_per_seq_group: List[Optional[PromptLogprobs]] = [] sample_logprobs_per_seq_group: List[SampleLogprobs] = [] top_logprob_idx = 0 selected_logprobs_idx = 0 # Make sure non-blocking .to("cpu", non_blocking=True) is finished assert selected_logprobs.shape[0] == ranks.shape[0] Copy link Contributor Qubitium May 7, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment @rkooo567 Do we still need this assert since non-blocking transfer code is removed? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author rkooo567 May 7, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Thanks for catching! we don't need comments, but assert is kind of still needed. Removed the comment Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 Qubitium reacted with thumbs up emoji All reactions 👍 1 reaction rkooo567 commented May 7, 2024 View reviewed changes Copy link Collaborator Author rkooo567 left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . 
Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Update; non_blocking=True for GPU -> CPU doesn't guarantee to synchronize when tolist() is called, so it is not safe. I used the blocking op instead. This decreases the perf improvement a bit (0.5~ish) Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/model_executor/layers/sampler.py Outdated # Find prompt/sample logprobs. prompt_logprobs_per_seq_group: List[Optional[PromptLogprobs]] = [] sample_logprobs_per_seq_group: List[SampleLogprobs] = [] top_logprob_idx = 0 selected_logprobs_idx = 0 # Make sure non-blocking .to("cpu", non_blocking=True) is finished assert selected_logprobs.shape[0] == ranks.shape[0] Copy link Collaborator Author rkooo567 May 7, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Thanks for catching! we don't need comments, but assert is kind of still needed. Removed the comment Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 Qubitium reacted with thumbs up emoji All reactions 👍 1 reaction done a964163 Yard1 reviewed May 7, 2024 View reviewed changes vllm/model_executor/layers/sampler.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . . 88c0567 simon-mo merged commit d7740ea into vllm-project : main May 8, 2024 z103cb pushed a commit
to z103cb/opendatahub_vllm
that referenced
this pull request May 9, 2024 [Core] Optimize sampler get_logprobs ( vllm-project#4594 ) 4ae5247 Copy link davidthomas426 commented May 9, 2024 Update; non_blocking=True for GPU -> CPU doesn't guarantee to synchronize when tolist() is called, so it is not safe. I used the blocking op instead. This decreases the perf improvement a bit (0.5~ish) As an alternative, you could use a cuda stream for this and do a stream synchronize before the tolist, or just forget the separate cuda stream and just use a full torch cuda synchronize if that wouldn't create a performance issue. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . robertgshaw2-redhat pushed a commit
to neuralmagic/nm-vllm
that referenced
this pull request May 19, 2024 [Core] Optimize sampler get_logprobs ( vllm-project#4594 ) 43bc7e9 dtrifiro pushed a commit
to dtrifiro/vllm
that referenced
this pull request May 21, 2024 [Core] Optimize sampler get_logprobs ( vllm-project#4594 ) 9e4b2e2 Sign up for free to join this conversation on GitHub .
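A sketch of the indexing pattern this PR describes (names illustrative, not the actual sampler code): gather all required logprobs with one advanced index and one device-to-host copy instead of per-element .item() calls, each of which forces a GPU sync.

import torch


def gather_logprobs_slow(logprobs: torch.Tensor, rows, token_ids):
    # Each .item() call synchronizes with the GPU; at large batch sizes this
    # per-element cost is what made get_logprobs take several milliseconds.
    return [logprobs[r, t].item() for r, t in zip(rows, token_ids)]


def gather_logprobs_fast(logprobs: torch.Tensor, rows, token_ids):
    row_idx = torch.tensor(rows, device=logprobs.device)
    tok_idx = torch.tensor(token_ids, device=logprobs.device)
    selected = logprobs[row_idx, tok_idx]
    # One blocking device-to-host copy plus a single .tolist(). A
    # non_blocking copy would need an explicit synchronization before
    # .tolist(), per the discussion above, so the blocking form is used here.
    return selected.cpu().tolist()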
|
2025-09-07 17:49:00
|
2a052011ca473a9dc8160f3daa1f5f63a2ad1fe3
|
https://github.com/vllm-project/vllm/pull/4527
| false | false | true | true |
SERVING: API server, OpenAI API server, Frontend | TEST: test, test, test
|
Copy link Member mgoin commented May 1, 2024 • edited by pcmoritz Loading Uh oh! There was an error while loading. Please reload this page . Follow on to #4332 to enable FP8 checkpoint loading for Mixtral and supersedes #4436 . This PR enables the following checkpoint loading features for Mixtral: Supports loading fp8 checkpoints for Mixtral, such as this "nm-testing/Mixtral-8x7B-Instruct-v0.1-FP8" test model Supports static or dynamic activation quantization with static weight quantization (all per tensor) Supports different scales for each expert weight Supports Fp8 in QKV layer Notes: The Expert Gate/Router always runs at half / full precision for now. If there are different weight scales between QKV layer (for separate QKV weights), they are re-quantized using layer.weight_scale.max() so we can have a single gemm for performance. Future work: cutlass kernels for separate QKV weight scales support memory compression from loading fp16 checkpoints and dynamically quantizing to fp8 (blocked on weight loader refactor) generalize MoE implementation to apply to other MoE models Smoke test output: python test-mixtral-fp8.py
WARNING 05-03 01:42:29 config.py:187] fp8 quantization is not fully optimized yet. The speed can be slower than non-quantized models.
INFO 05-03 01:42:29 llm_engine.py:100] Initializing an LLM engine (v0.4.1) with config: model='nm-testing/Mixtral-8x7B-Instruct-v0.1-FP8', speculative_config=None, tokenizer='nm-testing/Mixtral-8x7B-Instruct-v0.1-FP8', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, disable_custom_all_reduce=False, quantization=fp8, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), seed=0)
INFO 05-03 01:42:29 utils.py:623] Found nccl from library /home/paperspace/.config/vllm/nccl/cu12/libnccl.so.2.18.1
INFO 05-03 01:42:30 selector.py:75] Cannot use FlashAttention-2 backend because the flash_attn package is not found. Please install it for better performance.
INFO 05-03 01:42:30 selector.py:31] Using XFormers backend.
WARNING 05-03 01:42:31 fp8.py:29] Detected fp8 checkpoint. Please note that the format is experimental and subject to change.
INFO 05-03 01:42:31 weight_utils.py:199] Using model weights format ['*.safetensors']
WARNING 05-03 01:42:41 utils.py:428] Found act_scales that are not equal for fp8 MoE layer. Using the maximum across experts for each layer.
INFO 05-03 01:42:42 model_runner.py:172] Loading model weights took 43.7487 GB
INFO 05-03 01:42:51 gpu_executor.py:114] # GPU blocks: 9689, # CPU blocks: 2048
INFO 05-03 01:42:53 model_runner.py:872] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
INFO 05-03 01:42:53 model_runner.py:876] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
INFO 05-03 01:43:00 model_runner.py:953] Graph capturing finished in 7 secs.
Processed prompts: 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 4.30it/s]
Prompt: 'Hello, my name is', Generated text: ' Alyssa and I am a 17-year-old girl'
Prompt: 'The president of the United States is', Generated text: ' the head of the executive branch of the United States government and is the highest political'
Prompt: 'The capital of France is', Generated text: " a beautiful and historic city that is home to some of the world's most"
Prompt: 'The future of AI is', Generated text: ' a rapidly evolving field, with new developments and innovations happening all the time' PR Checklist (Click to Expand) Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process. PR Title and Classification Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following: [Bugfix] for bug fixes. [CI/Build] for build or continuous integration improvements. [Doc] for documentation fixes and improvements. [Model] for adding a new model or improving an existing model. Model name should appear in the title. [Frontend] For changes on the vLLM frontend (e.g., OpenAI API server, LLM class, etc.) [Kernel] for changes affecting CUDA kernels or other compute kernels. [Core] for changes in the core vLLM logic (e.g., LLMEngine , AsyncLLMEngine , Scheduler , etc.) [Hardware][Vendor] for hardware-specific changes. Vendor name should appear in the prefix (e.g., [Hardware][AMD] ). [Misc] for PRs that do not fit the above categories. Please use this sparingly. Note: If the PR spans more than one category, please include all relevant prefixes. Code Quality The PR need to meet the following code quality standards: We adhere to Google Python style guide and Google C++ style guide . Pass all linter checks. Please use format.sh to format your code. The code need to be well-documented to ensure future contributors can easily understand the code. Include sufficient tests to ensure the project to stay correct and robust. This includes both unit tests and integration tests. Please add documentation to docs/source/ if the PR modifies the user-facing behaviors of vLLM. It helps vLLM user understand and utilize the new features or changes. Notes for Large Changes Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with rfc-required and might not go through the PR. What to Expect for the Reviews The goal of the vLLM team is to be a transparent reviewing machine . We would like to make the review process transparent and efficient and make sure no contributor feel confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process: After the PR is submitted, the PR will be assigned to a reviewer. Every reviewer will pick up the PRs based on their expertise and availability. After the PR is assigned, the reviewer will provide status update every 2-3 days. If the PR is not reviewed within 7 days, please feel free to ping the reviewer or the vLLM team. After the review, the reviewer will put an action-required label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR. Please respond to all comments within a reasonable time frame. If a comment isn't clear or you disagree with a suggestion, feel free to ask for clarification or discuss the suggestion. Thank You Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone! Sorry, something went wrong. Uh oh! There was an error while loading. 
Please reload this page . All reactions mgoin added 3 commits May 1, 2024 14:16 [Kernel] Support Fp8 Checkpoints for Mixtral (Dynamic + Static … b5002df Activations) Cleanup 4378f4f Fix circular import with all_close_1d ce2051a robertgshaw2-redhat mentioned this pull request May 1, 2024 [Kernel] Support Fp8 Checkpoints for Mixtral (Dynamic + Static) #4436 Closed comaniac approved these changes May 1, 2024 View reviewed changes Copy link Collaborator comaniac left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment LGTM Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/model_executor/models/mixtral.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . robertgshaw2-redhat mentioned this pull request May 1, 2024 v0.4.2 Release Tracker #4505 Closed pcmoritz reviewed May 1, 2024 View reviewed changes vllm/model_executor/models/mixtral.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . pcmoritz reviewed May 1, 2024 View reviewed changes vllm/model_executor/models/mixtral.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . mgoin added 3 commits May 2, 2024 14:53 Merge branch 'main' into fp8-mixtral-serialization 1dc1d2d Address review 56ff89c Fix test 66febef pcmoritz reviewed May 4, 2024 View reviewed changes vllm/model_executor/models/mixtral.py # ACT_SCALE (for fp8) if quant_config.activation_scheme == "static": if not quant_config.is_checkpoint_fp8_serialized: Copy link Collaborator pcmoritz May 4, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment This needs to be removed -- we do support activation scales for FP16 checkpoints too (same as kv store scales going forward) Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator pcmoritz May 4, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Ah never mind, I misunderstood -- FP16 checkpoints with "quantization": "fp8" are also considered fp8 serialized (this is pretty confusing) Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions pcmoritz approved these changes May 4, 2024 View reviewed changes pcmoritz merged commit 2a05201 into vllm-project : main May 4, 2024 robertgshaw2-redhat pushed a commit
to neuralmagic/nm-vllm
that referenced
this pull request May 6, 2024 [Kernel] Support MoE Fp8 Checkpoints for Mixtral (Static Weights with… … 55dd119 … Dynamic/Static Activations) ( vllm-project#4527 )
Follow on to vllm-project#4332 to enable FP8 checkpoint loading for Mixtral and supersedes vllm-project#4436 .
This PR enables the following checkpoint loading features for Mixtral:
Supports loading fp8 checkpoints for Mixtral, such as this "nm-testing/Mixtral-8x7B-Instruct-v0.1-FP8" test model
Supports static or dynamic activation quantization with static weight quantization (all per tensor)
Supports different scales for each expert weight
Supports Fp8 in QKV layer
Notes:
The Expert Gate/Router always runs at half / full precision for now.
If there are different weight scales between QKV layer (for separate QKV weights), they are re-quantized using layer.weight_scale.max() so we can have a single gemm for performance. z103cb pushed a commit
to z103cb/opendatahub_vllm
that referenced
this pull request May 7, 2024 [Kernel] Support MoE Fp8 Checkpoints for Mixtral (Static Weights with… … ba2be94 … Dynamic/Static Activations) ( vllm-project#4527 )
Follow on to vllm-project#4332 to enable FP8 checkpoint loading for Mixtral and supersedes vllm-project#4436 .
This PR enables the following checkpoint loading features for Mixtral:
Supports loading fp8 checkpoints for Mixtral, such as this "nm-testing/Mixtral-8x7B-Instruct-v0.1-FP8" test model
Supports static or dynamic activation quantization with static weight quantization (all per tensor)
Supports different scales for each expert weight
Supports Fp8 in QKV layer
Notes:
The Expert Gate/Router always runs at half / full precision for now.
If there are different weight scales between QKV layer (for separate QKV weights), they are re-quantized using layer.weight_scale.max() so we can have a single gemm for performance. dtrifiro pushed a commit
to opendatahub-io/vllm
that referenced
this pull request May 7, 2024 [Kernel] Support MoE Fp8 Checkpoints for Mixtral (Static Weights with… … 111b1a5 … Dynamic/Static Activations) ( vllm-project#4527 )
dtrifiro mentioned this pull request May 15, 2024 bump ubi base image tag opendatahub-io/vllm#24 Merged
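As a rough illustration of the re-quantization note above (folding separate Q/K/V weight scales onto layer.weight_scale.max() so a single fused GEMM can be used), here is a minimal sketch. The function name and tensor layout are assumptions, not vLLM's loader code, and it needs a PyTorch build that exposes torch.float8_e4m3fn (2.1+).

    import torch

    def requantize_to_shared_scale(shards, scales):
        # shards: fp8 weight shards (e.g. separate Q, K and V projections)
        # scales: matching per-tensor scales, shard.float() * scale ~ original weight
        max_scale = max(float(s) for s in scales)
        dequant = [shard.to(torch.float16) * float(s)
                   for shard, s in zip(shards, scales)]
        fused = torch.cat(dequant, dim=0)  # one weight tensor for a single fused GEMM
        return (fused / max_scale).to(torch.float8_e4m3fn), max_scale

Re-quantizing with the largest scale only shrinks values relative to the fp8 range, so shards that originally had smaller scales lose a little precision but every value stays representable.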
|
2025-09-07 17:49:03
|
ad8d696a99ca1eee19f1404e16e8e82df592ff85
|
https://github.com/vllm-project/vllm/pull/4270
| false | true | true | true |
PERF: throughput, Throughput, Throughput | SERVING: API server, OpenAI API server, Frontend | TEST: test, test, CI
|
Copy link Collaborator rkooo567 commented Apr 22, 2024 After the scheduler refactoring PR, the scheduler iteration overhead became 2ms -> 11ms. The major overhead was coming from logger.debug added to schedule_running. The main issue I think was that fstring is always evaluated although logger.debug is used, which causes additional overhead. Adding a very small overhead (less than 5us) changes e2e throughput a lot for the scheduler. scheduler after fix
Throughput: 10.77 requests/s, 5514.02 tokens/s
iter takes 0.8~2.5ms
scheduler before regression
Throughput: 11.37 requests/s, 5821.86 tokens/s
iter takes 0.5~2ms
(5514.02 - 5821.86) / 5821.86 * 100 = -5.28 PR Checklist (Click to Expand) Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process. PR Title and Classification Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following: [Bugfix] for bug fixes. [CI/Build] for build or continuous integration improvements. [Doc] for documentation fixes and improvements. [Model] for adding a new model or improving an existing model. Model name should appear in the title. [Frontend] For changes on the vLLM frontend (e.g., OpenAI API server, LLM class, etc.) [Kernel] for changes affecting CUDA kernels or other compute kernels. [Core] for changes in the core vLLM logic (e.g., LLMEngine , AsyncLLMEngine , Scheduler , etc.) [Hardware][Vendor] for hardware-specific changes. Vendor name should appear in the prefix (e.g., [Hardware][AMD] ). [Misc] for PRs that do not fit the above categories. Please use this sparingly. Note: If the PR spans more than one category, please include all relevant prefixes. Code Quality The PR need to meet the following code quality standards: We adhere to Google Python style guide and Google C++ style guide . Pass all linter checks. Please use format.sh to format your code. The code need to be well-documented to ensure future contributors can easily understand the code. Include sufficient tests to ensure the project to stay correct and robust. This includes both unit tests and integration tests. Please add documentation to docs/source/ if the PR modifies the user-facing behaviors of vLLM. It helps vLLM user understand and utilize the new features or changes. Notes for Large Changes Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with rfc-required and might not go through the PR. What to Expect for the Reviews The goal of the vLLM team is to be a transparent reviewing machine . We would like to make the review process transparent and efficient and make sure no contributor feel confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process: After the PR is submitted, the PR will be assigned to a reviewer. Every reviewer will pick up the PRs based on their expertise and availability. After the PR is assigned, the reviewer will provide status update every 2-3 days. If the PR is not reviewed within 7 days, please feel free to ping the reviewer or the vLLM team. After the review, the reviewer will put an action-required label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR. Please respond to all comments within a reasonable time frame. If a comment isn't clear or you disagree with a suggestion, feel free to ask for clarification or discuss the suggestion. Thank You Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 
All reactions rkooo567 added 5 commits April 22, 2024 02:20 ip 31c9e5b fix one issue ac78b77 , cc1b303 done d741157 done 915fdde rkooo567 changed the title Scheduler perf fix [Core] Scheduler perf fix Apr 22, 2024 rkooo567 mentioned this pull request Apr 22, 2024 [Core] Fix scheduler perf regression #4261 Closed simon-mo approved these changes Apr 22, 2024 View reviewed changes simon-mo enabled auto-merge (squash) April 22, 2024 16:33 Copy link Collaborator comaniac commented Apr 22, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . This is a very good example of logging-fstring-interpolation (W1203) . Could you also try this to see if this performance is preserved? If so, we can enable logging-fstring-interpolation and logging-not-lazy to CI linting. logger.debug("add_seq_group %s", seq_group.request_id) 👍 4 simon-mo, richardliaw, AaronFriel, and rkooo567 reacted with thumbs up emoji All reactions 👍 4 reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . fix test 0fce8f0 simon-mo merged commit ad8d696 into vllm-project : main Apr 22, 2024 Copy link Collaborator Author rkooo567 commented Apr 22, 2024 @comaniac let me try today! All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . cadedaniel mentioned this pull request Apr 23, 2024 [Core] Enable prefix caching with block manager v2 enabled #4142 Merged rkooo567 mentioned this pull request Apr 24, 2024 [CI] Disable non-lazy string operation on logging #4326 Merged Copy link Collaborator Author rkooo567 commented Apr 24, 2024 @comaniac #4326 Besides, I tried what you suggested on the scheduler, but somehow it is still slower (faster than fstring). So I guess using logger at all is not desirable in the scheduler All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator comaniac commented Apr 25, 2024 Thanks for trying that. I guess it means logger overhead cannot be ignored in some very intense places. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . xjpang pushed a commit
to xjpang/vllm
that referenced
this pull request Apr 25, 2024 [Core] Scheduler perf fix ( vllm-project#4270 ) e8a65e2 robertgshaw2-redhat pushed a commit
to neuralmagic/nm-vllm
that referenced
this pull request Apr 26, 2024 [Core] Scheduler perf fix ( vllm-project#4270 ) 542dc70 alexeykondrat pushed a commit
to alexeykondrat/ci-vllm
that referenced
this pull request May 1, 2024 [Core] Scheduler perf fix ( vllm-project#4270 ) 81e9afe z103cb pushed a commit
to z103cb/opendatahub_vllm
that referenced
this pull request May 7, 2024 [Core] Scheduler perf fix ( vllm-project#4270 ) c10e074 dtrifiro mentioned this pull request May 15, 2024 bump ubi base image tag opendatahub-io/vllm#24 Merged
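To make the root cause above concrete, here is a small sketch (not the scheduler's actual code) contrasting eager f-string logging with the lazy %-style call suggested in the thread; seq_group stands in for any object with a request_id attribute.

    import logging

    logger = logging.getLogger("vllm.core.scheduler")

    def add_seq_group_logging(seq_group):
        # Eager: the f-string is formatted on every call, even when DEBUG is
        # filtered out -- this is the per-iteration overhead described above.
        logger.debug(f"add_seq_group {seq_group.request_id}")

        # Lazy: the format string and argument are only combined if a handler
        # actually emits the record.
        logger.debug("add_seq_group %s", seq_group.request_id)

        # Very hot paths can guard the call explicitly, or drop the log entirely.
        if logger.isEnabledFor(logging.DEBUG):
            logger.debug("add_seq_group %s", seq_group.request_id)

Even the lazy form still pays for the method call and the attribute access, which matches the later observation in the thread that removing the log from the scheduler entirely was faster still.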
|
2025-09-07 17:49:06
|
2f1928354903ae0c6edfe76cc90081eb513ead2c
|
https://github.com/vllm-project/vllm/pull/3890
| false | true | false | false |
PERF: latency, optimization
|
Copy link Member youkaichao commented Apr 7, 2024 Some code is inefficient. Find some equivalent but more efficient code. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 🚀 3 ywang96, zhuohan123, and anttttti reacted with rocket emoji All reactions 🚀 3 reactions avoid get_token_ids by len b8cadb3 youkaichao marked this pull request as draft April 7, 2024 01:25 cadedaniel approved these changes Apr 7, 2024 View reviewed changes youkaichao marked this pull request as ready for review April 7, 2024 02:13 youkaichao merged commit 2f19283 into vllm-project : main Apr 7, 2024 youkaichao deleted the latency_optimize branch April 7, 2024 02:14 z103cb pushed a commit
to z103cb/opendatahub_vllm
that referenced
this pull request Apr 22, 2024 [Core] latency optimization ( vllm-project#3890 ) 9d9b6c4 dtrifiro mentioned this pull request May 15, 2024 bump ubi base image tag opendatahub-io/vllm#24 Merged
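A small sketch of the kind of change the commit message "avoid get_token_ids by len" points at: asking a sequence for its length directly instead of materializing the full token list just to count it. The class below is a stand-in, not vLLM's Sequence.

    class ToySequence:
        def __init__(self, token_ids):
            self._token_ids = token_ids

        def get_token_ids(self):
            # Returns a copy of every token id -- fine when the caller needs
            # the ids, wasteful when it only needs the count.
            return list(self._token_ids)

        def get_len(self):
            return len(self._token_ids)

    seq = ToySequence(list(range(4096)))
    slow = len(seq.get_token_ids())  # builds a 4096-element list just to count it
    fast = seq.get_len()             # same number, no copy
    assert slow == fast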
|
2025-09-07 17:49:09
|
b6d103542c654fb63013a1e45a586d654ae36a2a
|
https://github.com/vllm-project/vllm/pull/3662
| false | true | true | true |
PERF: latency, latency, latency | SERVING: API server, OpenAI API server, Frontend | TEST: test, test, test
|
Copy link Contributor mawong-amd commented Mar 27, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . This PR primarily creates optimized specializations of fused_add_rms_norm_kernel, used in many layernorms. It also includes a slightly optimized version of blockReduceSum/warpReduceSum which slightly reduce the number of shuffles done when the max block size is <=512 and known at compile time. It is observed that fused_add_rms_norm is memory latency bound under many scenarios. The optimized implementation primarily derives its benefits by Coalescing global memory transactions into larger operations, which reduces the number of stalls that need to be hidden. This is achieved by (implicitly) unrolling both of the for loops through the use of a vector struct. Using a smaller block size when the number of blocks dispatched is large, which allows more blocks to simultaneously fit onto execution units and hence improves latency hiding. The same ideas contained here can be applied to other relatively simple kernels which should be memory bound (e.g. some activation kernels). More performance numbers can be provided as they become available or if requested. The existing test suite appears sufficient, but additional tests can be created on request. Some examples of the speed up, as obtained by profiling via benchmark_latency on Llama2-70B (hidden size 8192), FP16, TP = 1, on MI300X: (input_len = output_len = batch_size = 128): Prefill improves to 305 ms from 440 ms. (input_len = 2048, output_len = 128, batch_size = 1): Prefill improves to 41 ms from 88 ms. For both cases above, decode improves to 7 ms from 11 ms. Another optimization attempted was the use of shared memory, which effectively converts a global memory load into a shared memory load/store pair per item. While this improves performance when applied to baseline, it was not observed to improve performance on top of the current optimizations. BEFORE SUBMITTING, PLEASE READ THE CHECKLIST BELOW AND FILL IN THE DESCRIPTION ABOVE PR Checklist (Click to Expand) Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process. PR Title and Classification Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following: [Bugfix] for bug fixes. [CI/Build] for build or continuous integration improvements. [Doc] for documentation fixes and improvements. [Model] for adding a new model or improving an existing model. Model name should appear in the title. [Frontend] For changes on the vLLM frontend (e.g., OpenAI API server, LLM class, etc.) [Kernel] for changes affecting CUDA kernels or other compute kernels. [Core] for changes in the core vLLM logic (e.g., LLMEngine , AsyncLLMEngine , Scheduler , etc.) [Hardware][Vendor] for hardware-specific changes. Vendor name should appear in the prefix (e.g., [Hardware][AMD] ). [Misc] for PRs that do not fit the above categories. Please use this sparingly. Note: If the PR spans more than one category, please include all relevant prefixes. Code Quality The PR need to meet the following code quality standards: We adhere to Google Python style guide and Google C++ style guide . Pass all linter checks. Please use format.sh to format your code. The code need to be well-documented to ensure future contributors can easily understand the code. 
Include sufficient tests to ensure the project to stay correct and robust. This includes both unit tests and integration tests. Please add documentation to docs/source/ if the PR modifies the user-facing behaviors of vLLM. It helps vLLM user understand and utilize the new features or changes. Notes for Large Changes Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with rfc-required and might not go through the PR. What to Expect for the Reviews The goal of the vLLM team is to be a transparent reviewing machine . We would like to make the review process transparent and efficient and make sure no contributor feel confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process: After the PR is submitted, the PR will be assigned to a reviewer. Every reviewer will pick up the PRs based on their expertise and availability. After the PR is assigned, the reviewer will provide status update every 2-3 days. If the PR is not reviewed within 7 days, please feel free to ping the reviewer or the vLLM team. After the review, the reviewer will put an action-required label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR. Please respond to all comments within a reasonable time frame. If a comment isn't clear or you disagree with a suggestion, feel free to ask for clarification or discuss the suggestion. Thank You Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 2 cadedaniel and WoosukKwon reacted with thumbs up emoji All reactions 👍 2 reactions WoosukKwon self-assigned this Mar 28, 2024 WoosukKwon reviewed Mar 28, 2024 View reviewed changes Copy link Collaborator WoosukKwon left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment @mawong-amd Thanks for submitting the PR! This optimization seems to be necessary for MI300x GPUs. Unfortunately, I didn't see noticeable e2e performance boost for A100 GPUs. Is this expected? Also, I'm a bit worried about whether the new kernels keep the semantics of the current kernels. Could you double check? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions csrc/reduction_utils.cuh Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . csrc/layernorm_kernels.cu Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . csrc/layernorm_kernels.cu Comment on lines +252 to +253 scalar_t z = input[blockIdx.x * hidden_size + idx]; z += residual[blockIdx.x * hidden_size + idx]; float x = (float) z; Copy link Collaborator WoosukKwon Mar 28, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . 
Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Doesn't this change the semantics of the kernel since we do the addition in FP16/BF16 instead of FP32? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Contributor Author mawong-amd Mar 28, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment It does in theory, however I've not noticed any observable effects from doing the addition in lower precision so far (even the logprobs of generated sequences are identical). In terms of a possible increase in rounding error, this is likely still negligible compared to typical errors incurred during the reduction phase and in the approximate rsqrt. The benefit of doing the addition in FP16/BF16 is that it can be implemented as a packed operation. But this step shouldn't be a bottleneck in any case. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator WoosukKwon Mar 30, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I see, makes sense. Thanks for the explanation! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions cmake/utils.cmake Comment on lines +103 to +107 list(REMOVE_ITEM GPU_FLAGS "-D__CUDA_NO_HALF_OPERATORS__" "-D__CUDA_NO_HALF_CONVERSIONS__" "-D__CUDA_NO_BFLOAT16_CONVERSIONS__" "-D__CUDA_NO_HALF2_OPERATORS__") Copy link Collaborator WoosukKwon Mar 28, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Can this affect other CUDA kernels? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Contributor Author mawong-amd Mar 28, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment It could, but I haven't noticed any side effects and neither have the tests. The existing defines seem to originate from Torch's default defines as a legacy item and it's not clear to me if there's a good reason to retain them nowadays (e.g. seems like the recently added Punica extension similarly disables these defines). If this is a concern, we could either limit the scope of removing these defines to this file or use free functions instead of operators (e.g. __hadd/__hadd2 for __half/__half2 operator+). 
But this increases code bloat and non-portability even further: the current implementation is already compromised to an extent by the (deficient) headers provided by CUDA/HIP (neither __hadd/__hadd2 as free functions or "heterogeneous" operators like float2::operator*(float) are consistently implemented in CUDA, while conversion operators/constructors are not consistently implemented by both). Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator WoosukKwon Mar 30, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Got it. Thanks for the explanation! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions WoosukKwon added
the action-required label Mar 28, 2024 WoosukKwon removed their assignment Mar 28, 2024 mawong-amd changed the title [Kernel] Layernorm performance optimization [WIP][Kernel] Layernorm performance optimization Mar 28, 2024 mawong-amd changed the title [WIP][Kernel] Layernorm performance optimization [Kernel] Layernorm performance optimization Mar 28, 2024 Copy link Contributor Author mawong-amd commented Mar 28, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . @mawong-amd Thanks for submitting the PR! This optimization seems to be necessary for MI300x GPUs. Unfortunately, I didn't see noticeable e2e performance boost for A100 GPUs. Is this expected? Also, I'm a bit worried about whether the new kernels keep the semantics of the current kernels. Could you double check? Hi, I managed to run a few performance tests on H100 last night and also observed that there was no speed up. I looked at the PTX and SASS assembly and NVCC was not fusing the loads/stores as expected. It appears NVCC needs to know these global memory ops are aligned on a 16 byte boundary to unlock the full 128-bit coalesced op; I've added this alignment requirement to the vector struct and now I'm observing similar speedups on H100. Preliminary numbers I'm seeing on H100 are: (input_len = output_len = batch_size = 128): Prefill improves to 92 ms from 178 ms. (input_len = 2048, output_len = 128, batch_size = 1): Prefill improves to 45 ms from 84 ms. For both cases above, decode improves to 3 ms from 8 ms. One "drawback" of this change is we can now only enable optimizations when the hidden_size is a multiple of 8 and the tensor pointers are aligned on a 16 byte boundary. But these conditions should be met essentially all the time. As for the changed semantics, I'll discuss it in the relevant review comment thread. Thanks! 👍 1 WoosukKwon reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mawong-amd added 6 commits March 30, 2024 03:30 Layernorm optimizations: … aac1754 Bulk conversions (packed halfs into half2, using vectors of half2);
block and warp reduce with AMD wavesize 64 (vs 32);
using smaller block sizes for improved block occupancy on CUs
Use larger block sizes for decode; optimize warp and block reduce fully
Refactor vector to use half to maintain same alignment as c10::Half; move packed logic into member functions
Add a few missing unroll directives
Fix blockReduce stall caused by warp divergence on CUDA (vLLM uses universal masks)
Refactor vector type to enable optimizations for bf16
Re-apply the blockReduceSum fix for warp divergence
Hotfix: Disable BF16 opts due to ROCm 5.7 incompatibility
Remove redundant inline specifiers; preparing for upstream Disable no half conv flags for CUDA d2f681a Add more hidden sizes (including non-multiples of 8) to test 5128836 Enforce 16 byte alignment for CUDA vectorized mem ops c0e37f6 Add back explicit cast to T in reduction_utils 677e045 Style tweak a1bbdc4 mawong-amd force-pushed the layernorm2upstream branch
from 4f94b87 to a1bbdc4 Compare March 30, 2024 04:03 Copy link Contributor Author mawong-amd commented Mar 30, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . Quick update on end-to-end runtime numbers. With the latest changes, I'm seeing small but observable improvements on H100. Specifically, on the latency benchmark (50 iters on each test): (input_len = output_len = batch_size = 128): Improves to 11.463s from 11.658s. [1.7% improvement] (input_len = 2048, output_len = 128, batch_size = 1): Improves to 4.261s from 4.362s. [2.3% improvement] 👍 1 WoosukKwon reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mawong-amd requested a review
from WoosukKwon March 30, 2024 16:08 WoosukKwon added rocm Related to AMD ROCm and removed action-required labels Mar 30, 2024 WoosukKwon self-assigned this Mar 30, 2024 WoosukKwon approved these changes Mar 30, 2024 View reviewed changes Copy link Collaborator WoosukKwon left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment @mawong-amd LGTM! Thanks for the optimization! Didn't know that RMSNorm can affect the performance this much. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions WoosukKwon merged commit b6d1035 into vllm-project : main Mar 30, 2024 Copy link Member youkaichao commented Apr 1, 2024 I realized that this pr breaks cuda 11.8 support because of the usage of __half2 etc. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor Author mawong-amd commented Apr 1, 2024 I realized that this pr breaks cuda 11.8 support because of the usage of __half2 etc. I think we can hotfix in a define guard to enable these optimizations only when the cuda version is > 11.8. Let me prepare a diff that does that. 👍 1 youkaichao reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor Author mawong-amd commented Apr 1, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . EDIT: Hotfix created as the following PR #3782 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Member youkaichao commented Apr 1, 2024 @mawong-amd Can you send a PR to land that patch? 🚀 1 mawong-amd reacted with rocket emoji All reactions 🚀 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mawong-amd mentioned this pull request Apr 1, 2024 [Hotfix][CI/Build][Kernel] CUDA 11.8 does not support layernorm optimizations #3782 Merged dtrifiro mentioned this pull request May 15, 2024 bump ubi base image tag opendatahub-io/vllm#24 Merged Sign up for free to join this conversation on GitHub .
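The kernel changes themselves live in CUDA/HIP, but the dispatch condition described above (the vectorized path is only legal when the hidden size is a multiple of 8 and the tensors are 16-byte aligned) can be sketched on the Python side; this helper is illustrative, not vLLM's actual launcher.

    import torch

    def can_use_vectorized_fused_add_rms_norm(x: torch.Tensor,
                                              residual: torch.Tensor,
                                              weight: torch.Tensor) -> bool:
        # 128-bit loads move 8 fp16/bf16 elements at a time, so the hidden
        # dimension must be a multiple of 8 and every pointer must sit on a
        # 16-byte boundary for the coalesced path to be used.
        if x.shape[-1] % 8 != 0:
            return False
        return all(t.data_ptr() % 16 == 0 for t in (x, residual, weight))

When the check fails, the scalar kernel would be used instead, which is why the PR notes the optimized path applies "essentially all the time" rather than unconditionally.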
|
2025-09-07 17:49:12
|
3a243095e5e7b655b63ab08fbd5936cb40850415
|
https://github.com/vllm-project/vllm/pull/3623
| false | true | true | true |
PERF: improvement | SERVING: API server, OpenAI API server, Frontend | TEST: test, CI, continuous integration
|
Copy link Collaborator Yard1 commented Mar 25, 2024 Small tweak to CPU<->GPU comms in Sampler's _get_ranks (not a major improvement, just cleanup). PR Checklist (Click to Expand) Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process. PR Title and Classification Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following: [Bugfix] for bug fixes. [CI/Build] for build or continuous integration improvements. [Doc] for documentation fixes and improvements. [Model] for adding a new model or improving an existing model. Model name should appear in the title. [Frontend] For changes on the vLLM frontend (e.g., OpenAI API server, LLM class, etc.) [Kernel] for changes affecting CUDA kernels or other compute kernels. [Core] for changes in the core vLLM logic (e.g., LLMEngine , AsyncLLMEngine , Scheduler , etc.) [Hardware][Vendor] for hardware-specific changes. Vendor name should appear in the prefix (e.g., [Hardware][AMD] ). [Misc] for PRs that do not fit the above categories. Please use this sparingly. Note: If the PR spans more than one category, please include all relevant prefixes. Code Quality The PR need to meet the following code quality standards: We adhere to Google Python style guide and Google C++ style guide . Pass all linter checks. Please use format.sh to format your code. The code need to be well-documented to ensure future contributors can easily understand the code. Include sufficient tests to ensure the project to stay correct and robust. This includes both unit tests and integration tests. Please add documentation to docs/source/ if the PR modifies the user-facing behaviors of vLLM. It helps vLLM user understand and utilize the new features or changes. Notes for Large Changes Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with rfc-required and might not go through the PR. What to Expect for the Reviews The goal of the vLLM team is to be a transparent reviewing machine . We would like to make the review process transparent and efficient and make sure no contributor feel confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process: After the PR is submitted, the PR will be assigned to a reviewer. Every reviewer will pick up the PRs based on their expertise and availability. After the PR is assigned, the reviewer will provide status update every 2-3 days. If the PR is not reviewed within 7 days, please feel free to ping the reviewer or the vLLM team. After the review, the reviewer will put an action-required label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR. Please respond to all comments within a reasonable time frame. If a comment isn't clear or you disagree with a suggestion, feel free to ask for clarification or discuss the suggestion. Thank You Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone! Sorry, something went wrong. Uh oh! 
There was an error while loading. Please reload this page . All reactions Optimize _get_ranks in Sampler c8f8eb7 Yard1 requested review from esmeetu , zhuohan123 and simon-mo March 25, 2024 21:28 njhill approved these changes Mar 25, 2024 View reviewed changes Copy link Member njhill left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Nice! I didn't realize that you could do this particular kind of indexing with tensors. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions WoosukKwon merged commit 3a24309 into vllm-project : main Mar 25, 2024 Yard1 deleted the optimize_get_ranks branch March 25, 2024 23:49 xjpang pushed a commit
to xjpang/vllm
that referenced
this pull request Mar 31, 2024 Optimize _get_ranks in Sampler ( vllm-project#3623 ) 19d7628 dtrifiro mentioned this pull request May 15, 2024 bump ubi base image tag opendatahub-io/vllm#24 Merged
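The tensor-indexing trick referred to above can be sketched as follows; this is a simplified illustration of computing token ranks with advanced indexing and a single comparison, not the exact Sampler code.

    import torch

    def get_ranks(logprobs: torch.Tensor, chosen: torch.Tensor) -> torch.Tensor:
        # logprobs: [num_seqs, vocab_size]; chosen: [num_seqs] token ids.
        rows = torch.arange(logprobs.shape[0], device=logprobs.device)
        chosen_logprobs = logprobs[rows, chosen].unsqueeze(-1)  # advanced indexing
        # Rank = 1 + number of tokens scored strictly higher than the chosen one.
        return (logprobs > chosen_logprobs).sum(dim=-1) + 1

Keeping everything as tensor operations also lets the chosen ids stay on the GPU instead of bouncing through Python lists, which is the CPU<->GPU comms tweak the description mentions.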
|
2025-09-07 17:49:15
|
bfdb1ba5c3fb14387c69acb1f5067102d8028e56
|
https://github.com/vllm-project/vllm/pull/3469
| false | true | true | true |
PERF: latency, latency, latency | SERVING: API server, OpenAI API server, Frontend | TEST: test, test, test
|
Copy link Collaborator Yard1 commented Mar 18, 2024 PR Checklist (Click to expand. Please read before submitting.) Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process. PR Title and Classification Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following: [Bugfix] for bug fixes. [CI/Build] for build or continuous integration improvements. [Doc] for documentation fixes and improvements. [Model] for adding a new model or improving an existing model. Model name should appear in the title. [Frontend] For changes on the vLLM frontend (e.g., OpenAI API server, LLM class, etc.) [Kernel] for changes affecting CUDA kernels or other compute kernels. [Core] for changes in the core vLLM logic (e.g., LLMEngine , AsyncLLMEngine , Scheduler , etc.) [Hardware][Vendor] for hardware-specific changes. Vendor name should appear in the prefix (e.g., [Hardware][AMD] ). [Misc] for PRs that do not fit the above categories. Please use this sparingly. Note: If the PR spans more than one category, please include all relevant prefixes. Code Quality The PR need to meet the following code quality standards: We adhere to Google Python style guide and Google C++ style guide . Pass all linter checks. Please use format.sh to format your code. The code need to be well-documented to ensure future contributors can easily understand the code. Include sufficient tests to ensure the project to stay correct and robust. This includes both unit tests and integration tests. Please add documentation to docs/source/ if the PR modifies the user-facing behaviors of vLLM. It helps vLLM user understand and utilize the new features or changes. Notes for Large Changes Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with rfc-required and might not go through the PR. What to Expect for the Reviews The goal of the vLLM team is to be a transparent reviewing machine . We would like to make the review process transparent and efficient and make sure no contributor feel confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process: After the PR is submitted, the PR will be assigned to a reviewer. Every reviewer will pick up the PRs based on their expertise and availability. After the PR is assigned, the reviewer will provide status update every 2-3 days. If the PR is not reviewed within 7 days, please feel free to ping the reviewer or the vLLM team. After the review, the reviewer will put an action-required label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR. Please respond to all comments within a reasonable time frame. If a comment isn't clear or you disagree with a suggestion, feel free to ask for clarification or discuss the suggestion. Thank You Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone! 
This PR improved detokenization performance for the prefill step by doing the following: Avoiding the detokenization of the entire prompt when unnecessary Improving logprob token detokenization to avoid repeated computation Making prompt logprob detokenization incremental In order to facilitate testing, the detokenization logic is moved to its own abstraction. Benchmark results (BS=1, the gain will be linear depending on the number of input tokens in a batch) on a single A10 GPU, with 5 logprobs to decode: python /home/ray/default/vllm_public/benchmarks/benchmark_latency.py --model meta-llama/Llama-2-7b-chat-hf --batch-size 1 --output-len 2 --input-len 1000 --num-iters 1 Before PR: Avg latency: 0.292 seconds After PR: Avg latency: 0.287 seconds Benchmark results on a single A10 GPU, with 5 prompt logprobs to decode: python /home/ray/default/vllm_public/benchmarks/benchmark_latency.py --model meta-llama/Llama-2-7b-chat-hf --batch-size 1 --output-len 2 --input-len 1000 --num-iters 1 Before PR: Avg latency: 2.133 seconds After PR: Avg latency: 0.362 seconds Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 5 MeloYang05, robertgshaw2-redhat, esmeetu, WoosukKwon, and ywang96 reacted with thumbs up emoji 👀 4 njhill, robertgshaw2-redhat, WoosukKwon, and ywang96 reacted with eyes emoji All reactions 👍 5 reactions 👀 4 reactions Yard1 and others added 3 commits March 16, 2024 22:15 WIP 5b9153d WIP 8e37cfa Add co-author … ff9c9a5 Co-authored-by: MeloYang <[email protected]> Yard1 requested review from esmeetu , zhuohan123 and simon-mo March 18, 2024 17:21 Fix CI e4c2ebb Yard1 changed the title Improve detokenization performance for prefill [Core] Improve detokenization performance for prefill Mar 18, 2024 richardliaw assigned simon-mo Mar 18, 2024 Fix test 3171bbf Copy link Collaborator WoosukKwon commented Mar 22, 2024 @simon-mo Kindly reminder for this PR. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . simon-mo approved these changes Mar 22, 2024 View reviewed changes Copy link Collaborator simon-mo left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I'm stamping this but could useful for another eye on this. But since it's mostly moving things around + utilizing existing functions to achieve something, I think it's mergable. I tried my best to understand the code, left some comments for readability that please feel free to address. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/transformers_utils/detokenizer.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/transformers_utils/tokenizer.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/transformers_utils/detokenizer.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/transformers_utils/detokenizer.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/transformers_utils/detokenizer.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . 
njhill reviewed Mar 22, 2024 View reviewed changes Copy link Member njhill left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment @Yard1 this looks great, thanks. I agree with all @simon-mo 's comments. I like the Detokenizer class separation but wonder whether this could be taken a bit further: have Detokenizer be stateful and self-contained, it could contain the prefix_offset , read_offset and output_text fields that are currently in Sequence , and itself be a field of Sequence (for output tokens). A separate instance of it could be used for the prompt tokens. WDYT? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author Yard1 commented Mar 22, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . @njhill I like that idea, though we'd need to figure out a good design to share the tokenizer object across the instances. I think the default assumption may be that each Detokenizer has it's own HF tokenizer, but we'd like it to be shared. Maybe we could have something like DetokenizationState in the Sequence All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Review feedback c7e933f Copy link Member njhill commented Mar 22, 2024 I think the default assumption may be that each Detokenizer has it's own HF tokenizer, @Yard1 I'm not sure I follow why that would be the case or why it would matter? I don't see the problem with multiple Detokenizer instances referencing the same tokenizer? All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator Author Yard1 commented Mar 22, 2024 @njhill Oh yeah from technical standpoint it's all clear, but from design standpoint multiple instances sharing the same object makes it hard to realize at a glance whether that object is shared or separate, which may lead to issues later ("As a new developer, I want to modify the tokenizer in this sequence for some reason, so I will just do that without realizing it's shared across all sequences") All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Member njhill commented Mar 22, 2024 Ah, makes sense! though I think it's not uncommon for such things to be shared. I.e. the field is just seen as a pointer to the tokenizer used by this detokenizer. In any case maybe a comment on the field making clear that it's shared would help with that? And I don't mean to imply that this PR should necessarily be held up for this change, could always be done as a follow-on. 👍 1 Yard1 reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator Author Yard1 commented Mar 22, 2024 @njhill definitely! I think it would be a good followup (even just breaking up some of the big sequence.py classes would be good) 👍 1 njhill reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 
Fix test daa87e9 Yard1 merged commit bfdb1ba into vllm-project : main Mar 22, 2024 Yard1 deleted the improve_detokenization_for_prefill branch March 22, 2024 20:44 dtrifiro mentioned this pull request May 15, 2024 bump ubi base image tag opendatahub-io/vllm#24 Merged gc-fu pushed a commit
to analytics-zoo/vllm
that referenced
this pull request Jul 2, 2024 [Core] Improve detokenization performance for prefill ( vllm-project#3469 … d60ae0f )
Co-authored-by: MeloYang <[email protected]>
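A simplified sketch of the incremental idea behind this PR: decode only a sliding window of recent tokens and emit text once it can no longer change, instead of re-detokenizing the whole prompt on every step. The helper below assumes a Hugging Face-style tokenizer.decode and is an illustration, not the actual Detokenizer class.

    def detokenize_incrementally(tokenizer, all_token_ids, prefix_offset, read_offset):
        # Decode the already-emitted window and the window extended by new tokens.
        prefix_text = tokenizer.decode(all_token_ids[prefix_offset:read_offset])
        new_text = tokenizer.decode(all_token_ids[prefix_offset:])
        if len(new_text) > len(prefix_text) and not new_text.endswith("\ufffd"):
            # The suffix is stable (no partial multi-byte sequence): emit it and
            # slide the window forward.
            delta = new_text[len(prefix_text):]
            return delta, read_offset, len(all_token_ids)
        # Otherwise hold the text back until more tokens arrive.
        return "", prefix_offset, read_offset

Applying the same windowed decoding to prompt logprobs is what turns the repeated full-prompt decode into the latency win reported above (2.133 s down to 0.362 s in the prompt-logprob benchmark).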
|
2025-09-07 17:49:18
|
cf2f084d56a1293cb08da2393984cdc7685ac019
|
https://github.com/vllm-project/vllm/pull/3279
| false | true | false | true |
PERF: TTFT, TTFT, TTFT | TEST: test, test, test
|
Copy link Member tdoublep commented Mar 8, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . We have been benchmarking vLLM internally using a synthetic workload generator that has been fitted to mimic our production workloads. It stresses the inference server using a varying number of concurrent users, all users send requests that are drawn uniformly from a heterogeneous set of requests with different prompt lengths and number of generated tokens. We have found that for these workloads, vLLM has extremely low TTFT (time to first token) but has relatively high ITL (inter-token latency). An in-depth analysis seems to show that vLLM tends to schedule prompts as soon as possible, resulting in very small prompt batches, which are processed very quickly, but end up starving the decoding phase. This PR adds a new optional feature --scheduler-use-delay which, if enabled, creates an artificial delay before scheduling prompts. The delay is determined dynamically based on the time to perform the last prompt step. This delay allows the waiting queue to fill up with more requests. This gives the opportunity to make larger prompt batches, but due to the heterogeneous nature of the workload, we then hit issues related to padding overhead. It is thus beneficial to combine this scheduler delay with the --scheduler-policy=reorder feature from #2357 which sorts the waiting queue by sequence length. This allows us to create much larger prompt batches whilst staying with the padding limits, and leads to significant improvements in terms of ITL performance. This ITL improvement comes at the expense of TTFT performance, since (a) we are applying an artificial delay before scheduling prompts and (b) we are now processing larger batches which take longer to process. Different use-cases may have a preference towards either metric, which is why we feel this makes sense as an optional feature for now. Benchmarking results (labels on each point indicates the number of concurrent users): Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 3 njhill, ywang96, and zhuohan123 reacted with thumbs up emoji All reactions 👍 3 reactions jvlunteren and others added 2 commits March 7, 2024 18:35 Implement dynamic scheduler delay … 0d0d540 Co-authored-by: Thomas Parnell <[email protected]> SchedulerConfig: add default value for use_delay 75b7f57 Copy link Collaborator robertgshaw2-redhat commented Mar 8, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . Take a look also at the chunked prefill efforts to address this #3106 👍 1 tdoublep reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Member Author tdoublep commented Mar 8, 2024 @robertgshaw2-neuralmagic Thanks, and agreed: chunked prefill may eventually solve this problem in a different way. We hope that this relatively simple, optional, change can be used to improve performance in the meantime. 👍 4 robertgshaw2-redhat, mgoin, njhill, and ywang96 reacted with thumbs up emoji All reactions 👍 4 reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Member ywang96 commented Mar 8, 2024 This delay allows the waiting queue to fill up with more requests. 
This might affect #3168 and IMO it's worth thinking about how to integrate these control changes with each other All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator Yard1 commented Mar 8, 2024 @tdoublep We were planning to upstream something similar, but instead of time we used number of decode iterations ("schedule prefill iteration only after N decode iterations have been completed or there are no running sequences"). We believe that this scheme is more generic and easier to implement. I'd be happy to make a PR early next week, if you are interested in trying that out. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Member njhill commented Mar 8, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . @Yard1 could you elaborate on "more generic and easier to implement"? Isn't it completely generic and fairly trivial to implement in either case? We found the adaptive time-based approach to work very well, and it makes more sense to me intuitively at least. The goal is to prevent prefills from starving decode progress - the enforced delay is some fraction of the duration of the last prefill and so equivalent to saying that not more than say 50% of time can be spent in prefill. We chose this min delay to be half the last prefill time which ensures at most 66% of time is spent in prefill. Of course like in your case, the min delay only applies while there are still running sequences. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator Yard1 commented Mar 8, 2024 Hmm I now see the delay is dynamic. I think thinking in terms of model iterations is simpler, but I suppose that this approach should be just as good. @tdoublep would it be possible for you to open source your benchmarking tool? All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Member Author tdoublep commented Mar 11, 2024 @Yard1 Yes - we do plan to open-source the benchmarking tool. We are working through that process internally at the moment. 👍 3 Yard1, ywang96, and richardliaw reacted with thumbs up emoji All reactions 👍 3 reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor sh1ng commented Mar 11, 2024 @tdoublep Which value of --scheduler-use-delay combined with --scheduler_reorder_window do you use? I believe the sum of them must be a constant. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Member Author tdoublep commented Mar 12, 2024 @sh1ng --scheduler-use-delay is a boolean option. If set to true, we apply a delay equal to half of the previous time for a prompt step (e.g., the delay is adaptive based on the workload). For the --scheduler_reorder_window we used a very large value (1000) to ensure that all of the requests in the waiting queue are sorted. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Member Author tdoublep commented Mar 15, 2024 Based on the discussion here it sounds like sorting the requests in the waiting queue will no longer be necessary once we merge #3236 which effectively removing padding constraints via 1D query. 
We have run additional experiments to compare the performance when using 1D query from #3236 , as well as to evaluate the performance if we enable the dynamic delay (from this PR) in combination with 1D query: Conclusion : combining dynamic scheduler delay ( #3279 ) with 1D query ( #3236 ) is even more effective than combining it with sorting requests by length ( #2357 ). 👍 1 njhill reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . tdoublep added 4 commits March 20, 2024 15:37 Add test for scheduler_use_delay a7b6735 move use_delay test to end 8f15973 Merge branch 'main' into scheduler-delay 8ef047a code formatting fd1e5da Copy link Member Author tdoublep commented Mar 20, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . Update: Added a test case in test_scheduler.py to cover use_delay option. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . tdoublep mentioned this pull request Mar 20, 2024 [1/n][Chunked Prefill] Refactor input query shapes #3236 Merged Resolve some conflicts with changes on main 69cda2a Copy link Member Author tdoublep commented Mar 21, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . Now that 1D query has been merged, the changes from this PR can be effective when applied on top of main branch. Here is latest round of benchmarking results. I've also included performance data collected using TGIS (our fork of TGI) as an additional reference point: Some conclusions here: We can see that introducing the scheduler delay dramatically improves the ITL when the inference server is under stress (>2x in some cases), and helps to close the performance gap to TGIS, which is better than vLLM in terms of ITL. The delay has the effect of processing larger batches of prompts, which worsens the TTFT a bit. However, we can see that the TTFT from vLLM after this change is still significantly better than TGIS (>10x in some cases). 👍 1 rkooo567 reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Yard1 reviewed Mar 21, 2024 View reviewed changes vllm/core/scheduler.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . tdoublep added 3 commits March 21, 2024 18:59 Factor delay logic into separate function ae28c43 Merge branch 'main' into scheduler-delay 2d2b8e0 Remove print in test 99b0d7d Copy link Collaborator Yard1 commented Mar 21, 2024 Looks good. I think it would be even better if we didn't hardcode it to 0.5. I think we could make the argument a float, and if it is <=0, we don't apply the delay. 👍 2 tdoublep and njhill reacted with thumbs up emoji All reactions 👍 2 reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Yard1 reviewed Mar 21, 2024 View reviewed changes vllm/core/scheduler.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . tdoublep added 2 commits March 21, 2024 19:25 Add some comments e1e3408 Changed use_delay (bool) to delay_factor (float) a114e74 Copy link Member Author tdoublep commented Mar 21, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . Looks good. I think it would be even better if we didn't hardcode it to 0.5. 
I think we could make the argument a float, and if it is <=0, we don't apply the delay. @Yard1 Good idea - there is no reason to assume that 0.5 an optimum for all scenarios. I've updated the code accordingly. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator richardliaw commented Mar 22, 2024 @Yard1 are you approving this PR? All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Yard1 approved these changes Mar 22, 2024 View reviewed changes Yard1 merged commit cf2f084 into vllm-project : main Mar 22, 2024 tdoublep deleted the scheduler-delay branch March 22, 2024 20:10 Copy link Member Author tdoublep commented Mar 22, 2024 @Yard1 thanks for the review and helpful discussion and suggestions. 🚀 1 Yard1 reacted with rocket emoji All reactions 🚀 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator rkooo567 commented Mar 22, 2024 @tdoublep Does vllm have a doc about configuration? Feel like it is worth adding it there if there is. I.e., there are config setttings to optimize throughput over latency, TTFT over ITL or the other way around. But it seems like things are not that well documented 👀 1 tdene reacted with eyes emoji All reactions 👀 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Member Author tdoublep commented Mar 25, 2024 @rkooo567 I agree it would be good to have documentation like that. The closest thing I can find the the developer documentation, e.g.: https://docs.vllm.ai/en/latest/dev/engine/llm_engine.html Perhaps we should consider adding some more pages there to documentation the ModelConfig , SchedulerConfig etc. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator rkooo567 commented Mar 25, 2024 I see. Yeah +1 we need better doc with configs, but it seems like there's no holistic page that explains this. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . dtrifiro mentioned this pull request May 15, 2024 bump ubi base image tag opendatahub-io/vllm#24 Merged Sign up for free to join this conversation on GitHub .
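A minimal sketch of the adaptive gate discussed in the thread, under assumed names (the real option is the scheduler's delay_factor; the bookkeeping fields here are made up): new prompts are only scheduled once delay_factor times the duration of the last prompt step has elapsed, and a non-positive factor disables the delay.

    import time

    class PromptDelayGate:
        def __init__(self, delay_factor: float):
            self.delay_factor = delay_factor   # e.g. 0.5; <= 0 disables the delay
            self.prev_prompt_start = 0.0
            self.last_prompt_latency = 0.0

        def passed_delay(self, now: float, has_running_seqs: bool) -> bool:
            if self.delay_factor <= 0 or not has_running_seqs:
                return True
            # With a factor of 0.5, at most roughly two-thirds of wall time can be
            # spent in prefill, leaving room for decode steps between prompt batches.
            return now > (self.prev_prompt_start
                          + self.delay_factor * self.last_prompt_latency)

        def record_prompt_step(self, start: float, end: float) -> None:
            self.prev_prompt_start = start
            self.last_prompt_latency = end - start

    gate = PromptDelayGate(delay_factor=0.5)
    start = time.monotonic()
    # ... run a prompt (prefill) step ...
    gate.record_prompt_step(start, time.monotonic())
    if gate.passed_delay(time.monotonic(), has_running_seqs=True):
        pass  # schedule the next batch of waiting prompts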
|
2025-09-07 17:49:22
|
9474e89ba4ecae253b585eb6b3e1d85f4e108f01
|
https://github.com/vllm-project/vllm/pull/3357
| false | true | false | true |
PERF: Throughput, Throughput, Throughput | TEST: test
|
Copy link Contributor ElizaWszola commented Mar 12, 2024 • edited Loading Uh oh! There was an error while loading. Please reload this page . The performance of block allocator went down after implementing automatic prefix caching, even when running with prefix caching disabled. This pr brings back parts of the old code and regains some of the lost performance in the scenario with disabled prefix caching. Benchmarked with: python benchmark_throughput_cache.py --backend vllm --model huggyllama/llama-7b --dataset ../data/ShareGPT_V3_unfiltered_cleaned_split.json --num-prompts 2000 Performance before introducing automatic prefix caching (commit baee28c ): Throughput: 10.37 requests/s, 5062.42 tokens/s Throughput: 10.46 requests/s, 5102.27 tokens/s Throughput: 10.47 requests/s, 5107.30 tokens/s Throughput: 10.48 requests/s, 5113.97 tokens/s Throughput: 10.53 requests/s, 5137.21 tokens/s Throughput: 10.54 requests/s, 5145.38 tokens/s Throughput: 10.56 requests/s, 5153.24 tokens/s Throughput: 10.57 requests/s, 5157.54 tokens/s Throughput: 10.63 requests/s, 5187.32 tokens/s Throughput: 10.65 requests/s, 5198.19 tokens/s Performance after introducing changes in this PR to commit ce4f5a2 : Throughput: 10.40 requests/s, 5076.05 tokens/s Throughput: 10.53 requests/s, 5137.97 tokens/s Throughput: 10.57 requests/s, 5156.04 tokens/s Throughput: 10.60 requests/s, 5173.07 tokens/s Throughput: 10.61 requests/s, 5177.02 tokens/s Throughput: 10.62 requests/s, 5179.91 tokens/s Throughput: 10.63 requests/s, 5186.06 tokens/s Throughput: 10.63 requests/s, 5186.63 tokens/s Throughput: 10.64 requests/s, 5193.72 tokens/s Throughput: 10.67 requests/s, 5207.76 tokens/s (OLD) Benchmark results (10 runs each): Performance before introducing automatic prefix caching (commit baee28c ): Throughput: 10.15 requests/s, 4909.50 tokens/s Throughput: 10.17 requests/s, 4918.22 tokens/s Throughput: 10.20 requests/s, 4936.93 tokens/s Throughput: 10.23 requests/s, 4949.76 tokens/s Throughput: 10.22 requests/s, 4945.64 tokens/s Throughput: 10.27 requests/s, 4967.08 tokens/s Throughput: 10.28 requests/s, 4971.52 tokens/s Throughput: 10.29 requests/s, 4980.92 tokens/s Throughput: 10.29 requests/s, 4976.94 tokens/s Throughput: 10.30 requests/s, 4982.69 tokens/s Performance after introducing automatic prefix caching (commit ce4f5a2 ): Throughput: 9.91 requests/s, 4795.14 tokens/s Throughput: 9.98 requests/s, 4830.01 tokens/s Throughput: 9.99 requests/s, 4832.00 tokens/s Throughput: 10.00 requests/s, 4839.62 tokens/s Throughput: 10.03 requests/s, 4851.13 tokens/s Throughput: 10.06 requests/s, 4868.87 tokens/s Throughput: 10.07 requests/s, 4873.87 tokens/s Throughput: 10.07 requests/s, 4872.51 tokens/s Throughput: 10.08 requests/s, 4876.18 tokens/s Throughput: 10.08 requests/s, 4877.26 tokens/s Performance after introducing changes in this PR to commit ce4f5a2 : Throughput: 10.07 requests/s, 4873.42 tokens/s Throughput: 10.17 requests/s, 4919.84 tokens/s Throughput: 10.18 requests/s, 4923.71 tokens/s Throughput: 10.18 requests/s, 4925.56 tokens/s Throughput: 10.19 requests/s, 4928.09 tokens/s Throughput: 10.20 requests/s, 4937.20 tokens/s Throughput: 10.21 requests/s, 4942.21 tokens/s Throughput: 10.21 requests/s, 4938.38 tokens/s Throughput: 10.21 requests/s, 4940.22 tokens/s Throughput: 10.22 requests/s, 4946.95 tokens/s Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 
👍 2 cadedaniel and rkooo567 reacted with thumbs up emoji All reactions 👍 2 reactions ElizaWszola added 8 commits March 6, 2024 13:10 Auto prefix performace fixes 2d2f5bb Small change to no-prefix-caching hashing 9468ce8 Pre-allocate token block list in no-cache scenario 83cd6ed Refactor block manager 4dd06e5 Clean up evictor, fix 20b7db8 Sage's feedback 690cc5e Merge branch 'upstream-main' into auto-prefix-perf 6e50143 format evictor 723e56b Copy link Member zhuohan123 commented Mar 12, 2024 cc @cadedaniel 👍 1 cadedaniel reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Fix tests fc9aebb cadedaniel reviewed Mar 13, 2024 View reviewed changes Copy link Collaborator cadedaniel left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Thanks for the PR! I am concerned that our test coverage of the block manager is not sufficient to allow for refactors w/o good tests. There's a few branches in this PR that are only for prefix caching, which adds a lot of complexity. Could you comment on what causes the performance degradation / improvement? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions cadedaniel self-assigned this Mar 13, 2024 zhuohan123 self-assigned this Mar 14, 2024 zhuohan123 reviewed Mar 14, 2024 View reviewed changes Copy link Member zhuohan123 left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Some random small comments. Will review in more detail! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/core/block_manager.py Outdated def free(self, block: PhysicalTokenBlock) -> None: pass @abstractproperty Copy link Member zhuohan123 Mar 13, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Should be abstract_method Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/core/block_manager.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/core/block_manager.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/core/block_manager.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . ElizaWszola and others added 3 commits March 14, 2024 14:23 Update vllm/core/block_manager.py … c2f74ef Co-authored-by: Zhuohan Li <[email protected]> Update vllm/core/block_manager.py … 17ffc2d Co-authored-by: Zhuohan Li <[email protected]> Update vllm/core/block_manager.py … c383bac Co-authored-by: Zhuohan Li <[email protected]> Copy link Contributor Author ElizaWszola commented Mar 14, 2024 @cadedaniel I can think up some tests to add. Is there anything that you would like to be tested specifically? 
As for the performance gap that still exists, I'm not sure about it because the non-cached codepath is currently very similar to what had been there before the original auto prefix commit. I'm still poking around. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Feedback, one more small modification eaa1fb3 Copy link Contributor Author ElizaWszola commented Mar 14, 2024 Good news, I've found a small bug and redid some of the benchmarks: the performance looks similar to the old one, but I'd be happy if more people can verify. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Merge branch 'upstream-main' into auto-prefix-perf 29f9414 ElizaWszola mentioned this pull request Mar 15, 2024 [PREFIX CACHING FOLLOW UP] OrderedDict-based evictor #3431 Merged ElizaWszola changed the title A bunch of fixes to block allocator performance when automatic prefix caching is disabled [PREFIX CACHING FOLLOW UP] A bunch of fixes to block allocator performance when automatic prefix caching is disabled Mar 15, 2024 AllenDou reviewed Mar 18, 2024 View reviewed changes vllm/core/evictor.py if block.num_hashed_tokens == highest_num_hashed_tokens: if (block.last_accessed < evicted_block.last_accessed or block.last_accessed == evicted_block.last_accessed and block.num_hashed_tokens > evicted_block.num_hashed_tokens): Copy link Contributor AllenDou Mar 18, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I have also optimized the evictor LRU, but after learning more about evictors, I feel that LRU is unnecessary as it is not as efficient as the random policy. So, in my opinion, LRU policy should be removed. cc @cadedaniel Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Contributor Author ElizaWszola Mar 18, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment The changes in this PR improve LRU evictor efficiency marginally. I'm ok with removing them from this PR, especially when a better way to improve LRU evictor efficiency (bringing it to the level roughly on par with random evictor for the tested cases) is implemented here: #3431 Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions zhuohan123 approved these changes Mar 19, 2024 View reviewed changes Copy link Member zhuohan123 left a comment • edited Loading Uh oh! There was an error while loading. Please reload this page . There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment LGTM! Thanks for the fix and left some small comments. Regarding @cadedaniel 's comment on tests, let's discuss more offline together and figure out what tests we need to write. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/core/block_manager.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. 
Please reload this page . vllm/core/block_manager.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/core/block_manager.py else: # Set the reference counts of the token blocks. block.ref_count = seq_group.num_seqs() elif self.enable_caching: Copy link Member zhuohan123 Mar 19, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Does prefix caching work with sliding window now? Should we explicitly check somewhere that if we enable caching, sliding window should not be enabled. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Contributor Author ElizaWszola Mar 19, 2024 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment The prefix caching functionality is simply not used when we have sliding windows. We have specific checks for that in different places in the code. Putting it in a more central place sounds like a better idea though, and less confusing. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/core/block_manager.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/core/block_manager.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . zhuohan123 added
the action-required label Mar 19, 2024 ElizaWszola and others added 4 commits March 19, 2024 13:06 Update vllm/core/block_manager.py … 65b8213 Co-authored-by: Zhuohan Li <[email protected]> Update vllm/core/block_manager.py … 1fc91bb Co-authored-by: Zhuohan Li <[email protected]> Update vllm/core/block_manager.py … e39ae06 Co-authored-by: Zhuohan Li <[email protected]> Update vllm/core/block_manager.py … af1285f Co-authored-by: Zhuohan Li <[email protected]> ElizaWszola added 2 commits March 19, 2024 08:38 format, disallow sliding window with prefix caching 6c96014 Merge branch 'upstream-main' into auto-prefix-perf c4b69ab Copy link Member zhuohan123 commented Mar 19, 2024 @ElizaWszola Please let me know when this PR is ready to be merged! All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . zhuohan123 approved these changes Mar 20, 2024 View reviewed changes Copy link Member zhuohan123 left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment LGTM! Thanks for the fix! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions zhuohan123 enabled auto-merge (squash) March 20, 2024 07:11 zhuohan123 disabled auto-merge March 20, 2024 07:11 zhuohan123 merged commit 9474e89 into vllm-project : main Mar 20, 2024 Sign up for free to join this conversation on GitHub .
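For reference, a minimal sketch of the OrderedDict-based LRU eviction idea raised in the review thread above and pursued in the follow-up #3431. The class and method names are illustrative only, not vLLM's actual vllm/core/evictor.py interface, and the tie-breaking on num_hashed_tokens from the quoted snippet is omitted for brevity:
```
from collections import OrderedDict


class LRUEvictorSketch:
    """Keep free blocks in access order so that evicting the least
    recently used block is O(1) instead of a linear scan over all
    free blocks on every allocation."""

    def __init__(self):
        # block id -> block, ordered from least to most recently used.
        self.free_table = OrderedDict()

    def add(self, block_id, block):
        # A newly freed block becomes the most recently used entry.
        self.free_table[block_id] = block
        self.free_table.move_to_end(block_id)

    def evict(self):
        if not self.free_table:
            raise ValueError("No free blocks are available to evict")
        # Pop from the front: the least recently accessed block.
        block_id, block = self.free_table.popitem(last=False)
        return block_id, block

    def remove(self, block_id):
        # Called when a cached block is reused instead of being evicted.
        return self.free_table.pop(block_id)
```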
|
2025-09-07 17:49:25
|
21d93c140d0a97af5f0c59e660cf04bd417fd424
|
https://github.com/vllm-project/vllm/pull/2090
| true | true | false | false |
LM_EVAL: MMLU, Humaneval | PERF: throughput, throughput, throughput
|
Copy link Collaborator Yard1 commented Dec 13, 2023 • edited Loading Uh oh! There was an error while loading. Please reload this page . This PR implements a more efficient parallelism scheme for the Mixtral model. Instead of sharding the layers of each expert by rank, we instead shard whole experts across ranks. This gives us several benefits: We reduce the amount of communication between ranks We do not require megablocks (meaning we can now support non-CUDA accelerators) The operations become more efficient and CUDA-graphable. In the new design, each expert will conduct a dense matrix multiplication of the whole batch, and then rows not assigned to the expert will be zeroed out before accumulation. This results in a slight inefficiency for tensor parallel sizes below the number of experts - it means that we will essentially always do the upper performance bound computation. However, we have not found this to be an issue in practice. A potential improvement would be to use a sparse/grouped GEMM kernel (at least for prefill - for decode it shouldn't matter). We have benchmarked this change and found that it lowers the e2e latency for Mixtral by 4x-5x on A100-40GB TP8 compared to the previous implementation. Furthermore, the PR refactors the Mixtral model for compatibility with Hugging Face format and safetensor weights, and adds quantization support. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 🎉 19 WoosukKwon, pcmoritz, scv119, theycallmeloki, kevinhu, esmeetu, binarycrayon, liangfu, RobPruzan, JCRPaquin, and 9 more reacted with hooray emoji 👀 10 luiscape, nateraw, pcmoritz, kevinhu, liangfu, 152334H, RobPruzan, pierrestock, L1aoXingyu, and bernaferrari reacted with eyes emoji All reactions 🎉 19 reactions 👀 10 reactions Yard1 added 3 commits December 13, 2023 14:17 Cleanup a6267bd Revert "Update Dockerfile to build Megablocks ( vllm-project#2042 )" … 804bccb This reverts commit 3fefe27 . Revert "Update Dockerfile to support Mixtral ( vllm-project#2027 )" … d96ba1c This reverts commit eb17212 . Yard1 requested review from zhuohan123 , simon-mo and WoosukKwon December 13, 2023 22:23 This was referenced Dec 13, 2023 Mixtral tokens-per-second slower than expected, 10 tps #2069 Closed Support Mixtral's safetensors weights #2041 Closed WoosukKwon linked an issue Dec 13, 2023 that may be closed by this pull request Support Mixtral's safetensors weights #2041 Closed Copy link Collaborator WoosukKwon commented Dec 13, 2023 • edited Loading Uh oh! There was an error while loading. Please reload this page . Hi @Yard1 , thanks for the amazing work! I've just tested the PR on examples/llm_engine_example.py and got the following results: Current main INFO 12-13 23:31:55 llm_engine.py:222] # GPU blocks: 86172, # CPU blocks: 8192
INFO 12-13 23:31:58 llm_engine.py:649] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%
RequestOutput(request_id=0, prompt='A robot may not injure a human being', prompt_token_ids=[1, 330, 18401, 993, 459, 5891, 482, 264, 2930, 1250], prompt_logprobs=[None, {330: -9.912246704101562, 22478: -0.7872462272644043}, {18401: -8.597543716430664, 633: -3.347543478012085}, {993: -4.565238952636719, 369: -2.1902387142181396}, {459: -0.4373227059841156}, {5891: -0.4258776903152466}, {482: -3.099436753473128e-06}, {264: -0.0011317284079268575}, {2930: -0.0006484074983745813}, {1250: -0.009901456534862518}], outputs=[CompletionOutput(index=0, text=' or, through inaction, allow a human being to come to harm.\n', token_ids=[442, 28725, 1059, 297, 1774, 28725, 1914, 264, 2930, 1250, 298, 1567, 298, 6241, 28723, 13], cumulative_logprob=-0.6244106972517329, logprobs=[{442: -0.017248855903744698}, {28725: -0.002303091809153557}, {1059: -0.0011830481234937906}, {297: -0.00041952868923544884}, {1774: -7.164221460698172e-05}, {28725: -0.0003152588615193963}, {1914: -0.0006347072194330394}, {264: -0.0005576247931458056}, {2930: -0.00010775939153973013}, {1250: -0.0015303102554753423}, {298: -0.0005830018781125546}, {1567: -0.0004058252670802176}, {298: -0.0002112165529979393}, {6241: -0.0003516055876389146}, {28723: -0.03660520166158676}, {13: -0.5618820190429688}], finish_reason=length)], finished=True)
RequestOutput(request_id=1, prompt='To be or not to be,', prompt_token_ids=[1, 1791, 347, 442, 459, 298, 347, 28725], prompt_logprobs=None, outputs=[CompletionOutput(index=0, text=' that is the question.\nWhether ’tis nobler in the mind', token_ids=[369, 349, 272, 2996, 28723, 13, 23842, 620, 24978, 28707, 278, 7169, 1523, 297, 272, 2273], cumulative_logprob=-5.713744854774632, logprobs=None, finish_reason=length)], finished=True)
RequestOutput(request_id=2, prompt='What is the meaning of life?', prompt_token_ids=[1, 1824, 349, 272, 5746, 302, 1411, 28804], prompt_logprobs=None, outputs=[CompletionOutput(index=0, text='\n\nThe meaning of life is the question of the purpose and significance of life', token_ids=[13, 13, 1014, 5746, 302, 1411, 349, 272, 2996, 302, 272, 6032, 304, 18309, 302, 1411], cumulative_logprob=-8.794605396687984, logprobs=None, finish_reason=length), CompletionOutput(index=3, text=' It’s a question that’s been asked by philosophers, theolog', token_ids=[661, 28809, 28713, 264, 2996, 369, 28809, 28713, 750, 2261, 486, 8829, 404, 28725, 272, 1165], cumulative_logprob=-9.33446236141026, logprobs=None, finish_reason=length)], finished=True)
RequestOutput(request_id=3, prompt='It is only with the heart that one can see rightly', prompt_token_ids=[1, 661, 349, 865, 395, 272, 3031, 369, 624, 541, 1032, 1103, 346], prompt_logprobs=None, outputs=[CompletionOutput(index=0, text='; what is essential is invisible to the eye.\n\nAntoine de Saint', token_ids=[28745, 767, 349, 7974, 349, 20187, 298, 272, 5421, 28723, 13, 13, 13389, 21265, 340, 6393], cumulative_logprob=-2.537341303512221, logprobs=None, finish_reason=length), CompletionOutput(index=1, text='; what is essential is invisible to the eye. Antoine de Saint-Ex', token_ids=[28745, 767, 349, 7974, 349, 20187, 298, 272, 5421, 28723, 3821, 21265, 340, 6393, 28733, 966], cumulative_logprob=-2.979412608925486, logprobs=None, finish_reason=length), CompletionOutput(index=2, text='; what is essential is invisible to the eye. – Antoine de Saint-', token_ids=[28745, 767, 349, 7974, 349, 20187, 298, 272, 5421, 28723, 764, 3821, 21265, 340, 6393, 28733], cumulative_logprob=-3.1470024501613807, logprobs=None, finish_reason=length)], finished=True) This PR INFO 12-13 23:20:14 llm_engine.py:222] # GPU blocks: 57756, # CPU blocks: 8192
INFO 12-13 23:20:17 llm_engine.py:649] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%
RequestOutput(request_id=0, prompt='A robot may not injure a human being', prompt_token_ids=[1, 330, 18401, 993, 459, 5891, 482, 264, 2930, 1250], prompt_logprobs=[None, {330: -9.617840766906738, 12: -1.7154966592788696}, {18401: -8.787067413330078, 330: -2.7558176517486572}, {993: -4.204432010650635, 349: -2.0169320106506348}, {459: -0.3415136933326721}, {5891: -1.0073399543762207, 6241: -0.5073400139808655}, {482: -1.3708974620385561e-05}, {264: -0.1135331317782402}, {2930: -0.002309514442458749}, {1250: -0.016736455261707306}], outputs=[CompletionOutput(index=0, text=', or, more importantly, a robot may not kill a human being.\n', token_ids=[28725, 442, 28725, 680, 21485, 28725, 264, 18401, 993, 459, 4015, 264, 2930, 1250, 28723, 13], cumulative_logprob=-7.275053498335183, logprobs=[{28725: -0.16343587636947632}, {442: -0.21259483695030212}, {28725: -0.1041431725025177}, {680: -1.0776935815811157}, {21485: -0.2229764610528946}, {28725: -0.01339601818472147}, {264: -1.1102567911148071}, {18401: -0.1942392736673355}, {993: -0.3014945983886719}, {459: -0.05710757523775101}, {4015: -1.3823846578598022}, {264: -0.5338531732559204}, {2930: -0.08587013930082321}, {1250: -0.040455106645822525}, {28723: -0.33675137162208557}, {13: -1.4384008646011353}], finish_reason=length)], finished=True)
RequestOutput(request_id=1, prompt='To be or not to be,', prompt_token_ids=[1, 1791, 347, 442, 459, 298, 347, 28725], prompt_logprobs=None, outputs=[CompletionOutput(index=0, text=' that is the question of every person’s life.\nTo live, to', token_ids=[369, 349, 272, 2996, 302, 1012, 1338, 28809, 28713, 1411, 28723, 13, 1551, 2943, 28725, 298], cumulative_logprob=-19.226570382204045, logprobs=None, finish_reason=length)], finished=True)
RequestOutput(request_id=2, prompt='What is the meaning of life?', prompt_token_ids=[1, 1824, 349, 272, 5746, 302, 1411, 28804], prompt_logprobs=None, outputs=[CompletionOutput(index=0, text='\n\nThe meaning of life is the meaning of one’s life. That', token_ids=[13, 13, 1014, 5746, 302, 1411, 349, 272, 5746, 302, 624, 28809, 28713, 1411, 28723, 1725], cumulative_logprob=-16.94498591311276, logprobs=None, finish_reason=length), CompletionOutput(index=4, text='\n\nThis question was often asked in the ancient and modern days.\n\n', token_ids=[13, 13, 3260, 2996, 403, 2608, 2261, 297, 272, 9467, 304, 4638, 2202, 28723, 13, 13], cumulative_logprob=-28.903032392263412, logprobs=None, finish_reason=length)], finished=True)
RequestOutput(request_id=3, prompt='It is only with the heart that one can see rightly', prompt_token_ids=[1, 661, 349, 865, 395, 272, 3031, 369, 624, 541, 1032, 1103, 346], prompt_logprobs=None, outputs=[CompletionOutput(index=1, text='; the\nreasonable world does not know in unces.\n\nHow', token_ids=[28745, 272, 13, 14991, 522, 1526, 1235, 459, 873, 297, 521, 1377, 28723, 13, 13, 5660], cumulative_logprob=-11.229475471191108, logprobs=None, finish_reason=length), CompletionOutput(index=0, text='; the\nreasonable world does not know in unces, what the\n', token_ids=[28745, 272, 13, 14991, 522, 1526, 1235, 459, 873, 297, 521, 1377, 28725, 767, 272, 13], cumulative_logprob=-11.59051242750138, logprobs=None, finish_reason=length), CompletionOutput(index=2, text='; the\nreasonable world does not know in unces.\n\nFor', token_ids=[28745, 272, 13, 14991, 522, 1526, 1235, 459, 873, 297, 521, 1377, 28723, 13, 13, 2565], cumulative_logprob=-11.729475471191108, logprobs=None, finish_reason=length)], finished=True) In summary, 1) the results do not match; I feel the current main's output looks more correct, and 2) There's a huge decrease in allocated the KV cache size. Does this mean that this implementation has very high memory overhead? All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator Author Yard1 commented Dec 13, 2023 Thanks, let me check! All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator Author Yard1 commented Dec 13, 2023 FYI we have ran MMLU and recieved extremely close results for both implementations. I feel like the divergence may be due to floating point operations, but I will see if it's possible to reduce it. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Member esmeetu commented Dec 14, 2023 Hi @Yard1 ,which model do you use? I tried this PR with https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1 and it doesn't work. It will throw KeyError: 'tok_embeddings.weight'. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Member esmeetu commented Dec 14, 2023 • edited Loading Uh oh! There was an error while loading. Please reload this page . Hi @Yard1 ,which model do you use? I tried this PR with https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1 and it doesn't work. It will throw KeyError: 'tok_embeddings.weight'. I found that i only download .pt weights without .safetensors. Doesn't this PR support .pt format? And Do you know how to convert pt to safetensors without download again? All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator Author Yard1 commented Dec 14, 2023 @esmeetu There is a divergence between pt and safetensors weights uploaded to huggingface hub (they use different layer names). You can use this script to convert pt to safetensors - https://github.com/huggingface/transformers/blob/v4.36.0/src/transformers/models/mixtral/convert_mixtral_weights_to_hf.py All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Alternative approach aee7762 Copy link Collaborator Author Yard1 commented Dec 14, 2023 @WoosukKwon I have updated the PR using an alternative approach that should both reduce memory usage and numerical inaccuracies. 
PTAL! 👍 1 WoosukKwon reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Yard1 added 2 commits December 13, 2023 19:25 Tweak 14f0d67 Go back to dense 02d2c04 Copy link Member esmeetu commented Dec 14, 2023 • edited Loading Uh oh! There was an error while loading. Please reload this page . @Yard1 Thanks. I converted .pt format weights to .bin format weights. And this PR gives me x2 speedup(6t/s -> 12t/s). Thanks for your great work! Besides i compared Humaneval score on that model. And the result(50.6) is better than main branch(49.4). Another thing, i found the GPU utilization ratio is about 80% when model running. It seems that there is more space to improve performance. 👍 4 pcmoritz, Yard1, WoosukKwon, and theycallmeloki reacted with thumbs up emoji All reactions 👍 4 reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Cleanup 00ad1b7 nivibilla mentioned this pull request Dec 14, 2023 Timeline on supporting Mixtral on ROCm? #2089 Closed Copy link Collaborator WoosukKwon commented Dec 14, 2023 @Yard1 The outputs after the fix look good to me! Many thanks for the quick fix! All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . WoosukKwon self-assigned this Dec 14, 2023 WoosukKwon reviewed Dec 14, 2023 View reviewed changes Copy link Collaborator WoosukKwon left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment @Yard1 Thanks for submitting the PR! The code looks really great overall. I'm just wondering why we need DummyModule . Please check out my comments. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . ❤️ 1 nittaya111 reacted with heart emoji All reactions ❤️ 1 reaction vllm/config.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . Dockerfile Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/model_executor/models/mixtral.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/model_executor/models/mixtral.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/model_executor/models/mixtral.py config.hidden_size, config.intermediate_size, linear_method=linear_method) if idx in self.expert_indicies else DummyModule() Copy link Collaborator WoosukKwon Dec 14, 2023 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I actually didn't understand why we need DummyModule here. Could you elaborate more on this? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author Yard1 Dec 14, 2023 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . 
Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment The whole purpose of the dummy module is so that we can discard weights for experts we do not want to load on a given rank. If you have a better way of doing that, please let me know! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator WoosukKwon Dec 14, 2023 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Sorry, why can't we just use None ? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author Yard1 Dec 14, 2023 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Isn't that going to cause exceptions during weights loading? If not then we should definitely use None Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Contributor liangfu Dec 14, 2023 • edited Loading Uh oh! There was an error while loading. Please reload this page . There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Instead of adding a placeholder as DummpyModule , construct self.experts as for a list of experts in local_rank? For instance, with num_local_experts=8, tp_size=4, expert_indicies=[0,1], construct self.experts with first two experts and make the ModuleList short? Since gating network is replicated, getting access to routing_weights locally in each rank should be easy, right? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author Yard1 Dec 14, 2023 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment It's moreso about how to make this compatible with vLLM's TP weight loading logic, which uses list indices Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator WoosukKwon Dec 14, 2023 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Let's merge the PR for the release and fix this issue in another PR. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions WoosukKwon mentioned this pull request Dec 14, 2023 Bump up to v0.2.5 #2095 Merged liangfu reviewed Dec 14, 2023 View reviewed changes vllm/model_executor/models/mixtral.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . 
Yard1 added 3 commits December 13, 2023 21:54 Remove fschat 90a9fb0 Fix top_k 1b744e2 ROCM a5c7da4 WoosukKwon added 2 commits December 14, 2023 07:47 Warning for pt weights ea91f03 Fix ROCm supported model doc 39aaf15 WoosukKwon approved these changes Dec 14, 2023 View reviewed changes Copy link Collaborator WoosukKwon left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment LGTM! Many thanks for the great work! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . ❤️ 1 nittaya111 reacted with heart emoji 🚀 2 Yard1 and ArthurZucker reacted with rocket emoji All reactions ❤️ 1 reaction 🚀 2 reactions Copy link Collaborator WoosukKwon commented Dec 14, 2023 @liangfu Thanks for the review! ❤️ 2 Yard1 and nittaya111 reacted with heart emoji All reactions ❤️ 2 reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . WoosukKwon merged commit 21d93c1 into vllm-project : main Dec 14, 2023 Yard1 deleted the mixtral_expert_parallelism branch December 14, 2023 08:01 This was referenced Dec 14, 2023 performance of Mixtral-8x7B inference #2098 Closed Refactor Mixtral to reuse code from MegaBlocks #2032 Closed tgale96 reviewed Dec 14, 2023 View reviewed changes vllm/model_executor/models/mixtral.py else: final_hidden_states.add_(current_hidden_states) return tensor_model_parallel_all_reduce(final_hidden_states).view( Copy link tgale96 Dec 14, 2023 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Hi! I'm curious to understand what's going on in this implementation. The PR calls this expert parallelism but it still looks like tensor parallelism to me? At least, if this is expert parallelism, I don't see any logic routing the tokens to the device that owns the expert it was assigned to? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author Yard1 Dec 14, 2023 • edited Loading Uh oh! There was an error while loading. Please reload this page . There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment We run every expert on the rank and zero out the rows that were not selected to be used by the expert. We then all reduce the tensors across the ranks. This results in dense computations (and higher memory usage), but it dramatically reduces latency, especially for small batch sizes. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link tgale96 Dec 15, 2023 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Ah ok, thank you! I was confused by the "expert parallelism" in the PR name, which I think is a misnomer here :) The prior implementation with MegaBlocks was using a training-optimized code path. 
I'd expect it to be very inefficient because a) it pads each expert batch to the nearest multiple of 128 and b) dispatches to sparse matmul kernels which use tile dimensions tuned for large problems. For inference, its much better to use our grouped implementation, which avoids these pitfalls. Essentially what is in the function here . Our gather/scatter kernels handle replication for top_k>1 as well as the permutation to group tokens by expert assignment. They're also written in Triton so they should work fine on AMD. For the MLP, we dispatch to custom grouped GEMM ops, but you can also use a pure-Torch grouped MLP like what's happening in this PR to make it AMD compatible. This is the direction I'd go to improve the current implementation further, fwiw. You don't necessarily need to add MegaBlocks as a dep - most of this can be replicated without too much complexity. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author Yard1 Dec 15, 2023 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Thanks for the explanation! I definitely agree there is a lot of room to expand here. Looking forward to more contributions from you or the community! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link tom-doerr commented Dec 14, 2023 Of we run all experts anyway, how about using more of the results? https://www.reddit.com/r/LocalLLaMA/comments/18i2h4c/mixtral_gets_even_better_by_just_adding_an_expert/ 👍 1 NickLucche reacted with thumbs up emoji ❤️ 1 nittaya111 reacted with heart emoji All reactions 👍 1 reaction ❤️ 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link vibhuagrawal14 commented Dec 14, 2023 For me, the speed has increased from 11 tok/s to 30+ 🚀 🎉 4 scv119, pcmoritz, tom-doerr, and TissueC reacted with hooray emoji ❤️ 1 nittaya111 reacted with heart emoji All reactions 🎉 4 reactions ❤️ 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 0xymoro mentioned this pull request Dec 15, 2023 Mixtral optimization from vllm NVIDIA/TensorRT-LLM#672 Closed xjpang pushed a commit
to xjpang/vllm
that referenced
this pull request Dec 18, 2023 Optimize Mixtral with expert parallelism ( vllm-project#2090 ) f49edbe timohear mentioned this pull request Feb 1, 2024 Mixtral nf4 performance 2x slower than expected huggingface/text-generation-inference#1501 Closed 4 tasks hongxiayang pushed a commit
to hongxiayang/vllm
that referenced
this pull request Feb 13, 2024 Optimize Mixtral with expert parallelism ( vllm-project#2090 ) bc7486b Sign up for free to join this conversation on GitHub .
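For readers following the sharding discussion above, here is a rough single-process sketch of the scheme Yard1 describes: each rank runs every expert it owns densely over the whole batch, rows the router did not assign to that expert are zeroed through the masked routing weight, and the per-rank partial sums are then combined with an all-reduce. The function and argument names are illustrative and this is not vLLM's actual mixtral.py code:
```
import torch


def expert_parallel_moe_sketch(hidden_states, gate, experts, local_expert_ids,
                               top_k=2):
    # hidden_states: [num_tokens, hidden_size]; gate: router producing
    # [num_tokens, num_experts] logits; experts: the expert MLPs that are
    # resident on this rank, indexed by expert id.
    router_logits = gate(hidden_states)
    routing_weights = torch.softmax(router_logits, dim=-1, dtype=torch.float)
    topk_weights, topk_ids = routing_weights.topk(top_k, dim=-1)
    topk_weights = topk_weights / topk_weights.sum(dim=-1, keepdim=True)

    final_hidden_states = torch.zeros_like(hidden_states)
    for expert_id in local_expert_ids:
        # Per-token weight of this expert; exactly zero for tokens the
        # router did not send here, so those rows contribute nothing.
        mask = (topk_ids == expert_id)
        weight = (topk_weights * mask).sum(dim=-1, keepdim=True)
        expert_out = experts[expert_id](hidden_states)
        final_hidden_states += weight.to(hidden_states.dtype) * expert_out

    # With tensor parallelism the per-rank partial results are then summed,
    # e.g. final_hidden_states = tensor_model_parallel_all_reduce(final_hidden_states)
    return final_hidden_states
```
This is the dense upper-bound computation the PR description mentions; a grouped or sparse GEMM, as suggested later in the thread, would avoid the wasted work for tensor parallel sizes below the number of experts.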
|
2025-09-07 17:49:30
|
ec3b5ce9ccb4262194a16a8b1c31ffd6b3b824b9
|
https://github.com/vllm-project/vllm/pull/1338
| false | false | false | false |
NONE
|
Copy link Collaborator Yard1 commented Oct 13, 2023 Two main changes: if we are using a fast tokenizer, we do not enter the slow _convert_tokens_to_string_with_added_encoders loop as the fast tokenizers do not use it in base transformers Use cached properties for added_tokens_encoder and all_special_tokens . Those 2 changes improved detokenization speed for 4096 tokens from 13ms to 2ms. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . ❤️ 1 WoosukKwon reacted with heart emoji All reactions ❤️ 1 reaction Improve detokenization performance 09e8491 Copy link Collaborator Author Yard1 commented Oct 13, 2023 cc @WoosukKwon @zhuohan123 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . WoosukKwon self-requested a review October 13, 2023 16:43 WoosukKwon approved these changes Oct 13, 2023 View reviewed changes Copy link Collaborator WoosukKwon left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment @Yard1 LGTM! Thanks for the contribution! This resolves the performance degradation after upgrading tokenizers. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/transformers_utils/tokenizer.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . WoosukKwon merged commit ec3b5ce into vllm-project : main Oct 13, 2023 Yard1 deleted the use_fast_tokenizer branch October 13, 2023 17:09 hongxiayang pushed a commit
to hongxiayang/vllm
that referenced
this pull request Feb 13, 2024 Improve detokenization performance ( vllm-project#1338 ) 69ae127 Sign up for free to join this conversation on GitHub .
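A small illustrative sketch of the two ideas in this change: bypass the slow added-tokens loop when a fast tokenizer is in use, and cache tokenizer properties that are otherwise recomputed on every call. The wrapper class below is hypothetical and only stands in for the actual vllm/transformers_utils/tokenizer.py patch:
```
from functools import cached_property


class CachedTokenizerSketch:

    def __init__(self, tokenizer):
        # tokenizer: a Hugging Face PreTrainedTokenizer(Fast) instance.
        self._tokenizer = tokenizer

    @cached_property
    def all_special_tokens(self):
        # Computed once instead of on every detokenization step.
        return set(self._tokenizer.all_special_tokens)

    @cached_property
    def added_tokens_encoder(self):
        return dict(self._tokenizer.get_added_vocab())

    def convert_tokens_to_string(self, tokens):
        if self._tokenizer.is_fast or not self.added_tokens_encoder:
            # Fast tokenizers handle added tokens internally, so the slow
            # token-by-token reconstruction loop can be skipped entirely.
            return self._tokenizer.convert_tokens_to_string(tokens)
        # Slow path: a rough stand-in for
        # _convert_tokens_to_string_with_added_encoders, which converts runs
        # of ordinary tokens in bulk and splices added tokens in verbatim.
        pieces, run = [], []
        for tok in tokens:
            if tok in self.added_tokens_encoder or tok in self.all_special_tokens:
                if run:
                    pieces.append(self._tokenizer.convert_tokens_to_string(run))
                    run = []
                pieces.append(tok)
            else:
                run.append(tok)
        if run:
            pieces.append(self._tokenizer.convert_tokens_to_string(run))
        return " ".join(pieces)
```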
|
2025-09-07 17:49:32
|
c45f3c3ab60f4bf4eaab791a76028b8b07ffe9bd
|
https://github.com/vllm-project/vllm/pull/17
| false | true | false | true |
PERF: latency, latency, Profile | TEST: Test, CI
|
Copy link Member zhuohan123 commented Mar 31, 2023 Speed before this PR: ubuntu@ray-zhuohan-cf-head-d95da8d2-compute:~/nfs/cacheflow/cacheflow$ python benchmark/benchmark_latency.py --model facebook/opt-13b
Namespace(batch_size=8, block_size=8, dtype='half', input_len=32, max_batch_size=2560, model='facebook/opt-13b', model_path='~/.cacheflow/model_weights', output_len=128, pipeline_parallel_size=1, seed=0, swap_space=20, tensor_parallel_size=1)
2023-03-31 14:17:41,580 INFO worker.py:1535 -- Started a local Ray instance. View the dashboard at http://127.0.0.1:8266
# GPU blocks: 1975, # CPU blocks: 3276
Warm up step
Profile step: 100%|██████████████████████████████████████████████████████████████| 3/3 [00:15<00:00, 5.18s/it]
Avg latency: 5.184098243713379 seconds Speed after this PR: ubuntu@ray-zhuohan-cf-head-d95da8d2-compute:~/nfs/cacheflow/cacheflow$ python benchmark/benchmark_latency.py --model facebook/opt-13b
Namespace(batch_size=8, block_size=8, dtype='half', input_len=32, max_batch_size=2560, model='facebook/opt-13b', model_path='~/.cacheflow/model_weights', output_len=128, pipeline_parallel_size=1, seed=0, swap_space=20, tensor_parallel_size=1)
2023-03-31 15:20:04,885 INFO worker.py:1535 -- Started a local Ray instance. View the dashboard at http://127.0.0.1:8266
# GPU blocks: 1975, # CPU blocks: 3276
Warm up step
Profile step: 100%|██████████████████████████████████████████████████████████████| 3/3 [00:10<00:00, 3.49s/it]
Avg latency: 3.492198626200358 seconds Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions zhuohan123 added 2 commits March 31, 2023 15:25 Optimize tensor parallel execution speed a32f244 add more files c3e6bce zhuohan123 requested a review
from WoosukKwon March 31, 2023 15:32 WoosukKwon approved these changes Mar 31, 2023 View reviewed changes Copy link Collaborator WoosukKwon left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Awesome! Thanks for the effort. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions WoosukKwon reviewed Mar 31, 2023 View reviewed changes benchmark/benchmark_latency.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . nit 2bea93e zhuohan123 merged commit c45f3c3 into main Mar 31, 2023 zhuohan123 deleted the optimize-tp-speed branch June 18, 2023 07:22 shanshanpt mentioned this pull request Nov 17, 2023 Run long conetxt error : CUDA error: an illegal memory access was encountered #1700 Closed junior-zsy mentioned this pull request Nov 20, 2023 Error with 32k Long Text in chatglm2-6b-32k Model #1725 Closed hongxiayang pushed a commit
to hongxiayang/vllm
that referenced
this pull request Feb 13, 2024 Optimize tensor parallel execution speed ( vllm-project#17 ) ad3d36f AdrianAbeyta referenced
this pull request
in ROCm/vllm Mar 8, 2024 Merge pull request #17 from ROCm/IFU-2024-03-01-fp8-kv … b3d81e0 Rebase fp8_kv branch with upstream (3-07-2024) z103cb referenced
this pull request
in z103cb/opendatahub_vllm Apr 22, 2024 Compile kernels and fix build ( opendatahub-io#17 ) … 15076fa These Dockerfile changes:
- Update the release stage to work with the recently refactored
`requirements-common.txt` / `requirements-cuda.txt` split
- Fixup the kernel compilation in the `build` stage to correctly pick up
cuda
- Install the kernels from this docker build rather than pulling a
precompiled wheel. We can swap that back once a new wheel is available
with the correct pytorch version + updated interfaces
---------
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Co-authored-by: Joe Runde <[email protected]> fxmarty pushed a commit
to fxmarty/vllm-public
that referenced
this pull request May 31, 2024 Merge pull request vllm-project#17 from ROCm/triton-config-fix … bebcbe6 [ROCm] adding a missing triton autotune config alixiaodi mentioned this pull request Aug 2, 2024 [Bug]: #7072 Closed SpaceHunterInf mentioned this pull request Sep 30, 2024 [Bug]: Bus error (core dumped) #8974 Closed 1 task wuhuikx pushed a commit
to wuhuikx/vllm
that referenced
this pull request Mar 27, 2025 [Platform] add dispatch key ( vllm-project#17 ) … dd425d6 ### What this PR does / why we need it?
Add dispatch key for NPU, so that the log could be print correctly.
Now
```
executor_base.py:110] # CPU blocks: 220478, # CPU blocks: 21845
```
After this pr
```
executor_base.py:110] # NPU blocks: 220478, # CPU blocks: 21845
```
### Does this PR introduce _any_ user-facing change?
N/A
### How was this patch tested?
CI passed and log printed as above
Signed-off-by: MengqingCao <[email protected]> hao-cold mentioned this pull request May 13, 2025 [Bug]: CUDA error: an illegal instruction was encountered #18045 Open 1 task markmc mentioned this pull request May 21, 2025 [Bug][Failing Test]: Distributed Comm Ops - distributed/test_shm_broadcast.py #18492 Closed 1 task zerosurplus mentioned this pull request Jun 16, 2025 [Bug]: torch.distributed.DistNetworkError: The client socket has timed out after 600000ms while trying to connect to (172.17.0.9, 46229). #19670 Open 1 task robertgshaw2-redhat added a commit
that referenced
this pull request Jul 7, 2025 Merge pull request #17 from praveingk/batching … 39e6bd1 Load balance across multiple workers xiaomofang mentioned this pull request Jul 31, 2025 [Bug]: There is an issue with speculative inference in Eagle mode, where the context length of vLLM inference is constrained by the draft model. #21986 Open 1 task zyongye pushed a commit
to zyongye/vllm
that referenced
this pull request Aug 5, 2025 Add TRT-LLM Attention Sink and MXFP4 MoE ( vllm-project#17 ) 78e69f6 zyongye pushed a commit
to zyongye/vllm
that referenced
this pull request Aug 6, 2025 Add TRT-LLM Attention Sink and MXFP4 MoE ( vllm-project#17 ) 2cc41a7 JeffreyWong20 mentioned this pull request Aug 19, 2025 [Bug]: [TPU] profiling_tpu/profiling.py example crashed when runs on vllm_tpu docker #23194 Closed 1 task Sign up for free to join this conversation on GitHub .
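For reference, a tiny sketch of the measurement loop behind the benchmark_latency.py numbers quoted at the top of this thread: one warm-up step, a few timed profile steps, then the mean is reported. run_generation stands in for a full batched generation pass and is not a real API here:
```
import time


def measure_avg_latency(run_generation, num_warmup=1, num_profile=3):
    for _ in range(num_warmup):
        run_generation()                  # warm up kernels and allocators
    latencies = []
    for _ in range(num_profile):
        start = time.perf_counter()
        run_generation()
        latencies.append(time.perf_counter() - start)
    return sum(latencies) / len(latencies)
```
With the numbers reported above for OPT-13B (batch size 8, 128 output tokens), the average step latency drops from about 5.18 s to 3.49 s, roughly a 1.48x end-to-end speedup from the tensor-parallel execution changes.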
|
2025-09-07 17:49:36
|
b690e34824fd5a5c4054a0c0468ebfb6aa1dd215
|
https://github.com/vllm-project/vllm/pull/21075
| true | true | true | true |
LM_EVAL: lm_eval, lm_eval, lm_eval | PERF: throughput, throughput, throughput | SERVING: vllm serve, Serving, Serving | TEST: test, test, test
|
Copy link Contributor cyang49 commented Jul 16, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . Essential Elements of an Effective PR Description Checklist The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)". The test plan, such as providing test command. The test results, such as pasting the results comparison before and after, or e2e results (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model. Purpose This PR uses preallocated output tensor for SSM output both from decode and prefill paths, instead of allocating individual tensors and then concatenating with torch.vstack . We observed that the original approach causes unnecessary D2D copy. Test Plan Testing with benchmark_serving.py and observe the throughput change. Ideally a slight improvement should be observed Testing with lm_eval to make sure output is still correct Test Result Experiments were done on single H100-80GB. benchmark_serving.py # server
vllm serve ibm-ai-platform/Bamba-9B-v2 --port 9998 # client
python benchmarks/benchmark_serving.py --model ibm-ai-platform/Bamba-9B-v2 --backend vllm --dataset-name sharegpt --dataset-path /net/storage149/mnt/md0/ccyang/github.com/ShareGPT_V3/ShareGPT_V3_unfiltered_cleaned_split.json --ignore-eos --port 9998 Before (#1c3198b) ============ Serving Benchmark Result ============
Successful requests: 983
Benchmark duration (s): 44.69
Total input tokens: 209731
Total generated tokens: 195084
Request throughput (req/s): 22.00
Output token throughput (tok/s): 4365.18
Total Token throughput (tok/s): 9058.10 After ============ Serving Benchmark Result ============
Successful requests: 983
Benchmark duration (s): 44.01
Total input tokens: 209731
Total generated tokens: 195084
Request throughput (req/s): 22.34
Output token throughput (tok/s): 4432.88
Total Token throughput (tok/s): 9198.58
No performance degradation.
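The core pattern of the change, sketched with illustrative helper names (not vLLM's real mamba_ssm or mamba_mixer2 signatures): rather than letting the prefill and decode paths each allocate their own outputs and gluing them together with torch.vstack, which costs an extra device-to-device copy, the caller preallocates the full output once and each kernel writes into its slice:
```
import torch


def ssm_outputs_vstack(chunk_fn, chunks):
    # Original pattern: one allocation per chunk, then vstack copies
    # everything into yet another freshly allocated tensor (extra D2D copy).
    return torch.vstack([chunk_fn(c) for c in chunks])


def ssm_outputs_preallocated(chunk_fn_inplace, chunks, hidden_size, device, dtype):
    # Pattern adopted here: allocate the final tensor once and have each
    # chunk's kernel write directly into its output slice via out=.
    total_rows = sum(c.shape[0] for c in chunks)
    out = torch.empty(total_rows, hidden_size, device=device, dtype=dtype)
    row = 0
    for c in chunks:
        n = c.shape[0]
        chunk_fn_inplace(c, out=out[row:row + n])
        row += n
    return out
```
lm_eval # Command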
lm_eval --model vllm --model_args pretrained=ibm-ai-platform/Bamba-9B-v2,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.95 --batch_size auto --trust_remote_code --cache_requests true --tasks gsm8k Before (#1c3198b) vllm (pretrained=ibm-ai-platform/Bamba-9B-v2,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.95,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value | |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.4162|± |0.0136|
| | |strict-match | 5|exact_match|↑ |0.4132|± |0.0136| After vllm (pretrained=ibm-ai-platform/Bamba-9B-v2,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.95,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value | |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.4162|± |0.0136|
| | |strict-match | 5|exact_match|↑ |0.4132|± |0.0136| (Optional) Documentation Update Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link github-actions bot commented Jul 16, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . gemini-code-assist bot reviewed Jul 16, 2025 View reviewed changes Copy link Contributor gemini-code-assist bot left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Code Review This pull request introduces a performance optimization by pre-allocating the SSM output tensor, which avoids an unnecessary device-to-device copy. The approach is sound and the changes are well-contained. I've identified one critical issue related to tensor sharding that would cause an assertion failure when using tensor parallelism. Addressing this should make the implementation robust. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/model_executor/layers/mamba/mamba_mixer2.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . cyang49 marked this pull request as ready for review July 16, 2025 20:19 cyang49 changed the title [Model] preallocate SSM output tensor to avoid d2d copy overhead [Model] Mamba2 preallocate SSM output tensor to avoid d2d copy overhead Jul 16, 2025 Copy link Member DarkLight1337 commented Jul 17, 2025 cc @tlrmchlsmth @tdoublep All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link mergify bot commented Jul 21, 2025 This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @cyang49 . https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the needs-rebase label Jul 21, 2025 cyang49 force-pushed the pr_mamba2_vstack branch
from f9ab16e to 5f73b79 Compare July 21, 2025 14:51 mergify bot removed
the needs-rebase label Jul 21, 2025 cyang49 force-pushed the pr_mamba2_vstack branch
4 times, most recently
from 875c81f to 3873218 Compare July 23, 2025 15:09 tlrmchlsmth reviewed Jul 30, 2025 View reviewed changes Copy link Collaborator tlrmchlsmth left a comment • edited Loading Uh oh! There was an error while loading. Please reload this page . There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment This looks like a reasonable optimization. My main comment is that this leaves the interface to the mamba_ssm functions more complicated than they were before. Now they support both in-place updating and out-of-place allocation of the outputs. And we need to handle those two cases in a few different places. Could we change it to always be in-place instead? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Contributor Author cyang49 commented Jul 30, 2025 This looks like a reasonable optimization. My main comment is that this leaves the interface to the mamba_ssm functions more complicated than they were before. Now they support both in-place updating and out-of-place allocation of the outputs. And we need to handle those two cases in a few different places. Could we change it to always be in-place instead? I think I kept the original logic as a fall back, but you're right, we can remove them. I will push a simplified version if it is safe to remove. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor Author cyang49 commented Jul 30, 2025 @tlrmchlsmth There are two other uses in plamo2.py and phi4flash.py If I make the kernel only support in-place update, they will need to be changed too. plamo2 has similar logic as mamba_mixer2, so it should work after applying similar changes phi4flash looks quite different, though. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor Author cyang49 commented Jul 31, 2025 • edited Loading Uh oh! There was an error while loading. Please reload this page . I tried to run both plamo2 and phi4flash on main (not the PR branch) and they both failed to run. I think for now we should keep the out-of-place allocation for compatibility, because I can't check the correctness if we keep only the in-place update path. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . cyang49 force-pushed the pr_mamba2_vstack branch
from 3873218 to b165a18 Compare July 31, 2025 16:50 cyang49 requested a review
from WoosukKwon as a code owner July 31, 2025 16:50 Copy link Contributor Author cyang49 commented Jul 31, 2025 Fixed the models that call the affected kernels plamo2
lm_eval --model vllm --model_args pretrained=pfnet/plamo-2.1-2b-cpt,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.95,max_model_len=8192 --batch_size auto --trust_remote_code --cache_requests true --tasks gsm8k
vllm (pretrained=pfnet/plamo-2.1-2b-cpt,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.95,max_model_len=8192,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value | |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.5982|± |0.0135|
| | |strict-match | 5|exact_match|↑ |0.5951|± |0.0135| phi4flash VLLM_ATTENTION_BACKEND=DIFFERENTIAL_FLASH_ATTN lm_eval --model vllm --model_args pretrained=microsoft/Phi-4-mini-flash-reasoning,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.95,enable_prefix_caching=False,enable_chunked_prefill=False,max_model_len=8192 --batch_size auto --trust_remote_code --cache_requests true --tasks gsm8k vllm (pretrained=microsoft/Phi-4-mini-flash-reasoning,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.95,enable_prefix_caching=False,enable_chunked_prefill=False,max_model_len=8192,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value | |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.5239|± |0.0138|
| | |strict-match | 5|exact_match|↑ |0.4837|± |0.0138| 🎉 1 nopperl reacted with hooray emoji All reactions 🎉 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . tlrmchlsmth approved these changes Jul 31, 2025 View reviewed changes tlrmchlsmth added
the ready ONLY add when PR is ready to merge/full CI is needed label Jul 31, 2025 tlrmchlsmth enabled auto-merge (squash) July 31, 2025 19:34 auto-merge was automatically disabled August 1, 2025 18:13 Head branch was pushed to by a user without write access cyang49 force-pushed the pr_mamba2_vstack branch
from b165a18 to 19651f2 Compare August 1, 2025 18:13 cyang49 added 5 commits August 1, 2025 21:13 preallocate SSM output tensor to avoid d2d copy overhead … 3cee43c Signed-off-by: Chih-Chieh Yang <[email protected]> clean up … 6d962a5 Signed-off-by: Chih-Chieh-Yang <[email protected]> keep only in-place update of output … 6035133 Signed-off-by: Chih-Chieh-Yang <[email protected]> mamba2 interface changes for plamo2 … 9632f0f Signed-off-by: Chih-Chieh-Yang <[email protected]> interface change phi4flash … af5f089 Signed-off-by: Chih-Chieh-Yang <[email protected]> fix CI test and mamba_mixer … 97c9a70 Signed-off-by: Chih-Chieh-Yang <[email protected]> cyang49 force-pushed the pr_mamba2_vstack branch
from d59b61d to 97c9a70 Compare August 2, 2025 01:13 Hide details View details vllm-bot merged commit b690e34 into vllm-project : main Aug 2, 2025 39 of 45 checks passed Uh oh! There was an error while loading. Please reload this page . cyang49 deleted the pr_mamba2_vstack branch August 4, 2025 11:53 wenscarl pushed a commit
to wenscarl/vllm
that referenced
this pull request Aug 4, 2025 [Model] Mamba2 preallocate SSM output tensor to avoid d2d copy overhe… … 8223083 …ad ( vllm-project#21075 )
Signed-off-by: Chih-Chieh Yang <[email protected]>
Signed-off-by: Chih-Chieh-Yang <[email protected]>
Signed-off-by: shuw <[email protected]> juuice-lee pushed a commit
to juuice-lee/vllm-moe.code
that referenced
this pull request Aug 5, 2025 [Model] Mamba2 preallocate SSM output tensor to avoid d2d copy overhe… … 4b81d26 …ad ( vllm-project#21075 )
Signed-off-by: Chih-Chieh Yang <[email protected]>
Signed-off-by: Chih-Chieh-Yang <[email protected]> x22x22 pushed a commit
to x22x22/vllm
that referenced
this pull request Aug 5, 2025 [Model] Mamba2 preallocate SSM output tensor to avoid d2d copy overhe… … 871bde5 …ad ( vllm-project#21075 )
Signed-off-by: Chih-Chieh Yang <[email protected]>
Signed-off-by: Chih-Chieh-Yang <[email protected]> x22x22 pushed a commit
to x22x22/vllm
that referenced
this pull request Aug 5, 2025 [Model] Mamba2 preallocate SSM output tensor to avoid d2d copy overhe… … c1ce688 …ad ( vllm-project#21075 )
Signed-off-by: Chih-Chieh Yang <[email protected]>
Signed-off-by: Chih-Chieh-Yang <[email protected]>
Signed-off-by: x22x22 <[email protected]> x22x22 pushed a commit
to x22x22/vllm
that referenced
this pull request Aug 5, 2025 [Model] Mamba2 preallocate SSM output tensor to avoid d2d copy overhe… … 07e421d …ad ( vllm-project#21075 )
Signed-off-by: Chih-Chieh Yang <[email protected]>
Signed-off-by: Chih-Chieh-Yang <[email protected]>
Signed-off-by: x22x22 <[email protected]> npanpaliya pushed a commit
to odh-on-pz/vllm-upstream
that referenced
this pull request Aug 6, 2025 [Model] Mamba2 preallocate SSM output tensor to avoid d2d copy overhe… … 4b27371 …ad ( vllm-project#21075 )
Signed-off-by: Chih-Chieh Yang <[email protected]>
Signed-off-by: Chih-Chieh-Yang <[email protected]> nopperl reviewed Aug 8, 2025 View reviewed changes vllm/model_executor/layers/mamba/ops/mamba_ssm.py @@ -206,7 +206,7 @@ def selective_state_update(state, dt_softplus=False, state_batch_indices=None, pad_slot_id=PAD_SLOT_ID, preallocated_ssm_out =None): out =None): Copy link Contributor nopperl Aug 8, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I think out needs to be a required argument now, because it is not allocated within the function anymore. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Contributor Author cyang49 Aug 8, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Good point. Will address this in an upcoming PR Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 nopperl reacted with thumbs up emoji All reactions 👍 1 reaction jingyu-ml pushed a commit
to jingyu-ml/vllm
that referenced
this pull request Aug 8, 2025 [Model] Mamba2 preallocate SSM output tensor to avoid d2d copy overhe… … 71eb0f9 …ad ( vllm-project#21075 )
Signed-off-by: Chih-Chieh Yang <[email protected]>
Signed-off-by: Chih-Chieh-Yang <[email protected]>
Signed-off-by: jingyu <[email protected]> jinzhen-lin pushed a commit
to jinzhen-lin/vllm
that referenced
this pull request Aug 9, 2025 [Model] Mamba2 preallocate SSM output tensor to avoid d2d copy overhe… … ee9e5c1 …ad ( vllm-project#21075 )
Signed-off-by: Chih-Chieh Yang <[email protected]>
Signed-off-by: Chih-Chieh-Yang <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]> noamgat pushed a commit
to noamgat/vllm
that referenced
this pull request Aug 9, 2025 [Model] Mamba2 preallocate SSM output tensor to avoid d2d copy overhe… … c7e2edf …ad ( vllm-project#21075 )
Signed-off-by: Chih-Chieh Yang <[email protected]>
Signed-off-by: Chih-Chieh-Yang <[email protected]>
Signed-off-by: Noam Gat <[email protected]> yyihuang pushed a commit
to yyihuang/vllm
that referenced
this pull request Aug 11, 2025 [Model] Mamba2 preallocate SSM output tensor to avoid d2d copy overhe… … 2e68882 …ad ( vllm-project#21075 )
Signed-off-by: Chih-Chieh Yang <[email protected]>
Signed-off-by: Chih-Chieh-Yang <[email protected]>
Signed-off-by: Avery Yingyi Huang <[email protected]> paulpak58 pushed a commit
to paulpak58/vllm
that referenced
this pull request Aug 13, 2025 [Model] Mamba2 preallocate SSM output tensor to avoid d2d copy overhe… … 02e862a …ad ( vllm-project#21075 )
Signed-off-by: Chih-Chieh Yang <[email protected]>
Signed-off-by: Chih-Chieh-Yang <[email protected]>
Signed-off-by: Paul Pak <[email protected]> taneem-ibrahim pushed a commit
to taneem-ibrahim/vllm
that referenced
this pull request Aug 14, 2025 [Model] Mamba2 preallocate SSM output tensor to avoid d2d copy overhe… … 49a0a42 …ad ( vllm-project#21075 )
Signed-off-by: Chih-Chieh Yang <[email protected]>
Signed-off-by: Chih-Chieh-Yang <[email protected]> BoyuanFeng pushed a commit
to BoyuanFeng/vllm
that referenced
this pull request Aug 14, 2025 [Model] Mamba2 preallocate SSM output tensor to avoid d2d copy overhe… … 5f66814 …ad ( vllm-project#21075 )
Signed-off-by: Chih-Chieh Yang <[email protected]>
Signed-off-by: Chih-Chieh-Yang <[email protected]>
Signed-off-by: Boyuan Feng <[email protected]> diegocastanibm pushed a commit
to diegocastanibm/vllm
that referenced
this pull request Aug 15, 2025 [Model] Mamba2 preallocate SSM output tensor to avoid d2d copy overhe… … f79d7fa …ad ( vllm-project#21075 )
Signed-off-by: Chih-Chieh Yang <[email protected]>
Signed-off-by: Chih-Chieh-Yang <[email protected]>
Signed-off-by: Diego-Castan <[email protected]> epwalsh pushed a commit
to epwalsh/vllm
that referenced
this pull request Aug 28, 2025 [Model] Mamba2 preallocate SSM output tensor to avoid d2d copy overhe… … d9e22d3 …ad ( vllm-project#21075 )
Signed-off-by: Chih-Chieh Yang <[email protected]>
Signed-off-by: Chih-Chieh-Yang <[email protected]> zhewenl pushed a commit
to zhewenl/vllm
that referenced
this pull request Aug 28, 2025 [Model] Mamba2 preallocate SSM output tensor to avoid d2d copy overhe… … 1b7d42b …ad ( vllm-project#21075 )
Signed-off-by: Chih-Chieh Yang <[email protected]>
Signed-off-by: Chih-Chieh-Yang <[email protected]> googlercolin pushed a commit
to googlercolin/vllm
that referenced
this pull request Aug 29, 2025 [Model] Mamba2 preallocate SSM output tensor to avoid d2d copy overhe… … e3f090e …ad ( vllm-project#21075 )
Signed-off-by: Chih-Chieh Yang <[email protected]>
Signed-off-by: Chih-Chieh-Yang <[email protected]> nopperl mentioned this pull request Aug 31, 2025 [V1] v1 engine + full CUDA graph support for PLaMo2 #23998 Merged 5 tasks
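The change discussed in this record boils down to letting the caller hand the mamba_ssm kernels a preallocated output buffer, so the SSM result lands directly in the final hidden-states tensor instead of being allocated inside the kernel and copied over afterwards. The sketch below is a minimal PyTorch illustration of that pattern, with placeholder math and hypothetical function names; it is not vLLM's actual selective_state_update implementation.

```python
# Minimal sketch of "preallocate the output and write in place" vs.
# "allocate inside and copy out". Names and math are illustrative only.
import torch

def ssm_update_out_of_place(x: torch.Tensor) -> torch.Tensor:
    # stand-in for the old behaviour: the kernel allocates its own output
    return x * 2.0  # placeholder for the real selective-state update math

def ssm_update_in_place(x: torch.Tensor, out: torch.Tensor) -> None:
    # stand-in for the new behaviour: the caller provides `out`, kernel fills it
    torch.mul(x, 2.0, out=out)

num_tokens, hidden = 8, 16
hidden_states = torch.empty(num_tokens, hidden)
x = torch.randn(num_tokens, hidden)

# Old path: result allocated by the kernel, then a device-to-device copy.
hidden_states.copy_(ssm_update_out_of_place(x))

# New path: the preallocated buffer is written directly, no extra copy.
ssm_update_in_place(x, out=hidden_states)
```

This is also why the reviewer above notes that `out` effectively becomes a required argument: once the kernel no longer allocates its own output, a missing buffer means there is nowhere to write the result.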
|
2025-09-07 17:49:40
|
58eee5f2e05b74eb2cb1a3bbda9c04df4805e4cc
|
https://github.com/vllm-project/vllm/pull/20000
| false | true | false | true |
PERF: optimization, Optimization, optimization | TEST: test, test, Test
|
Copy link Contributor vadiklyutiy commented Jun 24, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . Essential Elements of an Effective PR Description Checklist The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)". The test plan, such as providing test command. Purpose to_py_obj is called inside transformers tokenizer.decode . In our case to_py_obj convert list to list what is useless. Avoid to_py_obj to speedup decoding. Test Plan Change is really short and trivial. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions avoid useless list-to-list conversion … b74bbba Signed-off-by: Vadim Gimpelson <[email protected]> Copy link github-actions bot commented Jun 24, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . gemini-code-assist bot reviewed Jun 24, 2025 View reviewed changes Copy link Contributor gemini-code-assist bot left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Summary of Changes Hello @vadiklyutiy , I'm Gemini Code Assist 1 ! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces a performance optimization to the token decoding process within the tokenizer utility. By intelligently selecting the most efficient decoding method available, it aims to eliminate redundant data conversions and improve the speed of token decoding. Highlights Performance Optimization : Modified the decode_tokens function in vllm/transformers_utils/tokenizer.py to prioritize using the _decode method of the tokenizer if it exists, falling back to the standard decode method otherwise. This change aims to bypass an unnecessary list-to-list conversion ( to_py_obj ) that can occur during token decoding, thereby improving performance. Using Gemini Code Assist The full guide for Gemini Code Assist can be found on our documentation page , here are some quick tips. Invoking Gemini You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command> . Below is a summary of the supported commands. 
Feature Command Description Code Review /gemini review Performs a code review for the current pull request in its current state. Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state. Comment @gemini-code-assist Responds in comments when explicitly tagged, both in issue comments and review comments. Help /gemini help Displays a list of available commands. Customization To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here . Limitations & Feedback Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback. You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension . Footnotes Review the Privacy Notices , Generative AI Prohibited Use Policy , Terms of Service , and learn how to configure Gemini Code Assist in GitHub here . Gemini can make mistakes, so double check it and use code with caution . ↩ Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions gemini-code-assist bot reviewed Jun 24, 2025 View reviewed changes Copy link Contributor gemini-code-assist bot left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Code Review The pull request modifies the decode_tokens function in vllm/transformers_utils/tokenizer.py to use the _decode method of the tokenizer if it exists, falling back to the decode method if it doesn't. This is done to potentially speed up decoding by avoiding unnecessary list-to-list conversions. The change is concise and seems reasonable. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/transformers_utils/tokenizer.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . 22quinn reviewed Jun 24, 2025 View reviewed changes Copy link Collaborator 22quinn left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Interesting. This is the code pointer for anyone interested. I feel this kind of optimization is better done in huggingface. I dig a bit and found there was already some discussion and optimization in huggingface/transformers#36885 Have you measured the speedup for this PR? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions 22quinn added
the performance Performance-related issues label Jun 24, 2025 Copy link Contributor Author vadiklyutiy commented Jun 24, 2025 Interesting. This is the code pointer for anyone interested. I feel this kind of optimization is better done in huggingface. I dig a bit and found there was already some discussion and optimization in huggingface/transformers#36885 Have you measured the speedup for this PR? @22quinn you are right. This change from my backlog and I did it some time ago. I measured performance without patch to HF you mentioned and that saw a lot of to_py_obj calls for every list element. I will check performance improvement on the latest version. Maybe after HF patch performance improvement too minor to worry about it. Thank you for pointing this out. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator ProExpertProg commented Jun 26, 2025 Congrats on #20000 ! 😄 1 22quinn reacted with laugh emoji All reactions 😄 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Hide details View details vllm-bot merged commit 58eee5f into vllm-project : main Aug 2, 2025 15 checks passed Uh oh! There was an error while loading. Please reload this page . Copy link Member DarkLight1337 commented Aug 2, 2025 • edited Loading Uh oh! There was an error while loading. Please reload this page . Oops accidentally merged this PR, feel free to revert if there's a problem with it All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor Author vadiklyutiy commented Aug 3, 2025 @DarkLight1337 Should I create PR to revert it? All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Member DarkLight1337 commented Aug 3, 2025 Is this change still relevant? If not then yeah let's revert All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor Author vadiklyutiy commented Aug 3, 2025 Ok, let's me collect up to date numbers. Mentioned above merge to transformers improved performance but not fully - there is still some overhead. With specific numbers we can decide. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . wenscarl pushed a commit
to wenscarl/vllm
that referenced
this pull request Aug 4, 2025 [PERF] Use faster way of decode in tokenizer: avoid useless list-to-l… … 35f1408 …ist conversion ( vllm-project#20000 )
Signed-off-by: Vadim Gimpelson <[email protected]>
Signed-off-by: shuw <[email protected]> juuice-lee pushed a commit
to juuice-lee/vllm-moe.code
that referenced
this pull request Aug 5, 2025 [PERF] Use faster way of decode in tokenizer: avoid useless list-to-l… … 9b76219 …ist conversion ( vllm-project#20000 )
Signed-off-by: Vadim Gimpelson <[email protected]> x22x22 pushed a commit
to x22x22/vllm
that referenced
this pull request Aug 5, 2025 [PERF] Use faster way of decode in tokenizer: avoid useless list-to-l… … fc6cbb1 …ist conversion ( vllm-project#20000 )
Signed-off-by: Vadim Gimpelson <[email protected]> x22x22 pushed a commit
to x22x22/vllm
that referenced
this pull request Aug 5, 2025 [PERF] Use faster way of decode in tokenizer: avoid useless list-to-l… … 8cb05d1 …ist conversion ( vllm-project#20000 )
Signed-off-by: Vadim Gimpelson <[email protected]>
Signed-off-by: x22x22 <[email protected]> x22x22 pushed a commit
to x22x22/vllm
that referenced
this pull request Aug 5, 2025 [PERF] Use faster way of decode in tokenizer: avoid useless list-to-l… … fa14d61 …ist conversion ( vllm-project#20000 )
Signed-off-by: Vadim Gimpelson <[email protected]>
Signed-off-by: x22x22 <[email protected]> npanpaliya pushed a commit
to odh-on-pz/vllm-upstream
that referenced
this pull request Aug 6, 2025 [PERF] Use faster way of decode in tokenizer: avoid useless list-to-l… … 91186e5 …ist conversion ( vllm-project#20000 )
Signed-off-by: Vadim Gimpelson <[email protected]> jingyu-ml pushed a commit
to jingyu-ml/vllm
that referenced
this pull request Aug 8, 2025 [PERF] Use faster way of decode in tokenizer: avoid useless list-to-l… … 6e204de …ist conversion ( vllm-project#20000 )
Signed-off-by: Vadim Gimpelson <[email protected]>
Signed-off-by: jingyu <[email protected]> jinzhen-lin pushed a commit
to jinzhen-lin/vllm
that referenced
this pull request Aug 9, 2025 [PERF] Use faster way of decode in tokenizer: avoid useless list-to-l… … 2d6070c …ist conversion ( vllm-project#20000 )
Signed-off-by: Vadim Gimpelson <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]> noamgat pushed a commit
to noamgat/vllm
that referenced
this pull request Aug 9, 2025 [PERF] Use faster way of decode in tokenizer: avoid useless list-to-l… … 2349d3d …ist conversion ( vllm-project#20000 )
Signed-off-by: Vadim Gimpelson <[email protected]>
Signed-off-by: Noam Gat <[email protected]> yyihuang pushed a commit
to yyihuang/vllm
that referenced
this pull request Aug 11, 2025 [PERF] Use faster way of decode in tokenizer: avoid useless list-to-l… … 5372242 …ist conversion ( vllm-project#20000 )
Signed-off-by: Vadim Gimpelson <[email protected]>
Signed-off-by: Avery Yingyi Huang <[email protected]> paulpak58 pushed a commit
to paulpak58/vllm
that referenced
this pull request Aug 13, 2025 [PERF] Use faster way of decode in tokenizer: avoid useless list-to-l… … 66782d4 …ist conversion ( vllm-project#20000 )
Signed-off-by: Vadim Gimpelson <[email protected]>
Signed-off-by: Paul Pak <[email protected]> taneem-ibrahim pushed a commit
to taneem-ibrahim/vllm
that referenced
this pull request Aug 14, 2025 [PERF] Use faster way of decode in tokenizer: avoid useless list-to-l… … 4b814e9 …ist conversion ( vllm-project#20000 )
Signed-off-by: Vadim Gimpelson <[email protected]> BoyuanFeng pushed a commit
to BoyuanFeng/vllm
that referenced
this pull request Aug 14, 2025 [PERF] Use faster way of decode in tokenizer: avoid useless list-to-l… … 8ffd112 …ist conversion ( vllm-project#20000 )
Signed-off-by: Vadim Gimpelson <[email protected]>
Signed-off-by: Boyuan Feng <[email protected]> diegocastanibm pushed a commit
to diegocastanibm/vllm
that referenced
this pull request Aug 15, 2025 [PERF] Use faster way of decode in tokenizer: avoid useless list-to-l… … 8fb256d …ist conversion ( vllm-project#20000 )
Signed-off-by: Vadim Gimpelson <[email protected]>
Signed-off-by: Diego-Castan <[email protected]> Copy link Contributor Author vadiklyutiy commented Aug 22, 2025 @DarkLight1337 Sorry for late reply. Ran Qwen-2.5-VL-3B with high load on latest main with and without this PR. decode_token itself speed up is sufficient - 28%. But after transformers optimizations we don't spend a lot of time in it. E2E improving is tiny - around 0.2%. Please let me know what do you think. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Member DarkLight1337 commented Aug 22, 2025 • edited Loading Uh oh! There was an error while loading. Please reload this page . OK, let's revert this PR then. Thanks for investgating this! All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . DarkLight1337 added a commit
to DarkLight1337/vllm
that referenced
this pull request Aug 22, 2025 Revert "[PERF] Use faster way of decode in tokenizer: avoid useless l… … ec8ebfe …ist-to-list conversion ( vllm-project#20000 )"
This reverts commit 58eee5f .
Signed-off-by: DarkLight1337 <[email protected]> DarkLight1337 mentioned this pull request Aug 22, 2025 Revert "[PERF] Use faster way of decode in tokenizer: avoid useless list-to-list conversion (#20000)" #23396 Merged 4 tasks Isotr0py pushed a commit
that referenced
this pull request Aug 23, 2025 Revert "[PERF] Use faster way of decode in tokenizer: avoid useless l… … b4e9fd8 …ist-to-list conversion ( #20000 )" ( #23396 )
Signed-off-by: DarkLight1337 <[email protected]> FFFfff1FFFfff pushed a commit
to FFFfff1FFFfff/my_vllm
that referenced
this pull request Aug 25, 2025 Revert "[PERF] Use faster way of decode in tokenizer: avoid useless l… … cb92141 …ist-to-list conversion ( vllm-project#20000 )" ( vllm-project#23396 )
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: FFFfff1FFFfff <[email protected]> epwalsh pushed a commit
to epwalsh/vllm
that referenced
this pull request Aug 28, 2025 [PERF] Use faster way of decode in tokenizer: avoid useless list-to-l… … f902dce …ist conversion ( vllm-project#20000 )
Signed-off-by: Vadim Gimpelson <[email protected]> epwalsh pushed a commit
to epwalsh/vllm
that referenced
this pull request Aug 28, 2025 Revert "[PERF] Use faster way of decode in tokenizer: avoid useless l… … 622bd37 …ist-to-list conversion ( vllm-project#20000 )" ( vllm-project#23396 )
Signed-off-by: DarkLight1337 <[email protected]> juuice-lee pushed a commit
to juuice-lee/vllm-moe.code
that referenced
this pull request Aug 28, 2025 Revert "[PERF] Use faster way of decode in tokenizer: avoid useless l… … 84c70d4 …ist-to-list conversion ( vllm-project#20000 )" ( vllm-project#23396 )
Signed-off-by: DarkLight1337 <[email protected]> xiao-llm pushed a commit
to xiao-llm/vllm
that referenced
this pull request Aug 28, 2025 Revert "[PERF] Use faster way of decode in tokenizer: avoid useless l… … dd95e26 …ist-to-list conversion ( vllm-project#20000 )" ( vllm-project#23396 )
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Xiao Yu <[email protected]> xiao-llm pushed a commit
to xiao-llm/vllm
that referenced
this pull request Aug 28, 2025 Revert "[PERF] Use faster way of decode in tokenizer: avoid useless l… … 2b472fc …ist-to-list conversion ( vllm-project#20000 )" ( vllm-project#23396 )
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Xiao Yu <[email protected]> zhewenl pushed a commit
to zhewenl/vllm
that referenced
this pull request Aug 28, 2025 [PERF] Use faster way of decode in tokenizer: avoid useless list-to-l… … cd0e40b …ist conversion ( vllm-project#20000 )
Signed-off-by: Vadim Gimpelson <[email protected]> zhewenl pushed a commit
to zhewenl/vllm
that referenced
this pull request Aug 28, 2025 Revert "[PERF] Use faster way of decode in tokenizer: avoid useless l… … fbaa487 …ist-to-list conversion ( vllm-project#20000 )" ( vllm-project#23396 )
Signed-off-by: DarkLight1337 <[email protected]> dumb0002 pushed a commit
to dumb0002/vllm
that referenced
this pull request Aug 28, 2025 Revert "[PERF] Use faster way of decode in tokenizer: avoid useless l… … f30ac74 …ist-to-list conversion ( vllm-project#20000 )" ( vllm-project#23396 )
Signed-off-by: DarkLight1337 <[email protected]> googlercolin pushed a commit
to googlercolin/vllm
that referenced
this pull request Aug 29, 2025 [PERF] Use faster way of decode in tokenizer: avoid useless list-to-l… … 04627e3 …ist conversion ( vllm-project#20000 )
Signed-off-by: Vadim Gimpelson <[email protected]> 2015aroras pushed a commit
to 2015aroras/vllm
that referenced
this pull request Aug 29, 2025 Revert "[PERF] Use faster way of decode in tokenizer: avoid useless l… … 38f7e84 …ist-to-list conversion ( vllm-project#20000 )" ( vllm-project#23396 )
Signed-off-by: DarkLight1337 <[email protected]> mengxingkongzhouhan pushed a commit
to mengxingkongzhouhan/vllm
that referenced
this pull request Aug 30, 2025 Revert "[PERF] Use faster way of decode in tokenizer: avoid useless l… … 4eec518 …ist-to-list conversion ( vllm-project#20000 )" ( vllm-project#23396 )
Signed-off-by: DarkLight1337 <[email protected]> mengxingkongzhouhan pushed a commit
to mengxingkongzhouhan/vllm
that referenced
this pull request Aug 30, 2025 Revert "[PERF] Use faster way of decode in tokenizer: avoid useless l… … 1f5ccee …ist-to-list conversion ( vllm-project#20000 )" ( vllm-project#23396 )
Signed-off-by: DarkLight1337 <[email protected]> mengxingkongzhouhan pushed a commit
to mengxingkongzhouhan/vllm
that referenced
this pull request Aug 30, 2025 Revert "[PERF] Use faster way of decode in tokenizer: avoid useless l… … b20b3e1 …ist-to-list conversion ( vllm-project#20000 )" ( vllm-project#23396 )
Signed-off-by: DarkLight1337 <[email protected]> mengxingkongzhouhan pushed a commit
to mengxingkongzhouhan/vllm
that referenced
this pull request Aug 30, 2025 Revert "[PERF] Use faster way of decode in tokenizer: avoid useless l… … fe798f2 …ist-to-list conversion ( vllm-project#20000 )" ( vllm-project#23396 )
Signed-off-by: DarkLight1337 <[email protected]> mengxingkongzhouhan pushed a commit
to mengxingkongzhouhan/vllm
that referenced
this pull request Aug 30, 2025 Revert "[PERF] Use faster way of decode in tokenizer: avoid useless l… … 4f2a849 …ist-to-list conversion ( vllm-project#20000 )" ( vllm-project#23396 )
Signed-off-by: DarkLight1337 <[email protected]> nopperl pushed a commit
to pfnet/vllm
that referenced
this pull request Sep 3, 2025 Revert "[PERF] Use faster way of decode in tokenizer: avoid useless l… … 5a917a8 …ist-to-list conversion ( vllm-project#20000 )" ( vllm-project#23396 )
Signed-off-by: DarkLight1337 <[email protected]> 842974287 pushed a commit
to 842974287/vllm
that referenced
this pull request Sep 3, 2025 Revert "[PERF] Use faster way of decode in tokenizer: avoid useless l… … 81e37d6 …ist-to-list conversion ( vllm-project#20000 )" ( vllm-project#23396 )
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Shiyan Deng <[email protected]> zhewenl pushed a commit
to zhewenl/vllm
that referenced
this pull request Sep 3, 2025 Revert "[PERF] Use faster way of decode in tokenizer: avoid useless l… … 1046c1c …ist-to-list conversion ( vllm-project#20000 )" ( vllm-project#23396 )
Signed-off-by: DarkLight1337 <[email protected]> ekagra-ranjan pushed a commit
to ekagra-ranjan/vllm
that referenced
this pull request Sep 4, 2025 Revert "[PERF] Use faster way of decode in tokenizer: avoid useless l… … 4f93bc2 …ist-to-list conversion ( vllm-project#20000 )" ( vllm-project#23396 )
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Ekagra Ranjan <[email protected]>
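The optimization (and its eventual revert) discussed above hinges on one call-site detail: `tokenizer.decode` runs `to_py_obj` on its input, which is a pointless list-to-list conversion when the caller already holds plain Python lists of token ids, while the private `_decode` skips that step. Below is a hedged sketch of the idea, not the exact vLLM code; `_decode` is a private Hugging Face API and its signature may differ across versions.

```python
# Sketch: prefer the tokenizer's private `_decode` (skips to_py_obj),
# fall back to the public `decode` when it is not available.
from transformers import AutoTokenizer

def fast_decode(tokenizer, token_ids: list[int], skip_special_tokens: bool = True) -> str:
    decode_fn = getattr(tokenizer, "_decode", tokenizer.decode)
    return decode_fn(token_ids, skip_special_tokens=skip_special_tokens)

tok = AutoTokenizer.from_pretrained("gpt2")
ids = tok.encode("hello world")
print(fast_decode(tok, ids))
```

As measured later in the thread, the shortcut made the decode call itself about 28% faster, but after the upstream transformers optimization it moved end-to-end throughput by only around 0.2%, which is why the change was reverted.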
|
2025-09-07 17:49:44
|
eefbf4a68b7b0a5b8364a59647906be1b7f043e2
|
https://github.com/vllm-project/vllm/pull/22036
| true | true | false | true |
LM_EVAL: lm_eval, gsm8k, gsm8k | PERF: improvement | TEST: Test, test, test
|
Copy link Collaborator yewentao256 commented Jul 31, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . Purpose Using vectorization utils to reshape_and_cache_flash and get performance improvement Test Acc lm_eval --model vllm --model_args " pretrained=Qwen/Qwen3-30B-A3B-FP8,max_model_len=32768,enforce_eager=True " --trust_remote_code --tasks gsm8k --num_fewshot 5 --batch_size auto | Tasks | Version | Filter | n-shot | Metric | | Value | | Stderr | | ----- | ------: | ---------------- | -----: | ----------- | --- | -----: | --- | -----: | | gsm8k | 3 | flexible-extract | 5 | exact_match | ↑ | 0.8173 | ± | 0.0106 | | | | strict-match | 5 | exact_match | ↑ | 0.8870 | ± | 0.0087 | # main | Tasks | Version | Filter | n-shot | Metric | | Value | | Stderr | | ----- | ------: | ---------------- | -----: | ----------- | --- | -----: | --- | -----: | | gsm8k | 3 | flexible-extract | 5 | exact_match | ↑ | 0.8173 | ± | 0.0106 | | | | strict-match | 5 | exact_match | ↑ | 0.8870 | ± | 0.0087 | pytest test_cache.py -x
==================== test session starts ====================
platform linux -- Python 3.12.3, pytest-8.4.0, pluggy-1.6.0
rootdir: /home/wentao/vllm-source
configfile: pyproject.toml
plugins: asyncio-1.0.0, anyio-4.9.0
asyncio: mode=Mode.STRICT, asyncio_default_fixture_loop_scope=None, asyncio_default_test_loop_scope=function
collected 1102 items
test_cache.py ....................................... [ 3%]
...................................s...s...s...s...s. [ 8%]
..s...s...s...s...s...s...s...s...s...s...s...s...s.. [ 13%]
..................................................... [ 17%]
....................s...s...s...s...s...s...s...s...s [ 22%]
...s...s...s...s...s...s...s...s...s................. [ 27%]
..................................................... [ 32%]
..................................................... [ 37%]
..................................................... [ 42%]
..................................................... [ 46%]
..................................................... [ 51%]
..................................................... [ 56%]
..................................................... [ 61%]
..................................................... [ 66%]
..................................................... [ 70%]
...........s.ss.sssss.ss.ss.sssss.ss.ss.sssss.ss.ss.s [ 75%]
ssss.ss.ss.sssss.ss.ss.sssss.ss.ss.sssss.ss.ss.sssss. [ 80%]
ss.ss.sssss.ss.ss.sssss.ss.ss.sssss.ss.ss.sssss.ss.ss [ 85%]
.sssss.ss.ss.sssss.ss.ss.sssss.ss.ss.sssss.ss.ss.ssss [ 90%]
s.ss.ss.sssss.s...................................... [ 94%]
..................................................... [ 99%]
sss [100%]
======= 901 passed, 201 skipped in 349.21s (0:05:49) ========
Performance
python benchmark_reshape_and_cache_flash.py
| num_tokens | layout | Old Run (µs) | New Run (µs) | Change (%) |
|-----------:|--------|-------------:|-------------:|------------|
| 2 | HND | 10.326 | 8.323 | -19.4% 🚀 |
| 4 | HND | 10.440 | 8.355 | -20.0% 🚀 |
| 8 | HND | 10.356 | 8.344 | -19.4% 🚀 |
| 16 | HND | 10.330 | 8.372 | -19.0% 🚀 |
| 32 | HND | 10.345 | 8.348 | -19.3% 🚀 |
| 64 | HND | 10.454 | 8.354 | -20.1% 🚀 |
| 128 | HND | 10.397 | 8.370 | -19.5% 🚀 |
| 256 | HND | 14.431 | 10.375 | -28.1% 🚀 |
| 512 | HND | 24.809 | 20.137 | -18.8% 🚀 |
| 1024 | HND | 51.389 | 45.196 | -12.1% 🚀 |
| 2048 | HND | 96.466 | 77.908 | -19.2% 🚀 |
| 4096 | HND | 175.695 | 147.068 | -16.3% 🚀 |
| 8192 | HND | 336.814 | 279.106 | -17.1% 🚀 |
| 16384 | HND | 668.001 | 547.169 | -18.1% 🚀 |
| 32768 | HND | 1320.570 | 1082.070 | -18.1% 🚀 |
| 65536 | HND | 2605.930 | 2149.950 | -17.5% 🚀 |
| 2 | NHD | 10.371 | 6.649 | -35.9% 🚀 |
| 4 | NHD | 10.337 | 6.407 | -38.0% 🚀 |
| 8 | NHD | 10.346 | 6.338 | -38.7% 🚀 |
| 16 | NHD | 10.352 | 6.394 | -38.2% 🚀 |
| 32 | NHD | 10.350 | 7.416 | -28.3% 🚀 |
| 64 | NHD | 10.341 | 7.305 | -29.4% 🚀 |
| 128 | NHD | 10.349 | 7.614 | -26.4% 🚀 |
| 256 | NHD | 14.401 | 10.363 | -28.0% 🚀 |
| 512 | NHD | 25.955 | 15.084 | -41.9% 🚀 |
| 1024 | NHD | 49.264 | 30.690 | -37.7% 🚀 |
| 2048 | NHD | 93.674 | 53.726 | -42.6% 🚀 |
| 4096 | NHD | 172.364 | 101.030 | -41.4% 🚀 |
| 8192 | NHD | 333.329 | 195.911 | -41.2% 🚀 |
| 16384 | NHD | 665.351 | 385.012 | -42.1% 🚀 |
| 32768 | NHD | 1308.720 | 762.607 | -41.7% 🚀 |
| 65536 | NHD | 2587.800 | 1519.310 | -41.3% 🚀 |
🎉 3 mgoin, ProExpertProg, and minosfuture reacted with hooray emoji yewentao256 added 2 commits July 31, 2025 17:15 optimize reshape and cache flash kernel … ec2e746 Signed-off-by: yewentao256 <[email protected]> add benchmark script … 1d25423 Signed-off-by: yewentao256 <[email protected]> mergify bot added
the performance Performance-related issues label Jul 31, 2025 gemini-code-assist bot reviewed Jul 31, 2025 View reviewed changes Copy link Contributor gemini-code-assist bot left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Code Review This pull request optimizes the reshape_and_cache_flash CUDA kernel by using vectorization, which results in significant performance improvements. The changes look good, but there is a critical correctness issue. The new implementation assumes a contiguous memory layout for the (num_heads, head_size) dimensions in the KV cache, which is only true for the NHD layout. This breaks support for the HND layout, which is also a supported configuration. I've provided a detailed comment with a suggested fix to address this. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions csrc/cache_kernels.cu Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . Copy link github-actions bot commented Jul 31, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . yewentao256 added 4 commits July 31, 2025 17:45 Fallback HND … 8c4484e Signed-off-by: yewentao256 <[email protected]> HND optimize … 27546f6 Signed-off-by: yewentao256 <[email protected]> optimize HND and update benchmark script … 8896ba3 Signed-off-by: yewentao256 <[email protected]> update comments … f850fb5 Signed-off-by: yewentao256 <[email protected]> Copy link Collaborator robertgshaw2-redhat commented Aug 1, 2025 wow, nice work 🚀 1 yewentao256 reacted with rocket emoji All reactions 🚀 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mgoin added
the ready ONLY add when PR is ready to merge/full CI is needed label Aug 1, 2025 mgoin approved these changes Aug 1, 2025 View reviewed changes Copy link Member mgoin left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment LGTM, vectorize_with_alignment should deal with uneven shapes and existing CI should cover this. I'll make sure to unblock a full run just in case Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 yewentao256 reacted with thumbs up emoji All reactions 👍 1 reaction Hide details View details mgoin merged commit eefbf4a into vllm-project : main Aug 1, 2025 106 of 108 checks passed Uh oh! There was an error while loading. Please reload this page . wenscarl pushed a commit
to wenscarl/vllm
that referenced
this pull request Aug 4, 2025 [Perf] Optimize reshape_and_cache_flash CUDA Kernel ( vllm-project#2… … 2d1176c …2036 )
Signed-off-by: yewentao256 <[email protected]>
Signed-off-by: shuw <[email protected]> mgoin mentioned this pull request Aug 5, 2025 Update rms_norm_kernel by removing redundant global memory loads #22134 Closed juuice-lee pushed a commit
to juuice-lee/vllm-moe.code
that referenced
this pull request Aug 5, 2025 [Perf] Optimize reshape_and_cache_flash CUDA Kernel ( vllm-project#2… … 77fb21a …2036 )
Signed-off-by: yewentao256 <[email protected]> x22x22 pushed a commit
to x22x22/vllm
that referenced
this pull request Aug 5, 2025 [Perf] Optimize reshape_and_cache_flash CUDA Kernel ( vllm-project#2… … af2e1b0 …2036 )
Signed-off-by: yewentao256 <[email protected]> x22x22 pushed a commit
to x22x22/vllm
that referenced
this pull request Aug 5, 2025 [Perf] Optimize reshape_and_cache_flash CUDA Kernel ( vllm-project#2… … 243072a …2036 )
Signed-off-by: yewentao256 <[email protected]>
Signed-off-by: x22x22 <[email protected]> x22x22 pushed a commit
to x22x22/vllm
that referenced
this pull request Aug 5, 2025 [Perf] Optimize reshape_and_cache_flash CUDA Kernel ( vllm-project#2… … 70a4ebc …2036 )
Signed-off-by: yewentao256 <[email protected]>
Signed-off-by: x22x22 <[email protected]> npanpaliya pushed a commit
to odh-on-pz/vllm-upstream
that referenced
this pull request Aug 6, 2025 [Perf] Optimize reshape_and_cache_flash CUDA Kernel ( vllm-project#2… … 0776d55 …2036 )
Signed-off-by: yewentao256 <[email protected]> jingyu-ml pushed a commit
to jingyu-ml/vllm
that referenced
this pull request Aug 8, 2025 [Perf] Optimize reshape_and_cache_flash CUDA Kernel ( vllm-project#2… … 4a21190 …2036 )
Signed-off-by: yewentao256 <[email protected]>
Signed-off-by: jingyu <[email protected]> jinzhen-lin pushed a commit
to jinzhen-lin/vllm
that referenced
this pull request Aug 9, 2025 [Perf] Optimize reshape_and_cache_flash CUDA Kernel ( vllm-project#2… … 8854ac4 …2036 )
Signed-off-by: yewentao256 <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]> noamgat pushed a commit
to noamgat/vllm
that referenced
this pull request Aug 9, 2025 [Perf] Optimize reshape_and_cache_flash CUDA Kernel ( vllm-project#2… … 417c8f8 …2036 )
Signed-off-by: yewentao256 <[email protected]>
Signed-off-by: Noam Gat <[email protected]> yyihuang pushed a commit
to yyihuang/vllm
that referenced
this pull request Aug 11, 2025 [Perf] Optimize reshape_and_cache_flash CUDA Kernel ( vllm-project#2… … 677f751 …2036 )
Signed-off-by: yewentao256 <[email protected]>
Signed-off-by: Avery Yingyi Huang <[email protected]> paulpak58 pushed a commit
to paulpak58/vllm
that referenced
this pull request Aug 13, 2025 [Perf] Optimize reshape_and_cache_flash CUDA Kernel ( vllm-project#2… … 8883b90 …2036 )
Signed-off-by: yewentao256 <[email protected]>
Signed-off-by: Paul Pak <[email protected]> taneem-ibrahim pushed a commit
to taneem-ibrahim/vllm
that referenced
this pull request Aug 14, 2025 [Perf] Optimize reshape_and_cache_flash CUDA Kernel ( vllm-project#2… … 4d7adb0 …2036 )
Signed-off-by: yewentao256 <[email protected]> BoyuanFeng pushed a commit
to BoyuanFeng/vllm
that referenced
this pull request Aug 14, 2025 [Perf] Optimize reshape_and_cache_flash CUDA Kernel ( vllm-project#2… … 9f6eea7 …2036 )
Signed-off-by: yewentao256 <[email protected]>
Signed-off-by: Boyuan Feng <[email protected]> diegocastanibm pushed a commit
to diegocastanibm/vllm
that referenced
this pull request Aug 15, 2025 [Perf] Optimize reshape_and_cache_flash CUDA Kernel ( vllm-project#2… … 59b5f69 …2036 )
Signed-off-by: yewentao256 <[email protected]>
Signed-off-by: Diego-Castan <[email protected]> yewentao256 mentioned this pull request Aug 24, 2025 Vectorize RMSNorm CUDA kernel #22602 Open epwalsh pushed a commit
to epwalsh/vllm
that referenced
this pull request Aug 28, 2025 [Perf] Optimize reshape_and_cache_flash CUDA Kernel ( vllm-project#2… … 018781e …2036 )
Signed-off-by: yewentao256 <[email protected]> zhewenl pushed a commit
to zhewenl/vllm
that referenced
this pull request Aug 28, 2025 [Perf] Optimize reshape_and_cache_flash CUDA Kernel ( vllm-project#2… … 64db329 …2036 )
Signed-off-by: yewentao256 <[email protected]> googlercolin pushed a commit
to googlercolin/vllm
that referenced
this pull request Aug 29, 2025 [Perf] Optimize reshape_and_cache_flash CUDA Kernel ( vllm-project#2… … 27c54dd …2036 )
Signed-off-by: yewentao256 <[email protected]>
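For context on what the optimized kernel computes: reshape_and_cache_flash scatters each new token's key and value vectors into the paged KV cache at the slot given by slot_mapping, and the speedup in this PR comes from performing those copies with vectorized, alignment-aware loads and stores. Below is a rough PyTorch reference for the NHD layout only, ignoring fp8 quantization and the HND variant; shapes and the function name are assumptions for illustration, not the kernel's exact interface.

```python
# Reference semantics of the scatter performed by reshape_and_cache_flash
# (NHD layout), written in plain PyTorch for clarity.
import torch

def reshape_and_cache_ref(key, value, key_cache, value_cache, slot_mapping, block_size):
    # key/value:             [num_tokens, num_heads, head_size]
    # key_cache/value_cache: [num_blocks, block_size, num_heads, head_size]
    block_idx = slot_mapping // block_size
    block_off = slot_mapping % block_size
    key_cache[block_idx, block_off] = key
    value_cache[block_idx, block_off] = value

num_tokens, num_heads, head_size, block_size, num_blocks = 4, 8, 64, 16, 32
key = torch.randn(num_tokens, num_heads, head_size)
value = torch.randn(num_tokens, num_heads, head_size)
key_cache = torch.zeros(num_blocks, block_size, num_heads, head_size)
value_cache = torch.zeros(num_blocks, block_size, num_heads, head_size)
slot_mapping = torch.tensor([3, 17, 18, 100])

reshape_and_cache_ref(key, value, key_cache, value_cache, slot_mapping, block_size)
```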
|
2025-09-07 17:49:48
|
ac45c44d98e77f30e47b8fb69134f4635183070d
|
https://github.com/vllm-project/vllm/pull/21837
| true | true | true | true |
LM_EVAL: gsm8k | PERF: optimization | SERVING: vllm serve, serve | TEST: Test, Test, test
|
Copy link Contributor varun-sundar-rabindranath commented Jul 29, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . Purpose DeepEPHighThroughput All2All kernel when used with DeepSeek models dispatches the tokens in 16bit datatype and quantizes after dispatch. This is inefficient for 2 reasons, More data in communication More data to quantize after dispatch This PR introduces a fix to quantize to fp8 first and then dispatch the fp8 tensor. Test Plan canhazgpu run -g2 -- pytest -s tests/kernels/moe/test_modular_kernel_combinations.py canhazgpu run -g2 -- pytest tests/kernels/moe/test_deepep_deepgemm_moe.py VLLM_ALL2ALL_BACKEND="deepep_high_throughput" VLLM_USE_DEEP_GEMM=1 canhazgpu run -g 2 -- vllm serve Qwen/Qwen3-30B-A3B-FP8 --trust-remote-code --enable-expert-parallel --data-parallel-size 2 --port 9010 --no-enable-prefix-caching Test Result All tests pass for canhazgpu run -g2 -- pytest -s tests/kernels/moe/test_modular_kernel_combinations.py All tests pass for canhazgpu run -g2 -- pytest tests/kernels/moe/test_deepep_deepgemm_moe.py |Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ | 0.86|± |0.0349|
| | |strict-match | 5|exact_match|↑ | 0.94|± |0.0239| Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link github-actions bot commented Jul 29, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the deepseek Related to DeepSeek models label Jul 29, 2025 gemini-code-assist bot reviewed Jul 29, 2025 View reviewed changes Copy link Contributor gemini-code-assist bot left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Code Review This pull request introduces a performance optimization for MoE layers using DeepEPHighThroughput with block quantization (e.g., for DeepSeek models). The change correctly modifies the logic to quantize the activations before dispatching them, which reduces communication overhead and is more efficient. The implementation is clean and effective. The condition for pre-quantization is correctly expanded to include block-quantized cases, and the call to the quantization kernel is updated to pass the correct parameters, which also fixes a potential bug that the logical change would have otherwise introduced. Overall, the changes look solid and align well with the stated purpose. I couldn't find any issues of high or critical severity. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Contributor Author varun-sundar-rabindranath commented Jul 29, 2025 @tlrmchlsmth @bnellnm PTAL ! Thanks 🙌 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor bnellnm commented Jul 29, 2025 So we still go down the "quantize after" codepath if the quantization is per-tensor? Is there some reason that quantization can't happen beforehand in that case also? Or does DeepEP not support that? All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor Author varun-sundar-rabindranath commented Jul 29, 2025 So we still go down the "quantize after" codepath if the quantization is per-tensor? Is there some reason that quantization can't happen beforehand in that case also? Or does DeepEP not support that? It is a DeepEP limitation. DeepEP doesn't support that. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor bnellnm commented Jul 29, 2025 So we still go down the "quantize after" codepath if the quantization is per-tensor? Is there some reason that quantization can't happen beforehand in that case also? Or does DeepEP not support that? It is a DeepEP limitation. DeepEP doesn't support that. Would it make sense to fake it out by replicating the scale and then resizing/truncating them after the dispatch? All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor Author varun-sundar-rabindranath commented Jul 30, 2025 • edited Loading Uh oh! There was an error while loading. Please reload this page . So we still go down the "quantize after" codepath if the quantization is per-tensor? Is there some reason that quantization can't happen beforehand in that case also? Or does DeepEP not support that? It is a DeepEP limitation. DeepEP doesn't support that. Would it make sense to fake it out by replicating the scale and then resizing/truncating them after the dispatch? I went back and looked at the DeepEP documentation here The documentation suggests that only block-quantization is supported. 
But the function seemingly also supports per-token quantization (we have unit tests that have been passing - look here ). However, it looks like we are one assert away in the DeepEP repo from crashing. To be safe, I have updated the code to support only block quantization for the "Quant-then-Dispatch" path. For any other quantization we will "Dispatch-then-Quant". cc @tlrmchlsmth 👍 1 bnellnm reacted with thumbs up emoji tlrmchlsmth approved these changes Jul 31, 2025 View reviewed changes Copy link Collaborator tlrmchlsmth left a comment Thanks! tlrmchlsmth added
the ready ONLY add when PR is ready to merge/full CI is needed label Jul 31, 2025 tlrmchlsmth enabled auto-merge (squash) July 31, 2025 14:33 Varun Sundar Rabindranath added 2 commits August 1, 2025 06:32 quant then dispatch … ed5a03f Signed-off-by: Varun Sundar Rabindranath <[email protected]> Remove per-act-token-quant … fcf2fe9 Signed-off-by: Varun Sundar Rabindranath <[email protected]> auto-merge was automatically disabled August 1, 2025 06:33 Head branch was pushed to by a user without write access varun-sundar-rabindranath force-pushed the varun/ht-quant-dispatch-ordering branch
from 80cb125 to fcf2fe9 Compare August 1, 2025 06:33 varun-sundar-rabindranath changed the title [Bugfix] [Performance] DeepEPHighThroughput + DeepSeek : Quant and then Dispatch [Bugfix] [Performance] DeepEPHighThroughput + DeepSeek : Quant before Dispatch Aug 1, 2025 Hide details View details vllm-bot merged commit ac45c44 into vllm-project : main Aug 1, 2025 41 of 44 checks passed Uh oh! There was an error while loading. Please reload this page . wenscarl pushed a commit
to wenscarl/vllm
that referenced
this pull request Aug 4, 2025 [Bugfix] [Performance] DeepEPHighThroughput + DeepSeek : Quant before… … b787b9a … Dispatch ( vllm-project#21837 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: shuw <[email protected]> juuice-lee pushed a commit
to juuice-lee/vllm-moe.code
that referenced
this pull request Aug 5, 2025 [Bugfix] [Performance] DeepEPHighThroughput + DeepSeek : Quant before… … a171dbf … Dispatch ( vllm-project#21837 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]> x22x22 pushed a commit
to x22x22/vllm
that referenced
this pull request Aug 5, 2025 [Bugfix] [Performance] DeepEPHighThroughput + DeepSeek : Quant before… … e53887f … Dispatch ( vllm-project#21837 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]> x22x22 pushed a commit
to x22x22/vllm
that referenced
this pull request Aug 5, 2025 [Bugfix] [Performance] DeepEPHighThroughput + DeepSeek : Quant before… … fc8f4fa … Dispatch ( vllm-project#21837 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: x22x22 <[email protected]> x22x22 pushed a commit
to x22x22/vllm
that referenced
this pull request Aug 5, 2025 [Bugfix] [Performance] DeepEPHighThroughput + DeepSeek : Quant before… … 6058cc5 … Dispatch ( vllm-project#21837 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: x22x22 <[email protected]> npanpaliya pushed a commit
to odh-on-pz/vllm-upstream
that referenced
this pull request Aug 6, 2025 [Bugfix] [Performance] DeepEPHighThroughput + DeepSeek : Quant before… … 7f0c9e2 … Dispatch ( vllm-project#21837 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]> jingyu-ml pushed a commit
to jingyu-ml/vllm
that referenced
this pull request Aug 8, 2025 [Bugfix] [Performance] DeepEPHighThroughput + DeepSeek : Quant before… … 506a08a … Dispatch ( vllm-project#21837 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: jingyu <[email protected]> jinzhen-lin pushed a commit
to jinzhen-lin/vllm
that referenced
this pull request Aug 9, 2025 [Bugfix] [Performance] DeepEPHighThroughput + DeepSeek : Quant before… … 024bae4 … Dispatch ( vllm-project#21837 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]> noamgat pushed a commit
to noamgat/vllm
that referenced
this pull request Aug 9, 2025 [Bugfix] [Performance] DeepEPHighThroughput + DeepSeek : Quant before… … e62f88f … Dispatch ( vllm-project#21837 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Noam Gat <[email protected]> paulpak58 pushed a commit
to paulpak58/vllm
that referenced
this pull request Aug 13, 2025 [Bugfix] [Performance] DeepEPHighThroughput + DeepSeek : Quant before… … 02137be … Dispatch ( vllm-project#21837 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Paul Pak <[email protected]> taneem-ibrahim pushed a commit
to taneem-ibrahim/vllm
that referenced
this pull request Aug 14, 2025 [Bugfix] [Performance] DeepEPHighThroughput + DeepSeek : Quant before… … d35b39e … Dispatch ( vllm-project#21837 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]> BoyuanFeng pushed a commit
to BoyuanFeng/vllm
that referenced
this pull request Aug 14, 2025 [Bugfix] [Performance] DeepEPHighThroughput + DeepSeek : Quant before… … 0c4f6b9 … Dispatch ( vllm-project#21837 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Boyuan Feng <[email protected]> diegocastanibm pushed a commit
to diegocastanibm/vllm
that referenced
this pull request Aug 15, 2025 [Bugfix] [Performance] DeepEPHighThroughput + DeepSeek : Quant before… … 998c08f … Dispatch ( vllm-project#21837 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Diego-Castan <[email protected]> epwalsh pushed a commit
to epwalsh/vllm
that referenced
this pull request Aug 28, 2025 [Bugfix] [Performance] DeepEPHighThroughput + DeepSeek : Quant before… … 4a6adca … Dispatch ( vllm-project#21837 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]> zhewenl pushed a commit
to zhewenl/vllm
that referenced
this pull request Aug 28, 2025 [Bugfix] [Performance] DeepEPHighThroughput + DeepSeek : Quant before… … 4c75149 … Dispatch ( vllm-project#21837 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]> googlercolin pushed a commit
to googlercolin/vllm
that referenced
this pull request Aug 29, 2025 [Bugfix] [Performance] DeepEPHighThroughput + DeepSeek : Quant before… … 445bac5 … Dispatch ( vllm-project#21837 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
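A back-of-the-envelope view of why quantizing before the high-throughput dispatch helps, following the two reasons in the PR description (less data on the wire, and quantization done on local tokens instead of post-dispatch tokens). The numbers below are illustrative assumptions (fp32 block scales, 128-element blocks, a DeepSeek-style hidden size of 7168), not measurements from this PR.

```python
# Rough per-token communication cost: bf16 dispatch vs fp8 + block scales.
hidden_size = 7168
block = 128

bf16_bytes = hidden_size * 2              # 2 bytes per element
fp8_bytes = hidden_size * 1               # 1 byte per element
scale_bytes = (hidden_size // block) * 4  # one fp32 scale per 128-element block

print(f"bf16 dispatch per token: {bf16_bytes} bytes")
print(f"fp8 dispatch per token:  {fp8_bytes + scale_bytes} bytes "
      f"({(fp8_bytes + scale_bytes) / bf16_bytes:.2%} of bf16)")
```

Roughly half the bytes cross the network per dispatched token, and each rank quantizes only the tokens it owns before sending them.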
|
2025-09-07 17:49:52
|
8aa1485fcff7be3e42300c0615ee0f3f3cbce9a8
|
https://github.com/vllm-project/vllm/pull/21761
| false | true | true | true |
PERF: TTFT, TTFT, TTFT | SERVING: vllm serve, vllm serve, Serving | TEST: test, test, test
|
Copy link Collaborator LucasWilkinson commented Jul 28, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . Essential Elements of an Effective PR Description Checklist The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)". The test plan, such as providing test command. The test results, such as pasting the results comparison before and after, or e2e results (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model. Purpose Currently using the hybrid kv-cache with llama4s chunked local attention causes a latency ~2ms since when the hybrid kv-cache manager is used we end up with 3 ChunkedLocalAttention kv-cache spec groups. We end up with the following groups: (FullAttention x 12) (ChunkedLocalAttention x 12) (ChunkedLocalAttention x 12) (ChunkedLocalAttention x 12) This results in attn metadata and local virtual batches for the local layers being constructed 3 times adding latency: Enabled:
vllm serve meta-llama/Llama-4-Scout-17B-16E-Instruct -tp 4 --trust-remote-code --max-model-len 16384 --port 8081 --disable-log-requests
============ Serving Benchmark Result ============
Successful requests: 100
Benchmark duration (s): 9.11
Total input tokens: 6299
Total generated tokens: 12509
Request throughput (req/s): 10.97
Output token throughput (tok/s): 1372.85
Total Token throughput (tok/s): 2064.16
---------------Time to First Token----------------
Mean TTFT (ms): 61.84
Median TTFT (ms): 61.53
P99 TTFT (ms): 106.66
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 28.46
Median TPOT (ms): 29.17
P99 TPOT (ms): 30.99
---------------Inter-token Latency----------------
Mean ITL (ms): 28.44
Median ITL (ms): 28.65
P99 ITL (ms): 38.05
==================================================
Disabled:
vllm serve meta-llama/Llama-4-Scout-17B-16E-Instruct -tp 4 --trust-remote-code --max-model-len 16384 --port 8081 --disable-log-requests --disable-hybrid-kv-cache-manager
============ Serving Benchmark Result ============
Successful requests: 100
Benchmark duration (s): 8.84
Total input tokens: 6299
Total generated tokens: 12297
Request throughput (req/s): 11.32
Output token throughput (tok/s): 1391.49
Total Token throughput (tok/s): 2104.26
---------------Time to First Token----------------
Mean TTFT (ms): 58.69
Median TTFT (ms): 59.23
P99 TTFT (ms): 90.65
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 26.48
Median TPOT (ms): 27.32
P99 TPOT (ms): 28.90
---------------Inter-token Latency----------------
Mean ITL (ms): 26.55
Median ITL (ms): 26.54
P99 ITL (ms): 39.40
================================================== Test Plan see: #21707 Test Result see: #21707 (Optional) Documentation Update Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions disable chunked local attention by default … dd3ccf5 Signed-off-by: Lucas Wilkinson <[email protected]> LucasWilkinson requested review from simon-mo , WoosukKwon , youkaichao , robertgshaw2-redhat , mgoin , tlrmchlsmth , houseroad and hmellor as code owners July 28, 2025 14:27 Copy link github-actions bot commented Jul 28, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the llama Related to Llama models label Jul 28, 2025 gemini-code-assist bot reviewed Jul 28, 2025 View reviewed changes Copy link Contributor gemini-code-assist bot left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Code Review This pull request correctly addresses a performance regression by disabling chunked local attention with the hybrid KV cache manager by default, while providing an environment variable to re-enable it. The implementation is sound. My only suggestion is to update a comment to more accurately reflect that the change is a performance optimization, which will improve code clarity and maintainability. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/config.py Comment on lines +4780 to +4781 # Hybrid KV cache manager is not yet supported with chunked # local attention. Copy link Contributor gemini-code-assist bot Jul 28, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment This comment is slightly misleading as it suggests the feature is unsupported, whereas the PR description and warning log indicate it's a performance regression. To improve clarity for future maintenance, it would be better to state the performance-related reason for disabling it. Suggested change # Hybrid KV cache manager is not yet supported with chunked # local attention . # Disable hybrid KV cache manager with chunked local attention # due to a performance regression . Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Member mgoin Jul 28, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I kind of agree with Gemini here, although you say this in your log Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions mgoin approved these changes Jul 28, 2025 View reviewed changes Copy link Member mgoin left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment LGTM for the moment Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/config.py Comment on lines +4780 to +4781 # Hybrid KV cache manager is not yet supported with chunked # local attention. Copy link Member mgoin Jul 28, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I kind of agree with Gemini here, although you say this in your log Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 
All reactions vllm/config.py self.scheduler_config.disable_hybrid_kv_cache_manager = True elif \ not envs.VLLM_ALLOW_CHUNKED_LOCAL_ATTN_WITH_HYBRID_KV_CACHE: logger.warning( Copy link Member mgoin Jul 28, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment nit: warning_once Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions mgoin added performance Performance-related issues ready ONLY add when PR is ready to merge/full CI is needed labels Jul 28, 2025 Copy link Member mgoin commented Jul 28, 2025 • edited Loading Uh oh! There was an error while loading. Please reload this page . Merging to solve the regression since we have better solutions on the way All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Hide details View details mgoin merged commit 8aa1485 into vllm-project : main Jul 28, 2025 78 checks passed Uh oh! There was an error while loading. Please reload this page . liuyumoye pushed a commit
to liuyumoye/vllm
that referenced
this pull request Jul 31, 2025 [Perf] Disable chunked local attention by default with llama4 ( vllm-p… … 47a6c89 …roject#21761 )
Signed-off-by: Lucas Wilkinson <[email protected]> sarckk mentioned this pull request Aug 2, 2025 [Bug]: [v1/core/block_pool.py] Assertion Failure: prev_block.block_hash is not None #21992 Open Copy link Collaborator luccafong commented Aug 2, 2025 @LucasWilkinson will we reduce metadata creation with refactoring? All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator Author LucasWilkinson commented Aug 2, 2025 thats the plan; we are working towards: https://vllm-dev.slack.com/archives/C07R5Q1Q2BB/p1753727605258469?thread_ts=1753202489.248869&cid=C07R5Q1Q2BB but that will be a followup PR All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . wenscarl pushed a commit
to wenscarl/vllm
that referenced
this pull request Aug 4, 2025 [Perf] Disable chunked local attention by default with llama4 ( vllm-p… … e636a83 …roject#21761 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: shuw <[email protected]> x22x22 pushed a commit
to x22x22/vllm
that referenced
this pull request Aug 5, 2025 [Perf] Disable chunked local attention by default with llama4 ( vllm-p… … b15f7a3 …roject#21761 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: x22x22 <[email protected]> x22x22 pushed a commit
to x22x22/vllm
that referenced
this pull request Aug 5, 2025 [Perf] Disable chunked local attention by default with llama4 ( vllm-p… … 024f5de …roject#21761 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: x22x22 <[email protected]> Pradyun92 pushed a commit
to Pradyun92/vllm
that referenced
this pull request Aug 6, 2025 [Perf] Disable chunked local attention by default with llama4 ( vllm-p… … be60f7a …roject#21761 )
Signed-off-by: Lucas Wilkinson <[email protected]> npanpaliya pushed a commit
to odh-on-pz/vllm-upstream
that referenced
this pull request Aug 6, 2025 [Perf] Disable chunked local attention by default with llama4 ( vllm-p… … 94a185c …roject#21761 )
Signed-off-by: Lucas Wilkinson <[email protected]> jinzhen-lin pushed a commit
to jinzhen-lin/vllm
that referenced
this pull request Aug 9, 2025 [Perf] Disable chunked local attention by default with llama4 ( vllm-p… … bda1d57 …roject#21761 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]> noamgat pushed a commit
to noamgat/vllm
that referenced
this pull request Aug 9, 2025 [Perf] Disable chunked local attention by default with llama4 ( vllm-p… … b0119fd …roject#21761 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Noam Gat <[email protected]> paulpak58 pushed a commit
to paulpak58/vllm
that referenced
this pull request Aug 13, 2025 [Perf] Disable chunked local attention by default with llama4 ( vllm-p… … 3712e58 …roject#21761 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Paul Pak <[email protected]> taneem-ibrahim pushed a commit
to taneem-ibrahim/vllm
that referenced
this pull request Aug 14, 2025 [Perf] Disable chunked local attention by default with llama4 ( vllm-p… … 63e3c03 …roject#21761 )
Signed-off-by: Lucas Wilkinson <[email protected]> BoyuanFeng pushed a commit
to BoyuanFeng/vllm
that referenced
this pull request Aug 14, 2025 [Perf] Disable chunked local attention by default with llama4 ( vllm-p… … 46cb6ce …roject#21761 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Boyuan Feng <[email protected]> diegocastanibm pushed a commit
to diegocastanibm/vllm
that referenced
this pull request Aug 15, 2025 [Perf] Disable chunked local attention by default with llama4 ( vllm-p… … afd3f01 …roject#21761 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Diego-Castan <[email protected]> epwalsh pushed a commit
to epwalsh/vllm
that referenced
this pull request Aug 28, 2025 [Perf] Disable chunked local attention by default with llama4 ( vllm-p… … 1e8cef7 …roject#21761 )
Signed-off-by: Lucas Wilkinson <[email protected]> zhewenl pushed a commit
to zhewenl/vllm
that referenced
this pull request Aug 28, 2025 [Perf] Disable chunked local attention by default with llama4 ( vllm-p… … 47bfbc4 …roject#21761 )
Signed-off-by: Lucas Wilkinson <[email protected]> googlercolin pushed a commit
to googlercolin/vllm
that referenced
this pull request Aug 29, 2025 [Perf] Disable chunked local attention by default with llama4 ( vllm-p… … c1a10df …roject#21761 )
Signed-off-by: Lucas Wilkinson <[email protected]>
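For reference, the config gating discussed in this timeline (the vllm/config.py hunk that checks envs.VLLM_ALLOW_CHUNKED_LOCAL_ATTN_WITH_HYBRID_KV_CACHE) follows the pattern sketched below. This is a minimal Python illustration, not the exact vLLM code: the scheduler_config object and the has_chunked_local_attention flag are stand-ins for whatever the real config exposes.

import logging
import os

logger = logging.getLogger(__name__)

# Opt-in escape hatch mirroring envs.VLLM_ALLOW_CHUNKED_LOCAL_ATTN_WITH_HYBRID_KV_CACHE.
ALLOW_CHUNKED_LOCAL_WITH_HYBRID = os.environ.get(
    "VLLM_ALLOW_CHUNKED_LOCAL_ATTN_WITH_HYBRID_KV_CACHE", "0") == "1"


def maybe_disable_hybrid_kv_cache_manager(scheduler_config,
                                          has_chunked_local_attention: bool) -> None:
    """Fall back to the non-hybrid KV cache manager for chunked local attention.

    With the hybrid manager, the chunked local layers land in several
    KV-cache spec groups, so their attention metadata is rebuilt once per
    group each step; disabling the hybrid manager avoids that extra latency.
    """
    if has_chunked_local_attention and not ALLOW_CHUNKED_LOCAL_WITH_HYBRID:
        logger.warning(
            "Disabling the hybrid KV cache manager because the model uses "
            "chunked local attention; set "
            "VLLM_ALLOW_CHUNKED_LOCAL_ATTN_WITH_HYBRID_KV_CACHE=1 to re-enable it.")
        scheduler_config.disable_hybrid_kv_cache_manager = True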
|
2025-09-07 17:49:55
|
61b8cea3b42feab021d506e9143551de18f9165c
|
https://github.com/vllm-project/vllm/pull/21137
| true | true | false | true |
LM_EVAL: lm_eval, lm_eval, gsm8k | PERF: req/s, req/s, optimization | TEST: test, test, test
|
Copy link Collaborator LucasWilkinson commented Jul 17, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . Essential Elements of an Effective PR Description Checklist The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)". The test plan, such as providing test command. The test results, such as pasting the results comparison before and after, or e2e results (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model. Purpose Flash infer prefers host side CPU buffers in many cases, example: https://github.com/flashinfer-ai/flashinfer/blob/3c40456effae8b9c5b1a11c0d1e0594295b1a312/flashinfer/prefill.py#L1430-L1436 So we pass host side buffers (since #20466 we now have access to these) to reduce D2H transfers. Trace from main showing D2H transfers in plan Test Plan Test Result Accuracy Results VLLM_ATTENTION_BACKEND=FLASHINFER lm_eval --model vllm --model_args pretrained=met
a-llama/Meta-Llama-3-8B-Instruct --tasks gsm8k --batch_size auto
...
INFO 07-17 20:33:43 [cuda.py:253] Using FlashInfer backend on V1 engine.
...
|Tasks|Version| Filter |n-shot| Metric | |Value | |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.7536|± |0.0119|
| | |strict-match | 5|exact_match|↑ |0.7551|± |0.0118| Benchmark Results Benchmark Command: python benchmarks/benchmark_throughput.py --model meta-llama/Llama-3.2-3B-Instruct --dataset-name random --input-len 256 --output-len 128 --num-prompts < N > --seed 42 Results (3 runs per condition, mean ± standard error): num-prompts Main Branch (req/s) This PR (req/s) 1 1.58 ± 0.06 1.90 ± 0.03 8 13.06 ± 0.11 14.32 ± 0.21 16 26.00 ± 0.07 28.74 ± 0.13 32 47.84 ± 0.57 46.53 ± 1.57 64 76.14 ± 0.45 81.43 ± 3.43 128 116.99 ± 6.10 127.78 ± 7.50 256 164.45 ± 6.12 177.70 ± 3.88 Tested on NVIDIA B200 GPU with meta-llama/Llama-3.2-3B-Instruct (256→128 tokens) (Optional) Documentation Update Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link github-actions bot commented Jul 17, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added rocm Related to AMD ROCm speculative-decoding labels Jul 17, 2025 Copy link mergify bot commented Jul 17, 2025 This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @LucasWilkinson . https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added v1 needs-rebase labels Jul 17, 2025 gemini-code-assist bot reviewed Jul 17, 2025 View reviewed changes Copy link Contributor gemini-code-assist bot left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Code Review This pull request is a significant and well-executed refactoring of the attention backend infrastructure. The primary goal of decoupling the metadata builders from the model runner has been achieved, which improves modularity and maintainability. The optimization for FlashInfer by preparing metadata on the CPU is a key improvement and has been implemented correctly. The introduction of CommonAttentionMetadata as a unified data structure is a solid design choice that simplifies the data flow to the attention backends. The refactoring of the speculative decoding logic, particularly in vllm/v1/spec_decode/eagle.py , to remove the Triton kernel in favor of a more readable PyTorch/NumPy implementation is a notable improvement. 
The addition of a comprehensive test suite in tests/v1/attention/test_attention_backends.py is excellent. It provides strong validation for the correctness of this large-scale refactoring by comparing various backends against a reference implementation under realistic conditions. Overall, the changes are of high quality and represent a positive step forward for the codebase. I have not identified any issues of high or critical severity. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions LucasWilkinson force-pushed the lwilkinson/flash-infer-host-buffers branch
from 87ccacf to 8af5f3b Compare July 18, 2025 00:36 mergify bot removed
the needs-rebase label Jul 18, 2025 LucasWilkinson marked this pull request as ready for review July 18, 2025 03:54 LucasWilkinson requested review from WoosukKwon , robertgshaw2-redhat , njhill , ywang96 , comaniac and alexm-redhat as code owners July 18, 2025 03:54 WoosukKwon reviewed Jul 18, 2025 View reviewed changes Copy link Collaborator WoosukKwon left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment BTW why don't we use Numpy instead of PyTorch CPU tensors? Except for some edge cases, Numpy is usually faster in my experience. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Contributor fhl2000 commented Jul 18, 2025 Could we still pass the device tensors to Flashinfer's plan() rather than host tensors? Because we might want to support full cudagraph of Flashinfer in the future (currently implemented in #20059 in rough), which requires managing device-side persistent buffers that can be reused across different decode wrappers. Here, one decode wrapper corresponds to a runtime shape that needs to be captured. Also, if we pass the host tensors to the wrapper, it seems that H2D transfers still exist. If I remember correctly, Sglang's implementation overrides the plan functions that still pass host-side persistent buffers, and also explicitly avoids certain D2H transfers. Hope it's helpful! @LucasWilkinson All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator Author LucasWilkinson commented Jul 18, 2025 • edited Loading Uh oh! There was an error while loading. Please reload this page . BTW why don't we use Numpy instead of PyTorch CPU tensors? Except for some edge cases, Numpy is usually faster in my experience. Ive found going to and from numpy (i.e. .numpy() , torch::from_numpy can be a bit slow and only worth it if you are gonna do alot of ops; since FlashInfer ultimately wants torch tensors and for most of these theres only one or two ops per tensor im not sure its worth going to numpy; but I can scrub for tensors that are manipulated alot 👍 1 mgoin reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator Author LucasWilkinson commented Jul 18, 2025 • edited Loading Uh oh! There was an error while loading. Please reload this page . Could we still pass the device tensors to Flashinfer's plan() rather than host tensors? Because we might want to support full cudagraph of Flashinfer in the future (currently implemented in #20059 in rough), which requires managing device-side persistent buffers that can be reused across different decode wrappers. Here, one decode wrapper corresponds to a runtime shape that needs to be captured. If you look in FlashInfer's BatchDecodeWithPagedKVCacheWrapper you'll see the buffers get copied in the cudagraph path regardless: https://github.com/flashinfer-ai/flashinfer/blob/1e9a41ad7f0efc5989bb0a2bf7e954902c8c73af/flashinfer/decode.py#L892-L910 and will get copied to the host: https://github.com/flashinfer-ai/flashinfer/blob/1e9a41ad7f0efc5989bb0a2bf7e954902c8c73af/flashinfer/decode.py#L925-L926 Also, if we pass the host tensors to the wrapper, it seems that H2D transfers still exist. 
Yes; however H2D transfers are preferred over D2H as they can be done in a non-blocking fashion and do force synchronization with GPU. For the build call we are trying to optimize the CPU overhead so the fire-and-forget nature of the H2D transfers is better then depending on D2H transfer. If I remember correctly, Sglang's implementation overrides the plan functions that still pass host-side persistent buffers, and also explicitly avoids certain D2H transfers. Thats effectively what this PR does; the CPU buffers in CommonAttentionMetadata are views into the gpu_model_runner s persistent input_batch host side tensors. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor fhl2000 commented Jul 18, 2025 If I remember correctly, Sglang's implementation overrides the plan functions that still pass host-side persistent buffers, Oh my bad! Sorry, I was saying they are passing the device-side buffers. If you look in FlashInfer's BatchDecodeWithPagedKVCacheWrapper you'll see the buffers get copied in the cudagraph path regardless: https://github.com/flashinfer-ai/flashinfer/blob/1e9a41ad7f0efc5989bb0a2bf7e954902c8c73af/flashinfer/decode.py#L892-L910 and will get copied to the host: https://github.com/flashinfer-ai/flashinfer/blob/1e9a41ad7f0efc5989bb0a2bf7e954902c8c73af/flashinfer/decode.py#L925-L926 I am wondering if we can override this plan function that lets the wrapper directly own the device-side persistent buffer from VLLM, and avoid any unnecessary copy (device-to-device or host-to-device)? At least for qo_indptr, which is equivalent to query_start_loc, we already have both cpu and gpu versions of it from common_attn_metadata, so we can just reuse them without any further copy. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator Author LucasWilkinson commented Jul 18, 2025 • edited Loading Uh oh! There was an error while loading. Please reload this page . If I remember correctly, Sglang's implementation overrides the plan functions that still pass host-side persistent buffers, Oh my bad! Sorry, I was saying they are passing the device-side buffers. If you look in FlashInfer's BatchDecodeWithPagedKVCacheWrapper you'll see the buffers get copied in the cudagraph path regardless: https://github.com/flashinfer-ai/flashinfer/blob/1e9a41ad7f0efc5989bb0a2bf7e954902c8c73af/flashinfer/decode.py#L892-L910 and will get copied to the host: https://github.com/flashinfer-ai/flashinfer/blob/1e9a41ad7f0efc5989bb0a2bf7e954902c8c73af/flashinfer/decode.py#L925-L926 I am wondering if we can override this plan function that lets the wrapper directly own the device-side persistent buffer from VLLM, and avoid any unnecessary copy (device-to-device or host-to-device)? At least for qo_indptr, which is equivalent to query_start_loc, we already have both cpu and gpu versions of it from common_attn_metadata, so we can just reuse them without any further copy. Is this what you are referring to? https://github.com/sgl-project/sglang/blob/719b29f218a09642193c4bda2a7ffa32829d5604/python/sglang/srt/layers/attention/flashinfer_backend.py#L1229 ?; not that familiar with sglang. This is an interesting idea; thanks for sharing! 
Regardless, even in this overridden version they pass host side buffers ( https://github.com/sgl-project/sglang/blob/719b29f218a09642193c4bda2a7ffa32829d5604/python/sglang/srt/layers/attention/flashinfer_backend.py#L1334-L1336 ); so if we want to override plan in the future I think we would still want this PR as a stepping stone (and override plan in follow up PR). 👍 1 fhl2000 reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Member mgoin commented Jul 18, 2025 Could you make sure to test the trtllm case in the flashinfer backend as well? Just want to make sure this choice is preferable for that backend as well if affected All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . fhl2000 mentioned this pull request Jul 22, 2025 [V1][CUDA] Full cudagraph support for FlashInfer #21367 Merged 4 tasks Copy link Collaborator Author LucasWilkinson commented Jul 23, 2025 @mgoin looks good 👍 I think we should land this since its a win and I can follow up if using numpy helps VLLM_LOGGING_LEVEL=INFO cVLLM_USE_TRTLLM_DECODE_ATTENTION=1 VLLM_ATTENTION_BACKEND=FLASHINFER_VLLM_V1 lm_eval --model vllm --model_args '{"pretrained": "meta-llama/Meta-Llama
-3-8B-Instruct"}' --tasks gsm8k --batch_size auto
...
WARNING 07-23 11:40:01 [flashinfer.py:140] Using TRTLLM decode attention (auto-detected).
...
vllm ({'pretrained': 'meta-llama/Meta-Llama-3-8B-Instruct'}), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value | |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.7559|± |0.0118|
| | |strict-match | 5|exact_match|↑ |0.7574|± |0.0118| 👍 1 mgoin reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . LucasWilkinson added 4 commits July 23, 2025 11:44 host buffers … 6b18ffb Signed-off-by: Lucas Wilkinson <[email protected]>
Optimize V1 FlashInfer backend to use CPU host buffers
- Replace GPU-to-CPU transfers with direct CPU tensor construction
- Build planning tensors from existing CommonAttentionMetadata CPU buffers
- Reduce from 6x to 1x .cpu() calls during FlashInfer planning
- Fix test mocks to handle correct argument count
- Maintain compatibility with GPUModelRunner and FlashInfer V1 backend
Signed-off-by: Lucas Wilkinson <[email protected]>
dont transfer block table
Signed-off-by: Lucas Wilkinson <[email protected]>
optimize
Signed-off-by: Lucas Wilkinson <[email protected]> reorder imports … 599ee48 Signed-off-by: Lucas Wilkinson <[email protected]> cleanup … 4e07e01 Signed-off-by: Lucas Wilkinson <[email protected]> cleanup … 585548e Signed-off-by: Lucas Wilkinson <[email protected]> mgoin added
the ready ONLY add when PR is ready to merge/full CI is needed label Jul 23, 2025 mgoin approved these changes Jul 23, 2025 View reviewed changes Copy link Member mgoin left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Looks good to me, thanks! After review the amount of work we have to do on the CPU is more than I expected, so looking forward to seeing full cg Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 fhl2000 reacted with thumbs up emoji All reactions 👍 1 reaction LucasWilkinson added 2 commits July 23, 2025 13:32 fix attention test … 701fdc0 Signed-off-by: Lucas Wilkinson <[email protected]> format … b087694 Signed-off-by: Lucas Wilkinson <[email protected]> LucasWilkinson force-pushed the lwilkinson/flash-infer-host-buffers branch
from 155e954 to b087694 Compare July 23, 2025 17:33 format … 9723f3d Signed-off-by: Lucas Wilkinson <[email protected]> mgoin enabled auto-merge (squash) July 24, 2025 00:54 Hide details View details vllm-bot merged commit 61b8cea into vllm-project : main Jul 24, 2025 67 of 69 checks passed Uh oh! There was an error while loading. Please reload this page . elvischenv mentioned this pull request Jul 24, 2025 [Bugfix] Fix workspace buffer None issue for Flashinfer TRTLLM Backend #21525 Merged 4 tasks avigny pushed a commit
to avigny/vllm
that referenced
this pull request Jul 31, 2025 [Attention] Optimize FlashInfer MetadataBuilder Build call ( vllm-proj… … 3e6afaf …ect#21137 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: avigny <[email protected]> wenscarl pushed a commit
to wenscarl/vllm
that referenced
this pull request Aug 4, 2025 [Attention] Optimize FlashInfer MetadataBuilder Build call ( vllm-proj… … 8b86ba2 …ect#21137 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: shuw <[email protected]> x22x22 pushed a commit
to x22x22/vllm
that referenced
this pull request Aug 5, 2025 [Attention] Optimize FlashInfer MetadataBuilder Build call ( vllm-proj… … 841628b …ect#21137 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: x22x22 <[email protected]> Pradyun92 pushed a commit
to Pradyun92/vllm
that referenced
this pull request Aug 6, 2025 [Attention] Optimize FlashInfer MetadataBuilder Build call ( vllm-proj… … d368f33 …ect#21137 )
Signed-off-by: Lucas Wilkinson <[email protected]> npanpaliya pushed a commit
to odh-on-pz/vllm-upstream
that referenced
this pull request Aug 6, 2025 [Attention] Optimize FlashInfer MetadataBuilder Build call ( vllm-proj… … 39d315c …ect#21137 )
Signed-off-by: Lucas Wilkinson <[email protected]> jinzhen-lin pushed a commit
to jinzhen-lin/vllm
that referenced
this pull request Aug 9, 2025 [Attention] Optimize FlashInfer MetadataBuilder Build call ( vllm-proj… … 9a7c08f …ect#21137 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]> paulpak58 pushed a commit
to paulpak58/vllm
that referenced
this pull request Aug 13, 2025 [Attention] Optimize FlashInfer MetadataBuilder Build call ( vllm-proj… … 965d4ef …ect#21137 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Paul Pak <[email protected]> taneem-ibrahim pushed a commit
to taneem-ibrahim/vllm
that referenced
this pull request Aug 14, 2025 [Attention] Optimize FlashInfer MetadataBuilder Build call ( vllm-proj… … 484d958 …ect#21137 )
Signed-off-by: Lucas Wilkinson <[email protected]> BoyuanFeng pushed a commit
to BoyuanFeng/vllm
that referenced
this pull request Aug 14, 2025 [Attention] Optimize FlashInfer MetadataBuilder Build call ( vllm-proj… … 6b0bc15 …ect#21137 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Boyuan Feng <[email protected]> diegocastanibm pushed a commit
to diegocastanibm/vllm
that referenced
this pull request Aug 15, 2025 [Attention] Optimize FlashInfer MetadataBuilder Build call ( vllm-proj… … c3786d8 …ect#21137 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Diego-Castan <[email protected]> epwalsh pushed a commit
to epwalsh/vllm
that referenced
this pull request Aug 28, 2025 [Attention] Optimize FlashInfer MetadataBuilder Build call ( vllm-proj… … f22e665 …ect#21137 )
Signed-off-by: Lucas Wilkinson <[email protected]> googlercolin pushed a commit
to googlercolin/vllm
that referenced
this pull request Aug 29, 2025 [Attention] Optimize FlashInfer MetadataBuilder Build call ( vllm-proj… … 593f1b1 …ect#21137 )
Signed-off-by: Lucas Wilkinson <[email protected]>
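As a reader's aid, the host-buffer argument above reduces to the contrast sketched below: build FlashInfer's planning tensors from the CPU-side views that CommonAttentionMetadata already carries, instead of computing them on the GPU and copying back, because a device-to-host copy blocks the CPU while the later host-to-device copy can be issued without stalling. This is an illustrative PyTorch sketch with assumed tensor names, not the vLLM FlashInfer builder itself.

import torch


def qo_indptr_from_host(query_lens_cpu: torch.Tensor) -> torch.Tensor:
    # Preferred path: stay on the CPU. FlashInfer's plan() prefers host-side
    # buffers here, so no GPU synchronization is needed to produce them.
    qo_indptr = torch.zeros(query_lens_cpu.numel() + 1, dtype=torch.int32)
    qo_indptr[1:] = torch.cumsum(query_lens_cpu, dim=0)
    return qo_indptr


def qo_indptr_from_device(query_lens_gpu: torch.Tensor) -> torch.Tensor:
    # Pattern the PR removes: computing on the GPU and calling .cpu() forces
    # a blocking device-to-host transfer on every metadata build.
    qo_indptr_gpu = torch.zeros(query_lens_gpu.numel() + 1,
                                dtype=torch.int32,
                                device=query_lens_gpu.device)
    qo_indptr_gpu[1:] = torch.cumsum(query_lens_gpu, dim=0)
    return qo_indptr_gpu.cpu()  # the D2H sync the PR avoids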
|
2025-09-07 17:50:01
|
4fb56914c5f27ef062e10d44a0f79c6ceab382f9
|
https://github.com/vllm-project/vllm/pull/21116
| true | true | true | true |
LM_EVAL: gsm8k, gsm8k | PERF: ttft, TTFT, TTFT | SERVING: Serving, Serving | TEST: test, test, test
|
Copy link Contributor mickaelseznec commented Jul 17, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . Essential Elements of an Effective PR Description Checklist The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)". The test plan, such as providing test command. The test results, such as pasting the results comparison before and after, or e2e results (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model. Purpose For MLA models that have a q_lora_rank: fuse q_lora and kv_lora into the same matrix (avoids some traffic + one less kernel call). Also adds a implementation for layernorm to operate on strided input, this avoids memory copy. Test Plan Units tests added for strided layernorm. E2E testing & benchamrks results in this PR Test Result Accuracy main ( 20149d8 ) vllm (pretrained=deepseek-ai/DeepSeek-V3-0324,tensor_parallel_size=8,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: auto |Tasks|Version| Filter |n-shot| Metric | |Value | |Stderr| |-----|------:|----------------|-----:|-----------|---|-----:|---|-----:| |gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.9469|± |0.0062| | | |strict-match | 5|exact_match|↑ |0.9454|± |0.0063| This PR: vllm (pretrained=deepseek-ai/DeepSeek-V3-0324,add_bos_token=true,tensor_parallel_size=8), gen_kwargs: (None), limit: 250.0, num_fewshot: None, batch_size: auto |Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr| |-----|------:|----------------|-----:|-----------|---|----:|---|-----:| |gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.952|± |0.0135| | | |strict-match | 5|exact_match|↑ |0.952|± |0.0135| Performance main ( 20149d8 ) venv ❯ python benchmarks/benchmark_serving.py --model deepseek-ai/DeepSeek-V3-0324 --dataset-name sharegpt --dataset-path ShareGPT_V3_unfiltered_cleaned_split.json INFO 07-15 17:16:08 [__init__.py:253] Automatically detected platform cuda. Namespace(backend='vllm', base_url=None, host='127.0.0.1', port=8000, endpoint='/v1/completions', dataset_name='sharegpt', dataset_path='ShareGPT_V3_unfiltered_cleaned_split.json', no_stream=False, max_concurrency=None, model='deepseek-ai/DeepSeek-V3-0324', tokenizer=None, use_beam_search=False, num_prompts=1000, logprobs=None, request_rate=inf, burstiness=1.0, seed=0, trust_remote_code=False, disable_tqdm=False, profile=False, save_result=False, save_detailed=False, append_result=False, metadata=None, result_dir=None, result_filename=None, ignore_eos=False, percentile_metrics='ttft,tpot,itl', metric_percentiles='99', goodput=None, custom_output_len=256, custom_skip_chat_template=False, sonnet_input_len=550, sonnet_output_len=150, sonnet_prefix_len=200, sharegpt_output_len=None, random_input_len=1024, random_output_len=128, random_range_ratio=0.0, random_prefix_len=0, hf_subset=None, hf_split=None, hf_output_len=None, top_p=None, top_k=None, min_p=None, temperature=None, tokenizer_mode='auto', served_model_name=None, lora_modules=None, ramp_up_strategy=None, ramp_up_start_rps=None, ramp_up_end_rps=None) Starting initial single prompt test run... Initial test run completed. Starting main benchmark run... Traffic request rate: inf RPS. 
Burstiness factor: 1.0 (Poisson process) Maximum request concurrency: None 100%|██████████████████████████████████████████████████████████████████████████████████████| 1000/1000 [00:58<00:00, 17.10it/s] ============ Serving Benchmark Result ============ Successful requests: 1000 Benchmark duration (s): 58.46 Total input tokens: 219171 Total generated tokens: 164272 Request throughput (req/s): 17.10 Output token throughput (tok/s): 2809.81 Total Token throughput (tok/s): 6558.65 ---------------Time to First Token---------------- Mean TTFT (ms): 8290.64 Median TTFT (ms): 7975.92 P99 TTFT (ms): 14349.76 -----Time per Output Token (excl. 1st token)------ Mean TPOT (ms): 177.57 Median TPOT (ms): 115.76 P99 TPOT (ms): 434.24 ---------------Inter-token Latency---------------- Mean ITL (ms): 98.84 Median ITL (ms): 66.80 P99 ITL (ms): 435.74 ================================================== This PR: venv ❯ python benchmarks/benchmark_serving.py --model deepseek-ai/DeepSeek-V3-0324 --dataset-name sharegpt --dataset-path ShareGPT_V3_unfiltered_cleaned_split.json INFO 07-17 10:27:38 [__init__.py:253] Automatically detected platform cuda. Namespace(backend='vllm', base_url=None, host='127.0.0.1', port=8000, endpoint='/v1/completions', dataset_name='sharegpt', dataset_path='ShareGPT_V3_unfiltered_cleaned_split.json', no_stream=False, max_concurrency=None, model='deepseek-ai/DeepSeek-V3-0324', tokenizer=None, use_beam_search=False, num_prompts=1000, logprobs=None, request_rate=inf, burstiness=1.0, seed=0, trust_remote_code=False, disable_tqdm=False, profile=False, save_result=False, save_detailed=False, append_result=False, metadata=None, result_dir=None, result_filename=None, ignore_eos=False, percentile_metrics='ttft,tpot,itl', metric_percentiles='99', goodput=None, custom_output_len=256, custom_skip_chat_template=False, sonnet_input_len=550, sonnet_output_len=150, sonnet_prefix_len=200, sharegpt_output_len=None, random_input_len=1024, random_output_len=128, random_range_ratio=0.0, random_prefix_len=0, hf_subset=None, hf_split=None, hf_output_len=None, top_p=None, top_k=None, min_p=None, temperature=None, tokenizer_mode='auto', served_model_name=None, lora_modules=None, ramp_up_strategy=None, ramp_up_start_rps=None, ramp_up_end_rps=None) Starting initial single prompt test run... Initial test run completed. Starting main benchmark run... Traffic request rate: inf RPS. Burstiness factor: 1.0 (Poisson process) Maximum request concurrency: None 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1000/1000 [00:56<00:00, 17.63it/s] ============ Serving Benchmark Result ============ Successful requests: 1000 Benchmark duration (s): 56.72 Total input tokens: 219171 Total generated tokens: 165898 Request throughput (req/s): 17.63 Output token throughput (tok/s): 2925.10 Total Token throughput (tok/s): 6789.51 ---------------Time to First Token---------------- Mean TTFT (ms): 6917.92 Median TTFT (ms): 6629.26 P99 TTFT (ms): 12941.51 -----Time per Output Token (excl. 1st token)------ Mean TPOT (ms): 171.18 Median TPOT (ms): 108.68 P99 TPOT (ms): 461.18 ---------------Inter-token Latency---------------- Mean ITL (ms): 95.07 Median ITL (ms): 67.52 P99 ITL (ms): 431.03 ================================================== (Optional) Documentation Update Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 
🚀 4 mgoin, hj-mistral, LucasWilkinson, and simon-mo reacted with rocket emoji All reactions 🚀 4 reactions mickaelseznec requested review from tlrmchlsmth , WoosukKwon , mgoin and robertgshaw2-redhat as code owners July 17, 2025 10:36 Copy link github-actions bot commented Jul 17, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the deepseek Related to DeepSeek models label Jul 17, 2025 gemini-code-assist bot reviewed Jul 17, 2025 View reviewed changes Copy link Contributor gemini-code-assist bot left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Code Review This pull request introduces two significant optimizations: fusing the QKV projection for MLA models and implementing a strided LayerNorm kernel. The changes are well-implemented and should provide the performance benefits described. The fusion of Q-LoRA and KV-LoRA projections into a single matrix operation for DeepSeek-V2 models is a smart optimization that reduces kernel launch overhead and memory traffic. The introduction of MergedReplicatedLinear to handle this fusion is a clean way to extend the existing linear layer infrastructure. The addition of a strided layernorm implementation is crucial for the fusion to be effective, as it avoids expensive .contiguous() calls on tensor slices. The CUDA kernels have been updated correctly to handle the input_stride , and the PyTorch bindings are adjusted accordingly. The test suite has been properly extended to cover the new strided input case for the layernorm kernels, ensuring the correctness of the new implementation. Overall, this is a high-quality contribution that improves performance while maintaining code clarity and correctness. I have no major concerns. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions feat: add fused MLA QKV + strided layernorm … e3962ab Signed-off-by: Mickael Seznec <[email protected]> mickaelseznec force-pushed the mseznec/merged-qkv-and-strided-layernorm branch
from 75b3d50 to e3962ab Compare July 17, 2025 10:38 mgoin requested a review
from LucasWilkinson July 17, 2025 12:06 tlrmchlsmth reviewed Jul 17, 2025 View reviewed changes csrc/layernorm_kernels.cu Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . robertgshaw2-redhat reviewed Jul 17, 2025 View reviewed changes vllm/model_executor/models/deepseek_v2.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . LucasWilkinson reviewed Jul 17, 2025 View reviewed changes vllm/model_executor/layers/linear.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . mickaelseznec added 2 commits July 17, 2025 14:06 review: stride->int64_t … 3f6b148 Signed-off-by: Mickael Seznec <[email protected]> pre-commit … 4f77a0d Signed-off-by: Mickael Seznec <[email protected]> Copy link Collaborator LucasWilkinson commented Jul 17, 2025 • edited Loading Uh oh! There was an error while loading. Please reload this page . Nice thanks for the contribution! Clean, simple and gives perf; the trifecta haha. Overall looks pretty good to me but I think one of the weight loading experts, i.e. @dsikka or @mgoin should take a look to make sure we dont break 4bit quantized models ❤️ 1 mickaelseznec reacted with heart emoji All reactions ❤️ 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . fix: better fallback in weight loader … 49a9b00 Signed-off-by: Mickael Seznec <[email protected]> yewentao256 reviewed Jul 17, 2025 View reviewed changes Copy link Collaborator yewentao256 left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Thanks for the work! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions csrc/layernorm_kernels.cu Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/model_executor/layers/linear.py Comment on lines +423 to +424 from vllm.model_executor.layers.quantization.fp8 import ( Fp8LinearMethod, Fp8MoEMethod) Copy link Collaborator yewentao256 Jul 17, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Could we refactor the code, so that we can put import on top of the file without worrying about the circular import instead here? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Contributor Author mickaelseznec Jul 18, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Well it's tricky, because FP8Linear already depends on Linear (which makes sense). I don't know how you'd like to proceed. I lazily copy/pasted from https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/layers/linear.py#L787-L791 Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator yewentao256 Jul 18, 2025 There was a problem hiding this comment. 
Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Yeah I am thinking, if A imports B, B imports A. We can have a base file C, move base things into C, so A imports C, B imports C as well. We don't need to do it right now in this pr if you don't wish, could be done by refactor in the future. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Contributor Author mickaelseznec Jul 21, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Sure! Here, the best way would probably be to rely on inheritance by defining (and overriding) methods like: QuantizeMethodBase.supports_block_quantization() However, I don't have a complete overview on all the supported cases and potential edge-cases and it might make this PR heavier than needed now. Happy to help with a following PR though :) Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator yewentao256 Jul 21, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Sounds great, certainly you can do that in another pr Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions mickaelseznec changed the title feat: add fused MLA QKV + strided layernorm [perf] Add fused MLA QKV + strided layernorm Jul 18, 2025 mickaelseznec added 2 commits July 18, 2025 13:12 review: fewer magic numbers … b6f3455 Signed-off-by: Mickael Seznec <[email protected]> fix: pre-commit … d1be02d Signed-off-by: Mickael Seznec <[email protected]> mgoin approved these changes Jul 21, 2025 View reviewed changes Copy link Member mgoin left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Nice work! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Merge branch 'main' into mseznec/merged-qkv-and-strided-layernorm 070dfa4 mgoin enabled auto-merge (squash) July 21, 2025 18:38 github-actions bot added
the ready ONLY add when PR is ready to merge/full CI is needed label Jul 21, 2025 Hide details View details vllm-bot merged commit 4fb5691 into vllm-project : main Jul 22, 2025 106 of 108 checks passed Uh oh! There was an error while loading. Please reload this page . xuechendi mentioned this pull request Jul 22, 2025 [BUGFIX] deepseek-v2-lite failed due to fused_qkv_a_proj name update #21414 Merged 4 tasks yeqcharlotte pushed a commit
to yeqcharlotte/vllm
that referenced
this pull request Jul 23, 2025 [perf] Add fused MLA QKV + strided layernorm ( vllm-project#21116 ) … 37ec8cb Signed-off-by: Mickael Seznec <[email protected]>
Co-authored-by: mgoin <[email protected]> zixi-qi pushed a commit
to zixi-qi/vllm
that referenced
this pull request Jul 23, 2025 [perf] Add fused MLA QKV + strided layernorm ( vllm-project#21116 ) … 46b75f4 Signed-off-by: Mickael Seznec <[email protected]>
Co-authored-by: mgoin <[email protected]>
Signed-off-by: qizixi <[email protected]> LyrisZhong pushed a commit
to LyrisZhong/vllm
that referenced
this pull request Jul 23, 2025 [perf] Add fused MLA QKV + strided layernorm ( vllm-project#21116 ) … 7c6c84c Signed-off-by: Mickael Seznec <[email protected]>
Co-authored-by: mgoin <[email protected]> benchislett mentioned this pull request Jul 30, 2025 [Bugfix] Fix MTP weight loading #21941 Merged avigny pushed a commit
to avigny/vllm
that referenced
this pull request Jul 31, 2025 [perf] Add fused MLA QKV + strided layernorm ( vllm-project#21116 ) … da8f8fe Signed-off-by: Mickael Seznec <[email protected]>
Co-authored-by: mgoin <[email protected]>
Signed-off-by: avigny <[email protected]> wenscarl pushed a commit
to wenscarl/vllm
that referenced
this pull request Aug 4, 2025 [perf] Add fused MLA QKV + strided layernorm ( vllm-project#21116 ) … 994dd51 Signed-off-by: Mickael Seznec <[email protected]>
Co-authored-by: mgoin <[email protected]>
Signed-off-by: shuw <[email protected]> x22x22 pushed a commit
to x22x22/vllm
that referenced
this pull request Aug 5, 2025 [perf] Add fused MLA QKV + strided layernorm ( vllm-project#21116 ) … 95d77b5 Signed-off-by: Mickael Seznec <[email protected]>
Co-authored-by: mgoin <[email protected]>
Signed-off-by: x22x22 <[email protected]> Pradyun92 pushed a commit
to Pradyun92/vllm
that referenced
this pull request Aug 6, 2025 [perf] Add fused MLA QKV + strided layernorm ( vllm-project#21116 ) … 4402c98 Signed-off-by: Mickael Seznec <[email protected]>
Co-authored-by: mgoin <[email protected]> fxmarty-amd mentioned this pull request Aug 6, 2025 [Bugfix] Add missing packed_modules_mapping to DeepseekV2ForCausalLM #22352 Merged npanpaliya pushed a commit
to odh-on-pz/vllm-upstream
that referenced
this pull request Aug 6, 2025 [perf] Add fused MLA QKV + strided layernorm ( vllm-project#21116 ) … 2e941f0 Signed-off-by: Mickael Seznec <[email protected]>
Co-authored-by: mgoin <[email protected]> jinzhen-lin pushed a commit
to jinzhen-lin/vllm
that referenced
this pull request Aug 9, 2025 [perf] Add fused MLA QKV + strided layernorm ( vllm-project#21116 ) … bb2b8ee Signed-off-by: Mickael Seznec <[email protected]>
Co-authored-by: mgoin <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]> paulpak58 pushed a commit
to paulpak58/vllm
that referenced
this pull request Aug 13, 2025 [perf] Add fused MLA QKV + strided layernorm ( vllm-project#21116 ) … 3c47ab0 Signed-off-by: Mickael Seznec <[email protected]>
Co-authored-by: mgoin <[email protected]>
Signed-off-by: Paul Pak <[email protected]> taneem-ibrahim pushed a commit
to taneem-ibrahim/vllm
that referenced
this pull request Aug 14, 2025 [perf] Add fused MLA QKV + strided layernorm ( vllm-project#21116 ) … b771731 Signed-off-by: Mickael Seznec <[email protected]>
Co-authored-by: mgoin <[email protected]> benchislett mentioned this pull request Aug 14, 2025 [Model] Support deepseek with eagle #21086 Merged 4 tasks diegocastanibm pushed a commit
to diegocastanibm/vllm
that referenced
this pull request Aug 15, 2025 [perf] Add fused MLA QKV + strided layernorm ( vllm-project#21116 ) … 52f0b84 Signed-off-by: Mickael Seznec <[email protected]>
Co-authored-by: mgoin <[email protected]>
Signed-off-by: Diego-Castan <[email protected]> cjackal mentioned this pull request Aug 25, 2025 [Bug]: DeepSeek-R1 AWQ model loading is not possible in v0.10.0 or later. #23530 Open 1 task epwalsh pushed a commit
to epwalsh/vllm
that referenced
this pull request Aug 28, 2025 [perf] Add fused MLA QKV + strided layernorm ( vllm-project#21116 ) … 7b35796 Signed-off-by: Mickael Seznec <[email protected]>
Co-authored-by: mgoin <[email protected]> googlercolin pushed a commit
to googlercolin/vllm
that referenced
this pull request Aug 29, 2025 [perf] Add fused MLA QKV + strided layernorm ( vllm-project#21116 ) … c7e4502 Signed-off-by: Mickael Seznec <[email protected]>
Co-authored-by: mgoin <[email protected]> cjackal mentioned this pull request Aug 30, 2025 DeepSeek fix: awq x mergedreplicatedlinear #23764 Open 5 tasks
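To make the fusion concrete: the PR stacks the q_lora and kv_lora down-projections into a single weight (the fused_qkv_a_proj name that later bugfix PRs refer to), so one GEMM produces both low-rank activations, and the strided layernorm change lets the q slice be normalized without a .contiguous() copy. The snippet below is a plain-PyTorch illustration with made-up dimensions; it is not the vLLM MergedReplicatedLinear or the custom RMSNorm CUDA kernel.

import torch
import torch.nn as nn

hidden_size = 1024          # illustrative sizes, not the real DeepSeek dims
q_lora_rank = 512
kv_lora_rank_plus_rope = 576

# One GEMM instead of two: the q_a_proj and kv_a_proj weights are stacked
# along the output dimension.
fused_qkv_a_proj = nn.Linear(hidden_size,
                             q_lora_rank + kv_lora_rank_plus_rope,
                             bias=False)
q_a_layernorm = nn.RMSNorm(q_lora_rank)

x = torch.randn(8, hidden_size)
qkv_lowrank = fused_qkv_a_proj(x)

# split() returns strided (non-contiguous) views of the fused output, so the
# q slice can be normalized directly instead of being copied first.
q_c, kv_c = qkv_lowrank.split([q_lora_rank, kv_lora_rank_plus_rope], dim=-1)
q_c = q_a_layernorm(q_c)

Stock PyTorch already accepts the non-contiguous view here; the point of the CUDA change in the PR is that vLLM's own RMSNorm kernel now does too, so the fused layout costs no extra memory copy.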
|
2025-09-07 17:50:06
|
ed25054577f7abca2aee32a5290200c4a1aed561
|
https://github.com/vllm-project/vllm/pull/21222
| false | true | false | true |
PERF: optimization | TEST: test, test, test
|
Copy link Contributor Jialin commented Jul 19, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . Essential Elements of an Effective PR Description Checklist The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)". The test plan, such as providing test command. The test results, such as pasting the results comparison before and after, or e2e results (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model. Purpose Most of the block_pool operators are on critical path forward path is hard blocked by kv block allocation decode cycle end is hard blocked by kv block free In this PR, we're focusing on further optimization these 2 operators. Bulk popleft instead of popleft n times Originally, in block_pool.get_new_blocks, we popped blocks one at a time, which would triggered the second block to fake head connections (which are unnecessary operations as the second block might be popped right after this). As we knew total number of blocks to pop ahead, we could simply introduce popleft_n for buck popleft. Overall, the number link list operations to linked list of popleft_n would only be half of n popleft. Bulk append instead of append n times Similar, in block_pool.free_blocks, we invoke append one at a time. Introducing bulk append would also cut link list operations by half. Test Plan Evaluate with benchmark scripts Evaluate with benchmark_blockpoll New Unit Test for append_n and popleft_n are added Test Result benchmark scripts Get new blocks improved from 0.15ms to 0.008ms Free new blocks improved from 33us to 9us After Before benchmark_blockpool As expected, get_blocks and free_blocks times are cut in half. After Before (Optional) Documentation Update Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Jialin requested review from WoosukKwon , robertgshaw2-redhat , njhill , ywang96 , comaniac and alexm-redhat as code owners July 19, 2025 09:39 Copy link github-actions bot commented Jul 19, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor Author Jialin commented Jul 19, 2025 resolve #21141 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the v1 label Jul 19, 2025 gemini-code-assist bot reviewed Jul 19, 2025 View reviewed changes Copy link Contributor gemini-code-assist bot left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Code Review This pull request introduces popleft_n and append_n methods to FreeKVCacheBlockQueue for bulk operations, optimizing get_new_blocks and free_blocks in BlockPool . Benchmark results show significant improvements. To enhance robustness, I've suggested materializing the ordered_blocks iterable to a list in free_blocks to prevent potential OOM errors. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/v1/core/block_pool.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . Jialin mentioned this pull request Jul 18, 2025 [Performance]: Opportunities to speed up BlockPool processing #21141 Open 5 tasks DarkLight1337 requested a review
from heheda12345 July 19, 2025 12:19 Jialin force-pushed the blockpool branch
from a3253a5 to a3042bd Compare July 20, 2025 10:12 njhill reviewed Jul 20, 2025 View reviewed changes vllm/v1/core/kv_cache_utils.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/v1/core/kv_cache_utils.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/v1/core/block_pool.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/v1/core/block_pool.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . Jialin mentioned this pull request Jul 21, 2025 [Core] Minimize number of dict lookup in _maybe_evict_cached_block #21281 Merged 4 tasks njhill reviewed Jul 21, 2025 View reviewed changes vllm/v1/core/kv_cache_utils.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/v1/core/block_pool.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/v1/core/block_pool.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . Jialin force-pushed the blockpool branch
from 073075f to ca9fca3 Compare July 21, 2025 22:14 houseroad added performance Performance-related issues ready ONLY add when PR is ready to merge/full CI is needed labels Jul 21, 2025 houseroad reviewed Jul 22, 2025 View reviewed changes vllm/v1/core/kv_cache_utils.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . houseroad reviewed Jul 22, 2025 View reviewed changes vllm/v1/core/kv_cache_utils.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . houseroad approved these changes Jul 22, 2025 View reviewed changes Copy link Collaborator houseroad left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Looks good to me. Impressive results, and two nits to consider to address. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . ❤️ 1 Jialin reacted with heart emoji All reactions ❤️ 1 reaction Jialin added 7 commits July 21, 2025 22:20 Introduce popleft_n and append_n in FreeKVCacheBlockQueue … 9353288 Signed-off-by: Jialin Ouyang <[email protected]> Fix free_blocks to correctly iterate ordered_blocks twice … 7dd32ff Signed-off-by: Jialin Ouyang <[email protected]> Materialize iterable instead of using itertools.tee … d62f3e8 Signed-off-by: Jialin Ouyang <[email protected]> Address comments … a7b16ba Signed-off-by: Jialin Ouyang <[email protected]> Address comments (further simplify implementation and avoid list iter… … 429e723 …ations)
Signed-off-by: Jialin Ouyang <[email protected]> Added a TODO to clean up incr_ref and decr_ref … 3655119 Signed-off-by: Jialin Ouyang <[email protected]> Address comments … ad59a94 Signed-off-by: Jialin Ouyang <[email protected]> Jialin force-pushed the blockpool branch
from ca9fca3 to ad59a94 Compare July 22, 2025 05:23 houseroad enabled auto-merge (squash) July 22, 2025 05:23 njhill approved these changes Jul 22, 2025 View reviewed changes Copy link Member njhill left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Thanks @Jialin ! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Hide details View details vllm-bot merged commit ed25054 into vllm-project : main Jul 22, 2025 64 of 66 checks passed Uh oh! There was an error while loading. Please reload this page . yeqcharlotte pushed a commit
to yeqcharlotte/vllm
that referenced
this pull request Jul 23, 2025 [Core] Introduce popleft_n and append_n in FreeKVCacheBlockQueue to f… … 4420ad5 …urther optimize block_pool ( vllm-project#21222 )
Signed-off-by: Jialin Ouyang <[email protected]> zixi-qi pushed a commit
to zixi-qi/vllm
that referenced
this pull request Jul 23, 2025 [Core] Introduce popleft_n and append_n in FreeKVCacheBlockQueue to f… … 40ab4c4 …urther optimize block_pool ( vllm-project#21222 )
Signed-off-by: Jialin Ouyang <[email protected]>
Signed-off-by: qizixi <[email protected]> LyrisZhong pushed a commit
to LyrisZhong/vllm
that referenced
this pull request Jul 23, 2025 [Core] Introduce popleft_n and append_n in FreeKVCacheBlockQueue to f… … cf5038f …urther optimize block_pool ( vllm-project#21222 )
Signed-off-by: Jialin Ouyang <[email protected]> avigny pushed a commit
to avigny/vllm
that referenced
this pull request Jul 31, 2025 [Core] Introduce popleft_n and append_n in FreeKVCacheBlockQueue to f… … 40dcc2e …urther optimize block_pool ( vllm-project#21222 )
Signed-off-by: Jialin Ouyang <[email protected]>
Signed-off-by: avigny <[email protected]> wenscarl pushed a commit
to wenscarl/vllm
that referenced
this pull request Aug 4, 2025 [Core] Introduce popleft_n and append_n in FreeKVCacheBlockQueue to f… … e28b77c …urther optimize block_pool ( vllm-project#21222 )
Signed-off-by: Jialin Ouyang <[email protected]>
Signed-off-by: shuw <[email protected]> x22x22 pushed a commit
to x22x22/vllm
that referenced
this pull request Aug 5, 2025 [Core] Introduce popleft_n and append_n in FreeKVCacheBlockQueue to f… … a1cdc67 …urther optimize block_pool ( vllm-project#21222 )
Signed-off-by: Jialin Ouyang <[email protected]>
Signed-off-by: x22x22 <[email protected]> Pradyun92 pushed a commit
to Pradyun92/vllm
that referenced
this pull request Aug 6, 2025 [Core] Introduce popleft_n and append_n in FreeKVCacheBlockQueue to f… … a7521ad …urther optimize block_pool ( vllm-project#21222 )
Signed-off-by: Jialin Ouyang <[email protected]> npanpaliya pushed a commit
to odh-on-pz/vllm-upstream
that referenced
this pull request Aug 6, 2025 [Core] Introduce popleft_n and append_n in FreeKVCacheBlockQueue to f… … 22a3904 …urther optimize block_pool ( vllm-project#21222 )
Signed-off-by: Jialin Ouyang <[email protected]> jinzhen-lin pushed a commit
to jinzhen-lin/vllm
that referenced
this pull request Aug 9, 2025 [Core] Introduce popleft_n and append_n in FreeKVCacheBlockQueue to f… … aedd951 …urther optimize block_pool ( vllm-project#21222 )
Signed-off-by: Jialin Ouyang <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]> paulpak58 pushed a commit
to paulpak58/vllm
that referenced
this pull request Aug 13, 2025 [Core] Introduce popleft_n and append_n in FreeKVCacheBlockQueue to f… … 231c183 …urther optimize block_pool ( vllm-project#21222 )
Signed-off-by: Jialin Ouyang <[email protected]>
Signed-off-by: Paul Pak <[email protected]> taneem-ibrahim pushed a commit
to taneem-ibrahim/vllm
that referenced
this pull request Aug 14, 2025 [Core] Introduce popleft_n and append_n in FreeKVCacheBlockQueue to f… … 5081f27 …urther optimize block_pool ( vllm-project#21222 )
Signed-off-by: Jialin Ouyang <[email protected]> diegocastanibm pushed a commit
to diegocastanibm/vllm
that referenced
this pull request Aug 15, 2025 [Core] Introduce popleft_n and append_n in FreeKVCacheBlockQueue to f… … 7ad8303 …urther optimize block_pool ( vllm-project#21222 )
Signed-off-by: Jialin Ouyang <[email protected]>
Signed-off-by: Diego-Castan <[email protected]> epwalsh pushed a commit
to epwalsh/vllm
that referenced
this pull request Aug 28, 2025 [Core] Introduce popleft_n and append_n in FreeKVCacheBlockQueue to f… … 01377bf …urther optimize block_pool ( vllm-project#21222 )
Signed-off-by: Jialin Ouyang <[email protected]> googlercolin pushed a commit
to googlercolin/vllm
that referenced
this pull request Aug 29, 2025 [Core] Introduce popleft_n and append_n in FreeKVCacheBlockQueue to f… … b8e251c …urther optimize block_pool ( vllm-project#21222 )
Signed-off-by: Jialin Ouyang <[email protected]>
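To make the bulk operations described in this PR concrete, here is a minimal Python sketch of popleft_n and append_n on a doubly linked free-block queue with fake head/tail sentinels. It is an illustration of the idea only, with assumed names (Block, FreeBlockQueue); it is not the actual FreeKVCacheBlockQueue implementation.

# Minimal sketch of bulk popleft/append on a doubly linked free-block queue.
# Illustrative only; names and fields are assumptions, not vLLM's real code.
class Block:
    def __init__(self, block_id: int) -> None:
        self.block_id = block_id
        self.prev = None
        self.next = None


class FreeBlockQueue:
    def __init__(self, blocks: list) -> None:
        self.fake_head = Block(-1)
        self.fake_tail = Block(-1)
        prev = self.fake_head
        for block in blocks:              # chain all free blocks between the sentinels
            prev.next, block.prev = block, prev
            prev = block
        prev.next, self.fake_tail.prev = self.fake_tail, prev
        self.num_free = len(blocks)

    def popleft_n(self, n: int):
        """Detach the first n blocks with a single splice instead of n head rewires."""
        assert n <= self.num_free
        if n == 0:
            return []
        popped = [self.fake_head.next]
        last = popped[0]
        for _ in range(n - 1):            # walk forward; no per-node head rewiring
            last = last.next
            popped.append(last)
        nxt = last.next                   # first block that stays in the queue
        self.fake_head.next = nxt         # one splice for the whole batch
        nxt.prev = self.fake_head
        self.num_free -= n
        return popped

    def append_n(self, blocks: list) -> None:
        """Append n blocks with a single splice at the tail."""
        if not blocks:
            return
        prev = self.fake_tail.prev
        for block in blocks:              # link the new blocks among themselves
            prev.next, block.prev = block, prev
            prev = block
        prev.next, self.fake_tail.prev = self.fake_tail, prev
        self.num_free += len(blocks)

Compared with calling popleft n times, which rewires the fake head before every single pop, the bulk version walks the n nodes once and splices the boundary in one step, which is where the roughly halved number of linked-list operations reported above comes from.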
|
2025-09-07 17:50:11
|
a32237665df876fcb51196dc209e8aff9fd89d29
|
https://github.com/vllm-project/vllm/pull/21245
| false | true | true | true |
PERF: benchmark run without override min length or logit bias, we still see noticeable cost coming from MinTokensLogitsProcessor and LogitBiasLogitsProcessor. We found that it's due to inefficient needs_update tagging which would be tagged to True whenever there're new requests added to the batch. In this diff, we would tag needs_update to True, if new added request had customized min_token config a request with min_token config got popped Test Plan Rerun the benchmark. # vLLM Serving, profiling | SERVING: vllm serve, Serving, serve | TEST: test, test, test
|
Copy link Contributor Jialin commented Jul 20, 2025 • edited by github-actions bot Essential Elements of an Effective PR Description Checklist The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)". The test plan, such as providing test command. The test results, such as pasting the results comparison before and after, or e2e results (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model. Purpose Fix the update checks in MinTokensLogitsProcessor and LogitBiasLogitsProcessor. For a benchmark run that does not override min length or logit bias, we still see noticeable cost coming from MinTokensLogitsProcessor and LogitBiasLogitsProcessor. We found that it is due to inefficient needs_update tagging: the flag was set to True whenever any new request was added to the batch. In this diff, we only tag needs_update as True if a newly added request has a customized min_token config, or a request with a min_token config got popped (see the sketch at the end of this entry). Test Plan Rerun the benchmark. # vLLM Serving
export VLLM_USE_MODELSCOPE=False;
export VLLM_TORCH_PROFILER_DIR=~/vllm_profile; # for profiling
vllm serve facebook/opt-125m \
--swap-space 16 \
--disable-log-requests \
--host :: \
--dtype float16
# Capture traces
vllm bench serve \
--dataset-name random \
--model facebook/opt-125m \
--served-model-name facebook/opt-125m \
--random-input-len 700 \
--random-output-len 1 \
--endpoint /v1/completions \
--ignore-eos \
--host localhost \
--port 8000 \
--request-rate 200 \
--num-prompts 100 Test Result Confirmed the cost from MinTokensLogitsProcessor and LogitBiasLogitsProcessor is mostly gone. After Before (Optional) Documentation Update Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 yeqcharlotte reacted with thumbs up emoji All reactions 👍 1 reaction Jialin requested review from WoosukKwon , robertgshaw2-redhat , njhill , ywang96 , comaniac and alexm-redhat as code owners July 20, 2025 09:15 Copy link github-actions bot commented Jul 20, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the v1 label Jul 20, 2025 gemini-code-assist bot reviewed Jul 20, 2025 View reviewed changes Copy link Contributor gemini-code-assist bot left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Code Review This pull request optimizes update checks in MinTokensLogitsProcessor . I've added a suggestion to improve the maintainability of the new logic by making it more explicit and avoiding a side effect in a conditional statement. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/v1/sample/logits_processor.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . Jialin changed the title [Core] Optimize update checks in MinTokensLogitsProcessor [Core] Optimize update checks in LogitsProcessor Jul 20, 2025 Jialin force-pushed the min_token branch
from 9f1d4fd to b300005 Compare July 20, 2025 10:18 Copy link Member njhill commented Jul 20, 2025 Thanks @Jialin . I think I had similar logic in the my original impl of these LPs here https://github.com/vllm-project/vllm/pull/13360/files#diff-d01f143e1af472f24af24842cb879907ce624e6e5c977935e944545240723529R51 and hadn't realized that had been changed. cc @afeldman-nm ❤️ 1 Jialin reacted with heart emoji All reactions ❤️ 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . houseroad approved these changes Jul 21, 2025 View reviewed changes Copy link Collaborator houseroad left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Looks good to me. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/v1/sample/logits_processor.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . houseroad added
the ready ONLY add when PR is ready to merge/full CI is needed label Jul 21, 2025 Jialin force-pushed the min_token branch
from b300005 to 5142da8 Compare July 21, 2025 22:03 houseroad added
the performance Performance-related issues label Jul 21, 2025 houseroad enabled auto-merge (squash) July 21, 2025 22:08 Jialin added 2 commits July 21, 2025 22:24 Optimize update checks in MinTokensLogitsProcessor … d0baa38 Signed-off-by: Jialin Ouyang <[email protected]> Apply updates to LogitBiasLogitsProcessor as well … b3026ed Signed-off-by: Jialin Ouyang <[email protected]> auto-merge was automatically disabled July 22, 2025 05:25 Head branch was pushed to by a user without write access Jialin force-pushed the min_token branch
from 5142da8 to b3026ed Compare July 22, 2025 05:25 Hide details View details vllm-bot merged commit a322376 into vllm-project : main Jul 22, 2025 63 of 65 checks passed Uh oh! There was an error while loading. Please reload this page . Copy link Contributor afeldman-nm commented Jul 22, 2025 Thanks @Jialin ! I think this was probably my bad so thanks for the fix ❤️ 1 Jialin reacted with heart emoji All reactions ❤️ 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor Author Jialin commented Jul 22, 2025 Thanks @Jialin ! I think this was probably my bad so thanks for the fix No worry :) All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . yeqcharlotte pushed a commit
to yeqcharlotte/vllm
that referenced
this pull request Jul 23, 2025 [Core] Optimize update checks in LogitsProcessor ( vllm-project#21245 ) … f96ca50 Signed-off-by: Jialin Ouyang <[email protected]> zixi-qi pushed a commit
to zixi-qi/vllm
that referenced
this pull request Jul 23, 2025 [Core] Optimize update checks in LogitsProcessor ( vllm-project#21245 ) … 25d0c72 Signed-off-by: Jialin Ouyang <[email protected]>
Signed-off-by: qizixi <[email protected]> LyrisZhong pushed a commit
to LyrisZhong/vllm
that referenced
this pull request Jul 23, 2025 [Core] Optimize update checks in LogitsProcessor ( vllm-project#21245 ) … f9839e4 Signed-off-by: Jialin Ouyang <[email protected]> avigny pushed a commit
to avigny/vllm
that referenced
this pull request Jul 31, 2025 [Core] Optimize update checks in LogitsProcessor ( vllm-project#21245 ) … b5ee4f7 Signed-off-by: Jialin Ouyang <[email protected]>
Signed-off-by: avigny <[email protected]> wenscarl pushed a commit
to wenscarl/vllm
that referenced
this pull request Aug 4, 2025 [Core] Optimize update checks in LogitsProcessor ( vllm-project#21245 ) … 1e52328 Signed-off-by: Jialin Ouyang <[email protected]>
Signed-off-by: shuw <[email protected]> x22x22 pushed a commit
to x22x22/vllm
that referenced
this pull request Aug 5, 2025 [Core] Optimize update checks in LogitsProcessor ( vllm-project#21245 ) … daab1aa Signed-off-by: Jialin Ouyang <[email protected]>
Signed-off-by: x22x22 <[email protected]> Pradyun92 pushed a commit
to Pradyun92/vllm
that referenced
this pull request Aug 6, 2025 [Core] Optimize update checks in LogitsProcessor ( vllm-project#21245 ) … b6c32b5 Signed-off-by: Jialin Ouyang <[email protected]> npanpaliya pushed a commit
to odh-on-pz/vllm-upstream
that referenced
this pull request Aug 6, 2025 [Core] Optimize update checks in LogitsProcessor ( vllm-project#21245 ) … 87908a8 Signed-off-by: Jialin Ouyang <[email protected]> jinzhen-lin pushed a commit
to jinzhen-lin/vllm
that referenced
this pull request Aug 9, 2025 [Core] Optimize update checks in LogitsProcessor ( vllm-project#21245 ) … fad4dd9 Signed-off-by: Jialin Ouyang <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]> paulpak58 pushed a commit
to paulpak58/vllm
that referenced
this pull request Aug 13, 2025 [Core] Optimize update checks in LogitsProcessor ( vllm-project#21245 ) … 97ee62f Signed-off-by: Jialin Ouyang <[email protected]>
Signed-off-by: Paul Pak <[email protected]> taneem-ibrahim pushed a commit
to taneem-ibrahim/vllm
that referenced
this pull request Aug 14, 2025 [Core] Optimize update checks in LogitsProcessor ( vllm-project#21245 ) … 0ca234a Signed-off-by: Jialin Ouyang <[email protected]> diegocastanibm pushed a commit
to diegocastanibm/vllm
that referenced
this pull request Aug 15, 2025 [Core] Optimize update checks in LogitsProcessor ( vllm-project#21245 ) … 2ffbc24 Signed-off-by: Jialin Ouyang <[email protected]>
Signed-off-by: Diego-Castan <[email protected]> epwalsh pushed a commit
to epwalsh/vllm
that referenced
this pull request Aug 28, 2025 [Core] Optimize update checks in LogitsProcessor ( vllm-project#21245 ) … 34bfe4b Signed-off-by: Jialin Ouyang <[email protected]> googlercolin pushed a commit
to googlercolin/vllm
that referenced
this pull request Aug 29, 2025 [Core] Optimize update checks in LogitsProcessor ( vllm-project#21245 ) … 9405819 Signed-off-by: Jialin Ouyang <[email protected]>
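A minimal sketch of the gating this PR describes: only mark state as needing an update when an added request actually carries a min_tokens constraint, or when a tracked request is removed from the batch. The class and method names below are assumptions chosen for illustration, not vLLM's real LogitsProcessor interface.

# Illustrative sketch of "only flag needs_update when it matters".
# Names are assumptions, not the actual vLLM API.
class MinTokensTracker:
    def __init__(self) -> None:
        # request_id -> (min_tokens, stop_token_ids)
        self.min_toks = {}
        self.needs_update = False

    def add_request(self, request_id: str, min_tokens: int,
                    stop_token_ids: set) -> None:
        # Before: needs_update was set for every added request.
        # After: only requests that actually constrain min tokens matter.
        if min_tokens > 0:
            self.min_toks[request_id] = (min_tokens, stop_token_ids)
            self.needs_update = True

    def remove_request(self, request_id: str) -> None:
        # Only flag an update if the removed request was being tracked.
        if self.min_toks.pop(request_id, None) is not None:
            self.needs_update = True

    def maybe_rebuild(self) -> None:
        if not self.needs_update:
            return  # fast path: nothing changed for this processor
        # ... rebuild the per-batch mask of banned stop tokens here ...
        self.needs_update = False

With this gating, batches that never set min_tokens or logit_bias skip the rebuild entirely, which matches the before/after profiles described in the PR.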
|
2025-09-07 17:50:16
|
e7b204268132cb775c139574c1ff4ad7e15c8f66
|
https://github.com/vllm-project/vllm/pull/21334
| true | true | false | true |
LM_EVAL: lm_eval, lm_eval, gsm8k | PERF: optimization, improvement | TEST: Test, Test, test
|
Copy link Contributor minosfuture commented Jul 21, 2025 • edited by github-actions bot Purpose This reverts commit 9fb2d22 to fix #21322 . Test Plan pytest -v -s tests/models/multimodal/generation/test_maverick.py lm_eval maverick Test Result UT passed. lm_eval result: local-chat-completions (model=meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8, base_url= http://127.0.0.1:8000/v1/chat/completions, num_concurrent=32 ), gen_kwargs: (None), limit: 200.0, num_fewshot: 5, batch_size: 1 Tasks Version Filter n-shot Metric Value Stderr gsm8k 3 flexible-extract 5 exact_match ↑ 0.93 ± 0.0181 strict-match 5 exact_match ↑ 0.92 ± 0.0192 (Optional) Documentation Update Revert "[Performance] Performance improvements in non-blockwise fp8 C… … 51db38e …UTLASS MoE ( vllm-project#20762 )"
This reverts commit 9fb2d22 .
Signed-off-by: Ming Yang <[email protected]> minosfuture requested review from tlrmchlsmth , WoosukKwon , mgoin and robertgshaw2-redhat as code owners July 21, 2025 21:30 Copy link github-actions bot commented Jul 21, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the performance Performance-related issues label Jul 21, 2025 gemini-code-assist bot reviewed Jul 21, 2025 View reviewed changes Copy link Contributor gemini-code-assist bot left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Code Review This pull request reverts a previous performance improvement to fix a correctness issue. The changes are mostly removing code related to the optimization. However, I've identified a critical issue where the revert breaks CUDA graph compatibility by creating new tensors inside a function that can be captured by a CUDA graph. This will cause benchmarks and potentially other features relying on CUDA graphs to fail. I've provided detailed comments and code suggestions across multiple files to address this by re-introducing the practice of passing stride tensors as arguments, which was the behavior before the original performance-enhancing change. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions tests/kernels/moe/test_cutlass_moe.py @@ -207,10 +207,6 @@ def run_8_bit(moe_tensors: MOETensors8Bit, 'topk_ids': topk_ids, 'w1_scale': moe_tensors.w1_scale, 'w2_scale': moe_tensors.w2_scale, Copy link Contributor gemini-code-assist bot Jul 21, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment To align with the proposed fix for CUDA graph compatibility, the stride tensors need to be passed to cutlass_moe_fp8 for testing. 'w2_scale' : moe_tensors . w2_scale , 'ab_strides1' : moe_tensors . ab_strides1 , 'ab_strides2' : moe_tensors . ab_strides2 , 'c_strides1' : moe_tensors . c_strides1 , 'c_strides2' : moe_tensors . c_strides2 , Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions tests/kernels/moe/test_cutlass_moe.py @@ -444,11 +440,6 @@ def test_run_cutlass_moe_fp8( expert_map[start:end] = list(range(num_local_experts)) expert_map = torch.tensor(expert_map, dtype=torch.int32, device="cuda") Copy link Contributor gemini-code-assist bot Jul 21, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment The stride tensors need to be created for the test to be consistent with the proposed fix for CUDA graph compatibility. expert_map = torch . tensor ( expert_map , dtype = torch . int32 , device = "cuda" ) ab_strides1 = torch . full (( e , ), k , device = "cuda" , dtype = torch . int64 ) ab_strides2 = torch . full (( e , ), n , device = "cuda" , dtype = torch . int64 ) c_strides1 = torch . full (( e , ), 2 * n , device = "cuda" , dtype = torch . int64 ) c_strides2 = torch . full (( e , ), k , device = "cuda" , dtype = torch . int64 ) Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 
All reactions tests/kernels/moe/test_cutlass_moe.py Comment on lines +451 to +452 a1q_scale, None, workspace13, workspace2, None, mt.a.dtype, per_act_token, per_out_channel, False) Copy link Contributor gemini-code-assist bot Jul 21, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment The stride tensors should be passed to run_cutlass_moe_fp8 to align with the proposed fix for CUDA graph compatibility. a1q_scale , None , ab_strides1 , ab_strides2 , c_strides1 , c_strides2 , workspace13 , workspace2 , None , mt . a . dtype , per_act_token , per_out_channel , False ) Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions tests/kernels/moe/test_pplx_cutlass_moe.py Comment on lines 126 to 131 experts = CutlassExpertsFp8(num_local_experts, out_dtype, per_act_token, per_out_ch, ab_strides1, ab_strides2, c_strides1, c_strides2, num_dispatchers=num_dispatchers, use_batched_format=True) Copy link Contributor gemini-code-assist bot Jul 21, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment The stride tensors need to be created and passed to CutlassExpertsFp8 for the test to be consistent with the proposed fix for CUDA graph compatibility. You'll also need to re-introduce intermediate_dim which was removed in this PR. intermediate_dim = w2 . shape [ 2 ] ab_strides1 = torch . full (( num_local_experts , ), hidden_dim , device = "cuda" , dtype = torch . int64 ) ab_strides2 = torch . full (( num_local_experts , ), intermediate_dim , device = "cuda" , dtype = torch . int64 ) c_strides1 = torch . full (( num_local_experts , ), 2 * intermediate_dim , device = "cuda" , dtype = torch . int64 ) c_strides2 = torch . full (( num_local_experts , ), hidden_dim , device = "cuda" , dtype = torch . int64 ) experts = CutlassExpertsFp8 ( num_local_experts , out_dtype , per_act_token , per_out_ch , ab_strides1 , ab_strides2 , c_strides1 , c_strides2 , num_dispatchers = num_dispatchers , use_batched_format = True ) Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/model_executor/layers/fused_moe/cutlass_moe.py @@ -34,10 +35,6 @@ def run_cutlass_moe_fp8( w2_scale: Optional[torch.Tensor], a1q_scale: Optional[torch.Tensor], a2_scale: Optional[torch.Tensor], Copy link Contributor gemini-code-assist bot Jul 21, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment To maintain CUDA graph compatibility, the stride tensors should be passed as arguments instead of being created inside this function. Please add them back to the function signature. Suggested change a2_scale : Optional [ torch . Tensor ], a2_scale : Optional [ torch . Tensor ], ab_strides1 : torch . Tensor , ab_strides2 : torch . Tensor , c_strides1 : torch . Tensor , c_strides2 : torch . Tensor , Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 
All reactions 4 hidden conversations Load more… vllm/model_executor/layers/fused_moe/cutlass_moe.py @@ -329,10 +332,6 @@ def cutlass_moe_fp8( topk_ids: torch.Tensor, w1_scale: torch.Tensor, w2_scale: torch.Tensor, Copy link Contributor gemini-code-assist bot Jul 21, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment The stride tensors should be passed as arguments here as well to maintain CUDA graph compatibility. w2_scale : torch . Tensor , ab_strides1 : torch . Tensor , ab_strides2 : torch . Tensor , c_strides1 : torch . Tensor , c_strides2 : torch . Tensor , Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/model_executor/layers/fused_moe/cutlass_moe.py @@ -403,10 +391,6 @@ def cutlass_moe_fp8( out_dtype=a.dtype, per_act_token_quant=per_act_token, per_out_ch_quant=per_out_ch, Copy link Contributor gemini-code-assist bot Jul 21, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment The stride tensors should be passed to the CutlassExpertsFp8 constructor. per_out_ch_quant = per_out_ch , ab_strides1 = ab_strides1 , ab_strides2 = ab_strides2 , c_strides1 = c_strides1 , c_strides2 = c_strides2 , Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/model_executor/layers/quantization/compressed_tensors/compressed_tensors_moe.py @@ -859,21 +859,6 @@ def process_weights_after_loading(self, layer: torch.nn.Module) -> None: layer.w13_weight_scale = torch.nn.Parameter(max_w13_scales, requires_grad=False) Copy link Contributor gemini-code-assist bot Jul 21, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment The stride tensors should be pre-computed and stored here to be passed to the MoE kernel. This is necessary for CUDA graph compatibility. Suggested change device = layer . w13_weight . device # ab_strides1 and c_strides2 are the same self . ab_strides1_c_strides2 = torch . full (( layer . local_num_experts , ), layer . hidden_size , device = device , dtype = torch . int64 ) self . ab_strides2 = torch . full (( layer . local_num_experts , ), layer . intermediate_size_per_partition , device = device , dtype = torch . int64 ) self . c_strides1 = torch . full (( layer . local_num_experts , ), 2 * layer . intermediate_size_per_partition , device = device , dtype = torch . int64 ) Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/model_executor/layers/quantization/compressed_tensors/compressed_tensors_moe.py @@ -896,10 +881,6 @@ def select_gemm_impl( moe.in_dtype, self.input_quant.strategy == QuantizationStrategy.TOKEN, self.weight_quant.strategy == QuantizationStrategy.CHANNEL, Copy link Contributor gemini-code-assist bot Jul 21, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . 
Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment The stride tensors should be passed to the CutlassExpertsFp8 constructor. self . weight_quant . strategy == QuantizationStrategy . CHANNEL , ab_strides1 = self . ab_strides1_c_strides2 , ab_strides2 = self . ab_strides2 , c_strides1 = self . c_strides1 , c_strides2 = self . ab_strides1_c_strides2 , Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/model_executor/layers/quantization/compressed_tensors/compressed_tensors_moe.py @@ -968,10 +948,6 @@ def apply( expert_map=None if self.disable_expert_map else expert_map, w1_scale=layer.w13_weight_scale, w2_scale=layer.w2_weight_scale, Copy link Contributor gemini-code-assist bot Jul 21, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment The stride tensors should be passed to cutlass_moe_fp8 . w2_scale = layer . w2_weight_scale , ab_strides1 = self . ab_strides1_c_strides2 , ab_strides2 = self . ab_strides2 , c_strides1 = self . c_strides1 , c_strides2 = self . ab_strides1_c_strides2 , Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions houseroad approved these changes Jul 21, 2025 View reviewed changes Copy link Collaborator houseroad left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Thanks for reverting the original PR to help recover the trunk health. This will unblock our code sync as well. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator houseroad commented Jul 21, 2025 cc: @ElizaWszola , @tlrmchlsmth , @mgoin , @robertgshaw2-redhat this is blocking our internal work, so need to revert for now to unblock. Sorry about the inconvenience, and happy to help on landing the fixed version. Also if forward-fix is easy to land, we are happy to switch to that as well. :-) 👍 2 mgoin and minosfuture reacted with thumbs up emoji All reactions 👍 2 reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . houseroad enabled auto-merge (squash) July 21, 2025 22:04 github-actions bot added
the ready ONLY add when PR is ready to merge/full CI is needed label Jul 21, 2025 houseroad added
the llama Related to Llama models label Jul 21, 2025 mgoin added this to the v0.10.0 milestone Jul 22, 2025 mgoin approved these changes Jul 22, 2025 View reviewed changes Copy link Member mgoin left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Okay let's revert for now. Thanks for identifying this Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 2 houseroad and ElizaWszola reacted with thumbs up emoji ❤️ 1 minosfuture reacted with heart emoji All reactions 👍 2 reactions ❤️ 1 reaction simon-mo disabled auto-merge July 22, 2025 04:48 Hide details View details simon-mo merged commit e7b2042 into vllm-project : main Jul 22, 2025 109 of 111 checks passed Uh oh! There was an error while loading. Please reload this page . minosfuture added a commit
to minosfuture/vllm
that referenced
this pull request Jul 22, 2025 Reapply "[Performance] Performance improvements in non-blockwise fp8 … … 2f39358 …CUTLASS MoE ( vllm-project#20762 ) ( vllm-project#21334 )
This reverts commit e7b2042 . minosfuture added a commit
to minosfuture/vllm
that referenced
this pull request Jul 23, 2025 Reapply "[Performance] Performance improvements in non-blockwise fp8 … … 291c923 …CUTLASS MoE ( vllm-project#20762 ) ( vllm-project#21334 )
This reverts commit e7b2042 .
The original PR vllm-project#20762 is:
Authored-by: ElizaWszola <[email protected]>
Signed-off-by: Ming Yang <[email protected]> zixi-qi pushed a commit
to zixi-qi/vllm
that referenced
this pull request Jul 23, 2025 Revert "[Performance] Performance improvements in non-blockwise fp8 C… … e780c7d …UTLASS MoE ( vllm-project#20762 ) ( vllm-project#21334 )
Signed-off-by: Ming Yang <[email protected]>
Signed-off-by: qizixi <[email protected]> LyrisZhong pushed a commit
to LyrisZhong/vllm
that referenced
this pull request Jul 23, 2025 Revert "[Performance] Performance improvements in non-blockwise fp8 C… … 663b3f1 …UTLASS MoE ( vllm-project#20762 ) ( vllm-project#21334 )
Signed-off-by: Ming Yang <[email protected]> avigny pushed a commit
to avigny/vllm
that referenced
this pull request Jul 31, 2025 Revert "[Performance] Performance improvements in non-blockwise fp8 C… … c24051b …UTLASS MoE ( vllm-project#20762 ) ( vllm-project#21334 )
Signed-off-by: Ming Yang <[email protected]>
Signed-off-by: avigny <[email protected]> wenscarl pushed a commit
to wenscarl/vllm
that referenced
this pull request Aug 4, 2025 Revert "[Performance] Performance improvements in non-blockwise fp8 C… … 5cf3120 …UTLASS MoE ( vllm-project#20762 ) ( vllm-project#21334 )
Signed-off-by: Ming Yang <[email protected]>
Signed-off-by: shuw <[email protected]> x22x22 pushed a commit
to x22x22/vllm
that referenced
this pull request Aug 5, 2025 Revert "[Performance] Performance improvements in non-blockwise fp8 C… … 5418f5a …UTLASS MoE ( vllm-project#20762 ) ( vllm-project#21334 )
Signed-off-by: Ming Yang <[email protected]>
Signed-off-by: x22x22 <[email protected]> Pradyun92 pushed a commit
to Pradyun92/vllm
that referenced
this pull request Aug 6, 2025 Revert "[Performance] Performance improvements in non-blockwise fp8 C… … 4c1cd4d …UTLASS MoE ( vllm-project#20762 ) ( vllm-project#21334 )
Signed-off-by: Ming Yang <[email protected]> npanpaliya pushed a commit
to odh-on-pz/vllm-upstream
that referenced
this pull request Aug 6, 2025 Revert "[Performance] Performance improvements in non-blockwise fp8 C… … 26384dc …UTLASS MoE ( vllm-project#20762 ) ( vllm-project#21334 )
Signed-off-by: Ming Yang <[email protected]> jinzhen-lin pushed a commit
to jinzhen-lin/vllm
that referenced
this pull request Aug 9, 2025 Revert "[Performance] Performance improvements in non-blockwise fp8 C… … 45b2eb2 …UTLASS MoE ( vllm-project#20762 ) ( vllm-project#21334 )
Signed-off-by: Ming Yang <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]> paulpak58 pushed a commit
to paulpak58/vllm
that referenced
this pull request Aug 13, 2025 Revert "[Performance] Performance improvements in non-blockwise fp8 C… … 680fa6d …UTLASS MoE ( vllm-project#20762 ) ( vllm-project#21334 )
Signed-off-by: Ming Yang <[email protected]>
Signed-off-by: Paul Pak <[email protected]> taneem-ibrahim pushed a commit
to taneem-ibrahim/vllm
that referenced
this pull request Aug 14, 2025 Revert "[Performance] Performance improvements in non-blockwise fp8 C… … 19f1d60 …UTLASS MoE ( vllm-project#20762 ) ( vllm-project#21334 )
Signed-off-by: Ming Yang <[email protected]> diegocastanibm pushed a commit
to diegocastanibm/vllm
that referenced
this pull request Aug 15, 2025 Revert "[Performance] Performance improvements in non-blockwise fp8 C… … a397d4d …UTLASS MoE ( vllm-project#20762 ) ( vllm-project#21334 )
Signed-off-by: Ming Yang <[email protected]>
Signed-off-by: Diego-Castan <[email protected]> epwalsh pushed a commit
to epwalsh/vllm
that referenced
this pull request Aug 28, 2025 Revert "[Performance] Performance improvements in non-blockwise fp8 C… … c9e26e8 …UTLASS MoE ( vllm-project#20762 ) ( vllm-project#21334 )
Signed-off-by: Ming Yang <[email protected]> googlercolin pushed a commit
to googlercolin/vllm
that referenced
this pull request Aug 29, 2025 Revert "[Performance] Performance improvements in non-blockwise fp8 C… … 27299ac …UTLASS MoE ( vllm-project#20762 ) ( vllm-project#21334 )
Signed-off-by: Ming Yang <[email protected]>
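The review discussion above hinges on one constraint worth spelling out: code captured in a CUDA graph must not allocate new tensors on every call, so per-expert stride tensors should be created once (for example after weight loading) and passed into the captured MoE call. The snippet below is a generic, hedged illustration of that pattern; prepare_strides and fused_moe_fp8 are made-up names, not the reverted vLLM API.

# Hedged illustration of keeping tensor allocation out of CUDA-graph capture.
# All function and variable names here are invented for the example.
import torch


def prepare_strides(num_experts: int, hidden: int, inter: int,
                    device: torch.device) -> dict:
    # Allocate once, outside any graph-captured region.
    return {
        "ab_strides1": torch.full((num_experts,), hidden, dtype=torch.int64, device=device),
        "ab_strides2": torch.full((num_experts,), inter, dtype=torch.int64, device=device),
        "c_strides1": torch.full((num_experts,), 2 * inter, dtype=torch.int64, device=device),
        "c_strides2": torch.full((num_experts,), hidden, dtype=torch.int64, device=device),
    }


def fused_moe_fp8(x: torch.Tensor, strides: dict) -> torch.Tensor:
    # Stand-in for the real fused MoE call: it only reuses the pre-allocated
    # stride tensors and performs no allocation of its own.
    return x * strides["ab_strides1"].numel()


if torch.cuda.is_available():
    dev = torch.device("cuda")
    strides = prepare_strides(num_experts=8, hidden=1024, inter=2048, device=dev)
    static_x = torch.randn(16, 1024, device=dev)

    # Warm up on a side stream, then capture; replays reuse the same tensors.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        fused_moe_fp8(static_x, strides)
    torch.cuda.current_stream().wait_stream(s)

    graph = torch.cuda.CUDAGraph()
    with torch.cuda.graph(graph):
        static_out = fused_moe_fp8(static_x, strides)
    graph.replay()  # replays without re-running torch.full in the hot path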
|
2025-09-07 17:50:20
|
0ec82edda59aaf5cf3b07aadf4ecce1aa1131add
|
https://github.com/vllm-project/vllm/pull/21079
| false | true | true | true |
PERF: throughput, Throughput, throughput | SERVING: vllm serve, vllm serve, serve | TEST: Test, test, test
|
Copy link Contributor hj-mistral commented Jul 16, 2025 • edited by github-actions bot Purpose Move the fill ops inside the align-sum kernel to reduce bubbles: the cumsum buffer does not need to be filled with zero, and we can use BlockScan to do the prefix sum (see the Python sketch just after the main-branch throughput numbers below). This PR also moves the Triton inits into the kernel to make it a fair comparison and also to ensure the kernel is usable in the future as a fallback if required. Benchmarks Main branch FP16:
# vllm bench throughput --model Qwen/Qwen3-30B-A3B --load-format dummy --input-len 1000 --output-len 100
Throughput: 43.75 requests/s, 48024.34 total tokens/s, 4374.91 output tokens/s
Total num prompt tokens: 997723
Total num output tokens: 100000
FP8:
# vllm bench throughput --model Qwen/Qwen3-30B-A3B-FP8 --load-format dummy --input-len 1000 --output-len 100
Throughput: 41.04 requests/s, 45049.17 total tokens/s, 4103.87 output tokens/s
Total num prompt tokens: 997723
Total num output tokens: 100000
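For readers unfamiliar with the kernel being tuned, the sketch below shows in plain PyTorch roughly what the moe-align-block-size step computes and where the prefix sum over per-expert token counts enters; the PR computes this scan with cub::BlockScan inside the CUDA kernel instead of a serial loop and drops the separate zero-fill of the cumsum buffer. This is a reference-style illustration, not the vLLM kernel or its exact signature.

# Reference-style sketch of the moe-align-block-size computation in PyTorch.
# Illustrative only: the real work happens in a fused CUDA kernel.
import torch


def align_block_size_ref(topk_ids: torch.Tensor, num_experts: int,
                         block_size: int):
    """Return per-expert padded counts and their exclusive prefix sum (offsets)."""
    # 1) Histogram: how many (token, expert) pairs were routed to each expert.
    counts = torch.bincount(topk_ids.flatten(), minlength=num_experts)
    # 2) Pad each expert's count up to a multiple of block_size so the grouped
    #    GEMM can process whole blocks per expert.
    padded = ((counts + block_size - 1) // block_size) * block_size
    # 3) Prefix sum gives each expert's starting offset in the sorted id buffer.
    #    This is the scan the PR moves onto cub::BlockScan inside the kernel.
    offsets = torch.cumsum(padded, dim=0) - padded  # exclusive scan
    return padded, offsets


# Example: 4 tokens, top-2 routing over 4 experts, block size 16.
ids = torch.tensor([[0, 2], [1, 2], [0, 3], [2, 3]])
print(align_block_size_ref(ids, num_experts=4, block_size=16))

The real kernel also scatters the sorted token indices into this padded layout; the kernel benchmark below compares the tuned CUDA kernel (VLLM column) against the Triton fallback.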
Kernel benchmark:
# python3 benchmarks/kernels/benchmark_moe_align_block_size.py
Running correctness check...
✅ VLLM implementation works with 64 experts!
✅ Triton and VLLM implementations match.
moe-align-block-size-performance:
num_tokens num_experts topk VLLM Triton
0 1.0 16.0 1.0 16.448000 23.040000
1 1.0 16.0 2.0 16.432000 23.104001
2 1.0 16.0 8.0 16.448000 23.040000
3 1.0 64.0 1.0 21.600001 25.984000
4 1.0 64.0 2.0 21.792000 26.048001
5 1.0 64.0 8.0 21.824000 25.952000
6 1.0 224.0 1.0 23.680000 40.288001
7 1.0 224.0 2.0 23.680000 40.320002
8 1.0 224.0 8.0 23.712000 40.383998
9 1.0 256.0 1.0 24.607999 43.136001
10 1.0 256.0 2.0 24.639999 43.104000
11 1.0 256.0 8.0 24.639999 43.200001
12 1.0 280.0 1.0 25.248000 45.407999
13 1.0 280.0 2.0 25.248000 45.343999
14 1.0 280.0 8.0 25.248000 45.440000
15 1.0 512.0 1.0 31.136001 69.151998
16 1.0 512.0 2.0 31.328000 69.119997
17 1.0 512.0 8.0 31.296000 69.215998
18 16.0 16.0 1.0 16.511999 23.296000
19 16.0 16.0 2.0 16.608000 23.520000
20 16.0 16.0 8.0 17.856000 24.351999
21 16.0 64.0 1.0 21.792000 26.400000
22 16.0 64.0 2.0 21.792000 26.656000
23 16.0 64.0 8.0 22.143999 27.424000
24 16.0 224.0 1.0 23.871999 41.503999
25 16.0 224.0 2.0 23.903999 41.600000
26 16.0 224.0 8.0 24.032000 41.152000
27 16.0 256.0 1.0 24.768000 43.088000
28 16.0 256.0 2.0 24.831999 43.136001
29 16.0 256.0 8.0 24.928000 43.391999
30 16.0 280.0 1.0 25.152000 45.968000
31 16.0 280.0 2.0 25.184000 46.080001
32 16.0 280.0 8.0 25.343999 46.271998
33 16.0 512.0 1.0 31.264000 69.343999
34 16.0 512.0 2.0 31.328000 69.504000
35 16.0 512.0 8.0 31.456001 69.888003
36 256.0 16.0 1.0 19.200001 25.312001
37 256.0 16.0 2.0 22.624001 28.576000
38 256.0 16.0 8.0 18.528000 45.184001
39 256.0 64.0 1.0 23.104001 28.416000
40 256.0 64.0 2.0 24.831999 29.023999
41 256.0 64.0 8.0 20.256000 33.535998
42 256.0 224.0 1.0 24.256000 42.367999
43 256.0 224.0 2.0 24.000000 42.943999
44 256.0 224.0 8.0 24.256000 45.952000
45 256.0 256.0 1.0 25.119999 44.224001
46 256.0 256.0 2.0 24.960000 44.192001
47 256.0 256.0 8.0 25.984000 47.488000
48 256.0 280.0 1.0 25.312001 46.239998
49 256.0 280.0 2.0 25.536001 47.327999
50 256.0 280.0 8.0 26.432000 49.568001
51 256.0 512.0 1.0 31.488001 69.824003
52 256.0 512.0 2.0 31.392001 69.856003
53 256.0 512.0 8.0 32.671999 71.712002
54 4096.0 16.0 1.0 20.128001 68.896003
55 4096.0 16.0 2.0 22.720000 114.367999
56 4096.0 16.0 8.0 36.256000 378.015995
57 4096.0 64.0 1.0 21.856001 39.391998
58 4096.0 64.0 2.0 24.639999 51.872000
59 4096.0 64.0 8.0 41.216001 121.360000
60 4096.0 224.0 1.0 26.368000 50.976001
61 4096.0 224.0 2.0 29.023999 56.607999
62 4096.0 224.0 8.0 45.504000 78.304000
63 4096.0 256.0 1.0 27.071999 51.968001
64 4096.0 256.0 2.0 29.824000 58.944002
65 4096.0 256.0 8.0 45.568001 78.368001
66 4096.0 280.0 1.0 27.295999 53.056002
67 4096.0 280.0 2.0 30.272000 59.648000
68 4096.0 280.0 8.0 43.264002 80.095999
69 4096.0 512.0 1.0 33.824001 73.600002
70 4096.0 512.0 2.0 35.551999 77.776000
71 4096.0 512.0 8.0 49.024001 98.591998 This PR FP16:
# vllm bench throughput --model Qwen/Qwen3-30B-A3B --load-format dummy --input-len 1000 --output-len 100
Throughput: 43.94 requests/s, 48234.94 total tokens/s, 4394.09 output tokens/s
Total num prompt tokens: 997723
Total num output tokens: 100000
FP8:
# vllm bench throughput --model Qwen/Qwen3-30B-A3B-FP8 --load-format dummy --input-len 1000 --output-len 100
Throughput: 41.26 requests/s, 45294.95 total tokens/s, 4126.26 output tokens/s
Total num prompt tokens: 997723
Total num output tokens: 100000
Kernel benchmark:
# python3 benchmarks/kernels/benchmark_moe_align_block_size.py
Running correctness check...
✅ VLLM implementation works with 64 experts!
✅ Triton and VLLM implementations match.
moe-align-block-size-performance:
num_tokens num_experts topk VLLM Triton
0 1.0 16.0 1.0 17.472001 27.488001
1 1.0 16.0 2.0 17.600000 30.304000
2 1.0 16.0 8.0 17.696001 30.880000
3 1.0 64.0 1.0 25.760001 31.296000
4 1.0 64.0 2.0 25.855999 31.168001
5 1.0 64.0 8.0 25.823999 31.488001
6 1.0 224.0 1.0 21.536000 44.544000
7 1.0 224.0 2.0 21.344000 44.799998
8 1.0 224.0 8.0 21.407999 44.736002
9 1.0 256.0 1.0 22.080000 47.616001
10 1.0 256.0 2.0 21.568000 47.392000
11 1.0 256.0 8.0 21.760000 47.711998
12 1.0 280.0 1.0 21.952000 49.632002
13 1.0 280.0 2.0 22.336001 49.984001
14 1.0 280.0 8.0 22.048000 49.952000
15 1.0 512.0 1.0 25.888000 75.071998
16 1.0 512.0 2.0 25.952000 75.328000
17 1.0 512.0 8.0 25.952000 75.007997
18 16.0 16.0 1.0 17.600000 27.295999
19 16.0 16.0 2.0 17.600000 28.352000
20 16.0 16.0 8.0 18.912001 29.696001
21 16.0 64.0 1.0 25.696000 31.184000
22 16.0 64.0 2.0 25.632000 30.688001
23 16.0 64.0 8.0 25.952000 30.944001
24 16.0 224.0 1.0 21.312000 45.855999
25 16.0 224.0 2.0 21.183999 45.791999
26 16.0 224.0 8.0 21.536000 45.440000
27 16.0 256.0 1.0 21.792000 47.359999
28 16.0 256.0 2.0 21.760000 47.584001
29 16.0 256.0 8.0 21.760000 47.807999
30 16.0 280.0 1.0 22.048000 50.271999
31 16.0 280.0 2.0 21.888001 50.464001
32 16.0 280.0 8.0 22.336001 50.624002
33 16.0 512.0 1.0 25.664000 74.975997
34 16.0 512.0 2.0 25.696000 75.039998
35 16.0 512.0 8.0 25.952000 75.135998
36 256.0 16.0 1.0 20.320000 29.088000
37 256.0 16.0 2.0 23.871999 32.543998
38 256.0 16.0 8.0 17.600000 49.279999
39 256.0 64.0 1.0 26.784001 32.448001
40 256.0 64.0 2.0 28.384000 32.127999
41 256.0 64.0 8.0 18.912001 37.535999
42 256.0 224.0 1.0 21.536000 46.720002
43 256.0 224.0 2.0 21.695999 47.488000
44 256.0 224.0 8.0 21.856001 50.175998
45 256.0 256.0 1.0 22.336001 48.703998
46 256.0 256.0 2.0 21.952000 48.351999
47 256.0 256.0 8.0 23.072001 51.711999
48 256.0 280.0 1.0 22.240000 50.783999
49 256.0 280.0 2.0 22.752000 52.000001
50 256.0 280.0 8.0 23.808001 54.639999
51 256.0 512.0 1.0 26.208000 75.744003
52 256.0 512.0 2.0 26.335999 75.103998
53 256.0 512.0 8.0 26.656000 77.215999
54 4096.0 16.0 1.0 19.168001 72.672002
55 4096.0 16.0 2.0 22.112001 117.183998
56 4096.0 16.0 8.0 37.087999 382.703990
57 4096.0 64.0 1.0 20.352000 43.423999
58 4096.0 64.0 2.0 23.424000 55.712000
59 4096.0 64.0 8.0 42.016000 125.568002
60 4096.0 224.0 1.0 23.264000 55.744000
61 4096.0 224.0 2.0 26.912000 60.864002
62 4096.0 224.0 8.0 44.704001 81.919998
63 4096.0 256.0 1.0 24.383999 56.448001
64 4096.0 256.0 2.0 27.327999 63.104004
65 4096.0 256.0 8.0 44.319998 82.496002
66 4096.0 280.0 1.0 23.808001 57.824001
67 4096.0 280.0 2.0 27.424000 64.576000
68 4096.0 280.0 8.0 41.792002 83.967999
69 4096.0 512.0 1.0 27.744001 79.135999
70 4096.0 512.0 2.0 30.479999 83.328001
71 4096.0 512.0 8.0 45.536000 103.808001 Test Result pytest tests/kernels/moe/test_moe_align_block_size.py - PASSED (Optional) Documentation Update Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . ❤️ 2 mgoin and xuanyu-mistral reacted with heart emoji All reactions ❤️ 2 reactions Copy link github-actions bot commented Jul 16, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . hj-mistral force-pushed the hj-align-kernel branch
from 2f3cc21 to 67295ab Compare July 16, 2025 22:08 gemini-code-assist bot reviewed Jul 16, 2025 View reviewed changes Copy link Contributor gemini-code-assist bot left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Code Review This pull request aims to speed up MoE alignment kernels by replacing a sequential prefix sum with a parallel version using cub::BlockScan and by moving some tensor initializations from Python into the CUDA kernel to reduce kernel launch overhead. While these changes are effective for performance, I've identified a critical correctness issue in the new parallel prefix sum implementation. It does not correctly handle cases where the number of experts exceeds the number of threads in the CUDA block (1024), which would lead to incorrect calculations. The existing tests do not cover this scenario. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions csrc/moe/moe_align_sum_kernels.cu Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . Copy link Member mgoin commented Jul 16, 2025 cc @yewentao256 👍 1 yewentao256 reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . yewentao256 reviewed Jul 17, 2025 View reviewed changes Copy link Collaborator yewentao256 left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Thanks for the work! Could you also please benchmark the performance (E2E throughput + kernel latency) and make sure all unit test passes? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/model_executor/layers/fused_moe/moe_align_block_size.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . Copy link Contributor Author hj-mistral commented Jul 17, 2025 Thanks for the work! Could you also please benchmark the performance (E2E throughput + kernel latency) and make sure all unit test passes? Any documentation to follow on how to run both? All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . hj-mistral added 3 commits July 17, 2025 12:43 Speed up align sum kernels … 3cd55fd Signed-off-by: Himanshu Jaju <[email protected]> assert num_exp < 1024 … f6ef4eb Signed-off-by: Himanshu Jaju <[email protected]> whitespace … c898aab Signed-off-by: Himanshu Jaju <[email protected]> hj-mistral force-pushed the hj-align-kernel branch
from b5ee67e to c898aab Compare July 17, 2025 12:43 Copy link Collaborator yewentao256 commented Jul 17, 2025 Any documentation to follow on how to run both? Throughput(fp16)
vllm bench throughput --model Qwen/Qwen3-30B-A3B --load-format dummy --input-len 1000 --output-len 100
Throughput(fp8)
vllm bench throughput --model Qwen/Qwen3-30B-A3B-FP8 --load-format dummy --input-len 1000 --output-len 100 vllm-source/benchmarks/kernels/benchmark_moe_align_block_size.py vllm-source/tests/kernels/moe/test_moe_align_block_size.py 👍 1 hj-mistral reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the performance Performance-related issues label Jul 18, 2025 Some changes … be95db1 Signed-off-by: Himanshu Jaju <[email protected]> hj-mistral force-pushed the hj-align-kernel branch
from a8140c6 to be95db1 Compare July 18, 2025 16:11 Copy link Contributor Author hj-mistral commented Jul 18, 2025 Any documentation to follow on how to run both? Throughput(fp16)
vllm bench throughput --model Qwen/Qwen3-30B-A3B --load-format dummy --input-len 1000 --output-len 100
Throughput(fp8)
vllm bench throughput --model Qwen/Qwen3-30B-A3B-FP8 --load-format dummy --input-len 1000 --output-len 100 vllm-source/benchmarks/kernels/benchmark_moe_align_block_size.py vllm-source/tests/kernels/moe/test_moe_align_block_size.py All done and added to description, ptal :) All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . hj-mistral commented Jul 18, 2025 View reviewed changes csrc/moe/moe_align_sum_kernels.cu int expert_offset = (i - 1) % experts_per_warp; expert_count = shared_counts[warp_idx * experts_per_warp + expert_offset]; // Compute prefix sum over token counts per expert using BlockScan = cub::BlockScan<int32_t, 1024>; Copy link Contributor Author hj-mistral Jul 18, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment For reviewer: this is what helps this kernel become faster even though its doing more ops now. Unsure how to do this for the small_kernel, but if there's a way we can do this as a follow up PR :) Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions hj-mistral marked this pull request as ready for review July 18, 2025 17:00 hj-mistral requested review from tlrmchlsmth and WoosukKwon as code owners July 18, 2025 17:00 hj-mistral changed the title [wip] Speed up align sum kernels [perf] Speed up align sum kernels Jul 18, 2025 yewentao256 approved these changes Jul 18, 2025 View reviewed changes Copy link Collaborator yewentao256 left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Looks good to me, thanks for the work! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . ❤️ 1 hj-mistral reacted with heart emoji All reactions ❤️ 1 reaction Copy link mergify bot commented Jul 19, 2025 This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @hj-mistral . https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the needs-rebase label Jul 19, 2025 mgoin added
the ready ONLY add when PR is ready to merge/full CI is needed label Jul 19, 2025 mergify bot removed
the needs-rebase label Jul 19, 2025 hj-mistral force-pushed the hj-align-kernel branch
from 623f56f to 86466d7 Compare July 19, 2025 13:48 hj-mistral requested review from hmellor , jeejeelee , DarkLight1337 and ywang96 as code owners July 19, 2025 13:48 44 hidden items Load more… mgoin added moe and removed speculative-decoding ci/build v1 multi-modality Related to multi-modality (#4194) tool-calling llama Related to Llama models qwen Related to Qwen models labels Jul 19, 2025 fix … a5dfc09 Signed-off-by: Himanshu Jaju <[email protected]> Copy link Contributor Author hj-mistral commented Jul 21, 2025 • edited Loading Uh oh! There was an error while loading. Please reload this page . That's a great point and answers my question as well. It is good to see the e2e speedup at least (and a note that FP8 performance looks off..) Don't worry about the DCO as we can resolve it manually before merge. It looks like there are a few related failures in the kernel tests I fixed my incorrect merge, but unsure how to fix the v1-test failure. Seems just an infra error? [2025-07-21T12:56:57Z] Running command git clone --filter=blob:none --quiet https://github.com/robertgshaw2-neuralmagic/lm-evaluation-harness.git /tmp/pip-req-build-o61noco_
[2025-07-21T12:56:58Z] WARNING: Did not find branch or tag 'streaming-api', assuming revision or ref.
[2025-07-21T12:56:58Z] Running command git checkout -q streaming-api
[2025-07-21T12:56:58Z] error: pathspec 'streaming-api' did not match any file(s) known to git 👍 1 mgoin reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Member mgoin commented Jul 21, 2025 Yeah the CI infra is just off there and we resolved on main, will request a force merge All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Hide details View details simon-mo merged commit 0ec82ed into vllm-project : main Jul 21, 2025 96 of 98 checks passed Uh oh! There was an error while loading. Please reload this page . github-project-automation bot moved this to Done in Structured Output Jul 21, 2025 github-project-automation bot moved this to Done in Tool Calling Jul 21, 2025 hj-mistral deleted the hj-align-kernel branch July 21, 2025 18:26 Copy link Member tdoublep commented Jul 22, 2025 • edited Loading Uh oh! There was an error while loading. Please reload this page . The changes from this PR are causing illegal memory accesses for me. If I deploy with commit before this PR was merged 005ae9be6c22dfa2c2c5580b50b41e67faee4a87 : $ VLLM_USE_V1=1 VLLM_ATTENTION_BACKEND=FLASHINFER vllm serve ibm-granite/granite-4.0-tiny-preview --no-enable-prefix-caching
...
INFO: Started server process [604208]
INFO: Waiting for application startup.
INFO: Application startup complete. Whereas, if I deploy at commit after this PR was merged 0ec82edda59aaf5cf3b07aadf4ecce1aa1131add : $ VLLM_USE_V1=1 VLLM_ATTENTION_BACKEND=FLASHINFER vllm serve ibm-granite/granite-4.0-tiny-preview --no-enable-prefix-caching
...
File "/home/zrltpa/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 1230, in torch_vllm_inplace_fused_experts
torch.ops.vllm.inplace_fused_experts(**kwargs)
File "/home/zrltpa/miniforge3/envs/dev-env/lib/python3.12/site-packages/torch/_ops.py", line 1158, in __call__
return self._op(*args, **(kwargs or {}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zrltpa/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 1020, in inplace_fused_experts
fused_experts_impl(hidden_states, w1, w2, topk_weights, topk_ids, True,
File "/home/zrltpa/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 1484, in fused_experts_impl
invoke_fused_moe_kernel(qcurr_hidden_states,
File "/home/zrltpa/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 604, in invoke_fused_moe_kernel
fused_moe_kernel[grid](
File "/home/zrltpa/miniforge3/envs/dev-env/lib/python3.12/site-packages/triton/runtime/jit.py", line 347, in <lambda>
return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zrltpa/miniforge3/envs/dev-env/lib/python3.12/site-packages/triton/runtime/jit.py", line 591, in run
kernel.run(grid_0, grid_1, grid_2, stream, kernel.function, kernel.packed_metadata,
File "/home/zrltpa/miniforge3/envs/dev-env/lib/python3.12/site-packages/triton/backends/nvidia/driver.py", line 529, in __call__
self.launch(gridX, gridY, gridZ, stream, function, self.launch_cooperative_grid, global_scratch, *args)
RuntimeError: Triton Error [CUDA]: an illegal memory access was encountered Could we perhaps revert the changes from this PR until we figure out what is going on here? cc @mgoin @tlrmchlsmth This should have been caught by the CI tests...looking into what happened. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Member tdoublep commented Jul 22, 2025 Before PR: python -m pytest tests/models/language/generation/test_hybrid.py::test_models[5-64-ibm-granite/granite-4.0-tiny-preview]
...
1 passed, 12 warnings in 69.55s (0:01:09) After PR: $ python -m pytest tests/models/language/generation/test_hybrid.py::test_models[5-64-ibm-granite/granite-4.0-tiny-preview]
...
FAILED tests/models/language/generation/test_hybrid.py::test_models[5-64-ibm-granite/granite-4.0-tiny-preview] - RuntimeError: Triton Error [CUDA]: operation not supported on global/shared address space
ERROR tests/models/language/generation/test_hybrid.py::test_models[5-64-ibm-granite/granite-4.0-tiny-preview] - RuntimeError: CUDA error: operation not supported on global/shared address space All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Member tdoublep commented Jul 22, 2025 OK the reason it passes in CI is that vLLM bumped torch version which in turn bumped Triton version to 3.3.1. That seems to resolve the error that I am seeing. Still a bit weird though? Illegal memory access in 3.3.0 but works fine in 3.3.1? All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . zixi-qi pushed a commit
to zixi-qi/vllm
that referenced
this pull request Jul 23, 2025 [perf] Speed up align sum kernels ( vllm-project#21079 ) … 41d76db Signed-off-by: Himanshu Jaju <[email protected]>
Signed-off-by: qizixi <[email protected]> LyrisZhong pushed a commit
to LyrisZhong/vllm
that referenced
this pull request Jul 23, 2025 [perf] Speed up align sum kernels ( vllm-project#21079 ) … 98e2e2c Signed-off-by: Himanshu Jaju <[email protected]> avigny pushed a commit
to avigny/vllm
that referenced
this pull request Jul 31, 2025 [perf] Speed up align sum kernels ( vllm-project#21079 ) … 8954857 Signed-off-by: Himanshu Jaju <[email protected]>
Signed-off-by: avigny <[email protected]> x22x22 pushed a commit
to x22x22/vllm
that referenced
this pull request Aug 5, 2025 [perf] Speed up align sum kernels ( vllm-project#21079 ) … 8944e23 Signed-off-by: Himanshu Jaju <[email protected]>
Signed-off-by: x22x22 <[email protected]> Pradyun92 pushed a commit
to Pradyun92/vllm
that referenced
this pull request Aug 6, 2025 [perf] Speed up align sum kernels ( vllm-project#21079 ) … 15e1cba Signed-off-by: Himanshu Jaju <[email protected]> npanpaliya pushed a commit
to odh-on-pz/vllm-upstream
that referenced
this pull request Aug 6, 2025 [perf] Speed up align sum kernels ( vllm-project#21079 ) … 0865b8e Signed-off-by: Himanshu Jaju <[email protected]> jinzhen-lin pushed a commit
to jinzhen-lin/vllm
that referenced
this pull request Aug 9, 2025 [perf] Speed up align sum kernels ( vllm-project#21079 ) … 885137a Signed-off-by: Himanshu Jaju <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]> paulpak58 pushed a commit
to paulpak58/vllm
that referenced
this pull request Aug 13, 2025 [perf] Speed up align sum kernels ( vllm-project#21079 ) … a6ae1b9 Signed-off-by: Himanshu Jaju <[email protected]>
Signed-off-by: Paul Pak <[email protected]> taneem-ibrahim pushed a commit
to taneem-ibrahim/vllm
that referenced
this pull request Aug 14, 2025 [perf] Speed up align sum kernels ( vllm-project#21079 ) … 7672862 Signed-off-by: Himanshu Jaju <[email protected]> diegocastanibm pushed a commit
to diegocastanibm/vllm
that referenced
this pull request Aug 15, 2025 [perf] Speed up align sum kernels ( vllm-project#21079 ) … 92ef410 Signed-off-by: Himanshu Jaju <[email protected]>
Signed-off-by: Diego-Castan <[email protected]> epwalsh pushed a commit
to epwalsh/vllm
that referenced
this pull request Aug 27, 2025 [perf] Speed up align sum kernels ( vllm-project#21079 ) … c1bb8c1 Signed-off-by: Himanshu Jaju <[email protected]> googlercolin pushed a commit
to googlercolin/vllm
that referenced
this pull request Aug 29, 2025 [perf] Speed up align sum kernels ( vllm-project#21079 ) … c6cb0c5 Signed-off-by: Himanshu Jaju <[email protected]>
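The review discussion in this thread centers on using a block-wide prefix sum (cub::BlockScan) inside moe_align_sum_kernels.cu. As context for what that alignment step produces, here is a rough NumPy reference of the computation as I understand it: group token slots by expert, pad each expert's count up to a multiple of block_size, and turn the padded counts into block offsets with a prefix sum. The function name, the padding sentinel, and the output layout are illustrative, not the kernel's exact contract.

```python
import numpy as np

def moe_align_block_size_ref(topk_ids: np.ndarray, block_size: int, num_experts: int):
    """Rough reference: sort token slots by expert, pad each expert's group to a
    multiple of block_size, and derive per-expert offsets with a prefix sum
    (the part that BlockScan parallelizes on the GPU)."""
    flat = topk_ids.reshape(-1)
    counts = np.bincount(flat, minlength=num_experts)
    padded = ((counts + block_size - 1) // block_size) * block_size
    offsets = np.concatenate(([0], np.cumsum(padded)))  # prefix sum over padded counts
    total_padded = int(offsets[-1])

    # Filler value marks padding slots; the real kernel also uses a sentinel.
    sorted_token_ids = np.full(total_padded, flat.size, dtype=np.int64)
    expert_ids = np.repeat(np.arange(num_experts), padded // block_size)

    cursor = offsets[:-1].copy()
    for slot, expert in enumerate(flat):
        sorted_token_ids[cursor[expert]] = slot
        cursor[expert] += 1
    return sorted_token_ids, expert_ids, total_padded

# Example: 4 tokens, top-2 routing over 4 experts, block size 4.
ids = np.array([[0, 2], [1, 2], [0, 3], [2, 1]])
print(moe_align_block_size_ref(ids, block_size=4, num_experts=4))
```

The prefix sum is the only cross-expert dependency in this step, which is why moving it to a single block-wide scan pays off even though the kernel does more work overall.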
|
2025-09-07 17:50:25
|
6d0734c562e759fdb7076d762222b3881e62ab1f
|
https://github.com/vllm-project/vllm/pull/20645
| true | true | false | true |
LM_EVAL: lm_eval, lm_eval, lm_eval | PERF: latency, latency, latency | TEST: test, test, testing
|
Copy link Contributor kaixih commented Jul 8, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . For this PR , Flashinfer introduces a new backend for block-wise scaled FP8. This PR adds support for that backend. cc. @kushanam @wenscarl @pavanimajety Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions kaixih requested review from mgoin , robertgshaw2-redhat and tlrmchlsmth as code owners July 8, 2025 21:54 Copy link github-actions bot commented Jul 8, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . gemini-code-assist bot reviewed Jul 8, 2025 View reviewed changes Copy link Contributor gemini-code-assist bot left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Summary of Changes Hello @kaixih , I'm Gemini Code Assist 1 ! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request integrates a new Flashinfer backend to support block-wise scaled FP8 quantization for Mixture-of-Experts (MoE) layers. The primary goal is to leverage Flashinfer's optimized kernels to potentially enhance the performance of FP8 MoE computations within vLLM, introducing a specialized execution path for compatible configurations. Highlights Flashinfer Integration : Introduced flashinfer.fused_moe as a new dependency and added a flashinfer_moe_enabled flag within the Fp8LinearMethod class to conditionally activate the new backend. Weight Preprocessing for Compatibility : Implemented a new helper function _swap_w13_to_w31 and integrated its application during the process_weights_after_loading phase. This is crucial for reordering weights ( w13_weight and w13_weight_scale_inv ) to ensure compatibility with Flashinfer's expected activation layout. Conditional MoE Execution Path : Modified the apply method in Fp8LinearMethod to introduce a new execution branch. When flashinfer_moe_enabled is true, vLLM's default expert selection logic is bypassed, and the call is delegated to Flashinfer's fused_moe.trtllm_fp8_block_scale_moe for the core FP8 MoE computation. 
Backend-Specific Constraints and Input Handling : Added several assertions within the Flashinfer MoE execution path to enforce specific requirements, such as block_quant , renormalize , use_grouped_topk , sigmoid scoring, silu activation, and specific constraints on top_k , global_num_experts , and num_expert_group . Additionally, input hidden states are now quantized using per_token_group_quant_fp8 before being passed to the Flashinfer kernel. gemini-code-assist bot reviewed Jul 8, 2025 View reviewed changes Copy link Contributor gemini-code-assist bot left a comment Code Review This pull request introduces support for a new Flashinfer MoE backend for block-wise scaled FP8 quantization, primarily in vllm/model_executor/layers/quantization/fp8.py . A critical bug exists in weight processing, where swapped weights are not persisted. Additionally, assert statements should be replaced with proper error handling, and the new backend should be made configurable.
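The review's last point, making the new backend configurable rather than hard-coding flashinfer_moe_enabled = True, can be illustrated with a small sketch. VLLM_USE_FLASHINFER_MOE_FP8 is the switch that appears in the repro commands later in this thread; the class and method names below are purely illustrative, not vLLM's actual code.

```python
import os

class Fp8MoEMethodSketch:
    """Illustrative only: choose the MoE backend once, based on an env flag."""

    def __init__(self, weight_block_size):
        self.block_quant = weight_block_size is not None
        # Instead of `self.flashinfer_moe_enabled = True`, read a switch so the
        # default fused-MoE path stays available for debugging and A/B comparisons.
        self.flashinfer_moe_enabled = (
            self.block_quant
            and os.environ.get("VLLM_USE_FLASHINFER_MOE_FP8", "0") == "1"
        )

    def apply(self, hidden_states, router_logits):
        if self.flashinfer_moe_enabled:
            return self._apply_flashinfer(hidden_states, router_logits)
        return self._apply_default(hidden_states, router_logits)

    def _apply_flashinfer(self, hidden_states, router_logits):
        raise NotImplementedError("placeholder for the FlashInfer block-scale path")

    def _apply_default(self, hidden_states, router_logits):
        raise NotImplementedError("placeholder for the existing fused-MoE path")
```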
vllm/model_executor/layers/quantization/fp8.py Outdated Comment on lines 656 to 707 w13_weight = _swap_w13_to_w31(layer.w13_weight.data) w13_weight_scale_inv = _swap_w13_to_w31(layer.w13_weight_scale_inv.data) w2_weight = layer.w2_weight.data w2_weight_scale_inv = layer.w2_weight_scale_inv.data Copy link Contributor gemini-code-assist bot Jul 8, 2025 The swapped weights and scales obtained from _swap_w13_to_w31 are assigned to local variables but not persisted back to the layer's parameters. This will cause the apply method to use the original, un-swapped weights, leading to incorrect results. Ensure the processed weights are copied back to the layer's parameters using param.data.copy_(swapped_tensor). vllm/model_executor/layers/quantization/fp8.py Outdated Comment on lines 937 to 1011 assert self.block_quant assert (renormalize and use_grouped_topk and scoring_func == 'sigmoid' and custom_routing_function == None) assert top_k <= global_num_experts assert top_k <= 8 assert topk_group <= 4 assert global_num_experts > num_expert_group assert global_num_experts % num_expert_group == 0 assert global_num_experts % 4 == 0 assert top_k < (topk_group * global_num_experts / num_expert_group) assert activation == "silu" Copy link Contributor gemini-code-assist bot Jul 8, 2025 Replace assert statements with explicit if conditions and raise ValueError or NotImplementedError exceptions. This ensures that these checks are always active, even when assertions are disabled in production environments. if not self.block_quant: raise ValueError("Flashinfer MoE backend requires block quantization.") if not (renormalize and use_grouped_topk and scoring_func == 'sigmoid' and custom_routing_function is None): raise NotImplementedError("Flashinfer MoE backend has specific routing requirements: renormalize=True, use_grouped_topk=True, scoring_func='sigmoid', and no custom_routing_function.") if not top_k <= 8: raise ValueError(f"Unsupported top_k={top_k}, must be <= 8.") if not topk_group <= 4: raise ValueError(f"Unsupported topk_group={topk_group}, must be <= 4.") if not (global_num_experts > num_expert_group and global_num_experts % num_expert_group == 0 and global_num_experts % 4 == 0): raise ValueError(f"Unsupported expert grouping: {global_num_experts=}, {num_expert_group=}") if not top_k < (topk_group * global_num_experts / num_expert_group): raise ValueError("top_k must be less than (topk_group * num_experts_per_group).") if activation != "silu": raise NotImplementedError(f"Flashinfer MoE backend only supports 'silu' activation, but got {activation}.")
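To make the persistence point concrete, here is a minimal sketch of the fix the bot is asking for, assuming w13 stacks the w1 and w3 projections along dim 1 (that layout is my assumption; only the param.data.copy_ pattern comes from the comment above).

```python
import torch

def _swap_w13_to_w31(x: torch.Tensor) -> torch.Tensor:
    # Assumed layout: [num_experts, 2 * intermediate, hidden], with w1 in the first
    # half of dim 1 and w3 in the second half; swapping simply flips the halves.
    w1, w3 = x.chunk(2, dim=1)
    return torch.cat([w3, w1], dim=1).contiguous()

def process_weights_after_loading_sketch(layer) -> None:
    # Copy the reordered tensors back into the parameters instead of leaving them
    # in local variables, so apply() sees the swapped layout at inference time.
    layer.w13_weight.data.copy_(_swap_w13_to_w31(layer.w13_weight.data))
    layer.w13_weight_scale_inv.data.copy_(
        _swap_w13_to_w31(layer.w13_weight_scale_inv.data))
```

Because _swap_w13_to_w31 builds a new tensor (cat plus contiguous), copying it back into the original parameter is safe.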
All reactions vllm/model_executor/layers/quantization/fp8.py Outdated @@ -449,6 +454,7 @@ def __init__(self, quant_config: Fp8Config): self.quant_config = quant_config self.block_quant = self.quant_config.weight_block_size is not None self.flashinfer_moe_enabled = True Copy link Contributor gemini-code-assist bot Jul 8, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Consider making flashinfer_moe_enabled configurable via an environment variable or model configuration parameter to facilitate debugging and performance comparisons. This would allow users to easily switch between backends. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Contributor Author kaixih commented Jul 10, 2025 • edited Loading Uh oh! There was an error while loading. Please reload this page . Here is the lm_eval results after enabling the new moe backend: Tasks Version Filter n-shot Metric Value Stderr gsm8k 3 flexible-extract 5 exact_match ↑ 0.968 ± 0.0079 strict-match 5 exact_match ↑ 0.962 ± 0.0086 To repro: pip install lm_eval[api]==0.4.8 export VLLM_WORKER_MULTIPROC_METHOD= " spawn " export VLLM_USE_V1= " 1 " export VLLM_USE_STANDALONE_COMPILE= " 0 " export VLLM_USE_FLASHINFER_MOE_FP8= " 1 " model_dir= < your ckpts of DeepSeek-R1- 0528> model_args= " model= ${model_dir} ,pretrained= ${model_dir} ,trust_remote_code=True,tensor_parallel_size=8,enable_expert_parallel=True,enforce_eager=False,max_model_len=2048 " lm_eval --model vllm --model_args $model_args --gen_kwargs temperature=0.0 --limit 500 --trust_remote_code --tasks gsm8k --num_fewshot 5 --batch_size 200 👍 2 mgoin and pavanimajety reacted with thumbs up emoji All reactions 👍 2 reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link mergify bot commented Jul 11, 2025 This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @kaixih . https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the needs-rebase label Jul 11, 2025 kaixih force-pushed the kaixih/flashinfer-moe-bs-fp8 branch
2 times, most recently
from 6229f18 to 567d6ae Compare July 11, 2025 23:17 mergify bot removed
the needs-rebase label Jul 11, 2025 kaixih force-pushed the kaixih/flashinfer-moe-bs-fp8 branch
from 567d6ae to 85ccae5 Compare July 11, 2025 23:32 support flashinfer moe blockscale fp8 … 644d108 Signed-off-by: kaixih <[email protected]> kaixih force-pushed the kaixih/flashinfer-moe-bs-fp8 branch
from 85ccae5 to 644d108 Compare July 11, 2025 23:58 Minor … 44d86bb Signed-off-by: kaixih <[email protected]> Copy link Contributor Author kaixih commented Jul 12, 2025 These kernels are primarily beneficial in low-latency scenarios, so I also ran some latency benchmarks. The results are shown below. The flashinfer kernels can bring ~32% perf improvement for a DSR1 model on 8xB200 GPUs. # default:
Avg latency: 22.061138840367253 seconds
# flashinfer:
Avg latency: 15.51937770833271 seconds To repro: export VLLM_WORKER_MULTIPROC_METHOD= " spawn " export VLLM_USE_V1= " 1 " export VLLM_USE_STANDALONE_COMPILE= " 0 " export VLLM_USE_FLASHINFER_MOE_FP8= " 0 " # or "1" for flashinfer model_dir= < your ckpts of DeepSeek-R1- 0528> python benchmarks/benchmark_latency.py --model= $model_dir --output-len=1024 --tensor-parallel-size=8 --enable-expert-parallel --input-len=128 --trust_remote_code --max-model-len=2048 --batch-size=1 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . kaixih changed the title [Draft] Add Flashinfer MoE blockscale fp8 backend [NVIDIA] Add Flashinfer MoE blockscale fp8 backend Jul 12, 2025 pavanimajety reviewed Jul 13, 2025 View reviewed changes vllm/model_executor/layers/fused_moe/fused_moe.py Outdated Comment on lines 1067 to 1068 def flashinfer_fused_moe_fp8(router_logits: torch.Tensor, e_score_correction_bias: torch.Tensor, Copy link Contributor pavanimajety Jul 13, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Use flashinfer_fused_moe_blockscale_fp8 to differentiate between other moe variants in FI Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Contributor pavanimajety Jul 13, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment also add assert fi_fused_moe is not None Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Contributor Author kaixih Jul 14, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Done. Thx. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions kaixih added 2 commits July 14, 2025 16:42 Address comments … a2b14c6 Signed-off-by: kaixih <[email protected]> Formatting … aa634a6 Signed-off-by: kaixih <[email protected]> mgoin changed the title [NVIDIA] Add Flashinfer MoE blockscale fp8 backend [NVIDIA] Add Flashinfer MoE blockscale fp8 backend for low latency Jul 16, 2025 Update API … 7ce56eb Signed-off-by: kaixih <[email protected]> Copy link Contributor Author kaixih commented Jul 16, 2025 I’ve just updated the API call sites to accommodate the latest FlashInfer changes, which are recommended for improved robustness. I’d suggest testing the code with the ToT version of flashinfer or any release after 0.2.8. 👍 1 mgoin reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link mergify bot commented Jul 18, 2025 This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @kaixih . https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the needs-rebase label Jul 18, 2025 mgoin added 2 commits July 18, 2025 09:32 Merge branch 'main' into kaixih/flashinfer-moe-bs-fp8 … 8f6aa2f Signed-off-by: mgoin <[email protected]> Refactor to use flashinfer wrapper for lazy import … 2e61e91 Signed-off-by: mgoin <[email protected]> mgoin reviewed Jul 18, 2025 View reviewed changes Copy link Member mgoin left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Is it right that TP is not supported, only EP? I think we should assert if so I see this error with TP RuntimeError: Worker failed with error 'vllm::flashinfer_fused_moe_blockscale_fp8() Expected a value of type 'int' for argument 'num_expert_group' but instead found type 'NoneType'.
Position: 9
Value: None
Declaration: vllm::flashinfer_fused_moe_blockscale_fp8(Tensor router_logits, Tensor e_score_correction_bias, Tensor x, Tensor w13_weight, Tensor w13_weight_scale_inv, Tensor w2_weight, Tensor w2_weight_scale_inv, SymInt global_num_experts, SymInt top_k, SymInt num_expert_group, SymInt topk_group, SymInt intermediate_size_per_partition, SymInt expert_offset, SymInt local_num_experts, SymInt[] block_shape, float routed_scaling=1., SymInt tile_tokens_dim=8, SymInt routing_method_type=2) -> Tensor
Cast error details: Unable to cast Python instance of type <class 'NoneType'> to C++ type '?' (#define PYBIND11_DETAILED_ERROR_MESSAGES or compile in debug mode for details)', please check the stack trace above for the root cause Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/model_executor/layers/quantization/fp8.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/model_executor/layers/quantization/fp8.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/model_executor/layers/fused_moe/fused_moe.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . mergify bot removed
the needs-rebase label Jul 18, 2025 kaixih added 2 commits July 18, 2025 19:45 Address comments … 79ef02e Signed-off-by: kaixih <[email protected]> Format … 44b0d24 Signed-off-by: kaixih <[email protected]> mgoin added performance Performance-related issues ready ONLY add when PR is ready to merge/full CI is needed labels Jul 18, 2025 mgoin changed the title [NVIDIA] Add Flashinfer MoE blockscale fp8 backend for low latency [NVIDIA] Add SM100 Flashinfer MoE blockscale fp8 backend for low latency Jul 18, 2025 Copy link Contributor Author kaixih commented Jul 18, 2025 Is it right that TP is not supported, only EP? I think we should assert if so I think it supports it. I did a quick check and it looked good. Can you double check what is in your num_expert_group ? Are you testing a DS model? Here is what I used for quick test and you can turn on/off the enable_expert_parallel . All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor Author kaixih commented Jul 18, 2025 Also checked accuracy with TP=8: INFO:lm_eval.loggers.evaluation_tracker:Output path not provided, skipping saving results aggregated vllm (model=/model/models--deepseek-ai--DeepSeek-R1-0528/snapshots/4236a6af538feda4548eca9ab308586007567f52/,pretrained=/model/models--deepseek-ai--DeepSeek-R1-0528/snapshots/4236a6af538feda4548eca9ab308586007567f52/,trust_remote_code=True,tensor_parallel_size=8,enable_expert_parallel=False,enforce_eager=False,max_model_len=2048,trust_remote_code=True), gen_kwargs: (temperature=0.0), limit: 500.0, num_fewshot: 5, batch_size: 200 Tasks Version Filter n-shot Metric Value Stderr gsm8k 3 flexible-extract 5 exact_match ↑ 0.964 ± 0.0083 strict-match 5 exact_match ↑ 0.958 ± 0.0090 To repro: export VLLM_WORKER_MULTIPROC_METHOD="spawn"
export VLLM_USE_V1="1"
export VLLM_USE_STANDALONE_COMPILE="0"
export VLLM_USE_FLASHINFER_MOE_FP8="1"
model_dir="/model/models--deepseek-ai--DeepSeek-R1-0528/snapshots/4236a6af538feda4548eca9ab308586007567f52/"
model_args="model=${model_dir},pretrained=${model_dir},trust_remote_code=True,tensor_parallel_size=8,enable_expert_parallel=False,enforce_eager=False,max_model_len=2048"
lm_eval --model vllm --model_args $model_args --gen_kwargs temperature=0.0 --limit 500 --trust_remote_code --tasks gsm8k --num_fewshot 5 --batch_size 200 👍 1 mgoin reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mgoin approved these changes Jul 18, 2025 View reviewed changes Copy link Member mgoin left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment LGTM, thank you! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions mgoin enabled auto-merge (squash) July 18, 2025 21:22 Hide details View details vllm-bot merged commit 6d0734c into vllm-project : main Jul 19, 2025 80 of 83 checks passed Uh oh! There was an error while loading. Please reload this page . hj-mistral pushed a commit
to hj-mistral/vllm
that referenced
this pull request Jul 19, 2025 [NVIDIA] Add SM100 Flashinfer MoE blockscale fp8 backend for low late… … d195bb6 …ncy ( vllm-project#20645 )
Signed-off-by: kaixih <[email protected]>
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]>
Signed-off-by: Himanshu Jaju <[email protected]> LyrisZhong pushed a commit
to LyrisZhong/vllm
that referenced
this pull request Jul 23, 2025 [NVIDIA] Add SM100 Flashinfer MoE blockscale fp8 backend for low late… … ac5c103 …ncy ( vllm-project#20645 )
Signed-off-by: kaixih <[email protected]>
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]> avigny pushed a commit
to avigny/vllm
that referenced
this pull request Jul 31, 2025 [NVIDIA] Add SM100 Flashinfer MoE blockscale fp8 backend for low late… … 2919908 …ncy ( vllm-project#20645 )
Signed-off-by: kaixih <[email protected]>
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]>
Signed-off-by: avigny <[email protected]> x22x22 pushed a commit
to x22x22/vllm
that referenced
this pull request Aug 5, 2025 [NVIDIA] Add SM100 Flashinfer MoE blockscale fp8 backend for low late… … 71dd173 …ncy ( vllm-project#20645 )
Signed-off-by: kaixih <[email protected]>
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]>
Signed-off-by: x22x22 <[email protected]> Pradyun92 pushed a commit
to Pradyun92/vllm
that referenced
this pull request Aug 6, 2025 [NVIDIA] Add SM100 Flashinfer MoE blockscale fp8 backend for low late… … 36f7621 …ncy ( vllm-project#20645 )
Signed-off-by: kaixih <[email protected]>
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]> npanpaliya pushed a commit
to odh-on-pz/vllm-upstream
that referenced
this pull request Aug 6, 2025 [NVIDIA] Add SM100 Flashinfer MoE blockscale fp8 backend for low late… … 268cfab …ncy ( vllm-project#20645 )
Signed-off-by: kaixih <[email protected]>
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]> jinzhen-lin pushed a commit
to jinzhen-lin/vllm
that referenced
this pull request Aug 9, 2025 [NVIDIA] Add SM100 Flashinfer MoE blockscale fp8 backend for low late… … 392b3e9 …ncy ( vllm-project#20645 )
Signed-off-by: kaixih <[email protected]>
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]> paulpak58 pushed a commit
to paulpak58/vllm
that referenced
this pull request Aug 13, 2025 [NVIDIA] Add SM100 Flashinfer MoE blockscale fp8 backend for low late… … 0b6eb26 …ncy ( vllm-project#20645 )
Signed-off-by: kaixih <[email protected]>
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]>
Signed-off-by: Paul Pak <[email protected]> taneem-ibrahim pushed a commit
to taneem-ibrahim/vllm
that referenced
this pull request Aug 14, 2025 [NVIDIA] Add SM100 Flashinfer MoE blockscale fp8 backend for low late… … 51d92ce …ncy ( vllm-project#20645 )
Signed-off-by: kaixih <[email protected]>
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]> diegocastanibm pushed a commit
to diegocastanibm/vllm
that referenced
this pull request Aug 15, 2025 [NVIDIA] Add SM100 Flashinfer MoE blockscale fp8 backend for low late… … 4970555 …ncy ( vllm-project#20645 )
Signed-off-by: kaixih <[email protected]>
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]>
Signed-off-by: Diego-Castan <[email protected]> epwalsh pushed a commit
to epwalsh/vllm
that referenced
this pull request Aug 27, 2025 [NVIDIA] Add SM100 Flashinfer MoE blockscale fp8 backend for low late… … d42a70b …ncy ( vllm-project#20645 )
Signed-off-by: kaixih <[email protected]>
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]> googlercolin pushed a commit
to googlercolin/vllm
that referenced
this pull request Aug 29, 2025 [NVIDIA] Add SM100 Flashinfer MoE blockscale fp8 backend for low late… … fd638e0 …ncy ( vllm-project#20645 )
Signed-off-by: kaixih <[email protected]>
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]>
|
2025-09-07 17:50:29
|
dcc6cfb991cd76369aad96e04424f29c8fecdbd8
|
https://github.com/vllm-project/vllm/pull/21193
| true | true | true | true |
LM_EVAL: lm_eval, lm_eval, lm_eval | PERF: TTFT, TTFT, TTFT | SERVING: vllm serve, vllm serve, Serving | TEST: Test, Test, test
|
Copy link Contributor varun-sundar-rabindranath commented Jul 18, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . Purpose Tweak the num_warps and NUM_STAGES (num pipeline stages for prefetching) values of the kernel. Local micro-benchmark numbers: main: Benchmark: E=256, T=2048, H=7168, group_size=128, repeat=200
tokens=4: quant_silu_mul 0.030ms
tokens=8: quant_silu_mul 0.056ms
tokens=16: quant_silu_mul 0.106ms
tokens=32: quant_silu_mul 0.204ms
tokens=64: quant_silu_mul 0.402ms
tokens=128: quant_silu_mul 0.799ms
tokens=256: quant_silu_mul 1.579ms
tokens=384: quant_silu_mul 2.366ms
tokens=512: quant_silu_mul 3.148ms
tokens=1024: quant_silu_mul 6.272ms
tokens=2048: quant_silu_mul 12.522ms This PR: Benchmark: E=256, T=2048, H=7168, group_size=128, repeat=200
tokens=4: quant_silu_mul 0.017ms
tokens=8: quant_silu_mul 0.032ms
tokens=16: quant_silu_mul 0.057ms
tokens=32: quant_silu_mul 0.108ms
tokens=64: quant_silu_mul 0.211ms
tokens=128: quant_silu_mul 0.417ms
tokens=256: quant_silu_mul 0.830ms
tokens=384: quant_silu_mul 1.234ms
tokens=512: quant_silu_mul 1.639ms
tokens=1024: quant_silu_mul 3.254ms
tokens=2048: quant_silu_mul 6.514ms Note: micro-benchmarking script from https://github.com/tlrmchlsmth/ptgq_fp8 E2E Perf server command : VLLM_ALL2ALL_BACKEND="deepep_low_latency" VLLM_USE_DEEP_GEMM=1 canhazgpu run -g 2 -- vllm serve Qwen/Qwen3-30B-A3B-FP8 --trust-remote-code --enable-expert-parallel --data-parallel-size 2 --port 9010 --no-enable-prefix-caching benchmark command : python3 ./benchmarks/benchmark_serving.py --model Qwen/Qwen3-30B-A3B-FP8 --dataset-name sharegpt --port 9010 --dataset-path ./ShareGPT_V3_unfiltered_cleaned_split.json Methodology: Start the server and execute the benchmark command 3 times. Report the best Total Token Throughput numbers. main : ============ Serving Benchmark Result ============
Successful requests: 1000
Benchmark duration (s): 32.44
Total input tokens: 217393
Total generated tokens: 201847
Request throughput (req/s): 30.83
Output token throughput (tok/s): 6222.53
Total Token throughput (tok/s): 12924.31
---------------Time to First Token----------------
Mean TTFT (ms): 6470.31
Median TTFT (ms): 6734.54
P99 TTFT (ms): 12538.94
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 192.93
Median TPOT (ms): 76.87
P99 TPOT (ms): 773.24
---------------Inter-token Latency----------------
Mean ITL (ms): 61.06
Median ITL (ms): 35.02
P99 ITL (ms): 778.17
================================================== This PR: ============ Serving Benchmark Result ============
Successful requests: 1000
Benchmark duration (s): 30.64
Total input tokens: 217393
Total generated tokens: 201847
Request throughput (req/s): 32.64
Output token throughput (tok/s): 6587.82
Total Token throughput (tok/s): 13683.03
---------------Time to First Token----------------
Mean TTFT (ms): 6416.49
Median TTFT (ms): 6604.24
P99 TTFT (ms): 11718.61
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 174.51
Median TPOT (ms): 66.36
P99 TPOT (ms): 776.26
---------------Inter-token Latency----------------
Mean ITL (ms): 54.63
Median ITL (ms): 27.40
P99 ITL (ms): 779.23
================================================== Test Plan local testing : pytest -s tests/kernels/moe/test_silu_mul_fp8_quant_deep_gemm.py e2e testing : server command : VLLM_ALL2ALL_BACKEND="deepep_low_latency" VLLM_USE_DEEP_GEMM=1 canhazgpu run -g 2 -- vllm serve Qwen/Qwen3-30B-A3B-FP8 --trust-remote-code --enable-expert-parallel --data-parallel-size 2 --port 9010 --no-enable-prefix-caching lm_eval command : lm_eval --model local-completions --tasks gsm8k --model_args model=Qwen/Qwen3-30B-A3B-FP8,base_url=http://127.0.0.1:9010/v1/completions,num_concurrent=30,max_retries=3 --limit 100 Test Result tests/kernels/moe/test_silu_mul_fp8_quant_deep_gemm.py test passes locally lm_eval output : |Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ | 0.84|± |0.0368|
| | |strict-match | 5|exact_match|↑ | 0.95|± |0.0219| (Optional) Documentation Update Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions num_warps & num_stages tweak … f134464 Signed-off-by: Varun Sundar Rabindranath <[email protected]> Copy link github-actions bot commented Jul 18, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . gemini-code-assist bot reviewed Jul 18, 2025 View reviewed changes Copy link Contributor gemini-code-assist bot left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Code Review This pull request introduces performance optimizations to the silu_mul_fp8_quant_deep_gemm Triton kernel. The changes involve switching from a manual while loop to tl.range to enable software pipelining, and tuning the num_warps and NUM_STAGES parameters. The code modifications are correct and follow Triton best practices for performance. The provided micro-benchmarks demonstrate a significant performance improvement, which validates the tuning choices. The changes are well-contained and improve the efficiency of the kernel as intended. I have no further comments. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions tlrmchlsmth approved these changes Jul 18, 2025 View reviewed changes tlrmchlsmth added
the ready ONLY add when PR is ready to merge/full CI is needed label Jul 18, 2025 tlrmchlsmth enabled auto-merge (squash) July 18, 2025 16:36 simon-mo disabled auto-merge July 19, 2025 06:09 Hide details View details simon-mo merged commit dcc6cfb into vllm-project : main Jul 19, 2025 78 of 79 checks passed Uh oh! There was an error while loading. Please reload this page . hj-mistral pushed a commit
to hj-mistral/vllm
that referenced
this pull request Jul 19, 2025 [Kernel][Performance] Tweak MoE Batched silu_mul_fp8_quant_deep_gemm … … 58ad0a6 …kernel ( vllm-project#21193 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Himanshu Jaju <[email protected]> LyrisZhong pushed a commit
to LyrisZhong/vllm
that referenced
this pull request Jul 23, 2025 [Kernel][Performance] Tweak MoE Batched silu_mul_fp8_quant_deep_gemm … … d07d2ed …kernel ( vllm-project#21193 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]> tlrmchlsmth mentioned this pull request Jul 24, 2025 [RFC]: Data Parallel Attention and Expert Parallel MoEs #16037 Open 37 tasks avigny pushed a commit
to avigny/vllm
that referenced
this pull request Jul 31, 2025 [Kernel][Performance] Tweak MoE Batched silu_mul_fp8_quant_deep_gemm … … 5ee1aab …kernel ( vllm-project#21193 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: avigny <[email protected]> x22x22 pushed a commit
to x22x22/vllm
that referenced
this pull request Aug 5, 2025 [Kernel][Performance] Tweak MoE Batched silu_mul_fp8_quant_deep_gemm … … 5070713 …kernel ( vllm-project#21193 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: x22x22 <[email protected]> Pradyun92 pushed a commit
to Pradyun92/vllm
that referenced
this pull request Aug 6, 2025 [Kernel][Performance] Tweak MoE Batched silu_mul_fp8_quant_deep_gemm … … c87a2d4 …kernel ( vllm-project#21193 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]> npanpaliya pushed a commit
to odh-on-pz/vllm-upstream
that referenced
this pull request Aug 6, 2025 [Kernel][Performance] Tweak MoE Batched silu_mul_fp8_quant_deep_gemm … … 60013fe …kernel ( vllm-project#21193 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]> jinzhen-lin pushed a commit
to jinzhen-lin/vllm
that referenced
this pull request Aug 9, 2025 [Kernel][Performance] Tweak MoE Batched silu_mul_fp8_quant_deep_gemm … … 7a09a5b …kernel ( vllm-project#21193 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]> paulpak58 pushed a commit
to paulpak58/vllm
that referenced
this pull request Aug 13, 2025 [Kernel][Performance] Tweak MoE Batched silu_mul_fp8_quant_deep_gemm … … 999d5e4 …kernel ( vllm-project#21193 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Paul Pak <[email protected]> taneem-ibrahim pushed a commit
to taneem-ibrahim/vllm
that referenced
this pull request Aug 14, 2025 [Kernel][Performance] Tweak MoE Batched silu_mul_fp8_quant_deep_gemm … … ef2c87e …kernel ( vllm-project#21193 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]> diegocastanibm pushed a commit
to diegocastanibm/vllm
that referenced
this pull request Aug 15, 2025 [Kernel][Performance] Tweak MoE Batched silu_mul_fp8_quant_deep_gemm … … cbc3340 …kernel ( vllm-project#21193 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Diego-Castan <[email protected]> epwalsh pushed a commit
to epwalsh/vllm
that referenced
this pull request Aug 27, 2025 [Kernel][Performance] Tweak MoE Batched silu_mul_fp8_quant_deep_gemm … … 463fcc1 …kernel ( vllm-project#21193 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]> googlercolin pushed a commit
to googlercolin/vllm
that referenced
this pull request Aug 29, 2025 [Kernel][Performance] Tweak MoE Batched silu_mul_fp8_quant_deep_gemm … … ec28a1c …kernel ( vllm-project#21193 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
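The gemini review in this thread summarizes the change as replacing a manual while loop with tl.range (so Triton can software-pipeline the loads) and tuning num_warps and NUM_STAGES. Below is a stripped-down sketch of that launch/loop pattern; it is not the actual silu_mul_fp8_quant_deep_gemm kernel, and the kernel body, shapes, and constant values are illustrative.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def scaled_copy_kernel(x_ptr, y_ptr, n_elements,
                       BLOCK: tl.constexpr, NUM_STAGES: tl.constexpr):
    pid = tl.program_id(0)
    # tl.range with num_stages lets the compiler pipeline loads across
    # iterations, the same trick the PR applies to the quant kernel's loop.
    for i in tl.range(0, tl.cdiv(n_elements, BLOCK * tl.num_programs(0)),
                      num_stages=NUM_STAGES):
        offs = (pid + i * tl.num_programs(0)) * BLOCK + tl.arange(0, BLOCK)
        mask = offs < n_elements
        x = tl.load(x_ptr + offs, mask=mask)
        tl.store(y_ptr + offs, x * 2.0, mask=mask)

x = torch.randn(1 << 20, device="cuda")
y = torch.empty_like(x)
grid = (128,)
# num_warps and NUM_STAGES are the launch-time knobs this PR tunes.
scaled_copy_kernel[grid](x, y, x.numel(), BLOCK=1024, NUM_STAGES=4, num_warps=4)
```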
|
2025-09-07 17:50:33
|
8a4e5c5f3c1d39e924e48a87c9cc6cf382aa3532
|
https://github.com/vllm-project/vllm/pull/20906
| false | true | true | true |
PERF: TTFT, TTFT, Optimization | SERVING: online_serving/disaggregated_serving_p2p_nccl_xpyd/disagg_proxy_p2p_nccl_xpyd.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . examples/online_serving | TEST: test, CI, CI
|
Copy link Contributor Abatom commented Jul 14, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . Enhance Performance and code readability for P2pNcclConnector, follow-up #18242 . The design document and user manual can be found in docs/design/v1/p2p_nccl_connector.md . The KVCache sender offloads the KVCache extraction and reshape operations to a dedicated sending thread, thereby reducing the load on the main thread and lowering the TTFT. Fix the issue of occasional garbled output when receiving a temporarily created empty tensor and ncclRecv are not in the same stream for "GET". Optimize the proxy so that when there are no active requests, instances P or D can be automatically removed. Handling abnormal crashes will be addressed in a follow-up PR; in PR [V1][P/D]Support automatic instance removal after crash for P2pNcclConnector #20006 I attempted to cover this scenario and found that removal is far more complex than scaling up. Resolving the P2pNcclConnector crash caused by PR [KVConnector] Aggregate finished requests on the scheduler #19555 . In PR [V1][P/D]Enhance Performance for P2pNcclConnector #20074 , I attempted to support asynchronous loading of the KV cache and release the KV cache after sending is completed; however, performance actually degraded. Once this PR is merged, I will revisit these two features on top of it. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Abatom added 8 commits July 13, 2025 22:57 reshape … 4c9bbfd Signed-off-by: Abatom <[email protected]> Merge branch 'main' into send 6efd67d bool … 8144527 Signed-off-by: Abatom <[email protected]> bool … 2f71a7d Signed-off-by: Abatom <[email protected]> staticmethod … 31cff5c Signed-off-by: Abatom <[email protected]> code readability … d3c9ee0 Signed-off-by: Abatom <[email protected]> proxy … d72dfb9 Signed-off-by: Abatom <[email protected]> mod doc … 8ed454d Signed-off-by: Abatom <[email protected]> Abatom requested a review
from hmellor as a code owner July 14, 2025 07:07 mergify bot added
the documentation Improvements or additions to documentation label Jul 14, 2025 Copy link github-actions bot commented Jul 14, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . gemini-code-assist bot reviewed Jul 14, 2025 View reviewed changes Copy link Contributor gemini-code-assist bot left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Summary of Changes Hello @Abatom , I'm Gemini Code Assist 1 ! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request focuses on enhancing the performance and code readability of the P2pNcclConnector. It introduces optimizations to reduce main thread load, fixes a garbled output issue, and optimizes proxy resource management. Additionally, internal methods have been refactored to improve code quality. Highlights Performance Enhancement : Offloads KVCache extraction and reshape operations to a dedicated sending thread, reducing the load on the main thread and lowering TTFT. Bug Fix : Resolves an issue causing occasional garbled output when receiving a temporarily created empty tensor, ensuring ncclRecv operations are synchronized. Proxy Optimization : Improves proxy functionality by automatically removing P or D instances when there are no active requests, optimizing resource utilization. Code Modernization : Refactors internal methods to improve code readability and maintainability. Using Gemini Code Assist The full guide for Gemini Code Assist can be found on our documentation page , here are some quick tips. Invoking Gemini You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command> . Below is a summary of the supported commands. Feature Command Description Code Review /gemini review Performs a code review for the current pull request in its current state. Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state. Comment @gemini-code-assist Responds in comments when explicitly tagged, both in issue comments and review comments. Help /gemini help Displays a list of available commands. 
gemini-code-assist bot reviewed Jul 14, 2025 View reviewed changes Copy link Contributor gemini-code-assist bot left a comment Code Review This pull request introduces significant performance and readability enhancements for the P2pNcclConnector . Key changes include offloading KVCache extraction to a dedicated sending thread, fixing a race condition with ncclRecv by ensuring operations are on the same CUDA stream, and implementing automatic cleanup of stale proxy instances. The refactoring of methods from private to public in p2p_nccl_engine.py also improves the API design. Overall, the changes are well-aligned with the PR's objectives. I've identified one critical bug related to a method rename that was missed, which would cause a runtime error. I've also included a few medium-severity suggestions to improve code clarity and maintainability. vllm/distributed/kv_transfer/kv_connector/v1/p2p/p2p_nccl_engine.py Outdated Show resolved Hide resolved examples/online_serving/disaggregated_serving_p2p_nccl_xpyd/disagg_proxy_p2p_nccl_xpyd.py Show resolved Hide resolved examples/online_serving/disaggregated_serving_p2p_nccl_xpyd/disagg_proxy_p2p_nccl_xpyd.py Outdated Show resolved Hide resolved vllm/distributed/kv_transfer/kv_connector/v1/p2p/p2p_nccl_engine.py Outdated Show resolved Hide resolved
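A tiny sketch of the dedicated-sender-thread pattern the review describes: the main thread only enqueues work, while a worker thread performs the heavy extraction/reshape and the actual send. Every name here is illustrative (SendQueueItem is borrowed from one of the PR's commit titles); the real connector additionally manages NCCL streams and tensor lifetimes.

```python
import queue
import threading
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class SendQueueItem:
    request_id: str
    kv_blocks: Any  # whatever the connector extracts per request

class KVCacheSender:
    def __init__(self, send_fn: Callable[[SendQueueItem], None]):
        self._send_fn = send_fn
        self._queue: "queue.Queue[SendQueueItem | None]" = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def submit(self, item: SendQueueItem) -> None:
        # Called from the main thread: cheap, never blocks on extraction or I/O.
        self._queue.put(item)

    def _run(self) -> None:
        while True:
            item = self._queue.get()
            if item is None:  # shutdown sentinel
                break
            # Heavy work (extract/reshape/serialize + send) stays off the main thread.
            self._send_fn(item)

    def close(self) -> None:
        self._queue.put(None)
        self._worker.join()

# Usage: sender = KVCacheSender(send_fn=lambda item: print("sending", item.request_id))
```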
Abatom added 10 commits July 14, 2025 15:21 have_sent_tensor_id … ca02a98 Signed-off-by: Abatom <[email protected]> mod log … 4441b2c Signed-off-by: Abatom <[email protected]> SendQueueItem … 8adac0c Signed-off-by: Abatom <[email protected]> console … 416e6b7 Signed-off-by: Abatom <[email protected]> console … 81b2f0b Signed-off-by: Abatom <[email protected]> format … 847282b Signed-off-by: Abatom <[email protected]> PUT_ASYNC … f97ecf9 Signed-off-by: Abatom <[email protected]> mod doc … 5eb5edc Signed-off-by: Abatom <[email protected]> format … b85043e Signed-off-by: Abatom <[email protected]> SPDX … 8126ed0 Signed-off-by: Abatom <[email protected]> Abatom changed the title [WIP][V1][P/D]Enhance Performance and code readability for P2pNcclConnector [V1][P/D]Enhance Performance and code readability for P2pNcclConnector Jul 14, 2025 Abatom added 5 commits July 15, 2025 16:45 mod doc … a5fcacd Signed-off-by: Abatom <[email protected]> Merge branch 'main' into send 4256b01 no_compile_layers … 6af393a Signed-off-by: Abatom <[email protected]> format … cd11f33 Signed-off-by: Abatom <[email protected]> mod doc … 113993c Signed-off-by: Abatom <[email protected]> KuntaiDu approved these changes Jul 16, 2025 View reviewed changes Copy link Collaborator KuntaiDu left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment LGTM! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . ❤️ 1 Abatom reacted with heart emoji All reactions ❤️ 1 reaction simon-mo added
the ready ONLY add when PR is ready to merge/full CI is needed label Jul 17, 2025 Hide details View details simon-mo merged commit 8a4e5c5 into vllm-project : main Jul 17, 2025 80 of 82 checks passed Uh oh! There was an error while loading. Please reload this page . hj-mistral pushed a commit
to hj-mistral/vllm
that referenced
this pull request Jul 19, 2025 [V1][P/D]Enhance Performance and code readability for P2pNcclConnector ( … cc76e0b vllm-project#20906 )
Signed-off-by: Abatom <[email protected]>
Signed-off-by: Himanshu Jaju <[email protected]> LyrisZhong pushed a commit
to LyrisZhong/vllm
that referenced
this pull request Jul 23, 2025 [V1][P/D]Enhance Performance and code readability for P2pNcclConnector ( … 2f0aa79 vllm-project#20906 )
Signed-off-by: Abatom <[email protected]> avigny pushed a commit
to avigny/vllm
that referenced
this pull request Jul 31, 2025 [V1][P/D]Enhance Performance and code readability for P2pNcclConnector ( … aef48d4 vllm-project#20906 )
Signed-off-by: Abatom <[email protected]>
Signed-off-by: avigny <[email protected]> x22x22 pushed a commit
to x22x22/vllm
that referenced
this pull request Aug 5, 2025 [V1][P/D]Enhance Performance and code readability for P2pNcclConnector ( … e9ea31d vllm-project#20906 )
Signed-off-by: Abatom <[email protected]>
Signed-off-by: x22x22 <[email protected]> Pradyun92 pushed a commit
to Pradyun92/vllm
that referenced
this pull request Aug 6, 2025 [V1][P/D]Enhance Performance and code readability for P2pNcclConnector ( … be0e12d vllm-project#20906 )
Signed-off-by: Abatom <[email protected]> npanpaliya pushed a commit
to odh-on-pz/vllm-upstream
that referenced
this pull request Aug 6, 2025 [V1][P/D]Enhance Performance and code readability for P2pNcclConnector ( … e2e9f64 vllm-project#20906 )
Signed-off-by: Abatom <[email protected]> jinzhen-lin pushed a commit
to jinzhen-lin/vllm
that referenced
this pull request Aug 9, 2025 [V1][P/D]Enhance Performance and code readability for P2pNcclConnector ( … a3af660 vllm-project#20906 )
Signed-off-by: Abatom <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]> paulpak58 pushed a commit
to paulpak58/vllm
that referenced
this pull request Aug 13, 2025 [V1][P/D]Enhance Performance and code readability for P2pNcclConnector ( … e42182b vllm-project#20906 )
Signed-off-by: Abatom <[email protected]>
Signed-off-by: Paul Pak <[email protected]> taneem-ibrahim pushed a commit
to taneem-ibrahim/vllm
that referenced
this pull request Aug 14, 2025 [V1][P/D]Enhance Performance and code readability for P2pNcclConnector ( … 65558a4 vllm-project#20906 )
Signed-off-by: Abatom <[email protected]> diegocastanibm pushed a commit
to diegocastanibm/vllm
that referenced
this pull request Aug 15, 2025 [V1][P/D]Enhance Performance and code readability for P2pNcclConnector ( … f189c1c vllm-project#20906 )
Signed-off-by: Abatom <[email protected]>
Signed-off-by: Diego-Castan <[email protected]> Abatom mentioned this pull request Aug 22, 2025 [Bugfix][V1][P/D]Fix the issue where repeated requests for the same input produce abnormal outputs for P2pNcclConnector #23403 Merged 4 tasks epwalsh pushed a commit
to epwalsh/vllm
that referenced
this pull request Aug 27, 2025 [V1][P/D]Enhance Performance and code readability for P2pNcclConnector ( … 67bdc76 vllm-project#20906 )
Signed-off-by: Abatom <[email protected]> googlercolin pushed a commit
to googlercolin/vllm
that referenced
this pull request Aug 29, 2025 [V1][P/D]Enhance Performance and code readability for P2pNcclConnector ( … bc17546 vllm-project#20906 )
Signed-off-by: Abatom <[email protected]>
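Referring back to the race-condition fix described in the bot review near the top of this record (ncclRecv kept on the same CUDA stream as the operations that produce and consume the destination buffer), a minimal stream-ordering sketch using plain PyTorch stream APIs; recv_op stands in for an NCCL receive wrapper and is an assumption, not the connector's real interface:

import torch


def recv_on_stream(buf: torch.Tensor, recv_op, comm_stream: "torch.cuda.Stream") -> None:
    # Issue the receive on the stream that owns `buf`, so it cannot race with
    # kernels that write or read the buffer on that stream.
    with torch.cuda.stream(comm_stream):
        recv_op(buf)
    # Anything on the current stream that later touches `buf` must wait for the recv.
    torch.cuda.current_stream().wait_stream(comm_stream)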
|
2025-09-07 17:50:37
|
c0569dbc82b5e945a77878190114d1b68027828b
|
https://github.com/vllm-project/vllm/pull/20725
| true | true | true | true |
LM_EVAL: lm-eval, lm_eval, gsm8k | PERF: Throughput, improvement | SERVING: vllm serve, vllm serve, serve | TEST: Test, Test, test
|
Copy link Contributor varun-sundar-rabindranath commented Jul 10, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . Purpose Perform weight-application and reduction inside the TritonExperts and DeepGemmExperts. This helps save memory. For example please refer to #20228 Changes: Add topk_weights and apply_router_weight_on_input args to FusedMoEPermuteExpertsUnpermute::apply functions - so the implementations can perform topk-weight application if they wish to. Adjust workspace reuse in TritonExperts and DeepGemmExperts to accommodate weight-application and reduction. Test Plan pytest : pytest -s tests/kernels/moe/test_modular_kernel_combinations.py e2e tests: Using TritonOrDeepGemmExperts VLLM_ALL2ALL_BACKEND="deepep_high_throughput" VLLM_USE_DEEP_GEMM=1 canhazgpu run -g 2 -- vllm serve Qwen/Qwen3-30B-A3B-FP8 --trust-remote-code --enable-expert-parallel --data-parallel-size 2 --port 9010 Using only TritonExperts VLLM_ALL2ALL_BACKEND="deepep_high_throughput" VLLM_USE_DEEP_GEMM=0 canhazgpu run -g 2 -- vllm serve Qwen/Qwen3-30B-A3B-FP8 --trust-remote-code --enable-expert-parallel --data-parallel-size 2 --port 9010 lm-eval command : lm_eval --model local-completions --tasks gsm8k --model_args model=Qwen/Qwen3-30B-A3B-FP8,base_url=http://127.0.0.1:9010/v1/completions,num_concurrent=30,max_retries=3 --limit 100 Test Result |Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ | 0.86|± |0.0349|
| | |strict-match | 5|exact_match|↑ | 0.92|± |0.0273| |Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ | 0.80|± |0.0402|
| | |strict-match | 5|exact_match|↑ | 0.91|± |0.0288| (Optional) Documentation Update Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions varun-sundar-rabindranath requested review from tlrmchlsmth and WoosukKwon as code owners July 10, 2025 00:41 Copy link github-actions bot commented Jul 10, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . varun-sundar-rabindranath marked this pull request as draft July 10, 2025 00:42 Copy link mergify bot commented Jul 10, 2025 This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @varun-sundar-rabindranath . https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the needs-rebase label Jul 10, 2025 gemini-code-assist bot reviewed Jul 10, 2025 View reviewed changes Copy link Contributor gemini-code-assist bot left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Summary of Changes Hello @varun-sundar-rabindranath , I'm Gemini Code Assist 1 ! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request enhances the FusedMoEPermuteExpertsUnpermute module by enabling weight application and reduction within the fused operation. This is achieved through the introduction of the TopKWeightAndReduce abstraction, which allows implementations to specify how the finalize() method should behave. The PR also standardizes weight application and reduction implementations, improving code organization and maintainability. Highlights MoE Reduction : Adds the ability to perform MoE reduction within the FusedMoEPermuteExpertsUnpermute operation, allowing for memory footprint reduction. TopKWeightAndReduce Abstraction : Introduces the TopKWeightAndReduce abstraction to standardize weight application and reduction implementations, providing flexibility in the finalize() method. Standardization : Consolidates all weight-application and reduction implementations into a single location for better maintainability. Using Gemini Code Assist The full guide for Gemini Code Assist can be found on our documentation page , here are some quick tips. Invoking Gemini You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command> . Below is a summary of the supported commands. Feature Command Description Code Review /gemini review Performs a code review for the current pull request in its current state. Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state. Comment @gemini-code-assist Responds in comments when explicitly tagged, both in issue comments and review comments. Help /gemini help Displays a list of available commands. Customization To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here . Limitations & Feedback Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback. You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension . Footnotes Review the Privacy Notices , Generative AI Prohibited Use Policy , Terms of Service , and learn how to configure Gemini Code Assist in GitHub here . Gemini can make mistakes, so double check it and use code with caution . ↩ Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 
All reactions gemini-code-assist bot reviewed Jul 10, 2025 View reviewed changes Copy link Contributor gemini-code-assist bot left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Code Review This pull request introduces a significant and well-designed refactoring to make MoE kernels more modular. By abstracting the weight application and reduction logic into a WeightAndReduce class, it allows different FusedMoEPermuteExpertsUnpermute implementations to either perform this step themselves or delegate it to the finalize stage. This is a great improvement for code clarity, reusability, and will help in reducing memory footprint as intended. The changes are well-implemented across the affected files. My feedback focuses on a few areas where code can be made more concise and consistent with the established API contracts. These are minor points in an otherwise excellent PR. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/model_executor/layers/fused_moe/batched_triton_or_deep_gemm_moe.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/model_executor/layers/fused_moe/deep_gemm_moe.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/model_executor/layers/fused_moe/triton_deep_gemm_moe.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . varun-sundar-rabindranath force-pushed the varun/experts-reduce branch
from e797a42 to 27306fa Compare July 10, 2025 00:51 mergify bot removed
the needs-rebase label Jul 10, 2025 varun-sundar-rabindranath force-pushed the varun/experts-reduce branch
from 27306fa to 3f1d2da Compare July 10, 2025 19:55 Copy link mergify bot commented Jul 10, 2025 This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @varun-sundar-rabindranath . https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the needs-rebase label Jul 10, 2025 varun-sundar-rabindranath force-pushed the varun/experts-reduce branch
from 3d3003a to 4389c7a Compare July 11, 2025 01:36 mergify bot removed
the needs-rebase label Jul 11, 2025 varun-sundar-rabindranath changed the title [Misc] Modular Kernel : Add ability to MoE reduce in FusedMoEPermuteExpertsUnpermute [Misc] ModularKernel : Perform WeightAndReduce inside TritonExperts & DeepGemmExperts Jul 11, 2025 varun-sundar-rabindranath marked this pull request as ready for review July 11, 2025 02:42 varun-sundar-rabindranath commented Jul 11, 2025 on vllm/model_executor/layers/fused_moe/deep_gemm_moe.py (M_sum, N // 2)) mm2_out = _resize_cache(workspace2, (M_sum, K)) mm2_out = _resize_cache(workspace13, (M_sum, K)) perm_out = _resize_cache(workspace2, (M * num_topk, K)) varun-sundar-rabindranath Jul 11, 2025: rearrange how workspaces are used to make space for perm_out - note that perm_out cannot use workspace13, as workspace13 may be used as the output tensor ( vllm/vllm/model_executor/layers/fused_moe/modular_kernel.py Line 486
in 5923ab9 fused_out = _resize_cache ( workspace13 , fused_out_shape ) ) varun-sundar-rabindranath commented Jul 11, 2025 on vllm/model_executor/layers/fused_moe/fused_moe.py (num_tokens * top_k_num, N // 2)) intermediate_cache3 = _resize_cache(workspace2, (num_tokens, top_k_num, K)) varun-sundar-rabindranath Jul 11, 2025: rearrange how workspaces are used to make space for intermediate_cache3 - note that intermediate_cache3 cannot use workspace13, as workspace13 may be used as the output tensor. mergify bot commented Jul 11, 2025: This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @varun-sundar-rabindranath . https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork mergify bot added
the needs-rebase label Jul 11, 2025 varun-sundar-rabindranath force-pushed the varun/experts-reduce branch
from 4389c7a to c5fd979 Compare July 11, 2025 16:56 mergify bot removed
the needs-rebase label Jul 11, 2025 This was referenced Jul 12, 2025 [Kernels][Misc] DeepGemm High-Throughput Optimizations #20228 Closed [Kernel] DeepGemm MoE : Integrate triton permute / unpermute kernels #20903 Merged tlrmchlsmth approved these changes Jul 14, 2025 View reviewed changes vllm/model_executor/layers/fused_moe/topk_weight_and_reduce.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . tlrmchlsmth added
the ready ONLY add when PR is ready to merge/full CI is needed label Jul 14, 2025 Varun Sundar Rabindranath added 3 commits July 14, 2025 16:10 do reduction in experts … c9f2001 Signed-off-by: Varun Sundar Rabindranath <[email protected]> fix workspace overallocation … 4d7e07b Signed-off-by: Varun Sundar Rabindranath <[email protected]> TritonExperts opt … 2961f53 Signed-off-by: Varun Sundar Rabindranath <[email protected]> varun-sundar-rabindranath force-pushed the varun/experts-reduce branch
from e369637 to 2961f53 Compare July 14, 2025 16:13 Copy link Collaborator tlrmchlsmth commented Jul 14, 2025 Confirm that without this PR, I cannot run a full sequence length DeepSeekV3 across 16 H200s and with it I see: GPU KV cache size: 236,736 tokens 🎉 1 robertgshaw2-redhat reacted with hooray emoji All reactions 🎉 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . tlrmchlsmth enabled auto-merge (squash) July 14, 2025 18:04 Hide details View details tlrmchlsmth merged commit c0569db into vllm-project : main Jul 14, 2025 68 checks passed Uh oh! There was an error while loading. Please reload this page . py-andy-c pushed a commit
to py-andy-c/vllm
that referenced
this pull request Jul 14, 2025 [Misc] ModularKernel : Perform WeightAndReduce inside TritonExperts &… … 5dfb1a9 … DeepGemmExperts ( vllm-project#20725 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]> py-andy-c pushed a commit
to py-andy-c/vllm
that referenced
this pull request Jul 14, 2025 [Misc] ModularKernel : Perform WeightAndReduce inside TritonExperts &… … d9b727c … DeepGemmExperts ( vllm-project#20725 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]> patrickvonplaten pushed a commit
to patrickvonplaten/vllm
that referenced
this pull request Jul 15, 2025 [Misc] ModularKernel : Perform WeightAndReduce inside TritonExperts &… … 8595ba0 … DeepGemmExperts ( vllm-project#20725 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Patrick von Platen <[email protected]> LyrisZhong pushed a commit
to LyrisZhong/vllm
that referenced
this pull request Jul 23, 2025 [Misc] ModularKernel : Perform WeightAndReduce inside TritonExperts &… … 8150275 … DeepGemmExperts ( vllm-project#20725 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]> avigny pushed a commit
to avigny/vllm
that referenced
this pull request Jul 31, 2025 [Misc] ModularKernel : Perform WeightAndReduce inside TritonExperts &… … 0bee6a6 … DeepGemmExperts ( vllm-project#20725 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: avigny <[email protected]> x22x22 pushed a commit
to x22x22/vllm
that referenced
this pull request Aug 5, 2025 [Misc] ModularKernel : Perform WeightAndReduce inside TritonExperts &… … 3eba418 … DeepGemmExperts ( vllm-project#20725 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: x22x22 <[email protected]> Pradyun92 pushed a commit
to Pradyun92/vllm
that referenced
this pull request Aug 6, 2025 [Misc] ModularKernel : Perform WeightAndReduce inside TritonExperts &… … 813b32a … DeepGemmExperts ( vllm-project#20725 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]> npanpaliya pushed a commit
to odh-on-pz/vllm-upstream
that referenced
this pull request Aug 6, 2025 [Misc] ModularKernel : Perform WeightAndReduce inside TritonExperts &… … 8e72fe1 … DeepGemmExperts ( vllm-project#20725 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]> jinzhen-lin pushed a commit
to jinzhen-lin/vllm
that referenced
this pull request Aug 9, 2025 [Misc] ModularKernel : Perform WeightAndReduce inside TritonExperts &… … 98a3732 … DeepGemmExperts ( vllm-project#20725 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]> paulpak58 pushed a commit
to paulpak58/vllm
that referenced
this pull request Aug 13, 2025 [Misc] ModularKernel : Perform WeightAndReduce inside TritonExperts &… … 1bb105e … DeepGemmExperts ( vllm-project#20725 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Paul Pak <[email protected]> taneem-ibrahim pushed a commit
to taneem-ibrahim/vllm
that referenced
this pull request Aug 14, 2025 [Misc] ModularKernel : Perform WeightAndReduce inside TritonExperts &… … 7d7f94b … DeepGemmExperts ( vllm-project#20725 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]> diegocastanibm pushed a commit
to diegocastanibm/vllm
that referenced
this pull request Aug 15, 2025 [Misc] ModularKernel : Perform WeightAndReduce inside TritonExperts &… … 6c7acc9 … DeepGemmExperts ( vllm-project#20725 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Diego-Castan <[email protected]> epwalsh pushed a commit
to epwalsh/vllm
that referenced
this pull request Aug 27, 2025 [Misc] ModularKernel : Perform WeightAndReduce inside TritonExperts &… … a202b30 … DeepGemmExperts ( vllm-project#20725 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]> googlercolin pushed a commit
to googlercolin/vllm
that referenced
this pull request Aug 29, 2025 [Misc] ModularKernel : Perform WeightAndReduce inside TritonExperts &… … 9737a2e … DeepGemmExperts ( vllm-project#20725 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
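Tying the merged change back to the purpose stated at the top of this record, performing weight application and reduction inside TritonExperts / DeepGemmExperts amounts to scaling each token's top-k expert outputs by the router weights and summing over the top-k dimension before the result leaves the expert block. A small PyTorch sketch of that reference semantics (illustrative only; the real code does this inside fused Triton / DeepGEMM kernels):

import torch


def weight_and_reduce(expert_out: torch.Tensor,
                      topk_weights: torch.Tensor,
                      apply_router_weight_on_input: bool) -> torch.Tensor:
    # expert_out: (M * topk, K) per-(token, expert) outputs, already unpermuted.
    # topk_weights: (M, topk) router weights for each token's selected experts.
    M, topk = topk_weights.shape
    out = expert_out.view(M, topk, -1)
    if not apply_router_weight_on_input:
        # Only scale here if the router weights were not already folded into the inputs.
        out = out * topk_weights.unsqueeze(-1).to(out.dtype)
    return out.sum(dim=1)  # (M, K): one reduced hidden state per token

The workspace comments in the review above follow from this layout: once the expert block also produces the reduced (M, K) result, its intermediate permuted buffer can no longer be carved out of workspace13, because modular_kernel.py may hand workspace13 back as the final output tensor.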
|
2025-09-07 17:50:41
|
22dd9c2730dc1124b9d0ac15fff223d0b8d9020b
|
https://github.com/vllm-project/vllm/pull/20308
| true | true | true | true |
LM_EVAL: lm_eval, lm_eval, lm_eval | PERF: TTFT, TTFT, TTFT | SERVING: Serving, Serving, serving | TEST: test, CI, CI
|
Copy link Contributor jvlunteren commented Jul 1, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . This PR introduces an optimization to the unified triton attention kernel ( #16828 and #19152 ) that enhances prefill attention performance. The key improvement involves reducing the number of tiles processed during the prefill phase by leveraging the causal mask to skip unnecessary computations. This results in more efficient execution, particularly for long prompts. Performance The following results were obtained for meta-llama/Llama-3.1-8B-Instruct on an NVIDIA H100 GPU, by running $ VLLM_ATTENTION_BACKEND=TRITON_ATTN_VLLM_V1 VLLM_USE_V1=1 python benchmarks/benchmark_latency.py \
--model meta-llama/Llama-3.1-8B-Instruct \
--input-len <input-length> --output-len 4 \
--batch-size <batch-size> for the current triton unified attention kernel, and the updated triton unified attention kernel (this PR). Results for a batch size 1 are shown in the following graph. The input (prompt) length (in tokens) was varied in these experiments across the following values: 500, 1000, 1500, 2000, 4000, 8000, and 16000. The number of warmup iterations and measurement iterations were left at the default values of 10 and 30 respectively. As illustrated in the graph above, this PR improves the performance of the Triton Unified Attention Kernel by approximately 1.75 times for a batch size of 1 and an input length of 16000 tokens. Additional results were collected using benchmark_serving.py , which only includes sequence lengths under 2000 tokens: Current triton unified attention kernel: $ python benchmarks/benchmark_serving.py \
--model meta-llama/Llama-3.1-8B-Instruct \
--dataset-name sharegpt \
--dataset-path ShareGPT_V3_unfiltered_cleaned_split.json
============ Serving Benchmark Result ============
Successful requests: 984
Benchmark duration (s): 22.18
Total input tokens: 210771
Total generated tokens: 195009
Request throughput (req/s): 44.37
Output token throughput (tok/s): 8793.44
Total Token throughput (tok/s): 18297.62
---------------Time to First Token----------------
Mean TTFT (ms): 3874.12
Median TTFT (ms): 3715.54
P99 TTFT (ms): 7060.57
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 88.60
Median TPOT (ms): 51.50
P99 TPOT (ms): 233.82
---------------Inter-token Latency----------------
Mean ITL (ms): 40.26
Median ITL (ms): 25.51
P99 ITL (ms): 239.07
================================================== Updated triton unified attention kernel (this PR): $ python benchmarks/benchmark_serving.py \
--model meta-llama/Llama-3.1-8B-Instruct \
--dataset-name sharegpt \
--dataset-path ShareGPT_V3_unfiltered_cleaned_split.json
============ Serving Benchmark Result ============
Successful requests: 984
Benchmark duration (s): 21.44
Total input tokens: 210460
Total generated tokens: 195875
Request throughput (req/s): 45.90
Output token throughput (tok/s): 9137.19
Total Token throughput (tok/s): 18954.74
---------------Time to First Token----------------
Mean TTFT (ms): 3588.36
Median TTFT (ms): 3478.75
P99 TTFT (ms): 6540.15
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 83.17
Median TPOT (ms): 47.72
P99 TPOT (ms): 220.70
---------------Inter-token Latency----------------
Mean ITL (ms): 38.12
Median ITL (ms): 25.28
P99 ITL (ms): 223.90 Despite the relatively short prompt lengths used in this benchmark, the results still demonstrate a ~3% improvement in throughput, along with over 5% reductions in latency metrics (TTFT, TPOT, and ITL). Correctness V1 FlashAttention : VLLM_USE_V1=1 lm_eval --model vllm --model_args pretrained=meta-llama/Llama-3.1-8B-Instruct --tasks gsm8k --num_fewshot 5 --batch_size auto --limit 500
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.794|± |0.0181|
| | |strict-match | 5|exact_match|↑ |0.772|± |0.0188| Updated triton unified attention kernel (this PR): VLLM_USE_V1=1 VLLM_ATTENTION_BACKEND=TRITON_ATTN_VLLM_V1 lm_eval --model vllm --model_args pretrained=meta-llama/Llama-3.1-8B-Instruct --tasks gsm8k --num_fewshot 5 --batch_size auto --limit 500
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.800|± |0.0179|
| | |strict-match | 5|exact_match|↑ |0.784|± |0.0184| How is this performance improvement achieved? The triton unified attention kernel employs a loop that iteratively processes multiple tiles, computing attention locally for each tile and accumulating the results across tiles to form the final output. During prefill processing, a causal mask is applied to each tile to ensure that attention is computed only over past and current tokens. In the current implementation, up to half of the tiles may be fully masked out during processing, resulting in redundant computation and reduced efficiency. This PR addresses the issue by skipping such tiles, ensuring that only those containing unmasked tokens are processed. cc @tdoublep Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions optimization of causal masking for prefill attention … 9641d0f Signed-off-by: Jan van Lunteren <[email protected]> gemini-code-assist bot reviewed Jul 1, 2025 View reviewed changes Copy link Contributor gemini-code-assist bot left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Summary of Changes Hello @jvlunteren , I'm Gemini Code Assist 1 ! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request focuses on optimizing the prefill attention phase within the Triton Unified Attention Kernel. The core improvement involves intelligently skipping unnecessary computations for fully masked tiles, leading to enhanced efficiency, especially for processing long input sequences. This results in notable performance gains across various prompt lengths while maintaining model correctness. Highlights Performance Optimization : Implemented an optimization to the Triton Unified Attention Kernel specifically targeting the prefill attention phase. Causal Mask Leveraging : The optimization works by leveraging the causal mask to identify and skip processing of fully masked tiles, thereby reducing redundant computations during prefill. Significant Speedup for Long Prompts : Achieves significant performance improvements for very long prompts, demonstrating up to a 1.75x speedup for 16,000-token inputs on NVIDIA H100 GPUs. General Throughput and Latency Gains : Even for typical serving workloads with shorter prompts, the PR shows measurable gains, including approximately 3% improvement in request throughput and over 5% reductions in Time to First Token (TTFT), Time per Output Token (TPOT), and Inter-token Latency (ITL). Correctness Verified : Correctness benchmarks using lm_eval confirm that the optimization maintains or slightly improves model accuracy metrics, ensuring no regression in output quality. Using Gemini Code Assist The full guide for Gemini Code Assist can be found on our documentation page , here are some quick tips. Invoking Gemini You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command> . Below is a summary of the supported commands. Feature Command Description Code Review /gemini review Performs a code review for the current pull request in its current state. 
Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state. Comment @gemini-code-assist Responds in comments when explicitly tagged, both in issue comments and review comments. Help /gemini help Displays a list of available commands. Customization To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here . Limitations & Feedback Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback. You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension . Footnotes Review the Privacy Notices , Generative AI Prohibited Use Policy , Terms of Service , and learn how to configure Gemini Code Assist in GitHub here . Gemini can make mistakes, so double check it and use code with caution . ↩ Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions gemini-code-assist bot reviewed Jul 1, 2025 View reviewed changes Copy link Contributor gemini-code-assist bot left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Code Review This PR optimizes the unified triton attention kernel by reducing the number of tiles processed during the prefill phase, leading to significant performance improvements, especially for long prompts. The change is well-justified by the performance benchmarks. A suggestion has been made to improve the readability of the core calculation for better maintainability. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/attention/ops/triton_unified_attention.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . formatting … 70a6abe Signed-off-by: Jan van Lunteren <[email protected]> Copy link github-actions bot commented Jul 1, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 
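To illustrate the tile skipping explained under "How is this performance improvement achieved?" above, the kernel only needs to visit the K/V tiles that can contain unmasked keys for the current query block; the num_blocks expression quoted in the review comment just below is that upper bound. A simplified Python model of the tile count (an assumed simplification: q_block_rows plays the role of (BLOCK_M - 1) // num_queries_per_kv + 1 in the actual kernel):

def cdiv(a: int, b: int) -> int:
    return (a + b - 1) // b


def tiles_to_process(context_len: int, q_block_start: int, q_block_rows: int,
                     seq_len: int, block_size: int, skip_masked_tiles: bool) -> int:
    if not skip_masked_tiles:
        # Previous behaviour: walk every K/V tile of the sequence.
        return cdiv(seq_len, block_size)
    # With a causal mask, the last query row of this block can see keys only up to
    # position context_len + q_block_start + q_block_rows - 1; later tiles are
    # fully masked and can be skipped.
    visible_keys = min(context_len + q_block_start + q_block_rows, seq_len)
    return cdiv(visible_keys, block_size)


# Example: a 16000-token prompt with 128-token tiles; the first query block now
# touches 1 tile instead of 125.
print(tiles_to_process(0, 0, 128, 16000, 128, skip_masked_tiles=False),
      tiles_to_process(0, 0, 128, 16000, 128, skip_masked_tiles=True))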
SageMoore approved these changes Jul 1, 2025 View reviewed changes Copy link Contributor SageMoore left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Looks good @jvlunteren . Thanks for the contribution! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 jvlunteren reacted with thumbs up emoji All reactions 👍 1 reaction tlrmchlsmth reviewed Jul 1, 2025 View reviewed changes vllm/attention/ops/triton_unified_attention.py Outdated Comment on lines 148 to 151 num_blocks = cdiv_fn( tl.minimum( context_len + q_block_local_idx * BLOCK_Q + (BLOCK_M - 1) // num_queries_per_kv + 1, seq_len), BLOCK_SIZE) Copy link Collaborator tlrmchlsmth Jul 1, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Worth adding a comment to explain the optimization? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 jvlunteren reacted with thumbs up emoji All reactions 👍 1 reaction Copy link Contributor Author jvlunteren Jul 2, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Done! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions tlrmchlsmth approved these changes Jul 1, 2025 View reviewed changes Copy link Collaborator tlrmchlsmth left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Very nice find Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 jvlunteren reacted with thumbs up emoji All reactions 👍 1 reaction added comment … d26316e Signed-off-by: Jan van Lunteren <[email protected]> LucasWilkinson approved these changes Jul 7, 2025 View reviewed changes LucasWilkinson enabled auto-merge (squash) July 7, 2025 14:05 github-actions bot added
the ready ONLY add when PR is ready to merge/full CI is needed label Jul 7, 2025 Hide details View details LucasWilkinson merged commit 22dd9c2 into vllm-project : main Jul 7, 2025 82 checks passed Uh oh! There was an error while loading. Please reload this page . huydhn pushed a commit
to huydhn/vllm
that referenced
this pull request Jul 8, 2025 [Kernel] Optimize Prefill Attention in Unified Triton Attention Kernel ( … 1572109 vllm-project#20308 )
Signed-off-by: Jan van Lunteren <[email protected]> Chen-zexi pushed a commit
to Chen-zexi/vllm
that referenced
this pull request Jul 13, 2025 [Kernel] Optimize Prefill Attention in Unified Triton Attention Kernel ( … d1d442e vllm-project#20308 )
Signed-off-by: Jan van Lunteren <[email protected]> patrickvonplaten pushed a commit
to patrickvonplaten/vllm
that referenced
this pull request Jul 15, 2025 [Kernel] Optimize Prefill Attention in Unified Triton Attention Kernel ( … 182f805 vllm-project#20308 )
Signed-off-by: Jan van Lunteren <[email protected]>
Signed-off-by: Patrick von Platen <[email protected]> LyrisZhong pushed a commit
to LyrisZhong/vllm
that referenced
this pull request Jul 23, 2025 [Kernel] Optimize Prefill Attention in Unified Triton Attention Kernel ( … 6dd288b vllm-project#20308 )
Signed-off-by: Jan van Lunteren <[email protected]> avigny pushed a commit
to avigny/vllm
that referenced
this pull request Jul 31, 2025 [Kernel] Optimize Prefill Attention in Unified Triton Attention Kernel ( … dc7e000 vllm-project#20308 )
Signed-off-by: Jan van Lunteren <[email protected]>
Signed-off-by: avigny <[email protected]> jvlunteren deleted the jvl-causal-mask-opt branch August 4, 2025 08:40 Pradyun92 pushed a commit
to Pradyun92/vllm
that referenced
this pull request Aug 6, 2025 [Kernel] Optimize Prefill Attention in Unified Triton Attention Kernel ( … 7d742bd vllm-project#20308 )
Signed-off-by: Jan van Lunteren <[email protected]> npanpaliya pushed a commit
to odh-on-pz/vllm-upstream
that referenced
this pull request Aug 6, 2025 [Kernel] Optimize Prefill Attention in Unified Triton Attention Kernel ( … bcda609 vllm-project#20308 )
Signed-off-by: Jan van Lunteren <[email protected]> jinzhen-lin pushed a commit
to jinzhen-lin/vllm
that referenced
this pull request Aug 9, 2025 [Kernel] Optimize Prefill Attention in Unified Triton Attention Kernel ( … 6d70cdb vllm-project#20308 )
Signed-off-by: Jan van Lunteren <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]> epwalsh pushed a commit
to epwalsh/vllm
that referenced
this pull request Aug 27, 2025 [Kernel] Optimize Prefill Attention in Unified Triton Attention Kernel ( … 6f1d223 vllm-project#20308 )
Signed-off-by: Jan van Lunteren <[email protected]> googlercolin pushed a commit
to googlercolin/vllm
that referenced
this pull request Aug 29, 2025 [Kernel] Optimize Prefill Attention in Unified Triton Attention Kernel ( … 873623a vllm-project#20308 )
Signed-off-by: Jan van Lunteren <[email protected]>
|
2025-09-07 17:50:44
|
9a3b88328f7e434cac35b90ee463de6689f9a833
|
https://github.com/vllm-project/vllm/pull/19939
| false | true | true | true |
PERF: throughput, latency, Performance Test | SERVING: vllm serve, serve | TEST: test, test, test
|
Copy link Contributor vadiklyutiy commented Jun 21, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . Essential Elements of an Effective PR Description Checklist The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)". The test plan, such as providing test command. The test results, such as pasting the results comparison before and after, or e2e results Purpose Speedup of MRoPE prepare inputs. #16881 got stuck for a while. I combined parts of #16881 and #17617 to minimize changes and brings sufficient speedup. MRotaryEmbedding.get_next_input_positions_tensor takes a lot of time. Use numpy to speedup it. Performance Test Result I used vllm serve Qwen/Qwen2.5-VL-3B-Instruct --disable-log-requests --no-enable-prefix-caching as a server and fib benchmark -rps 50 --input-token-distribution uniform 250 300 --output-token-distribution uniform 150 250 --num-of-imgs-per-req 1 --img-ratios-per-req 512x512 -n 1000 --base-url http://localhost:8000 --endpoint v1/chat/completions --backend openai-chat to make workload (send 50 requests per sec with one 512x512 image per request). I decorated GPUModelRunner._prepare_inputs with nvtx to measure the time. _prepare_inputs average time Before: 3.869 ms With this PR: 1.487 ms (speedup 2.6x) For info, with #17617 : 2.511 ms E2E performance improvement on Qwen2.5-VL-3B-Instruct with high load is around 1.5%. cc @imkero @ywang96 @simon-mo Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Speedup MRoPE prepare inputs … c44c815 Signed-off-by: Vadim Gimpelson <[email protected]> vadiklyutiy requested review from WoosukKwon , robertgshaw2-redhat , njhill , ywang96 , comaniac and alexm-redhat as code owners June 21, 2025 17:54 Copy link github-actions bot commented Jun 21, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . gemini-code-assist bot reviewed Jun 21, 2025 View reviewed changes Copy link Contributor gemini-code-assist bot left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Summary of Changes Hello @vadiklyutiy , I'm Gemini Code Assist 1 ! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! 
This pull request focuses on a targeted performance enhancement for the MRoPE input preparation within the model executor. By transitioning a critical position calculation from a PyTorch-based approach to a more efficient NumPy implementation, it aims to reduce latency in the _prepare_inputs stage, thereby contributing to overall system throughput and responsiveness. The changes are designed to be minimally invasive while delivering substantial speedups in a key computational bottleneck. Highlights Performance Optimization : This pull request introduces a significant performance optimization for the Multi-Query Rotary Positional Embedding (MRoPE) input preparation, specifically targeting the get_next_input_positions_tensor function which was identified as a bottleneck. NumPy Integration : The core of the optimization involves refactoring the position calculation logic to leverage NumPy for improved efficiency. A new static method, mrope_assign_next_input_positions , has been added to MRotaryEmbedding to perform these calculations directly using NumPy arrays. GPU Model Runner Update : The GPUModelRunner has been updated to utilize this new NumPy-based method. This includes creating a NumPy view of the existing CPU-bound position tensor ( self.mrope_positions_cpu.numpy() ) to allow direct, efficient assignment of calculated positions. Measured Speedup : Performance benchmarks show a 2.59x speedup for the _prepare_inputs average time (from 3.869 ms to 1.496 ms) and an overall E2E performance improvement of approximately 1.5% on the Qwen2.5-VL-3B-Instruct model under high load.
the ready ONLY add when PR is ready to merge/full CI is needed label Jun 23, 2025 WoosukKwon approved these changes Jun 23, 2025 View reviewed changes vllm/model_executor/layers/rotary_embedding.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . fix another comment … 8baa18e Signed-off-by: Vadim Gimpelson <[email protected]> Hide details View details WoosukKwon merged commit 9a3b883 into vllm-project : main Jun 24, 2025 66 of 69 checks passed Uh oh! There was an error while loading. Please reload this page . Copy link Member ywang96 commented Jun 24, 2025 Sorry for the late comment but this is great! 👍 1 vadiklyutiy reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Yikun mentioned this pull request Jun 24, 2025 [Bugfix] Sync MRotaryEmbedding interface change to recover CI vllm-project/vllm-ascend#1399 Merged Yikun pushed a commit
to vllm-project/vllm-ascend
that referenced
this pull request Jun 24, 2025 [Bugfix] Sync MRotaryEmbedding interface change to recover CI ( #1399 ) … 5f5800b ### What this PR does / why we need it?
Sync MRotaryEmbedding interface change to recover main CI
( vllm-project/vllm#19939 )
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
CI passed
---------
Signed-off-by: wangli <[email protected]> gmarinho2 pushed a commit
to gmarinho2/vllm
that referenced
this pull request Jun 26, 2025 [PERF] Speedup of MRoPE prepare inputs ( vllm-project#19939 ) … 2d7f8c3 Signed-off-by: Vadim Gimpelson <[email protected]> weijinqian0 pushed a commit
to weijinqian0/vllm-ascend
that referenced
this pull request Jun 30, 2025 [Bugfix] Sync MRotaryEmbedding interface change to recover CI ( vllm-p… … f3dc487 …roject#1399 )
### What this PR does / why we need it?
Sync MRotaryEmbedding interface change to recover main CI
( vllm-project/vllm#19939 )
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
CI passed
---------
Signed-off-by: wangli <[email protected]> xjpang pushed a commit
to xjpang/vllm
that referenced
this pull request Jun 30, 2025 [PERF] Speedup of MRoPE prepare inputs ( vllm-project#19939 ) … 0033778 Signed-off-by: Vadim Gimpelson <[email protected]> wseaton pushed a commit
to wseaton/vllm
that referenced
this pull request Jun 30, 2025 [PERF] Speedup of MRoPE prepare inputs ( vllm-project#19939 ) … 874817e Signed-off-by: Vadim Gimpelson <[email protected]>
Signed-off-by: Will Eaton <[email protected]> wseaton pushed a commit
to wseaton/vllm
that referenced
this pull request Jun 30, 2025 [PERF] Speedup of MRoPE prepare inputs ( vllm-project#19939 ) … 3c936c6 Signed-off-by: Vadim Gimpelson <[email protected]> wwl2755-google pushed a commit
to wwl2755-google/vllm
that referenced
this pull request Jul 1, 2025 [PERF] Speedup of MRoPE prepare inputs ( vllm-project#19939 ) … 4807582 Signed-off-by: Vadim Gimpelson <[email protected]> avigny pushed a commit
to avigny/vllm
that referenced
this pull request Jul 31, 2025 [PERF] Speedup of MRoPE prepare inputs ( vllm-project#19939 ) … f9327f0 Signed-off-by: Vadim Gimpelson <[email protected]>
Signed-off-by: avigny <[email protected]> googlercolin pushed a commit
to googlercolin/vllm
that referenced
this pull request Aug 29, 2025 [PERF] Speedup of MRoPE prepare inputs ( vllm-project#19939 ) … f84ab7e Signed-off-by: Vadim Gimpelson <[email protected]>
|
2025-09-07 17:50:49
|
7661e92ef85e552936195ae4b803e292b9a96776
|
https://github.com/vllm-project/vllm/pull/19249
| false | false | false | true |
TEST: test, test, test
|
Copy link Collaborator jeejeelee commented Jun 6, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . Essential Elements of an Effective PR Description Checklist The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)". The test plan, such as providing test command. The test results, such as pasting the results comparison before and after, or e2e results Purpose Test Plan Test Result Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Done … 57ae581 Signed-off-by: Jee Jee Li <[email protected]> Copy link Contributor gemini-code-assist bot commented Jun 6, 2025 Warning You have reached your daily quota limit. Please wait up to 24 hours and I will start processing your requests again! All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . jeejeelee commented Jun 6, 2025 View reviewed changes vllm/model_executor/models/nemotron_h.py @@ -435,7 +444,6 @@ class NemotronHForCausalLM(nn.Module, HasInnerState, SupportsLoRA, SupportsPP, "k_proj", "v_proj", ], "gate_up_proj": ["up_proj", "down_proj"] Copy link Collaborator Author jeejeelee Jun 6, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment It's incorrect property, delete it Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions jeejeelee commented Jun 6, 2025 View reviewed changes vllm/model_executor/models/nemotron_h.py ) -> None: super().__init__() self.up_proj = MergedColumnParallelLinear ( self.up_proj = ColumnParallelLinear ( Copy link Collaborator Author jeejeelee Jun 6, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Use ColumnParallelLinear , there's no need to use MergedColumnParallelLinear Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions jeejeelee requested a review
from DarkLight1337 June 6, 2025 03:52 Copy link github-actions bot commented Jun 6, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . DarkLight1337 approved these changes Jun 6, 2025 View reviewed changes Copy link Member DarkLight1337 left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Thanks for simplifying! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions DarkLight1337 enabled auto-merge (squash) June 6, 2025 08:10 github-actions bot added
the ready ONLY add when PR is ready to merge/full CI is needed label Jun 6, 2025 Hide details View details DarkLight1337 merged commit 7661e92 into vllm-project : main Jun 6, 2025 79 checks passed Uh oh! There was an error while loading. Please reload this page . jeejeelee deleted the fix-nemotron_h branch June 6, 2025 10:31 minpeter pushed a commit
to minpeter/vllm
that referenced
this pull request Jun 24, 2025 [Model] Optimize nemotron_h implementation ( vllm-project#19249 ) … 8ba4ebe Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: minpeter <[email protected]> avigny pushed a commit
to avigny/vllm
that referenced
this pull request Jul 31, 2025 [Model] Optimize nemotron_h implementation ( vllm-project#19249 ) … 80ed32a Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: avigny <[email protected]> googlercolin pushed a commit
to googlercolin/vllm
that referenced
this pull request Aug 29, 2025 [Model] Optimize nemotron_h implementation ( vllm-project#19249 ) … 9cf44d5 Signed-off-by: Jee Jee Li <[email protected]>
|
2025-09-07 17:50:52
|
e7523c2e031bc96740723ab63833d1cf94229ab4
|
https://github.com/vllm-project/vllm/pull/18608
| false | true | true | true |
PERF: TTFT, TTFT, TTFT | SERVING: vllm serve, vllm serve, Serving | TEST: test, test, Test
|
lgeiger (Contributor) commented May 23, 2025 • edited by github-actions bot
This PR replaces flashinfer.sampling.top_k_top_p_sampling_from_probs with flashinfer.sampling.top_k_top_p_sampling_from_logits . The top_k_top_p_sampling_from_probs path calls softmax -> top_k_renorm_probs -> top_p_sampling_from_probs , while top_k_top_p_sampling_from_logits calls top_k_mask_logits -> softmax -> top_p_sampling_from_probs , which is faster. In a quick micro benchmark on an L40S GPU I am seeing a 9.3 % speedup with this PR and jitted FlashInfer using CUDA 12.8.
Script to reproduce the toy benchmark:

    import time

    import torch
    import flashinfer.sampling

    from vllm.platforms import current_platform
    from vllm.utils import STR_DTYPE_TO_TORCH_DTYPE, FlexibleArgumentParser


    @torch.inference_mode()
    def main(batch_size: int,
             num_classes: int,
             dtype: torch.dtype,
             seed: int = 0,
             num_warmup_iters: int = 5,
             num_iters: int = 100) -> None:
        current_platform.seed_everything(seed)
        torch.set_default_device("cuda")
        logits = torch.randn(batch_size, num_classes, dtype=dtype)
        k = torch.ones(batch_size, dtype=torch.int32) * 64
        p = torch.ones(batch_size, dtype=dtype) * 0.95

        def run_cuda_benchmark(num_iters: int) -> float:
            torch.cuda.synchronize()
            start_time = time.perf_counter()
            for _ in range(num_iters):
                # probs = logits.softmax(dim=-1, dtype=torch.float32)
                # next_token_ids = flashinfer.sampling.top_k_top_p_sampling_from_probs(
                #     probs, k, p, deterministic=True)
                next_token_ids = flashinfer.sampling.top_k_top_p_sampling_from_logits(
                    logits, k, p, deterministic=True)
            torch.cuda.synchronize()
            end_time = time.perf_counter()
            return (end_time - start_time) / num_iters

        print("Warming up...")
        run_benchmark = run_cuda_benchmark
        run_benchmark(num_iters=num_warmup_iters)
        latency = run_benchmark(num_iters=num_iters)
        print(f"Kernel running time: {latency * 1000000:.3f} us")


    if __name__ == "__main__":
        parser = FlexibleArgumentParser(description="Benchmark the layernorm kernel.")
        parser.add_argument("--batch-size", type=int, default=40)
        parser.add_argument("--num-classes", type=int, default=262208)
        parser.add_argument("--add-residual", action="store_true")
        parser.add_argument("--dtype", type=str,
                            choices=["half", "bfloat16", "float"], default="float")
        parser.add_argument("--seed", type=int, default=0)
        parser.add_argument("--num-warmup-iters", type=int, default=5)
        parser.add_argument("--num-iters", type=int, default=5000,
                            help="Number of benchmark iterations.")
        args = parser.parse_args()
        print(args)
        main(batch_size=args.batch_size,
             num_classes=args.num_classes,
             dtype=STR_DTYPE_TO_TORCH_DTYPE[args.dtype],
             seed=args.seed,
             num_warmup_iters=args.num_warmup_iters,
             num_iters=args.num_iters)

End to end this also results in a 1.75 % improvement in throughput for google/gemma-3-12b-it :

vllm serve google/gemma-3-12b-it --disable-log-requests
python benchmarks/benchmark_serving.py --backend openai-chat --model google/gemma-3-12b-it --endpoint /v1/chat/completions --dataset-name hf --dataset-path lmarena-ai/VisionArena-Chat --hf-split train --num-prompts 1000 Baseline : ============ Serving Benchmark Result ============
Successful requests: 984
Benchmark duration (s): 187.19
Total input tokens: 95362
Total generated tokens: 115951
Request throughput (req/s): 5.26
Output token throughput (tok/s): 619.43
Total Token throughput (tok/s): 1128.87
---------------Time to First Token----------------
Mean TTFT (ms): 92076.57
Median TTFT (ms): 87454.65
P99 TTFT (ms): 176229.15
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 123.15
Median TPOT (ms): 126.41
P99 TPOT (ms): 474.41
---------------Inter-token Latency----------------
Mean ITL (ms): 134.43
Median ITL (ms): 65.22
P99 ITL (ms): 592.16
================================================== This PR : ============ Serving Benchmark Result ============
Successful requests: 984
Benchmark duration (s): 184.04
Total input tokens: 95362
Total generated tokens: 116033
Request throughput (req/s): 5.35
Output token throughput (tok/s): 630.47
Total Token throughput (tok/s): 1148.62
---------------Time to First Token----------------
Mean TTFT (ms): 90823.37
Median TTFT (ms): 85678.72
P99 TTFT (ms): 175009.27
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 120.16
Median TPOT (ms): 125.98
P99 TPOT (ms): 444.52
---------------Inter-token Latency----------------
Mean ITL (ms): 133.26
Median ITL (ms): 65.33
P99 ITL (ms): 592.79
==================================================
lgeiger requested review from WoosukKwon , robertgshaw2-redhat , njhill , ywang96 , comaniac and alexm-redhat as code owners May 23, 2025 11:41
github-actions bot commented May 23, 2025 with the standard contribution welcome and fastcheck CI reminder (same text as above).
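As a reference for the change being benchmarked above, a minimal sketch of the two FlashInfer call paths follows. It assumes a CUDA GPU with FlashInfer installed, uses toy shapes, and the call signatures are taken from the snippet in the PR description rather than verified against a particular FlashInfer release:

    import torch
    import flashinfer.sampling

    logits = torch.randn(4, 262208, device="cuda", dtype=torch.float32)
    k = torch.full((4,), 64, device="cuda", dtype=torch.int32)
    p = torch.full((4,), 0.95, device="cuda", dtype=torch.float32)

    # Old path: compute probabilities first, then top-k renormalization and
    # top-p sampling on probs (softmax -> top_k_renorm_probs -> top_p_sampling_from_probs).
    probs = logits.softmax(dim=-1, dtype=torch.float32)
    ids_from_probs = flashinfer.sampling.top_k_top_p_sampling_from_probs(
        probs, k, p, deterministic=True)

    # New path (this PR): mask logits by top-k, softmax, then top-p sampling,
    # all behind one call (top_k_mask_logits -> softmax -> top_p_sampling_from_probs).
    ids_from_logits = flashinfer.sampling.top_k_top_p_sampling_from_logits(
        logits, k, p, deterministic=True)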
mergify bot added the v1 label May 23, 2025
lgeiger force-pushed the flashinfer-sample-logits branch
from 2eb6e2f to a33f48e Compare May 23, 2025 11:43
mgoin (Member) reviewed May 23, 2025 and left a comment: Looks reasonable to me, thanks for the performance analysis. Just a nit (on vllm/v1/sample/ops/topk_topp_sampler.py).
lgeiger force-pushed the flashinfer-sample-logits branch
from 561d1d4 to 1046e20 Compare May 23, 2025 23:38 mgoin added
the ready ONLY add when PR is ready to merge/full CI is needed label May 24, 2025 lgeiger changed the title [Sampler] Improve performance of FlashInfer sampling by sampling logits instead of probs [V1][Sampler] Improve performance of FlashInfer sampling by sampling logits instead of probs May 25, 2025 Ubuntu and others added 2 commits May 25, 2025 23:32 [Sampler] Use FlashInfer sampling from logits … 005f201 Signed-off-by: Lukas Geiger <[email protected]> Update docstrings … f3eecb9 Signed-off-by: Lukas Geiger <[email protected]> lgeiger force-pushed the flashinfer-sample-logits branch
from 1046e20 to f3eecb9 Compare May 25, 2025 22:32
mgoin approved these changes May 26, 2025
mgoin merged commit e7523c2 into vllm-project:main May 26, 2025 (62 checks passed)
lgeiger deleted the flashinfer-sample-logits branch May 26, 2025 15:55
gshtras added a commit
to ROCm/vllm
that referenced
this pull request May 27, 2025 Upstream merge 2025 05 27 ( #557 ) … 1900335 * Add files via uploadAdd fused MoE kernel tuning configs (fp8_w8a8) for DeepSeek V3/R1 on a single-node 8x NVIDIA H20 96GB setup ( vllm-project#18337 )
* [Misc] Fix typo ( vllm-project#18330 )
* Neuron up mistral ( vllm-project#18222 )
Signed-off-by: Satyajith Chilappagari <[email protected]>
* fix CUDA_check redefinition in vllm-project#17918 ( vllm-project#18287 )
Signed-off-by: Lucia Fang <[email protected]>
Co-authored-by: Lucia (Lu) Fang <[email protected]>
* [neuron] fix authorization issue ( vllm-project#18364 )
Signed-off-by: Liangfu Chen <[email protected]>
* [Misc] Allow `AutoWeightsLoader` to skip loading weights with specific substr in name ( vllm-project#18358 )
Signed-off-by: Isotr0py <[email protected]>
* [Core] [Bugfix]: tensor parallel with prompt embeds ( vllm-project#18171 )
Signed-off-by: Nan2018 <[email protected]>
Co-authored-by: Andrew Sansom <[email protected]>
* [release] Change dockerhub username for TPU release ( vllm-project#18389 )
* [Bugfix] fix adding bias twice in ipex GPTQ quantization ( vllm-project#18363 )
Signed-off-by: rand-fly <[email protected]>
* [doc] update env variable export ( vllm-project#18391 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Misc] Add LoRA code owner ( vllm-project#18387 )
Signed-off-by: Jee Jee Li <[email protected]>
* Update cpu.txt ( vllm-project#18398 )
Signed-off-by: 汪志鹏 <[email protected]>
* [CI] Add mteb testing to test the accuracy of the embedding model ( vllm-project#17175 )
* [Bugfix] Fix MRoPE Errors in the Qwen-VL Model When Processing Pure Text ( vllm-project#18407 )
Co-authored-by: 松灵 <[email protected]>
* [Misc] refactor prompt embedding examples ( vllm-project#18405 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Minor] Rename quantization nvfp4 to modelopt_fp4 ( vllm-project#18356 )
Signed-off-by: mgoin <[email protected]>
* [Model] use AutoWeightsLoader for bloom ( vllm-project#18300 )
Signed-off-by: calvin chen <[email protected]>
* [Kernel] update comment for KV shape in unified triton attn ( vllm-project#18099 )
Signed-off-by: haochengxia <[email protected]>
* fix:Build torch wheel inline rather than picking from nightly ( vllm-project#18351 )
Signed-off-by: Dilip Gowda Bhagavan <[email protected]>
* [TPU] Re-enable the Pallas MoE kernel ( vllm-project#18025 )
Signed-off-by: Michael Goin <[email protected]>
* [Bugfix] config.head_dim is now explicitly set to None ( vllm-project#18432 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* [Bug] Fix moe_sum signature ( vllm-project#18440 )
Signed-off-by: Bill Nell <[email protected]>
* Revert "[Bugfix] Fix MRoPE Errors in the Qwen-VL Model When Processing Pure Text ( vllm-project#18407 )" ( vllm-project#18456 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix][Failing Test] Fix nixl connector test when promt size < block size ( vllm-project#18429 )
Signed-off-by: wwl2755 <[email protected]>
* [Misc] MultiConnector._connectors type ( vllm-project#18423 )
Signed-off-by: nicklucche <[email protected]>
* [Frontend] deprecate `--device` arg ( vllm-project#18399 )
Signed-off-by: Kebe <[email protected]>
* [V1] Fix general plugins not loaded in engine for multiproc ( vllm-project#18326 )
Signed-off-by: Yong Hoon Shin <[email protected]>
* [Misc] refactor disaggregated-prefill-v1 example ( vllm-project#18474 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Bugfix][Failing Test] Fix test_events.py ( vllm-project#18460 )
Signed-off-by: rabi <[email protected]>
* [MODEL] FalconH1 ( vllm-project#18406 )
Signed-off-by: dhia.rhaiem <[email protected]>
Co-authored-by: younesbelkada <[email protected]>
Co-authored-by: Ilyas Chahed <[email protected]>
Co-authored-by: Jingwei Zuo <[email protected]>
* [Doc] fix arg docstring in linear layers ( vllm-project#18410 )
Signed-off-by: giantcroc <[email protected]>
* [Bugfix] Reduce moe_sum test size to avoid OOM ( vllm-project#18484 )
Signed-off-by: Bill Nell <[email protected]>
* [Build] fix Dockerfile shell ( vllm-project#18402 )
* [Misc] Update deprecation message for `--enable-reasoning` ( vllm-project#18404 )
* [ROCm][Kernel][V1] Enable AMD Radeon GPU Custom Paged Attention on v1 ( vllm-project#17004 )
Signed-off-by: Hosang Yoon <[email protected]>
* Remove incorrect env value
* Revert "[v1] Support multiple KV cache groups in GPU model runner ( vllm-project#17945 ) ( vllm-project#18459 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [FEAT][ROCm] Upgrade AITER MLA v1 backend ( vllm-project#18338 )
Signed-off-by: vllmellm <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
* [Bugfix] Consistent ascii handling in tool parsers ( vllm-project#17704 )
Signed-off-by: Sebastian Schönnenbeck <[email protected]>
* [FalconH1] Fix output dtype in RMSNorm fallback path for Falcon-H1 (e.g. 0.5B) ( vllm-project#18500 )
Signed-off-by: dhia.rhaiem <[email protected]>
Co-authored-by: younesbelkada <[email protected]>
Co-authored-by: Ilyas Chahed <[email protected]>
Co-authored-by: Jingwei Zuo <[email protected]>
* [MISC] update project urls in pyproject.toml ( vllm-project#18519 )
Signed-off-by: Andy Xie <[email protected]>
* [CI] Fix race condition with StatelessProcessGroup.barrier ( vllm-project#18506 )
Signed-off-by: Russell Bryant <[email protected]>
* Intialize io_thread_pool attribute in the beginning. ( vllm-project#18331 )
Signed-off-by: rabi <[email protected]>
* [Bugfix] Inconsistent token calculation compared to HF in llava family ( vllm-project#18479 )
Signed-off-by: jaycha <[email protected]>
* [BugFix][DP] Send DP wave completion only from `dp_rank==0` ( vllm-project#18502 )
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: kourosh hakhamaneshi <[email protected]>
* [Bugfix][Model] Make Olmo2Model weight loading return loaded weights ( vllm-project#18504 )
Signed-off-by: Shane A <[email protected]>
* [Bugfix] Fix LoRA test ( vllm-project#18518 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Doc] Fix invalid JSON in example args ( vllm-project#18527 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Neuron] Update Dockerfile.neuron to use latest neuron release (2.23) ( vllm-project#18512 )
Signed-off-by: Satyajith Chilappagari <[email protected]>
* Update default neuron config for speculation ( vllm-project#18274 )
Signed-off-by: Elaine Zhao <[email protected]>
Co-authored-by: Shashwat Srijan <[email protected]>
Co-authored-by: Aakash Shetty <[email protected]>
* Order sequence ids + config update to support specifying custom quantization layers ( vllm-project#18279 )
Signed-off-by: Elaine Zhao <[email protected]>
Co-authored-by: Tailin Pan <[email protected]>
Co-authored-by: Rishabh Rajesh <[email protected]>
Co-authored-by: Yishan McNabb <[email protected]>
Co-authored-by: Patrick Lange <[email protected]>
Co-authored-by: Maxwell Goldberg <[email protected]>
Co-authored-by: Aakash Shetty <[email protected]>
* [Bugfix] Fix MRoPE Errors in the Qwen-VL Model When Processing Pure Text ( vllm-project#18526 )
Co-authored-by: 松灵 <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>
* [Bugfix] Add kwargs to RequestOutput __init__ to be forward compatible ( vllm-project#18513 )
Signed-off-by: Linkun <[email protected]>
* [CI/Build] Update bamba test model location ( vllm-project#18544 )
Signed-off-by: Harry Mellor <[email protected]>
* [Doc] Support --stream arg in openai_completion_client.py script ( vllm-project#18388 )
Signed-off-by: googs1025 <[email protected]>
* [Bugfix] Use random hidden states in dummy sampler run ( vllm-project#18543 )
Signed-off-by: Bowen Wang <[email protected]>
* [Doc] Add stream flag for chat completion example ( vllm-project#18524 )
Signed-off-by: calvin chen <[email protected]>
* [BugFix][CPU] Fix x86 SHM distributed module initialization ( vllm-project#18536 )
Signed-off-by: jiang.li <[email protected]>
* [Misc] improve Automatic Prefix Caching example ( vllm-project#18554 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Misc] Call `ndarray.tobytes()` directly instead of `ndarray.data.tobytes()` ( vllm-project#18347 )
Signed-off-by: Lukas Geiger <[email protected]>
* [Bugfix] make `test_openai_schema.py` pass ( vllm-project#18224 )
Signed-off-by: David Xia <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
* [Platform] Move platform check to right place ( vllm-project#18470 )
Signed-off-by: wangxiyuan <[email protected]>
* [Compile][Platform] Make PiecewiseBackend pluggable and extendable ( vllm-project#18076 )
Signed-off-by: Mengqing Cao <[email protected]>
Co-authored-by: youkaichao <[email protected]>
* [Build/CI] Fix CUDA 11.8 build ( vllm-project#17679 )
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
* [Tool] Add NIXL installation script ( vllm-project#18172 )
Signed-off-by: Linkun <[email protected]>
* [V1][Spec Decode][Bugfix] Load quantize weights for EAGLE ( vllm-project#18290 )
* [Frontend][Bug Fix] Update llama4 pythonic jinja template and llama4_pythonic parser ( vllm-project#17917 )
Signed-off-by: Kai Wu <[email protected]>
* [Frontend] [Core] Add Tensorizer support for V1, LoRA adapter serialization and deserialization ( vllm-project#17926 )
Signed-off-by: Sanger Steel <[email protected]>
* [AMD] [P/D] Compute num gpus for ROCm correctly in run_accuracy_test.sh ( vllm-project#18568 )
Signed-off-by: Randall Smith <[email protected]>
* Re-submit: Fix: Proper RGBA -> RGB conversion for PIL images. ( vllm-project#18569 )
Signed-off-by: Chenheli Hua <[email protected]>
* [V1][Spec Decoding] Use model_loader.get_model() to load models ( vllm-project#18273 )
Signed-off-by: Mark McLoughlin <[email protected]>
* Enable hybrid attention models for Transformers backend ( vllm-project#18494 )
Signed-off-by: Harry Mellor <[email protected]>
* [Misc] refactor: simplify input validation and num_requests handling in _convert_v1_inputs ( vllm-project#18482 )
Signed-off-by: googs1025 <[email protected]>
* [BugFix] Increase TP execute_model timeout ( vllm-project#18558 )
Signed-off-by: Nick Hill <[email protected]>
* [Bugfix] Set `KVTransferConfig.engine_id` in post_init ( vllm-project#18576 )
Signed-off-by: Linkun Chen <[email protected]>
* [Spec Decode] Make EAGLE3 draft token ID mapping optional ( vllm-project#18488 )
Signed-off-by: Benjamin Chislett <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
* [Neuron] Remove bypass on EAGLEConfig and add a test ( vllm-project#18514 )
Signed-off-by: Elaine Zhao <[email protected]>
* [Bugfix][Benchmarks] Fix a benchmark of deepspeed-mii backend to use api_key ( vllm-project#17291 )
Signed-off-by: Teruaki Ishizaki <[email protected]>
* [Misc] Replace `cuda` hard code with `current_platform` ( vllm-project#16983 )
Signed-off-by: shen-shanshan <[email protected]>
* [Hardware] correct method signatures for HPU,ROCm,XPU ( vllm-project#18551 )
Signed-off-by: Andy Xie <[email protected]>
* [V1] [Bugfix] eagle bugfix and enable correct lm_head for multimodal ( vllm-project#18034 )
Signed-off-by: Ronald Xu <[email protected]>
* [Feature]Add async tensor parallelism using compilation pass ( vllm-project#17882 )
Signed-off-by: cascade812 <[email protected]>
* [Doc] Update quickstart and install for cu128 using `--torch-backend=auto` ( vllm-project#18505 )
Signed-off-by: mgoin <[email protected]>
* [Feature][V1]: suupports cached_tokens in response usage ( vllm-project#18149 )
Co-authored-by: simon-mo <[email protected]>
* [Bugfix] Add half type support in reshape_and_cache_cpu_impl on x86 cpu platform ( vllm-project#18430 )
Signed-off-by: Yuqi Zhang <[email protected]>
Co-authored-by: Yuqi Zhang <[email protected]>
* Migrate docs from Sphinx to MkDocs ( vllm-project#18145 )
Signed-off-by: Harry Mellor <[email protected]>
* Revert "[V1] [Bugfix] eagle bugfix and enable correct lm_head for multimodal ( vllm-project#18034 )" ( vllm-project#18600 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix][Model] Fix baichuan model loader for tp ( vllm-project#18597 )
Signed-off-by: Mengqing Cao <[email protected]>
* [V0][Bugfix] Fix parallel sampling performance regression when guided decoding is enabled ( vllm-project#17731 )
Signed-off-by: Madeesh Kannan <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
* Add myself as docs code owner ( vllm-project#18605 )
Signed-off-by: Harry Mellor <[email protected]>
* [Hardware][CPU] Update intel_extension_for_pytorch 2.7.0 and move to `requirements/cpu.txt` ( vllm-project#18542 )
Signed-off-by: Kay Yan <[email protected]>
* [CI] fix kv_cache_type argument ( vllm-project#18594 )
Signed-off-by: Andy Xie <[email protected]>
* [Doc] Fix indent of contributing to vllm ( vllm-project#18611 )
Signed-off-by: Zerohertz <[email protected]>
* Replace `{func}` with mkdocs style links ( vllm-project#18610 )
Signed-off-by: Harry Mellor <[email protected]>
* [CI/Build] Fix V1 flag being set in entrypoints tests ( vllm-project#18598 )
Signed-off-by: DarkLight1337 <[email protected]>
* Fix examples with code blocks in docs ( vllm-project#18609 )
Signed-off-by: Harry Mellor <[email protected]>
* [Bugfix] Fix transformers model impl ignored for mixtral quant ( vllm-project#18602 )
Signed-off-by: Tristan Leclercq <[email protected]>
* Include private attributes in API documentation ( vllm-project#18614 )
Signed-off-by: Harry Mellor <[email protected]>
* [Misc] add Haystack integration ( vllm-project#18601 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Bugfix][Build/CI] Fixup CUDA compiler version check for CUDA_SUPPORTED_ARCHS ( vllm-project#18579 )
* [Doc] Fix markdown list indentation for MkDocs rendering ( vllm-project#18620 )
Signed-off-by: Zerohertz <[email protected]>
* [Doc] Use a different color for the announcement ( vllm-project#18616 )
Signed-off-by: DarkLight1337 <[email protected]>
* Refactor pplx init logic to make it modular (prepare for deepep) ( vllm-project#18200 )
Signed-off-by: youkaichao <[email protected]>
* Fix figures in design doc ( vllm-project#18612 )
Signed-off-by: Harry Mellor <[email protected]>
* [Docs] Change mkdocs to not use directory urls ( vllm-project#18622 )
Signed-off-by: mgoin <[email protected]>
* [v1] Redo "Support multiple KV cache groups in GPU model runner ( vllm-project#17945 )" ( vllm-project#18593 )
Signed-off-by: Chen Zhang <[email protected]>
* [Doc] fix list formatting ( vllm-project#18624 )
Signed-off-by: David Xia <[email protected]>
* [Doc] Fix top-level API links/docs ( vllm-project#18621 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Avoid documenting dynamic / internal modules ( vllm-project#18626 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Fix broken links and unlinked docs, add shortcuts to home sidebar ( vllm-project#18627 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] Support Deepseek MTP ( vllm-project#18435 )
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: YaoJiayi <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
* Use prebuilt FlashInfer x86_64 PyTorch 2.7 CUDA 12.8 wheel for CI ( vllm-project#18537 )
Signed-off-by: Huy Do <[email protected]>
* [CI] Enable test_initialization to run on V1 ( vllm-project#16736 )
Signed-off-by: mgoin <[email protected]>
* [Doc] Update references to doc files ( vllm-project#18637 )
Signed-off-by: DarkLight1337 <[email protected]>
* [ModelOpt] Introduce VLLM_MAX_TOKENS_PER_EXPERT_FP4_MOE env var to control blockscale tensor allocation ( vllm-project#18160 )
Signed-off-by: Pavani Majety <[email protected]>
* [Bugfix] Migrate to REGEX Library to prevent catastrophic backtracking ( vllm-project#18454 )
Signed-off-by: Crucifixion-Fxl <[email protected]>
Co-authored-by: Crucifixion-Fxl <[email protected]>
* [Bugfix][Nixl] Fix Preemption Bug ( vllm-project#18631 )
Signed-off-by: [email protected] <[email protected]>
* config.py: Clarify that only local GGUF checkpoints are supported. ( vllm-project#18623 )
Signed-off-by: Mathieu Bordere <[email protected]>
* FIX MOE issue in AutoRound format ( vllm-project#18586 )
Signed-off-by: wenhuach21 <[email protected]>
* [V1][Spec Decode] Small refactors to improve eagle bookkeeping performance ( vllm-project#18424 )
Signed-off-by: qizixi <[email protected]>
* [Frontend] improve vllm serve --help display ( vllm-project#18643 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Model] Add support for Qwen2.5-Omni-7B-AWQ (Qwen2_5OmniForConditionalGeneration) ( vllm-project#18647 )
* [V1][Spec Decode] Support multi-layer eagle draft model ( vllm-project#18030 )
Signed-off-by: qizixi <[email protected]>
* [Doc] Update README links, mark external links ( vllm-project#18635 )
Signed-off-by: DarkLight1337 <[email protected]>
* [MISC][pre-commit] Add pre-commit check for triton import ( vllm-project#17716 )
Signed-off-by: Mengqing Cao <[email protected]>
* [Doc] Fix indentation problems in V0 Paged Attention docs ( vllm-project#18659 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Add community links ( vllm-project#18657 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model] use AutoWeightsLoader for gpt2 ( vllm-project#18625 )
Signed-off-by: zt2370 <[email protected]>
* [Doc] Reorganize user guide ( vllm-project#18661 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI/Build] `chmod +x` to `cleanup_pr_body.sh` ( vllm-project#18650 )
Signed-off-by: DarkLight1337 <[email protected]>
* [MISC] typo fix and clean import ( vllm-project#18664 )
Signed-off-by: Andy Xie <[email protected]>
* [BugFix] Fix import error for fused_moe ( vllm-project#18642 )
Signed-off-by: wangxiyuan <[email protected]>
* [CI] enforce import regex instead of re ( vllm-project#18665 )
Signed-off-by: Aaron Pham <[email protected]>
* fix(regression): clone from reference items ( vllm-project#18662 )
Signed-off-by: Aaron Pham <[email protected]>
* [CI/Build] fix permission denied issue ( vllm-project#18645 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [BugFix][Spec Decode] Improve Prefix Caching Logic in Speculative Decoding ( vllm-project#18668 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [V1] Fix _pickle.PicklingError: Can't pickle <class 'transformers_modules.deepseek-ai.DeepSeek-V2-Lite... ( vllm-project#18640 )
Signed-off-by: Seiji Eicher <[email protected]>
* [MISC] correct signature for LoaderFunction ( vllm-project#18670 )
Signed-off-by: Andy Xie <[email protected]>
* [Misc]Replace `cuda` hard code with `current_platform` in Ray ( vllm-project#14668 )
Signed-off-by: noemotiovon <[email protected]>
* [Misc][ModelScope] Change to use runtime VLLM_USE_MODELSCOPE ( vllm-project#18655 )
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [VLM] Initialize video input support for InternVL models ( vllm-project#18499 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* Speed up the `kernels/quantization/` tests ( vllm-project#18669 )
Signed-off-by: mgoin <[email protected]>
* [BUGFIX] catch subclass first for try...except ( vllm-project#18672 )
Signed-off-by: Andy Xie <[email protected]>
* [Misc] Reduce logs on startup ( vllm-project#18649 )
Signed-off-by: DarkLight1337 <[email protected]>
* [doc] fix broken links ( vllm-project#18671 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [doc] improve readability ( vllm-project#18675 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Bugfix] Fix cpu usage and cache hit stats reporting on cpu environment ( vllm-project#18674 )
Signed-off-by: zzzyq <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [CI/build] fix no regex ( vllm-project#18676 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Misc] small improve ( vllm-project#18680 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Bugfix] Fix profiling dummy data for Pixtral ( vllm-project#18677 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Core][Multimodal] Convert PIL Image to array without data copy when hashing ( vllm-project#18682 )
Signed-off-by: Lukas Geiger <[email protected]>
* [CI/Build][Doc] Update `gte-Qwen2-1.5B-instruct` usage ( vllm-project#18683 )
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [Misc] Fixed the abnormally high TTFT issue in the PD disaggregation example ( vllm-project#18644 )
Signed-off-by: zhaohaidao <[email protected]>
Signed-off-by: zhaohaiyuan <[email protected]>
Co-authored-by: zhaohaiyuan <[email protected]>
* refactor: simplify request handler, use positive condition check for handler assignment ( vllm-project#18690 )
Signed-off-by: googs1025 <[email protected]>
* [Bugfix] Fix the lm_head in gpt_bigcode in lora mode ( vllm-project#6357 )
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
* [CI] add missing argument ( vllm-project#18694 )
Signed-off-by: Andy Xie <[email protected]>
* [GH] Add issue template for reporting CI failures ( vllm-project#18696 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Fix issue template format ( vllm-project#18699 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix Mistral-format models with sliding window ( vllm-project#18693 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI/Build] Replace `math.isclose` with `pytest.approx` ( vllm-project#18703 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI] fix dump_input for str type ( vllm-project#18697 )
Signed-off-by: Andy Xie <[email protected]>
* [Model] Add support for YARN in NemotronNAS models ( vllm-project#18427 )
Signed-off-by: Nave Assaf <[email protected]>
* [CI/Build] Split pooling and generation extended language models tests in CI ( vllm-project#18705 )
Signed-off-by: Isotr0py <[email protected]>
* [Hardware][Intel-Gaudi] [CI/Build] Add tensor parallel size = 2 test to HPU CI ( vllm-project#18709 )
Signed-off-by: Lukasz Durejko <[email protected]>
* [Misc] add AutoGen integration ( vllm-project#18712 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Bugfix]: handle hf-xet CAS error when loading Qwen3 weights in vLLM ( vllm-project#18701 )
* [Doc] Improve API docs ( vllm-project#18713 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Move examples and further reorganize user guide ( vllm-project#18666 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix Llama GGUF initialization ( vllm-project#18717 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1][Sampler] Improve performance of FlashInfer sampling by sampling logits instead of probs ( vllm-project#18608 )
* Convert `examples` to `ruff-format` ( vllm-project#18400 )
Signed-off-by: Harry Mellor <[email protected]>
* [Model][Gemma3] Simplify image input validation ( vllm-project#18710 )
Signed-off-by: Lukas Geiger <[email protected]>
* [Misc] improve web section group title display ( vllm-project#18684 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [V1][Quantization] Add CUDA graph compatible v1 GGUF support ( vllm-project#18646 )
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
* [Model][Gemma3] Cast image pixel values already on CPU ( vllm-project#18732 )
Signed-off-by: Lukas Geiger <[email protected]>
* [FEAT] [ROCm] Upgrade AITER Fused MoE kernels. ( vllm-project#18271 )
Signed-off-by: vllmellm <[email protected]>
* [Doc] Update OOT model docs ( vllm-project#18742 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Update reproducibility doc and example ( vllm-project#18741 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] improve docs ( vllm-project#18734 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* feat(rocm-support): support mamba2 on rocm ( vllm-project#18565 )
Signed-off-by: Islam Almersawi <[email protected]>
Co-authored-by: Islam Almersawi <[email protected]>
* [Hardware][Intel-Gaudi] [CI/Build] Fix multiple containers using the same name in run-hpu-test.sh ( vllm-project#18752 )
Signed-off-by: Lukasz Durejko <[email protected]>
* [Doc] cleanup deprecated flag for doc ( vllm-project#18715 )
Signed-off-by: calvin chen <[email protected]>
* Minor fix about MooncakeStoreConnector ( vllm-project#18721 )
Signed-off-by: baoloongmao <[email protected]>
* [Build] fix cpu build missing libtbbmalloc.so ( vllm-project#18744 )
Signed-off-by: Kebe <[email protected]>
* [BUG FIX] minicpm ( vllm-project#18739 )
Signed-off-by: huangyuxiang03 <[email protected]>
Co-authored-by: huangyuxiang03 <[email protected]>
* [Doc] Convert Sphinx directives ( `{class}`, `{meth}`, `{attr}`, ...) to MkDocs format for better documentation linking ( vllm-project#18663 )
Signed-off-by: Zerohertz <[email protected]>
* [CI/Build] Remove imports of built-in `re` ( vllm-project#18750 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1][Metrics] Add API for accessing in-memory Prometheus metrics ( vllm-project#17010 )
Signed-off-by: Mark McLoughlin <[email protected]>
* Disable prefix cache by default for benchmark ( vllm-project#18639 )
Signed-off-by: cascade812 <[email protected]>
* optimize get_kv_cache_torch_dtype ( vllm-project#18531 )
Signed-off-by: idellzheng <[email protected]>
* [Core] Automatically cast multi-modal input dtype ( vllm-project#18756 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Mistral tool calling when content is list ( vllm-project#18729 )
Signed-off-by: mgoin <[email protected]>
---------
Signed-off-by: Satyajith Chilappagari <[email protected]>
Signed-off-by: Lucia Fang <[email protected]>
Signed-off-by: Liangfu Chen <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Nan2018 <[email protected]>
Signed-off-by: rand-fly <[email protected]>
Signed-off-by: reidliu41 <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: 汪志鹏 <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: calvin chen <[email protected]>
Signed-off-by: haochengxia <[email protected]>
Signed-off-by: Dilip Gowda Bhagavan <[email protected]>
Signed-off-by: Michael Goin <[email protected]>
Signed-off-by: Gregory Shtrasberg <[email protected]>
Signed-off-by: Bill Nell <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: wwl2755 <[email protected]>
Signed-off-by: nicklucche <[email protected]>
Signed-off-by: Kebe <[email protected]>
Signed-off-by: Yong Hoon Shin <[email protected]>
Signed-off-by: rabi <[email protected]>
Signed-off-by: dhia.rhaiem <[email protected]>
Signed-off-by: giantcroc <[email protected]>
Signed-off-by: Hosang Yoon <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: vllmellm <[email protected]>
Signed-off-by: Sebastian Schönnenbeck <[email protected]>
Signed-off-by: Andy Xie <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: jaycha <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: Shane A <[email protected]>
Signed-off-by: Elaine Zhao <[email protected]>
Signed-off-by: Linkun <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: googs1025 <[email protected]>
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: jiang.li <[email protected]>
Signed-off-by: Lukas Geiger <[email protected]>
Signed-off-by: David Xia <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Kai Wu <[email protected]>
Signed-off-by: Sanger Steel <[email protected]>
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Chenheli Hua <[email protected]>
Signed-off-by: Linkun Chen <[email protected]>
Signed-off-by: Benjamin Chislett <[email protected]>
Signed-off-by: Teruaki Ishizaki <[email protected]>
Signed-off-by: shen-shanshan <[email protected]>
Signed-off-by: Ronald Xu <[email protected]>
Signed-off-by: cascade812 <[email protected]>
Signed-off-by: Yuqi Zhang <[email protected]>
Signed-off-by: Madeesh Kannan <[email protected]>
Signed-off-by: Kay Yan <[email protected]>
Signed-off-by: Zerohertz <[email protected]>
Signed-off-by: Tristan Leclercq <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: YaoJiayi <[email protected]>
Signed-off-by: Huy Do <[email protected]>
Signed-off-by: Pavani Majety <[email protected]>
Signed-off-by: Crucifixion-Fxl <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Mathieu Bordere <[email protected]>
Signed-off-by: wenhuach21 <[email protected]>
Signed-off-by: qizixi <[email protected]>
Signed-off-by: zt2370 <[email protected]>
Signed-off-by: Aaron Pham <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Seiji Eicher <[email protected]>
Signed-off-by: noemotiovon <[email protected]>
Signed-off-by: zzzyq <[email protected]>
Signed-off-by: zhaohaidao <[email protected]>
Signed-off-by: zhaohaiyuan <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Nave Assaf <[email protected]>
Signed-off-by: Lukasz Durejko <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Islam Almersawi <[email protected]>
Signed-off-by: baoloongmao <[email protected]>
Signed-off-by: huangyuxiang03 <[email protected]>
Signed-off-by: idellzheng <[email protected]>
Co-authored-by: sunyicode0012 <[email protected]>
Co-authored-by: Gong Shufan <[email protected]>
Co-authored-by: Satyajith Chilappagari <[email protected]>
Co-authored-by: Lucia Fang <[email protected]>
Co-authored-by: Lucia (Lu) Fang <[email protected]>
Co-authored-by: Liangfu Chen <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Nan Qin <[email protected]>
Co-authored-by: Andrew Sansom <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Random Fly <[email protected]>
Co-authored-by: Reid <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: 汪志鹏 <[email protected]>
Co-authored-by: wang.yuqi <[email protected]>
Co-authored-by: 燃 <[email protected]>
Co-authored-by: 松灵 <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Calvin Chen <[email protected]>
Co-authored-by: Percy <[email protected]>
Co-authored-by: Dilip Gowda Bhagavan <[email protected]>
Co-authored-by: bnellnm <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: wwl2755 <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: Kebe <[email protected]>
Co-authored-by: Yong Hoon Shin <[email protected]>
Co-authored-by: Rabi Mishra <[email protected]>
Co-authored-by: Dhia Eddine Rhaiem <[email protected]>
Co-authored-by: younesbelkada <[email protected]>
Co-authored-by: Ilyas Chahed <[email protected]>
Co-authored-by: Jingwei Zuo <[email protected]>
Co-authored-by: GiantCroc <[email protected]>
Co-authored-by: Hyogeun Oh (오효근) <[email protected]>
Co-authored-by: Hosang <[email protected]>
Co-authored-by: Mark McLoughlin <[email protected]>
Co-authored-by: vllmellm <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
Co-authored-by: Sebastian Schoennenbeck <[email protected]>
Co-authored-by: Ning Xie <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: youngrok cha <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: kourosh hakhamaneshi <[email protected]>
Co-authored-by: Shane A <[email protected]>
Co-authored-by: aws-elaineyz <[email protected]>
Co-authored-by: Shashwat Srijan <[email protected]>
Co-authored-by: Aakash Shetty <[email protected]>
Co-authored-by: Tailin Pan <[email protected]>
Co-authored-by: Rishabh Rajesh <[email protected]>
Co-authored-by: Yishan McNabb <[email protected]>
Co-authored-by: Patrick Lange <[email protected]>
Co-authored-by: Maxwell Goldberg <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: lkchen <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: CYJiang <[email protected]>
Co-authored-by: Bowen Wang <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: Lukas Geiger <[email protected]>
Co-authored-by: David Xia <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Ekagra Ranjan <[email protected]>
Co-authored-by: Kai Wu <[email protected]>
Co-authored-by: Sanger Steel <[email protected]>
Co-authored-by: rasmith <[email protected]>
Co-authored-by: Chenheli Hua <[email protected]>
Co-authored-by: Benjamin Chislett <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: Teruaki Ishizaki <[email protected]>
Co-authored-by: Shanshan Shen <[email protected]>
Co-authored-by: RonaldBXu <[email protected]>
Co-authored-by: cascade <[email protected]>
Co-authored-by: Chauncey <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: Yuqi Zhang <[email protected]>
Co-authored-by: Yuqi Zhang <[email protected]>
Co-authored-by: Madeesh Kannan <[email protected]>
Co-authored-by: Kay Yan <[email protected]>
Co-authored-by: Tristan Leclercq <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: Jiayi Yao <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Huy Do <[email protected]>
Co-authored-by: Pavani Majety <[email protected]>
Co-authored-by: Feng XiaoLong <[email protected]>
Co-authored-by: Crucifixion-Fxl <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Mathieu Borderé <[email protected]>
Co-authored-by: Wenhua Cheng <[email protected]>
Co-authored-by: qizixi <[email protected]>
Co-authored-by: Yuanhao WU <[email protected]>
Co-authored-by: ztang2370 <[email protected]>
Co-authored-by: Aaron Pham <[email protected]>
Co-authored-by: Seiji Eicher <[email protected]>
Co-authored-by: Chenguang Li <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: AlexZhao <[email protected]>
Co-authored-by: zhaohaiyuan <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: Naveassaf <[email protected]>
Co-authored-by: Łukasz Durejko <[email protected]>
Co-authored-by: dylan <[email protected]>
Co-authored-by: almersawi <[email protected]>
Co-authored-by: Islam Almersawi <[email protected]>
Co-authored-by: Łukasz Durejko <[email protected]>
Co-authored-by: maobaolong <[email protected]>
Co-authored-by: Shawn Huang <[email protected]>
Co-authored-by: huangyuxiang03 <[email protected]>
Co-authored-by: chunxiaozheng <[email protected]> amitm02 pushed a commit
to amitm02/vllm
that referenced
this pull request Jun 1, 2025 [V1][Sampler] Improve performance of FlashInfer sampling by sampling … … ab2be96 …logits instead of probs ( vllm-project#18608 )
Signed-off-by: amit <[email protected]> minpeter pushed a commit
to minpeter/vllm
that referenced
this pull request Jun 24, 2025 [V1][Sampler] Improve performance of FlashInfer sampling by sampling … … e158269 …logits instead of probs ( vllm-project#18608 )
Signed-off-by: minpeter <[email protected]>
|
2025-09-07 17:50:56
|
d55e446d1320d0f5f22bc3584f81f18d7924f166
|
https://github.com/vllm-project/vllm/pull/18424
| false | true | true | true |
PERF: TTFT, profiling | SERVING: vllm serve, serve, Frontend | TEST: test, test, Test
|
zixi-qi (Collaborator) commented May 20, 2025 • edited by github-actions bot
Applied several small refactors to improve eagle bookkeeping performance: (1) async h2d with pinned memory; (2) removed a synchronization point by caching the total number of tokens in SpecDecodeMetadata; (3) use torch.zeros to replace torch.empty + assignment (h2d). Saves ~50 us per iteration on Llama3 8b w/ bs=2 (before / after screenshots attached in the PR).
zixi-qi requested review from WoosukKwon , robertgshaw2-redhat , njhill , ywang96 , comaniac and alexm-redhat as code owners May 20, 2025 15:58
github-actions bot commented May 20, 2025 with the standard contribution welcome and fastcheck CI reminder.
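To make items (1) and (3) above concrete, here is an illustrative plain-PyTorch sketch; it assumes a CUDA device, the tensor names and sizes are invented, and this is not the actual gpu_model_runner code (item (2) is just caching an already-known count, so it is omitted):

    import torch

    num_tokens = 1024

    # (1) Async H2D with pinned memory: a page-locked CPU buffer lets the copy
    # be issued with non_blocking=True and overlap with GPU work instead of
    # forcing a synchronization.
    cpu_buf = torch.zeros(num_tokens, dtype=torch.int32, pin_memory=True)
    gpu_buf = cpu_buf.to("cuda", non_blocking=True)

    # (3) Allocate-and-fill in one call: torch.zeros on the device replaces a
    # torch.empty allocation followed by a separate host-to-device assignment.
    rejected_mask = torch.zeros(num_tokens, dtype=torch.bool, device="cuda")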
mergify bot added the v1 label May 20, 2025
mergify bot commented May 21, 2025: This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @zixi-qi . https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
mergify bot added
the needs-rebase label May 21, 2025
WoosukKwon (Collaborator) reviewed May 21, 2025 and left a comment: @zixi-qi Thanks for the PR! Left some minor comments.
Review comment on vllm/v1/worker/gpu_model_runner.py (lines 1379 to 1380):
    num_tokens = spec_decode_metadata.total_num_scheduled_tokens - \
        sum(num_rejected_tokens)
WoosukKwon (May 21, 2025): nit: please refrain from using \
Suggested change:
    num_tokens = (spec_decode_metadata.total_num_scheduled_tokens -
                  sum(num_rejected_tokens))
zixi-qi (Author, May 23, 2025): Thanks, updated!
Review comment on vllm/v1/worker/gpu_model_runner.py:
    @@ -883,6 +883,7 @@ def _calc_spec_decode_metadata(
         target_logits_indices=target_logits_indices,
         bonus_logits_indices=bonus_logits_indices,
         logits_indices=logits_indices,
    +    total_num_scheduled_tokens=cu_num_scheduled_tokens[-1],
WoosukKwon (May 21, 2025): Do we actually need to store it in SpecDecodeMetadata ? I'm wondering because the same variable is available in execute_model .
zixi-qi (Author, May 23, 2025): You are right, removed the additional field in SpecDecodeMetadata.
leo-cf-tian commented May 21, 2025: (Follow-up to a deleted comment) I flagged an issue here a few minutes ago and it turns out the error was from the base repo, not this PR. Deleted the earlier comment and made this one to avoid confusion. Sorry if I cause any trouble.
mergify bot removed
the needs-rebase label May 23, 2025 zixi-qi force-pushed the spec_decode_perf branch
from 20b3864 to d07a3c5 Compare May 23, 2025 21:03
mergify bot commented May 23, 2025: This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @zixi-qi . https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
mergify bot added
the needs-rebase label May 23, 2025 zixi-qi added 3 commits May 23, 2025 15:18 small perf improvements … 2e5efa8 Signed-off-by: qizixi <[email protected]> address comments … 0e1d9af Signed-off-by: qizixi <[email protected]> rebase … 2945178 Signed-off-by: qizixi <[email protected]> zixi-qi force-pushed the spec_decode_perf branch
from d07a3c5 to 2945178 Compare May 23, 2025 22:21 mergify bot removed
the needs-rebase label May 23, 2025 WoosukKwon added
the ready (ONLY add when PR is ready to merge/full CI is needed) label May 24, 2025
WoosukKwon approved these changes May 24, 2025 and left a comment: LGTM!
WoosukKwon enabled auto-merge (squash) May 24, 2025 03:27
WoosukKwon merged commit d55e446 into vllm-project:main May 24, 2025 (71 checks passed)
zzzyq pushed a commit
to zzzyq/vllm
that referenced
this pull request May 24, 2025 [V1][Spec Decode] Small refactors to improve eagle bookkeeping perfor… … be2ab55 …mance ( vllm-project#18424 )
Signed-off-by: qizixi <[email protected]>
Signed-off-by: Yuqi Zhang <[email protected]> zixi-qi deleted the spec_decode_perf branch May 24, 2025 15:11 zzzyq pushed a commit
to zzzyq/vllm
that referenced
this pull request May 25, 2025 [V1][Spec Decode] Small refactors to improve eagle bookkeeping perfor… … 89c1867 …mance ( vllm-project#18424 )
Signed-off-by: qizixi <[email protected]>
Signed-off-by: zzzyq <[email protected]> gshtras added a commit
to ROCm/vllm
that referenced
this pull request May 27, 2025 Upstream merge 2025 05 27 ( #557 ) … 1900335 * Add files via uploadAdd fused MoE kernel tuning configs (fp8_w8a8) for DeepSeek V3/R1 on a single-node 8x NVIDIA H20 96GB setup ( vllm-project#18337 )
* [Misc] Fix typo ( vllm-project#18330 )
* Neuron up mistral ( vllm-project#18222 )
Signed-off-by: Satyajith Chilappagari <[email protected]>
* fix CUDA_check redefinition in vllm-project#17918 ( vllm-project#18287 )
Signed-off-by: Lucia Fang <[email protected]>
Co-authored-by: Lucia (Lu) Fang <[email protected]>
* [neuron] fix authorization issue ( vllm-project#18364 )
Signed-off-by: Liangfu Chen <[email protected]>
* [Misc] Allow `AutoWeightsLoader` to skip loading weights with specific substr in name ( vllm-project#18358 )
Signed-off-by: Isotr0py <[email protected]>
* [Core] [Bugfix]: tensor parallel with prompt embeds ( vllm-project#18171 )
Signed-off-by: Nan2018 <[email protected]>
Co-authored-by: Andrew Sansom <[email protected]>
* [release] Change dockerhub username for TPU release ( vllm-project#18389 )
* [Bugfix] fix adding bias twice in ipex GPTQ quantization ( vllm-project#18363 )
Signed-off-by: rand-fly <[email protected]>
* [doc] update env variable export ( vllm-project#18391 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Misc] Add LoRA code owner ( vllm-project#18387 )
Signed-off-by: Jee Jee Li <[email protected]>
* Update cpu.txt ( vllm-project#18398 )
Signed-off-by: 汪志鹏 <[email protected]>
* [CI] Add mteb testing to test the accuracy of the embedding model ( vllm-project#17175 )
* [Bugfix] Fix MRoPE Errors in the Qwen-VL Model When Processing Pure Text ( vllm-project#18407 )
Co-authored-by: 松灵 <[email protected]>
* [Misc] refactor prompt embedding examples ( vllm-project#18405 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Minor] Rename quantization nvfp4 to modelopt_fp4 ( vllm-project#18356 )
Signed-off-by: mgoin <[email protected]>
* [Model] use AutoWeightsLoader for bloom ( vllm-project#18300 )
Signed-off-by: calvin chen <[email protected]>
* [Kernel] update comment for KV shape in unified triton attn ( vllm-project#18099 )
Signed-off-by: haochengxia <[email protected]>
* fix:Build torch wheel inline rather than picking from nightly ( vllm-project#18351 )
Signed-off-by: Dilip Gowda Bhagavan <[email protected]>
* [TPU] Re-enable the Pallas MoE kernel ( vllm-project#18025 )
Signed-off-by: Michael Goin <[email protected]>
* [Bugfix] config.head_dim is now explicitly set to None ( vllm-project#18432 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* [Bug] Fix moe_sum signature ( vllm-project#18440 )
Signed-off-by: Bill Nell <[email protected]>
* Revert "[Bugfix] Fix MRoPE Errors in the Qwen-VL Model When Processing Pure Text ( vllm-project#18407 )" ( vllm-project#18456 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix][Failing Test] Fix nixl connector test when promt size < block size ( vllm-project#18429 )
Signed-off-by: wwl2755 <[email protected]>
* [Misc] MultiConnector._connectors type ( vllm-project#18423 )
Signed-off-by: nicklucche <[email protected]>
* [Frontend] deprecate `--device` arg ( vllm-project#18399 )
Signed-off-by: Kebe <[email protected]>
* [V1] Fix general plugins not loaded in engine for multiproc ( vllm-project#18326 )
Signed-off-by: Yong Hoon Shin <[email protected]>
* [Misc] refactor disaggregated-prefill-v1 example ( vllm-project#18474 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Bugfix][Failing Test] Fix test_events.py ( vllm-project#18460 )
Signed-off-by: rabi <[email protected]>
* [MODEL] FalconH1 ( vllm-project#18406 )
Signed-off-by: dhia.rhaiem <[email protected]>
Co-authored-by: younesbelkada <[email protected]>
Co-authored-by: Ilyas Chahed <[email protected]>
Co-authored-by: Jingwei Zuo <[email protected]>
* [Doc] fix arg docstring in linear layers ( vllm-project#18410 )
Signed-off-by: giantcroc <[email protected]>
* [Bugfix] Reduce moe_sum test size to avoid OOM ( vllm-project#18484 )
Signed-off-by: Bill Nell <[email protected]>
* [Build] fix Dockerfile shell ( vllm-project#18402 )
* [Misc] Update deprecation message for `--enable-reasoning` ( vllm-project#18404 )
* [ROCm][Kernel][V1] Enable AMD Radeon GPU Custom Paged Attention on v1 ( vllm-project#17004 )
Signed-off-by: Hosang Yoon <[email protected]>
* Remove incorrect env value
* Revert "[v1] Support multiple KV cache groups in GPU model runner ( vllm-project#17945 ) ( vllm-project#18459 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [FEAT][ROCm] Upgrade AITER MLA v1 backend ( vllm-project#18338 )
Signed-off-by: vllmellm <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
* [Bugfix] Consistent ascii handling in tool parsers ( vllm-project#17704 )
Signed-off-by: Sebastian Schönnenbeck <[email protected]>
* [FalconH1] Fix output dtype in RMSNorm fallback path for Falcon-H1 (e.g. 0.5B) ( vllm-project#18500 )
Signed-off-by: dhia.rhaiem <[email protected]>
Co-authored-by: younesbelkada <[email protected]>
Co-authored-by: Ilyas Chahed <[email protected]>
Co-authored-by: Jingwei Zuo <[email protected]>
* [MISC] update project urls in pyproject.toml ( vllm-project#18519 )
Signed-off-by: Andy Xie <[email protected]>
* [CI] Fix race condition with StatelessProcessGroup.barrier ( vllm-project#18506 )
Signed-off-by: Russell Bryant <[email protected]>
* Intialize io_thread_pool attribute in the beginning. ( vllm-project#18331 )
Signed-off-by: rabi <[email protected]>
* [Bugfix] Inconsistent token calculation compared to HF in llava family ( vllm-project#18479 )
Signed-off-by: jaycha <[email protected]>
* [BugFix][DP] Send DP wave completion only from `dp_rank==0` ( vllm-project#18502 )
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: kourosh hakhamaneshi <[email protected]>
* [Bugfix][Model] Make Olmo2Model weight loading return loaded weights ( vllm-project#18504 )
Signed-off-by: Shane A <[email protected]>
* [Bugfix] Fix LoRA test ( vllm-project#18518 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Doc] Fix invalid JSON in example args ( vllm-project#18527 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Neuron] Update Dockerfile.neuron to use latest neuron release (2.23) ( vllm-project#18512 )
Signed-off-by: Satyajith Chilappagari <[email protected]>
* Update default neuron config for speculation ( vllm-project#18274 )
Signed-off-by: Elaine Zhao <[email protected]>
Co-authored-by: Shashwat Srijan <[email protected]>
Co-authored-by: Aakash Shetty <[email protected]>
* Order sequence ids + config update to support specifying custom quantization layers ( vllm-project#18279 )
Signed-off-by: Elaine Zhao <[email protected]>
Co-authored-by: Tailin Pan <[email protected]>
Co-authored-by: Rishabh Rajesh <[email protected]>
Co-authored-by: Yishan McNabb <[email protected]>
Co-authored-by: Patrick Lange <[email protected]>
Co-authored-by: Maxwell Goldberg <[email protected]>
Co-authored-by: Aakash Shetty <[email protected]>
* [Bugfix] Fix MRoPE Errors in the Qwen-VL Model When Processing Pure Text ( vllm-project#18526 )
Co-authored-by: 松灵 <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>
* [Bugfix] Add kwargs to RequestOutput __init__ to be forward compatible ( vllm-project#18513 )
Signed-off-by: Linkun <[email protected]>
* [CI/Build] Update bamba test model location ( vllm-project#18544 )
Signed-off-by: Harry Mellor <[email protected]>
* [Doc] Support --stream arg in openai_completion_client.py script ( vllm-project#18388 )
Signed-off-by: googs1025 <[email protected]>
* [Bugfix] Use random hidden states in dummy sampler run ( vllm-project#18543 )
Signed-off-by: Bowen Wang <[email protected]>
* [Doc] Add stream flag for chat completion example ( vllm-project#18524 )
Signed-off-by: calvin chen <[email protected]>
* [BugFix][CPU] Fix x86 SHM distributed module initialization ( vllm-project#18536 )
Signed-off-by: jiang.li <[email protected]>
* [Misc] improve Automatic Prefix Caching example ( vllm-project#18554 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Misc] Call `ndarray.tobytes()` directly instead of `ndarray.data.tobytes()` ( vllm-project#18347 )
Signed-off-by: Lukas Geiger <[email protected]>
* [Bugfix] make `test_openai_schema.py` pass ( vllm-project#18224 )
Signed-off-by: David Xia <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
* [Platform] Move platform check to right place ( vllm-project#18470 )
Signed-off-by: wangxiyuan <[email protected]>
* [Compile][Platform] Make PiecewiseBackend pluggable and extendable ( vllm-project#18076 )
Signed-off-by: Mengqing Cao <[email protected]>
Co-authored-by: youkaichao <[email protected]>
* [Build/CI] Fix CUDA 11.8 build ( vllm-project#17679 )
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
* [Tool] Add NIXL installation script ( vllm-project#18172 )
Signed-off-by: Linkun <[email protected]>
* [V1][Spec Decode][Bugfix] Load quantize weights for EAGLE ( vllm-project#18290 )
* [Frontend][Bug Fix] Update llama4 pythonic jinja template and llama4_pythonic parser ( vllm-project#17917 )
Signed-off-by: Kai Wu <[email protected]>
* [Frontend] [Core] Add Tensorizer support for V1, LoRA adapter serialization and deserialization ( vllm-project#17926 )
Signed-off-by: Sanger Steel <[email protected]>
* [AMD] [P/D] Compute num gpus for ROCm correctly in run_accuracy_test.sh ( vllm-project#18568 )
Signed-off-by: Randall Smith <[email protected]>
* Re-submit: Fix: Proper RGBA -> RGB conversion for PIL images. ( vllm-project#18569 )
Signed-off-by: Chenheli Hua <[email protected]>
* [V1][Spec Decoding] Use model_loader.get_model() to load models ( vllm-project#18273 )
Signed-off-by: Mark McLoughlin <[email protected]>
* Enable hybrid attention models for Transformers backend ( vllm-project#18494 )
Signed-off-by: Harry Mellor <[email protected]>
* [Misc] refactor: simplify input validation and num_requests handling in _convert_v1_inputs ( vllm-project#18482 )
Signed-off-by: googs1025 <[email protected]>
* [BugFix] Increase TP execute_model timeout ( vllm-project#18558 )
Signed-off-by: Nick Hill <[email protected]>
* [Bugfix] Set `KVTransferConfig.engine_id` in post_init ( vllm-project#18576 )
Signed-off-by: Linkun Chen <[email protected]>
* [Spec Decode] Make EAGLE3 draft token ID mapping optional ( vllm-project#18488 )
Signed-off-by: Benjamin Chislett <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
* [Neuron] Remove bypass on EAGLEConfig and add a test ( vllm-project#18514 )
Signed-off-by: Elaine Zhao <[email protected]>
* [Bugfix][Benchmarks] Fix a benchmark of deepspeed-mii backend to use api_key ( vllm-project#17291 )
Signed-off-by: Teruaki Ishizaki <[email protected]>
* [Misc] Replace `cuda` hard code with `current_platform` ( vllm-project#16983 )
Signed-off-by: shen-shanshan <[email protected]>
* [Hardware] correct method signatures for HPU,ROCm,XPU ( vllm-project#18551 )
Signed-off-by: Andy Xie <[email protected]>
* [V1] [Bugfix] eagle bugfix and enable correct lm_head for multimodal ( vllm-project#18034 )
Signed-off-by: Ronald Xu <[email protected]>
* [Feature]Add async tensor parallelism using compilation pass ( vllm-project#17882 )
Signed-off-by: cascade812 <[email protected]>
* [Doc] Update quickstart and install for cu128 using `--torch-backend=auto` ( vllm-project#18505 )
Signed-off-by: mgoin <[email protected]>
* [Feature][V1]: suupports cached_tokens in response usage ( vllm-project#18149 )
Co-authored-by: simon-mo <[email protected]>
* [Bugfix] Add half type support in reshape_and_cache_cpu_impl on x86 cpu platform ( vllm-project#18430 )
Signed-off-by: Yuqi Zhang <[email protected]>
Co-authored-by: Yuqi Zhang <[email protected]>
* Migrate docs from Sphinx to MkDocs ( vllm-project#18145 )
Signed-off-by: Harry Mellor <[email protected]>
* Revert "[V1] [Bugfix] eagle bugfix and enable correct lm_head for multimodal ( vllm-project#18034 )" ( vllm-project#18600 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix][Model] Fix baichuan model loader for tp ( vllm-project#18597 )
Signed-off-by: Mengqing Cao <[email protected]>
* [V0][Bugfix] Fix parallel sampling performance regression when guided decoding is enabled ( vllm-project#17731 )
Signed-off-by: Madeesh Kannan <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
* Add myself as docs code owner ( vllm-project#18605 )
Signed-off-by: Harry Mellor <[email protected]>
* [Hardware][CPU] Update intel_extension_for_pytorch 2.7.0 and move to `requirements/cpu.txt` ( vllm-project#18542 )
Signed-off-by: Kay Yan <[email protected]>
* [CI] fix kv_cache_type argument ( vllm-project#18594 )
Signed-off-by: Andy Xie <[email protected]>
* [Doc] Fix indent of contributing to vllm ( vllm-project#18611 )
Signed-off-by: Zerohertz <[email protected]>
* Replace `{func}` with mkdocs style links ( vllm-project#18610 )
Signed-off-by: Harry Mellor <[email protected]>
* [CI/Build] Fix V1 flag being set in entrypoints tests ( vllm-project#18598 )
Signed-off-by: DarkLight1337 <[email protected]>
* Fix examples with code blocks in docs ( vllm-project#18609 )
Signed-off-by: Harry Mellor <[email protected]>
* [Bugfix] Fix transformers model impl ignored for mixtral quant ( vllm-project#18602 )
Signed-off-by: Tristan Leclercq <[email protected]>
* Include private attributes in API documentation ( vllm-project#18614 )
Signed-off-by: Harry Mellor <[email protected]>
* [Misc] add Haystack integration ( vllm-project#18601 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Bugfix][Build/CI] Fixup CUDA compiler version check for CUDA_SUPPORTED_ARCHS ( vllm-project#18579 )
* [Doc] Fix markdown list indentation for MkDocs rendering ( vllm-project#18620 )
Signed-off-by: Zerohertz <[email protected]>
* [Doc] Use a different color for the announcement ( vllm-project#18616 )
Signed-off-by: DarkLight1337 <[email protected]>
* Refactor pplx init logic to make it modular (prepare for deepep) ( vllm-project#18200 )
Signed-off-by: youkaichao <[email protected]>
* Fix figures in design doc ( vllm-project#18612 )
Signed-off-by: Harry Mellor <[email protected]>
* [Docs] Change mkdocs to not use directory urls ( vllm-project#18622 )
Signed-off-by: mgoin <[email protected]>
* [v1] Redo "Support multiple KV cache groups in GPU model runner ( vllm-project#17945 )" ( vllm-project#18593 )
Signed-off-by: Chen Zhang <[email protected]>
* [Doc] fix list formatting ( vllm-project#18624 )
Signed-off-by: David Xia <[email protected]>
* [Doc] Fix top-level API links/docs ( vllm-project#18621 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Avoid documenting dynamic / internal modules ( vllm-project#18626 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Fix broken links and unlinked docs, add shortcuts to home sidebar ( vllm-project#18627 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] Support Deepseek MTP ( vllm-project#18435 )
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: YaoJiayi <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
* Use prebuilt FlashInfer x86_64 PyTorch 2.7 CUDA 12.8 wheel for CI ( vllm-project#18537 )
Signed-off-by: Huy Do <[email protected]>
* [CI] Enable test_initialization to run on V1 ( vllm-project#16736 )
Signed-off-by: mgoin <[email protected]>
* [Doc] Update references to doc files ( vllm-project#18637 )
Signed-off-by: DarkLight1337 <[email protected]>
* [ModelOpt] Introduce VLLM_MAX_TOKENS_PER_EXPERT_FP4_MOE env var to control blockscale tensor allocation ( vllm-project#18160 )
Signed-off-by: Pavani Majety <[email protected]>
* [Bugfix] Migrate to REGEX Library to prevent catastrophic backtracking ( vllm-project#18454 )
Signed-off-by: Crucifixion-Fxl <[email protected]>
Co-authored-by: Crucifixion-Fxl <[email protected]>
* [Bugfix][Nixl] Fix Preemption Bug ( vllm-project#18631 )
Signed-off-by: [email protected] <[email protected]>
* config.py: Clarify that only local GGUF checkpoints are supported. ( vllm-project#18623 )
Signed-off-by: Mathieu Bordere <[email protected]>
* FIX MOE issue in AutoRound format ( vllm-project#18586 )
Signed-off-by: wenhuach21 <[email protected]>
* [V1][Spec Decode] Small refactors to improve eagle bookkeeping performance ( vllm-project#18424 )
Signed-off-by: qizixi <[email protected]>
* [Frontend] improve vllm serve --help display ( vllm-project#18643 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Model] Add support for Qwen2.5-Omni-7B-AWQ (Qwen2_5OmniForConditionalGeneration) ( vllm-project#18647 )
* [V1][Spec Decode] Support multi-layer eagle draft model ( vllm-project#18030 )
Signed-off-by: qizixi <[email protected]>
* [Doc] Update README links, mark external links ( vllm-project#18635 )
Signed-off-by: DarkLight1337 <[email protected]>
* [MISC][pre-commit] Add pre-commit check for triton import ( vllm-project#17716 )
Signed-off-by: Mengqing Cao <[email protected]>
* [Doc] Fix indentation problems in V0 Paged Attention docs ( vllm-project#18659 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Add community links ( vllm-project#18657 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model] use AutoWeightsLoader for gpt2 ( vllm-project#18625 )
Signed-off-by: zt2370 <[email protected]>
* [Doc] Reorganize user guide ( vllm-project#18661 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI/Build] `chmod +x` to `cleanup_pr_body.sh` ( vllm-project#18650 )
Signed-off-by: DarkLight1337 <[email protected]>
* [MISC] typo fix and clean import ( vllm-project#18664 )
Signed-off-by: Andy Xie <[email protected]>
* [BugFix] Fix import error for fused_moe ( vllm-project#18642 )
Signed-off-by: wangxiyuan <[email protected]>
* [CI] enforce import regex instead of re ( vllm-project#18665 )
Signed-off-by: Aaron Pham <[email protected]>
* fix(regression): clone from reference items ( vllm-project#18662 )
Signed-off-by: Aaron Pham <[email protected]>
* [CI/Build] fix permission denied issue ( vllm-project#18645 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [BugFix][Spec Decode] Improve Prefix Caching Logic in Speculative Decoding ( vllm-project#18668 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [V1] Fix _pickle.PicklingError: Can't pickle <class 'transformers_modules.deepseek-ai.DeepSeek-V2-Lite... ( vllm-project#18640 )
Signed-off-by: Seiji Eicher <[email protected]>
* [MISC] correct signature for LoaderFunction ( vllm-project#18670 )
Signed-off-by: Andy Xie <[email protected]>
* [Misc]Replace `cuda` hard code with `current_platform` in Ray ( vllm-project#14668 )
Signed-off-by: noemotiovon <[email protected]>
* [Misc][ModelScope] Change to use runtime VLLM_USE_MODELSCOPE ( vllm-project#18655 )
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [VLM] Initialize video input support for InternVL models ( vllm-project#18499 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* Speed up the `kernels/quantization/` tests ( vllm-project#18669 )
Signed-off-by: mgoin <[email protected]>
* [BUGFIX] catch subclass first for try...except ( vllm-project#18672 )
Signed-off-by: Andy Xie <[email protected]>
* [Misc] Reduce logs on startup ( vllm-project#18649 )
Signed-off-by: DarkLight1337 <[email protected]>
* [doc] fix broken links ( vllm-project#18671 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [doc] improve readability ( vllm-project#18675 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Bugfix] Fix cpu usage and cache hit stats reporting on cpu environment ( vllm-project#18674 )
Signed-off-by: zzzyq <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [CI/build] fix no regex ( vllm-project#18676 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Misc] small improve ( vllm-project#18680 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Bugfix] Fix profiling dummy data for Pixtral ( vllm-project#18677 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Core][Multimodal] Convert PIL Image to array without data copy when hashing ( vllm-project#18682 )
Signed-off-by: Lukas Geiger <[email protected]>
* [CI/Build][Doc] Update `gte-Qwen2-1.5B-instruct` usage ( vllm-project#18683 )
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [Misc] Fixed the abnormally high TTFT issue in the PD disaggregation example ( vllm-project#18644 )
Signed-off-by: zhaohaidao <[email protected]>
Signed-off-by: zhaohaiyuan <[email protected]>
Co-authored-by: zhaohaiyuan <[email protected]>
* refactor: simplify request handler, use positive condition check for handler assignment ( vllm-project#18690 )
Signed-off-by: googs1025 <[email protected]>
* [Bugfix] Fix the lm_head in gpt_bigcode in lora mode ( vllm-project#6357 )
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
* [CI] add missing argument ( vllm-project#18694 )
Signed-off-by: Andy Xie <[email protected]>
* [GH] Add issue template for reporting CI failures ( vllm-project#18696 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Fix issue template format ( vllm-project#18699 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix Mistral-format models with sliding window ( vllm-project#18693 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI/Build] Replace `math.isclose` with `pytest.approx` ( vllm-project#18703 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI] fix dump_input for str type ( vllm-project#18697 )
Signed-off-by: Andy Xie <[email protected]>
* [Model] Add support for YARN in NemotronNAS models ( vllm-project#18427 )
Signed-off-by: Nave Assaf <[email protected]>
* [CI/Build] Split pooling and generation extended language models tests in CI ( vllm-project#18705 )
Signed-off-by: Isotr0py <[email protected]>
* [Hardware][Intel-Gaudi] [CI/Build] Add tensor parallel size = 2 test to HPU CI ( vllm-project#18709 )
Signed-off-by: Lukasz Durejko <[email protected]>
* [Misc] add AutoGen integration ( vllm-project#18712 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Bugfix]: handle hf-xet CAS error when loading Qwen3 weights in vLLM ( vllm-project#18701 )
* [Doc] Improve API docs ( vllm-project#18713 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Move examples and further reorganize user guide ( vllm-project#18666 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix Llama GGUF initialization ( vllm-project#18717 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1][Sampler] Improve performance of FlashInfer sampling by sampling logits instead of probs ( vllm-project#18608 )
* Convert `examples` to `ruff-format` ( vllm-project#18400 )
Signed-off-by: Harry Mellor <[email protected]>
* [Model][Gemma3] Simplify image input validation ( vllm-project#18710 )
Signed-off-by: Lukas Geiger <[email protected]>
* [Misc] improve web section group title display ( vllm-project#18684 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [V1][Quantization] Add CUDA graph compatible v1 GGUF support ( vllm-project#18646 )
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
* [Model][Gemma3] Cast image pixel values already on CPU ( vllm-project#18732 )
Signed-off-by: Lukas Geiger <[email protected]>
* [FEAT] [ROCm] Upgrade AITER Fused MoE kernels. ( vllm-project#18271 )
Signed-off-by: vllmellm <[email protected]>
* [Doc] Update OOT model docs ( vllm-project#18742 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Update reproducibility doc and example ( vllm-project#18741 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] improve docs ( vllm-project#18734 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* feat(rocm-support): support mamba2 on rocm ( vllm-project#18565 )
Signed-off-by: Islam Almersawi <[email protected]>
Co-authored-by: Islam Almersawi <[email protected]>
* [Hardware][Intel-Gaudi] [CI/Build] Fix multiple containers using the same name in run-hpu-test.sh ( vllm-project#18752 )
Signed-off-by: Lukasz Durejko <[email protected]>
* [Doc] cleanup deprecated flag for doc ( vllm-project#18715 )
Signed-off-by: calvin chen <[email protected]>
* Minor fix about MooncakeStoreConnector ( vllm-project#18721 )
Signed-off-by: baoloongmao <[email protected]>
* [Build] fix cpu build missing libtbbmalloc.so ( vllm-project#18744 )
Signed-off-by: Kebe <[email protected]>
* [BUG FIX] minicpm ( vllm-project#18739 )
Signed-off-by: huangyuxiang03 <[email protected]>
Co-authored-by: huangyuxiang03 <[email protected]>
* [Doc] Convert Sphinx directives ( `{class}`, `{meth}`, `{attr}`, ...) to MkDocs format for better documentation linking ( vllm-project#18663 )
Signed-off-by: Zerohertz <[email protected]>
* [CI/Build] Remove imports of built-in `re` ( vllm-project#18750 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1][Metrics] Add API for accessing in-memory Prometheus metrics ( vllm-project#17010 )
Signed-off-by: Mark McLoughlin <[email protected]>
* Disable prefix cache by default for benchmark ( vllm-project#18639 )
Signed-off-by: cascade812 <[email protected]>
* optimize get_kv_cache_torch_dtype ( vllm-project#18531 )
Signed-off-by: idellzheng <[email protected]>
* [Core] Automatically cast multi-modal input dtype ( vllm-project#18756 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Mistral tool calling when content is list ( vllm-project#18729 )
Signed-off-by: mgoin <[email protected]>
---------
Signed-off-by: Satyajith Chilappagari <[email protected]>
Signed-off-by: Lucia Fang <[email protected]>
Signed-off-by: Liangfu Chen <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Nan2018 <[email protected]>
Signed-off-by: rand-fly <[email protected]>
Signed-off-by: reidliu41 <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: 汪志鹏 <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: calvin chen <[email protected]>
Signed-off-by: haochengxia <[email protected]>
Signed-off-by: Dilip Gowda Bhagavan <[email protected]>
Signed-off-by: Michael Goin <[email protected]>
Signed-off-by: Gregory Shtrasberg <[email protected]>
Signed-off-by: Bill Nell <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: wwl2755 <[email protected]>
Signed-off-by: nicklucche <[email protected]>
Signed-off-by: Kebe <[email protected]>
Signed-off-by: Yong Hoon Shin <[email protected]>
Signed-off-by: rabi <[email protected]>
Signed-off-by: dhia.rhaiem <[email protected]>
Signed-off-by: giantcroc <[email protected]>
Signed-off-by: Hosang Yoon <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: vllmellm <[email protected]>
Signed-off-by: Sebastian Schönnenbeck <[email protected]>
Signed-off-by: Andy Xie <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: jaycha <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: Shane A <[email protected]>
Signed-off-by: Elaine Zhao <[email protected]>
Signed-off-by: Linkun <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: googs1025 <[email protected]>
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: jiang.li <[email protected]>
Signed-off-by: Lukas Geiger <[email protected]>
Signed-off-by: David Xia <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Kai Wu <[email protected]>
Signed-off-by: Sanger Steel <[email protected]>
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Chenheli Hua <[email protected]>
Signed-off-by: Linkun Chen <[email protected]>
Signed-off-by: Benjamin Chislett <[email protected]>
Signed-off-by: Teruaki Ishizaki <[email protected]>
Signed-off-by: shen-shanshan <[email protected]>
Signed-off-by: Ronald Xu <[email protected]>
Signed-off-by: cascade812 <[email protected]>
Signed-off-by: Yuqi Zhang <[email protected]>
Signed-off-by: Madeesh Kannan <[email protected]>
Signed-off-by: Kay Yan <[email protected]>
Signed-off-by: Zerohertz <[email protected]>
Signed-off-by: Tristan Leclercq <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: YaoJiayi <[email protected]>
Signed-off-by: Huy Do <[email protected]>
Signed-off-by: Pavani Majety <[email protected]>
Signed-off-by: Crucifixion-Fxl <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Mathieu Bordere <[email protected]>
Signed-off-by: wenhuach21 <[email protected]>
Signed-off-by: qizixi <[email protected]>
Signed-off-by: zt2370 <[email protected]>
Signed-off-by: Aaron Pham <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Seiji Eicher <[email protected]>
Signed-off-by: noemotiovon <[email protected]>
Signed-off-by: zzzyq <[email protected]>
Signed-off-by: zhaohaidao <[email protected]>
Signed-off-by: zhaohaiyuan <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Nave Assaf <[email protected]>
Signed-off-by: Lukasz Durejko <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Islam Almersawi <[email protected]>
Signed-off-by: baoloongmao <[email protected]>
Signed-off-by: huangyuxiang03 <[email protected]>
Signed-off-by: idellzheng <[email protected]>
Co-authored-by: sunyicode0012 <[email protected]>
Co-authored-by: Gong Shufan <[email protected]>
Co-authored-by: Satyajith Chilappagari <[email protected]>
Co-authored-by: Lucia Fang <[email protected]>
Co-authored-by: Lucia (Lu) Fang <[email protected]>
Co-authored-by: Liangfu Chen <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Nan Qin <[email protected]>
Co-authored-by: Andrew Sansom <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Random Fly <[email protected]>
Co-authored-by: Reid <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: 汪志鹏 <[email protected]>
Co-authored-by: wang.yuqi <[email protected]>
Co-authored-by: 燃 <[email protected]>
Co-authored-by: 松灵 <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Calvin Chen <[email protected]>
Co-authored-by: Percy <[email protected]>
Co-authored-by: Dilip Gowda Bhagavan <[email protected]>
Co-authored-by: bnellnm <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: wwl2755 <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: Kebe <[email protected]>
Co-authored-by: Yong Hoon Shin <[email protected]>
Co-authored-by: Rabi Mishra <[email protected]>
Co-authored-by: Dhia Eddine Rhaiem <[email protected]>
Co-authored-by: younesbelkada <[email protected]>
Co-authored-by: Ilyas Chahed <[email protected]>
Co-authored-by: Jingwei Zuo <[email protected]>
Co-authored-by: GiantCroc <[email protected]>
Co-authored-by: Hyogeun Oh (오효근) <[email protected]>
Co-authored-by: Hosang <[email protected]>
Co-authored-by: Mark McLoughlin <[email protected]>
Co-authored-by: vllmellm <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
Co-authored-by: Sebastian Schoennenbeck <[email protected]>
Co-authored-by: Ning Xie <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: youngrok cha <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: kourosh hakhamaneshi <[email protected]>
Co-authored-by: Shane A <[email protected]>
Co-authored-by: aws-elaineyz <[email protected]>
Co-authored-by: Shashwat Srijan <[email protected]>
Co-authored-by: Aakash Shetty <[email protected]>
Co-authored-by: Tailin Pan <[email protected]>
Co-authored-by: Rishabh Rajesh <[email protected]>
Co-authored-by: Yishan McNabb <[email protected]>
Co-authored-by: Patrick Lange <[email protected]>
Co-authored-by: Maxwell Goldberg <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: lkchen <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: CYJiang <[email protected]>
Co-authored-by: Bowen Wang <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: Lukas Geiger <[email protected]>
Co-authored-by: David Xia <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Ekagra Ranjan <[email protected]>
Co-authored-by: Kai Wu <[email protected]>
Co-authored-by: Sanger Steel <[email protected]>
Co-authored-by: rasmith <[email protected]>
Co-authored-by: Chenheli Hua <[email protected]>
Co-authored-by: Benjamin Chislett <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: Teruaki Ishizaki <[email protected]>
Co-authored-by: Shanshan Shen <[email protected]>
Co-authored-by: RonaldBXu <[email protected]>
Co-authored-by: cascade <[email protected]>
Co-authored-by: Chauncey <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: Yuqi Zhang <[email protected]>
Co-authored-by: Yuqi Zhang <[email protected]>
Co-authored-by: Madeesh Kannan <[email protected]>
Co-authored-by: Kay Yan <[email protected]>
Co-authored-by: Tristan Leclercq <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: Jiayi Yao <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Huy Do <[email protected]>
Co-authored-by: Pavani Majety <[email protected]>
Co-authored-by: Feng XiaoLong <[email protected]>
Co-authored-by: Crucifixion-Fxl <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Mathieu Borderé <[email protected]>
Co-authored-by: Wenhua Cheng <[email protected]>
Co-authored-by: qizixi <[email protected]>
Co-authored-by: Yuanhao WU <[email protected]>
Co-authored-by: ztang2370 <[email protected]>
Co-authored-by: Aaron Pham <[email protected]>
Co-authored-by: Seiji Eicher <[email protected]>
Co-authored-by: Chenguang Li <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: AlexZhao <[email protected]>
Co-authored-by: zhaohaiyuan <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: Naveassaf <[email protected]>
Co-authored-by: Łukasz Durejko <[email protected]>
Co-authored-by: dylan <[email protected]>
Co-authored-by: almersawi <[email protected]>
Co-authored-by: Islam Almersawi <[email protected]>
Co-authored-by: Łukasz Durejko <[email protected]>
Co-authored-by: maobaolong <[email protected]>
Co-authored-by: Shawn Huang <[email protected]>
Co-authored-by: huangyuxiang03 <[email protected]>
Co-authored-by: chunxiaozheng <[email protected]> minpeter pushed a commit
to minpeter/vllm
that referenced
this pull request Jun 24, 2025 [V1][Spec Decode] Small refactors to improve eagle bookkeeping perfor… … e2fbc5c …mance ( vllm-project#18424 )
Signed-off-by: qizixi <[email protected]>
Signed-off-by: minpeter <[email protected]>
|
2025-09-07 17:51:00
|
e493e48524e9e78ab33eafec6461b3940e361189
|
https://github.com/vllm-project/vllm/pull/17731
| false | true | true | true |
PERF: TTFT, profile, profiling | SERVING: vllm serve, serve, Frontend | TEST: test, test, Test
|
Copy link Contributor shadeMe commented May 6, 2025 • edited by github-actions bot ParallelSampleSequenceGroup.add_request has to copy the original SamplingParams instance as many times as the number of requested samples. This is currently done with a copy.deepcopy call, which is not advisable as the logits_processors field could contain arbitrary Python objects with expensive-to-copy state. This happens to be the case with the current guided decoding logits processors, scaling linearly with the value of SamplingParams.n and introducing a bottleneck in the hot path. A similar issue was previously identified, and SamplingParams.clone was introduced to work around it - it attempts to call a clone function on each logits processor object, with the assumption that classes can implement this method to minimize the overhead by performing shallow copies when possible. However, not all existing logits processors implement this method. Nor does the ParallelSampleSequenceGroup class avail itself of the SamplingParams.clone method. This commit introduces the following changes: Modify ParallelSampleSequenceGroup.add_request to call SamplingParams.clone instead of copy.deepcopy . Update the logits processors of the guidance , outlines and xgrammar backends to expose a clone method for the efficient copying of mutable state. The lm-format-enforcer backend was left untouched as the logits processor implementation is external to vLLM. Benchmark For text generation with an Nvidia L4, Phi-1.5, n=3 in an async setup, we see the ParallelSampleSequenceGroup.add_request call dominating the runtime during a 180-second profile (after warm-up/with in-flight requests) of the original code (anywhere between 60%-86% of the total runtime depending on the backend). With the above changes, this is essentially eliminated (0.01%-0.6%). Guidance Outlines Xgrammar 👍 3 Xarbirus, chaunceyjiang, and dtransposed reacted with thumbs up emoji 🎉 1 bi1101 reacted with hooray emoji All reactions 👍 3 reactions 🎉 1 reaction [Bugfix] Fix parallel sampling performance regression when guided dec… … 59f7675 …oding is enabled
`ParallelSampleSequenceGroup.add_request` has to copy the original `SamplingParams` instance as
many times as the number of requested samples. This is currently done with a `copy.deepcopy` call,
which is not advisable as the `logits_processors` field could contain arbitrary Python objects
with expensive-to-copy state. This happens to be the case with the current guided decoding logits
processors, scaling linearly with the value of `SamplingParams.n` and introducing a bottleneck in the
hot path.
A similar issue was previously identified, and `SamplingParams.clone` was introduced to work around this
issue - it attempts to call a `clone` function on each logits processor object, with the assumption that
classes can implement this method to minimize the overhead by performing shallow copies when possible.
However, not all existing logits processors implement this method. Nor does the `ParallelSampleSequenceGroup`
class avail itself of the `SamplingParams.clone` method.
This commit introduces the following changes:
* Modify `ParallelSampleSequenceGroup.add_request` to call `SamplingParams.clone` instead of `copy.deepcopy`.
* Update the logits processors of the `guidance`, `outlines` and `xgrammar` backends to expose a `clone` method
for the efficient copying of mutable state.
The `lm-format-enforcer` backend was left untouched as the logits processor implementation is external to
vLLM.
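For readers skimming the diff, the change described above boils down to replacing a blanket `copy.deepcopy` of the sampling parameters with a `clone()` call that lets each logits processor decide how to copy itself. Below is a minimal sketch of that idea; the class and function names are hypothetical stand-ins, and the deepcopy fallback for processors without a `clone()` method is an illustrative assumption rather than the actual vLLM implementation.

```python
import copy
from typing import Any, List, Optional

# A minimal sketch of the copying strategy described above. The names are
# hypothetical stand-ins (not the actual vLLM classes), and the deepcopy
# fallback for processors without clone() is an illustrative assumption.


class FakeSamplingParams:
    def __init__(
        self,
        n: int,
        logits_processors: Optional[List[Any]] = None,
    ) -> None:
        self.n = n
        self.logits_processors = logits_processors

    def clone(self) -> "FakeSamplingParams":
        # Shallow-copy the cheap scalar fields, then let each processor
        # copy itself via clone() when it provides one.
        cloned = copy.copy(self)
        if self.logits_processors is not None:
            cloned.logits_processors = [
                lp.clone() if hasattr(lp, "clone") else copy.deepcopy(lp)
                for lp in self.logits_processors
            ]
        return cloned


def add_parallel_sample_requests(params: FakeSamplingParams) -> List[FakeSamplingParams]:
    # One copy per requested sample, mirroring what
    # ParallelSampleSequenceGroup.add_request needs; the per-sample cost is
    # now bounded by each processor's clone() rather than a full deep copy.
    return [params.clone() for _ in range(params.n)]
```

The point is that the per-sample cost is now governed by how cheaply each processor can clone its own state, rather than by a deep copy of whatever arbitrary objects the processor happens to hold.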
Signed-off-by: Madeesh Kannan <[email protected]> shadeMe requested review from mgoin and russellb as code owners May 6, 2025 16:26 Copy link github-actions bot commented May 6, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the structured-output label May 6, 2025 github-project-automation bot added this to Structured Output May 6, 2025 njhill added
the v0 label May 6, 2025 Copy link Contributor chaunceyjiang commented May 7, 2025 @shadeMe Hi, sorry for the off-topic question—how did you generate this performance chart? All reactions mypy fixes … ded6280 Signed-off-by: Madeesh Kannan <[email protected]> shadeMe force-pushed the v0/fix/logitsprocessor-parallel-sampling-guided-decoding-deepcopy branch
from 15e45cf to ded6280 Compare May 7, 2025 09:17 Copy link Contributor Author shadeMe commented May 7, 2025 @shadeMe Hi, sorry for the off-topic question—how did you generate this performance chart? It's with the speedscope tool. All reactions Copy link Contributor dtransposed commented May 9, 2025 Overlap with #16349 just FYI All reactions Copy link mergify bot commented May 12, 2025 This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @shadeMe . https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork All reactions mergify bot added
the needs-rebase label May 12, 2025 Merge branch 'main' into v0/fix/logitsprocessor-parallel-sampling-gui… … 6172283 …ded-decoding-deepcopy
Signed-off-by: Madeesh Kannan <[email protected]> mergify bot removed
the needs-rebase label May 13, 2025 mgoin approved these changes May 16, 2025 View reviewed changes Copy link Member mgoin left a comment Seems reasonable to me, but would like @russellb or @aarnphm to confirm before merge All reactions mgoin added
the ready ONLY add when PR is ready to merge/full CI is needed label May 16, 2025 bi1101 mentioned this pull request May 16, 2025 [Bug]:Structured outputs inference often took a very long time,and eventually causing a timeout and vLLM engine crushing. #10081 Open 1 task aarnphm approved these changes May 17, 2025 View reviewed changes Copy link Collaborator aarnphm left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I'm good with this for structured outputs, and good to merge in guidance and xgrammar first before #15975 . iirc we will have to deepcopy the logit processors regardless if users use a custom logit processor? so essentially this change in sequence.py could potentially be breaking for users in V0 engine? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/model_executor/guided_decoding/outlines_logits_processors.py Comment on lines +59 to +64 def clone(self) -> "BaseLogitsProcessor": cloned = copy.copy(self) cloned._guide = self._guide.copy() cloned._fsm_state = copy.deepcopy(self._fsm_state) return cloned Copy link Collaborator aarnphm May 17, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I would like to get #15975 in first before assigning this private attrs. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 shadeMe reacted with thumbs up emoji All reactions 👍 1 reaction Copy link Contributor Author shadeMe commented May 19, 2025 iirc we will have to deepcopy the logit processors regardless if users use a custom logit processor? so essentially this change in sequence.py could potentially be breaking for users in V0 engine? Breaking perhaps along the same lines as the original PR that introduced the SamplingParams.clone method - this PR just brings the parallel sampling code inline with its non-parallel counterpart. We could theoretically preserve the existing behaviour while excluding the structured outputs processors, but it would result in leaky abstractions. 👍 1 mgoin reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Merge remote-tracking branch 'origin/main' into v0/fix/logitsprocesso… … a17fb77 …r-parallel-sampling-guided-decoding-deepcopy russellb enabled auto-merge (squash) May 19, 2025 14:18 Copy link Member russellb commented May 19, 2025 merged from main to see if that gets CI passing All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor Author shadeMe commented May 21, 2025 The CI failures appear to be unrelated AFAICT? The failing tests use the default n=1 and do not use structured outputs. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Member DarkLight1337 commented May 23, 2025 Can you merge from main to fix the CI failures? 👍 1 shadeMe reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 
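The review snippet quoted above (from vllm/model_executor/guided_decoding/outlines_logits_processors.py) shows the shape of the per-backend `clone()`: shallow-copy the processor, copy the compiled guide cheaply, and deep-copy only the small per-request FSM state. Below is a self-contained restatement of that pattern with simplified, hypothetical names; the real guidance/outlines/xgrammar processors differ in their details.

```python
import copy
from typing import Any, Dict


class GuidedLogitsProcessor:
    """Illustrative stand-in for a guided-decoding logits processor.

    Simplified, hypothetical names: the real guidance/outlines/xgrammar
    processors differ, but the clone() idea quoted in the review diff
    above is the same.
    """

    def __init__(self, guide: Any) -> None:
        # Large, effectively immutable compiled automaton/grammar.
        self._guide = guide
        # Small, mutable per-request state (e.g. FSM state per sequence).
        self._fsm_state: Dict[int, int] = {}

    def clone(self) -> "GuidedLogitsProcessor":
        # Shallow-copy the wrapper so the heavy compiled guide is not
        # deep-copied (the quoted outlines diff additionally does
        # `cloned._guide = self._guide.copy()`, a cheap copy of the guide),
        # and deep-copy only the small mutable state dict.
        cloned = copy.copy(self)
        cloned._fsm_state = copy.deepcopy(self._fsm_state)
        return cloned
```

This is also the crux of aarnphm's question in the review above: a custom, user-supplied logits processor that does not implement `clone()` falls back to whatever `SamplingParams.clone` does for it, which is why routing parallel sampling through `clone()` in `sequence.py` is flagged as potentially behavior-changing for V0 users.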
Merge branch 'main' into v0/fix/logitsprocessor-parallel-sampling-gui… … b8b3fd7 …ded-decoding-deepcopy Hide details View details vllm-bot merged commit e493e48 into vllm-project : main May 23, 2025 53 of 58 checks passed github-project-automation bot moved this to Done in Structured Output May 23, 2025 bi1101 mentioned this pull request May 23, 2025 [Usage]: Regex Structured Output Became Very Slow #18546 Open 1 task zzzyq pushed a commit
to zzzyq/vllm
that referenced
this pull request May 24, 2025 [V0][Bugfix] Fix parallel sampling performance regression when guided… … 3b77312 … decoding is enabled ( vllm-project#17731 )
Signed-off-by: Madeesh Kannan <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Signed-off-by: Yuqi Zhang <[email protected]> Copy link Member DarkLight1337 commented May 24, 2025 It appears that the samplers test failure on main is caused by this PR. PTAL https://buildkite.com/vllm/ci/builds/20641/steps?jid=0196fcb9-d7f7-4ff4-ad54-260dfc784dae All reactions DarkLight1337 mentioned this pull request May 24, 2025 [Bug][Failing Test]: Samplers Test - samplers/test_seeded_generate.py #18656 Closed 1 task Copy link Collaborator aarnphm commented May 24, 2025 This might have to do with deepcopy 🤔 All reactions gshtras added a commit
to ROCm/vllm
that referenced
this pull request May 27, 2025 Upstream merge 2025 05 27 ( #557 ) … 1900335 * Add files via uploadAdd fused MoE kernel tuning configs (fp8_w8a8) for DeepSeek V3/R1 on a single-node 8x NVIDIA H20 96GB setup ( vllm-project#18337 )
* [Misc] Fix typo ( vllm-project#18330 )
* Neuron up mistral ( vllm-project#18222 )
Signed-off-by: Satyajith Chilappagari <[email protected]>
* fix CUDA_check redefinition in vllm-project#17918 ( vllm-project#18287 )
Signed-off-by: Lucia Fang <[email protected]>
Co-authored-by: Lucia (Lu) Fang <[email protected]>
* [neuron] fix authorization issue ( vllm-project#18364 )
Signed-off-by: Liangfu Chen <[email protected]>
* [Misc] Allow `AutoWeightsLoader` to skip loading weights with specific substr in name ( vllm-project#18358 )
Signed-off-by: Isotr0py <[email protected]>
* [Core] [Bugfix]: tensor parallel with prompt embeds ( vllm-project#18171 )
Signed-off-by: Nan2018 <[email protected]>
Co-authored-by: Andrew Sansom <[email protected]>
* [release] Change dockerhub username for TPU release ( vllm-project#18389 )
* [Bugfix] fix adding bias twice in ipex GPTQ quantization ( vllm-project#18363 )
Signed-off-by: rand-fly <[email protected]>
* [doc] update env variable export ( vllm-project#18391 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Misc] Add LoRA code owner ( vllm-project#18387 )
Signed-off-by: Jee Jee Li <[email protected]>
* Update cpu.txt ( vllm-project#18398 )
Signed-off-by: 汪志鹏 <[email protected]>
* [CI] Add mteb testing to test the accuracy of the embedding model ( vllm-project#17175 )
* [Bugfix] Fix MRoPE Errors in the Qwen-VL Model When Processing Pure Text ( vllm-project#18407 )
Co-authored-by: 松灵 <[email protected]>
* [Misc] refactor prompt embedding examples ( vllm-project#18405 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Minor] Rename quantization nvfp4 to modelopt_fp4 ( vllm-project#18356 )
Signed-off-by: mgoin <[email protected]>
* [Model] use AutoWeightsLoader for bloom ( vllm-project#18300 )
Signed-off-by: calvin chen <[email protected]>
* [Kernel] update comment for KV shape in unified triton attn ( vllm-project#18099 )
Signed-off-by: haochengxia <[email protected]>
* fix:Build torch wheel inline rather than picking from nightly ( vllm-project#18351 )
Signed-off-by: Dilip Gowda Bhagavan <[email protected]>
* [TPU] Re-enable the Pallas MoE kernel ( vllm-project#18025 )
Signed-off-by: Michael Goin <[email protected]>
* [Bugfix] config.head_dim is now explicitly set to None ( vllm-project#18432 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* [Bug] Fix moe_sum signature ( vllm-project#18440 )
Signed-off-by: Bill Nell <[email protected]>
* Revert "[Bugfix] Fix MRoPE Errors in the Qwen-VL Model When Processing Pure Text ( vllm-project#18407 )" ( vllm-project#18456 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix][Failing Test] Fix nixl connector test when promt size < block size ( vllm-project#18429 )
Signed-off-by: wwl2755 <[email protected]>
* [Misc] MultiConnector._connectors type ( vllm-project#18423 )
Signed-off-by: nicklucche <[email protected]>
* [Frontend] deprecate `--device` arg ( vllm-project#18399 )
Signed-off-by: Kebe <[email protected]>
* [V1] Fix general plugins not loaded in engine for multiproc ( vllm-project#18326 )
Signed-off-by: Yong Hoon Shin <[email protected]>
* [Misc] refactor disaggregated-prefill-v1 example ( vllm-project#18474 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Bugfix][Failing Test] Fix test_events.py ( vllm-project#18460 )
Signed-off-by: rabi <[email protected]>
* [MODEL] FalconH1 ( vllm-project#18406 )
Signed-off-by: dhia.rhaiem <[email protected]>
Co-authored-by: younesbelkada <[email protected]>
Co-authored-by: Ilyas Chahed <[email protected]>
Co-authored-by: Jingwei Zuo <[email protected]>
* [Doc] fix arg docstring in linear layers ( vllm-project#18410 )
Signed-off-by: giantcroc <[email protected]>
* [Bugfix] Reduce moe_sum test size to avoid OOM ( vllm-project#18484 )
Signed-off-by: Bill Nell <[email protected]>
* [Build] fix Dockerfile shell ( vllm-project#18402 )
* [Misc] Update deprecation message for `--enable-reasoning` ( vllm-project#18404 )
* [ROCm][Kernel][V1] Enable AMD Radeon GPU Custom Paged Attention on v1 ( vllm-project#17004 )
Signed-off-by: Hosang Yoon <[email protected]>
* Remove incorrect env value
* Revert "[v1] Support multiple KV cache groups in GPU model runner ( vllm-project#17945 ) ( vllm-project#18459 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [FEAT][ROCm] Upgrade AITER MLA v1 backend ( vllm-project#18338 )
Signed-off-by: vllmellm <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
* [Bugfix] Consistent ascii handling in tool parsers ( vllm-project#17704 )
Signed-off-by: Sebastian Schönnenbeck <[email protected]>
* [FalconH1] Fix output dtype in RMSNorm fallback path for Falcon-H1 (e.g. 0.5B) ( vllm-project#18500 )
Signed-off-by: dhia.rhaiem <[email protected]>
Co-authored-by: younesbelkada <[email protected]>
Co-authored-by: Ilyas Chahed <[email protected]>
Co-authored-by: Jingwei Zuo <[email protected]>
* [MISC] update project urls in pyproject.toml ( vllm-project#18519 )
Signed-off-by: Andy Xie <[email protected]>
* [CI] Fix race condition with StatelessProcessGroup.barrier ( vllm-project#18506 )
Signed-off-by: Russell Bryant <[email protected]>
* Intialize io_thread_pool attribute in the beginning. ( vllm-project#18331 )
Signed-off-by: rabi <[email protected]>
* [Bugfix] Inconsistent token calculation compared to HF in llava family ( vllm-project#18479 )
Signed-off-by: jaycha <[email protected]>
* [BugFix][DP] Send DP wave completion only from `dp_rank==0` ( vllm-project#18502 )
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: kourosh hakhamaneshi <[email protected]>
* [Bugfix][Model] Make Olmo2Model weight loading return loaded weights ( vllm-project#18504 )
Signed-off-by: Shane A <[email protected]>
* [Bugfix] Fix LoRA test ( vllm-project#18518 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Doc] Fix invalid JSON in example args ( vllm-project#18527 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Neuron] Update Dockerfile.neuron to use latest neuron release (2.23) ( vllm-project#18512 )
Signed-off-by: Satyajith Chilappagari <[email protected]>
* Update default neuron config for speculation ( vllm-project#18274 )
Signed-off-by: Elaine Zhao <[email protected]>
Co-authored-by: Shashwat Srijan <[email protected]>
Co-authored-by: Aakash Shetty <[email protected]>
* Order sequence ids + config update to support specifying custom quantization layers ( vllm-project#18279 )
Signed-off-by: Elaine Zhao <[email protected]>
Co-authored-by: Tailin Pan <[email protected]>
Co-authored-by: Rishabh Rajesh <[email protected]>
Co-authored-by: Yishan McNabb <[email protected]>
Co-authored-by: Patrick Lange <[email protected]>
Co-authored-by: Maxwell Goldberg <[email protected]>
Co-authored-by: Aakash Shetty <[email protected]>
* [Bugfix] Fix MRoPE Errors in the Qwen-VL Model When Processing Pure Text ( vllm-project#18526 )
Co-authored-by: 松灵 <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>
* [Bugfix] Add kwargs to RequestOutput __init__ to be forward compatible ( vllm-project#18513 )
Signed-off-by: Linkun <[email protected]>
* [CI/Build] Update bamba test model location ( vllm-project#18544 )
Signed-off-by: Harry Mellor <[email protected]>
* [Doc] Support --stream arg in openai_completion_client.py script ( vllm-project#18388 )
Signed-off-by: googs1025 <[email protected]>
* [Bugfix] Use random hidden states in dummy sampler run ( vllm-project#18543 )
Signed-off-by: Bowen Wang <[email protected]>
* [Doc] Add stream flag for chat completion example ( vllm-project#18524 )
Signed-off-by: calvin chen <[email protected]>
* [BugFix][CPU] Fix x86 SHM distributed module initialization ( vllm-project#18536 )
Signed-off-by: jiang.li <[email protected]>
* [Misc] improve Automatic Prefix Caching example ( vllm-project#18554 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Misc] Call `ndarray.tobytes()` directly instead of `ndarray.data.tobytes()` ( vllm-project#18347 )
Signed-off-by: Lukas Geiger <[email protected]>
* [Bugfix] make `test_openai_schema.py` pass ( vllm-project#18224 )
Signed-off-by: David Xia <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
* [Platform] Move platform check to right place ( vllm-project#18470 )
Signed-off-by: wangxiyuan <[email protected]>
* [Compile][Platform] Make PiecewiseBackend pluggable and extendable ( vllm-project#18076 )
Signed-off-by: Mengqing Cao <[email protected]>
Co-authored-by: youkaichao <[email protected]>
* [Build/CI] Fix CUDA 11.8 build ( vllm-project#17679 )
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
* [Tool] Add NIXL installation script ( vllm-project#18172 )
Signed-off-by: Linkun <[email protected]>
* [V1][Spec Decode][Bugfix] Load quantize weights for EAGLE ( vllm-project#18290 )
* [Frontend][Bug Fix] Update llama4 pythonic jinja template and llama4_pythonic parser ( vllm-project#17917 )
Signed-off-by: Kai Wu <[email protected]>
* [Frontend] [Core] Add Tensorizer support for V1, LoRA adapter serialization and deserialization ( vllm-project#17926 )
Signed-off-by: Sanger Steel <[email protected]>
* [AMD] [P/D] Compute num gpus for ROCm correctly in run_accuracy_test.sh ( vllm-project#18568 )
Signed-off-by: Randall Smith <[email protected]>
* Re-submit: Fix: Proper RGBA -> RGB conversion for PIL images. ( vllm-project#18569 )
Signed-off-by: Chenheli Hua <[email protected]>
* [V1][Spec Decoding] Use model_loader.get_model() to load models ( vllm-project#18273 )
Signed-off-by: Mark McLoughlin <[email protected]>
* Enable hybrid attention models for Transformers backend ( vllm-project#18494 )
Signed-off-by: Harry Mellor <[email protected]>
* [Misc] refactor: simplify input validation and num_requests handling in _convert_v1_inputs ( vllm-project#18482 )
Signed-off-by: googs1025 <[email protected]>
* [BugFix] Increase TP execute_model timeout ( vllm-project#18558 )
Signed-off-by: Nick Hill <[email protected]>
* [Bugfix] Set `KVTransferConfig.engine_id` in post_init ( vllm-project#18576 )
Signed-off-by: Linkun Chen <[email protected]>
* [Spec Decode] Make EAGLE3 draft token ID mapping optional ( vllm-project#18488 )
Signed-off-by: Benjamin Chislett <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
* [Neuron] Remove bypass on EAGLEConfig and add a test ( vllm-project#18514 )
Signed-off-by: Elaine Zhao <[email protected]>
* [Bugfix][Benchmarks] Fix a benchmark of deepspeed-mii backend to use api_key ( vllm-project#17291 )
Signed-off-by: Teruaki Ishizaki <[email protected]>
* [Misc] Replace `cuda` hard code with `current_platform` ( vllm-project#16983 )
Signed-off-by: shen-shanshan <[email protected]>
* [Hardware] correct method signatures for HPU,ROCm,XPU ( vllm-project#18551 )
Signed-off-by: Andy Xie <[email protected]>
* [V1] [Bugfix] eagle bugfix and enable correct lm_head for multimodal ( vllm-project#18034 )
Signed-off-by: Ronald Xu <[email protected]>
* [Feature]Add async tensor parallelism using compilation pass ( vllm-project#17882 )
Signed-off-by: cascade812 <[email protected]>
* [Doc] Update quickstart and install for cu128 using `--torch-backend=auto` ( vllm-project#18505 )
Signed-off-by: mgoin <[email protected]>
* [Feature][V1]: suupports cached_tokens in response usage ( vllm-project#18149 )
Co-authored-by: simon-mo <[email protected]>
* [Bugfix] Add half type support in reshape_and_cache_cpu_impl on x86 cpu platform ( vllm-project#18430 )
Signed-off-by: Yuqi Zhang <[email protected]>
Co-authored-by: Yuqi Zhang <[email protected]>
* Migrate docs from Sphinx to MkDocs ( vllm-project#18145 )
Signed-off-by: Harry Mellor <[email protected]>
* Revert "[V1] [Bugfix] eagle bugfix and enable correct lm_head for multimodal ( vllm-project#18034 )" ( vllm-project#18600 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix][Model] Fix baichuan model loader for tp ( vllm-project#18597 )
Signed-off-by: Mengqing Cao <[email protected]>
* [V0][Bugfix] Fix parallel sampling performance regression when guided decoding is enabled ( vllm-project#17731 )
Signed-off-by: Madeesh Kannan <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
* Add myself as docs code owner ( vllm-project#18605 )
Signed-off-by: Harry Mellor <[email protected]>
* [Hardware][CPU] Update intel_extension_for_pytorch 2.7.0 and move to `requirements/cpu.txt` ( vllm-project#18542 )
Signed-off-by: Kay Yan <[email protected]>
* [CI] fix kv_cache_type argument ( vllm-project#18594 )
Signed-off-by: Andy Xie <[email protected]>
* [Doc] Fix indent of contributing to vllm ( vllm-project#18611 )
Signed-off-by: Zerohertz <[email protected]>
* Replace `{func}` with mkdocs style links ( vllm-project#18610 )
Signed-off-by: Harry Mellor <[email protected]>
* [CI/Build] Fix V1 flag being set in entrypoints tests ( vllm-project#18598 )
Signed-off-by: DarkLight1337 <[email protected]>
* Fix examples with code blocks in docs ( vllm-project#18609 )
Signed-off-by: Harry Mellor <[email protected]>
* [Bugfix] Fix transformers model impl ignored for mixtral quant ( vllm-project#18602 )
Signed-off-by: Tristan Leclercq <[email protected]>
* Include private attributes in API documentation ( vllm-project#18614 )
Signed-off-by: Harry Mellor <[email protected]>
* [Misc] add Haystack integration ( vllm-project#18601 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Bugfix][Build/CI] Fixup CUDA compiler version check for CUDA_SUPPORTED_ARCHS ( vllm-project#18579 )
* [Doc] Fix markdown list indentation for MkDocs rendering ( vllm-project#18620 )
Signed-off-by: Zerohertz <[email protected]>
* [Doc] Use a different color for the announcement ( vllm-project#18616 )
Signed-off-by: DarkLight1337 <[email protected]>
* Refactor pplx init logic to make it modular (prepare for deepep) ( vllm-project#18200 )
Signed-off-by: youkaichao <[email protected]>
* Fix figures in design doc ( vllm-project#18612 )
Signed-off-by: Harry Mellor <[email protected]>
* [Docs] Change mkdocs to not use directory urls ( vllm-project#18622 )
Signed-off-by: mgoin <[email protected]>
* [v1] Redo "Support multiple KV cache groups in GPU model runner ( vllm-project#17945 )" ( vllm-project#18593 )
Signed-off-by: Chen Zhang <[email protected]>
* [Doc] fix list formatting ( vllm-project#18624 )
Signed-off-by: David Xia <[email protected]>
* [Doc] Fix top-level API links/docs ( vllm-project#18621 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Avoid documenting dynamic / internal modules ( vllm-project#18626 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Fix broken links and unlinked docs, add shortcuts to home sidebar ( vllm-project#18627 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] Support Deepseek MTP ( vllm-project#18435 )
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: YaoJiayi <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
* Use prebuilt FlashInfer x86_64 PyTorch 2.7 CUDA 12.8 wheel for CI ( vllm-project#18537 )
Signed-off-by: Huy Do <[email protected]>
* [CI] Enable test_initialization to run on V1 ( vllm-project#16736 )
Signed-off-by: mgoin <[email protected]>
* [Doc] Update references to doc files ( vllm-project#18637 )
Signed-off-by: DarkLight1337 <[email protected]>
* [ModelOpt] Introduce VLLM_MAX_TOKENS_PER_EXPERT_FP4_MOE env var to control blockscale tensor allocation ( vllm-project#18160 )
Signed-off-by: Pavani Majety <[email protected]>
* [Bugfix] Migrate to REGEX Library to prevent catastrophic backtracking ( vllm-project#18454 )
Signed-off-by: Crucifixion-Fxl <[email protected]>
Co-authored-by: Crucifixion-Fxl <[email protected]>
* [Bugfix][Nixl] Fix Preemption Bug ( vllm-project#18631 )
Signed-off-by: [email protected] <[email protected]>
* config.py: Clarify that only local GGUF checkpoints are supported. ( vllm-project#18623 )
Signed-off-by: Mathieu Bordere <[email protected]>
* FIX MOE issue in AutoRound format ( vllm-project#18586 )
Signed-off-by: wenhuach21 <[email protected]>
* [V1][Spec Decode] Small refactors to improve eagle bookkeeping performance ( vllm-project#18424 )
Signed-off-by: qizixi <[email protected]>
* [Frontend] improve vllm serve --help display ( vllm-project#18643 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Model] Add support for Qwen2.5-Omni-7B-AWQ (Qwen2_5OmniForConditionalGeneration) ( vllm-project#18647 )
* [V1][Spec Decode] Support multi-layer eagle draft model ( vllm-project#18030 )
Signed-off-by: qizixi <[email protected]>
* [Doc] Update README links, mark external links ( vllm-project#18635 )
Signed-off-by: DarkLight1337 <[email protected]>
* [MISC][pre-commit] Add pre-commit check for triton import ( vllm-project#17716 )
Signed-off-by: Mengqing Cao <[email protected]>
* [Doc] Fix indentation problems in V0 Paged Attention docs ( vllm-project#18659 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Add community links ( vllm-project#18657 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Model] use AutoWeightsLoader for gpt2 ( vllm-project#18625 )
Signed-off-by: zt2370 <[email protected]>
* [Doc] Reorganize user guide ( vllm-project#18661 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI/Build] `chmod +x` to `cleanup_pr_body.sh` ( vllm-project#18650 )
Signed-off-by: DarkLight1337 <[email protected]>
* [MISC] typo fix and clean import ( vllm-project#18664 )
Signed-off-by: Andy Xie <[email protected]>
* [BugFix] Fix import error for fused_moe ( vllm-project#18642 )
Signed-off-by: wangxiyuan <[email protected]>
* [CI] enforce import regex instead of re ( vllm-project#18665 )
Signed-off-by: Aaron Pham <[email protected]>
* fix(regression): clone from reference items ( vllm-project#18662 )
Signed-off-by: Aaron Pham <[email protected]>
* [CI/Build] fix permission denied issue ( vllm-project#18645 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [BugFix][Spec Decode] Improve Prefix Caching Logic in Speculative Decoding ( vllm-project#18668 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [V1] Fix _pickle.PicklingError: Can't pickle <class 'transformers_modules.deepseek-ai.DeepSeek-V2-Lite... ( vllm-project#18640 )
Signed-off-by: Seiji Eicher <[email protected]>
* [MISC] correct signature for LoaderFunction ( vllm-project#18670 )
Signed-off-by: Andy Xie <[email protected]>
* [Misc]Replace `cuda` hard code with `current_platform` in Ray ( vllm-project#14668 )
Signed-off-by: noemotiovon <[email protected]>
* [Misc][ModelScope] Change to use runtime VLLM_USE_MODELSCOPE ( vllm-project#18655 )
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [VLM] Initialize video input support for InternVL models ( vllm-project#18499 )
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* Speed up the `kernels/quantization/` tests ( vllm-project#18669 )
Signed-off-by: mgoin <[email protected]>
* [BUGFIX] catch subclass first for try...except ( vllm-project#18672 )
Signed-off-by: Andy Xie <[email protected]>
* [Misc] Reduce logs on startup ( vllm-project#18649 )
Signed-off-by: DarkLight1337 <[email protected]>
* [doc] fix broken links ( vllm-project#18671 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [doc] improve readability ( vllm-project#18675 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Bugfix] Fix cpu usage and cache hit stats reporting on cpu environment ( vllm-project#18674 )
Signed-off-by: zzzyq <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [CI/build] fix no regex ( vllm-project#18676 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Misc] small improve ( vllm-project#18680 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Bugfix] Fix profiling dummy data for Pixtral ( vllm-project#18677 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Core][Multimodal] Convert PIL Image to array without data copy when hashing ( vllm-project#18682 )
Signed-off-by: Lukas Geiger <[email protected]>
* [CI/Build][Doc] Update `gte-Qwen2-1.5B-instruct` usage ( vllm-project#18683 )
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [Misc] Fixed the abnormally high TTFT issue in the PD disaggregation example ( vllm-project#18644 )
Signed-off-by: zhaohaidao <[email protected]>
Signed-off-by: zhaohaiyuan <[email protected]>
Co-authored-by: zhaohaiyuan <[email protected]>
* refactor: simplify request handler, use positive condition check for handler assignment ( vllm-project#18690 )
Signed-off-by: googs1025 <[email protected]>
* [Bugfix] Fix the lm_head in gpt_bigcode in lora mode ( vllm-project#6357 )
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
* [CI] add missing argument ( vllm-project#18694 )
Signed-off-by: Andy Xie <[email protected]>
* [GH] Add issue template for reporting CI failures ( vllm-project#18696 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Fix issue template format ( vllm-project#18699 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix Mistral-format models with sliding window ( vllm-project#18693 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI/Build] Replace `math.isclose` with `pytest.approx` ( vllm-project#18703 )
Signed-off-by: DarkLight1337 <[email protected]>
* [CI] fix dump_input for str type ( vllm-project#18697 )
Signed-off-by: Andy Xie <[email protected]>
* [Model] Add support for YARN in NemotronNAS models ( vllm-project#18427 )
Signed-off-by: Nave Assaf <[email protected]>
* [CI/Build] Split pooling and generation extended language models tests in CI ( vllm-project#18705 )
Signed-off-by: Isotr0py <[email protected]>
* [Hardware][Intel-Gaudi] [CI/Build] Add tensor parallel size = 2 test to HPU CI ( vllm-project#18709 )
Signed-off-by: Lukasz Durejko <[email protected]>
* [Misc] add AutoGen integration ( vllm-project#18712 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Bugfix]: handle hf-xet CAS error when loading Qwen3 weights in vLLM ( vllm-project#18701 )
* [Doc] Improve API docs ( vllm-project#18713 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Move examples and further reorganize user guide ( vllm-project#18666 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix Llama GGUF initialization ( vllm-project#18717 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1][Sampler] Improve performance of FlashInfer sampling by sampling logits instead of probs ( vllm-project#18608 )
* Convert `examples` to `ruff-format` ( vllm-project#18400 )
Signed-off-by: Harry Mellor <[email protected]>
* [Model][Gemma3] Simplify image input validation ( vllm-project#18710 )
Signed-off-by: Lukas Geiger <[email protected]>
* [Misc] improve web section group title display ( vllm-project#18684 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [V1][Quantization] Add CUDA graph compatible v1 GGUF support ( vllm-project#18646 )
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
* [Model][Gemma3] Cast image pixel values already on CPU ( vllm-project#18732 )
Signed-off-by: Lukas Geiger <[email protected]>
* [FEAT] [ROCm] Upgrade AITER Fused MoE kernels. ( vllm-project#18271 )
Signed-off-by: vllmellm <[email protected]>
* [Doc] Update OOT model docs ( vllm-project#18742 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Update reproducibility doc and example ( vllm-project#18741 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] improve docs ( vllm-project#18734 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* feat(rocm-support): support mamba2 on rocm ( vllm-project#18565 )
Signed-off-by: Islam Almersawi <[email protected]>
Co-authored-by: Islam Almersawi <[email protected]>
* [Hardware][Intel-Gaudi] [CI/Build] Fix multiple containers using the same name in run-hpu-test.sh ( vllm-project#18752 )
Signed-off-by: Lukasz Durejko <[email protected]>
* [Doc] cleanup deprecated flag for doc ( vllm-project#18715 )
Signed-off-by: calvin chen <[email protected]>
* Minor fix about MooncakeStoreConnector ( vllm-project#18721 )
Signed-off-by: baoloongmao <[email protected]>
* [Build] fix cpu build missing libtbbmalloc.so ( vllm-project#18744 )
Signed-off-by: Kebe <[email protected]>
* [BUG FIX] minicpm ( vllm-project#18739 )
Signed-off-by: huangyuxiang03 <[email protected]>
Co-authored-by: huangyuxiang03 <[email protected]>
* [Doc] Convert Sphinx directives ( `{class}`, `{meth}`, `{attr}`, ...) to MkDocs format for better documentation linking ( vllm-project#18663 )
Signed-off-by: Zerohertz <[email protected]>
* [CI/Build] Remove imports of built-in `re` ( vllm-project#18750 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1][Metrics] Add API for accessing in-memory Prometheus metrics ( vllm-project#17010 )
Signed-off-by: Mark McLoughlin <[email protected]>
* Disable prefix cache by default for benchmark ( vllm-project#18639 )
Signed-off-by: cascade812 <[email protected]>
* optimize get_kv_cache_torch_dtype ( vllm-project#18531 )
Signed-off-by: idellzheng <[email protected]>
* [Core] Automatically cast multi-modal input dtype ( vllm-project#18756 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Mistral tool calling when content is list ( vllm-project#18729 )
Signed-off-by: mgoin <[email protected]>
---------
Signed-off-by: Satyajith Chilappagari <[email protected]>
Signed-off-by: Lucia Fang <[email protected]>
Signed-off-by: Liangfu Chen <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Nan2018 <[email protected]>
Signed-off-by: rand-fly <[email protected]>
Signed-off-by: reidliu41 <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: 汪志鹏 <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: calvin chen <[email protected]>
Signed-off-by: haochengxia <[email protected]>
Signed-off-by: Dilip Gowda Bhagavan <[email protected]>
Signed-off-by: Michael Goin <[email protected]>
Signed-off-by: Gregory Shtrasberg <[email protected]>
Signed-off-by: Bill Nell <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: wwl2755 <[email protected]>
Signed-off-by: nicklucche <[email protected]>
Signed-off-by: Kebe <[email protected]>
Signed-off-by: Yong Hoon Shin <[email protected]>
Signed-off-by: rabi <[email protected]>
Signed-off-by: dhia.rhaiem <[email protected]>
Signed-off-by: giantcroc <[email protected]>
Signed-off-by: Hosang Yoon <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: vllmellm <[email protected]>
Signed-off-by: Sebastian Schönnenbeck <[email protected]>
Signed-off-by: Andy Xie <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: jaycha <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: Shane A <[email protected]>
Signed-off-by: Elaine Zhao <[email protected]>
Signed-off-by: Linkun <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: googs1025 <[email protected]>
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: jiang.li <[email protected]>
Signed-off-by: Lukas Geiger <[email protected]>
Signed-off-by: David Xia <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Kai Wu <[email protected]>
Signed-off-by: Sanger Steel <[email protected]>
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Chenheli Hua <[email protected]>
Signed-off-by: Linkun Chen <[email protected]>
Signed-off-by: Benjamin Chislett <[email protected]>
Signed-off-by: Teruaki Ishizaki <[email protected]>
Signed-off-by: shen-shanshan <[email protected]>
Signed-off-by: Ronald Xu <[email protected]>
Signed-off-by: cascade812 <[email protected]>
Signed-off-by: Yuqi Zhang <[email protected]>
Signed-off-by: Madeesh Kannan <[email protected]>
Signed-off-by: Kay Yan <[email protected]>
Signed-off-by: Zerohertz <[email protected]>
Signed-off-by: Tristan Leclercq <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: YaoJiayi <[email protected]>
Signed-off-by: Huy Do <[email protected]>
Signed-off-by: Pavani Majety <[email protected]>
Signed-off-by: Crucifixion-Fxl <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Mathieu Bordere <[email protected]>
Signed-off-by: wenhuach21 <[email protected]>
Signed-off-by: qizixi <[email protected]>
Signed-off-by: zt2370 <[email protected]>
Signed-off-by: Aaron Pham <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Seiji Eicher <[email protected]>
Signed-off-by: noemotiovon <[email protected]>
Signed-off-by: zzzyq <[email protected]>
Signed-off-by: zhaohaidao <[email protected]>
Signed-off-by: zhaohaiyuan <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Nave Assaf <[email protected]>
Signed-off-by: Lukasz Durejko <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Islam Almersawi <[email protected]>
Signed-off-by: baoloongmao <[email protected]>
Signed-off-by: huangyuxiang03 <[email protected]>
Signed-off-by: idellzheng <[email protected]>
Co-authored-by: sunyicode0012 <[email protected]>
Co-authored-by: Gong Shufan <[email protected]>
Co-authored-by: Satyajith Chilappagari <[email protected]>
Co-authored-by: Lucia Fang <[email protected]>
Co-authored-by: Lucia (Lu) Fang <[email protected]>
Co-authored-by: Liangfu Chen <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Nan Qin <[email protected]>
Co-authored-by: Andrew Sansom <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Random Fly <[email protected]>
Co-authored-by: Reid <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: 汪志鹏 <[email protected]>
Co-authored-by: wang.yuqi <[email protected]>
Co-authored-by: 燃 <[email protected]>
Co-authored-by: 松灵 <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Calvin Chen <[email protected]>
Co-authored-by: Percy <[email protected]>
Co-authored-by: Dilip Gowda Bhagavan <[email protected]>
Co-authored-by: bnellnm <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: wwl2755 <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: Kebe <[email protected]>
Co-authored-by: Yong Hoon Shin <[email protected]>
Co-authored-by: Rabi Mishra <[email protected]>
Co-authored-by: Dhia Eddine Rhaiem <[email protected]>
Co-authored-by: younesbelkada <[email protected]>
Co-authored-by: Ilyas Chahed <[email protected]>
Co-authored-by: Jingwei Zuo <[email protected]>
Co-authored-by: GiantCroc <[email protected]>
Co-authored-by: Hyogeun Oh (오효근) <[email protected]>
Co-authored-by: Hosang <[email protected]>
Co-authored-by: Mark McLoughlin <[email protected]>
Co-authored-by: vllmellm <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
Co-authored-by: Sebastian Schoennenbeck <[email protected]>
Co-authored-by: Ning Xie <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: youngrok cha <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: kourosh hakhamaneshi <[email protected]>
Co-authored-by: Shane A <[email protected]>
Co-authored-by: aws-elaineyz <[email protected]>
Co-authored-by: Shashwat Srijan <[email protected]>
Co-authored-by: Aakash Shetty <[email protected]>
Co-authored-by: Tailin Pan <[email protected]>
Co-authored-by: Rishabh Rajesh <[email protected]>
Co-authored-by: Yishan McNabb <[email protected]>
Co-authored-by: Patrick Lange <[email protected]>
Co-authored-by: Maxwell Goldberg <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: lkchen <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: CYJiang <[email protected]>
Co-authored-by: Bowen Wang <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: Lukas Geiger <[email protected]>
Co-authored-by: David Xia <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Ekagra Ranjan <[email protected]>
Co-authored-by: Kai Wu <[email protected]>
Co-authored-by: Sanger Steel <[email protected]>
Co-authored-by: rasmith <[email protected]>
Co-authored-by: Chenheli Hua <[email protected]>
Co-authored-by: Benjamin Chislett <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: Teruaki Ishizaki <[email protected]>
Co-authored-by: Shanshan Shen <[email protected]>
Co-authored-by: RonaldBXu <[email protected]>
Co-authored-by: cascade <[email protected]>
Co-authored-by: Chauncey <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: Yuqi Zhang <[email protected]>
Co-authored-by: Yuqi Zhang <[email protected]>
Co-authored-by: Madeesh Kannan <[email protected]>
Co-authored-by: Kay Yan <[email protected]>
Co-authored-by: Tristan Leclercq <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: Jiayi Yao <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Huy Do <[email protected]>
Co-authored-by: Pavani Majety <[email protected]>
Co-authored-by: Feng XiaoLong <[email protected]>
Co-authored-by: Crucifixion-Fxl <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Mathieu Borderé <[email protected]>
Co-authored-by: Wenhua Cheng <[email protected]>
Co-authored-by: qizixi <[email protected]>
Co-authored-by: Yuanhao WU <[email protected]>
Co-authored-by: ztang2370 <[email protected]>
Co-authored-by: Aaron Pham <[email protected]>
Co-authored-by: Seiji Eicher <[email protected]>
Co-authored-by: Chenguang Li <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: AlexZhao <[email protected]>
Co-authored-by: zhaohaiyuan <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: Naveassaf <[email protected]>
Co-authored-by: Łukasz Durejko <[email protected]>
Co-authored-by: dylan <[email protected]>
Co-authored-by: almersawi <[email protected]>
Co-authored-by: Islam Almersawi <[email protected]>
Co-authored-by: Łukasz Durejko <[email protected]>
Co-authored-by: maobaolong <[email protected]>
Co-authored-by: Shawn Huang <[email protected]>
Co-authored-by: huangyuxiang03 <[email protected]>
Co-authored-by: chunxiaozheng <[email protected]>
minpeter pushed a commit to minpeter/vllm that referenced this pull request Jun 24, 2025 [V0][Bugfix] Fix parallel sampling performance regression when guided decoding is enabled ( vllm-project#17731 ) … ac503be
Signed-off-by: Madeesh Kannan <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Signed-off-by: minpeter <[email protected]>
|
2025-09-07 17:51:04
|
67da5720d4ed2aa1f615ec812031f4f3753b3f62
|
https://github.com/vllm-project/vllm/pull/17973
| true | true | true | true |
LM_EVAL: lm_eval, lm_eval, lm_eval | PERF: req/s, optimization, optimization | SERVING: vllm serve, serve | TEST: test, test, test
|
Copy link Contributor vadiklyutiy commented May 12, 2025 • edited by github-actions bot
Description of Problem
In Qwen2.5-VL, the constant tensors for rotary position embedding are created at the beginning of the model's forward. Before this PR there was a mix of CPU and GPU tensors, with small pieces of data transferred back and forth between host and device. In the profile (screenshot omitted), the pink tmp region is the beginning of Qwen2_5_VisionTransformer.forward() before the main transformer starts.
Solution
This PR: refactors the code so that all tensors needed to build the constant mrope data live on the CPU (similar to how mrope works for the language part of the models); regroups the calculation per grid_thw row and caches the results. The profile after the change (screenshot omitted) no longer shows this overhead.
Performance results
Run Qwen2.5-VL-3B on H100 with the following command line: vllm serve Qwen/Qwen2.5-VL-3B-Instruct --disable-log-requests --max-num-seqs 1024 --block-size 16 --max-num-batched-tokens 2048
Construction of the constant mrope tensors itself sped up more than 5x. E2E was measured with https://github.com/CentML/flexible-inference-bench : fib benchmark -rps 50 --input-token-distribution uniform 250 300 --output-token-distribution uniform 150 250 --num-of-imgs-per-req 1 --img-ratios-per-req 512x512 -n 1000 --base-url http://localhost:8000 --endpoint v1/chat/completions --backend openai-chat
The above runs 1000 requests at 50 reqs/sec, and every request carries one 512x512 image. Average reqs/s was measured over 11 runs, taking the median.
Before: 25.99 reqs/s After: 26.63 reqs/s Speed up: 2.46%
Correctness
Run lm_eval with chartqa and mmmu: lm_eval --model vllm-vlm --model_args "pretrained=Qwen/Qwen2.5-VL-3B-Instruct,model=Qwen/Qwen2.5-VL-3B-Instruct" --tasks mmmu_val,chartqa --batch_size 32 --apply_chat_template
Before
| Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr|
|---------------------------------------|------:|------|-----:|-----------------|---|-----:|---|-----:|
|chartqa | 0|none | 0|anywhere_accuracy|↑ |0.8072|± |0.0079|
| | |none | 0|exact_match |↑ |0.5712|± |0.0099|
| | |none | 0|relaxed_accuracy |↑ |0.8040|± |0.0079|
| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|
|--------------------------------|------:|------|------|------|---|-----:|---|-----:|
|mmmu_val | 0|none | |acc |↑ |0.4567|± |0.0159|
| - Art and Design | 0|none | |acc |↑ |0.5583|± |0.0437|
| - Business | 0|none | |acc |↑ |0.3733|± |0.0395|
| - Health and Medicine | 0|none | |acc |↑ |0.5267|± |0.0406|
| - Humanities and Social Science| 0|none | |acc |↑ |0.7000|± |0.0412|
| - Science | 0|none | |acc |↑ |0.3267|± |0.0386|
| - Tech and Engineering | 0|none | |acc |↑ |0.3619|± |0.0326|
After
| Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr|
|---------------------------------------|------:|------|-----:|-----------------|---|-----:|---|-----:|
|chartqa | 0|none | 0|anywhere_accuracy|↑ |0.8032|± |0.0080|
| | |none | 0|exact_match |↑ |0.5756|± |0.0099|
| | |none | 0|relaxed_accuracy |↑ |0.8016|± |0.0080|
| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|
|--------------------------------|------:|------|------|------|---|-----:|---|-----:|
|mmmu_val | 0|none | |acc |↑ |0.4544|± |0.0159|
| - Art and Design | 0|none | |acc |↑ |0.5583|± |0.0443|
| - Business | 0|none | |acc |↑ |0.3733|± |0.0395|
| - Health and Medicine | 0|none | |acc |↑ |0.5067|± |0.0407|
| - Humanities and Social Science| 0|none | |acc |↑ |0.7083|± |0.0411|
| - Science | 0|none | |acc |↑ |0.3267|± |0.0386|
| - Tech and Engineering | 0|none | |acc |↑ |0.3619|± |0.0327|
Speed up Qwen2.5-VL model by speed up rotary position embedding const… … 7eec475 … Tensors creation
Signed-off-by: Vadim Gimpelson <[email protected]> Copy link github-actions bot commented May 12, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . pre-commit fixes … ab81b1d Signed-off-by: Vadim Gimpelson <[email protected]> simon-mo approved these changes May 14, 2025 View reviewed changes Copy link Collaborator simon-mo left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment thank you for the optimization, please run a mmmu or chartqa evaluation to verify the correctness of the changes. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 vadiklyutiy reacted with thumbs up emoji All reactions 👍 1 reaction vllm/model_executor/models/qwen2_5_vl.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/model_executor/models/qwen2_5_vl.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . revome unnecessary coments … 5d0b6e2 Signed-off-by: Vadim Gimpelson <[email protected]> Copy link Contributor Author vadiklyutiy commented May 15, 2025 thank you for the optimization, please run a mmmu or chartqa evaluation to verify the correctness of the changes. I added to description results of mmmu and chartqa "before" and "after" 👍 1 simon-mo reacted with thumbs up emoji 🚀 1 simon-mo reacted with rocket emoji All reactions 👍 1 reaction 🚀 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . simon-mo enabled auto-merge (squash) May 15, 2025 01:10 github-actions bot added
the ready ONLY add when PR is ready to merge/full CI is needed label May 15, 2025 WoosukKwon disabled auto-merge May 15, 2025 01:37
Copy link Collaborator WoosukKwon commented May 15, 2025 @imkero Could you please take a final look? I'm not sure if this overlaps with #14684
Copy link mergify bot commented May 15, 2025 This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @vadiklyutiy . https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
mergify bot added
the needs-rebase label May 15, 2025 Merge branch 'main' into rope-const-creation-speedup 20808fa mergify bot removed
the needs-rebase label May 16, 2025 Copy link Collaborator WoosukKwon commented May 16, 2025 @vadiklyutiy QQ: Why does this PR change the accuracy (though the diff is small)? I thought the PR doesn't change the computation at all. Can we somehow strictly match the accuracy? I'm a bit careful about this because we've seen a few bugs regarding m-rope. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator simon-mo commented May 16, 2025 @WoosukKwon these tests are not deterministic due to temperature, I read values and apply the stderr; seems no change to accuracy to me. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor imkero commented May 16, 2025 • edited Loading Uh oh! There was an error while loading. Please reload this page . The idea of this PR is similar to #14684 . And it is verified by both #14684 and this PR that such approach will gain some performance improvement. If the inference result slightly changed in this PR, maybe we should compare the generated m-rope pos seq and window_index seq output with those generated by main branch. Also check if we are testing with greedy decoding. By the way I suggest that we can keep image_grid_thw and video_grid_thw in CPU all the time by modifying vllm/multimodal/inputs.py::MultiModalKwargs::as_kwargs (here vLLM move all mm data to device by default, and still needed to move them back to host later) @staticmethod
def as_kwargs(
batched_inputs: BatchedTensorInputs,
*,
device: torch.types.Device,
) -> BatchedTensorInputs:
json_inputs = cast(JSONTree[torch.Tensor], batched_inputs) + # keep Qwen2/2.5-VL's image_grid_thw and video_grid_thw in cpu + image_grid_thw = None + video_grid_thw = None + if isinstance(json_inputs, dict): + image_grid_thw = json_inputs.pop("image_grid_thw", None) + video_grid_thw = json_inputs.pop("video_grid_thw", None) json_mapped = json_map_leaves(
lambda x: x.to(device, non_blocking=True),
json_inputs,
) + if image_grid_thw is not None: + json_mapped["image_grid_thw"] = image_grid_thw # type: ignore + if video_grid_thw is not None: + json_mapped["video_grid_thw"] = video_grid_thw # type: ignore return cast(BatchedTensorInputs, json_mapped) All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator WoosukKwon commented May 16, 2025 @simon-mo @imkero Thanks for the explanation. Ok let's merge this PR for v0.9.0 and further improve it with @imkero 's suggestion All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Hide details View details WoosukKwon merged commit 67da572 into vllm-project : main May 16, 2025 65 checks passed Uh oh! There was an error while loading. Please reload this page . Copy link Contributor Author vadiklyutiy commented May 16, 2025 • edited Loading Uh oh! There was an error while loading. Please reload this page . @WoosukKwon As @simon-mo said lm_eval isn't deterministic. To dispel doubts in correctness I wrote the following test that compare "before" and "after" implementations. In test I took Qwen2_5_VisionTransformer before and after and copy to test. Clean both to calculate only rotary_pos_emb , window_index , cu_window_seqlens , and cu_seqlens . Test takes arbitrary grid_thw , run both version and compare results. Test accept following args --samples number of different grid to test --max-t max value of t --max-h max value of h --max-w max value of w --max-images - len(grid_thw) The following runs successfully passed: $ python test_qwen25_vl_transformer.py --mass-test --samples 10000 --max-t 50 --max-h 100 --max-w 100 --max-images 5 $python test_qwen25_vl_transformer.py --mass-test --samples 10000 --max-t 100 --max-h 250 --max-w 250 --max-images 10 Hope that resolved worries about correctness Test source import torch import torch . nn as nn import torch . nn . functional as F from functools import lru_cache import argparse import numpy as np import random import tqdm import sys class TestFailureException ( Exception ): """Exception raised when the test results don't match between old and new implementations.""" pass class Qwen2_5_VisionRotaryEmbedding ( nn . Module ): def __init__ ( self , dim : int , theta : float = 10000.0 ) -> None : super (). __init__ () self . dim = dim self . theta = theta inv_freq = 1.0 / ( theta ** ( torch . arange ( 0 , dim , 2 , dtype = torch . float , device = 'cpu' ) / dim )) self . register_buffer ( "inv_freq" , inv_freq , persistent = False ) self . _seq_len_cached = 0 self . _freqs_cached = None def update_freqs_cache ( self , seqlen : int ) -> None : if seqlen > self . _seq_len_cached : seqlen *= 2 self . _seq_len_cached = seqlen self . inv_freq = 1.0 / ( self . theta ** ( torch . arange ( 0 , self . dim , 2 , dtype = torch . float , device = self . inv_freq . device ) / self . dim )) seq = torch . arange ( seqlen , device = self . inv_freq . device , dtype = self . inv_freq . dtype ) freqs = torch . outer ( seq , self . inv_freq ) self . _freqs_cached = freqs def forward ( self , seqlen : int ) -> torch . Tensor : self . update_freqs_cache ( seqlen ) return self . _freqs_cached [: seqlen ] class Qwen2_5_VisionTransformer_New ( nn . Module ): def __init__ ( self , hidden_size = 1152 , num_heads = 16 , window_size = 32 , patch_size = 14 , spatial_merge_size = 2 , fullatt_block_indexes = [ 0 , 1 , 2 , 3 , 8 , 9 , 10 , 11 , 16 , 17 , 18 , 19 , 24 , 25 , 26 , 27 ],
) -> None : super (). __init__ () self . hidden_size = hidden_size self . num_heads = num_heads self . window_size = window_size self . patch_size = patch_size self . spatial_merge_size = spatial_merge_size self . fullatt_block_indexes = fullatt_block_indexes self . spatial_merge_unit = self . spatial_merge_size ** 2 head_dim = self . hidden_size // self . num_heads self . rotary_pos_emb = Qwen2_5_VisionRotaryEmbedding ( head_dim // 2 ) @ property def dtype ( self ) -> torch . dtype : return torch . float32 @ property def device ( self ) -> torch . device : return torch . device ( 'cpu' ) def rotary_pos_emb_thw ( self , t , h , w ): hpos_ids = torch . arange ( h ). unsqueeze ( 1 ). expand ( - 1 , w ) wpos_ids = torch . arange ( w ). unsqueeze ( 0 ). expand ( h , - 1 ) hpos_ids = hpos_ids . reshape ( h // self . spatial_merge_size , self . spatial_merge_size , w // self . spatial_merge_size , self . spatial_merge_size ,
). permute ( 0 , 2 , 1 , 3 ). flatten () wpos_ids = wpos_ids . reshape ( h // self . spatial_merge_size , self . spatial_merge_size , w // self . spatial_merge_size , self . spatial_merge_size ,
). permute ( 0 , 2 , 1 , 3 ). flatten () pos_ids = torch . stack ([ hpos_ids , wpos_ids ], dim = - 1 ). repeat ( t , 1 ) max_size = max ( h , w ) rotary_pos_emb_full = self . rotary_pos_emb ( max_size ) rotary_pos_emb = rotary_pos_emb_full [ pos_ids ]. flatten ( 1 ) rotary_pos_emb = rotary_pos_emb . reshape ( rotary_pos_emb . shape [ 0 ] // self . spatial_merge_unit , self . spatial_merge_unit , - 1 ) return rotary_pos_emb def get_window_index_thw ( self , grid_t , grid_h , grid_w ): vit_merger_window_size = ( self . window_size // self . spatial_merge_size // self . patch_size ) llm_grid_h = grid_h // self . spatial_merge_size llm_grid_w = grid_w // self . spatial_merge_size index = torch . arange ( grid_t * llm_grid_h * llm_grid_w ). reshape ( grid_t , llm_grid_h , llm_grid_w ) pad_h = vit_merger_window_size - llm_grid_h % vit_merger_window_size pad_w = vit_merger_window_size - llm_grid_w % vit_merger_window_size num_windows_h = ( llm_grid_h + pad_h ) // vit_merger_window_size num_windows_w = ( llm_grid_w + pad_w ) // vit_merger_window_size index_padded = F . pad ( index , ( 0 , pad_w , 0 , pad_h ), 'constant' , - 100 ) index_padded = index_padded . reshape ( grid_t , num_windows_h , vit_merger_window_size , num_windows_w , vit_merger_window_size ) index_padded = index_padded . permute ( 0 , 1 , 3 , 2 , 4 ). reshape ( grid_t , num_windows_h * num_windows_w , vit_merger_window_size , vit_merger_window_size ) seqlens = ( index_padded != - 100 ). sum ([ 2 , 3 ]). reshape ( - 1 ) index_padded = index_padded . reshape ( - 1 ) index_new = index_padded [ index_padded != - 100 ] cu_seqlens_tmp = seqlens . cumsum ( 0 ) * self . spatial_merge_unit cu_seqlens_tmp = cu_seqlens_tmp . to ( dtype = torch . int32 ) cu_seqlens_tmp = torch . unique_consecutive ( cu_seqlens_tmp ) return index_new , cu_seqlens_tmp @ lru_cache ( maxsize = 1024 ) # noqa: B019 def get_rope_by_thw ( self , t , h , w ): window_index_thw , cu_seqlens_window_thw = self . get_window_index_thw ( t , h , w ) rotary_pos_emb_thw = self . rotary_pos_emb_thw ( t , h , w ) rotary_pos_emb_thw = rotary_pos_emb_thw [ window_index_thw , :, :] rotary_pos_emb_thw = rotary_pos_emb_thw . flatten ( start_dim = 0 , end_dim = 1 ) cu_seqlens_thw = torch . repeat_interleave ( torch . tensor ([ h * w ], dtype = torch . int32 ), t ) return ( rotary_pos_emb_thw , window_index_thw , cu_seqlens_window_thw , cu_seqlens_thw ) def process_grid_thw ( self , grid_thw ): rotary_pos_emb = [] window_index = [] cu_window_seqlens = [ torch . tensor ([ 0 ], dtype = torch . int32 )] cu_seqlens = [] window_index_id = 0 cu_window_seqlens_last = 0 for t , h , w in grid_thw : t , h , w = int ( t ), int ( h ), int ( w ) llm_h = h // self . spatial_merge_size llm_w = w // self . spatial_merge_size ( rotary_pos_emb_thw , window_index_thw , cu_seqlens_window_thw , cu_seqlens_thw ,
) = self . get_rope_by_thw ( t , h , w ) window_index . append ( window_index_thw + window_index_id ) window_index_id += ( t * llm_h * llm_w ) cu_seqlens_window_thw = ( cu_seqlens_window_thw + cu_window_seqlens_last ) cu_window_seqlens_last = cu_seqlens_window_thw [ - 1 ] cu_window_seqlens . append ( cu_seqlens_window_thw ) rotary_pos_emb . append ( rotary_pos_emb_thw ) cu_seqlens . append ( cu_seqlens_thw ) rotary_pos_emb = torch . cat ( rotary_pos_emb ) window_index = torch . cat ( window_index ) cu_window_seqlens = torch . cat ( cu_window_seqlens ) cu_window_seqlens = torch . unique_consecutive ( cu_window_seqlens ) cu_seqlens = torch . cat ( cu_seqlens ) cu_seqlens = torch . cumsum ( cu_seqlens , dim = 0 , dtype = torch . int32 ) cu_seqlens = F . pad ( cu_seqlens , ( 1 , 0 ), "constant" , 0 ) return rotary_pos_emb , window_index , cu_window_seqlens , cu_seqlens class Qwen2_5_VisionTransformer_Old ( nn . Module ): def __init__ ( self , hidden_size = 1152 , num_heads = 16 , window_size = 32 , patch_size = 14 , spatial_merge_size = 2 , fullatt_block_indexes = [ 0 , 1 , 2 , 3 , 8 , 9 , 10 , 11 , 16 , 17 , 18 , 19 , 24 , 25 , 26 , 27 ],
) -> None : super (). __init__ () self . hidden_size = hidden_size self . num_heads = num_heads self . window_size = window_size self . patch_size = patch_size self . spatial_merge_size = spatial_merge_size self . fullatt_block_indexes = fullatt_block_indexes self . spatial_merge_unit = self . spatial_merge_size ** 2 head_dim = self . hidden_size // self . num_heads self . rotary_pos_emb = Qwen2_5_VisionRotaryEmbedding ( head_dim // 2 ) @ property def dtype ( self ) -> torch . dtype : return torch . float32 @ property def device ( self ) -> torch . device : return torch . device ( 'cpu' ) def rot_pos_emb ( self , grid_thw : torch . Tensor ) -> torch . Tensor : pos_ids = [] for t , h , w in grid_thw : hpos_ids = torch . arange ( h ). unsqueeze ( 1 ). expand ( - 1 , w ) wpos_ids = torch . arange ( w ). unsqueeze ( 0 ). expand ( h , - 1 ) hpos_ids = hpos_ids . reshape ( h // self . spatial_merge_size , self . spatial_merge_size , w // self . spatial_merge_size , self . spatial_merge_size ,
). permute ( 0 , 2 , 1 , 3 ). flatten () wpos_ids = wpos_ids . reshape ( h // self . spatial_merge_size , self . spatial_merge_size , w // self . spatial_merge_size , self . spatial_merge_size ,
). permute ( 0 , 2 , 1 , 3 ). flatten () pos_ids . append ( torch . stack ([ hpos_ids , wpos_ids ], dim = - 1 ). repeat ( t , 1 )) pos_ids = torch . cat ( pos_ids , dim = 0 ) max_grid_size = grid_thw [:, 1 :]. max () rotary_pos_emb_full = self . rotary_pos_emb ( max_grid_size ) rotary_pos_emb = rotary_pos_emb_full [ pos_ids ]. flatten ( 1 ) return rotary_pos_emb def get_window_index ( self , grid_thw ): window_index : list = [] cu_window_seqlens : list = [ 0 ] window_index_id = 0 vit_merger_window_size = ( self . window_size // self . spatial_merge_size // self . patch_size ) for grid_t , grid_h , grid_w in grid_thw : llm_grid_h = grid_h // self . spatial_merge_size llm_grid_w = grid_w // self . spatial_merge_size index = torch . arange ( grid_t * llm_grid_h * llm_grid_w ). reshape ( grid_t , llm_grid_h , llm_grid_w ) pad_h = vit_merger_window_size - llm_grid_h % vit_merger_window_size pad_w = vit_merger_window_size - llm_grid_w % vit_merger_window_size num_windows_h = ( llm_grid_h + pad_h ) // vit_merger_window_size num_windows_w = ( llm_grid_w + pad_w ) // vit_merger_window_size index_padded = F . pad ( index , ( 0 , pad_w , 0 , pad_h ), 'constant' , - 100 ) index_padded = index_padded . reshape ( grid_t , num_windows_h , vit_merger_window_size , num_windows_w , vit_merger_window_size ) index_padded = index_padded . permute ( 0 , 1 , 3 , 2 , 4 ). reshape ( grid_t , num_windows_h * num_windows_w , vit_merger_window_size , vit_merger_window_size ) seqlens = ( index_padded != - 100 ). sum ([ 2 , 3 ]). reshape ( - 1 ) index_padded = index_padded . reshape ( - 1 ) index_new = index_padded [ index_padded != - 100 ] window_index . append ( index_new + window_index_id ) cu_seqlens_tmp = seqlens . cumsum ( 0 ) * self . spatial_merge_unit + cu_window_seqlens [ - 1 ] cu_window_seqlens . extend ( cu_seqlens_tmp . tolist ()) window_index_id += ( grid_t * llm_grid_h * llm_grid_w ). item () window_index = torch . cat ( window_index , dim = 0 ) return window_index , cu_window_seqlens def compute_attn_mask_seqlen ( self , cu_seqlens : torch . Tensor ,
) -> tuple [ None , None ]: return None , None def process_grid_thw ( self , grid_thw_list ): # Convert list to tensor for compatibility with old model grid_thw = torch . tensor ( grid_thw_list , dtype = torch . int32 ) # Compute positional embeddings rotary_pos_emb = self . rot_pos_emb ( grid_thw ) # Compute window indices and seqlens window_index , cu_window_seqlens = self . get_window_index ( grid_thw ) cu_window_seqlens = torch . tensor ( cu_window_seqlens , device = window_index . device , dtype = torch . int32 ) cu_window_seqlens = torch . unique_consecutive ( cu_window_seqlens ) # Compute sequence lengths cu_seqlens = torch . repeat_interleave ( grid_thw [:, 1 ] * grid_thw [:, 2 ], grid_thw [:, 0 ]). cumsum ( dim = 0 , dtype = torch . int32 ) cu_seqlens = F . pad ( cu_seqlens , ( 1 , 0 ), "constant" , 0 ) return rotary_pos_emb , window_index , cu_window_seqlens , cu_seqlens def tensor_equals ( t1 , t2 , name = None , rtol = 1e-5 , atol = 1e-5 ): if t1 . shape != t2 . shape : if name : print ( f"✗ { name } shapes differ: { t1 . shape } vs { t2 . shape } " ) return False equal = torch . allclose ( t1 , t2 , rtol = rtol , atol = atol ) if not equal : # Find the positions where they differ diff_mask = ~ torch . isclose ( t1 , t2 , rtol = rtol , atol = atol ) if diff_mask . sum () > 0 : diff_pos = diff_mask . nonzero () first_diff = diff_pos [ 0 ]. tolist () t1_val = t1 [ tuple ( first_diff )] t2_val = t2 [ tuple ( first_diff )] if name : print ( f"✗ { name } values differ at { first_diff } : { t1_val } vs { t2_val } " ) print ( f"Total number of different values: { diff_mask . sum (). item () } / { t1 . numel () } " ) else : if name : print ( f"✗ { name } values differ but couldn't identify position" ) # Print some stats about the differences if name and t1 . numel () < 100 : print ( f"Old: { t1 . flatten (). tolist () } " ) print ( f"New: { t2 . flatten (). tolist () } " ) return False if name : print ( f"✓ { name } matched" ) return True def run_test ( grid_thw , verbose = True ): # Create models new_model = Qwen2_5_VisionTransformer_New () old_model = Qwen2_5_VisionTransformer_Old () if verbose : print ( " \n Testing with grid_thw:" , grid_thw ) # Test the new model rotary_pos_emb_new , window_index_new , cu_window_seqlens_new , cu_seqlens_new = new_model . process_grid_thw ( grid_thw ) if verbose : print ( " \n New model outputs:" ) print ( f"rotary_pos_emb shape: { rotary_pos_emb_new . shape } " ) print ( f"window_index shape: { window_index_new . shape } " ) print ( f"cu_window_seqlens shape: { cu_window_seqlens_new . shape } " ) print ( f"cu_seqlens shape: { cu_seqlens_new . shape } " ) # Test the old model rotary_pos_emb_old , window_index_old , cu_window_seqlens_old , cu_seqlens_old = old_model . process_grid_thw ( grid_thw ) if verbose : print ( " \n Old model outputs:" ) print ( f"rotary_pos_emb shape: { rotary_pos_emb_old . shape } " ) print ( f"window_index shape: { window_index_old . shape } " ) print ( f"cu_window_seqlens shape: { cu_window_seqlens_old . shape } " ) print ( f"cu_seqlens shape: { cu_seqlens_old . 
shape } " ) # Compare outputs if verbose : print ( " \n Comparing outputs:" ) match_rotary = tensor_equals ( rotary_pos_emb_old , rotary_pos_emb_new , "rotary_pos_emb" if verbose else None ) match_window = tensor_equals ( window_index_old , window_index_new , "window_index" if verbose else None ) match_cu_window = tensor_equals ( cu_window_seqlens_old , cu_window_seqlens_new , "cu_window_seqlens" if verbose else None ) match_cu_seq = tensor_equals ( cu_seqlens_old , cu_seqlens_new , "cu_seqlens" if verbose else None ) all_match = match_rotary and match_window and match_cu_window and match_cu_seq if verbose : print ( f" \n All outputs match: { all_match } " ) if not all_match : error_msg = f"Test failed for grid_thw= { grid_thw } : Outputs between old and new implementations do not match" raise TestFailureException ( error_msg ) return all_match def run_mass_test ( t_range = ( 1 , 50 ), h_range = ( 1 , 250 ), w_range = ( 1 , 250 ), num_samples = 100 , max_images_per_sample = 1 , seed = 42 ): """ Run mass testing by sampling grid_thw configurations from the specified ranges. Args: t_range: Tuple of (min_t, max_t) h_range: Tuple of (min_h, max_h) w_range: Tuple of (min_w, max_w) num_samples: Number of random samples to test max_images_per_sample: Maximum number of images per sample seed: Random seed for reproducibility """ random . seed ( seed ) # Ensure minimum h and w values are at least 2 (spatial_merge_size) # This is required by the model architecture min_t = max ( 1 , t_range [ 0 ]) min_h = max ( 2 , h_range [ 0 ]) # Minimum must be at least spatial_merge_size min_w = max ( 2 , w_range [ 0 ]) # Minimum must be at least spatial_merge_size max_t = t_range [ 1 ] max_h = h_range [ 1 ] max_w = w_range [ 1 ] t_range = ( min_t , max_t ) h_range = ( min_h , max_h ) w_range = ( min_w , max_w ) print ( f"Running mass testing with { num_samples } samples" ) print ( f"T range: { t_range } " ) print ( f"H range: { h_range } " ) print ( f"W range: { w_range } " ) print ( f"Max images per sample: { max_images_per_sample } " ) # Include edge cases edge_cases = [ # Smallest valid values [[ min_t , min_h , min_w ]], # Largest values [[ max_t , max_h , max_w ]], # Min t, max h, w [[ min_t , max_h , max_w ]], # Max t, min h, w [[ max_t , min_h , min_w ]], # Mixed values [[ min_t , max_h , min_w ]],
        [[max_t, min_h, max_w]],
        # Values divisible by window_size/spatial_merge_size/patch_size
        [[min_t, 16, 16]],  # 16 = 32/2/1 (window_size/spatial_merge_size/1)
        [[min_t, 32, 32]],  # 32 = 32/2/0.5 (window_size/spatial_merge_size/0.5)
    ]

    # Add multi-image edge cases if max_images_per_sample > 1
    if max_images_per_sample > 1:
        multi_image_edge_cases = [
            # Multiple small images
            [[min_t, min_h, min_w], [min_t, min_h, min_w]],
            # One small, one large
            [[min_t, min_h, min_w], [max_t, max_h, max_w]],
            # Maximum number of images with varied sizes
            [[min_t, min_h, min_w]] * max_images_per_sample,
        ]
        edge_cases.extend(multi_image_edge_cases)

    # Test edge cases first
    print("\nTesting edge cases:")
    for i, grid_thw in enumerate(edge_cases):
        try:
            print(f"Edge case {i + 1}/{len(edge_cases)}: {grid_thw}")
            run_test(grid_thw, verbose=False)
            print(f"✓ Edge case {i + 1} passed")
        except TestFailureException as e:
            print(f"\nERROR: {e}")
            return False
        except Exception as e:
            print(f"\nUnexpected error for grid_thw={grid_thw}: {e}")
            print(f"Exception details: {type(e).__name__}: {e}")
            return False

    # Generate random samples for the mass test
    samples = []
    for _ in range(num_samples):
        # Decide how many images to include in this sample
        num_images = random.randint(1, max_images_per_sample)

        # Generate grid_thw for each image
        sample = []
        for _ in range(num_images):
            t = random.randint(min_t, max_t)
            h = random.randint(min_h, max_h)
            w = random.randint(min_w, max_w)

            # Ensure h and w are multiples of spatial_merge_size (2)
            h = (h // 2) * 2
            w = (w // 2) * 2
            if h == 0:
                h = 2
            if w == 0:
                w = 2
            sample.append([t, h, w])
        samples.append(sample)

    # Run the mass test with a progress bar
    print(f"\nRunning {num_samples} random samples:")
    progress_bar = tqdm.tqdm(total=num_samples)
    for i, grid_thw in enumerate(samples):
        try:
            run_test(grid_thw, verbose=False)
            progress_bar.update(1)
        except TestFailureException as e:
            progress_bar.close()
            print(f"\nERROR at sample {i + 1}/{num_samples}: {e}")
            return False
        except Exception as e:
            progress_bar.close()
            print(f"\nUnexpected error at sample {i + 1}/{num_samples} for grid_thw={grid_thw}: {e}")
            print(f"Exception details: {type(e).__name__}: {e}")
            return False

    progress_bar.close()
    print(f"\nAll {num_samples} samples passed successfully!")
    return True


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Test Qwen2.5-VL Vision Transformer')
    parser.add_argument('--grid_t', type=int, default=1, help='Grid size T')
    parser.add_argument('--grid_h', type=int, default=36, help='Grid size H')
    parser.add_argument('--grid_w', type=int, default=36, help='Grid size W')
    parser.add_argument('--multiple', action='store_true', help='Test with multiple images')
    parser.add_argument('--large', action='store_true', help='Test with many high-resolution images')
    parser.add_argument('--mass-test', action='store_true', help='Run mass testing with many grid configurations')
    parser.add_argument('--samples', type=int, default=100, help='Number of samples for mass testing')
    parser.add_argument('--seed', type=int, default=42, help='Random seed for mass testing')
    parser.add_argument('--max-t', type=int, default=50, help='Maximum T value for mass testing')
    parser.add_argument('--max-h', type=int, default=250, help='Maximum H value for mass testing')
    parser.add_argument('--max-w', type=int, default=250, help='Maximum W value for mass testing')
    parser.add_argument('--max-images', type=int, default=1, help='Maximum number of images per sample for mass testing')
    args = parser.parse_args()

    if args.mass_test:
        success = run_mass_test(
            t_range=(1, args.max_t),
            h_range=(1, args.max_h),
            w_range=(1, args.max_w),
            num_samples=args.samples,
            max_images_per_sample=args.max_images,
            seed=args.seed,
        )
        sys.exit(0 if success else 1)
    else:
        if args.large:
            # Test with a large number of high-resolution images/videos
            grid_thw = [
                [1, 224, 224],  # High-res image 1
                [1, 112, 112],  # Medium-res image
                [4, 96, 96],    # Video 1
                [1, 168, 168],  # Another image
                [2, 128, 224],  # Video 2
                [1, 224, 224],  # High-res image 2
                [3, 64, 128],   # Video 3
                [1, 96, 96],    # Small image
                [6, 64, 64],    # Longer video
                [1, 192, 192],  # Another image
            ]
            print("Testing with large dataset (many high-resolution images/videos)")
        elif args.multiple:
            # Test with multiple images
            grid_thw = [
                [1, 36, 36],  # First image
                [2, 48, 64],  # Second image (video)
                [1, 24, 24],  # Third image
            ]
            print("Testing with multiple images")
        else:
            # Test with a single image
            grid_thw = [[args.grid_t, args.grid_h, args.grid_w]]

        try:
            # Run correctness test
            run_test(grid_thw)
            print("\nTest completed successfully!")
        except TestFailureException as e:
            print(f"\nERROR: {e}")
            sys.exit(1)  # Exit with error code
zzzyq pushed a commit
to zzzyq/vllm
that referenced
this pull request May 24, 2025 [PERF] Speed up Qwen2.5-VL model by speed up rotary position embedding ( … 92d9cdb vllm-project#17973 )
Signed-off-by: Vadim Gimpelson <[email protected]>
Signed-off-by: Yuqi Zhang <[email protected]> mergify bot added
the qwen Related to Qwen models label Jun 19, 2025 minpeter pushed a commit
to minpeter/vllm
that referenced
this pull request Jun 24, 2025 [PERF] Speed up Qwen2.5-VL model by speed up rotary position embedding ( … 65b6ec6 vllm-project#17973 )
Signed-off-by: Vadim Gimpelson <[email protected]>
Signed-off-by: minpeter <[email protected]>
|
2025-09-07 17:51:07
|
015069b01741e9ecb9e604c7fe87fbdfc306ebe5
|
https://github.com/vllm-project/vllm/pull/17515
| false | false | false | true |
TEST: test, CI, CI
|
Copy link Contributor chaunceyjiang commented May 1, 2025 • edited by github-actions bot
FIX #17369 (comment). Use string partition instead of regex (a minimal sketch of the partition-based approach appears at the end of this record).
[Misc]: Optimize the Qwen3_ReasoningParser extract_reasoning_content … 493c2a8 Signed-off-by: chaunceyjiang <[email protected]>
chaunceyjiang changed the title [Misc]: Optimize the Qwen3_ReasoningParser extract_reasoning_content to [Misc] Optimize the Qwen3_ReasoningParser extract_reasoning_content May 1, 2025
Copy link Contributor Author chaunceyjiang commented May 1, 2025: /cc @gaocegege PTAL.
gaocegege reviewed May 1, 2025 and left a comment: We could remove self.reasoning_regex
[Misc]: Optimize the Qwen3_ReasoningParser extract_reasoning_content … d165310 Signed-off-by: chaunceyjiang <[email protected]>
Copy link Contributor Author chaunceyjiang commented May 1, 2025, in reply to "We could remove self.reasoning_regex": Done.
gaocegege approved these changes May 1, 2025
Copy link Contributor Author chaunceyjiang commented May 1, 2025: /cc @DarkLight1337 PTAL.
DarkLight1337 approved these changes May 1, 2025
vllm-bot merged commit 015069b into vllm-project:main May 1, 2025 (20 of 21 checks passed)
chaunceyjiang deleted the qwen3_opttimize branch May 1, 2025 10:41
radeksm pushed a commit
to radeksm/vllm
that referenced
this pull request May 2, 2025 [Misc] Optimize the Qwen3_ReasoningParser extract_reasoning_content ( v… … 2429b35 …llm-project#17515 )
Signed-off-by: chaunceyjiang <[email protected]> RichardoMrMu pushed a commit
to RichardoMrMu/vllm
that referenced
this pull request May 12, 2025 [Misc] Optimize the Qwen3_ReasoningParser extract_reasoning_content ( v… … 69172c0 …llm-project#17515 )
Signed-off-by: chaunceyjiang <[email protected]>
Signed-off-by: Mu Huai <[email protected]> zzzyq pushed a commit
to zzzyq/vllm
that referenced
this pull request May 24, 2025 [Misc] Optimize the Qwen3_ReasoningParser extract_reasoning_content ( v… … c15fd57 …llm-project#17515 )
Signed-off-by: chaunceyjiang <[email protected]>
Signed-off-by: Yuqi Zhang <[email protected]> minpeter pushed a commit
to minpeter/vllm
that referenced
this pull request Jun 24, 2025 [Misc] Optimize the Qwen3_ReasoningParser extract_reasoning_content ( v… … 2891c03 …llm-project#17515 )
Signed-off-by: chaunceyjiang <[email protected]>
Signed-off-by: minpeter <[email protected]>
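For reference, a minimal illustrative sketch of the partition-based extraction described above. The function name, tag strings, and return shape here are assumptions for illustration, not the actual vLLM Qwen3 reasoning parser code.

from typing import Optional, Tuple

THINK_START = "<think>"
THINK_END = "</think>"

def extract_reasoning_content_sketch(model_output: str) -> Tuple[Optional[str], Optional[str]]:
    """Split a completion into (reasoning_content, content) without a regex."""
    if THINK_END not in model_output:
        # No closing tag: treat everything as normal content.
        return None, model_output
    # The opening tag may be missing; str.partition returns the original
    # string as `before` when the separator is absent.
    before, sep, after = model_output.partition(THINK_START)
    remainder = after if sep else before
    reasoning, _, content = remainder.partition(THINK_END)
    return reasoning or None, content or None

# Example: prints ('2 + 2 is 4', 'The answer is 4.')
print(extract_reasoning_content_sketch("<think>2 + 2 is 4</think>The answer is 4."))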
|
2025-09-07 17:51:10
|
bc7c4d206bbfb56b06d218b6c2971e8ca191db36
|
https://github.com/vllm-project/vllm/pull/13305
| true | true | true | true |
LM_EVAL: lm_eval, lm_eval, gsm8k | PERF: ttft, TTFT, TTFT | SERVING: Serving, Serving, Serving | TEST: test, Test, test
|
Copy link Contributor maleksan85 commented Feb 14, 2025 • edited by github-actions bot
Speed up prefix prefill with vLLM V1 on AMD GPUs. Improvements: vectorization in the context loop (the most complex one, as the k cache shape is very specific); refactoring for online softmax computation; refactoring of the kernel so autotune might select the best configs per shape; plus adding a new spectrum of unrolling/staging options in the autotuner. (A small sketch of the online-softmax idea appears at the end of this record.) More details on Triton kernel tuning: https://rocm.docs.amd.com/en/docs-6.1.1/how-to/llm-fine-tuning-optimization/optimizing-triton-kernel.html (see last comments)
SageMoore added 30 commits February 5, 2025 20:42 init … b6b00d7 Signed-off-by: Sage Moore <[email protected]> temporarily remove torch from requirements-build … fa52268 Signed-off-by: Sage Moore <[email protected]> move rocm logic to its own attention backend … f563276 Signed-off-by: Sage Moore <[email protected]> actually add backend … 2a03b92 Signed-off-by: Sage Moore <[email protected]> more rocm refactoring … 4bdf7de Signed-off-by: Sage Moore <[email protected]> Merge branch 'main' of https://github.com/neuralmagic/vllm into sage/… … 875fcfc …amd-v1 more rocm refactoring … e507e30 Signed-off-by: Sage Moore <[email protected]> hack to fix the multiprocessing isssue … b9ce259 Signed-off-by: Sage Moore <[email protected]> minor print fix … f2cc5e3 Signed-off-by: Sage Moore <[email protected]> remove cruft … d6f6c5c Signed-off-by: Sage Moore <[email protected]> format … 2bf214a Signed-off-by: Sage Moore <[email protected]> modify requirements files … 11411cb Signed-off-by: Sage Moore <[email protected]> remove basic.py changes … c2499bf Signed-off-by: Sage Moore <[email protected]> cleanup … cf6f691 Signed-off-by: Sage Moore <[email protected]> add support for passing in softmax scales to the context_attn_fwd … 4505f53 Signed-off-by: Sage Moore <[email protected]> Merge branch 'main' of https://github.com/neuralmagic/vllm into sage/… … 9a0416a …amd-v1 added requirements-rocm-build … ef9ae86 Signed-off-by: Sage Moore <[email protected]> Merge branch 'main' of https://github.com/neuralmagic/vllm into sage/… … 0ccef65 …amd-v1 minor setup.py fix … a00a2d9 Signed-off-by: Sage Moore <[email protected]> add batch size back in … afb15f5 Signed-off-by: Sage Moore <[email protected]> revert setup.py change … 08a25b7 Signed-off-by: Sage Moore <[email protected]> update setup.py … 55eb036 Signed-off-by: Sage Moore <[email protected]> init … 95df571 Signed-off-by: Sage Moore <[email protected]> init … 0bfe435 Signed-off-by: Sage Moore <[email protected]> Merge branch 'main' of https://github.com/neuralmagic/vllm into sage/… … 4b62de2 …amd-v1
Signed-off-by: Sage Moore <[email protected]> minor fix … d2f3c85 Signed-off-by: Sage Moore <[email protected]> Merge branch 'main' of https://github.com/neuralmagic/vllm into sage/… … 442bc7b …amd-v1 minor fix … 9472636 Signed-off-by: Sage Moore <[email protected]> Merge branch 'main' of https://github.com/neuralmagic/vllm into sage/… … c7497f3 …prefix-prefill-refactor update error messages … 21d8d6a Signed-off-by: Sage Moore <[email protected]> 83 hidden items
Copy link Contributor Author maleksan85 commented Apr 8, 2025
HIP_VISIBLE_DEVICES=6 VLLM_ENABLE_V1_MULTIPROCESSING=0 VLLM_USE_V1=1 lm_eval --model vllm --model_args pretrained=/data/models/Llama-3.1-8B-Instruct --tasks gsm8k --num_fewshot 5 --batch_size auto --limit 500
2025-04-08:18:10:02,846 INFO [lm_eval.loggers.evaluation_tracker:272] Output path not provided, skipping saving results aggregated
vllm (pretrained=/data/models/Llama-3.1-8B-Instruct), gen_kwargs: (None), limit: 500.0, num_fewshot: 5, batch_size: auto
Tasks | Version | Filter           | n-shot | Metric        | Value | Stderr
gsm8k | 3       | flexible-extract | 5      | exact_match ↑ | 0.808 | ± 0.0176
      |         | strict-match     | 5      | exact_match ↑ | 0.782 | ± 0.0185
Copy link Contributor Author maleksan85 commented Apr 8, 2025
python3 benchmarks/benchmark_serving.py --backend vllm --model /data/models/Llama-3.1-70B-Instruct --dataset-name random --random-input-len 10000 --random-output-len 100 --num-prompts 300 --seed 42 --ignore-eos --percentile-metrics "ttft,tpot,itl,e2el"
PR (like 20% gain)
============ Serving Benchmark Result ============
Successful requests: 300
Benchmark duration (s): 409.78
Total input tokens: 3000000
Total generated tokens: 30000
Request throughput (req/s): 0.73
Output token throughput (tok/s): 73.21
Total Token throughput (tok/s): 7394.28
---------------Time to First Token----------------
Mean TTFT (ms): 205042.73
Median TTFT (ms): 203406.19
P99 TTFT (ms): 400609.81
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 1610.15
Median TPOT (ms): 2027.83
P99 TPOT (ms): 2239.19
---------------Inter-token Latency----------------
Mean ITL (ms): 1610.15
Median ITL (ms): 80.56
P99 ITL (ms): 5252.32
----------------End-to-end Latency----------------
Mean E2EL (ms): 364447.21
Median E2EL (ms): 404161.34
P99 E2EL (ms): 409588.24
================================================== Upstream ============ Serving Benchmark Result ============
Successful requests: 300
Benchmark duration (s): 498.15
Total input tokens: 3000000
Total generated tokens: 30000
Request throughput (req/s): 0.60
Output token throughput (tok/s): 60.22
Total Token throughput (tok/s): 6082.51
---------------Time to First Token----------------
Mean TTFT (ms): 249095.71
Median TTFT (ms): 248711.87
P99 TTFT (ms): 488484.85
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 1957.47
Median TPOT (ms): 2462.50
P99 TPOT (ms): 2732.60
---------------Inter-token Latency----------------
Mean ITL (ms): 1957.47
Median ITL (ms): 80.32
P99 ITL (ms): 8005.81
----------------End-to-end Latency----------------
Mean E2EL (ms): 442885.68
Median E2EL (ms): 492500.58
P99 E2EL (ms): 497952.19
==================================================
Copy link Contributor Author maleksan85 commented Apr 8, 2025 • edited
python3 benchmarks/benchmark_serving.py --backend vllm --model /data/models/Llama-3.1-70B-Instruct --dataset-name random --random-input-len 5000 --random-output-len 100 --num-prompts 500 --seed 42 --ignore-eos --percentile-metrics "ttft,tpot,itl,e2el"
PR (10% gain)
============ Serving Benchmark Result ============
Successful requests: 500
Benchmark duration (s): 319.37
Total input tokens: 2500000
Total generated tokens: 50000
Request throughput (req/s): 1.57
Output token throughput (tok/s): 156.56
Total Token throughput (tok/s): 7984.50
---------------Time to First Token----------------
Mean TTFT (ms): 155485.39
Median TTFT (ms): 149836.40
P99 TTFT (ms): 310684.27
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 1219.18
Median TPOT (ms): 1556.81
P99 TPOT (ms): 1629.28
---------------Inter-token Latency----------------
Mean ITL (ms): 1219.18
Median ITL (ms): 77.67
P99 ITL (ms): 4265.61
----------------End-to-end Latency----------------
Mean E2EL (ms): 276184.44
Median E2EL (ms): 310784.82
P99 E2EL (ms): 319205.24
================================================== Upstream ============ Serving Benchmark Result ============
Successful requests: 500
Benchmark duration (s): 355.99
Total input tokens: 2500000
Total generated tokens: 50000
Request throughput (req/s): 1.40
Output token throughput (tok/s): 140.45
Total Token throughput (tok/s): 7163.04
---------------Time to First Token----------------
Mean TTFT (ms): 172121.19
Median TTFT (ms): 162339.60
P99 TTFT (ms): 349045.74
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 1369.76
Median TPOT (ms): 1699.35
P99 TPOT (ms): 1892.04
---------------Inter-token Latency----------------
Mean ITL (ms): 1369.76
Median ITL (ms): 78.00
P99 ITL (ms): 6167.44
----------------End-to-end Latency----------------
Mean E2EL (ms): 307727.51
Median E2EL (ms): 349138.54
P99 E2EL (ms): 355831.83
==================================================
root and others added 9 commits April 9, 2025 03:54 renaming kernel … 5d9a929 Signed-off-by: root <[email protected]>
Signed-off-by: <> clean up and fix for failed kernel tests … 27f044b Signed-off-by: Aleksandr Malyshev <[email protected]> clean up and fix for failed kernel tests … cfd60c9 Signed-off-by: Aleksandr Malyshev <[email protected]> clean up and fix for failed kernel tests … 0a26697 Signed-off-by: Aleksandr Malyshev <[email protected]> got rid of autotuner and get stable runs right from the first iteration … 35a6e49 Signed-off-by: maleksan85 <[email protected]> restoring paged attn as there is no autotuning anymore and that will … … 6d5b3f2 …no be error during start
Signed-off-by: maleksan85 <[email protected]> poking test rerun as one failed and seems not because of this change … 7140d1a Signed-off-by: maleksan85 <[email protected]> Merge branch 'main' of github.com:vllm-project/vllm into upstream_pre… … 169f714 …fix_prefill_speed_up Merge branch 'upstream/main' into upstream_prefix_prefill_speed_up f437b11 SageMoore reviewed Apr 14, 2025 View reviewed changes Copy link Contributor SageMoore left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Looks reasonable. Just a few nits. Thanks for all of the hard work making this kernel faster. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . ❤️ 1 maleksan85 reacted with heart emoji All reactions ❤️ 1 reaction vllm/attention/ops/prefix_prefill.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/attention/ops/prefix_prefill.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . maleksan85 added 4 commits April 14, 2025 22:43 comment correction … ba078b6 Signed-off-by: maleksan85 <[email protected]> dot operation in triton doesn't support k to be 8 so increasing block… … 617ef08 … size to most commonly used
Signed-off-by: maleksan85 <[email protected]> to kick CIs again Async Engine, Inputs, Utils, Worker Test seems flaky … 771ad9e Signed-off-by: maleksan85 <[email protected]> to kick CIs again … b6bf365 Signed-off-by: maleksan85 <[email protected]> bringlein mentioned this pull request Apr 16, 2025 [Kernel] Adding basic Triton JitCache for triton_attn #16606 Open Hide details View details vllm-bot merged commit bc7c4d2 into vllm-project : main Apr 23, 2025 41 of 46 checks passed Uh oh! There was an error while loading. Please reload this page . frieda-huang pushed a commit
to frieda-huang/vllm
that referenced
this pull request Apr 23, 2025 [Kernel][ROCM] Upstream prefix prefill speed up for vLLM V1 ( vllm-pro… … 5b0368a …ject#13305 )
Signed-off-by: Sage Moore <[email protected]>
Signed-off-by: root <[email protected]>
Signed-off-by: Aleksandr Malyshev <[email protected]>
Signed-off-by: root <[email protected]>
Signed-off-by: maleksan85 <[email protected]>
Signed-off-by: <>
Co-authored-by: Sage Moore <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: qli88 <[email protected]>
Co-authored-by: root <[email protected]>
Signed-off-by: Frieda (Jingying) Huang <[email protected]> gshtras added a commit
to ROCm/vllm
that referenced
this pull request Apr 25, 2025 Upstream merge 2025 04 25 ( #524 ) … 28007b0 * [BugFix] Remove default multiproc executor `collective_rpc` timeout ( vllm-project#17000 )
Signed-off-by: Nick Hill <[email protected]>
* [Core][V1][TPU] Enable structured decoding on TPU V1 ( vllm-project#16499 )
Signed-off-by: Chenyaaang <[email protected]>
* [Bugfix] validate urls object for multimodal content parts ( vllm-project#16990 )
Signed-off-by: Guillaume Calmettes <[email protected]>
* add Dockerfile build vllm against torch nightly ( vllm-project#16936 )
Signed-off-by: Yang Wang <[email protected]>
* [Kernel][ROCM] Upstream prefix prefill speed up for vLLM V1 ( vllm-project#13305 )
Signed-off-by: Sage Moore <[email protected]>
Signed-off-by: root <[email protected]>
Signed-off-by: Aleksandr Malyshev <[email protected]>
Signed-off-by: root <[email protected]>
Signed-off-by: maleksan85 <[email protected]>
Signed-off-by: <>
Co-authored-by: Sage Moore <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: qli88 <[email protected]>
Co-authored-by: root <[email protected]>
* [V1][DP] More robust DP/EP dummy request coordination ( vllm-project#16277 )
Signed-off-by: Nick Hill <[email protected]>
* [BugFix] Revert ROCm Custom Paged Attention Env Flag Check ( vllm-project#17022 )
Signed-off-by: vllmellm <[email protected]>
* Revert "[Misc] Add S3 environment variables for better support of MinIO." ( vllm-project#17021 )
* [misc] tune some env vars for GB200 ( vllm-project#16992 )
Signed-off-by: youkaichao <[email protected]>
* [INTEL-HPU][v0] Port delayed sampling to upstream ( vllm-project#16949 )
Signed-off-by: Michal Adamczyk <[email protected]>
Signed-off-by: Chendi Xue <[email protected]>
Co-authored-by: Michal Adamczyk <[email protected]>
* [doc] add download path tips ( vllm-project#17013 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [Bugfix] Triton FA function takes no keyword arguments ( vllm-project#16902 )
Signed-off-by: vllmellm <[email protected]>
* [V1] Avoid socket errors during shutdown when requests are in in-flight ( vllm-project#16807 )
Signed-off-by: Nick Hill <[email protected]>
* [BugFix] llama4 fa3 fix - RuntimeError: scheduler_metadata must have shape (metadata_size) ( vllm-project#16998 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Misc] Improve readability of get_open_port function. ( vllm-project#17024 )
Signed-off-by: gitover22 <[email protected]>
* [Bugfix] Fix AssertionError: skip_special_tokens=False is not supported for Mistral tokenizers ( vllm-project#16964 )
Signed-off-by: chaunceyjiang <[email protected]>
* [CI] Run v1/test_serial_utils.py in CI ( vllm-project#16996 )
Signed-off-by: Russell Bryant <[email protected]>
* Mistral-format support for compressed-tensors ( vllm-project#16803 )
Signed-off-by: mgoin <[email protected]>
* Categorize `tests/kernels/` based on kernel type ( vllm-project#16799 )
Signed-off-by: mgoin <[email protected]>
* [Doc] Add top anchor and a note to quantization/bitblas.md ( vllm-project#17042 )
Signed-off-by: windsonsea <[email protected]>
* Ensure that `pid` passed to `kill_process_tree` is `int` for `mypy` ( vllm-project#17051 )
Signed-off-by: Harry Mellor <[email protected]>
* [CI] Update structured-output label automation ( vllm-project#17055 )
Signed-off-by: Russell Bryant <[email protected]>
* Improve Transformers backend model loading QoL ( vllm-project#17039 )
Signed-off-by: Harry Mellor <[email protected]>
* `CacheConfig.block_size` should always be `int` when used ( vllm-project#17052 )
Signed-off-by: Harry Mellor <[email protected]>
* Use `@property` and private field for `data_parallel_rank_local` ( vllm-project#17053 )
Signed-off-by: Harry Mellor <[email protected]>
* [Frontend] Support guidance:no-additional-properties for compatibility with xgrammar ( vllm-project#15949 )
Signed-off-by: Travis Johnson <[email protected]>
* [BugFix][V1] Fix int32 token index overflow when preparing input ids ( vllm-project#16806 )
* [V1][Spec Decode] Always use argmax for sampling draft tokens ( vllm-project#16899 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [CI/Build] workaround for CI build failure ( vllm-project#17070 )
Signed-off-by: csy1204 <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Quantization]add prefix for commandA quantized model ( vllm-project#17017 )
* [Minor] Use larger batch sizes for A100/B100/B200/MI300x ( vllm-project#17073 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Bugfix] Enable V1 usage stats ( vllm-project#16986 )
Signed-off-by: mgoin <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
* More informative error when using Transformers backend ( vllm-project#16988 )
Signed-off-by: Harry Mellor <[email protected]>
* Addendum Fix to support FIPS enabled machines with MD5 hashing ( vllm-project#17043 )
Signed-off-by: sydarb <[email protected]>
* [Bugfix][Core] add seq_id_to_seq_group clearing to avoid memory leak when s… ( vllm-project#16472 )
Signed-off-by: 开哲 <[email protected]>
Co-authored-by: 开哲 <[email protected]>
* [V1] Update structured output ( vllm-project#16812 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [doc] update to hyperlink ( vllm-project#17096 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* Add docs for runai_streamer_sharded ( vllm-project#17093 )
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [Chore] Remove Sampler from Model Code ( vllm-project#17084 )
Signed-off-by: Woosuk Kwon <[email protected]>
* Disable enforce_eager for V1 TPU sampler and structured output tests ( vllm-project#17016 )
Signed-off-by: mgoin <[email protected]>
* Simplify `TokenizerGroup` ( vllm-project#16790 )
Signed-off-by: Harry Mellor <[email protected]>
* Fix OOT registration test ( vllm-project#17099 )
Signed-off-by: Harry Mellor <[email protected]>
* [V1][PP] Optimization: continue scheduling prefill chunks ( vllm-project#17080 )
Signed-off-by: Rui Qiao <[email protected]>
* [Misc] Remove OLMo2 config copy ( vllm-project#17066 )
Signed-off-by: Isotr0py <[email protected]>
* Improve static type checking in `LoRAModelRunnerMixin` ( vllm-project#17104 )
Signed-off-by: Harry Mellor <[email protected]>
* [V1][Structured Output] Clear xgrammar compiler object when engine core shut down to avoid nanobind leaked warning ( vllm-project#16954 )
Signed-off-by: shen-shanshan <[email protected]>
* [Frontend] Using matryoshka_dimensions control the allowed output dimensions. ( vllm-project#16970 )
* Add missing rocm_skinny_gemms kernel test to CI ( vllm-project#17060 )
Signed-off-by: mgoin <[email protected]>
* [Misc] refactor example series - structured outputs ( vllm-project#17040 )
Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
* [V1][Spec Decoding] Add num_drafts and num_accepted_tokens_per_position metrics ( vllm-project#16665 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [CI] Add automation for the `tool-calling` github label ( vllm-project#17118 )
Signed-off-by: Russell Bryant <[email protected]>
* Updating builkite job for IBM Power ( vllm-project#17111 )
Signed-off-by: Aaruni Aggarwal <[email protected]>
* existing torch installation pip command fix for docs ( vllm-project#17059 )
* Molmo Requirements ( vllm-project#17026 )
Signed-off-by: Eyshika Agarwal <[email protected]>
Signed-off-by: eyshika <[email protected]>
* Add `:markdownhelp:` to `EngineArgs` docs so markdown docstrings render properly ( vllm-project#17124 )
Signed-off-by: Harry Mellor <[email protected]>
* Improve configs - `LoRAConfig` + `PromptAdapterConfig` ( vllm-project#16980 )
Signed-off-by: Harry Mellor <[email protected]>
* [Docs] Generate correct github links for decorated functions ( vllm-project#17125 )
Signed-off-by: Russell Bryant <[email protected]>
* Add collective_rpc to llm engine ( vllm-project#16999 )
Signed-off-by: Yinghai Lu <[email protected]>
* Add chat template for Llama 4 models ( vllm-project#16428 )
Signed-off-by: Max de Bayser <[email protected]>
* [Misc] Add example to run DeepSeek with Ray Serve LLM ( vllm-project#17134 )
Signed-off-by: Rui Qiao <[email protected]>
* Better error message for missing mistral params.json ( vllm-project#17132 )
Signed-off-by: mgoin <[email protected]>
* Use custom address for listening socket ( vllm-project#15988 )
Signed-off-by: Jens Glaser <[email protected]>
* [FEAT] [ROCm]: AITER Fused MOE V1 Support ( vllm-project#16752 )
Signed-off-by: vllmellm <[email protected]>
Co-authored-by: tjtanaa <[email protected]>
* [Attention] FA3 decode perf improvement - single mma warp group support for head dim 128 ( vllm-project#16864 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* fix float16 support for kimi-vl ( vllm-project#17156 )
Co-authored-by: zhouzaida <[email protected]>
* [Doc] V1 : Update LoRA status ( vllm-project#17133 )
Signed-off-by: varun sundar rabindranath <[email protected]>
Co-authored-by: varun sundar rabindranath <[email protected]>
* [Docs] Fix True->true in supported_models.md ( vllm-project#17141 )
* Move missed `SchedulerConfig` args into scheduler config group in `EngineArgs` ( vllm-project#17131 )
Signed-off-by: Harry Mellor <[email protected]>
* [Misc] Clean up redundant code in uniproc_executor.py ( vllm-project#16762 )
Signed-off-by: Lifu Huang <[email protected]>
* [Bugfix][Misc] Use TritonPlaceholderModule to defensively import triton ( vllm-project#15099 )
Signed-off-by: Mengqing Cao <[email protected]>
* [Misc] Benchmark Serving Script Support Appending Results ( vllm-project#17028 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Perf]Optimize rotary_emb implementation to use Triton operator for improved inference performance ( vllm-project#16457 )
Signed-off-by: cynthieye <[email protected]>
Co-authored-by: MagnetoWang <[email protected]>
* [Bugfix] remove fallback in guided_json (int range, patterns) ( vllm-project#16725 )
Signed-off-by: csy1204 <[email protected]>
Co-authored-by: 조상연[플레이스 AI] <[email protected]>
* [Quantization][FP8] Add support for FP8 models with input_scale for output projection and QK quantization ( vllm-project#15734 )
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Luka Govedič <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
* [Doc] Add headings to improve gptqmodel.md ( vllm-project#17164 )
Signed-off-by: windsonsea <[email protected]>
* Only turn on FastIncrementalDetokenizer when tokenizers >= 0.21.1 ( vllm-project#17158 )
* [Doc] Add two links to disagg_prefill.md ( vllm-project#17168 )
Signed-off-by: windsonsea <[email protected]>
* [Doc] Move todo out of beam search docstring ( vllm-project#17183 )
Signed-off-by: Alex-Brooks <[email protected]>
* [Bugfix] Fix mistral model tests ( vllm-project#17181 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix Mistral ChatCompletionRequest Body Exception ( vllm-project#16769 )
Signed-off-by: Jasmond Loh <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* Fix API typo and remove FP8 on V1 restriction
---------
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: Chenyaaang <[email protected]>
Signed-off-by: Guillaume Calmettes <[email protected]>
Signed-off-by: Yang Wang <[email protected]>
Signed-off-by: Sage Moore <[email protected]>
Signed-off-by: root <[email protected]>
Signed-off-by: Aleksandr Malyshev <[email protected]>
Signed-off-by: root <[email protected]>
Signed-off-by: maleksan85 <[email protected]>
Signed-off-by: <>
Signed-off-by: vllmellm <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Michal Adamczyk <[email protected]>
Signed-off-by: Chendi Xue <[email protected]>
Signed-off-by: reidliu41 <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: gitover22 <[email protected]>
Signed-off-by: chaunceyjiang <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: windsonsea <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Travis Johnson <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: csy1204 <[email protected]>
Signed-off-by: sydarb <[email protected]>
Signed-off-by: 开哲 <[email protected]>
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: shen-shanshan <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Aaruni Aggarwal <[email protected]>
Signed-off-by: Eyshika Agarwal <[email protected]>
Signed-off-by: eyshika <[email protected]>
Signed-off-by: Yinghai Lu <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Jens Glaser <[email protected]>
Signed-off-by: varun sundar rabindranath <[email protected]>
Signed-off-by: Lifu Huang <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: cynthieye <[email protected]>
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Luka Govedič <[email protected]>
Signed-off-by: Alex-Brooks <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Jasmond Loh <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: Chenyaaang <[email protected]>
Co-authored-by: Guillaume Calmettes <[email protected]>
Co-authored-by: Yang Wang <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: Sage Moore <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: qli88 <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: vllmellm <[email protected]>
Co-authored-by: Chauncey <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Chendi.Xue <[email protected]>
Co-authored-by: Michal Adamczyk <[email protected]>
Co-authored-by: Reid <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: huafeng <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Michael Yao <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Yong Hoon Shin <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: Sangyeon Cho <[email protected]>
Co-authored-by: Chen Xia <[email protected]>
Co-authored-by: Areeb Syed <[email protected]>
Co-authored-by: 张宇 <[email protected]>
Co-authored-by: 开哲 <[email protected]>
Co-authored-by: omer-dayan <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Shanshan Shen <[email protected]>
Co-authored-by: wang.yuqi <[email protected]>
Co-authored-by: Mark McLoughlin <[email protected]>
Co-authored-by: Aaruni Aggarwal <[email protected]>
Co-authored-by: Atilla <[email protected]>
Co-authored-by: Eyshika Agarwal <[email protected]>
Co-authored-by: Yinghai Lu <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: jglaser <[email protected]>
Co-authored-by: tjtanaa <[email protected]>
Co-authored-by: Zaida Zhou <[email protected]>
Co-authored-by: zhouzaida <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: varun sundar rabindranath <[email protected]>
Co-authored-by: Lifu Huang <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
Co-authored-by: yexin(叶鑫) <[email protected]>
Co-authored-by: MagnetoWang <[email protected]>
Co-authored-by: 조상연[플레이스 AI] <[email protected]>
Co-authored-by: rasmith <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
Co-authored-by: Lu Fang <[email protected]>
Co-authored-by: Alex Brooks <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Jasmond L <[email protected]> jikunshang pushed a commit
to jikunshang/vllm
that referenced
this pull request Apr 29, 2025 [Kernel][ROCM] Upstream prefix prefill speed up for vLLM V1 ( vllm-pro… … c8ceba9 …ject#13305 )
Signed-off-by: Sage Moore <[email protected]>
Signed-off-by: root <[email protected]>
Signed-off-by: Aleksandr Malyshev <[email protected]>
Signed-off-by: root <[email protected]>
Signed-off-by: maleksan85 <[email protected]>
Signed-off-by: <>
Co-authored-by: Sage Moore <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: qli88 <[email protected]>
Co-authored-by: root <[email protected]> huydhn mentioned this pull request Apr 29, 2025 Fix some speculative decode tests with tl.dot #17371 Merged lk-chen pushed a commit
to lk-chen/vllm
that referenced
this pull request Apr 29, 2025 [Kernel][ROCM] Upstream prefix prefill speed up for vLLM V1 ( vllm-pro… … 4bf77e2 …ject#13305 )
Signed-off-by: Sage Moore <[email protected]>
Signed-off-by: root <[email protected]>
Signed-off-by: Aleksandr Malyshev <[email protected]>
Signed-off-by: root <[email protected]>
Signed-off-by: maleksan85 <[email protected]>
Signed-off-by: <>
Co-authored-by: Sage Moore <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: qli88 <[email protected]>
Co-authored-by: root <[email protected]> adobrzyn pushed a commit
to HabanaAI/vllm-fork
that referenced
this pull request Apr 30, 2025 [Kernel][ROCM] Upstream prefix prefill speed up for vLLM V1 ( vllm-pro… … d4a8c54 …ject#13305 )
Signed-off-by: Sage Moore <[email protected]>
Signed-off-by: root <[email protected]>
Signed-off-by: Aleksandr Malyshev <[email protected]>
Signed-off-by: root <[email protected]>
Signed-off-by: maleksan85 <[email protected]>
Signed-off-by: <>
Co-authored-by: Sage Moore <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: qli88 <[email protected]>
Co-authored-by: root <[email protected]>
Signed-off-by: Agata Dobrzyniewicz <[email protected]> RichardoMrMu pushed a commit
to RichardoMrMu/vllm
that referenced
this pull request May 12, 2025 [Kernel][ROCM] Upstream prefix prefill speed up for vLLM V1 ( vllm-pro… … f32d058 …ject#13305 )
Signed-off-by: Sage Moore <[email protected]>
Signed-off-by: root <[email protected]>
Signed-off-by: Aleksandr Malyshev <[email protected]>
Signed-off-by: root <[email protected]>
Signed-off-by: maleksan85 <[email protected]>
Signed-off-by: <>
Co-authored-by: Sage Moore <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: qli88 <[email protected]>
Co-authored-by: root <[email protected]>
Signed-off-by: Mu Huai <[email protected]> ckhordiasma mentioned this pull request May 14, 2025 nm vllm ent 0.8.5 sync red-hat-data-services/vllm#139 Merged minpeter pushed a commit
to minpeter/vllm
that referenced
this pull request Jun 24, 2025 [Kernel][ROCM] Upstream prefix prefill speed up for vLLM V1 ( vllm-pro… … b3ce066 …ject#13305 )
Signed-off-by: Sage Moore <[email protected]>
Signed-off-by: root <[email protected]>
Signed-off-by: Aleksandr Malyshev <[email protected]>
Signed-off-by: root <[email protected]>
Signed-off-by: maleksan85 <[email protected]>
Signed-off-by: <>
Co-authored-by: Sage Moore <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: qli88 <[email protected]>
Co-authored-by: root <[email protected]>
Signed-off-by: minpeter <[email protected]>
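For illustration, a toy online-softmax sketch in plain PyTorch rather than Triton. It is not the kernel from this PR; the function name, block layout, and shapes are assumptions. It shows the running-max and running-normalizer bookkeeping that the "refactoring for online softmax computation" item in the PR description refers to: attention output is accumulated block by block without materializing the full softmax row.

import torch

def online_softmax_attention(q, k_blocks, v_blocks, scale):
    # q: [M, D]; k_blocks/v_blocks: lists of [BLOCK_N, D] / [BLOCK_N, Dv] chunks.
    m = torch.full((q.shape[0],), float("-inf"))  # running row maximum
    denom = torch.zeros(q.shape[0])               # running softmax normalizer
    acc = torch.zeros(q.shape[0], v_blocks[0].shape[1])
    for k_blk, v_blk in zip(k_blocks, v_blocks):
        scores = (q @ k_blk.T) * scale            # [M, BLOCK_N]
        m_new = torch.maximum(m, scores.max(dim=1).values)
        alpha = torch.exp(m - m_new)              # rescales previous partial sums
        p = torch.exp(scores - m_new[:, None])
        denom = denom * alpha + p.sum(dim=1)
        acc = acc * alpha[:, None] + p @ v_blk
        m = m_new
    return acc / denom[:, None]

# Sanity check against the naive reference implementation.
q = torch.randn(4, 64)
ks = [torch.randn(16, 64) for _ in range(3)]
vs = [torch.randn(16, 64) for _ in range(3)]
ref = torch.softmax((q @ torch.cat(ks).T) * 0.125, dim=-1) @ torch.cat(vs)
assert torch.allclose(online_softmax_attention(q, ks, vs, 0.125), ref, atol=1e-5)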
|
2025-09-07 17:51:14
|
299ebb62b269ce167eb1c71b5e39a1dc1f65ce1c
|
https://github.com/vllm-project/vllm/pull/16436
| false | true | true | true |
PERF: TTFT, TTFT, TTFT | SERVING: vllm serve, Serving, Serving | TEST: test, CI, CI
|
Copy link Contributor chanh commented Apr 10, 2025 • edited by github-actions bot
This code inside apply_penalties does advanced indexing on a tensor, which triggers nonzero() and therefore currently requires a CPU sync in PyTorch. With torch.cuda.set_sync_debug_mode("warn"), the PyTorch framework confirms this:
/home/coder/vllm/venv/lib/python3.10/site-packages/torch/cuda/__init__.py:1067: UserWarning: Synchronization debug mode is a prototype feature and does not yet detect all synchronizing operations (Triggered internally at /pytorch/torch/csrc/cuda/Module.cpp:915.)
torch._C._cuda_set_sync_debug_mode(debug_mode)
/home/coder/vllm/vllm/model_executor/layers/utils.py:52: UserWarning: called a synchronizing CUDA operation (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:152.)
logits[logits > 0] /= torch.where(prompt_mask | output_mask,
/home/coder/vllm/vllm/model_executor/layers/utils.py:54: UserWarning: called a synchronizing CUDA operation (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:152.)
logits[logits <= 0] *= torch.where(prompt_mask | output_mask,
/home/coder/vllm/vllm/v1/worker/gpu_model_runner.py:1153: UserWarning: called a synchronizing CUDA operation (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:152.)
valid_sampled_token_ids = sampled_token_ids.tolist()
This seems to be a known issue and was encountered here: pytorch/pytorch#12461:
"nonzero that is called in this conversion has a legitimate synchronization - it is necessary to pass the information from the device about how many non-zero elements were found in the boolean index tensor, as this information would be later required on the cpu, to resize the index tensor, and to configure launch parameters/kernel arguments for subsequent kernels. I'm not sure this sync can be avoided, because if mask comes as a result of an operation on the GPU, CPU has no way of getting the number of nonzeros in the mask, which is objectively needed."
By refactoring the code to avoid the indexing, we can remove the sync and allow much more of the sampling-phase CPU work to overlap with the forward pass on the GPU, providing an 8% speedup to decoding for smaller models. (A minimal sketch of this kind of refactor appears at the end of this record.)
Before:
============ Serving Benchmark Result ============
Successful requests: 100
Benchmark duration (s): 103.22
Total input tokens: 100000
Total generated tokens: 10000
Request throughput (req/s): 0.97
Output token throughput (tok/s): 96.88
Total Token throughput (tok/s): 1065.73
---------------Time to First Token----------------
Mean TTFT (ms): 37.21
Median TTFT (ms): 32.09
P99 TTFT (ms): 71.54
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 6.74
Median TPOT (ms): 6.67
P99 TPOT (ms): 7.20
---------------Inter-token Latency----------------
Mean ITL (ms): 6.74
Median ITL (ms): 6.69
P99 ITL (ms): 7.93
================================================== After: ============ Serving Benchmark Result ============
Successful requests: 100
Benchmark duration (s): 103.17
Total input tokens: 100000
Total generated tokens: 10000
Request throughput (req/s): 0.97
Output token throughput (tok/s): 96.93
Total Token throughput (tok/s): 1066.19
---------------Time to First Token----------------
Mean TTFT (ms): 35.62
Median TTFT (ms): 30.71
P99 TTFT (ms): 60.89
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 6.18
Median TPOT (ms): 6.11
P99 TPOT (ms): 6.50
---------------Inter-token Latency----------------
Mean ITL (ms): 6.18
Median ITL (ms): 6.12
P99 ITL (ms): 7.43
================================================== Benchmark: VLLM_FLASH_ATTN_VERSION=3 VLLM_USE_V1=1 vllm serve Qwen/Qwen2.5-1.5B-Instruct --enable-prefix-caching --dtype float16 --disable-log-requests -O3
vllm bench serve \
--model Qwen/Qwen2.5-1.5B-Instruct \
--request-rate 1 \
--num-prompts 100 \
--random-input-len 1000 \
--random-output-len 100 \
--tokenizer Qwen/Qwen2.5-1.5B-Instruct \
--ignore-eos Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 3 njhill, houseroad, and WoosukKwon reacted with thumbs up emoji 👀 1 mgoin reacted with eyes emoji All reactions 👍 3 reactions 👀 1 reaction Chanh Nguyen added 2 commits April 10, 2025 21:17 Fix penalties function causing CUDA sync … a319ec0 Signed-off-by: Chanh Nguyen <[email protected]> Fix penalties function causing CUDA sync … cab436d Signed-off-by: Chanh Nguyen <[email protected]> Copy link github-actions bot commented Apr 10, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . chanh marked this pull request as ready for review April 11, 2025 04:12 Merge branch 'main' into cnguyen/penalties dff03c5 chanh changed the title Speed up decode by remove synchronizing operation in sampler [Core] Speed up decode by remove synchronizing operation in sampler Apr 18, 2025 WoosukKwon self-assigned this Apr 21, 2025 WoosukKwon approved these changes Apr 21, 2025 View reviewed changes Copy link Collaborator WoosukKwon left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment @chanh Sorry for the late review. This is really great! Nice optimization! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions WoosukKwon added
the ready ONLY add when PR is ready to merge/full CI is needed label Apr 21, 2025 WoosukKwon enabled auto-merge (squash) April 21, 2025 16:28 Hide details View details WoosukKwon merged commit 299ebb6 into vllm-project : main Apr 21, 2025 61 checks passed Uh oh! There was an error while loading. Please reload this page . frieda-huang pushed a commit
to frieda-huang/vllm
that referenced
this pull request Apr 23, 2025 [Core] Speed up decode by remove synchronizing operation in sampler ( v… … cdcb192 …llm-project#16436 )
Signed-off-by: Chanh Nguyen <[email protected]>
Co-authored-by: Chanh Nguyen <[email protected]>
Signed-off-by: Frieda (Jingying) Huang <[email protected]> jikunshang pushed a commit
to jikunshang/vllm
that referenced
this pull request Apr 29, 2025 [Core] Speed up decode by remove synchronizing operation in sampler ( v… … 69e7495 …llm-project#16436 )
Signed-off-by: Chanh Nguyen <[email protected]>
Co-authored-by: Chanh Nguyen <[email protected]> lk-chen pushed a commit
to lk-chen/vllm
that referenced
this pull request Apr 29, 2025 [Core] Speed up decode by remove synchronizing operation in sampler ( v… … 603d269 …llm-project#16436 )
Signed-off-by: Chanh Nguyen <[email protected]>
Co-authored-by: Chanh Nguyen <[email protected]> adobrzyn pushed a commit
to HabanaAI/vllm-fork
that referenced
this pull request Apr 30, 2025 [Core] Speed up decode by remove synchronizing operation in sampler ( v… … 56cdbf0 …llm-project#16436 )
Signed-off-by: Chanh Nguyen <[email protected]>
Co-authored-by: Chanh Nguyen <[email protected]>
Signed-off-by: Agata Dobrzyniewicz <[email protected]> RichardoMrMu pushed a commit
to RichardoMrMu/vllm
that referenced
this pull request May 12, 2025 [Core] Speed up decode by remove synchronizing operation in sampler ( v… … 375f86a …llm-project#16436 )
Signed-off-by: Chanh Nguyen <[email protected]>
Co-authored-by: Chanh Nguyen <[email protected]>
Signed-off-by: Mu Huai <[email protected]> ckhordiasma mentioned this pull request May 14, 2025 nm vllm ent 0.8.5 sync red-hat-data-services/vllm#139 Merged minpeter pushed a commit
to minpeter/vllm
that referenced
this pull request Jun 24, 2025 [Core] Speed up decode by remove synchronizing operation in sampler ( v… … cd510d8 …llm-project#16436 )
Signed-off-by: Chanh Nguyen <[email protected]>
Co-authored-by: Chanh Nguyen <[email protected]>
Signed-off-by: minpeter <[email protected]> Sign up for free to join this conversation on GitHub .
Already have an account? Sign in to comment
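For illustration, a minimal sketch of the sync-free idea described above. This is not the exact code from this PR, and the function name and tensor shapes are assumptions. The point is that branch-free torch.where avoids the boolean-mask indexing that forces a nonzero() device-to-host sync.

import torch

def apply_repetition_penalty_sketch(logits: torch.Tensor,
                                    prompt_mask: torch.Tensor,
                                    output_mask: torch.Tensor,
                                    repetition_penalties: torch.Tensor) -> torch.Tensor:
    # logits: [num_seqs, vocab_size] float; masks: [num_seqs, vocab_size] bool,
    # true where the token id occurred in the prompt / output so far;
    # repetition_penalties: [num_seqs] float.
    penalties = torch.where(prompt_mask | output_mask,
                            repetition_penalties.unsqueeze(1),
                            torch.ones_like(logits))
    # Divide positive logits by the penalty and multiply non-positive ones,
    # preserving the original semantics without logits[logits > 0] indexing,
    # so no nonzero()-driven device-to-host sync is needed.
    return torch.where(logits > 0, logits / penalties, logits * penalties)

Because every operation here is elementwise, the CPU can enqueue the kernels and move on without waiting for the GPU.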
|
2025-09-07 17:51:17
|
3092375e274e9e003961e600e10a6192d33ceaa0
|
https://github.com/vllm-project/vllm/pull/16432
| false | true | false | true |
PERF: throughput, throughput, throughput | TEST: test, test, test
|
Copy link Contributor p88h commented Apr 10, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . FIX #16185 ( link existing issues this PR will resolve ) This is a rebase of #16279 which had too entangled commits. Implements additional handling of MultimodalKwargs on top of #13790 Further improves memory usage on top of improvements in #16273 by another 50% Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 2 ywang96 and DarkLight1337 reacted with thumbs up emoji All reactions 👍 2 reactions p88h requested review from WoosukKwon , robertgshaw2-redhat , njhill , ywang96 , comaniac and alexm-redhat as code owners April 10, 2025 21:02 Copy link github-actions bot commented Apr 10, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the v1 label Apr 10, 2025 p88h force-pushed the serialize-multimodal-kwargs branch
from 3268c77 to 43d87ec Compare April 10, 2025 21:15 p88h mentioned this pull request Apr 10, 2025 [V1][Performance] Implement custom serializaton for MultiModalKwargs #16279 Closed p88h force-pushed the serialize-multimodal-kwargs branch
from 43d87ec to f4832a7 Compare April 10, 2025 21:41 Copy link Member ywang96 commented Apr 10, 2025 @p88h This is amazing! Have you tried running some benchmarks to see the throughput performance impact of this PR? All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor Author p88h commented Apr 10, 2025 @ywang96 I've added a benchmark table to the linked bug #16185 My benchmark focused on memory performance rather than throughput, and only used a single model. It should not really change throughput that much other than in cases that do run into memory issues, though. I'll try running some throughput checks tomorrow All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . njhill reviewed Apr 10, 2025 View reviewed changes Copy link Member njhill left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Thanks @p88h ! I think this looks good. The main thing I think is to add custom serialization for the field . And we'll probably want to add a few more comments since it's tightly coupled with the custom tensor encoding format. Also, I haven't looked closely at the entire flow, but in the case of MMKs created from items, it might make sense to defer the population of their data (via the "reduce" operations). Since that will be repeated in the receiving process and causes extra cpu and mem overhead since tensors may get stacked etc. It would be nice if there was a way for this to happen lazily but I guess that depends on how the data is later accessed. cc @ywang96 @DarkLight1337 Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/v1/serial_utils.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/v1/serial_utils.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/v1/serial_utils.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/v1/serial_utils.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/v1/serial_utils.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/v1/serial_utils.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/v1/serial_utils.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . tests/v1/test_serial_utils.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . Copy link Member njhill commented Apr 11, 2025 Also, I haven't looked closely at the entire flow, but in the case of MMKs created from items, it might make sense to defer the population of their data (via the "reduce" operations). Since that will be repeated in the receiving process and causes extra cpu and mem overhead since tensors may get stacked etc. It would be nice if there was a way for this to happen lazily but I guess that depends on how the data is later accessed. FYI I've opened another PR to help with this: #16440 . 
It should in theory help all of the cases, not just the multi-proc case. It would still be additionally beneficial to postpone doing this reduce operation until after being transferred to the engine, though.
DarkLight1337 reviewed Apr 11, 2025 View reviewed changes tests/v1/test_serial_utils.py
xtknight mentioned this pull request Apr 11, 2025 [Performance]: MultiModalKwargs serialization has significant impact on E2E latency (w/ proof-of-concept patch) #16461 Closed
Copy link Contributor Author p88h commented Apr 11, 2025
I have some experimental data with this PR in place. Interestingly, it performs much better with zero-copy disabled. In this new benchmark, I am feeding gradually increasing document sets to the engine. It turns out custom serialization helps less than expected - I think previously it was augmented by the cache, but now all files are unique, so results are a bit different. The 'mix' performance case measures running all prompts together (15 total, with 128 images total) after they have been initially processed one-by-one, so it's expected that it performs much better / cached.
config / benchmark case | 4 images | 8 images | 16 images | 32 images | t.max | t.mix
------------------------------+----------+----------+-----------+-----------+-------+-------
baseline (zero-copy disabled) | 3.55 GB | 5.11 GB | 9.96 GB | 22.54 GB | 90.4s | 44.1s
baseline (zero-copy enabled) | 3.50 GB | 5.01 GB | 9.87 GB | 22.56 GB | 75.3s | 39.4s
#16432 (zero-copy enabled) | 3.40 GB | 4.75 GB | 8.53 GB | 22.02 GB | 13.8s | 36.1s
#16432 (zero-copy disabled) | 3.28 GB | 3.95 GB | 4.76 GB | 5.85 GB | 14.4s | 36.3s All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . p88h force-pushed the serialize-multimodal-kwargs branch
from d56435a to 408f36b Compare April 11, 2025 12:03 mergify bot added documentation Improvements or additions to documentation ci/build tpu Related to Google TPUs labels Apr 11, 2025 p88h and others added 4 commits April 11, 2025 14:04 Implement efficient serialization of MultiModalKwargs … 7b6b7ba In addition to serializing base Tensors, this now allows to pass
Tensors embedded in MultiModalKwargs correctly.
Handles both V0 and V1 style args.
Improves memory usage with large multimodal payloads by a further
50% (but still not on par with single-threaded behavior).
Signed-off-by: Staszek Pasko <[email protected]> Apply suggestions from code review … 4bdd16e Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Signed-off-by: Staszek Pasko <[email protected]> Additional fixes after code review … e5931af Signed-off-by: Staszek Pasko <[email protected]> Fix some broken bits & reformat … 6641584 Signed-off-by: Staszek Pasko <[email protected]> p88h force-pushed the serialize-multimodal-kwargs branch
from 408f36b to 6641584 Compare April 11, 2025 12:05 mergify bot removed
the tpu Related to Google TPUs label Apr 11, 2025 Add custom support for MultiModalFieldConfig, less pickle … a94df99 Signed-off-by: Staszek Pasko <[email protected]> mergify bot added
the multi-modality Related to multi-modality (#4194) label Apr 11, 2025 45 hidden items Load more… p88h added 2 commits April 16, 2025 07:33 Merge branch 'vllm-project:main' into serialize-multimodal-kwargs d7cb694 style … 7511262 Signed-off-by: Staszek Pasko <[email protected]> p88h requested a review
from njhill April 16, 2025 09:39 Merge branch 'vllm-project:main' into serialize-multimodal-kwargs 97188e6 njhill reviewed Apr 16, 2025 View reviewed changes vllm/v1/serial_utils.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . remove unnecessary comment … 48ab2d9 Signed-off-by: Staszek Pasko <[email protected]> p88h requested a review
from njhill April 16, 2025 15:00 njhill approved these changes Apr 16, 2025 View reviewed changes Copy link Member njhill left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Thanks for the great work @p88h ! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 🎉 1 p88h reacted with hooray emoji All reactions 🎉 1 reaction njhill added ready ONLY add when PR is ready to merge/full CI is needed performance Performance-related issues labels Apr 16, 2025 p88h force-pushed the serialize-multimodal-kwargs branch
from 1f2779a to 48ab2d9 Compare April 16, 2025 19:35 Merge branch 'vllm-project:main' into serialize-multimodal-kwargs a60333e Copy link Member njhill commented Apr 16, 2025 Looks like a CI test is failing - but unfortunately the root cause is obscured (the OOM failure of the subsequent test is a result of improper cleanup after the original failure). This should hopefully be addressed by #11737 . In the meantime I can try running this test locally. p.s. there's no need to keep rebasing on latest main, this just causes all the tests to start over. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Accommodate floats in NestedTensors … 281f0f1 Signed-off-by: Nick Hill <[email protected]> Copy link Member njhill commented Apr 16, 2025 It turns out it was because sometimes MMKwargs can contain non-tensor data (specifically "second_per_grid_ts": [1.0] in this case). So I pushed an update to allow floats and ints too. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Hide details View details njhill merged commit 3092375 into vllm-project : main Apr 17, 2025 42 checks passed Uh oh! There was an error while loading. Please reload this page . Copy link Contributor Author p88h commented Apr 17, 2025 Thank you ! I was about to go back to debugging this morning ;) 👍 1 njhill reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . lionelvillard pushed a commit
to lionelvillard/vllm
that referenced
this pull request Apr 17, 2025 [V1][Performance] Implement custom serializaton for MultiModalKwargs … … c2df8d3 …[Rebased] ( vllm-project#16432 )
Signed-off-by: Staszek Pasko <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Nick Hill <[email protected]> DarkLight1337 mentioned this pull request Apr 17, 2025 [Bug]: Unable to deploy Qwen2.5-VL-3B-Instruct after updating vLLM to latest version #16791 Closed 1 task p88h mentioned this pull request Apr 17, 2025 [Bug]: Mistral 3.1 Small Image inference is broken on 0.8.4 #16675 Closed 1 task njhill mentioned this pull request Apr 18, 2025 [BugFix] Support bf16 in zero-copy tensor serialization #16860 Closed p88h deleted the serialize-multimodal-kwargs branch April 18, 2025 20:22 yangw-dev pushed a commit
to yangw-dev/vllm
that referenced
this pull request Apr 21, 2025 [V1][Performance] Implement custom serializaton for MultiModalKwargs … … 2f35558 …[Rebased] ( vllm-project#16432 )
Signed-off-by: Staszek Pasko <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Signed-off-by: Yang Wang <[email protected]> DarkLight1337 mentioned this pull request Apr 28, 2025 [Feature]: Performance issue, when using Qwen2.5-VL-32B-Instruct model for multi graph inference #17297 Closed 1 task jikunshang pushed a commit
to jikunshang/vllm
that referenced
this pull request Apr 29, 2025 [V1][Performance] Implement custom serializaton for MultiModalKwargs … … 6fcc767 …[Rebased] ( vllm-project#16432 )
Signed-off-by: Staszek Pasko <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Nick Hill <[email protected]> lk-chen pushed a commit
to lk-chen/vllm
that referenced
this pull request Apr 29, 2025 [V1][Performance] Implement custom serializaton for MultiModalKwargs … … 365538f …[Rebased] ( vllm-project#16432 )
Signed-off-by: Staszek Pasko <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Nick Hill <[email protected]> adobrzyn pushed a commit
to HabanaAI/vllm-fork
that referenced
this pull request Apr 30, 2025 [V1][Performance] Implement custom serializaton for MultiModalKwargs … … 0c1294a …[Rebased] ( vllm-project#16432 )
Signed-off-by: Staszek Pasko <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Signed-off-by: Agata Dobrzyniewicz <[email protected]> RichardoMrMu pushed a commit
to RichardoMrMu/vllm
that referenced
this pull request May 12, 2025 [V1][Performance] Implement custom serializaton for MultiModalKwargs … … f09c519 …[Rebased] ( vllm-project#16432 )
Signed-off-by: Staszek Pasko <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Signed-off-by: Mu Huai <[email protected]> ckhordiasma mentioned this pull request May 14, 2025 nm vllm ent 0.8.5 sync red-hat-data-services/vllm#139 Merged
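For readers skimming this record: the core idea of the custom MultiModalKwargs serialization above is to ship each tensor as small dtype/shape metadata plus one contiguous byte buffer, so the buffer can be passed along (optionally zero-copy) instead of being pickled. Below is a minimal, generic sketch of that split; the function names are hypothetical and this is not vLLM's actual serial_utils code.

import numpy as np
import torch

def encode_tensor(t: torch.Tensor):
    # Metadata stays as small Python objects; the data becomes one contiguous
    # bytes-like buffer that a transport layer could send without pickling.
    a = t.detach().cpu().contiguous().numpy()
    return {"dtype": str(a.dtype), "shape": tuple(a.shape)}, a.tobytes()

def decode_tensor(meta, buf) -> torch.Tensor:
    a = np.frombuffer(buf, dtype=np.dtype(meta["dtype"])).reshape(meta["shape"])
    return torch.from_numpy(a.copy())  # copy so the result owns its memory

# Round-trip one tensor the way a multimodal kwarg entry might be transferred.
meta, buf = encode_tensor(torch.randn(2, 3))
restored = decode_tensor(meta, buf)
assert restored.shape == (2, 3)

Note that a NumPy-based round trip like this cannot express bfloat16, which is presumably why bf16 needed the separate handling tracked in #16860 above.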
|
2025-09-07 17:51:22
|
93e5f3c5fb4a4bbd49610efb96aad30df95fca66
|
https://github.com/vllm-project/vllm/pull/16484
| false | true | false | true |
PERF: improvement, improvement, improvement | TEST: test, test, CI
|
Copy link Contributor SnowCharmQ commented Apr 11, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . This PR enhances the performance of the method _prepare_inputs in gpu_model_runner.py by replacing the original Python loop implementation with map and numpy array operations. On my clusters, it can achieve nearly a twofold performance improvement. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Optimize prepare inputs for GPU model runner 7018c25 SnowCharmQ requested review from WoosukKwon , robertgshaw2-redhat , njhill , ywang96 , comaniac and alexm-redhat as code owners April 11, 2025 13:00 Copy link github-actions bot commented Apr 11, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the v1 label Apr 11, 2025 Format code … d150bd3 Signed-off-by: snowcharm <[email protected]> njhill reviewed Apr 11, 2025 View reviewed changes Copy link Member njhill left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Thanks @SnowCharmQ , this is great! On my clusters, it can achieve nearly a twofold performance improvement. Presumably you're referring to the improvement of this loop, not end-to-end? :) Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/v1/worker/gpu_model_runner.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . Copy link Contributor Author SnowCharmQ commented Apr 11, 2025 Thanks @SnowCharmQ , this is great! On my clusters, it can achieve nearly a twofold performance improvement. Presumably you're referring to the improvement of this loop, not end-to-end? :) Hi @njhill , the improvement refers to the loop exactly. Sorry for the confusion :) 👍 1 njhill reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Improve readability … ef6fdea Co-authored-by: Nick Hill <[email protected]> njhill added ready ONLY add when PR is ready to merge/full CI is needed performance Performance-related issues labels Apr 11, 2025 njhill approved these changes Apr 11, 2025 View reviewed changes Copy link Contributor Author SnowCharmQ commented Apr 12, 2025 Hi @njhill , I noticed an issue with the CI check. Do you have any idea what might be going wrong and how it can be resolved? All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Hide details View details DarkLight1337 merged commit 93e5f3c into vllm-project : main Apr 12, 2025 56 of 57 checks passed Uh oh! There was an error while loading. Please reload this page . Copy link Member DarkLight1337 commented Apr 12, 2025 I retried the test and it passes now 👍 1 njhill reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . SnowCharmQ deleted the perf-runner branch April 19, 2025 08:44 yangw-dev pushed a commit
to yangw-dev/vllm
that referenced
this pull request Apr 21, 2025 [Perf] Optimize Preparing Inputs for GPU Model Runner ( vllm-project#1… … 17c1504 …6484 )
Signed-off-by: snowcharm <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Signed-off-by: Yang Wang <[email protected]> jikunshang pushed a commit
to jikunshang/vllm
that referenced
this pull request Apr 29, 2025 [Perf] Optimize Preparing Inputs for GPU Model Runner ( vllm-project#1… … 7b6eb48 …6484 )
Signed-off-by: snowcharm <[email protected]>
Co-authored-by: Nick Hill <[email protected]> lk-chen pushed a commit
to lk-chen/vllm
that referenced
this pull request Apr 29, 2025 [Perf] Optimize Preparing Inputs for GPU Model Runner ( vllm-project#1… … 3e46b61 …6484 )
Signed-off-by: snowcharm <[email protected]>
Co-authored-by: Nick Hill <[email protected]> RichardoMrMu pushed a commit
to RichardoMrMu/vllm
that referenced
this pull request May 12, 2025 [Perf] Optimize Preparing Inputs for GPU Model Runner ( vllm-project#1… … 5e88ae2 …6484 )
Signed-off-by: snowcharm <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Signed-off-by: Mu Huai <[email protected]>
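To make the optimization in this record concrete, here is a standalone sketch of the kind of rewrite it describes: a per-request Python loop that builds index and position arrays is replaced with vectorized NumPy calls. Variable and function names are made up for illustration; this is not the actual _prepare_inputs code.

import numpy as np

def loop_version(num_scheduled_tokens):
    req_indices, positions = [], []
    for req_id, n in enumerate(num_scheduled_tokens):
        req_indices.extend([req_id] * n)   # which request each token belongs to
        positions.extend(range(n))         # token position within its request
    return np.array(req_indices), np.array(positions)

def vectorized_version(num_scheduled_tokens):
    counts = np.asarray(num_scheduled_tokens)
    req_indices = np.repeat(np.arange(len(counts)), counts)
    starts = np.cumsum(counts) - counts    # start offset of each request
    positions = np.arange(counts.sum()) - np.repeat(starts, counts)
    return req_indices, positions

tokens_per_request = [3, 1, 4]
for a, b in zip(loop_version(tokens_per_request), vectorized_version(tokens_per_request)):
    assert np.array_equal(a, b)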
|
2025-09-07 17:51:25
|
bd6028d6b0bbc0c569ece0535067081c5e8bdc14
|
https://github.com/vllm-project/vllm/pull/16512
| false | true | false | true |
PERF: latency, latency, latency | TEST: test, CI, CI
|
Copy link Member mgoin commented Apr 11, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . Clear speedup for latency case, adapted from sgl-project/sglang@ 86a876d (thank you!) Llama Scout FP8 on 2xH100, input/output=1000/1000 batch_size=1 # benchmark
python benchmarks/benchmark_latency.py --model RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic --max-model-len 8000 --tensor-parallel-size 2 --input-len 1000 --output-len 1000 --batch-size 1 --num-iters-warmup 5 --num-iters 5
# torch.topk
Avg latency: 12.93838309822604 seconds
10% percentile latency: 12.891319572227076 seconds
25% percentile latency: 12.904249292099848 seconds
50% percentile latency: 12.921604027971625 seconds
75% percentile latency: 12.932637538062409 seconds
90% percentile latency: 13.00348993963562 seconds
99% percentile latency: 13.046001380579547 seconds
# fast_topk
Avg latency: 12.725665437569841 seconds
10% percentile latency: 12.664348530210555 seconds
25% percentile latency: 12.665923552820459 seconds
50% percentile latency: 12.72062187595293 seconds
75% percentile latency: 12.734881401993334 seconds
90% percentile latency: 12.800113665964455 seconds
99% percentile latency: 12.839253024347126 seconds Llama Scout FP8 on 2xH100, input/output=1000/1000 batch_size=32 # benchmark
python benchmarks/benchmark_latency.py --model RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic --max-model-len 8000 --tensor-parallel-size 2 --input-len 1000 --output-len 1000 --batch-size 32 --num-iters-warmup 3 --num-iters 3
# torch.topk
Avg latency: 23.997261434715863 seconds
10% percentile latency: 23.722837531426922 seconds
25% percentile latency: 23.844304106081836 seconds
50% percentile latency: 24.04674839717336 seconds
75% percentile latency: 24.174962244578637 seconds
90% percentile latency: 24.251890553021802 seconds
99% percentile latency: 24.298047538087705 seconds
# fast_topk
Avg latency: 23.815591983729973 seconds
10% percentile latency: 23.6753818389494 seconds
25% percentile latency: 23.733925551641732 seconds
50% percentile latency: 23.831498406128958 seconds
75% percentile latency: 23.905211627017707 seconds
90% percentile latency: 23.949439559550957 seconds
99% percentile latency: 23.975976319070906 seconds Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 🎉 1 yeqcharlotte reacted with hooray emoji All reactions 🎉 1 reaction Optimized topk for topk=1 (Llama-4) … a22a82d Signed-off-by: mgoin <[email protected]> Copy link github-actions bot commented Apr 11, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . njhill approved these changes Apr 11, 2025 View reviewed changes Copy link Member njhill left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Wow, nice! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Member njhill commented Apr 11, 2025 @mgoin could we use this for other moes too? e.g. in https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/layers/fused_moe/fused_moe.py#L886 ? All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Member Author mgoin commented Apr 11, 2025 @njhill unfortunately most other moes do not use a topk=1 AFAIK, but maybe the overhead is minimal enough to use just in case 👍 1 njhill reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mgoin added performance Performance-related issues ready ONLY add when PR is ready to merge/full CI is needed labels Apr 12, 2025 houseroad approved these changes Apr 12, 2025 View reviewed changes Copy link Collaborator houseroad left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Oh, nice trick. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Hide details View details DarkLight1337 merged commit bd6028d into vllm-project : main Apr 12, 2025 64 checks passed Uh oh! There was an error while loading. Please reload this page . yangw-dev pushed a commit
to yangw-dev/vllm
that referenced
this pull request Apr 21, 2025 Optimized topk for topk=1 (Llama-4) ( vllm-project#16512 ) … 751844d Signed-off-by: mgoin <[email protected]>
Signed-off-by: Yang Wang <[email protected]> jikunshang pushed a commit
to jikunshang/vllm
that referenced
this pull request Apr 29, 2025 Optimized topk for topk=1 (Llama-4) ( vllm-project#16512 ) … e6bca68 Signed-off-by: mgoin <[email protected]> lk-chen pushed a commit
to lk-chen/vllm
that referenced
this pull request Apr 29, 2025 Optimized topk for topk=1 (Llama-4) ( vllm-project#16512 ) … ef7a8ef Signed-off-by: mgoin <[email protected]> RichardoMrMu pushed a commit
to RichardoMrMu/vllm
that referenced
this pull request May 12, 2025 Optimized topk for topk=1 (Llama-4) ( vllm-project#16512 ) … 7987452 Signed-off-by: mgoin <[email protected]>
Signed-off-by: Mu Huai <[email protected]>
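For reference, a minimal sketch of the k=1 fast path benchmarked above: when only the single best expert is needed, torch.max returns the same (values, indices) pair as torch.topk while skipping the general top-k kernel. The helper below is illustrative and may differ from the exact code adapted from SGLang.

import torch

def fast_topk(values: torch.Tensor, topk: int, dim: int):
    if topk == 1:
        # torch.max with keepdim matches torch.topk's output shapes for k == 1.
        return torch.max(values, dim=dim, keepdim=True)
    return torch.topk(values, topk, dim=dim)

router_logits = torch.randn(4, 16)              # [num_tokens, num_experts]
vals, expert_ids = fast_topk(router_logits, topk=1, dim=-1)
assert vals.shape == (4, 1) and expert_ids.shape == (4, 1)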
|
2025-09-07 17:51:28
|
b10e51989551cd80dd74079429ccf91f0807bd92
|
https://github.com/vllm-project/vllm/pull/16135
| false | false | false | true |
TEST: test, CI, CI
|
Copy link Collaborator WoosukKwon commented Apr 6, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . Minor optimizations Avoid redundant dictionary lookups cached_block_hash_to_block[block_hash] Avoid creating a list by using next Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions WoosukKwon added 2 commits April 6, 2025 11:11 [V1][Minor] Optimize get_cached_block … 94d9874 Signed-off-by: Woosuk Kwon <[email protected]> Avoid creating list … 05a922a Signed-off-by: Woosuk Kwon <[email protected]> WoosukKwon requested review from robertgshaw2-redhat , njhill , ywang96 , comaniac and alexm-redhat as code owners April 6, 2025 18:19 Copy link github-actions bot commented Apr 6, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the v1 label Apr 6, 2025 WoosukKwon added
the ready ONLY add when PR is ready to merge/full CI is needed label Apr 6, 2025 njhill approved these changes Apr 6, 2025 View reviewed changes comaniac approved these changes Apr 6, 2025 View reviewed changes comaniac enabled auto-merge (squash) April 6, 2025 19:06 Hide details View details comaniac merged commit b10e519 into main Apr 6, 2025 61 checks passed Uh oh! There was an error while loading. Please reload this page . comaniac deleted the minor-cache-opt branch April 6, 2025 20:48 lengrongfu pushed a commit
to lengrongfu/vllm
that referenced
this pull request Apr 7, 2025 [V1][Minor] Optimize get_cached_block ( vllm-project#16135 ) 5aaddbc yangw-dev pushed a commit
to yangw-dev/vllm
that referenced
this pull request Apr 21, 2025 [V1][Minor] Optimize get_cached_block ( vllm-project#16135 ) … eeeccf2 Signed-off-by: Yang Wang <[email protected]> lk-chen pushed a commit
to lk-chen/vllm
that referenced
this pull request Apr 29, 2025 [V1][Minor] Optimize get_cached_block ( vllm-project#16135 ) ff21ef5 RichardoMrMu pushed a commit
to RichardoMrMu/vllm
that referenced
this pull request May 12, 2025 [V1][Minor] Optimize get_cached_block ( vllm-project#16135 ) … 3d2f574 Signed-off-by: Mu Huai <[email protected]>
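The two micro-optimizations named in this record can be shown with a toy version of the lookup; the cache layout below (hash mapped to a dict of cached blocks) is a simplification, not the actual prefix-cache data structure.

from typing import Optional

def get_cached_block(block_hash, cached_block_hash_to_block) -> Optional[str]:
    # One .get() instead of an "in" check followed by a second indexing lookup.
    cached_blocks = cached_block_hash_to_block.get(block_hash)
    if not cached_blocks:
        return None
    # next(iter(...)) takes the first entry without building list(...)[0].
    return next(iter(cached_blocks.values()))

cache = {"hash-a": {0: "block-0", 7: "block-7"}}
assert get_cached_block("hash-a", cache) == "block-0"
assert get_cached_block("hash-b", cache) is None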
|
2025-09-07 17:51:31
|
35fad35a485eac9195c510731ba4a9d297dfd963
|
https://github.com/vllm-project/vllm/pull/15478
| false | true | false | true |
PERF: Faster, Faster, Faster | TEST: test, test, test
|
Copy link Member njhill commented Mar 25, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . When there's top-k in the batch but no top-p. For 128k vocab, 1024 batch size, 500 ops on A100, where max top k is 10: Before: 11.571 sec After: 2.136 sec Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions [V1][Sampler] Faster top-k only implementation … bcee0c4 Signed-off-by: Nick Hill <[email protected]> njhill requested review from WoosukKwon , robertgshaw2-redhat , ywang96 , comaniac and alexm-redhat as code owners March 25, 2025 15:43 Copy link github-actions bot commented Mar 25, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the v1 label Mar 25, 2025 njhill mentioned this pull request Mar 25, 2025 [V1][TPU] Speed up top-k on TPU by using torch.topk #15242 Merged njhill commented Mar 25, 2025 View reviewed changes vllm/v1/sample/ops/topk_topp_sampler.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . NickLucche approved these changes Mar 25, 2025 View reviewed changes Copy link Contributor NickLucche left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Tested on TPU this won't work out of the box due to some broadcasting issue. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Also in-place cumsum for top-p … 7156150 Signed-off-by: Nick Hill <[email protected]> Copy link Member Author njhill commented Mar 25, 2025 @NickLucche that's strange. Which op has that issue? All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor NickLucche commented Mar 25, 2025 Not too surprising, torch xla has more constraining rules on broadcasting. This is the first error I have encountered F0325 16:28:32.957930 1304047 debug_macros.h:21] Non-OK-status: status.status()
Status: INVALID_ARGUMENT: Input dimension should be either 1 or equal to the output dimension it is broadcasting into; the 0th operand dimension is 4, the 0th output dimension is 1.
*** Begin stack trace ***
tsl::CurrentStackTrace[abi:cxx11]()
xla::Shape const* ConsumeValue<xla::Shape const*>(absl::lts_20230802::StatusOr<xla::Shape const*>&&)
torch_xla::ShapeHelper::ShapeOfXlaOp(xla::XlaOp)
torch_xla::InferOutputShape(absl::lts_20230802::Span<xla::Shape const>, std::function<xla::XlaOp (absl::lts_20230802::Span<xla::XlaOp const>)> const&)
torch_xla::XlaNode::GetOpShape(std::function<xla::Shape ()> const&) const
torch_xla::XlaNode::XlaNode(torch::lazy::OpKind, c10::ArrayRef<torch::lazy::Value>, std::function<xla::Shape ()> const&, unsigned long, torch::lazy::hash_t)
torch_xla::Gather::Gather(torch::lazy::Value const&, long, torch::lazy::Value const&)
std::shared_ptr<torch::lazy::Node> torch_xla::MakeNode<torch_xla::Gather, torch::lazy::Value, long&, torch::lazy::Value>(torch::lazy::Value&&, long&, torch::lazy::Value&&)
torch_xla::tensor_methods::gather(c10::intrusive_ptr<torch_xla::XLATensor, c10::detail::intrusive_target_default_null_type<torch_xla::XLATensor> > const&, long, c10::intrusive_ptr<torch_xla::XLATensor, c10::detail::intrusive_target_default_null_type<torch_xla::XLATensor> > const&)
torch_xla::XLANativeFunctions::gather(at::Tensor const&, long, at::Tensor const&, bool)
at::_ops::gather::redispatch(c10::DispatchKeySet, at::Tensor const&, long, at::Tensor const&, bool)
at::_ops::gather::call(at::Tensor const&, long, at::Tensor const&, bool) on the .gather op. I expanded k but then ran into another issue. 👍 1 njhill reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . WoosukKwon reviewed Mar 25, 2025 View reviewed changes vllm/v1/sample/ops/topk_topp_sampler.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . Add comments … 1feffb0 Signed-off-by: Nick Hill <[email protected]> WoosukKwon reviewed Mar 25, 2025 View reviewed changes vllm/v1/sample/ops/topk_topp_sampler.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/v1/sample/ops/topk_topp_sampler.py @@ -138,8 +138,25 @@ def apply_top_k_top_p( This function sorts the logits tensor, which can be slow for large batches. """ if k is None and p is None: if p is None: if k is None: Copy link Collaborator WoosukKwon Mar 25, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Do we have a unit test checking the correctness of this? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Member Author njhill Mar 26, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment We should really have blanket coverage for this kind of thing, including different combinations of parameters (i.e. top-k with/without top-p etc.). I'm not sure whether we do though. I will check and add a unit test to compare the two impls. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 WoosukKwon reacted with thumbs up emoji All reactions 👍 1 reaction Add comments about in-place logits updates. … be9e5d7 Signed-off-by: Nick Hill <[email protected]> NickLucche suggested changes Mar 26, 2025 View reviewed changes Copy link Contributor NickLucche left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I tested this version again today and it's working on TPU too, nice one @njhill thanks! I was wondering could we still factor-out this topk opt into its own function so I can call it from TPU side? We agreed with @WoosukKwon to try and keep things separated, I'd like to keep forward_tpu around. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Contributor NickLucche commented Mar 26, 2025 Something like a5bf849 #diff-6047245d864bf5fd68b5b947b735beca94723bad40d20bfc0803d9b3eea5c1edR121-R136 . Wdyt? Of course I'd wait for this PR to land and then rebase, I've shamelessly just copy-pasted your code there. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 
NickLucche mentioned this pull request Mar 26, 2025 [V1][TPU] Enable Top K #15489 Merged njhill added 2 commits March 26, 2025 07:17 Add test … c09dd00 Signed-off-by: Nick Hill <[email protected]> Move to separate function per @NickLucche 's request … e47f5b9 Signed-off-by: Nick Hill <[email protected]> Copy link Member Author njhill commented Mar 26, 2025 Thanks @NickLucche , I've split into separate function. And @WoosukKwon I've added a correctness test. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . njhill added
the ready ONLY add when PR is ready to merge/full CI is needed label Mar 26, 2025 WoosukKwon approved these changes Mar 26, 2025 View reviewed changes Copy link Collaborator WoosukKwon left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment LGTM! Thanks for addressing my comments. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Hide details View details njhill merged commit 35fad35 into vllm-project : main Mar 26, 2025 39 checks passed Uh oh! There was an error while loading. Please reload this page . njhill deleted the torch-topk branch March 26, 2025 17:56 hyeygit mentioned this pull request Mar 30, 2025 [V1][TPU] TPU-optimized top-p implementation (avoids scattering). #15736 Merged Copy link Contributor hyeygit commented Mar 30, 2025 • edited Loading Uh oh! There was an error while loading. Please reload this page . @njhill really neat idea to threshold the logits! However I think one corner case where this would break is if there are duplicate elements in the logit that equal the cut off value (i.e. top_k_mask ). For example, given an input of [1, 2, 2, 2, 3] and k=3 , the current apply_top_k_only would return [-inf, 2, 2, 2, 3] while the correct result should be [-inf, -inf, 2, 2, 3] . In #15736 I use a similar thresholding logic for top-p, but introduced a small random perturbation to break the ties. Maybe the same idea can be used here for top-k as well. 👍 1 NickLucche reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . NickLucche mentioned this pull request Apr 1, 2025 [Core] Optimize topp/topk calculation in sampler #12156 Closed Alex4210987 pushed a commit
to LeiWang1999/vllm-bitblas
that referenced
this pull request Apr 5, 2025 [V1][Sampler] Faster top-k only implementation ( vllm-project#15478 ) … 0e57df7 Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: xinyuxiao <[email protected]> lulmer pushed a commit
to lulmer/vllm
that referenced
this pull request Apr 7, 2025 [V1][Sampler] Faster top-k only implementation ( vllm-project#15478 ) … c116565 Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: Louis Ulmer <[email protected]> ckhordiasma mentioned this pull request Apr 17, 2025 [do not merge] pr test for nm changes into 2.20 red-hat-data-services/vllm#107 Closed lk-chen pushed a commit
to lk-chen/vllm
that referenced
this pull request Apr 29, 2025 [V1][Sampler] Faster top-k only implementation ( vllm-project#15478 ) … 2b30424 Signed-off-by: Nick Hill <[email protected]> shreyankg pushed a commit
to shreyankg/vllm
that referenced
this pull request May 3, 2025 [V1][Sampler] Faster top-k only implementation ( vllm-project#15478 ) … eaded4b Signed-off-by: Nick Hill <[email protected]> RichardoMrMu pushed a commit
to RichardoMrMu/vllm
that referenced
this pull request May 12, 2025 [V1][Sampler] Faster top-k only implementation ( vllm-project#15478 ) … c7eb537 Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: Mu Huai <[email protected]>
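Summarizing the technique (and the tie corner case raised above): instead of sorting the whole vocabulary, the top-k-only path finds each row's k-th largest logit and masks everything strictly below it, so ties at the threshold survive. The sketch below follows that description with illustrative names; it is not the exact vLLM implementation.

import torch

def apply_top_k_only_sketch(logits: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
    # logits: [batch, vocab]; k: [batch] per-request top-k values (all >= 1).
    max_k = int(k.max())
    topk_vals = logits.topk(max_k, dim=-1).values         # only need max_k values
    cutoffs = topk_vals.gather(1, (k - 1).unsqueeze(1))    # k-th largest per row
    # Strictly-below-threshold logits are masked; ties at the threshold are kept,
    # which reproduces the [1, 2, 2, 2, 3], k=3 corner case discussed above.
    return logits.masked_fill(logits < cutoffs, float("-inf"))

logits = torch.tensor([[1.0, 2.0, 2.0, 2.0, 3.0]])
print(apply_top_k_only_sketch(logits, torch.tensor([3])))   # [[-inf, 2, 2, 2, 3]]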
|
2025-09-07 17:51:35
|
9d72daf4ced05a5fec1ad8ea2914a39296f402da
|
https://github.com/vllm-project/vllm/pull/15156
| false | true | false | true |
PERF: qps, profiling | TEST: test, test, test
|
Copy link Member njhill commented Mar 19, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . Queue operations showed up when profiling high qps. Since we coalesce RequestOutput objects, we don't need to use an actual queue. This changes to merge the outputs when added rather than when removed. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions [V1][Perf] Simpler request output queues … e852802 Since we coalesce RequestOutput objects we don't need to use an actual queue.
This changes to merge the outputs when added rather than when removed.
Signed-off-by: Nick Hill <[email protected]> njhill requested review from WoosukKwon , robertgshaw2-redhat , ywang96 , comaniac and alexm-redhat as code owners March 19, 2025 19:57 Copy link github-actions bot commented Mar 19, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the v1 label Mar 19, 2025 njhill mentioned this pull request Mar 19, 2025 [BugFix][V1] Fix parallel sampling finishing/aborts #14512 Merged njhill added
the ready ONLY add when PR is ready to merge/full CI is needed label Mar 19, 2025 houseroad reviewed Mar 21, 2025 View reviewed changes vllm/v1/engine/output_processor.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . Copy link mergify bot commented Mar 21, 2025 This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @njhill . https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the needs-rebase label Mar 21, 2025 houseroad reviewed Mar 21, 2025 View reviewed changes Copy link Collaborator houseroad left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Looks good to me. Wondering if we should have some e2e test? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Merge remote-tracking branch 'origin/main' into queueless-output … 8fe1e45 Signed-off-by: Nick Hill <[email protected]>
# Conflicts:
# vllm/v1/engine/async_llm.py
# vllm/v1/engine/llm_engine.py
# vllm/v1/engine/parallel_sampling.py mergify bot removed
the needs-rebase label Mar 21, 2025 comaniac approved these changes Mar 21, 2025 View reviewed changes Copy link Collaborator comaniac left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment LGTM. Only a nit. A unit test is definitely nice to have. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 njhill reacted with thumbs up emoji All reactions 👍 1 reaction vllm/v1/engine/output_processor.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . njhill added
the needs-tests Tests needed for this PR label Mar 24, 2025 robertgshaw2-redhat reviewed Mar 24, 2025 View reviewed changes vllm/v1/engine/output_processor.py else: self.output = output async def get(self) -> RequestOutput: Copy link Collaborator robertgshaw2-redhat Mar 24, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Do you think we should have an invariant that output is not None if self.ready.wait() is true? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Member Author njhill Mar 24, 2025 • edited Loading Uh oh! There was an error while loading. Please reload this page . There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment That is the case but I'm not sure what you're suggesting to add here? self.ready.wait() just waits for the condition to be set, it can only ever return True (not even sure why it returns that rather than None ). And then we immediately check self.output again before continuing. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions robertgshaw2-redhat and others added 4 commits March 24, 2025 17:47 added unit test … 47e611d Signed-off-by: [email protected] <[email protected]> removed stray file … af4e13b Signed-off-by: [email protected] <[email protected]> updated … 7382f62 Signed-off-by: [email protected] <[email protected]> Merge pull request #5 from robertgshaw2-redhat/add-test … 12b2758 added unit test njhill removed
the needs-tests Tests needed for this PR label Mar 24, 2025 njhill added 2 commits March 24, 2025 11:18 Update docstring with more detail … 639386c Signed-off-by: Nick Hill <[email protected]> Merge remote-tracking branch 'refs/remotes/origin/main' into queueles… … 4612dc5 …s-output Copy link Member Author njhill commented Mar 24, 2025 Thanks for adding a test @robertgshaw2-redhat ! This should be good to merge now once the CI finishes. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . robertgshaw2-redhat enabled auto-merge (squash) March 24, 2025 19:28 Copy link Collaborator robertgshaw2-redhat commented Mar 24, 2025 Looks good to me. Wondering if we should have some e2e test? All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . robertgshaw2-redhat closed this Mar 24, 2025 auto-merge was automatically disabled March 24, 2025 19:29 Pull request was closed robertgshaw2-redhat reopened this Mar 24, 2025 robertgshaw2-redhat enabled auto-merge (squash) March 24, 2025 19:30 Hide details View details robertgshaw2-redhat merged commit 9d72daf into vllm-project : main Mar 24, 2025 36 of 38 checks passed Uh oh! There was an error while loading. Please reload this page . njhill deleted the queueless-output branch March 24, 2025 22:44 erictang000 pushed a commit
to erictang000/vllm
that referenced
this pull request Mar 25, 2025 [V1][Perf] Simpler request output queues ( vllm-project#15156 ) … 4739656 Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Co-authored-by: [email protected] <[email protected]> wrmedford pushed a commit
to wrmedford/vllm
that referenced
this pull request Mar 26, 2025 [V1][Perf] Simpler request output queues ( vllm-project#15156 ) … e13c5d5 Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Co-authored-by: [email protected] <[email protected]>
Signed-off-by: Wes Medford <[email protected]> lulmer pushed a commit
to lulmer/vllm
that referenced
this pull request Apr 7, 2025 [V1][Perf] Simpler request output queues ( vllm-project#15156 ) … e5e7849 Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Co-authored-by: [email protected] <[email protected]>
Signed-off-by: Louis Ulmer <[email protected]> ckhordiasma mentioned this pull request Apr 17, 2025 [do not merge] pr test for nm changes into 2.20 red-hat-data-services/vllm#107 Closed lk-chen pushed a commit
to lk-chen/vllm
that referenced
this pull request Apr 29, 2025 [V1][Perf] Simpler request output queues ( vllm-project#15156 ) … 6a3df39 Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Co-authored-by: [email protected] <[email protected]> shreyankg pushed a commit
to shreyankg/vllm
that referenced
this pull request May 3, 2025 [V1][Perf] Simpler request output queues ( vllm-project#15156 ) … 7dcaa26 Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Co-authored-by: [email protected] <[email protected]> RichardoMrMu pushed a commit
to RichardoMrMu/vllm
that referenced
this pull request May 12, 2025 [V1][Perf] Simpler request output queues ( vllm-project#15156 ) … 048639f Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Co-authored-by: [email protected] <[email protected]>
Signed-off-by: Mu Huai <[email protected]>
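The essence of this change, as a self-contained sketch: instead of an asyncio.Queue of RequestOutput objects, keep a single slot that coalesces each new output into any not-yet-consumed one, plus an event to wake the waiting consumer. Class, method, and merge() names here are assumptions for illustration, not the actual output_processor code.

import asyncio

class _Out:
    # Stand-in for RequestOutput in this sketch.
    def __init__(self, text: str):
        self.text = text
    def merge(self, other: "_Out") -> None:
        self.text += other.text

class RequestOutputCollector:
    # Holds at most one pending (merged) output instead of a queue of them.
    def __init__(self):
        self.output = None
        self.ready = asyncio.Event()

    def put(self, output: _Out) -> None:
        if self.output is None:
            self.output = output
        else:
            self.output.merge(output)   # coalesce instead of enqueueing
        self.ready.set()

    async def get(self) -> _Out:
        while self.output is None:
            await self.ready.wait()
        out, self.output = self.output, None
        self.ready.clear()
        return out

async def _demo():
    collector = RequestOutputCollector()
    collector.put(_Out("Hello"))
    collector.put(_Out(", world"))       # merged into the pending output
    print((await collector.get()).text)  # -> Hello, world

asyncio.run(_demo())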
|
2025-09-07 17:51:38
|
296f927f2493908984707354e3cc5d7b2e41650b
|
https://github.com/vllm-project/vllm/pull/14857
| true | true | true | true |
LM_EVAL: lm-eval, lm-eval, lm-eval | PERF: TTFT, TTFT, TTFT | SERVING: vllm serve, serving, Serving | TEST: test, test, test
|
Copy link Contributor cyang49 commented Mar 15, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . This is a re-attempt to fix mamba2's excessive memory copies. The previous solution failed due to difference in semantics when indexing tensor with tensor. This new solution directly utilizes indexing with state_indices_tensor to create tensor views and simplified the code without over-engineering. FIX #14778 The results from benchmark_serving on single H100-80GB GPU (Actually I found high variance of throughput numbers from consecutive tests of the same code base when using this benchmark. Not sure if this is meaningful to report? @njhill @tlrmchlsmth ) Benchmark serving main ============ Serving Benchmark Result ============
Successful requests: 1000
Benchmark duration (s): 291.76
Total input tokens: 215201
Total generated tokens: 198343
Request throughput (req/s): 3.43
Output token throughput (tok/s): 679.81
Total Token throughput (tok/s): 1417.39
---------------Time to First Token----------------
Mean TTFT (ms): 108636.82
Median TTFT (ms): 96115.48
P99 TTFT (ms): 276325.38
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 409.48
Median TPOT (ms): 427.24
P99 TPOT (ms): 655.84
---------------Inter-token Latency----------------
Mean ITL (ms): 352.50
Median ITL (ms): 606.12
P99 ITL (ms): 969.64
================================================== Benchmark serving with this PR ============ Serving Benchmark Result ============
Successful requests: 1000
Benchmark duration (s): 252.11
Total input tokens: 215201
Total generated tokens: 198343
Request throughput (req/s): 3.97
Output token throughput (tok/s): 786.73
Total Token throughput (tok/s): 1640.33
---------------Time to First Token----------------
Mean TTFT (ms): 97161.98
Median TTFT (ms): 94360.96
P99 TTFT (ms): 237572.12
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 355.17
Median TPOT (ms): 381.49
P99 TPOT (ms): 548.15
---------------Inter-token Latency----------------
Mean ITL (ms): 306.68
Median ITL (ms): 501.06
P99 ITL (ms): 750.59
================================================== lm-eval main |Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ | 0.22|± |0.0416|
| | |strict-match | 5|exact_match|↑ | 0.32|± |0.0469| lm-eval with this PR |Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ | 0.22|± |0.0416|
| | |strict-match | 5|exact_match|↑ | 0.32|± |0.0469| cc @fabianlim @yury-tokpanov Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 2 fabianlim and yury-tokpanov reacted with thumbs up emoji All reactions 👍 2 reactions Copy link github-actions bot commented Mar 15, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor Author cyang49 commented Mar 15, 2025 • edited Loading Uh oh! There was an error while loading. Please reload this page . Experiment to test semantics of calling zero_() on indexed tensor >>> import torch
>>> x = torch.randperm(10)
>>> y = torch.randperm(10)
>>> x
tensor([5, 4, 6, 2, 0, 3, 8, 9, 7, 1])
>>> y
tensor([6, 8, 3, 2, 7, 5, 9, 4, 1, 0])
>>> x[y<5].zero_()
tensor([0, 0, 0, 0, 0])
>>> x
tensor([5, 4, 6, 2, 0, 3, 8, 9, 7, 1])
>>> x[y<5] = 0
>>> x
tensor([5, 4, 0, 0, 0, 3, 8, 0, 0, 0]) From this experiment, It seems that zero_() wouldn't give the right results? The zero init code should be the following instead? This would be index_put_ if has_initial_states is not None and torch.any(
has_initial_states):
zero_init_indices = mamba_cache_params.state_indices_tensor[
~has_initial_states]
mamba_cache_params.ssm_state[zero_init_indices] = 0
initial_states = mamba_cache_params.ssm_state[
mamba_cache_params.state_indices_tensor] Another reference states: The copy is performed right away – but note the exception to this (mentioned in the quoted documentation) when you are assigning to an indexed tensor. lm-eval results with this change |Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ | 0.22|± |0.0416|
| | |strict-match | 5|exact_match|↑ | 0.32|± |0.0469| benchmark results with this change ============ Serving Benchmark Result ============
Successful requests: 1000
Benchmark duration (s): 250.68
Total input tokens: 215201
Total generated tokens: 198343
Request throughput (req/s): 3.99
Output token throughput (tok/s): 791.23
Total Token throughput (tok/s): 1649.71
---------------Time to First Token----------------
Mean TTFT (ms): 95232.94
Median TTFT (ms): 85040.17
P99 TTFT (ms): 231833.63
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 337.99
Median TPOT (ms): 351.17
P99 TPOT (ms): 522.21
---------------Inter-token Latency----------------
Mean ITL (ms): 292.12
Median ITL (ms): 494.48
P99 ITL (ms): 730.20
================================================== All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . njhill approved these changes Mar 17, 2025 View reviewed changes Copy link Member njhill left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment LGTM, thanks @cyang49 ! I've run into similar issue with in-place updates in the past Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 🚀 1 cyang49 reacted with rocket emoji All reactions 🚀 1 reaction Copy link Contributor yury-tokpanov commented Mar 18, 2025 how are you deploying your model's server? Seems like Bamba config lacks max model length, so vllm picks up something really big and enables chunked prefill, which is slow. Just setting --max-model-len 4096 is enough to disable chunked prefill: vllm serve ibm-ai-platform/Bamba-9B --dtype float16 --gpu-memory-utilization 0.9 --max-model-len 4096 . Without chunked prefill, I'm getting much better and more stable numbers for serving metrics. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . tlrmchlsmth approved these changes Mar 20, 2025 View reviewed changes Copy link Collaborator tlrmchlsmth left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment LGTM and confirmed the gsm8k results on my end this time Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions tlrmchlsmth added
the ready ONLY add when PR is ready to merge/full CI is needed label Mar 20, 2025 tlrmchlsmth enabled auto-merge (squash) March 20, 2025 15:15 Copy link Contributor Author cyang49 commented Mar 20, 2025 • edited Loading Uh oh! There was an error while loading. Please reload this page . Build failed.. will try rebasing main [2025-03-20T15:54:50Z] FAILED tool_use/test_chat_completions.py::test_chat_completion_with_tools[granite-3.0-8b] - AssertionError: assert 'Of course! H...p everything!' == 'Of course! H...p everything!'
[2025-03-20T15:54:50Z]
[2025-03-20T15:54:50Z] - Of course! Here's a joke for you: Why don't scientists trust atoms? Because they make up everything!
[2025-03-20T15:54:50Z] + Of course! Here's a joke for you:
[2025-03-20T15:54:50Z] +
[2025-03-20T15:54:50Z] + Why don't scientists trust atoms?
[2025-03-20T15:54:50Z] +
[2025-03-20T15:54:50Z] + Because they make up everything! All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . cyang49 added 2 commits March 20, 2025 16:10 simplify and optimize mamba2 code that caused flurry of memcpys … d0a7427 Signed-off-by: Chih-Chieh-Yang <[email protected]> Use assignment instead of zero_ on indexed ssm_state … 2807c52 Signed-off-by: Chih-Chieh-Yang <[email protected]> auto-merge was automatically disabled March 20, 2025 20:10 Head branch was pushed to by a user without write access cyang49 force-pushed the pr_mamba2_mem_fix branch
from 0f41a64 to 2807c52 Compare March 20, 2025 20:10 tlrmchlsmth enabled auto-merge (squash) March 20, 2025 20:32 Copy link Collaborator tlrmchlsmth commented Mar 20, 2025 Ok! If it fails again, let's take a look at the failures and force merge if unrelated (feel free to ping me on this @cyang49 ) All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor Author cyang49 commented Mar 20, 2025 @tlrmchlsmth failed again on V1 test of Qwen.. :( All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Hide details View details vllm-bot merged commit 296f927 into vllm-project : main Mar 21, 2025 32 of 35 checks passed Uh oh! There was an error while loading. Please reload this page . cyang49 deleted the pr_mamba2_mem_fix branch March 24, 2025 22:00 erictang000 pushed a commit
to erictang000/vllm
that referenced
this pull request Mar 25, 2025 [Model] RE: Mamba2 Prefill Performance Tweaks: Fixing Flurry of Unnec… … 93fab96 …essary Memory Copies ( vllm-project#14857 )
Signed-off-by: Chih-Chieh-Yang <[email protected]> lulmer pushed a commit
to lulmer/vllm
that referenced
this pull request Apr 7, 2025 [Model] RE: Mamba2 Prefill Performance Tweaks: Fixing Flurry of Unnec… … 0dbd3df …essary Memory Copies ( vllm-project#14857 )
Signed-off-by: Chih-Chieh-Yang <[email protected]>
Signed-off-by: Louis Ulmer <[email protected]> ckhordiasma mentioned this pull request Apr 17, 2025 [do not merge] pr test for nm changes into 2.20 red-hat-data-services/vllm#107 Closed shreyankg pushed a commit
to shreyankg/vllm
that referenced
this pull request May 3, 2025 [Model] RE: Mamba2 Prefill Performance Tweaks: Fixing Flurry of Unnec… … 252cff0 …essary Memory Copies ( vllm-project#14857 )
Signed-off-by: Chih-Chieh-Yang <[email protected]> RichardoMrMu pushed a commit
to RichardoMrMu/vllm
that referenced
this pull request May 12, 2025 [Model] RE: Mamba2 Prefill Performance Tweaks: Fixing Flurry of Unnec… … 97055ac …essary Memory Copies ( vllm-project#14857 )
Signed-off-by: Chih-Chieh-Yang <[email protected]>
Signed-off-by: Mu Huai <[email protected]>
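A compact, framework-independent restatement of the in-place indexing pitfall behind this fix: advanced indexing (with a mask or index tensor) yields a copy, so zero_() on it never touches the original storage, while assignment is lowered to index_put_ and does.

import torch

x = torch.arange(6)
mask = x % 2 == 0

x[mask].zero_()                          # zeroes a temporary copy; x unchanged
assert x.tolist() == [0, 1, 2, 3, 4, 5]

x[mask] = 0                              # lowered to index_put_, writes back
assert x.tolist() == [0, 1, 0, 3, 0, 5]

y = torch.arange(6)
y.index_put_((mask,), torch.tensor(0))   # the explicit form of "y[mask] = 0"
assert torch.equal(x, y)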
|
2025-09-07 17:51:42
|
22d33baca2c0c639cfd45c48e99803e56c3efa74
|
https://github.com/vllm-project/vllm/pull/15150
| false | false | true | true |
SERVING: FrontEnd, FrontEnd, FrontEnd | TEST: test, test, CI
|
Copy link Member njhill commented Mar 19, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . Avoid the merging overhead in most common case. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions [Misc][Perf] merge_async_iterators fast-path for single-prompt requests … c1fe348 Avoid the merging overhead in most common case.
Signed-off-by: Nick Hill <[email protected]> Copy link github-actions bot commented Mar 19, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . njhill changed the title [Misc][Perf] merge_async_iterators fast-path for single-prompt requests [FrontEnd][Perf] merge_async_iterators fast-path for single-prompt requests Mar 19, 2025 robertgshaw2-redhat approved these changes Mar 19, 2025 View reviewed changes robertgshaw2-redhat enabled auto-merge (squash) March 19, 2025 18:39 github-actions bot added
the ready ONLY add when PR is ready to merge/full CI is needed label Mar 19, 2025 Hide details View details robertgshaw2-redhat merged commit 22d33ba into vllm-project : main Mar 19, 2025 43 checks passed Uh oh! There was an error while loading. Please reload this page . njhill deleted the single-generator branch March 19, 2025 22:41 lulmer pushed a commit
to lulmer/vllm
that referenced
this pull request Apr 7, 2025 [FrontEnd][Perf] merge_async_iterators fast-path for single-prompt … … bed8d39 …requests ( vllm-project#15150 )
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: Louis Ulmer <[email protected]> ckhordiasma mentioned this pull request Apr 17, 2025 [do not merge] pr test for nm changes into 2.20 red-hat-data-services/vllm#107 Closed shreyankg pushed a commit
to shreyankg/vllm
that referenced
this pull request May 3, 2025 [FrontEnd][Perf] merge_async_iterators fast-path for single-prompt … … 3f204af …requests ( vllm-project#15150 )
Signed-off-by: Nick Hill <[email protected]> RichardoMrMu pushed a commit
to RichardoMrMu/vllm
that referenced
this pull request May 12, 2025 [FrontEnd][Perf] merge_async_iterators fast-path for single-prompt … … f205d3b …requests ( vllm-project#15150 )
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: Mu Huai <[email protected]> Sign up for free to join this conversation on GitHub .
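The fast path for #15150 amounts to bypassing the merge machinery when there is only one generator to merge, which is the single-prompt case. The sketch below is an editorial illustration of that idea, not vLLM's actual merge_async_iterators (error propagation and cancellation are omitted):
import asyncio
from typing import AsyncIterator, Tuple, TypeVar

T = TypeVar("T")

async def merge_async_iterators(
        *iterators: AsyncIterator[T]) -> AsyncIterator[Tuple[int, T]]:
    # Fast path: a single-prompt request produces exactly one iterator,
    # so there is nothing to merge; stream it through directly.
    if len(iterators) == 1:
        async for item in iterators[0]:
            yield 0, item
        return

    # General path (simplified): pump every iterator into one shared queue.
    queue: asyncio.Queue = asyncio.Queue()
    DONE = object()

    async def pump(i: int, it: AsyncIterator[T]) -> None:
        try:
            async for item in it:
                await queue.put((i, item))
        finally:
            await queue.put(DONE)

    tasks = [asyncio.create_task(pump(i, it))
             for i, it in enumerate(iterators)]
    remaining = len(tasks)
    while remaining:
        out = await queue.get()
        if out is DONE:
            remaining -= 1
        else:
            yield out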
|
2025-09-07 17:51:45
|
99abb8b650c66664cdc84d815b7f306f33bd9881
|
https://github.com/vllm-project/vllm/pull/14930
| true | true | false | true |
LM_EVAL: GSM8K | PERF: Throughput, throughput | TEST: test, test, testing
|
Copy link Collaborator WoosukKwon commented Mar 17, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . This PR optimizes the rejection sampler in #13933 with custom Triton kernels. By using the Triton kernels, the PR brings the following benefits: Now we use the flattened shape [num_tokens, vocab_size] for the logits tensors, instead of [batch_size, max_spec_len, vocab_size] . This reduces the GPU memory usage a lot. Zero synchronization between CPU and GPU. Remove inefficient data movement (i.e., a bunch of cat , gather , etc.) (Arguably) easier-to-read code Performance benchmark: Llama 3.1 8B, ShareGPT, 1xH100, temperature 0.1 SD config: --speculative-model "[ngram]" --ngram_prompt_lookup_min 5 --ngram-prompt-lookup-max 5 --num_speculative_tokens 3 Throughput (reqs/s) main (w/o SD) 51.49 main (w/ SD) 54.41 This PR (w/ SD) 64.16 25% throughput increase compared to main w/o SD, and 18% increase compared to main w/ SD. Accuracy benchmark: GSM8K, Llama 3.1 8B Instruct, 5 shots Temperature Exact match w/o SD 0.0 75.7 1.0 50.9 w/ SD 0.0 75.9 1.0 51.8 Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 🎉 4 robertgshaw2-redhat, LiuXiaoxuanPKU, MARD1NO, and mlinmg reacted with hooray emoji All reactions 🎉 4 reactions WoosukKwon added 30 commits March 14, 2025 20:41 tmp … c09ae5e Signed-off-by: Woosuk Kwon <[email protected]> minor … e3f3513 Signed-off-by: Woosuk Kwon <[email protected]> fix shape … be535aa Signed-off-by: Woosuk Kwon <[email protected]> minor … be950c7 Signed-off-by: Woosuk Kwon <[email protected]> minor … 1fee177 Signed-off-by: Woosuk Kwon <[email protected]> Add parse_outputs … d30970e Signed-off-by: Woosuk Kwon <[email protected]> minor … 32fefa1 Signed-off-by: Woosuk Kwon <[email protected]> minor … 4a93973 Signed-off-by: Woosuk Kwon <[email protected]> minor … f2455fd Signed-off-by: Woosuk Kwon <[email protected]> kernel … fbba0ff Signed-off-by: Woosuk Kwon <[email protected]> kernel … 255d1ee Signed-off-by: Woosuk Kwon <[email protected]> fix … 22c9515 Signed-off-by: Woosuk Kwon <[email protected]> comment … c631935 Signed-off-by: Woosuk Kwon <[email protected]> minor … 566caea Signed-off-by: Woosuk Kwon <[email protected]> minor … c427ffd Signed-off-by: Woosuk Kwon <[email protected]> fix … d896f41 Signed-off-by: Woosuk Kwon <[email protected]> fix … cb8e699 Signed-off-by: Woosuk Kwon <[email protected]> fix … c0bcf5a Signed-off-by: Woosuk Kwon <[email protected]> fix … ae3d7fc Signed-off-by: Woosuk Kwon <[email protected]> fix … 412e2f4 Signed-off-by: Woosuk Kwon <[email protected]> remove … df66124 Signed-off-by: Woosuk Kwon <[email protected]> opt … 704da77 Signed-off-by: Woosuk Kwon <[email protected]> minor … 4f95ca9 Signed-off-by: Woosuk Kwon <[email protected]> opt softmax & fix recompilation … 803c9de Signed-off-by: Woosuk Kwon <[email protected]> minor … 9cc9349 Signed-off-by: Woosuk Kwon <[email protected]> remove envs … 2b69e51 Signed-off-by: Woosuk Kwon <[email protected]> Merge branch 'main' into v1-opt-rej d374d59 Merge branch 'main' into v1-opt-rej d4a6437 fix … 75e93aa Signed-off-by: Woosuk Kwon <[email protected]> fix … 5a86ff3 Signed-off-by: Woosuk Kwon <[email protected]> 24 hidden items Load more… WoosukKwon added 6 commits March 17, 2025 10:12 fix test … 8b7a398 Signed-off-by: Woosuk Kwon <[email protected]> Merge branch 'main' into v1-opt-rej b303722 Merge branch 'main' into v1-opt-rej a0440c8 comment … 40f334a Signed-off-by: Woosuk Kwon <[email 
protected]> comment … 6935bfd Signed-off-by: Woosuk Kwon <[email protected]> fix shape mismatch … 0baa33e Signed-off-by: Woosuk Kwon <[email protected]> LiuXiaoxuanPKU reviewed Mar 18, 2025 View reviewed changes Copy link Collaborator LiuXiaoxuanPKU left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Finished the rejection_sampler.py, will continue other files tonight Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/v1/sample/rejection_sampler.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/v1/sample/rejection_sampler.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/v1/sample/rejection_sampler.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . LiuXiaoxuanPKU reviewed Mar 18, 2025 View reviewed changes vllm/v1/sample/rejection_sampler.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . WoosukKwon added 4 commits March 18, 2025 12:17 Merge branch 'main' into v1-opt-rej 459b2fa fix docstrings … aaf2316 Signed-off-by: Woosuk Kwon <[email protected]> fix dtype … 531068e Signed-off-by: Woosuk Kwon <[email protected]> add comment … 69c88b8 Signed-off-by: Woosuk Kwon <[email protected]> WoosukKwon requested a review
from LiuXiaoxuanPKU March 18, 2025 19:29 LiuXiaoxuanPKU approved these changes Mar 18, 2025 View reviewed changes Copy link Collaborator LiuXiaoxuanPKU left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment LGTM, thanks! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Hide details View details WoosukKwon merged commit 99abb8b into main Mar 18, 2025 29 of 32 checks passed Uh oh! There was an error while loading. Please reload this page . WoosukKwon deleted the v1-opt-rej branch March 18, 2025 21:31 youkaichao reviewed Mar 19, 2025 View reviewed changes vllm/v1/sample/ops/utils.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . CXIAAAAA mentioned this pull request Mar 19, 2025 [Feature]: Add likaixin/InstructCoder as spec decode benchmark dataset option #14045 Closed 1 task This was referenced Mar 21, 2025 [Bug]: v1 speculate decoding NgramProposer experiences service exceptions during stress testing #14742 Closed add last slot for the invalid_token in greedy rejection sampler, specdec #14519 Closed WoosukKwon mentioned this pull request Apr 2, 2025 [Bug]: [V1][SpecDec] RuntimeError: CUDA error: an illegal memory access was encountered #13673 Closed 1 task lulmer pushed a commit
to lulmer/vllm
that referenced
this pull request Apr 7, 2025 [V1][Spec Decode] Optimize Rejection Sampler with Triton Kernels ( vll… … f928001 …m-project#14930 )
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Louis Ulmer <[email protected]> ckhordiasma mentioned this pull request Apr 17, 2025 [do not merge] pr test for nm changes into 2.20 red-hat-data-services/vllm#107 Closed shreyankg pushed a commit
to shreyankg/vllm
that referenced
this pull request May 3, 2025 [V1][Spec Decode] Optimize Rejection Sampler with Triton Kernels ( vll… … 0e57658 …m-project#14930 )
Signed-off-by: Woosuk Kwon <[email protected]> RichardoMrMu pushed a commit
to RichardoMrMu/vllm
that referenced
this pull request May 12, 2025 [V1][Spec Decode] Optimize Rejection Sampler with Triton Kernels ( vll… … 08577f8 …m-project#14930 )
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Mu Huai <[email protected]> mmyxym reviewed Aug 5, 2025 View reviewed changes vllm/v1/sample/rejection_sampler.py GREEDY_TEMPERATURE: tl.constexpr = -1 # Maximum number of speculative draft tokens allowed per request in a single # step. This value is chosen to be large enough to handle typical use cases. MAX_SPEC_LEN = 32 Copy link mmyxym Aug 5, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Hi @WoosukKwon , is there any limitation MAX_SPEC_LEN should be 32? Can it be larger? Thanks. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author WoosukKwon Aug 28, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment @mmyxym There's no blocker to make it 64. Everything should work if you just change the number. I just thought 32 would be enough for all practical use cases. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions mergify bot added
the speculative-decoding label Aug 5, 2025 Sign up for free to join this conversation on GitHub .
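For reference, the acceptance rule these kernels implement (accept each draft token with probability min(1, p_target/p_draft), otherwise resample from the clamped residual distribution) can be written for a single sequence in plain PyTorch. This is only a reference sketch of the standard algorithm, not the PR's Triton code: it skips greedy handling, the flattened [num_tokens, vocab_size] batching, and the bonus token appended when every draft is accepted, and the function name is illustrative:
import torch

def rejection_sample_one_seq(draft_tokens, draft_probs, target_probs, gen=None):
    # draft_tokens: [k] proposed ids; draft_probs / target_probs: [k, vocab_size]
    k = draft_tokens.shape[0]
    pos = torch.arange(k)
    q = draft_probs[pos, draft_tokens]             # draft prob of each proposal
    p = target_probs[pos, draft_tokens]            # target prob of each proposal
    u = torch.rand(k, generator=gen)
    accepted = u < torch.clamp(p / q, max=1.0)     # accept with prob min(1, p/q)
    n = int(accepted.long().cumprod(dim=0).sum())  # length of the accepted prefix
    out = draft_tokens[:n]
    if n < k:
        # Resample the first rejected position from the residual max(0, p - q).
        residual = (target_probs[n] - draft_probs[n]).clamp(min=0)
        recovered = torch.multinomial(residual / residual.sum(), 1, generator=gen)
        out = torch.cat([out, recovered])
    return out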
|
2025-09-07 17:51:49
|
ccf02fcbaebb1a5b59dfc6c7cb64aa7cc489f04c
|
https://github.com/vllm-project/vllm/pull/14848
| true | false | false | true |
LM_EVAL: lm_eval, gsm8k, gsm8k | TEST: test, test, CI
|
Copy link Collaborator tlrmchlsmth commented Mar 15, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . …nnecessary Memory Copies ( #14778 )" This reverts commit fe66b34 . lm_eval --model vllm \
--model_args pretrained=ibm-ai-platform/Bamba-9B,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.8 \
--tasks gsm8k --limit 100 \
--batch_size auto main:
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ | 0|± | 0|
| | |strict-match | 5|exact_match|↑ | 0|± | 0|
this PR:
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ | 0.22|± |0.0416|
| | |strict-match | 5|exact_match|↑ | 0.32|± |0.0469| Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Revert "[Model] Mamba2 Prefill Performance Tweaks: Fixing Flurry of U… … 9baec50 …nnecessary Memory Copies ( #14778 )"
This reverts commit fe66b34 . Copy link github-actions bot commented Mar 15, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . DarkLight1337 approved these changes Mar 15, 2025 View reviewed changes Hide details View details vllm-bot merged commit ccf02fc into main Mar 15, 2025 19 checks passed Uh oh! There was an error while loading. Please reload this page . vllm-bot deleted the revert_mamba_vmap branch March 15, 2025 03:45 lulmer pushed a commit
to lulmer/vllm
that referenced
this pull request Apr 7, 2025 Revert "[Model] Mamba2 Prefill Performance Tweaks: Fixing Flurry of U… ( … 69ebbe1 vllm-project#14848 )
Signed-off-by: Louis Ulmer <[email protected]> ckhordiasma mentioned this pull request Apr 17, 2025 [do not merge] pr test for nm changes into 2.20 red-hat-data-services/vllm#107 Closed shreyankg pushed a commit
to shreyankg/vllm
that referenced
this pull request May 3, 2025 Revert "[Model] Mamba2 Prefill Performance Tweaks: Fixing Flurry of U… ( … 40cd8aa vllm-project#14848 ) RichardoMrMu pushed a commit
to RichardoMrMu/vllm
that referenced
this pull request May 12, 2025 Revert "[Model] Mamba2 Prefill Performance Tweaks: Fixing Flurry of U… ( … 0129fdd vllm-project#14848 )
Signed-off-by: Mu Huai <[email protected]> Sign up for free to join this conversation on GitHub .
|
2025-09-07 17:51:52
|
fe66b34728e5d383e3d19aefc544eeee808c99fb
|
https://github.com/vllm-project/vllm/pull/14778
| true | true | true | true |
LM_EVAL: lm_eval, lm-eval, gsm8k | PERF: TTFT, TTFT, TTFT | SERVING: Serving, Serving, Serving | TEST: test, test, test
|
Copy link Contributor cyang49 commented Mar 13, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . We found an issue while profiling vLLM running Bamba-9B model inference. Before: As can be seen in the Nsight Systems trace, per Mamba layer there are 2 phases where frequent memory copies happen. They are not necessary, or can be fused to reduce the number of copies. This PR fixes these issues. After: For the test case (offline mode, batch size=64, short prompt) I used, the fix reduces the prefill mamba layer latency from 5ms to 3ms. The results from benchmark_serving on single H100-80GB GPU Before: ============ Serving Benchmark Result ============
Successful requests: 1000
Benchmark duration (s): 283.22
Total input tokens: 215201
Total generated tokens: 198343
Request throughput (req/s): 3.53
Output token throughput (tok/s): 700.32
Total Token throughput (tok/s): 1460.17
---------------Time to First Token----------------
Mean TTFT (ms): 105627.40
Median TTFT (ms): 94728.54
P99 TTFT (ms): 264194.77
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 393.83
Median TPOT (ms): 413.59
P99 TPOT (ms): 615.34
---------------Inter-token Latency----------------
Mean ITL (ms): 339.72
Median ITL (ms): 589.56
P99 ITL (ms): 751.76
================================================== After: python benchmarks/benchmark_serving.py --model $MODEL_PATH --dataset-name sharegpt --dataset-path ShareGPT_V3_unfiltered_cleaned_split.json
============ Serving Benchmark Result ============
Successful requests: 1000
Benchmark duration (s): 260.19
Total input tokens: 215201
Total generated tokens: 198343
Request throughput (req/s): 3.84
Output token throughput (tok/s): 762.29
Total Token throughput (tok/s): 1589.37
---------------Time to First Token----------------
Mean TTFT (ms): 96566.51
Median TTFT (ms): 84883.05
P99 TTFT (ms): 245639.66
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 366.31
Median TPOT (ms): 371.88
P99 TPOT (ms): 680.49
---------------Inter-token Latency----------------
Mean ITL (ms): 311.69
Median ITL (ms): 507.96
P99 ITL (ms): 741.83
================================================== The total token throughput improved by about 8%. Note: There is another sequential for loop which can be fixed similarly. My test case doesn't hit this control path, though. @fabianlim could you comment? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link github-actions bot commented Mar 13, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . cyang49 force-pushed the pr_mamba2_optimizations branch
2 times, most recently
from c33319f to 7fe5d58 Compare March 13, 2025 19:24 tlrmchlsmth reviewed Mar 13, 2025 View reviewed changes Copy link Collaborator tlrmchlsmth left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Nice performance pickup. Is this the other sequential for loop you mentioned? vllm/vllm/model_executor/layers/mamba/mamba_mixer2.py Lines 470 to 472
in 02fcaa3 for idx in mamba_cache_params . state_indices_tensor [ ~ has_initial_states ]: mamba_cache_params . ssm_state [ idx ]. zero_ () Do you want to handle it in this PR? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/model_executor/layers/mamba/mamba_mixer2.py Comment on lines +502 to +510 batched_copy = torch.vmap( lambda idx, source_state: mamba_cache_params.ssm_state[ idx].copy_(source_state)) Copy link Collaborator tlrmchlsmth Mar 13, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment This might be handy to have as a method of MambaCacheParams in mamba_cache.py Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 fabianlim reacted with thumbs up emoji All reactions 👍 1 reaction Copy link Contributor Author cyang49 Mar 14, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment @tlrmchlsmth could you clarify if you mean to have this logic as a member function of MambaCacheParams ? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator tlrmchlsmth Mar 14, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment That's right, that's what I meant, although I don's see a way to factor out commonality between batched_copy and batched_zero_init_func so I'm not sure it would clean anything up. # Note: the lambda capture can happen where ssm_state is initialized
# instead of here Is there some overhead that we should try to avoid here? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Contributor Author cyang49 Mar 14, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment The current lambda capture code is safe. The comment is just theorizing about removing redundancy. I don't know this part well enough yet. Attempting to "optimize" may introduce bugs. I'd leave it as is for now. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Contributor Author cyang49 commented Mar 13, 2025 • edited Loading Uh oh! There was an error while loading. Please reload this page . Nice performance pickup. Is this the other sequential for loop you mentioned? vllm/vllm/model_executor/layers/mamba/mamba_mixer2.py Lines 470 to 472
in 02fcaa3 for idx in mamba_cache_params . state_indices_tensor [ ~ has_initial_states ]: mamba_cache_params . ssm_state [ idx ]. zero_ () Do you want to handle it in this PR? I need @fabianlim 's input on how to hit that case. It can be a separate PR or if I know how to test it tomorrow. Next week I'll be traveling and may not have time to do it All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor fabianlim commented Mar 14, 2025 • edited Loading Uh oh! There was an error while loading. Please reload this page . since the mamba2 unit tests are not automated maybe we should run them once ? @tlrmchlsmth @cyang49 this will be true if, at least one of the sequences in the current step has an initial state, which is determined by the prescence of a context. This means that either a i) chunked prefill step or ii) decode step will hit this case. has_initial_states = attn_metadata.context_lens_tensor > 0 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor Author cyang49 commented Mar 14, 2025 I vectorized the zero init loop and observed a slight improvement in total token throughput ============ Serving Benchmark Result ============
Successful requests: 1000
Benchmark duration (s): 247.81
Total input tokens: 215201
Total generated tokens: 198343
Request throughput (req/s): 4.04
Output token throughput (tok/s): 800.37
Total Token throughput (tok/s): 1668.76
---------------Time to First Token----------------
Mean TTFT (ms): 97128.43
Median TTFT (ms): 89290.52
P99 TTFT (ms): 233402.22
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 360.62
Median TPOT (ms): 355.58
P99 TPOT (ms): 992.48
---------------Inter-token Latency----------------
Mean ITL (ms): 294.58
Median ITL (ms): 503.07
P99 ITL (ms): 570.28
================================================== 👍 1 fabianlim reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor Author cyang49 commented Mar 14, 2025 • edited Loading Uh oh! There was an error while loading. Please reload this page . @tlrmchlsmth it would be nice if we can merge this one soon, if the functionality & no negative performance impact are verified. I noticed from the trace that there are other inefficiencies in mamba2, but I'll submit a separate PR after my trip. Let me know if there's anything else that needs changing. Thanks! All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . tlrmchlsmth approved these changes Mar 14, 2025 View reviewed changes Copy link Collaborator tlrmchlsmth left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment LGTM, thanks! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions tlrmchlsmth added
the ready ONLY add when PR is ready to merge/full CI is needed label Mar 14, 2025 cyang49 added 4 commits March 14, 2025 13:03 vectorize copy loop for speedup … e0883a3 Signed-off-by: Chih-Chieh-Yang <[email protected]> replace any with torch.any to reduce overhead … 81488ad Signed-off-by: Chih-Chieh-Yang <[email protected]> lint … 86ca9b5 Signed-off-by: Chih-Chieh-Yang <[email protected]> Vectorize zero init of ssm_state … 0142ba3 Signed-off-by: Chih-Chieh-Yang <[email protected]> cyang49 force-pushed the pr_mamba2_optimizations branch
from 9584558 to 0142ba3 Compare March 14, 2025 17:11 Copy link Member DarkLight1337 commented Mar 14, 2025 Some CI failures have recently been fixed on main, so I suggest you to merge from main if you haven't already 👍 1 cyang49 reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Hide details View details tlrmchlsmth merged commit fe66b34 into vllm-project : main Mar 14, 2025 31 checks passed Uh oh! There was an error while loading. Please reload this page . cyang49 deleted the pr_mamba2_optimizations branch March 14, 2025 21:24 Copy link Contributor yury-tokpanov commented Mar 14, 2025 Testing this. We did notice the same in our profiles of mamba2. Overall, occupancy was pretty low in comparison to flash attention kernels. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . tlrmchlsmth added a commit
that referenced
this pull request Mar 15, 2025 Revert "[Model] Mamba2 Prefill Performance Tweaks: Fixing Flurry of U… … 9baec50 …nnecessary Memory Copies ( #14778 )"
This reverts commit fe66b34 . yury-tokpanov added a commit
to Zyphra/vllm
that referenced
this pull request Mar 15, 2025 Revert "[Model] Mamba2 Prefill Performance Tweaks: Fixing Flurry of U… … efb7f02 …nnecessary Memory Copies ( vllm-project#14778 )"
This reverts commit fe66b34 . tlrmchlsmth mentioned this pull request Mar 15, 2025 Revert "[Model] Mamba2 Prefill Performance Tweaks: Fixing Flurry of U… #14848 Merged Copy link Contributor yury-tokpanov commented Mar 15, 2025 • edited Loading Uh oh! There was an error while loading. Please reload this page . this breaks mamba2 based models, unfortunately: Command I ran on H100: lm_eval --model vllm --model_args pretrained=ibm-ai-platform/Bamba-9B,dtype=float16,gpu_memory_utilization=0.9,max_model_len=4096 --batch_size auto --trust_remote_code --cache_requests true --tasks gsm8k bamba-9b with this PR: Tasks Version Filter n-shot Metric Value Stderr gsm8k 3 flexible-extract 5 exact_match ↑ 0.0781 ± 0.0074 strict-match 5 exact_match ↑ 0.0569 ± 0.0064 bamba-9b with PR reverted: Tasks Version Filter n-shot Metric Value Stderr gsm8k 3 flexible-extract 5 exact_match ↑ 0.2449 ± 0.0118 strict-match 5 exact_match ↑ 0.3692 ± 0.0133 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . cyang49 restored the pr_mamba2_optimizations branch March 15, 2025 01:47 Copy link Contributor Author cyang49 commented Mar 15, 2025 Weird, it passed when I tested locally? Both value and stderr should be 0s? |Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ | 0|± | 0|
| | |strict-match | 5|exact_match|↑ | 0|± | 0| All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor yury-tokpanov commented Mar 15, 2025 • edited Loading Uh oh! There was an error while loading. Please reload this page . Weird, it passed when I tested locally? Both value and stderr should be 0s? |Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ | 0|± | 0|
| | |strict-match | 5|exact_match|↑ | 0|± | 0| No, it shouldn't be 0 accuracy. 0 means the model failed completely on a test. For the full gsm8k eval Bamba-9b should be around 37% on a strict-match accuracy (with around 1% stderr). I checked other mamba2 models (Codestral-7B, Zamba2), they are also down. Do you have Slack? I'd suggest you join vLLM dev Slack, we have a channel there to discuss hybrid models: https://slack.vllm.ai/ 👀 1 cyang49 reacted with eyes emoji All reactions 👀 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor Author cyang49 commented Mar 15, 2025 Weird, it passed when I tested locally? Both value and stderr should be 0s? |Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ | 0|± | 0|
| | |strict-match | 5|exact_match|↑ | 0|± | 0| No, it shouldn't be 9 accuracy. 0 means the model failed completely on a test. For the full gsm8k eval Bamba-9b should be around 37% on a strict-match accuracy (with around 1% stderr). I checked other mamba2 models (Codestral-7B, Zamba2), they are also down. Do you have Slack? I'd suggest you join vLLM dev Slack, we have a channel there to discuss hybrid models: https://slack.vllm.ai/ Ah, thanks for explaining. I'll debug it when I get a chance. I'll also get on the vllm slack 👍 1 yury-tokpanov reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor yury-tokpanov commented Mar 15, 2025 I think the issue is with ssm state copy, zero-initialization appears to be working fine. 👍 1 cyang49 reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor Author cyang49 commented Mar 15, 2025 I think the issue is with ssm state copy, zero-initialization appears to be working fine. It could also be that lm-eval doesn't go through the zero-init path, though All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Contributor Author cyang49 commented Mar 15, 2025 • edited Loading Uh oh! There was an error while loading. Please reload this page . It appears the problem is that the semantics of the line mamba_cache_params.ssm_state[idx].copy_(varlen_state[i]) in the for loop is different from mamba_cache_params.ssm_state[idx].copy_(source_state) in the lambda function :( In the former, idx is a scalar integer value and the in-place copy happens, but in the latter, idx is an integer tensor and the indexing semantics is different. I suspect that the in-place copy doesn't happen as expected - I experimented with these two cases in the python interpreter.. It looks like the in-place zero_() part should have the same issue. Not sure why it didn't cause a problem for gsm8k All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . cyang49 mentioned this pull request Mar 15, 2025 [Model] RE: Mamba2 Prefill Performance Tweaks: Fixing Flurry of Unnecessary Memory Copies #14857 Merged Copy link Contributor fabianlim commented Mar 15, 2025 @cyang49 when idx is a tensor is a copy-view, so thats why the inplace does not update the master copy. That is why i needed to loop it with a scalar in the first place. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . lulmer pushed a commit
to lulmer/vllm
that referenced
this pull request Apr 7, 2025 [Model] Mamba2 Prefill Performance Tweaks: Fixing Flurry of Unnecessa… … b5a740f …ry Memory Copies ( vllm-project#14778 )
Signed-off-by: Chih-Chieh-Yang <[email protected]>
Signed-off-by: Louis Ulmer <[email protected]> ckhordiasma mentioned this pull request Apr 17, 2025 [do not merge] pr test for nm changes into 2.20 red-hat-data-services/vllm#107 Closed shreyankg pushed a commit
to shreyankg/vllm
that referenced
this pull request May 3, 2025 [Model] Mamba2 Prefill Performance Tweaks: Fixing Flurry of Unnecessa… … 7f3f2fc …ry Memory Copies ( vllm-project#14778 )
Signed-off-by: Chih-Chieh-Yang <[email protected]> RichardoMrMu pushed a commit
to RichardoMrMu/vllm
that referenced
this pull request May 12, 2025 [Model] Mamba2 Prefill Performance Tweaks: Fixing Flurry of Unnecessa… … fa2cba1 …ry Memory Copies ( vllm-project#14778 )
Signed-off-by: Chih-Chieh-Yang <[email protected]>
Signed-off-by: Mu Huai <[email protected]> Sign up for free to join this conversation on GitHub .
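The failure mode pinned down above is a plain PyTorch semantics issue: a scalar index yields a view, so copy_ lands in the cache, while an index tensor yields a temporary copy, so the same copy_ is silently lost. A toy reproduction, plus one batched write-back pattern that does update the base tensor (index_copy_ is shown only for illustration; it is not necessarily what the follow-up PR uses):
import torch
ssm_state = torch.zeros(4, 3)                 # stand-in for the cached state
varlen_state = torch.ones(2, 3)               # freshly computed states
idx = torch.tensor([1, 3])
for i, j in enumerate(idx.tolist()):          # scalar index -> view: writes back
    ssm_state[j].copy_(varlen_state[i])
print(ssm_state.sum())                        # tensor(6.)
ssm_state.zero_()
ssm_state[idx].copy_(varlen_state)            # tensor index -> copy: update is lost
print(ssm_state.sum())                        # tensor(0.)
ssm_state.index_copy_(0, idx, varlen_state)   # batched write-back that does stick
print(ssm_state.sum())                        # tensor(6.)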
|
2025-09-07 17:51:55
|
70b808fe1a63322bc6bf5f46a91981a8f6b8af00
|
https://github.com/vllm-project/vllm/pull/14377
| false | true | false | true |
PERF: QPS, QPS, optimization | TEST: Test, test, test
|
Copy link Contributor cynthieye commented Mar 6, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . qwen2-vl logic optimization: During each forward propagation, the xformer branch of Qwen2VisionTransformer will execute multiple tensor tolist methods (flash attn branch will execute multiple tensor items) to force the GPU tensor to be copied to the CPU, triggering CUDAMemcpyAsync to increase time consumption. Since the input and output are the same multiple times, it will be executed once, and the remaining will reuse the first result. After optimization, the online environment xformer branch QPS can be improved by 15%, and the flash attn branch QPS can be improved by 7% Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 2 imkero and ywang96 reacted with thumbs up emoji All reactions 👍 2 reactions DarkLight1337 requested review from Isotr0py and ywang96 March 7, 2025 06:41 Isotr0py approved these changes Mar 7, 2025 View reviewed changes Copy link Member Isotr0py left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Thanks for this optimization! Can you please also update qwen2.5-vl as well? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . ❤️ 1 cynthieye reacted with heart emoji All reactions ❤️ 1 reaction vllm/model_executor/models/qwen2_vl.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/model_executor/models/qwen2_vl.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . cynthieye changed the title feat:Optimize qwen2-vl to reduce cudaMemcpyAsync [Perf]:Optimize qwen2-vl to reduce cudaMemcpyAsync Mar 10, 2025 cynthieye force-pushed the main branch
3 times, most recently
from ae09649 to 1fbb69c Compare March 10, 2025 06:53 Isotr0py enabled auto-merge (squash) March 10, 2025 09:50 github-actions bot added
the ready ONLY add when PR is ready to merge/full CI is needed label Mar 10, 2025 auto-merge was automatically disabled March 10, 2025 13:26 Head branch was pushed to by a user without write access cynthieye force-pushed the main branch
from a4d7e3a to 37e543a Compare March 10, 2025 13:26 [Perf]: Optimize qwen2-vl to reduce cudaMemcpyAsync … 347de39 Signed-off-by: cynthieye <[email protected]> cynthieye force-pushed the main branch
from 37e543a to 347de39 Compare March 10, 2025 13:29 cynthieye mentioned this pull request Mar 10, 2025 [CI failed]: V1 Test Failed due to "No available memory for the cache blocks" in GitHub Actions #14574 Closed 1 task empty test … fd105c1 Signed-off-by: cynthieye <[email protected]> Copy link Member ywang96 commented Mar 11, 2025 @cynthieye Thank you for making this PR! Can you update this branch with our main branch? I think thr CI error should be fixed on main a while ago. ❤️ 1 cynthieye reacted with heart emoji All reactions ❤️ 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . ywang96 approved these changes Mar 11, 2025 View reviewed changes Copy link Member ywang96 left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Left a few comments - Otherwise LGTM! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/model_executor/models/qwen2_5_vl.py Outdated @@ -259,6 +259,8 @@ def forward( x: torch.Tensor, cu_seqlens: torch.Tensor, rotary_pos_emb: torch.Tensor, max_seqlen: int = None, Copy link Member ywang96 Mar 11, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Shouldn't max_seqlen be also Optional[int] ? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/model_executor/models/qwen2_5_vl.py Outdated Comment on lines 372 to 373 max_seqlen: int, seqlens: list[int], Copy link Member ywang96 Mar 11, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Please modify the typing accordingly Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/model_executor/models/qwen2_vl.py Outdated Comment on lines 310 to 311 max_seqlen: int = None, seqlens: Optional[list[int]] = None, Copy link Member ywang96 Mar 11, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment ditto Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/model_executor/models/qwen2_vl.py Outdated Comment on lines 417 to 418 max_seqlen: int, seqlens: list[int], Copy link Member ywang96 Mar 11, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment ditto Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/model_executor/models/qwen2_5_vl.py Outdated Comment on lines 372 to 373 max_seqlen: int, seqlens: list[int], Copy link Member ywang96 Mar 11, 2025 There was a problem hiding this comment. 
Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment I think it's probably a good idea to add a small documentation here to indicate that max_seqlen is only used for FA and seqlens is only used to xformers. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions cynthieye added 3 commits March 11, 2025 13:12 [Perf]: Fix formatting issues … 9959792 Signed-off-by: cynthieye <[email protected]> Merge remote-tracking branch 'upstream/main' c03f59d [Perf]: Fix formatting issues … ddb8dd3 Signed-off-by: cynthieye <[email protected]> ywang96 enabled auto-merge (squash) March 11, 2025 06:25 Hide details View details ywang96 merged commit 70b808f into vllm-project : main Mar 11, 2025 33 checks passed Uh oh! There was an error while loading. Please reload this page . This was referenced Mar 20, 2025 [Bugfix] Fix incorrect qwen2.5-vl attention mask pre-computation #15200 Merged [Misc] Add attention mask pre-computation optimization back to Qwen2.5-VL #15273 Merged lulmer pushed a commit
to lulmer/vllm
that referenced
this pull request Apr 7, 2025 [Perf]:Optimize qwen2-vl to reduce cudaMemcpyAsync ( vllm-project#14377 ) … d468e24 Signed-off-by: cynthieye <[email protected]>
Signed-off-by: Louis Ulmer <[email protected]> ckhordiasma mentioned this pull request Apr 17, 2025 [do not merge] pr test for nm changes into 2.20 red-hat-data-services/vllm#107 Closed shreyankg pushed a commit
to shreyankg/vllm
that referenced
this pull request May 3, 2025 [Perf]:Optimize qwen2-vl to reduce cudaMemcpyAsync ( vllm-project#14377 ) … 8ece569 Signed-off-by: cynthieye <[email protected]> RichardoMrMu pushed a commit
to RichardoMrMu/vllm
that referenced
this pull request May 12, 2025 [Perf]:Optimize qwen2-vl to reduce cudaMemcpyAsync ( vllm-project#14377 ) … 21ac3af Signed-off-by: cynthieye <[email protected]>
Signed-off-by: Mu Huai <[email protected]> Sign up for free to join this conversation on GitHub .
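The optimization itself is to hoist the GPU-to-CPU conversions out of the per-block loop: derive max_seqlen (flash-attn branch) and seqlens (xformers branch) from cu_seqlens once per forward pass and hand them to every block, matching the new max_seqlen / seqlens parameters discussed in the review. A hedged sketch; the function and variable names are illustrative, not the actual vLLM code:
import torch

def vision_blocks_forward(blocks, x, cu_seqlens, rotary_pos_emb):
    # .item() / .tolist() each force a GPU->CPU copy (cudaMemcpyAsync plus a
    # sync), so do them once per forward pass instead of once per block.
    seq_lens = cu_seqlens[1:] - cu_seqlens[:-1]
    max_seqlen = int(seq_lens.max().item())   # consumed by the flash-attn path
    seqlens = seq_lens.tolist()               # consumed by the xformers path
    for blk in blocks:
        x = blk(x, cu_seqlens=cu_seqlens, rotary_pos_emb=rotary_pos_emb,
                max_seqlen=max_seqlen, seqlens=seqlens)
    return x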
|
2025-09-07 17:51:59
|
fb0acb6c72874e98617cabee4ff4851569374fc9
|
https://github.com/vllm-project/vllm/pull/14540
| true | true | false | true |
LM_EVAL: lm_eval, lm_eval, GSM8K | PERF: Throughput, Throughput, Throughput | TEST: Test, Test, Test
|
Copy link Collaborator simon-mo commented Mar 10, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . This PR helps V1 to mostly match and exceed (in most cases) V0's performance for MLA. Mostly by two things Fix @LucasWilkinson 's rotary_emb specialization ( [Perf] Reduce MLA CPU overheads in V1 #14384 , Revert "[Perf] Reduce MLA CPU overheads in V1 (#14384)" #14471 , [Bugfix] DeepSeek Accuracy #14476 ) to reduce CPU overhead. Identified that the cause of 0 GSM8K score comes from the cuda kernel needs the input to be continuous. Fixed it by make the input contiguous if possible. A better fix will be to change the kernel (help wanted). Reordered some operation in the build function, which ended up costing quite a bit overhead in my timing (p99 tail latency up to 1ms) This is by ensuring there is not GPU -> CPU communication. CPU -> GPU is fine. All the following ran in 8xH200. Performance Test (R1) We are still a bit worse on the short range but we became significantly better on longer range. 64% boost for 6k input. VLLM_USE_V1=1 python benchmarks/benchmark_throughput.py --model /home/vllm-dev/DeepSeek-R1 --load-format dummy --trust-remote-code --input-len 3000 --output-len 1000 --num-prompts 50 --tensor-parallel-size 8 Throughput: 1.09 requests/s, 4342.27 total tokens/s, 1085.57 output tokens/s VLLM_USE_V1=0 python benchmarks/benchmark_throughput.py --model /home/vllm-dev/DeepSeek-R1 --load-format dummy --trust-remote-code --input-len 3000 --output-len 1000 --num-prompts 50 --tensor-parallel-size 8 Throughput: 1.13 requests/s, 4536.67 total tokens/s, 1134.17 output tokens/s VLLM_USE_V1=1 python benchmarks/benchmark_throughput.py --model /home/vllm-dev/DeepSeek-R1 --load-format dummy --trust-remote-code --input-len 6000 --output-len 1000 --num-prompts 50 --tensor-parallel-size 8 Throughput: 0.87 requests/s, 6060.61 total tokens/s, 865.80 output tokens/s VLLM_USE_V1=0 python benchmarks/benchmark_throughput.py --model /home/vllm-dev/DeepSeek-R1 --load-format dummy --trust-remote-code --input-len 6000 --output-len 1000 --num-prompts 50 --tensor-parallel-size 8 Throughput: 0.53 requests/s, 3692.82 total tokens/s, 527.55 output tokens/s Performance Test (Small) We are 15% better for small model for 3k input. VLLM_USE_V1=1 python benchmarks/benchmark_throughput.py --model deepseek-ai/DeepSeek-V2-Lite --load-format dummy --trust-remote-code --input-len 3000 --output-len 1000 --num-prompts 50 Throughput: 3.84 requests/s, 15364.27 total tokens/s, 3841.07 output tokens/s VLLM_USE_V1=0 python benchmarks/benchmark_throughput.py --model deepseek-ai/DeepSeek-V2-Lite --load-format dummy --trust-remote-code --input-len 3000 --output-len 1000 --num-prompts 50 Throughput: 3.32 requests/s, 13275.67 total tokens/s, 3318.92 output tokens/s VLLM_USE_V1=0 python benchmarks/benchmark_throughput.py --model deepseek-ai/DeepSeek-V2-Lite --load-format dummy --trust-remote-code --input-len 3000 --output-len 1000 --num-prompts 50 --enable-chunked-prefill false Throughput: 3.32 requests/s, 13264.68 total tokens/s, 3316.17 output tokens/s Accuracy Test No regression. VLLM_USE_V1="1" lm_eval --model vllm --model_args pretrained=deepseek-ai/DeepSeek-V2-Lite-Chat,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.9,trust_remote_code=True,max_model_len=16384 --task gsm8k --num_fewshot=5 --limit 100 --log_samples --output_path lmeval-results
vllm (pretrained=deepseek-ai/DeepSeek-V2-Lite-Chat,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.9,trust_remote_code=True,max_model_len=16384), gen_kwargs: (None), limit: 100.0, num_fewshot: 5, batch_size: 1
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ | 0.65|± |0.0479|
| | |strict-match | 5|exact_match|↑ | 0.64|± |0.0482|
VLLM_USE_V1="0" lm_eval --model vllm --model_args pretrained=deepseek-ai/DeepSeek-V2-Lite-Chat,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.9,trust_remote_code=True,max_model_len=16384 --task gsm8k --num_fewshot=5 --limit 100 --log_samples --output_path lmeval-results
vllm (pretrained=deepseek-ai/DeepSeek-V2-Lite-Chat,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.9,trust_remote_code=True,max_model_len=16384), gen_kwargs: (None), limit: 100.0, num_fewshot: 5, batch_size: 1
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ | 0.66|± |0.0476|
| | |strict-match | 5|exact_match|↑ | 0.66|± |0.0476| Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions [Perf] Improve MLA on V1 … e3c00a1 Signed-off-by: simon-mo <[email protected]> simon-mo requested review from WoosukKwon , robertgshaw2-redhat , njhill , ywang96 , comaniac and alexm-redhat as code owners March 10, 2025 05:50 Copy link github-actions bot commented Mar 10, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the v1 label Mar 10, 2025 simon-mo requested a review
from LucasWilkinson March 10, 2025 05:51 simon-mo added
the ready ONLY add when PR is ready to merge/full CI is needed label Mar 10, 2025 fix lint … 8cf800f Signed-off-by: simon-mo <[email protected]> tlrmchlsmth approved these changes Mar 10, 2025 View reviewed changes Copy link Collaborator tlrmchlsmth left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment LGTM Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions LucasWilkinson approved these changes Mar 10, 2025 View reviewed changes Copy link Collaborator LucasWilkinson left a comment • edited Loading Uh oh! There was an error while loading. Please reload this page . There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment LGTM left 1 nit. Thanks for working on this! (sorry this fell on your plate) good catch on number 2! my bad for not catching this! I was wondering if it would be better compute on the CPU in V1 but didn't really keep pushing on that, ill try to be more careful about reviewing CPU->GPU transfers in the future Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/v1/attention/backends/mla/common.py Outdated decode_q_pe_input = (decode_q_pe.clone().contiguous() if not decode_q_pe.is_contiguous() else decode_q_pe) Copy link Collaborator LucasWilkinson Mar 10, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment nit: do we need clone here? my understanding is .continuous() will implicitly do a clone if its not contiguous and no-op if it already is: >>> x1 = torch.rand((4,4))
>>> x2 = x1.t()
>>> x1.is_contiguous()
True
>>> x2.is_contiguous()
False
>>> x1.data_ptr()
94306274798528
>>> x1.contiguous().data_ptr()
94306274798528
>>> x2.data_ptr()
94306274798528
>>> x2.contiguous().data_ptr()
94306363886080 Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator LucasWilkinson Mar 10, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment i.e. I think we can drop this line and just do: decode_q_pe[...], decode_k_pe[...] = self.rotary_emb(
attn_metadata.decode.input_positions, decode_q_pe.contiguous(),
decode_k_pe) Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author simon-mo Mar 10, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Yup great point and i verified the perf. clone was a left over from previous debugging but your solution is great! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions simpler code from lucas … f8c28a4 Signed-off-by: simon-mo <[email protected]> simon-mo enabled auto-merge (squash) March 10, 2025 16:13 simon-mo disabled auto-merge March 10, 2025 19:06 Hide details View details simon-mo merged commit fb0acb6 into vllm-project : main Mar 10, 2025 29 of 31 checks passed Uh oh! There was an error while loading. Please reload this page . LucasWilkinson mentioned this pull request Mar 11, 2025 [Bugfix] DeepSeek Accuracy #14476 Merged Copy link Contributor ZhongYingMatrix commented Mar 13, 2025 hi @simon-mo Thx for ur great work! Speaking of D2H operation, I notice that has_context on here would be a single element bool tensor, which incur H2D in following condition operation. Would it has an impact on performance? cc @LucasWilkinson All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator Author simon-mo commented Mar 13, 2025 good find. Fix welcomed! All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . hmellor mentioned this pull request Apr 2, 2025 [Performance]: 0.8.1 vs 0.7.4dev122 R1 H20 performance benchmark test,0.8.1 What is the reason for the 14% performance improvement(throughput tokens/s) #15881 Closed 1 task lulmer pushed a commit
to lulmer/vllm
that referenced
this pull request Apr 7, 2025 [Perf] Improve MLA on V1 ( vllm-project#14540 ) … 8e41390 Signed-off-by: simon-mo <[email protected]>
Signed-off-by: Louis Ulmer <[email protected]> ckhordiasma mentioned this pull request Apr 17, 2025 [do not merge] pr test for nm changes into 2.20 red-hat-data-services/vllm#107 Closed shreyankg pushed a commit
to shreyankg/vllm
that referenced
this pull request May 3, 2025 [Perf] Improve MLA on V1 ( vllm-project#14540 ) … ba35e3b Signed-off-by: simon-mo <[email protected]> Sign up for free to join this conversation on GitHub .
Already have an account? Sign in to comment
|
2025-09-07 17:52:03
|
ca7a2d5f28eac9621474563cdda0e08596222755
|
https://github.com/vllm-project/vllm/pull/14471
| true | true | true | true |
LM_EVAL: lm_eval, gsm8k, gsm8k | PERF: throughput, improvement, improvement | SERVING: vllm serve, serve, Frontend | TEST: test, test, test
|
Copy link Collaborator tlrmchlsmth commented Mar 8, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . Running VLLM_USE_V1=1 vllm serve deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct --tensor_parallel_size=2 --port 8192 --trust-remote-code and then lm_eval --model local-completions --tasks gsm8k --model_args model=deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct,base_url=http://127.0.0.1:8192/v1/completions,num_concurrent=5,max_retries=3,tokenized_requests=False --limit 100 On current main we see: |Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ | 0.06|± |0.0239|
| | |strict-match | 5|exact_match|↑ | 0.00|± |0.0000| This PR: |Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ | 0.77|± |0.0423|
| | |strict-match | 5|exact_match|↑ | 0.77|± |0.0423|
Revert "[Perf] Reduce MLA CPU overheads in V1 ( #14384 )" … c671cd9 This reverts commit dae6896 . tlrmchlsmth requested review from WoosukKwon , robertgshaw2-redhat , njhill , ywang96 , comaniac and alexm-redhat as code owners March 8, 2025 03:13 github-actions bot commented Mar 8, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 tlrmchlsmth mentioned this pull request Mar 8, 2025 [Bugfix][V1] Handle MLA in kv_cache_interface #14462 Merged mergify bot added
the v1 label Mar 8, 2025 simon-mo approved these changes Mar 8, 2025 simon-mo merged commit ca7a2d5 into main Mar 8, 2025 21 of 23 checks passed simon-mo deleted the revert_rope_mla_bug branch March 8, 2025 06:18 simon-mo added a commit
to simon-mo/vllm
that referenced
this pull request Mar 9, 2025 Revert "Revert "[Perf] Reduce MLA CPU overheads in V1 ( vllm-project#1… … ef04b8d …4384 )" ( vllm-project#14471 )"
This reverts commit ca7a2d5 .
Signed-off-by: simon-mo <[email protected]> simon-mo mentioned this pull request Mar 10, 2025 [Perf] Improve MLA on V1 #14540 Merged Alexei-V-Ivanov-AMD added a commit
to ROCm/vllm
that referenced
this pull request Mar 11, 2025 Merging in the latest merge from vllm-project to ROCm ( #472 ) … a699a11 * Fix `head_dim` not existing in all model configs (Transformers backend) ( vllm-project#14141 )
Signed-off-by: Harry Mellor <[email protected]>
* [V0][Metrics] Remove unimplemented `vllm:tokens_total` ( vllm-project#14134 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [V0][Metrics] Deprecate some KV/prefix cache metrics ( vllm-project#14136 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [V1] Simplify stats logging ( vllm-project#14082 )
Signed-off-by: Nick Hill <[email protected]>
* [WIP][[V1][Metrics] Implement max_num_generation_tokens, request_params_n, and request_params_max_tokens metrics ( vllm-project#14055 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [Bugfix] Allow shared_experts skip quantization for DeepSeekV2/V3 ( vllm-project#14100 )
Signed-off-by: mgoin <[email protected]>
* [Kernel] Optimize moe intermediate_cache usage ( vllm-project#13625 )
Signed-off-by: mgoin <[email protected]>
* [Docs] Add GPTQModel ( vllm-project#14056 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]>
* [v1] Add comments to the new ragged paged attention Pallas kernel ( vllm-project#14155 )
Signed-off-by: Xiongfei Wei <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Model] Add support for GraniteMoeShared models ( vllm-project#13313 )
Signed-off-by: Travis Johnson <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [core] moe fp8 block quant tuning support ( vllm-project#14068 )
Signed-off-by: Divakar Verma <[email protected]>
* [Misc] Remove lru_cache in NvmlCudaPlatform ( vllm-project#14156 )
Signed-off-by: Cody Yu <[email protected]>
* [core] Pass all driver env vars to ray workers unless excluded ( vllm-project#14099 )
Signed-off-by: Rui Qiao <[email protected]>
* Use math.prod instead of np.prod for trivial ops ( vllm-project#14142 )
* Fix benchmark_moe.py tuning for CUDA devices ( vllm-project#14164 )
* [platform] add debug logging during inferring the device type ( vllm-project#14195 )
Signed-off-by: youkaichao <[email protected]>
* [sleep mode] error out with expandable_segments ( vllm-project#14189 )
Signed-off-by: youkaichao <[email protected]>
* [doc] add "Failed to infer device type" to faq ( vllm-project#14200 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Restrict MacOS CPU detection ( vllm-project#14210 )
Signed-off-by: mgoin <[email protected]>
* [V1][BugFix] Fix remaining sync engine client shutdown errors/hangs ( vllm-project#13869 )
Signed-off-by: Nick Hill <[email protected]>
* [V0][Metrics] Deprecate some questionable request time metrics ( vllm-project#14135 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [V1][Molmo] Fix get_multimodal_embeddings() in molmo.py ( vllm-project#14161 )
* add cutlass support for blackwell fp8 gemm ( vllm-project#13798 )
* [TPU][Profiler] Support start_profile/stop_profile in TPU worker ( vllm-project#13988 )
Signed-off-by: Siyuan Liu <[email protected]>
Co-authored-by: mgoin <[email protected]>
* Fix performance when `--generation-config` is not `None` ( vllm-project#14223 )
Signed-off-by: Harry Mellor <[email protected]>
* [Frontend] Do `prompt_logprobs` clamping for chat as well as completions ( vllm-project#14225 )
Signed-off-by: Harry Mellor <[email protected]>
* [Docs] Update Dockerfile dependency image ( vllm-project#14215 )
Signed-off-by: mgoin <[email protected]>
* [v1][Metrics] Add design doc ( vllm-project#12745 )
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
* [Security] Serialize using safetensors instead of pickle in Mooncake Pipe ( vllm-project#14228 )
Signed-off-by: KuntaiDu <[email protected]>
* Clean up unused padding_idx variables across many model definitions ( vllm-project#13240 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [ROCm] Disable a few more kernel tests that are broken on ROCm ( vllm-project#14145 )
Signed-off-by: Sage Moore <[email protected]>
* [V1][TPU] TPU multimodal model support for ragged attention ( vllm-project#14158 )
Signed-off-by: Michael Goin <[email protected]>
* [misc] announce china meetup ( vllm-project#14248 )
Signed-off-by: youkaichao <[email protected]>
* Moved numba from common requirements to cuda/rocm specific requirements ( vllm-project#14199 )
Signed-off-by: Nishidha Panpaliya <[email protected]>
* Disable GPTQ AllSpark kernels for CUDA Compiler < 12.0 ( vllm-project#14157 )
Signed-off-by: mgoin <[email protected]>
* [Bugfix] Fix gptq_marlin for deepseek-v3 ( vllm-project#13750 )
Signed-off-by: dangshunya <[email protected]>
Co-authored-by: dangshunya <[email protected]>
* [V1][Bugfix] Do not reset prefix caching metrics ( vllm-project#14235 )
* [Model] New model support for Phi-4-multimodal-instruct ( vllm-project#14119 )
* [V1] EP/TP MoE + DP Attention ( vllm-project#13931 )
* [platforms] improve rocm debugging info ( vllm-project#14257 )
* Temporarily disable test_awq_gemm_opcheck ( vllm-project#14251 )
Signed-off-by: mgoin <[email protected]>
* [Frontend] Allow return_tokens_as_token_ids to be passed as a request param ( vllm-project#14066 )
Signed-off-by: Benjamin Chislett <[email protected]>
* [Misc][V1] Avoid using `envs.VLLM_USE_V1` in mm processing ( vllm-project#14256 )
Signed-off-by: Roger Wang <[email protected]>
* [Bugfix][V1] Fix allowed_token_ids for v1 Sampler ( vllm-project#14169 )
Signed-off-by: Lu Fang <[email protected]>
* [Doc] Update nginx guide: remove privileged from vllm container run and add target GPU ID ( vllm-project#14217 )
Signed-off-by: Iacopo Poli <[email protected]>
* [Doc] [3/N] Refer code examples for common cases in dev multimodal processor ( vllm-project#14278 )
Signed-off-by: DarkLight1337 <[email protected]>
* Small update for external_launcher backend docs ( vllm-project#14288 )
* [V1][Frontend] Add Testing For V1 Runtime Parameters ( vllm-project#14159 )
Signed-off-by: [email protected] <[email protected]>
* [LoRA] Remove linear hack outside transformers backend ( vllm-project#14177 )
Signed-off-by: Isotr0py <[email protected]>
* [Misc] Add Qwen2MoeForCausalLM moe tuning support ( vllm-project#14276 )
Signed-off-by: Jee Jee Li <[email protected]>
* prefix_caching.md: Fixed typo ( vllm-project#14293 )
Signed-off-by: Daivid Savernin-Frenk <[email protected]>
* [Bugfix] Fix broken vision language example ( vllm-project#14292 )
Signed-off-by: Isotr0py <[email protected]>
* [Docs] Add Meta Slides ( vllm-project#14297 )
Signed-off-by: simon-mo <[email protected]>
* [V1][Minor] Remove obsolete FIXME comment ( vllm-project#14304 )
Signed-off-by: Nick Hill <[email protected]>
* Deprecate `best_of` Sampling Parameter in anticipation for vLLM V1 ( vllm-project#13997 )
Signed-off-by: vincent-4 <[email protected]>
Signed-off-by: Brayden Zhong <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Co-authored-by: Brayden Zhong <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
* [V1][BugFix] Fix for mixed top_k batch ( vllm-project#14301 )
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Ye Cao <[email protected]>
* [misc] Add FlashMLA as a new option of VLLM_ATTENTION_BACKEND env ( vllm-project#14267 )
* [V1][Easy] Add empty allowed_token_ids in the v1 sampler test ( vllm-project#14308 )
Signed-off-by: Lu Fang <[email protected]>
* init
Signed-off-by: Sage Moore <[email protected]>
* [Bugfix] Fix DeepSeek MTP crash when using TP1ModelRunner with CUDA graph due to shape mismatch ( vllm-project#14237 )
Signed-off-by: pyc96 <[email protected]>
* [Bugfix] Remove num_tokens_across_dp ( vllm-project#14302 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [BugFix] Fix prefix caching V0 MLA ( vllm-project#14255 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Ying Zhong <[email protected]>
* [CI/Build] Use spawn multiprocessing mode for V1 test pipeline ( vllm-project#14243 )
Signed-off-by: Russell Bryant <[email protected]>
* Add benchmark for DeepGEMM and vLLM Block FP8 Dense GEMM ( vllm-project#13917 )
Signed-off-by: mgoin <[email protected]>
* [Build] Add UV_HTTP_TIMEOUT to avoid timeout during installation ( vllm-project#13850 )
Signed-off-by: Yuan Tang <[email protected]>
* [BugFix] MLA + V1, illegal memory access and accuracy issues ( vllm-project#14253 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [misc] Mention `ray list nodes` command to troubleshoot ray issues ( vllm-project#14318 )
Signed-off-by: Rui Qiao <[email protected]>
* [Bugfix][Structured Output] Support outlines engine with reasoning outputs for DeepSeek R1 ( vllm-project#14114 )
* [V1] LoRA - Enable more V1 tests ( vllm-project#14315 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* [Bugfix][CI] ALiBi test case in xformers multi_query_kv_attention ( vllm-project#11301 )
* [Hardware] Update the flash attn tag to support Blackwell ( vllm-project#14244 )
* [Model] Update Paligemma multimodal processing with PromptUpdate ( vllm-project#14015 )
Signed-off-by: Kyle Huang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [V1][VLM][Pixtral-HF] Support Pixtral-HF on V1 ( vllm-project#14275 )
Signed-off-by: Linkun Chen <[email protected]>
* [Core] Optimizing cross-attention `QKVParallelLinear` computation ( vllm-project#12325 )
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Co-authored-by: NickLucche <[email protected]>
* [Frontend][Docs] Transcription API streaming ( vllm-project#13301 )
Signed-off-by: NickLucche <[email protected]>
* [Doc] Update reasoning with stream example to use OpenAI library ( vllm-project#14077 )
Signed-off-by: liuyanyi <[email protected]>
* [Doc] Correct beam_search using in generative_models.md ( vllm-project#14363 )
* [Kernel] [V1] Improved performance for V1 Triton (ROCm) backend ( vllm-project#14152 )
* [Bugfix][Core] fix abort_seq_group and memory leak when n>1 ( vllm-project#14326 )
Signed-off-by: courage17340 <[email protected]>
* [Core] Don't use cache during multi-modal profiling ( vllm-project#14336 )
* [Doc] Fix date typo in README.md ( vllm-project#14366 )
Signed-off-by: Jitse Klomp <[email protected]>
* [RLHF] use worker_extension_cls for compatibility with V0 and V1 ( vllm-project#14185 )
Signed-off-by: youkaichao <[email protected]>
* Reinstate `best_of` for V0 ( vllm-project#14356 )
Signed-off-by: Harry Mellor <[email protected]>
* Adding cpu inference with VXE ISA for s390x architecture ( vllm-project#12613 )
Signed-off-by: Dilip Gowda Bhagavan <[email protected]>
Signed-off-by: Rishika Kedia <[email protected]>
Co-authored-by: Rishika Kedia <[email protected]>
* Add authors to license header. ( vllm-project#14371 )
Signed-off-by: Thomas Parnell <[email protected]>
Co-authored-by: Burkhard Ringlein <[email protected]>
Co-authored-by: Jan van Lunteren <[email protected]>
* Fix mla prefill context performance ( vllm-project#13897 )
Signed-off-by: ZhongYingMatrix <[email protected]>
* [V1] Do not detokenize if sampling param detokenize is False ( vllm-project#14224 )
Signed-off-by: Himanshu Jaju <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
* [Distributed] Add enable_expert_parallel arg ( vllm-project#14305 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [CI/Build] Use uv python for docker rather than ppa:deadsnakes/ppa ( vllm-project#13569 )
Signed-off-by: mgoin <[email protected]>
* [CI] Disable spawn when running V1 Test ( vllm-project#14345 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Kernel] Add needs_fixed_stride_order tag to most GEMMs ( vllm-project#14306 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [Bugfix] Fix use_direct_call condition in FusedMoE layer for ( vllm-project#14382 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [Bug] Fix Attention when ignored in by quant_method ( vllm-project#14313 )
Signed-off-by: mgoin <[email protected]>
* [V1][Bugfix] Standardize quantized kv cache rejection for attention backends ( vllm-project#14221 )
Signed-off-by: mgoin <[email protected]>
* [Docs] Add nsight guide to profiling docs ( vllm-project#14298 )
Signed-off-by: mgoin <[email protected]>
* cleanup boolean logic
Signed-off-by: Sage Moore <[email protected]>
* [Hardware][TPU]Enable ragged paged attention kernel and resolve recompilation issue ( vllm-project#14310 )
Signed-off-by: Chengji Yao <[email protected]>
* [Doc] Fix a typo ( vllm-project#14385 )
* [Bugfix] Correctly call `cudaProfilerStop` in benchmarks script ( vllm-project#14183 )
Signed-off-by: Brayden Zhong <[email protected]>
* [Perf] Reduce MLA CPU overheads in V1 ( vllm-project#14384 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
* [FP8] Refactor apply_fp8_linear and apply_fp8_linear_generic into an object ( vllm-project#14390 )
Signed-off-by: luka <[email protected]>
* [BugFix] Illegal Memory Access in the blockwise cutlass fp8 GEMMs ( vllm-project#14396 )
* [Bugfix] Fix JambaForCausalLM LoRA ( vllm-project#14370 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Build] Add nightly wheel fallback when latest commit wheel unavailable ( vllm-project#14358 )
Signed-off-by: Isotr0py <[email protected]>
* OpenVINO: added CPU-like conditions ( vllm-project#14338 )
Signed-off-by: Ilya Lavrenov <[email protected]>
* [GH] Auto-apply multi-modality label to relevant PRs ( vllm-project#14402 )
Signed-off-by: DarkLight1337 <[email protected]>
* correct wrong markdown syntax ( vllm-project#14414 )
Signed-off-by: vincent-pli <[email protected]>
* [Bugfix] Further clean up LoRA test ( vllm-project#14422 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Bugfix] Clean up multi-modal processors ( vllm-project#14417 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] Set default value of seed to None ( vllm-project#14274 )
Signed-off-by: மனோஜ்குமார் பழனிச்சாமி <[email protected]>
* [BUGFIX] Skip tokenization support for throughput benchmark ( vllm-project#12712 )
Signed-off-by: root <[email protected]>
Signed-off-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
* Fix missing `kv_caches` and `attn_metadata` in `OpenVINOCausalLM` ( vllm-project#14271 )
Signed-off-by: Harry Mellor <[email protected]>
* Use the optimized block sizes after tuning the kernel. ( vllm-project#14329 )
* [V1][Core] Support for Structured Outputs ( vllm-project#12388 )
Signed-off-by: Aaron Pham <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
* [Doc] Update prefix_caching.md to match the example image ( vllm-project#14420 )
* [Benchmarks] Make detokenization optional in benchmark scripts ( vllm-project#11697 )
Signed-off-by: Jeremy Arnold <[email protected]>
* comments
Signed-off-by: Sage Moore <[email protected]>
* [Kernel] optimize performance of gptq marlin kernel when n is small ( vllm-project#14138 )
Signed-off-by: Jinzhen Lin <[email protected]>
* [Misc] Add Phi4-MM example ( vllm-project#14343 )
Signed-off-by: Jee Jee Li <[email protected]>
* [v1] torch.compile integration explanation ( vllm-project#14437 )
Signed-off-by: youkaichao <[email protected]>
* [V1] Eagerly remove finished requests from the batch ( vllm-project#14388 )
Signed-off-by: Nick Hill <[email protected]>
* [V1][Metrics] Fix traceback with preemptions+LoRA ( vllm-project#14220 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [Bugfix] Fix torch_xla which can't handle None seed introduced in vllm-project#14274 ( vllm-project#14459 )
Signed-off-by: Yarong Mu <[email protected]>
* [V1] Prompt logprobs + APC compatibility; prompt logprobs reqs cannot fill APC ( vllm-project#13949 )
* [Bugfix][V1] Handle MLA in kv_cache_interface ( vllm-project#14462 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* Revert "[Perf] Reduce MLA CPU overheads in V1 ( vllm-project#14384 )" ( vllm-project#14471 )
* [Bugfix][Disaggregated] Add a check in send_kv_caches_and_hidden_states and fix the reshape of the KVCache ( vllm-project#14369 )
Signed-off-by: Mathis Felardos <[email protected]>
* [MISC][V1] Register process killing handler only in the main thread ( vllm-project#14380 )
Signed-off-by: Cody Yu <[email protected]>
* [core] add `extra_args` to `SamplingParams` ( vllm-project#13300 )
Signed-off-by: Aviv Keshet <[email protected]>
* [CI/Build] refactor: set timezone of container to UTC ( vllm-project#12888 )
Signed-off-by: Roger Meier <[email protected]>
* Default to `generation_config` from model ( vllm-project#12622 )
Signed-off-by: Harry Mellor <[email protected]>
* [Doc]add doc for Qwen models tool calling ( vllm-project#14478 )
Signed-off-by: WangErXiao <[email protected]>
* [Doc] Added QwQ-32B to the supported models list in the reasoning out… ( vllm-project#14479 )
Signed-off-by: WangErXiao <[email protected]>
* [Bugfix] Make the deviceprofiler include LoRA memory. ( vllm-project#14469 )
Signed-off-by: Jee Jee Li <[email protected]>
* Add training doc signposting to TRL ( vllm-project#14439 )
Signed-off-by: Harry Mellor <[email protected]>
* [Build/BugFix] Fix hopper 12.8 build ( vllm-project#14354 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
* Add RLHF document ( vllm-project#14482 )
Signed-off-by: Harry Mellor <[email protected]>
* [CI/Build] Use a fixed seed to avoid flaky tests ( vllm-project#14480 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] TPU - Add tensor parallel support via Ray ( vllm-project#13618 )
Signed-off-by: Alexander Matveev <[email protected]>
* [VLM] Add TP support for Phi-4-MM ( vllm-project#14453 )
Signed-off-by: Isotr0py <[email protected]>
* [Misc] add `use_tqdm_on_load` to reduce logs ( vllm-project#14407 )
Signed-off-by: Aaron Pham <[email protected]>
* [V1][Core] Fix memory issue with logits & sampling ( vllm-project#13776 )
Signed-off-by: Roger Wang <[email protected]>
* [benchmarks] Add option to use unique jsonschema for each request ( vllm-project#14457 )
Signed-off-by: Russell Bryant <[email protected]>
* [Misc] Don't run ruff at all on 3rd party libs ( vllm-project#14493 )
Signed-off-by: DarkLight1337 <[email protected]>
* Move requirements into their own directory ( vllm-project#12547 )
Signed-off-by: Harry Mellor <[email protected]>
* [Bugfix] DeepSeek Accuracy ( vllm-project#14476 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Bugfix] Fix profiling OOM and decouple encoder multimodal profiling ( vllm-project#14361 )
Signed-off-by: Isotr0py <[email protected]>
* Update CODEOWNERS for structured output ( vllm-project#14496 )
Signed-off-by: Russell Bryant <[email protected]>
* [Misc] Upgrade to Python 3.9 typing for additional directories ( vllm-project#14492 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] Support bad_words in sampler ( vllm-project#13376 )
Signed-off-by: 22quinn <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
* Revert "[V1][Core] Fix memory issue with logits & sampling" ( vllm-project#14504 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Attention] Default to FlashMLA backend for MLA ( vllm-project#14451 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
* [V1][TPU] Remove unnecessary padding for running on TPU. ( vllm-project#14467 )
* [Feat] Support chunked prefill for LMCache connector ( vllm-project#14505 )
Signed-off-by: YaoJiayi <[email protected]>
* [Bugfix] Fix tqdm progress bar when SamplingParams.n > 1 ( vllm-project#12428 )
Signed-off-by: Yuchen Yan <[email protected]>
* [Bugfix] Revert QKVCrossParallelLinear usage in Mllama to keep BNB quantization work ( vllm-project#14498 )
Signed-off-by: Isotr0py <[email protected]>
* [Hardware][TPU] Fix the recompiling issue in logits processor after warmup ( vllm-project#14510 )
Signed-off-by: Chengji Yao <[email protected]>
* [Misc] Ensure out-of-tree quantization method recognize by cli args ( vllm-project#14328 )
Signed-off-by: liuyanyi <[email protected]>
* [Bugfix] Wrong requirements path - rocm ( vllm-project#14527 )
Signed-off-by: Martin Hoyer <[email protected]>
* [Feature] Consolidate performance benchmark datasets ( vllm-project#14036 )
Signed-off-by: Jennifer Zhao <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Jennifer Zhao <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Misc] Add log information for handle_process_request. ( vllm-project#14130 )
Signed-off-by: chaunceyjiang <[email protected]>
* [Docs] Mention `model_impl` arg when explaining Transformers fallback ( vllm-project#14552 )
Signed-off-by: Harry Mellor <[email protected]>
* [Frontend] support image embeds ( vllm-project#13955 )
Signed-off-by: chaunceyjiang <[email protected]>
* [Kernel] Add more dtype support for GGUF kernels ( vllm-project#14043 )
Signed-off-by: SzymonOzog <[email protected]>
Signed-off-by: SzymonOzog <[email protected]>
* [Doc] Update PaliGemma note to a warning ( vllm-project#14565 )
Signed-off-by: DarkLight1337 <[email protected]>
* V1 rocm support ( #469 )
* Initial commit for V1 successfull compilation
* Small improvement for linear
* Small improvement for linear
* making use of forward_cuda for all except ROPE in llama
---------
Co-authored-by: maleksan85 <[email protected]>
* nightly_fixed_aiter_integration_final_20250305 README update ( #470 )
* nightly_fixed_aiter_integration_final_20250305 README update (perf results only)
* Update Docker Manifest git hash
* Update Docker Manifest and added nightly_fixed_aiter_integration_final_20250305
* some more updates
* Update AITER section with example
* Updated AITER command with larger batch size and model name
* Fixing typo
* Removed --max-model-len in AITER command
* Updating AITER instructions
* typo
* Another typo
* Whitespace
* modifying whats new section
* Another typo
---------
Co-authored-by: arakowsk-amd <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
---------
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: Xiongfei Wei <[email protected]>
Signed-off-by: Travis Johnson <[email protected]>
Signed-off-by: Divakar Verma <[email protected]>
Signed-off-by: Cody Yu <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Siyuan Liu <[email protected]>
Signed-off-by: KuntaiDu <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Sage Moore <[email protected]>
Signed-off-by: Michael Goin <[email protected]>
Signed-off-by: Nishidha Panpaliya <[email protected]>
Signed-off-by: dangshunya <[email protected]>
Signed-off-by: Benjamin Chislett <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Lu Fang <[email protected]>
Signed-off-by: Iacopo Poli <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: Daivid Savernin-Frenk <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: vincent-4 <[email protected]>
Signed-off-by: Brayden Zhong <[email protected]>
Signed-off-by: pyc96 <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Kyle Huang <[email protected]>
Signed-off-by: Linkun Chen <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: liuyanyi <[email protected]>
Signed-off-by: courage17340 <[email protected]>
Signed-off-by: Jitse Klomp <[email protected]>
Signed-off-by: Dilip Gowda Bhagavan <[email protected]>
Signed-off-by: Rishika Kedia <[email protected]>
Signed-off-by: Thomas Parnell <[email protected]>
Signed-off-by: ZhongYingMatrix <[email protected]>
Signed-off-by: Himanshu Jaju <[email protected]>
Signed-off-by: Chengji Yao <[email protected]>
Signed-off-by: luka <[email protected]>
Signed-off-by: Ilya Lavrenov <[email protected]>
Signed-off-by: vincent-pli <[email protected]>
Signed-off-by: மனோஜ்குமார் பழனிச்சாமி <[email protected]>
Signed-off-by: root <[email protected]>
Signed-off-by: Aleksandr Malyshev <[email protected]>
Signed-off-by: Aaron Pham <[email protected]>
Signed-off-by: Jeremy Arnold <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]>
Signed-off-by: Yarong Mu <[email protected]>
Signed-off-by: Mathis Felardos <[email protected]>
Signed-off-by: Aviv Keshet <[email protected]>
Signed-off-by: Roger Meier <[email protected]>
Signed-off-by: WangErXiao <[email protected]>
Signed-off-by: Alexander Matveev <[email protected]>
Signed-off-by: 22quinn <[email protected]>
Signed-off-by: YaoJiayi <[email protected]>
Signed-off-by: Yuchen Yan <[email protected]>
Signed-off-by: Martin Hoyer <[email protected]>
Signed-off-by: Jennifer Zhao <[email protected]>
Signed-off-by: chaunceyjiang <[email protected]>
Signed-off-by: SzymonOzog <[email protected]>
Signed-off-by: SzymonOzog <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Mark McLoughlin <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Qubitium-ModelCloud <[email protected]>
Co-authored-by: mgoin <[email protected]>
Co-authored-by: iefgnoix <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Zhanwen Chen <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: lkchen <[email protected]>
Co-authored-by: kushanam <[email protected]>
Co-authored-by: Siyuan Liu <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Sage Moore <[email protected]>
Co-authored-by: Nishidha <[email protected]>
Co-authored-by: rainkert <[email protected]>
Co-authored-by: dangshunya <[email protected]>
Co-authored-by: Congcong Chen <[email protected]>
Co-authored-by: Benjamin Chislett <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Lu Fang <[email protected]>
Co-authored-by: Iacopo Poli <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Zhe Zhang <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: DaividFrank <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Vincent <[email protected]>
Co-authored-by: Brayden Zhong <[email protected]>
Co-authored-by: Ye Cao <[email protected]>
Co-authored-by: Serena <[email protected]>
Co-authored-by: pyc96 <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Ying Zhong <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Ce Gao <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: Pavani Majety <[email protected]>
Co-authored-by: kYLe <[email protected]>
Co-authored-by: NickLucche <[email protected]>
Co-authored-by: Yanyi Liu <[email protected]>
Co-authored-by: Irina Yuryeva <[email protected]>
Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: courage17340 <[email protected]>
Co-authored-by: Jitse Klomp <[email protected]>
Co-authored-by: Dilip Gowda Bhagavan <[email protected]>
Co-authored-by: Rishika Kedia <[email protected]>
Co-authored-by: Burkhard Ringlein <[email protected]>
Co-authored-by: Jan van Lunteren <[email protected]>
Co-authored-by: Himanshu Jaju <[email protected]>
Co-authored-by: Chengji Yao <[email protected]>
Co-authored-by: Daniel Li <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Peng Li <[email protected]>
Co-authored-by: மனோஜ்குமார் பழனிச்சாமி <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: Aaron Pham <[email protected]>
Co-authored-by: York-RDWang <[email protected]>
Co-authored-by: Jeremy Arnold <[email protected]>
Co-authored-by: Jinzhen Lin <[email protected]>
Co-authored-by: yarongmu-google <[email protected]>
Co-authored-by: afeldman-nm <[email protected]>
Co-authored-by: Mathis Felardos <[email protected]>
Co-authored-by: Aviv Keshet <[email protected]>
Co-authored-by: Roger Meier <[email protected]>
Co-authored-by: Robin <[email protected]>
Co-authored-by: Alexander Matveev <[email protected]>
Co-authored-by: 22quinn <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Jiayi Yao <[email protected]>
Co-authored-by: Yuchen Yan <[email protected]>
Co-authored-by: Martin Hoyer <[email protected]>
Co-authored-by: Jennifer Zhao <[email protected]>
Co-authored-by: Jennifer Zhao <[email protected]>
Co-authored-by: Chauncey <[email protected]>
Co-authored-by: Szymon Ożóg <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Mcirino1 <[email protected]>
Co-authored-by: arakowsk-amd <[email protected]> captainzmc pushed a commit
to captainzmc/vllm
that referenced
this pull request Mar 12, 2025 Revert "[Perf] Reduce MLA CPU overheads in V1 ( vllm-project#14384 )" ( v… … f08a8d3 …llm-project#14471 ) LucasWilkinson mentioned this pull request Mar 13, 2025 [Attention] Remove slow setattr in MLA #14769 Merged lulmer pushed a commit
to lulmer/vllm
that referenced
this pull request Apr 7, 2025 Revert "[Perf] Reduce MLA CPU overheads in V1 ( vllm-project#14384 )" ( v… … 0492d83 …llm-project#14471 )
Signed-off-by: Louis Ulmer <[email protected]> ckhordiasma mentioned this pull request Apr 17, 2025 [do not merge] pr test for nm changes into 2.20 red-hat-data-services/vllm#107 Closed shreyankg pushed a commit
to shreyankg/vllm
that referenced
this pull request May 3, 2025 Revert "[Perf] Reduce MLA CPU overheads in V1 ( vllm-project#14384 )" ( v… … 7e10bb8 …llm-project#14471 )
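As a companion to the gsm8k runs above, a hedged sketch of how the served endpoint can be spot-checked before launching lm_eval; it assumes the same `vllm serve` invocation from the thread (OpenAI-compatible server on port 8192), and the prompt and token budget here are purely illustrative.

```python
from openai import OpenAI

# `vllm serve` exposes an OpenAI-compatible API; a local server ignores the
# api_key, but the client library still requires one to be set.
client = OpenAI(base_url="http://127.0.0.1:8192/v1", api_key="EMPTY")

# Quick sanity probe of output quality before running the full lm_eval pass.
resp = client.completions.create(
    model="deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct",
    prompt="Q: A farmer has 12 crates with 7 apples each. How many apples in total?\nA:",
    max_tokens=32,
    temperature=0.0,
)
print(resp.choices[0].text)
```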
|
2025-09-07 17:52:06
|
dae68969774e41b93b01cd31171ca033a92b574a
|
https://github.com/vllm-project/vllm/pull/14384
| false | true | true | true |
PERF: throughput, throughput, req/s | SERVING: Frontend, Frontend, Frontend | TEST: test, test, test
|
Collaborator LucasWilkinson commented Mar 6, 2025 • edited by github-actions bot Some temporary hacks to reduce CPU overheads in MLA caused by rotary embeddings (not in torch.compile, or a cuda-graph) [Main / This PR comparison attachments not captured in this extract] reduce cpu overheads … 6e7928c Signed-off-by: Lucas Wilkinson <[email protected]> LucasWilkinson requested review from WoosukKwon , robertgshaw2-redhat , njhill , ywang96 , comaniac and alexm-redhat as code owners March 6, 2025 21:38 github-actions bot commented Mar 6, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 mergify bot added
the v1 label Mar 6, 2025 add a todo … 0f6abfb Signed-off-by: Lucas Wilkinson <[email protected]> mgoin approved these changes Mar 6, 2025 Member mgoin left a comment This is unfortunately an easy footgun to trigger, nice find. cc @WoosukKwon mgoin added ready ONLY add when PR is ready to merge/full CI is needed performance Performance-related issues labels Mar 6, 2025 WoosukKwon requested changes Mar 6, 2025 vllm/model_executor/layers/rotary_embedding.py Comment on lines -164 to -165 self.cos_sin_cache = self.cos_sin_cache.to(query.device, dtype=query.dtype) Collaborator WoosukKwon Mar 6, 2025 Do we actually know what this line of code is for? Collaborator Author LucasWilkinson Mar 6, 2025 no :/ it doesnt appear to be called, but just didn't want to create behavior change in case there was a model that needs it. I can pull it out completely and we can just see if we get reports of breakages vllm/v1/attention/backends/mla/common.py Outdated Collaborator Author LucasWilkinson commented Mar 6, 2025 8xH200, DeepSeek-R1, VLLM_USE_V1=1 VLLM_ATTENTION_BACKEND=FLASHMLA VLLM_USE_FLASHINFER_SAMPLER=1
Main:
backend input_tokens output_tokens output_toks/s req/s median_itl_ms median_ttft_ms
2 vllm 1000 1000 1095.323697 1.095324 40.931626 149.658605
1 vllm 5000 1000 517.327850 0.517328 39.956240 5627.535715
3 vllm 10000 1000 315.639817 0.315640 39.697455 57821.907031
0 vllm 32000 1000 106.821047 0.106821 40.109005 193232.262791
This PR:
backend input_tokens output_tokens output_toks/s req/s median_itl_ms median_ttft_ms
2 vllm 1000 1000 1326.682856 1.326683 29.775325 2541.827728
1 vllm 5000 1000 644.308764 0.644309 32.297487 5495.584260
3 vllm 10000 1000 387.664650 0.387665 31.273896 49202.113080
0 vllm 32000 1000 127.601311 0.127601 31.530342 166538.112456
👍 1 WoosukKwon reacted with thumbs up emoji 👀 2 mgoin and MichoChan reacted with eyes emoji WoosukKwon reviewed Mar 6, 2025 vllm/model_executor/layers/rotary_embedding.py Outdated review comments + cleanup … 4e1ef0d Signed-off-by: Lucas Wilkinson <[email protected]> WoosukKwon approved these changes Mar 7, 2025 tlrmchlsmth approved these changes Mar 7, 2025 vllm-bot merged commit dae6896 into vllm-project : main Mar 7, 2025 33 of 35 checks passed tlrmchlsmth added a commit
that referenced
this pull request Mar 8, 2025 Revert "[Perf] Reduce MLA CPU overheads in V1 ( #14384 )" … c671cd9 This reverts commit dae6896 . LucasWilkinson mentioned this pull request Mar 8, 2025 [Bugfix] DeepSeek Accuracy #14476 Merged simon-mo pushed a commit
that referenced
this pull request Mar 8, 2025 Revert "[Perf] Reduce MLA CPU overheads in V1 ( #14384 )" ( #14471 ) ca7a2d5 simon-mo added a commit
to simon-mo/vllm
that referenced
this pull request Mar 9, 2025 Revert "Revert "[Perf] Reduce MLA CPU overheads in V1 ( vllm-project#1… … ef04b8d …4384 )" ( vllm-project#14471 )"
This reverts commit ca7a2d5 .
Signed-off-by: simon-mo <[email protected]> simon-mo mentioned this pull request Mar 10, 2025 [Perf] Improve MLA on V1 #14540 Merged Alexei-V-Ivanov-AMD added a commit
to ROCm/vllm
that referenced
this pull request Mar 11, 2025 Merging in the latest merge from vllm-project to ROCm ( #472 ) … a699a11 * Fix `head_dim` not existing in all model configs (Transformers backend) ( vllm-project#14141 )
Signed-off-by: Harry Mellor <[email protected]>
* [V0][Metrics] Remove unimplemented `vllm:tokens_total` ( vllm-project#14134 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [V0][Metrics] Deprecate some KV/prefix cache metrics ( vllm-project#14136 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [V1] Simplify stats logging ( vllm-project#14082 )
Signed-off-by: Nick Hill <[email protected]>
* [WIP][[V1][Metrics] Implement max_num_generation_tokens, request_params_n, and request_params_max_tokens metrics ( vllm-project#14055 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [Bugfix] Allow shared_experts skip quantization for DeepSeekV2/V3 ( vllm-project#14100 )
Signed-off-by: mgoin <[email protected]>
* [Kernel] Optimize moe intermediate_cache usage ( vllm-project#13625 )
Signed-off-by: mgoin <[email protected]>
* [Docs] Add GPTQModel ( vllm-project#14056 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]>
* [v1] Add comments to the new ragged paged attention Pallas kernel ( vllm-project#14155 )
Signed-off-by: Xiongfei Wei <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Model] Add support for GraniteMoeShared models ( vllm-project#13313 )
Signed-off-by: Travis Johnson <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [core] moe fp8 block quant tuning support ( vllm-project#14068 )
Signed-off-by: Divakar Verma <[email protected]>
* [Misc] Remove lru_cache in NvmlCudaPlatform ( vllm-project#14156 )
Signed-off-by: Cody Yu <[email protected]>
* [core] Pass all driver env vars to ray workers unless excluded ( vllm-project#14099 )
Signed-off-by: Rui Qiao <[email protected]>
* Use math.prod instead of np.prod for trivial ops ( vllm-project#14142 )
* Fix benchmark_moe.py tuning for CUDA devices ( vllm-project#14164 )
* [platform] add debug logging during inferring the device type ( vllm-project#14195 )
Signed-off-by: youkaichao <[email protected]>
* [sleep mode] error out with expandable_segments ( vllm-project#14189 )
Signed-off-by: youkaichao <[email protected]>
* [doc] add "Failed to infer device type" to faq ( vllm-project#14200 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Restrict MacOS CPU detection ( vllm-project#14210 )
Signed-off-by: mgoin <[email protected]>
* [V1][BugFix] Fix remaining sync engine client shutdown errors/hangs ( vllm-project#13869 )
Signed-off-by: Nick Hill <[email protected]>
* [V0][Metrics] Deprecate some questionable request time metrics ( vllm-project#14135 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [V1][Molmo] Fix get_multimodal_embeddings() in molmo.py ( vllm-project#14161 )
* add cutlass support for blackwell fp8 gemm ( vllm-project#13798 )
* [TPU][Profiler] Support start_profile/stop_profile in TPU worker ( vllm-project#13988 )
Signed-off-by: Siyuan Liu <[email protected]>
Co-authored-by: mgoin <[email protected]>
* Fix performance when `--generation-config` is not `None` ( vllm-project#14223 )
Signed-off-by: Harry Mellor <[email protected]>
* [Frontend] Do `prompt_logprobs` clamping for chat as well as completions ( vllm-project#14225 )
Signed-off-by: Harry Mellor <[email protected]>
* [Docs] Update Dockerfile dependency image ( vllm-project#14215 )
Signed-off-by: mgoin <[email protected]>
* [v1][Metrics] Add design doc ( vllm-project#12745 )
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
* [Security] Serialize using safetensors instead of pickle in Mooncake Pipe ( vllm-project#14228 )
Signed-off-by: KuntaiDu <[email protected]>
* Clean up unused padding_idx variables across many model definitions ( vllm-project#13240 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [ROCm] Disable a few more kernel tests that are broken on ROCm ( vllm-project#14145 )
Signed-off-by: Sage Moore <[email protected]>
* [V1][TPU] TPU multimodal model support for ragged attention ( vllm-project#14158 )
Signed-off-by: Michael Goin <[email protected]>
* [misc] announce china meetup ( vllm-project#14248 )
Signed-off-by: youkaichao <[email protected]>
* Moved numba from common requirements to cuda/rocm specific requirements ( vllm-project#14199 )
Signed-off-by: Nishidha Panpaliya <[email protected]>
* Disable GPTQ AllSpark kernels for CUDA Compiler < 12.0 ( vllm-project#14157 )
Signed-off-by: mgoin <[email protected]>
* [Bugfix] Fix gptq_marlin for deepseek-v3 ( vllm-project#13750 )
Signed-off-by: dangshunya <[email protected]>
Co-authored-by: dangshunya <[email protected]>
* [V1][Bugfix] Do not reset prefix caching metrics ( vllm-project#14235 )
* [Model] New model support for Phi-4-multimodal-instruct ( vllm-project#14119 )
* [V1] EP/TP MoE + DP Attention ( vllm-project#13931 )
* [platforms] improve rocm debugging info ( vllm-project#14257 )
* Temporarily disable test_awq_gemm_opcheck ( vllm-project#14251 )
Signed-off-by: mgoin <[email protected]>
* [Frontend] Allow return_tokens_as_token_ids to be passed as a request param ( vllm-project#14066 )
Signed-off-by: Benjamin Chislett <[email protected]>
* [Misc][V1] Avoid using `envs.VLLM_USE_V1` in mm processing ( vllm-project#14256 )
Signed-off-by: Roger Wang <[email protected]>
* [Bugfix][V1] Fix allowed_token_ids for v1 Sampler ( vllm-project#14169 )
Signed-off-by: Lu Fang <[email protected]>
* [Doc] Update nginx guide: remove privileged from vllm container run and add target GPU ID ( vllm-project#14217 )
Signed-off-by: Iacopo Poli <[email protected]>
* [Doc] [3/N] Refer code examples for common cases in dev multimodal processor ( vllm-project#14278 )
Signed-off-by: DarkLight1337 <[email protected]>
* Small update for external_launcher backend docs ( vllm-project#14288 )
* [V1][Frontend] Add Testing For V1 Runtime Parameters ( vllm-project#14159 )
Signed-off-by: [email protected] <[email protected]>
* [LoRA] Remove linear hack outside transformers backend ( vllm-project#14177 )
Signed-off-by: Isotr0py <[email protected]>
* [Misc] Add Qwen2MoeForCausalLM moe tuning support ( vllm-project#14276 )
Signed-off-by: Jee Jee Li <[email protected]>
* prefix_caching.md: Fixed typo ( vllm-project#14293 )
Signed-off-by: Daivid Savernin-Frenk <[email protected]>
* [Bugfix] Fix broken vision language example ( vllm-project#14292 )
Signed-off-by: Isotr0py <[email protected]>
* [Docs] Add Meta Slides ( vllm-project#14297 )
Signed-off-by: simon-mo <[email protected]>
* [V1][Minor] Remove obsolete FIXME comment ( vllm-project#14304 )
Signed-off-by: Nick Hill <[email protected]>
* Deprecate `best_of` Sampling Parameter in anticipation for vLLM V1 ( vllm-project#13997 )
Signed-off-by: vincent-4 <[email protected]>
Signed-off-by: Brayden Zhong <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Co-authored-by: Brayden Zhong <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
* [V1][BugFix] Fix for mixed top_k batch ( vllm-project#14301 )
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Ye Cao <[email protected]>
* [misc] Add FlashMLA as a new option of VLLM_ATTENTION_BACKEND env ( vllm-project#14267 )
* [V1][Easy] Add empty allowed_token_ids in the v1 sampler test ( vllm-project#14308 )
Signed-off-by: Lu Fang <[email protected]>
* init
Signed-off-by: Sage Moore <[email protected]>
* [Bugfix] Fix DeepSeek MTP crash when using TP1ModelRunner with CUDA graph due to shape mismatch ( vllm-project#14237 )
Signed-off-by: pyc96 <[email protected]>
* [Bugfix] Remove num_tokens_across_dp ( vllm-project#14302 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [BugFix] Fix prefix caching V0 MLA ( vllm-project#14255 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Ying Zhong <[email protected]>
* [CI/Build] Use spawn multiprocessing mode for V1 test pipeline ( vllm-project#14243 )
Signed-off-by: Russell Bryant <[email protected]>
* Add benchmark for DeepGEMM and vLLM Block FP8 Dense GEMM ( vllm-project#13917 )
Signed-off-by: mgoin <[email protected]>
* [Build] Add UV_HTTP_TIMEOUT to avoid timeout during installation ( vllm-project#13850 )
Signed-off-by: Yuan Tang <[email protected]>
* [BugFix] MLA + V1, illegal memory access and accuracy issues ( vllm-project#14253 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [misc] Mention `ray list nodes` command to troubleshoot ray issues ( vllm-project#14318 )
Signed-off-by: Rui Qiao <[email protected]>
* [Bugfix][Structured Output] Support outlines engine with reasoning outputs for DeepSeek R1 ( vllm-project#14114 )
* [V1] LoRA - Enable more V1 tests ( vllm-project#14315 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* [Bugfix][CI] ALiBi test case in xformers multi_query_kv_attention ( vllm-project#11301 )
* [Hardware] Update the flash attn tag to support Blackwell ( vllm-project#14244 )
* [Model] Update Paligemma multimodal processing with PromptUpdate ( vllm-project#14015 )
Signed-off-by: Kyle Huang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [V1][VLM][Pixtral-HF] Support Pixtral-HF on V1 ( vllm-project#14275 )
Signed-off-by: Linkun Chen <[email protected]>
* [Core] Optimizing cross-attention `QKVParallelLinear` computation ( vllm-project#12325 )
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Co-authored-by: NickLucche <[email protected]>
* [Frontend][Docs] Transcription API streaming ( vllm-project#13301 )
Signed-off-by: NickLucche <[email protected]>
* [Doc] Update reasoning with stream example to use OpenAI library ( vllm-project#14077 )
Signed-off-by: liuyanyi <[email protected]>
* [Doc] Correct beam_search using in generative_models.md ( vllm-project#14363 )
* [Kernel] [V1] Improved performance for V1 Triton (ROCm) backend ( vllm-project#14152 )
* [Bugfix][Core] fix abort_seq_group and memory leak when n>1 ( vllm-project#14326 )
Signed-off-by: courage17340 <[email protected]>
* [Core] Don't use cache during multi-modal profiling ( vllm-project#14336 )
* [Doc] Fix date typo in README.md ( vllm-project#14366 )
Signed-off-by: Jitse Klomp <[email protected]>
* [RLHF] use worker_extension_cls for compatibility with V0 and V1 ( vllm-project#14185 )
Signed-off-by: youkaichao <[email protected]>
* Reinstate `best_of` for V0 ( vllm-project#14356 )
Signed-off-by: Harry Mellor <[email protected]>
* Adding cpu inference with VXE ISA for s390x architecture ( vllm-project#12613 )
Signed-off-by: Dilip Gowda Bhagavan <[email protected]>
Signed-off-by: Rishika Kedia <[email protected]>
Co-authored-by: Rishika Kedia <[email protected]>
* Add authors to license header. ( vllm-project#14371 )
Signed-off-by: Thomas Parnell <[email protected]>
Co-authored-by: Burkhard Ringlein <[email protected]>
Co-authored-by: Jan van Lunteren <[email protected]>
* Fix mla prefill context performance ( vllm-project#13897 )
Signed-off-by: ZhongYingMatrix <[email protected]>
* [V1] Do not detokenize if sampling param detokenize is False ( vllm-project#14224 )
Signed-off-by: Himanshu Jaju <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
* [Distributed] Add enable_expert_parallel arg ( vllm-project#14305 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [CI/Build] Use uv python for docker rather than ppa:deadsnakes/ppa ( vllm-project#13569 )
Signed-off-by: mgoin <[email protected]>
* [CI] Disable spawn when running V1 Test ( vllm-project#14345 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Kernel] Add needs_fixed_stride_order tag to most GEMMs ( vllm-project#14306 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [Bugfix] Fix use_direct_call condition in FusedMoE layer for ( vllm-project#14382 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [Bug] Fix Attention when ignored in by quant_method ( vllm-project#14313 )
Signed-off-by: mgoin <[email protected]>
* [V1][Bugfix] Standardize quantized kv cache rejection for attention backends ( vllm-project#14221 )
Signed-off-by: mgoin <[email protected]>
* [Docs] Add nsight guide to profiling docs ( vllm-project#14298 )
Signed-off-by: mgoin <[email protected]>
* cleanup boolean logic
Signed-off-by: Sage Moore <[email protected]>
* [Hardware][TPU]Enable ragged paged attention kernel and resolve recompilation issue ( vllm-project#14310 )
Signed-off-by: Chengji Yao <[email protected]>
* [Doc] Fix a typo ( vllm-project#14385 )
* [Bugfix] Correctly call `cudaProfilerStop` in benchmarks script ( vllm-project#14183 )
Signed-off-by: Brayden Zhong <[email protected]>
* [Perf] Reduce MLA CPU overheads in V1 ( vllm-project#14384 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
* [FP8] Refactor apply_fp8_linear and apply_fp8_linear_generic into an object ( vllm-project#14390 )
Signed-off-by: luka <[email protected]>
* [BugFix] Illegal Memory Access in the blockwise cutlass fp8 GEMMs ( vllm-project#14396 )
* [Bugfix] Fix JambaForCausalLM LoRA ( vllm-project#14370 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Build] Add nightly wheel fallback when latest commit wheel unavailable ( vllm-project#14358 )
Signed-off-by: Isotr0py <[email protected]>
* OpenVINO: added CPU-like conditions ( vllm-project#14338 )
Signed-off-by: Ilya Lavrenov <[email protected]>
* [GH] Auto-apply multi-modality label to relevant PRs ( vllm-project#14402 )
Signed-off-by: DarkLight1337 <[email protected]>
* correct wrong markdown syntax ( vllm-project#14414 )
Signed-off-by: vincent-pli <[email protected]>
* [Bugfix] Further clean up LoRA test ( vllm-project#14422 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Bugfix] Clean up multi-modal processors ( vllm-project#14417 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] Set default value of seed to None ( vllm-project#14274 )
Signed-off-by: மனோஜ்குமார் பழனிச்சாமி <[email protected]>
* [BUGFIX] Skip tokenization support for throughput benchmark ( vllm-project#12712 )
Signed-off-by: root <[email protected]>
Signed-off-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
* Fix missing `kv_caches` and `attn_metadata` in `OpenVINOCausalLM` ( vllm-project#14271 )
Signed-off-by: Harry Mellor <[email protected]>
* Use the optimized block sizes after tuning the kernel. ( vllm-project#14329 )
* [V1][Core] Support for Structured Outputs ( vllm-project#12388 )
Signed-off-by: Aaron Pham <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
* [Doc] Update prefix_caching.md to match the example image ( vllm-project#14420 )
* [Benchmarks] Make detokenization optional in benchmark scripts ( vllm-project#11697 )
Signed-off-by: Jeremy Arnold <[email protected]>
* comments
Signed-off-by: Sage Moore <[email protected]>
* [Kernel] optimize performance of gptq marlin kernel when n is small ( vllm-project#14138 )
Signed-off-by: Jinzhen Lin <[email protected]>
* [Misc] Add Phi4-MM example ( vllm-project#14343 )
Signed-off-by: Jee Jee Li <[email protected]>
* [v1] torch.compile integration explanation ( vllm-project#14437 )
Signed-off-by: youkaichao <[email protected]>
* [V1] Eagerly remove finished requests from the batch ( vllm-project#14388 )
Signed-off-by: Nick Hill <[email protected]>
* [V1][Metrics] Fix traceback with preemptions+LoRA ( vllm-project#14220 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [Bugfix] Fix torch_xla which can't handle None seed introduced in vllm-project#14274 ( vllm-project#14459 )
Signed-off-by: Yarong Mu <[email protected]>
* [V1] Prompt logprobs + APC compatibility; prompt logprobs reqs cannot fill APC ( vllm-project#13949 )
* [Bugfix][V1] Handle MLA in kv_cache_interface ( vllm-project#14462 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* Revert "[Perf] Reduce MLA CPU overheads in V1 ( vllm-project#14384 )" ( vllm-project#14471 )
* [Bugfix][Disaggregated] Add a check in send_kv_caches_and_hidden_states and fix the reshape of the KVCache ( vllm-project#14369 )
Signed-off-by: Mathis Felardos <[email protected]>
* [MISC][V1] Register process killing handler only in the main thread ( vllm-project#14380 )
Signed-off-by: Cody Yu <[email protected]>
* [core] add `extra_args` to `SamplingParams` ( vllm-project#13300 )
Signed-off-by: Aviv Keshet <[email protected]>
* [CI/Build] refactor: set timezone of container to UTC ( vllm-project#12888 )
Signed-off-by: Roger Meier <[email protected]>
* Default to `generation_config` from model ( vllm-project#12622 )
Signed-off-by: Harry Mellor <[email protected]>
* [Doc]add doc for Qwen models tool calling ( vllm-project#14478 )
Signed-off-by: WangErXiao <[email protected]>
* [Doc] Added QwQ-32B to the supported models list in the reasoning out… ( vllm-project#14479 )
Signed-off-by: WangErXiao <[email protected]>
* [Bugfix] Make the deviceprofiler include LoRA memory. ( vllm-project#14469 )
Signed-off-by: Jee Jee Li <[email protected]>
* Add training doc signposting to TRL ( vllm-project#14439 )
Signed-off-by: Harry Mellor <[email protected]>
* [Build/BugFix] Fix hopper 12.8 build ( vllm-project#14354 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
* Add RLHF document ( vllm-project#14482 )
Signed-off-by: Harry Mellor <[email protected]>
* [CI/Build] Use a fixed seed to avoid flaky tests ( vllm-project#14480 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] TPU - Add tensor parallel support via Ray ( vllm-project#13618 )
Signed-off-by: Alexander Matveev <[email protected]>
* [VLM] Add TP support for Phi-4-MM ( vllm-project#14453 )
Signed-off-by: Isotr0py <[email protected]>
* [Misc] add `use_tqdm_on_load` to reduce logs ( vllm-project#14407 )
Signed-off-by: Aaron Pham <[email protected]>
* [V1][Core] Fix memory issue with logits & sampling ( vllm-project#13776 )
Signed-off-by: Roger Wang <[email protected]>
* [benchmarks] Add option to use unique jsonschema for each request ( vllm-project#14457 )
Signed-off-by: Russell Bryant <[email protected]>
* [Misc] Don't run ruff at all on 3rd party libs ( vllm-project#14493 )
Signed-off-by: DarkLight1337 <[email protected]>
* Move requirements into their own directory ( vllm-project#12547 )
Signed-off-by: Harry Mellor <[email protected]>
* [Bugfix] DeepSeek Accuracy ( vllm-project#14476 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Bugfix] Fix profiling OOM and decouple encoder multimodal profiling ( vllm-project#14361 )
Signed-off-by: Isotr0py <[email protected]>
* Update CODEOWNERS for structured output ( vllm-project#14496 )
Signed-off-by: Russell Bryant <[email protected]>
* [Misc] Upgrade to Python 3.9 typing for additional directories ( vllm-project#14492 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] Support bad_words in sampler ( vllm-project#13376 )
Signed-off-by: 22quinn <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
* Revert "[V1][Core] Fix memory issue with logits & sampling" ( vllm-project#14504 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Attention] Default to FlashMLA backend for MLA ( vllm-project#14451 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
* [V1][TPU] Remove unnecessary padding for running on TPU. ( vllm-project#14467 )
* [Feat] Support chunked prefill for LMCache connector ( vllm-project#14505 )
Signed-off-by: YaoJiayi <[email protected]>
* [Bugfix] Fix tqdm progress bar when SamplingParams.n > 1 ( vllm-project#12428 )
Signed-off-by: Yuchen Yan <[email protected]>
* [Bugfix] Revert QKVCrossParallelLinear usage in Mllama to keep BNB quantization work ( vllm-project#14498 )
Signed-off-by: Isotr0py <[email protected]>
* [Hardware][TPU] Fix the recompiling issue in logits processor after warmup ( vllm-project#14510 )
Signed-off-by: Chengji Yao <[email protected]>
* [Misc] Ensure out-of-tree quantization method recognize by cli args ( vllm-project#14328 )
Signed-off-by: liuyanyi <[email protected]>
* [Bugfix] Wrong requirements path - rocm ( vllm-project#14527 )
Signed-off-by: Martin Hoyer <[email protected]>
* [Feature] Consolidate performance benchmark datasets ( vllm-project#14036 )
Signed-off-by: Jennifer Zhao <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Jennifer Zhao <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Misc] Add log information for handle_process_request. ( vllm-project#14130 )
Signed-off-by: chaunceyjiang <[email protected]>
* [Docs] Mention `model_impl` arg when explaining Transformers fallback ( vllm-project#14552 )
Signed-off-by: Harry Mellor <[email protected]>
* [Frontend] support image embeds ( vllm-project#13955 )
Signed-off-by: chaunceyjiang <[email protected]>
* [Kernel] Add more dtype support for GGUF kernels ( vllm-project#14043 )
Signed-off-by: SzymonOzog <[email protected]>
Signed-off-by: SzymonOzog <[email protected]>
* [Doc] Update PaliGemma note to a warning ( vllm-project#14565 )
Signed-off-by: DarkLight1337 <[email protected]>
* V1 rocm support ( #469 )
* Initial commit for V1 successfull compilation
* Small improvement for linear
* Small improvement for linear
* making use of forward_cuda for all except ROPE in llama
---------
Co-authored-by: maleksan85 <[email protected]>
* nightly_fixed_aiter_integration_final_20250305 README update ( #470 )
* nightly_fixed_aiter_integration_final_20250305 README update (perf results only)
* Update Docker Manifest git hash
* Update Docker Manifest and added nightly_fixed_aiter_integration_final_20250305
* some more updates
* Update AITER section with example
* Updated AITER command with larger batch size and model name
* Fixing typo
* Removed --max-model-len in AITER command
* Updating AITER instructions
* typo
* Another typo
* Whitespace
* modifying whats new section
* Another typo
---------
Co-authored-by: arakowsk-amd <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
---------
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: Xiongfei Wei <[email protected]>
Signed-off-by: Travis Johnson <[email protected]>
Signed-off-by: Divakar Verma <[email protected]>
Signed-off-by: Cody Yu <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Siyuan Liu <[email protected]>
Signed-off-by: KuntaiDu <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Sage Moore <[email protected]>
Signed-off-by: Michael Goin <[email protected]>
Signed-off-by: Nishidha Panpaliya <[email protected]>
Signed-off-by: dangshunya <[email protected]>
Signed-off-by: Benjamin Chislett <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Lu Fang <[email protected]>
Signed-off-by: Iacopo Poli <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: Daivid Savernin-Frenk <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: vincent-4 <[email protected]>
Signed-off-by: Brayden Zhong <[email protected]>
Signed-off-by: pyc96 <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Kyle Huang <[email protected]>
Signed-off-by: Linkun Chen <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: liuyanyi <[email protected]>
Signed-off-by: courage17340 <[email protected]>
Signed-off-by: Jitse Klomp <[email protected]>
Signed-off-by: Dilip Gowda Bhagavan <[email protected]>
Signed-off-by: Rishika Kedia <[email protected]>
Signed-off-by: Thomas Parnell <[email protected]>
Signed-off-by: ZhongYingMatrix <[email protected]>
Signed-off-by: Himanshu Jaju <[email protected]>
Signed-off-by: Chengji Yao <[email protected]>
Signed-off-by: luka <[email protected]>
Signed-off-by: Ilya Lavrenov <[email protected]>
Signed-off-by: vincent-pli <[email protected]>
Signed-off-by: மனோஜ்குமார் பழனிச்சாமி <[email protected]>
Signed-off-by: root <[email protected]>
Signed-off-by: Aleksandr Malyshev <[email protected]>
Signed-off-by: Aaron Pham <[email protected]>
Signed-off-by: Jeremy Arnold <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]>
Signed-off-by: Yarong Mu <[email protected]>
Signed-off-by: Mathis Felardos <[email protected]>
Signed-off-by: Aviv Keshet <[email protected]>
Signed-off-by: Roger Meier <[email protected]>
Signed-off-by: WangErXiao <[email protected]>
Signed-off-by: Alexander Matveev <[email protected]>
Signed-off-by: 22quinn <[email protected]>
Signed-off-by: YaoJiayi <[email protected]>
Signed-off-by: Yuchen Yan <[email protected]>
Signed-off-by: Martin Hoyer <[email protected]>
Signed-off-by: Jennifer Zhao <[email protected]>
Signed-off-by: chaunceyjiang <[email protected]>
Signed-off-by: SzymonOzog <[email protected]>
Signed-off-by: SzymonOzog <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Mark McLoughlin <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Qubitium-ModelCloud <[email protected]>
Co-authored-by: mgoin <[email protected]>
Co-authored-by: iefgnoix <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Zhanwen Chen <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: lkchen <[email protected]>
Co-authored-by: kushanam <[email protected]>
Co-authored-by: Siyuan Liu <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Sage Moore <[email protected]>
Co-authored-by: Nishidha <[email protected]>
Co-authored-by: rainkert <[email protected]>
Co-authored-by: dangshunya <[email protected]>
Co-authored-by: Congcong Chen <[email protected]>
Co-authored-by: Benjamin Chislett <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Lu Fang <[email protected]>
Co-authored-by: Iacopo Poli <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Zhe Zhang <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: DaividFrank <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Vincent <[email protected]>
Co-authored-by: Brayden Zhong <[email protected]>
Co-authored-by: Ye Cao <[email protected]>
Co-authored-by: Serena <[email protected]>
Co-authored-by: pyc96 <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Ying Zhong <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Ce Gao <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: Pavani Majety <[email protected]>
Co-authored-by: kYLe <[email protected]>
Co-authored-by: NickLucche <[email protected]>
Co-authored-by: Yanyi Liu <[email protected]>
Co-authored-by: Irina Yuryeva <[email protected]>
Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: courage17340 <[email protected]>
Co-authored-by: Jitse Klomp <[email protected]>
Co-authored-by: Dilip Gowda Bhagavan <[email protected]>
Co-authored-by: Rishika Kedia <[email protected]>
Co-authored-by: Burkhard Ringlein <[email protected]>
Co-authored-by: Jan van Lunteren <[email protected]>
Co-authored-by: Himanshu Jaju <[email protected]>
Co-authored-by: Chengji Yao <[email protected]>
Co-authored-by: Daniel Li <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Peng Li <[email protected]>
Co-authored-by: மனோஜ்குமார் பழனிச்சாமி <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: Aaron Pham <[email protected]>
Co-authored-by: York-RDWang <[email protected]>
Co-authored-by: Jeremy Arnold <[email protected]>
Co-authored-by: Jinzhen Lin <[email protected]>
Co-authored-by: yarongmu-google <[email protected]>
Co-authored-by: afeldman-nm <[email protected]>
Co-authored-by: Mathis Felardos <[email protected]>
Co-authored-by: Aviv Keshet <[email protected]>
Co-authored-by: Roger Meier <[email protected]>
Co-authored-by: Robin <[email protected]>
Co-authored-by: Alexander Matveev <[email protected]>
Co-authored-by: 22quinn <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Jiayi Yao <[email protected]>
Co-authored-by: Yuchen Yan <[email protected]>
Co-authored-by: Martin Hoyer <[email protected]>
Co-authored-by: Jennifer Zhao <[email protected]>
Co-authored-by: Jennifer Zhao <[email protected]>
Co-authored-by: Chauncey <[email protected]>
Co-authored-by: Szymon Ożóg <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Mcirino1 <[email protected]>
Co-authored-by: arakowsk-amd <[email protected]> captainzmc pushed a commit
to captainzmc/vllm
that referenced
this pull request Mar 12, 2025 [Perf] Reduce MLA CPU overheads in V1 ( vllm-project#14384 ) … 7e6ed97 Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]> captainzmc pushed a commit
to captainzmc/vllm
that referenced
this pull request Mar 12, 2025 Revert "[Perf] Reduce MLA CPU overheads in V1 ( vllm-project#14384 )" ( v… … f08a8d3 …llm-project#14471 ) LucasWilkinson mentioned this pull request Mar 13, 2025 [Attention] Remove slow setattr in MLA #14769 Merged hmellor mentioned this pull request Apr 2, 2025 [Performance]: 0.8.1 vs 0.7.4dev122 R1 H20 performance benchmark test,0.8.1 What is the reason for the 14% performance improvement(throughput tokens/s) #15881 Closed 1 task lulmer pushed a commit
to lulmer/vllm
that referenced
this pull request Apr 7, 2025 [Perf] Reduce MLA CPU overheads in V1 ( vllm-project#14384 ) … c1c2455 Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Louis Ulmer <[email protected]> lulmer pushed a commit
to lulmer/vllm
that referenced
this pull request Apr 7, 2025 Revert "[Perf] Reduce MLA CPU overheads in V1 ( vllm-project#14384 )" ( v… … 0492d83 …llm-project#14471 )
Signed-off-by: Louis Ulmer <[email protected]> ckhordiasma mentioned this pull request Apr 17, 2025 [do not merge] pr test for nm changes into 2.20 red-hat-data-services/vllm#107 Closed shreyankg pushed a commit
to shreyankg/vllm
that referenced
this pull request May 3, 2025 [Perf] Reduce MLA CPU overheads in V1 ( vllm-project#14384 ) … d407380 Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]> shreyankg pushed a commit
to shreyankg/vllm
that referenced
this pull request May 3, 2025 Revert "[Perf] Reduce MLA CPU overheads in V1 ( vllm-project#14384 )" ( v… … 7e10bb8 …llm-project#14471 )
|
2025-09-07 17:52:10
|
9f1710f1ace3535920c0bb6d4cc329c36289080e
|
https://github.com/vllm-project/vllm/pull/13897
| false | true | true | true |
PERF: throughput, improvement, improvement | SERVING: Frontend, Frontend, Frontend | TEST: test, test, test
|
Copy link Contributor ZhongYingMatrix commented Feb 26, 2025 kv_c_normed is unsqueezed, which slows down the following kv_b_proj. All reactions Copy link github-actions bot commented Feb 26, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions ZhongYingMatrix force-pushed the fix_mla_prefill_context branch
from 6d182b2 to 6dbd7d6 Compare February 26, 2025 13:21 Fix mla prefill context performance … 6aa754e Signed-off-by: ZhongYingMatrix <[email protected]> ZhongYingMatrix force-pushed the fix_mla_prefill_context branch
from 6dbd7d6 to 6aa754e Compare March 6, 2025 09:03 ZhongYingMatrix requested review from WoosukKwon , robertgshaw2-redhat , njhill , ywang96 , comaniac and alexm-redhat as code owners March 6, 2025 09:03 mergify bot added
the v1 label Mar 6, 2025 Copy link Contributor Author ZhongYingMatrix commented Mar 6, 2025 @LucasWilkinson Hi, would you please review this PR? Some shapes printed below: In forward
k_c_normed.shape: torch.Size([2048, 512])
k_pe.shape: torch.Size([2048, 1, 64])
In _forward_prefill
q.shape: torch.Size([2048, 16, 192])
kv_c_normed.shape: torch.Size([2048, 512])
k_pe.shape: torch.Size([2048, 1, 64])
In _compute_prefill_context
kv_c_normed.shape: torch.Size([2048, 1, 512]) # wrongly batched matrix-vector mul
k_pe.shape: torch.Size([2048, 1, 64]) Timing comparison on DeepSeek-V2-Lite-Chat with 28k input_len and 64 output_len. Before:
first_token=6.392493963241577, total=7.078949689865112
After:
first_token=1.7816479206085205, total=2.4884746074676514 👍 1 neiltian-tencent reacted with thumbs up emoji All reactions 👍 1 reaction LucasWilkinson approved these changes Mar 6, 2025 View reviewed changes Copy link Collaborator LucasWilkinson left a comment Nice find! All reactions LucasWilkinson enabled auto-merge (squash) March 6, 2025 10:21 github-actions bot added
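A minimal sketch of what the shape dump above points at (the dimensions follow the debug output; the projection width and the use of a plain nn.Linear are illustrative assumptions, not the actual vLLM kv_b_proj): with the extra unsqueeze the projection input becomes [2048, 1, 512], which the author describes as a wrongly batched matrix-vector multiply, while the fixed 2-D [2048, 512] input goes through the projection as a single dense GEMM.

```python
# Illustrative sketch only; proj_dim and nn.Linear are assumptions,
# not the real vLLM kv_b_proj implementation.
import torch

num_tokens, kv_lora_rank, proj_dim = 2048, 512, 4096
kv_b_proj = torch.nn.Linear(kv_lora_rank, proj_dim, bias=False)
kv_c_normed = torch.randn(num_tokens, kv_lora_rank)   # [2048, 512], as in _forward_prefill

# Pre-fix shape seen in _compute_prefill_context: an extra unsqueeze yields
# [2048, 1, 512], described in the PR as a wrongly batched matrix-vector mul.
slow_out = kv_b_proj(kv_c_normed.unsqueeze(1))        # -> [2048, 1, 4096]

# Post-fix: keep the tensor 2-D so the projection runs as one dense GEMM.
fast_out = kv_b_proj(kv_c_normed)                     # -> [2048, 4096]

print(slow_out.shape, fast_out.shape)
```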
the ready ONLY add when PR is ready to merge/full CI is needed label Mar 6, 2025 auto-merge was automatically disabled March 6, 2025 11:36 Head branch was pushed to by a user without write access ZhongYingMatrix force-pushed the fix_mla_prefill_context branch
2 times, most recently
from 434cbae to 6aa754e Compare March 6, 2025 11:45 LucasWilkinson enabled auto-merge (squash) March 6, 2025 11:59 Copy link Contributor Author ZhongYingMatrix commented Mar 6, 2025 @LucasWilkinson Hi, any clue of failed checks? I suppose the minor changes do not affect the tests. All reactions Copy link Collaborator LucasWilkinson commented Mar 6, 2025 @LucasWilkinson Hi, any clue of failed checks? I suppose the minor changes do not affect the tests. The CI can be flaky, retrying. If that doesn't work we can ask for a force merge 👍 All reactions simon-mo disabled auto-merge March 6, 2025 17:35 Hide details View details simon-mo merged commit 9f1710f into vllm-project : main Mar 6, 2025 52 of 54 checks passed Alexei-V-Ivanov-AMD added a commit
to ROCm/vllm
that referenced
this pull request Mar 11, 2025 Merging in the latest merge from vllm-project to ROCm ( #472 ) … a699a11 * Fix `head_dim` not existing in all model configs (Transformers backend) ( vllm-project#14141 )
Signed-off-by: Harry Mellor <[email protected]>
* [V0][Metrics] Remove unimplemented `vllm:tokens_total` ( vllm-project#14134 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [V0][Metrics] Deprecate some KV/prefix cache metrics ( vllm-project#14136 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [V1] Simplify stats logging ( vllm-project#14082 )
Signed-off-by: Nick Hill <[email protected]>
* [WIP][[V1][Metrics] Implement max_num_generation_tokens, request_params_n, and request_params_max_tokens metrics ( vllm-project#14055 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [Bugfix] Allow shared_experts skip quantization for DeepSeekV2/V3 ( vllm-project#14100 )
Signed-off-by: mgoin <[email protected]>
* [Kernel] Optimize moe intermediate_cache usage ( vllm-project#13625 )
Signed-off-by: mgoin <[email protected]>
* [Docs] Add GPTQModel ( vllm-project#14056 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]>
* [v1] Add comments to the new ragged paged attention Pallas kernel ( vllm-project#14155 )
Signed-off-by: Xiongfei Wei <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Model] Add support for GraniteMoeShared models ( vllm-project#13313 )
Signed-off-by: Travis Johnson <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [core] moe fp8 block quant tuning support ( vllm-project#14068 )
Signed-off-by: Divakar Verma <[email protected]>
* [Misc] Remove lru_cache in NvmlCudaPlatform ( vllm-project#14156 )
Signed-off-by: Cody Yu <[email protected]>
* [core] Pass all driver env vars to ray workers unless excluded ( vllm-project#14099 )
Signed-off-by: Rui Qiao <[email protected]>
* Use math.prod instead of np.prod for trivial ops ( vllm-project#14142 )
* Fix benchmark_moe.py tuning for CUDA devices ( vllm-project#14164 )
* [platform] add debug logging during inferring the device type ( vllm-project#14195 )
Signed-off-by: youkaichao <[email protected]>
* [sleep mode] error out with expandable_segments ( vllm-project#14189 )
Signed-off-by: youkaichao <[email protected]>
* [doc] add "Failed to infer device type" to faq ( vllm-project#14200 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Restrict MacOS CPU detection ( vllm-project#14210 )
Signed-off-by: mgoin <[email protected]>
* [V1][BugFix] Fix remaining sync engine client shutdown errors/hangs ( vllm-project#13869 )
Signed-off-by: Nick Hill <[email protected]>
* [V0][Metrics] Deprecate some questionable request time metrics ( vllm-project#14135 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [V1][Molmo] Fix get_multimodal_embeddings() in molmo.py ( vllm-project#14161 )
* add cutlass support for blackwell fp8 gemm ( vllm-project#13798 )
* [TPU][Profiler] Support start_profile/stop_profile in TPU worker ( vllm-project#13988 )
Signed-off-by: Siyuan Liu <[email protected]>
Co-authored-by: mgoin <[email protected]>
* Fix performance when `--generation-config` is not `None` ( vllm-project#14223 )
Signed-off-by: Harry Mellor <[email protected]>
* [Frontend] Do `prompt_logprobs` clamping for chat as well as completions ( vllm-project#14225 )
Signed-off-by: Harry Mellor <[email protected]>
* [Docs] Update Dockerfile dependency image ( vllm-project#14215 )
Signed-off-by: mgoin <[email protected]>
* [v1][Metrics] Add design doc ( vllm-project#12745 )
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
* [Security] Serialize using safetensors instead of pickle in Mooncake Pipe ( vllm-project#14228 )
Signed-off-by: KuntaiDu <[email protected]>
* Clean up unused padding_idx variables across many model definitions ( vllm-project#13240 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [ROCm] Disable a few more kernel tests that are broken on ROCm ( vllm-project#14145 )
Signed-off-by: Sage Moore <[email protected]>
* [V1][TPU] TPU multimodal model support for ragged attention ( vllm-project#14158 )
Signed-off-by: Michael Goin <[email protected]>
* [misc] announce china meetup ( vllm-project#14248 )
Signed-off-by: youkaichao <[email protected]>
* Moved numba from common requirements to cuda/rocm specific requirements ( vllm-project#14199 )
Signed-off-by: Nishidha Panpaliya <[email protected]>
* Disable GPTQ AllSpark kernels for CUDA Compiler < 12.0 ( vllm-project#14157 )
Signed-off-by: mgoin <[email protected]>
* [Bugfix] Fix gptq_marlin for deepseek-v3 ( vllm-project#13750 )
Signed-off-by: dangshunya <[email protected]>
Co-authored-by: dangshunya <[email protected]>
* [V1][Bugfix] Do not reset prefix caching metrics ( vllm-project#14235 )
* [Model] New model support for Phi-4-multimodal-instruct ( vllm-project#14119 )
* [V1] EP/TP MoE + DP Attention ( vllm-project#13931 )
* [platforms] improve rocm debugging info ( vllm-project#14257 )
* Temporarily disable test_awq_gemm_opcheck ( vllm-project#14251 )
Signed-off-by: mgoin <[email protected]>
* [Frontend] Allow return_tokens_as_token_ids to be passed as a request param ( vllm-project#14066 )
Signed-off-by: Benjamin Chislett <[email protected]>
* [Misc][V1] Avoid using `envs.VLLM_USE_V1` in mm processing ( vllm-project#14256 )
Signed-off-by: Roger Wang <[email protected]>
* [Bugfix][V1] Fix allowed_token_ids for v1 Sampler ( vllm-project#14169 )
Signed-off-by: Lu Fang <[email protected]>
* [Doc] Update nginx guide: remove privileged from vllm container run and add target GPU ID ( vllm-project#14217 )
Signed-off-by: Iacopo Poli <[email protected]>
* [Doc] [3/N] Refer code examples for common cases in dev multimodal processor ( vllm-project#14278 )
Signed-off-by: DarkLight1337 <[email protected]>
* Small update for external_launcher backend docs ( vllm-project#14288 )
* [V1][Frontend] Add Testing For V1 Runtime Parameters ( vllm-project#14159 )
Signed-off-by: [email protected] <[email protected]>
* [LoRA] Remove linear hack outside transformers backend ( vllm-project#14177 )
Signed-off-by: Isotr0py <[email protected]>
* [Misc] Add Qwen2MoeForCausalLM moe tuning support ( vllm-project#14276 )
Signed-off-by: Jee Jee Li <[email protected]>
* prefix_caching.md: Fixed typo ( vllm-project#14293 )
Signed-off-by: Daivid Savernin-Frenk <[email protected]>
* [Bugfix] Fix broken vision language example ( vllm-project#14292 )
Signed-off-by: Isotr0py <[email protected]>
* [Docs] Add Meta Slides ( vllm-project#14297 )
Signed-off-by: simon-mo <[email protected]>
* [V1][Minor] Remove obsolete FIXME comment ( vllm-project#14304 )
Signed-off-by: Nick Hill <[email protected]>
* Deprecate `best_of` Sampling Parameter in anticipation for vLLM V1 ( vllm-project#13997 )
Signed-off-by: vincent-4 <[email protected]>
Signed-off-by: Brayden Zhong <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Co-authored-by: Brayden Zhong <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
* [V1][BugFix] Fix for mixed top_k batch ( vllm-project#14301 )
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Ye Cao <[email protected]>
* [misc] Add FlashMLA as a new option of VLLM_ATTENTION_BACKEND env ( vllm-project#14267 )
* [V1][Easy] Add empty allowed_token_ids in the v1 sampler test ( vllm-project#14308 )
Signed-off-by: Lu Fang <[email protected]>
* init
Signed-off-by: Sage Moore <[email protected]>
* [Bugfix] Fix DeepSeek MTP crash when using TP1ModelRunner with CUDA graph due to shape mismatch ( vllm-project#14237 )
Signed-off-by: pyc96 <[email protected]>
* [Bugfix] Remove num_tokens_across_dp ( vllm-project#14302 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [BugFix] Fix prefix caching V0 MLA ( vllm-project#14255 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Ying Zhong <[email protected]>
* [CI/Build] Use spawn multiprocessing mode for V1 test pipeline ( vllm-project#14243 )
Signed-off-by: Russell Bryant <[email protected]>
* Add benchmark for DeepGEMM and vLLM Block FP8 Dense GEMM ( vllm-project#13917 )
Signed-off-by: mgoin <[email protected]>
* [Build] Add UV_HTTP_TIMEOUT to avoid timeout during installation ( vllm-project#13850 )
Signed-off-by: Yuan Tang <[email protected]>
* [BugFix] MLA + V1, illegal memory access and accuracy issues ( vllm-project#14253 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [misc] Mention `ray list nodes` command to troubleshoot ray issues ( vllm-project#14318 )
Signed-off-by: Rui Qiao <[email protected]>
* [Bugfix][Structured Output] Support outlines engine with reasoning outputs for DeepSeek R1 ( vllm-project#14114 )
* [V1] LoRA - Enable more V1 tests ( vllm-project#14315 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* [Bugfix][CI] ALiBi test case in xformers multi_query_kv_attention ( vllm-project#11301 )
* [Hardware] Update the flash attn tag to support Blackwell ( vllm-project#14244 )
* [Model] Update Paligemma multimodal processing with PromptUpdate ( vllm-project#14015 )
Signed-off-by: Kyle Huang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [V1][VLM][Pixtral-HF] Support Pixtral-HF on V1 ( vllm-project#14275 )
Signed-off-by: Linkun Chen <[email protected]>
* [Core] Optimizing cross-attention `QKVParallelLinear` computation ( vllm-project#12325 )
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Co-authored-by: NickLucche <[email protected]>
* [Frontend][Docs] Transcription API streaming ( vllm-project#13301 )
Signed-off-by: NickLucche <[email protected]>
* [Doc] Update reasoning with stream example to use OpenAI library ( vllm-project#14077 )
Signed-off-by: liuyanyi <[email protected]>
* [Doc] Correct beam_search using in generative_models.md ( vllm-project#14363 )
* [Kernel] [V1] Improved performance for V1 Triton (ROCm) backend ( vllm-project#14152 )
* [Bugfix][Core] fix abort_seq_group and memory leak when n>1 ( vllm-project#14326 )
Signed-off-by: courage17340 <[email protected]>
* [Core] Don't use cache during multi-modal profiling ( vllm-project#14336 )
* [Doc] Fix date typo in README.md ( vllm-project#14366 )
Signed-off-by: Jitse Klomp <[email protected]>
* [RLHF] use worker_extension_cls for compatibility with V0 and V1 ( vllm-project#14185 )
Signed-off-by: youkaichao <[email protected]>
* Reinstate `best_of` for V0 ( vllm-project#14356 )
Signed-off-by: Harry Mellor <[email protected]>
* Adding cpu inference with VXE ISA for s390x architecture ( vllm-project#12613 )
Signed-off-by: Dilip Gowda Bhagavan <[email protected]>
Signed-off-by: Rishika Kedia <[email protected]>
Co-authored-by: Rishika Kedia <[email protected]>
* Add authors to license header. ( vllm-project#14371 )
Signed-off-by: Thomas Parnell <[email protected]>
Co-authored-by: Burkhard Ringlein <[email protected]>
Co-authored-by: Jan van Lunteren <[email protected]>
* Fix mla prefill context performance ( vllm-project#13897 )
Signed-off-by: ZhongYingMatrix <[email protected]>
* [V1] Do not detokenize if sampling param detokenize is False ( vllm-project#14224 )
Signed-off-by: Himanshu Jaju <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
* [Distributed] Add enable_expert_parallel arg ( vllm-project#14305 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [CI/Build] Use uv python for docker rather than ppa:deadsnakes/ppa ( vllm-project#13569 )
Signed-off-by: mgoin <[email protected]>
* [CI] Disable spawn when running V1 Test ( vllm-project#14345 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Kernel] Add needs_fixed_stride_order tag to most GEMMs ( vllm-project#14306 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [Bugfix] Fix use_direct_call condition in FusedMoE layer for ( vllm-project#14382 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [Bug] Fix Attention when ignored in by quant_method ( vllm-project#14313 )
Signed-off-by: mgoin <[email protected]>
* [V1][Bugfix] Standardize quantized kv cache rejection for attention backends ( vllm-project#14221 )
Signed-off-by: mgoin <[email protected]>
* [Docs] Add nsight guide to profiling docs ( vllm-project#14298 )
Signed-off-by: mgoin <[email protected]>
* cleanup boolean logic
Signed-off-by: Sage Moore <[email protected]>
* [Hardware][TPU]Enable ragged paged attention kernel and resolve recompilation issue ( vllm-project#14310 )
Signed-off-by: Chengji Yao <[email protected]>
* [Doc] Fix a typo ( vllm-project#14385 )
* [Bugfix] Correctly call `cudaProfilerStop` in benchmarks script ( vllm-project#14183 )
Signed-off-by: Brayden Zhong <[email protected]>
* [Perf] Reduce MLA CPU overheads in V1 ( vllm-project#14384 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
* [FP8] Refactor apply_fp8_linear and apply_fp8_linear_generic into an object ( vllm-project#14390 )
Signed-off-by: luka <[email protected]>
* [BugFix] Illegal Memory Access in the blockwise cutlass fp8 GEMMs ( vllm-project#14396 )
* [Bugfix] Fix JambaForCausalLM LoRA ( vllm-project#14370 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Build] Add nightly wheel fallback when latest commit wheel unavailable ( vllm-project#14358 )
Signed-off-by: Isotr0py <[email protected]>
* OpenVINO: added CPU-like conditions ( vllm-project#14338 )
Signed-off-by: Ilya Lavrenov <[email protected]>
* [GH] Auto-apply multi-modality label to relevant PRs ( vllm-project#14402 )
Signed-off-by: DarkLight1337 <[email protected]>
* correct wrong markdown syntax ( vllm-project#14414 )
Signed-off-by: vincent-pli <[email protected]>
* [Bugfix] Further clean up LoRA test ( vllm-project#14422 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Bugfix] Clean up multi-modal processors ( vllm-project#14417 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] Set default value of seed to None ( vllm-project#14274 )
Signed-off-by: மனோஜ்குமார் பழனிச்சாமி <[email protected]>
* [BUGFIX] Skip tokenization support for throughput benchmark ( vllm-project#12712 )
Signed-off-by: root <[email protected]>
Signed-off-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
* Fix missing `kv_caches` and `attn_metadata` in `OpenVINOCausalLM` ( vllm-project#14271 )
Signed-off-by: Harry Mellor <[email protected]>
* Use the optimized block sizes after tuning the kernel. ( vllm-project#14329 )
* [V1][Core] Support for Structured Outputs ( vllm-project#12388 )
Signed-off-by: Aaron Pham <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
* [Doc] Update prefix_caching.md to match the example image ( vllm-project#14420 )
* [Benchmarks] Make detokenization optional in benchmark scripts ( vllm-project#11697 )
Signed-off-by: Jeremy Arnold <[email protected]>
* comments
Signed-off-by: Sage Moore <[email protected]>
* [Kernel] optimize performance of gptq marlin kernel when n is small ( vllm-project#14138 )
Signed-off-by: Jinzhen Lin <[email protected]>
* [Misc] Add Phi4-MM example ( vllm-project#14343 )
Signed-off-by: Jee Jee Li <[email protected]>
* [v1] torch.compile integration explanation ( vllm-project#14437 )
Signed-off-by: youkaichao <[email protected]>
* [V1] Eagerly remove finished requests from the batch ( vllm-project#14388 )
Signed-off-by: Nick Hill <[email protected]>
* [V1][Metrics] Fix traceback with preemptions+LoRA ( vllm-project#14220 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [Bugfix] Fix torch_xla which can't handle None seed introduced in vllm-project#14274 ( vllm-project#14459 )
Signed-off-by: Yarong Mu <[email protected]>
* [V1] Prompt logprobs + APC compatibility; prompt logprobs reqs cannot fill APC ( vllm-project#13949 )
* [Bugfix][V1] Handle MLA in kv_cache_interface ( vllm-project#14462 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* Revert "[Perf] Reduce MLA CPU overheads in V1 ( vllm-project#14384 )" ( vllm-project#14471 )
* [Bugfix][Disaggregated] Add a check in send_kv_caches_and_hidden_states and fix the reshape of the KVCache ( vllm-project#14369 )
Signed-off-by: Mathis Felardos <[email protected]>
* [MISC][V1] Register process killing handler only in the main thread ( vllm-project#14380 )
Signed-off-by: Cody Yu <[email protected]>
* [core] add `extra_args` to `SamplingParams` ( vllm-project#13300 )
Signed-off-by: Aviv Keshet <[email protected]>
* [CI/Build] refactor: set timezone of container to UTC ( vllm-project#12888 )
Signed-off-by: Roger Meier <[email protected]>
* Default to `generation_config` from model ( vllm-project#12622 )
Signed-off-by: Harry Mellor <[email protected]>
* [Doc]add doc for Qwen models tool calling ( vllm-project#14478 )
Signed-off-by: WangErXiao <[email protected]>
* [Doc] Added QwQ-32B to the supported models list in the reasoning out… ( vllm-project#14479 )
Signed-off-by: WangErXiao <[email protected]>
* [Bugfix] Make the deviceprofiler include LoRA memory. ( vllm-project#14469 )
Signed-off-by: Jee Jee Li <[email protected]>
* Add training doc signposting to TRL ( vllm-project#14439 )
Signed-off-by: Harry Mellor <[email protected]>
* [Build/BugFix] Fix hopper 12.8 build ( vllm-project#14354 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
* Add RLHF document ( vllm-project#14482 )
Signed-off-by: Harry Mellor <[email protected]>
* [CI/Build] Use a fixed seed to avoid flaky tests ( vllm-project#14480 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] TPU - Add tensor parallel support via Ray ( vllm-project#13618 )
Signed-off-by: Alexander Matveev <[email protected]>
* [VLM] Add TP support for Phi-4-MM ( vllm-project#14453 )
Signed-off-by: Isotr0py <[email protected]>
* [Misc] add `use_tqdm_on_load` to reduce logs ( vllm-project#14407 )
Signed-off-by: Aaron Pham <[email protected]>
* [V1][Core] Fix memory issue with logits & sampling ( vllm-project#13776 )
Signed-off-by: Roger Wang <[email protected]>
* [benchmarks] Add option to use unique jsonschema for each request ( vllm-project#14457 )
Signed-off-by: Russell Bryant <[email protected]>
* [Misc] Don't run ruff at all on 3rd party libs ( vllm-project#14493 )
Signed-off-by: DarkLight1337 <[email protected]>
* Move requirements into their own directory ( vllm-project#12547 )
Signed-off-by: Harry Mellor <[email protected]>
* [Bugfix] DeepSeek Accuracy ( vllm-project#14476 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Bugfix] Fix profiling OOM and decouple encoder multimodal profiling ( vllm-project#14361 )
Signed-off-by: Isotr0py <[email protected]>
* Update CODEOWNERS for structured output ( vllm-project#14496 )
Signed-off-by: Russell Bryant <[email protected]>
* [Misc] Upgrade to Python 3.9 typing for additional directories ( vllm-project#14492 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] Support bad_words in sampler ( vllm-project#13376 )
Signed-off-by: 22quinn <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
* Revert "[V1][Core] Fix memory issue with logits & sampling" ( vllm-project#14504 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Attention] Default to FlashMLA backend for MLA ( vllm-project#14451 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
* [V1][TPU] Remove unnecessary padding for running on TPU. ( vllm-project#14467 )
* [Feat] Support chunked prefill for LMCache connector ( vllm-project#14505 )
Signed-off-by: YaoJiayi <[email protected]>
* [Bugfix] Fix tqdm progress bar when SamplingParams.n > 1 ( vllm-project#12428 )
Signed-off-by: Yuchen Yan <[email protected]>
* [Bugfix] Revert QKVCrossParallelLinear usage in Mllama to keep BNB quantization work ( vllm-project#14498 )
Signed-off-by: Isotr0py <[email protected]>
* [Hardware][TPU] Fix the recompiling issue in logits processor after warmup ( vllm-project#14510 )
Signed-off-by: Chengji Yao <[email protected]>
* [Misc] Ensure out-of-tree quantization method recognize by cli args ( vllm-project#14328 )
Signed-off-by: liuyanyi <[email protected]>
* [Bugfix] Wrong requirements path - rocm ( vllm-project#14527 )
Signed-off-by: Martin Hoyer <[email protected]>
* [Feature] Consolidate performance benchmark datasets ( vllm-project#14036 )
Signed-off-by: Jennifer Zhao <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Jennifer Zhao <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Misc] Add log information for handle_process_request. ( vllm-project#14130 )
Signed-off-by: chaunceyjiang <[email protected]>
* [Docs] Mention `model_impl` arg when explaining Transformers fallback ( vllm-project#14552 )
Signed-off-by: Harry Mellor <[email protected]>
* [Frontend] support image embeds ( vllm-project#13955 )
Signed-off-by: chaunceyjiang <[email protected]>
* [Kernel] Add more dtype support for GGUF kernels ( vllm-project#14043 )
Signed-off-by: SzymonOzog <[email protected]>
Signed-off-by: SzymonOzog <[email protected]>
* [Doc] Update PaliGemma note to a warning ( vllm-project#14565 )
Signed-off-by: DarkLight1337 <[email protected]>
* V1 rocm support ( #469 )
* Initial commit for V1 successfull compilation
* Small improvement for linear
* Small improvement for linear
* making use of forward_cuda for all except ROPE in llama
---------
Co-authored-by: maleksan85 <[email protected]>
* nightly_fixed_aiter_integration_final_20250305 README update ( #470 )
* nightly_fixed_aiter_integration_final_20250305 README update (perf results only)
* Update Docker Manifest git hash
* Update Docker Manifest and added nightly_fixed_aiter_integration_final_20250305
* some more updates
* Update AITER section with example
* Updated AITER command with larger batch size and model name
* Fixing typo
* Removed --max-model-len in AITER command
* Updating AITER instructions
* typo
* Another typo
* Whitespace
* modifying whats new section
* Another typo
---------
Co-authored-by: arakowsk-amd <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
---------
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: Xiongfei Wei <[email protected]>
Signed-off-by: Travis Johnson <[email protected]>
Signed-off-by: Divakar Verma <[email protected]>
Signed-off-by: Cody Yu <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Siyuan Liu <[email protected]>
Signed-off-by: KuntaiDu <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Sage Moore <[email protected]>
Signed-off-by: Michael Goin <[email protected]>
Signed-off-by: Nishidha Panpaliya <[email protected]>
Signed-off-by: dangshunya <[email protected]>
Signed-off-by: Benjamin Chislett <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Lu Fang <[email protected]>
Signed-off-by: Iacopo Poli <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: Daivid Savernin-Frenk <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: vincent-4 <[email protected]>
Signed-off-by: Brayden Zhong <[email protected]>
Signed-off-by: pyc96 <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Kyle Huang <[email protected]>
Signed-off-by: Linkun Chen <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: liuyanyi <[email protected]>
Signed-off-by: courage17340 <[email protected]>
Signed-off-by: Jitse Klomp <[email protected]>
Signed-off-by: Dilip Gowda Bhagavan <[email protected]>
Signed-off-by: Rishika Kedia <[email protected]>
Signed-off-by: Thomas Parnell <[email protected]>
Signed-off-by: ZhongYingMatrix <[email protected]>
Signed-off-by: Himanshu Jaju <[email protected]>
Signed-off-by: Chengji Yao <[email protected]>
Signed-off-by: luka <[email protected]>
Signed-off-by: Ilya Lavrenov <[email protected]>
Signed-off-by: vincent-pli <[email protected]>
Signed-off-by: மனோஜ்குமார் பழனிச்சாமி <[email protected]>
Signed-off-by: root <[email protected]>
Signed-off-by: Aleksandr Malyshev <[email protected]>
Signed-off-by: Aaron Pham <[email protected]>
Signed-off-by: Jeremy Arnold <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]>
Signed-off-by: Yarong Mu <[email protected]>
Signed-off-by: Mathis Felardos <[email protected]>
Signed-off-by: Aviv Keshet <[email protected]>
Signed-off-by: Roger Meier <[email protected]>
Signed-off-by: WangErXiao <[email protected]>
Signed-off-by: Alexander Matveev <[email protected]>
Signed-off-by: 22quinn <[email protected]>
Signed-off-by: YaoJiayi <[email protected]>
Signed-off-by: Yuchen Yan <[email protected]>
Signed-off-by: Martin Hoyer <[email protected]>
Signed-off-by: Jennifer Zhao <[email protected]>
Signed-off-by: chaunceyjiang <[email protected]>
Signed-off-by: SzymonOzog <[email protected]>
Signed-off-by: SzymonOzog <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Mark McLoughlin <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Qubitium-ModelCloud <[email protected]>
Co-authored-by: mgoin <[email protected]>
Co-authored-by: iefgnoix <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Zhanwen Chen <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: lkchen <[email protected]>
Co-authored-by: kushanam <[email protected]>
Co-authored-by: Siyuan Liu <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Sage Moore <[email protected]>
Co-authored-by: Nishidha <[email protected]>
Co-authored-by: rainkert <[email protected]>
Co-authored-by: dangshunya <[email protected]>
Co-authored-by: Congcong Chen <[email protected]>
Co-authored-by: Benjamin Chislett <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Lu Fang <[email protected]>
Co-authored-by: Iacopo Poli <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Zhe Zhang <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: DaividFrank <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Vincent <[email protected]>
Co-authored-by: Brayden Zhong <[email protected]>
Co-authored-by: Ye Cao <[email protected]>
Co-authored-by: Serena <[email protected]>
Co-authored-by: pyc96 <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Ying Zhong <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Ce Gao <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: Pavani Majety <[email protected]>
Co-authored-by: kYLe <[email protected]>
Co-authored-by: NickLucche <[email protected]>
Co-authored-by: Yanyi Liu <[email protected]>
Co-authored-by: Irina Yuryeva <[email protected]>
Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: courage17340 <[email protected]>
Co-authored-by: Jitse Klomp <[email protected]>
Co-authored-by: Dilip Gowda Bhagavan <[email protected]>
Co-authored-by: Rishika Kedia <[email protected]>
Co-authored-by: Burkhard Ringlein <[email protected]>
Co-authored-by: Jan van Lunteren <[email protected]>
Co-authored-by: Himanshu Jaju <[email protected]>
Co-authored-by: Chengji Yao <[email protected]>
Co-authored-by: Daniel Li <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Peng Li <[email protected]>
Co-authored-by: மனோஜ்குமார் பழனிச்சாமி <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: Aaron Pham <[email protected]>
Co-authored-by: York-RDWang <[email protected]>
Co-authored-by: Jeremy Arnold <[email protected]>
Co-authored-by: Jinzhen Lin <[email protected]>
Co-authored-by: yarongmu-google <[email protected]>
Co-authored-by: afeldman-nm <[email protected]>
Co-authored-by: Mathis Felardos <[email protected]>
Co-authored-by: Aviv Keshet <[email protected]>
Co-authored-by: Roger Meier <[email protected]>
Co-authored-by: Robin <[email protected]>
Co-authored-by: Alexander Matveev <[email protected]>
Co-authored-by: 22quinn <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Jiayi Yao <[email protected]>
Co-authored-by: Yuchen Yan <[email protected]>
Co-authored-by: Martin Hoyer <[email protected]>
Co-authored-by: Jennifer Zhao <[email protected]>
Co-authored-by: Jennifer Zhao <[email protected]>
Co-authored-by: Chauncey <[email protected]>
Co-authored-by: Szymon Ożóg <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Mcirino1 <[email protected]>
Co-authored-by: arakowsk-amd <[email protected]> captainzmc pushed a commit
to captainzmc/vllm
that referenced
this pull request Mar 12, 2025 Fix mla prefill context performance ( vllm-project#13897 ) … 21fa74b Signed-off-by: ZhongYingMatrix <[email protected]> lulmer pushed a commit
to lulmer/vllm
that referenced
this pull request Apr 7, 2025 Fix mla prefill context performance ( vllm-project#13897 ) … 45a9d2c Signed-off-by: ZhongYingMatrix <[email protected]>
Signed-off-by: Louis Ulmer <[email protected]> ckhordiasma mentioned this pull request Apr 17, 2025 [do not merge] pr test for nm changes into 2.20 red-hat-data-services/vllm#107 Closed shreyankg pushed a commit
to shreyankg/vllm
that referenced
this pull request May 3, 2025 Fix mla prefill context performance ( vllm-project#13897 ) … 6ac6947 Signed-off-by: ZhongYingMatrix <[email protected]>
|
2025-09-07 17:52:14
|
9badee53decb3d432dc805336abfb0eb81dfb48f
|
https://github.com/vllm-project/vllm/pull/14223
| false | true | true | true |
PERF: TTFT, TTFT, TTFT | SERVING: vllm serve, Serving, Serving | TEST: test, test, test
|
Copy link Member hmellor commented Mar 4, 2025 • edited by github-actions bot Adds self.default_sampling_params to: OpenAIServingChat, OpenAIServingCompletion, OpenAIServingTranscription, and LLM. As you can see from the benchmarks below, the performance difference is huge: vllm serve meta-llama/Llama-3.2-1B-Instruct --disable-log-requests --generation-config auto
python benchmarks/benchmark_serving.py --model meta-llama/Llama-3.2-1B-Instruct --dataset-path ShareGPT_V3_unfiltered_cleaned_split.json Before: ============ Serving Benchmark Result ============
Successful requests: 1000
Benchmark duration (s): 149.29
Total input tokens: 215196
Total generated tokens: 179873
Request throughput (req/s): 6.70
Output token throughput (tok/s): 1204.82
Total Token throughput (tok/s): 2646.24
---------------Time to First Token----------------
Mean TTFT (ms): 124792.06
Median TTFT (ms): 123725.39
P99 TTFT (ms): 138387.36
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 40.52
Median TPOT (ms): 40.52
P99 TPOT (ms): 67.62
---------------Inter-token Latency----------------
Mean ITL (ms): 36.56
Median ITL (ms): 37.74
P99 ITL (ms): 72.37
================================================== After: ============ Serving Benchmark Result ============
Successful requests: 1000
Benchmark duration (s): 34.24
Total input tokens: 215196
Total generated tokens: 178861
Request throughput (req/s): 29.21
Output token throughput (tok/s): 5224.41
Total Token throughput (tok/s): 11510.15
---------------Time to First Token----------------
Mean TTFT (ms): 8481.82
Median TTFT (ms): 7455.52
P99 TTFT (ms): 21150.72
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 37.85
Median TPOT (ms): 37.10
P99 TPOT (ms): 51.13
---------------Inter-token Latency----------------
Mean ITL (ms): 35.43
Median ITL (ms): 35.88
P99 ITL (ms): 72.53
================================================== Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions hmellor added 2 commits March 4, 2025 17:21 Prevent reads from disk at runtime when --generation-config auto is… … accf38d … set
Signed-off-by: Harry Mellor <[email protected]> Don't create a footgun … e3cd61e Signed-off-by: Harry Mellor <[email protected]> Copy link github-actions bot commented Mar 4, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the frontend label Mar 4, 2025 mgoin requested review from njhill and robertgshaw2-redhat March 4, 2025 16:49 mgoin added
the performance Performance-related issues label Mar 4, 2025 mgoin changed the title Fix generation config arg Fix performance of --generation-config auto Mar 4, 2025 mgoin approved these changes Mar 4, 2025 View reviewed changes Copy link Member mgoin left a comment • edited Good catch, this is critical to fix as try_get_generation_config could be called for each request 😓 All reactions mgoin added
the ready ONLY add when PR is ready to merge/full CI is needed label Mar 4, 2025 hmellor changed the title Fix performance of --generation-config auto Fix performance of --generation-config is not None Mar 4, 2025 Copy link Member Author hmellor commented Mar 4, 2025 Thanks for updating the title, technically --generation-config could be a file path (which would also cause this performance problem) 👍 1 mgoin reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Make mypy happy … 71e1cf1 Signed-off-by: Harry Mellor <[email protected]> hmellor changed the title Fix performance of --generation-config is not None Fix performance when --generation-config is not None Mar 4, 2025 DarkLight1337 approved these changes Mar 4, 2025 View reviewed changes hmellor mentioned this pull request Mar 4, 2025 Default to generation_config from model #12622 Merged Hide details View details hmellor merged commit 9badee5 into vllm-project : main Mar 4, 2025 37 checks passed Uh oh! There was an error while loading. Please reload this page . hmellor deleted the fix-generation-config-arg branch March 4, 2025 19:59 Copy link Contributor yansh97 commented Mar 5, 2025 Very nice fix!!! Since "--generation-config was added", I have noticed a performance improvement when set to None, but a regression when set to "auto". I thought the reason is some changes in the sampling code. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Alexei-V-Ivanov-AMD added a commit
to ROCm/vllm
that referenced
this pull request Mar 11, 2025 Merging in the latest merge from vllm-project to ROCm ( #472 ) … a699a11 * Fix `head_dim` not existing in all model configs (Transformers backend) ( vllm-project#14141 )
Signed-off-by: Harry Mellor <[email protected]>
* [V0][Metrics] Remove unimplemented `vllm:tokens_total` ( vllm-project#14134 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [V0][Metrics] Deprecate some KV/prefix cache metrics ( vllm-project#14136 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [V1] Simplify stats logging ( vllm-project#14082 )
Signed-off-by: Nick Hill <[email protected]>
* [WIP][[V1][Metrics] Implement max_num_generation_tokens, request_params_n, and request_params_max_tokens metrics ( vllm-project#14055 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [Bugfix] Allow shared_experts skip quantization for DeepSeekV2/V3 ( vllm-project#14100 )
Signed-off-by: mgoin <[email protected]>
* [Kernel] Optimize moe intermediate_cache usage ( vllm-project#13625 )
Signed-off-by: mgoin <[email protected]>
* [Docs] Add GPTQModel ( vllm-project#14056 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]>
* [v1] Add comments to the new ragged paged attention Pallas kernel ( vllm-project#14155 )
Signed-off-by: Xiongfei Wei <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Model] Add support for GraniteMoeShared models ( vllm-project#13313 )
Signed-off-by: Travis Johnson <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [core] moe fp8 block quant tuning support ( vllm-project#14068 )
Signed-off-by: Divakar Verma <[email protected]>
* [Misc] Remove lru_cache in NvmlCudaPlatform ( vllm-project#14156 )
Signed-off-by: Cody Yu <[email protected]>
* [core] Pass all driver env vars to ray workers unless excluded ( vllm-project#14099 )
Signed-off-by: Rui Qiao <[email protected]>
* Use math.prod instead of np.prod for trivial ops ( vllm-project#14142 )
* Fix benchmark_moe.py tuning for CUDA devices ( vllm-project#14164 )
* [platform] add debug logging during inferring the device type ( vllm-project#14195 )
Signed-off-by: youkaichao <[email protected]>
* [sleep mode] error out with expandable_segments ( vllm-project#14189 )
Signed-off-by: youkaichao <[email protected]>
* [doc] add "Failed to infer device type" to faq ( vllm-project#14200 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Restrict MacOS CPU detection ( vllm-project#14210 )
Signed-off-by: mgoin <[email protected]>
* [V1][BugFix] Fix remaining sync engine client shutdown errors/hangs ( vllm-project#13869 )
Signed-off-by: Nick Hill <[email protected]>
* [V0][Metrics] Deprecate some questionable request time metrics ( vllm-project#14135 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [V1][Molmo] Fix get_multimodal_embeddings() in molmo.py ( vllm-project#14161 )
* add cutlass support for blackwell fp8 gemm ( vllm-project#13798 )
* [TPU][Profiler] Support start_profile/stop_profile in TPU worker ( vllm-project#13988 )
Signed-off-by: Siyuan Liu <[email protected]>
Co-authored-by: mgoin <[email protected]>
* Fix performance when `--generation-config` is not `None` ( vllm-project#14223 )
Signed-off-by: Harry Mellor <[email protected]>
* [Frontend] Do `prompt_logprobs` clamping for chat as well as completions ( vllm-project#14225 )
Signed-off-by: Harry Mellor <[email protected]>
* [Docs] Update Dockerfile dependency image ( vllm-project#14215 )
Signed-off-by: mgoin <[email protected]>
* [v1][Metrics] Add design doc ( vllm-project#12745 )
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
* [Security] Serialize using safetensors instead of pickle in Mooncake Pipe ( vllm-project#14228 )
Signed-off-by: KuntaiDu <[email protected]>
* Clean up unused padding_idx variables across many model definitions ( vllm-project#13240 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [ROCm] Disable a few more kernel tests that are broken on ROCm ( vllm-project#14145 )
Signed-off-by: Sage Moore <[email protected]>
* [V1][TPU] TPU multimodal model support for ragged attention ( vllm-project#14158 )
Signed-off-by: Michael Goin <[email protected]>
* [misc] announce china meetup ( vllm-project#14248 )
Signed-off-by: youkaichao <[email protected]>
* Moved numba from common requirements to cuda/rocm specific requirements ( vllm-project#14199 )
Signed-off-by: Nishidha Panpaliya <[email protected]>
* Disable GPTQ AllSpark kernels for CUDA Compiler < 12.0 ( vllm-project#14157 )
Signed-off-by: mgoin <[email protected]>
* [Bugfix] Fix gptq_marlin for deepseek-v3 ( vllm-project#13750 )
Signed-off-by: dangshunya <[email protected]>
Co-authored-by: dangshunya <[email protected]>
* [V1][Bugfix] Do not reset prefix caching metrics ( vllm-project#14235 )
* [Model] New model support for Phi-4-multimodal-instruct ( vllm-project#14119 )
* [V1] EP/TP MoE + DP Attention ( vllm-project#13931 )
* [platforms] improve rocm debugging info ( vllm-project#14257 )
* Temporarily disable test_awq_gemm_opcheck ( vllm-project#14251 )
Signed-off-by: mgoin <[email protected]>
* [Frontend] Allow return_tokens_as_token_ids to be passed as a request param ( vllm-project#14066 )
Signed-off-by: Benjamin Chislett <[email protected]>
* [Misc][V1] Avoid using `envs.VLLM_USE_V1` in mm processing ( vllm-project#14256 )
Signed-off-by: Roger Wang <[email protected]>
* [Bugfix][V1] Fix allowed_token_ids for v1 Sampler ( vllm-project#14169 )
Signed-off-by: Lu Fang <[email protected]>
* [Doc] Update nginx guide: remove privileged from vllm container run and add target GPU ID ( vllm-project#14217 )
Signed-off-by: Iacopo Poli <[email protected]>
* [Doc] [3/N] Refer code examples for common cases in dev multimodal processor ( vllm-project#14278 )
Signed-off-by: DarkLight1337 <[email protected]>
* Small update for external_launcher backend docs ( vllm-project#14288 )
* [V1][Frontend] Add Testing For V1 Runtime Parameters ( vllm-project#14159 )
Signed-off-by: [email protected] <[email protected]>
* [LoRA] Remove linear hack outside transformers backend ( vllm-project#14177 )
Signed-off-by: Isotr0py <[email protected]>
* [Misc] Add Qwen2MoeForCausalLM moe tuning support ( vllm-project#14276 )
Signed-off-by: Jee Jee Li <[email protected]>
* prefix_caching.md: Fixed typo ( vllm-project#14293 )
Signed-off-by: Daivid Savernin-Frenk <[email protected]>
* [Bugfix] Fix broken vision language example ( vllm-project#14292 )
Signed-off-by: Isotr0py <[email protected]>
* [Docs] Add Meta Slides ( vllm-project#14297 )
Signed-off-by: simon-mo <[email protected]>
* [V1][Minor] Remove obsolete FIXME comment ( vllm-project#14304 )
Signed-off-by: Nick Hill <[email protected]>
* Deprecate `best_of` Sampling Parameter in anticipation for vLLM V1 ( vllm-project#13997 )
Signed-off-by: vincent-4 <[email protected]>
Signed-off-by: Brayden Zhong <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Co-authored-by: Brayden Zhong <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
* [V1][BugFix] Fix for mixed top_k batch ( vllm-project#14301 )
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Ye Cao <[email protected]>
* [misc] Add FlashMLA as a new option of VLLM_ATTENTION_BACKEND env ( vllm-project#14267 )
* [V1][Easy] Add empty allowed_token_ids in the v1 sampler test ( vllm-project#14308 )
Signed-off-by: Lu Fang <[email protected]>
* init
Signed-off-by: Sage Moore <[email protected]>
* [Bugfix] Fix DeepSeek MTP crash when using TP1ModelRunner with CUDA graph due to shape mismatch ( vllm-project#14237 )
Signed-off-by: pyc96 <[email protected]>
* [Bugfix] Remove num_tokens_across_dp ( vllm-project#14302 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [BugFix] Fix prefix caching V0 MLA ( vllm-project#14255 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Ying Zhong <[email protected]>
* [CI/Build] Use spawn multiprocessing mode for V1 test pipeline ( vllm-project#14243 )
Signed-off-by: Russell Bryant <[email protected]>
* Add benchmark for DeepGEMM and vLLM Block FP8 Dense GEMM ( vllm-project#13917 )
Signed-off-by: mgoin <[email protected]>
* [Build] Add UV_HTTP_TIMEOUT to avoid timeout during installation ( vllm-project#13850 )
Signed-off-by: Yuan Tang <[email protected]>
* [BugFix] MLA + V1, illegal memory access and accuracy issues ( vllm-project#14253 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [misc] Mention `ray list nodes` command to troubleshoot ray issues ( vllm-project#14318 )
Signed-off-by: Rui Qiao <[email protected]>
* [Bugfix][Structured Output] Support outlines engine with reasoning outputs for DeepSeek R1 ( vllm-project#14114 )
* [V1] LoRA - Enable more V1 tests ( vllm-project#14315 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* [Bugfix][CI] ALiBi test case in xformers multi_query_kv_attention ( vllm-project#11301 )
* [Hardware] Update the flash attn tag to support Blackwell ( vllm-project#14244 )
* [Model] Update Paligemma multimodal processing with PromptUpdate ( vllm-project#14015 )
Signed-off-by: Kyle Huang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [V1][VLM][Pixtral-HF] Support Pixtral-HF on V1 ( vllm-project#14275 )
Signed-off-by: Linkun Chen <[email protected]>
* [Core] Optimizing cross-attention `QKVParallelLinear` computation ( vllm-project#12325 )
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Co-authored-by: NickLucche <[email protected]>
* [Frontend][Docs] Transcription API streaming ( vllm-project#13301 )
Signed-off-by: NickLucche <[email protected]>
* [Doc] Update reasoning with stream example to use OpenAI library ( vllm-project#14077 )
Signed-off-by: liuyanyi <[email protected]>
* [Doc] Correct beam_search using in generative_models.md ( vllm-project#14363 )
* [Kernel] [V1] Improved performance for V1 Triton (ROCm) backend ( vllm-project#14152 )
* [Bugfix][Core] fix abort_seq_group and memory leak when n>1 ( vllm-project#14326 )
Signed-off-by: courage17340 <[email protected]>
* [Core] Don't use cache during multi-modal profiling ( vllm-project#14336 )
* [Doc] Fix date typo in README.md ( vllm-project#14366 )
Signed-off-by: Jitse Klomp <[email protected]>
* [RLHF] use worker_extension_cls for compatibility with V0 and V1 ( vllm-project#14185 )
Signed-off-by: youkaichao <[email protected]>
* Reinstate `best_of` for V0 ( vllm-project#14356 )
Signed-off-by: Harry Mellor <[email protected]>
* Adding cpu inference with VXE ISA for s390x architecture ( vllm-project#12613 )
Signed-off-by: Dilip Gowda Bhagavan <[email protected]>
Signed-off-by: Rishika Kedia <[email protected]>
Co-authored-by: Rishika Kedia <[email protected]>
* Add authors to license header. ( vllm-project#14371 )
Signed-off-by: Thomas Parnell <[email protected]>
Co-authored-by: Burkhard Ringlein <[email protected]>
Co-authored-by: Jan van Lunteren <[email protected]>
* Fix mla prefill context performance ( vllm-project#13897 )
Signed-off-by: ZhongYingMatrix <[email protected]>
* [V1] Do not detokenize if sampling param detokenize is False ( vllm-project#14224 )
Signed-off-by: Himanshu Jaju <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
* [Distributed] Add enable_expert_parallel arg ( vllm-project#14305 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [CI/Build] Use uv python for docker rather than ppa:deadsnakes/ppa ( vllm-project#13569 )
Signed-off-by: mgoin <[email protected]>
* [CI] Disable spawn when running V1 Test ( vllm-project#14345 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Kernel] Add needs_fixed_stride_order tag to most GEMMs ( vllm-project#14306 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [Bugfix] Fix use_direct_call condition in FusedMoE layer for ( vllm-project#14382 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [Bug] Fix Attention when ignored in by quant_method ( vllm-project#14313 )
Signed-off-by: mgoin <[email protected]>
* [V1][Bugfix] Standardize quantized kv cache rejection for attention backends ( vllm-project#14221 )
Signed-off-by: mgoin <[email protected]>
* [Docs] Add nsight guide to profiling docs ( vllm-project#14298 )
Signed-off-by: mgoin <[email protected]>
* cleanup boolean logic
Signed-off-by: Sage Moore <[email protected]>
* [Hardware][TPU]Enable ragged paged attention kernel and resolve recompilation issue ( vllm-project#14310 )
Signed-off-by: Chengji Yao <[email protected]>
* [Doc] Fix a typo ( vllm-project#14385 )
* [Bugfix] Correctly call `cudaProfilerStop` in benchmarks script ( vllm-project#14183 )
Signed-off-by: Brayden Zhong <[email protected]>
* [Perf] Reduce MLA CPU overheads in V1 ( vllm-project#14384 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
* [FP8] Refactor apply_fp8_linear and apply_fp8_linear_generic into an object ( vllm-project#14390 )
Signed-off-by: luka <[email protected]>
* [BugFix] Illegal Memory Access in the blockwise cutlass fp8 GEMMs ( vllm-project#14396 )
* [Bugfix] Fix JambaForCausalLM LoRA ( vllm-project#14370 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Build] Add nightly wheel fallback when latest commit wheel unavailable ( vllm-project#14358 )
Signed-off-by: Isotr0py <[email protected]>
* OpenVINO: added CPU-like conditions ( vllm-project#14338 )
Signed-off-by: Ilya Lavrenov <[email protected]>
* [GH] Auto-apply multi-modality label to relevant PRs ( vllm-project#14402 )
Signed-off-by: DarkLight1337 <[email protected]>
* correct wrong markdown syntax ( vllm-project#14414 )
Signed-off-by: vincent-pli <[email protected]>
* [Bugfix] Further clean up LoRA test ( vllm-project#14422 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Bugfix] Clean up multi-modal processors ( vllm-project#14417 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] Set default value of seed to None ( vllm-project#14274 )
Signed-off-by: மனோஜ்குமார் பழனிச்சாமி <[email protected]>
* [BUGFIX] Skip tokenization support for throughput benchmark ( vllm-project#12712 )
Signed-off-by: root <[email protected]>
Signed-off-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
* Fix missing `kv_caches` and `attn_metadata` in `OpenVINOCausalLM` ( vllm-project#14271 )
Signed-off-by: Harry Mellor <[email protected]>
* Use the optimized block sizes after tuning the kernel. ( vllm-project#14329 )
* [V1][Core] Support for Structured Outputs ( vllm-project#12388 )
Signed-off-by: Aaron Pham <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
* [Doc] Update prefix_caching.md to match the example image ( vllm-project#14420 )
* [Benchmarks] Make detokenization optional in benchmark scripts ( vllm-project#11697 )
Signed-off-by: Jeremy Arnold <[email protected]>
* comments
Signed-off-by: Sage Moore <[email protected]>
* [Kernel] optimize performance of gptq marlin kernel when n is small ( vllm-project#14138 )
Signed-off-by: Jinzhen Lin <[email protected]>
* [Misc] Add Phi4-MM example ( vllm-project#14343 )
Signed-off-by: Jee Jee Li <[email protected]>
* [v1] torch.compile integration explanation ( vllm-project#14437 )
Signed-off-by: youkaichao <[email protected]>
* [V1] Eagerly remove finished requests from the batch ( vllm-project#14388 )
Signed-off-by: Nick Hill <[email protected]>
* [V1][Metrics] Fix traceback with preemptions+LoRA ( vllm-project#14220 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [Bugfix] Fix torch_xla which can't handle None seed introduced in vllm-project#14274 ( vllm-project#14459 )
Signed-off-by: Yarong Mu <[email protected]>
* [V1] Prompt logprobs + APC compatibility; prompt logprobs reqs cannot fill APC ( vllm-project#13949 )
* [Bugfix][V1] Handle MLA in kv_cache_interface ( vllm-project#14462 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* Revert "[Perf] Reduce MLA CPU overheads in V1 ( vllm-project#14384 )" ( vllm-project#14471 )
* [Bugfix][Disaggregated] Add a check in send_kv_caches_and_hidden_states and fix the reshape of the KVCache ( vllm-project#14369 )
Signed-off-by: Mathis Felardos <[email protected]>
* [MISC][V1] Register process killing handler only in the main thread ( vllm-project#14380 )
Signed-off-by: Cody Yu <[email protected]>
* [core] add `extra_args` to `SamplingParams` ( vllm-project#13300 )
Signed-off-by: Aviv Keshet <[email protected]>
* [CI/Build] refactor: set timezone of container to UTC ( vllm-project#12888 )
Signed-off-by: Roger Meier <[email protected]>
* Default to `generation_config` from model ( vllm-project#12622 )
Signed-off-by: Harry Mellor <[email protected]>
* [Doc]add doc for Qwen models tool calling ( vllm-project#14478 )
Signed-off-by: WangErXiao <[email protected]>
* [Doc] Added QwQ-32B to the supported models list in the reasoning out… ( vllm-project#14479 )
Signed-off-by: WangErXiao <[email protected]>
* [Bugfix] Make the deviceprofiler include LoRA memory. ( vllm-project#14469 )
Signed-off-by: Jee Jee Li <[email protected]>
* Add training doc signposting to TRL ( vllm-project#14439 )
Signed-off-by: Harry Mellor <[email protected]>
* [Build/BugFix] Fix hopper 12.8 build ( vllm-project#14354 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
* Add RLHF document ( vllm-project#14482 )
Signed-off-by: Harry Mellor <[email protected]>
* [CI/Build] Use a fixed seed to avoid flaky tests ( vllm-project#14480 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] TPU - Add tensor parallel support via Ray ( vllm-project#13618 )
Signed-off-by: Alexander Matveev <[email protected]>
* [VLM] Add TP support for Phi-4-MM ( vllm-project#14453 )
Signed-off-by: Isotr0py <[email protected]>
* [Misc] add `use_tqdm_on_load` to reduce logs ( vllm-project#14407 )
Signed-off-by: Aaron Pham <[email protected]>
* [V1][Core] Fix memory issue with logits & sampling ( vllm-project#13776 )
Signed-off-by: Roger Wang <[email protected]>
* [benchmarks] Add option to use unique jsonschema for each request ( vllm-project#14457 )
Signed-off-by: Russell Bryant <[email protected]>
* [Misc] Don't run ruff at all on 3rd party libs ( vllm-project#14493 )
Signed-off-by: DarkLight1337 <[email protected]>
* Move requirements into their own directory ( vllm-project#12547 )
Signed-off-by: Harry Mellor <[email protected]>
* [Bugfix] DeepSeek Accuracy ( vllm-project#14476 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Bugfix] Fix profiling OOM and decouple encoder multimodal profiling ( vllm-project#14361 )
Signed-off-by: Isotr0py <[email protected]>
* Update CODEOWNERS for structured output ( vllm-project#14496 )
Signed-off-by: Russell Bryant <[email protected]>
* [Misc] Upgrade to Python 3.9 typing for additional directories ( vllm-project#14492 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] Support bad_words in sampler ( vllm-project#13376 )
Signed-off-by: 22quinn <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
* Revert "[V1][Core] Fix memory issue with logits & sampling" ( vllm-project#14504 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Attention] Default to FlashMLA backend for MLA ( vllm-project#14451 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
* [V1][TPU] Remove unnecessary padding for running on TPU. ( vllm-project#14467 )
* [Feat] Support chunked prefill for LMCache connector ( vllm-project#14505 )
Signed-off-by: YaoJiayi <[email protected]>
* [Bugfix] Fix tqdm progress bar when SamplingParams.n > 1 ( vllm-project#12428 )
Signed-off-by: Yuchen Yan <[email protected]>
* [Bugfix] Revert QKVCrossParallelLinear usage in Mllama to keep BNB quantization work ( vllm-project#14498 )
Signed-off-by: Isotr0py <[email protected]>
* [Hardware][TPU] Fix the recompiling issue in logits processor after warmup ( vllm-project#14510 )
Signed-off-by: Chengji Yao <[email protected]>
* [Misc] Ensure out-of-tree quantization method recognize by cli args ( vllm-project#14328 )
Signed-off-by: liuyanyi <[email protected]>
* [Bugfix] Wrong requirements path - rocm ( vllm-project#14527 )
Signed-off-by: Martin Hoyer <[email protected]>
* [Feature] Consolidate performance benchmark datasets ( vllm-project#14036 )
Signed-off-by: Jennifer Zhao <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Jennifer Zhao <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Misc] Add log information for handle_process_request. ( vllm-project#14130 )
Signed-off-by: chaunceyjiang <[email protected]>
* [Docs] Mention `model_impl` arg when explaining Transformers fallback ( vllm-project#14552 )
Signed-off-by: Harry Mellor <[email protected]>
* [Frontend] support image embeds ( vllm-project#13955 )
Signed-off-by: chaunceyjiang <[email protected]>
* [Kernel] Add more dtype support for GGUF kernels ( vllm-project#14043 )
Signed-off-by: SzymonOzog <[email protected]>
Signed-off-by: SzymonOzog <[email protected]>
* [Doc] Update PaliGemma note to a warning ( vllm-project#14565 )
Signed-off-by: DarkLight1337 <[email protected]>
* V1 rocm support ( #469 )
* Initial commit for V1 successfull compilation
* Small improvement for linear
* Small improvement for linear
* making use of forward_cuda for all except ROPE in llama
---------
Co-authored-by: maleksan85 <[email protected]>
* nightly_fixed_aiter_integration_final_20250305 README update ( #470 )
* nightly_fixed_aiter_integration_final_20250305 README update (perf results only)
* Update Docker Manifest git hash
* Update Docker Manifest and added nightly_fixed_aiter_integration_final_20250305
* some more updates
* Update AITER section with example
* Updated AITER command with larger batch size and model name
* Fixing typo
* Removed --max-model-len in AITER command
* Updating AITER instructions
* typo
* Another typo
* Whitespace
* modifying whats new section
* Another typo
---------
Co-authored-by: arakowsk-amd <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
---------
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: Xiongfei Wei <[email protected]>
Signed-off-by: Travis Johnson <[email protected]>
Signed-off-by: Divakar Verma <[email protected]>
Signed-off-by: Cody Yu <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Siyuan Liu <[email protected]>
Signed-off-by: KuntaiDu <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Sage Moore <[email protected]>
Signed-off-by: Michael Goin <[email protected]>
Signed-off-by: Nishidha Panpaliya <[email protected]>
Signed-off-by: dangshunya <[email protected]>
Signed-off-by: Benjamin Chislett <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Lu Fang <[email protected]>
Signed-off-by: Iacopo Poli <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: Daivid Savernin-Frenk <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: vincent-4 <[email protected]>
Signed-off-by: Brayden Zhong <[email protected]>
Signed-off-by: pyc96 <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Kyle Huang <[email protected]>
Signed-off-by: Linkun Chen <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: liuyanyi <[email protected]>
Signed-off-by: courage17340 <[email protected]>
Signed-off-by: Jitse Klomp <[email protected]>
Signed-off-by: Dilip Gowda Bhagavan <[email protected]>
Signed-off-by: Rishika Kedia <[email protected]>
Signed-off-by: Thomas Parnell <[email protected]>
Signed-off-by: ZhongYingMatrix <[email protected]>
Signed-off-by: Himanshu Jaju <[email protected]>
Signed-off-by: Chengji Yao <[email protected]>
Signed-off-by: luka <[email protected]>
Signed-off-by: Ilya Lavrenov <[email protected]>
Signed-off-by: vincent-pli <[email protected]>
Signed-off-by: மனோஜ்குமார் பழனிச்சாமி <[email protected]>
Signed-off-by: root <[email protected]>
Signed-off-by: Aleksandr Malyshev <[email protected]>
Signed-off-by: Aaron Pham <[email protected]>
Signed-off-by: Jeremy Arnold <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]>
Signed-off-by: Yarong Mu <[email protected]>
Signed-off-by: Mathis Felardos <[email protected]>
Signed-off-by: Aviv Keshet <[email protected]>
Signed-off-by: Roger Meier <[email protected]>
Signed-off-by: WangErXiao <[email protected]>
Signed-off-by: Alexander Matveev <[email protected]>
Signed-off-by: 22quinn <[email protected]>
Signed-off-by: YaoJiayi <[email protected]>
Signed-off-by: Yuchen Yan <[email protected]>
Signed-off-by: Martin Hoyer <[email protected]>
Signed-off-by: Jennifer Zhao <[email protected]>
Signed-off-by: chaunceyjiang <[email protected]>
Signed-off-by: SzymonOzog <[email protected]>
Signed-off-by: SzymonOzog <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Mark McLoughlin <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Qubitium-ModelCloud <[email protected]>
Co-authored-by: mgoin <[email protected]>
Co-authored-by: iefgnoix <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Zhanwen Chen <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: lkchen <[email protected]>
Co-authored-by: kushanam <[email protected]>
Co-authored-by: Siyuan Liu <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Sage Moore <[email protected]>
Co-authored-by: Nishidha <[email protected]>
Co-authored-by: rainkert <[email protected]>
Co-authored-by: dangshunya <[email protected]>
Co-authored-by: Congcong Chen <[email protected]>
Co-authored-by: Benjamin Chislett <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Lu Fang <[email protected]>
Co-authored-by: Iacopo Poli <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Zhe Zhang <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: DaividFrank <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Vincent <[email protected]>
Co-authored-by: Brayden Zhong <[email protected]>
Co-authored-by: Ye Cao <[email protected]>
Co-authored-by: Serena <[email protected]>
Co-authored-by: pyc96 <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Ying Zhong <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Ce Gao <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: Pavani Majety <[email protected]>
Co-authored-by: kYLe <[email protected]>
Co-authored-by: NickLucche <[email protected]>
Co-authored-by: Yanyi Liu <[email protected]>
Co-authored-by: Irina Yuryeva <[email protected]>
Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: courage17340 <[email protected]>
Co-authored-by: Jitse Klomp <[email protected]>
Co-authored-by: Dilip Gowda Bhagavan <[email protected]>
Co-authored-by: Rishika Kedia <[email protected]>
Co-authored-by: Burkhard Ringlein <[email protected]>
Co-authored-by: Jan van Lunteren <[email protected]>
Co-authored-by: Himanshu Jaju <[email protected]>
Co-authored-by: Chengji Yao <[email protected]>
Co-authored-by: Daniel Li <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Peng Li <[email protected]>
Co-authored-by: மனோஜ்குமார் பழனிச்சாமி <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: Aaron Pham <[email protected]>
Co-authored-by: York-RDWang <[email protected]>
Co-authored-by: Jeremy Arnold <[email protected]>
Co-authored-by: Jinzhen Lin <[email protected]>
Co-authored-by: yarongmu-google <[email protected]>
Co-authored-by: afeldman-nm <[email protected]>
Co-authored-by: Mathis Felardos <[email protected]>
Co-authored-by: Aviv Keshet <[email protected]>
Co-authored-by: Roger Meier <[email protected]>
Co-authored-by: Robin <[email protected]>
Co-authored-by: Alexander Matveev <[email protected]>
Co-authored-by: 22quinn <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Jiayi Yao <[email protected]>
Co-authored-by: Yuchen Yan <[email protected]>
Co-authored-by: Martin Hoyer <[email protected]>
Co-authored-by: Jennifer Zhao <[email protected]>
Co-authored-by: Jennifer Zhao <[email protected]>
Co-authored-by: Chauncey <[email protected]>
Co-authored-by: Szymon Ożóg <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Mcirino1 <[email protected]>
Co-authored-by: arakowsk-amd <[email protected]> lulmer pushed a commit
to lulmer/vllm
that referenced
this pull request Apr 7, 2025 Fix performance when --generation-config is not None ( vllm-projec… … be31e4d …t#14223 )
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Louis Ulmer <[email protected]> ckhordiasma mentioned this pull request Apr 17, 2025 [do not merge] pr test for nm changes into 2.20 red-hat-data-services/vllm#107 Closed shreyankg pushed a commit
to shreyankg/vllm
that referenced
this pull request May 3, 2025 Fix performance when --generation-config is not None ( vllm-projec… … b12de09 …t#14223 )
Signed-off-by: Harry Mellor <[email protected]>
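The fix above amounts to resolving the generation config once and reusing it for every request, rather than letting each request re-read generation_config.json from disk (or re-fetch it from the Hub). A minimal sketch of that load-once pattern, using Hugging Face's GenerationConfig and a hypothetical helper name; this is an illustration of the idea, not vLLM's actual code path:

from functools import lru_cache
from typing import Optional

from transformers import GenerationConfig


@lru_cache(maxsize=None)
def load_generation_config(model: str) -> Optional[GenerationConfig]:
    # Hypothetical helper: touch the filesystem/Hub at most once per model.
    try:
        return GenerationConfig.from_pretrained(model)
    except OSError:
        # The model ships no generation_config.json; cache that outcome too.
        return None


# Per-request code can now ask for the defaults freely; only the first call
# for a given model actually performs the read.
defaults = load_generation_config("path/or/hf-id-of-the-model")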
|
2025-09-07 17:52:18
|
19d98e0c7db96713f0e2201649159431177a56e2
|
https://github.com/vllm-project/vllm/pull/13625
| true | true | true | true |
LM_EVAL: gsm8k | PERF: throughput, Improvement, improvement | SERVING: Frontend, Frontend, Frontend | TEST: test, test, test
|
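The PR whose discussion follows reuses a single allocation for two of fused_moe's intermediate buffers. As a minimal sketch of that idea in plain PyTorch (hypothetical sizes and names; an illustration, not the actual vLLM implementation): the activation output cache2 is computed from the first GEMM output cache1, so those two need separate storage, but cache1 is no longer read once the second GEMM produces cache3, so cache3 can live in the same memory as cache1.

import torch

# Hypothetical sizes, for illustration only.
M, topk = 1024, 2                 # tokens and experts chosen per token
N, K = 8192, 4096                 # intermediate size and hidden size
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16

# cache1 (first GEMM output, M x topk x 2N) and cache3 (second GEMM output,
# M x topk x K) are never live at the same time, so both can be views over
# one flat workspace sized for the larger of the two.
workspace = torch.empty(M * topk * max(2 * N, K), dtype=dtype, device=device)
intermediate_cache1 = workspace[: M * topk * 2 * N].view(M, topk, 2 * N)
intermediate_cache3 = workspace[: M * topk * K].view(M, topk, K)

# cache2 holds the silu(gate) * up activation and is read while cache3 is
# being written, so it keeps its own allocation.
intermediate_cache2 = torch.empty((M * topk, N), dtype=dtype, device=device)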
Copy link Member mgoin commented Feb 20, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . I'm running into OOM issues at long sequence lengths with DeepSeek R1, so I'm exploring options here (see o3-mini chat). First I tried moving the silu+mul to be an in-place operation via a new kernel, torch.ops._C.silu_and_mul_inplace(intermediate_cache1.view(-1, N)), but it seems easier to reuse memory for cache1 and cache3 since there is absolutely no data dependency between them. Manual peak measurement shows a 15% reduction in memory for fused_moe for a 64k prefill.
Eval: vllm (pretrained=deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value | |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.7642|± |0.0117|
| | |strict-match | 5|exact_match|↑ |0.7468|± |0.0120| Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 simon-mo reacted with thumbs up emoji All reactions 👍 1 reaction Optimize moe intermediate_cache allocation … 85baec6 Signed-off-by: mgoin <[email protected]> Copy link github-actions bot commented Feb 20, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Improvement … ab14d0e Signed-off-by: mgoin <[email protected]> mgoin marked this pull request as ready for review February 20, 2025 20:58 mgoin changed the title Optimize moe intermediate_cache usage [Kernel] Optimize moe intermediate_cache usage Feb 20, 2025 Merge branch 'main' into fused-moe-reuse-intermediate-cache d222413 mgoin added
the ready ONLY add when PR is ready to merge/full CI is needed label Feb 25, 2025 mgoin requested review from LucasWilkinson and tlrmchlsmth February 25, 2025 17:05 Copy link Member Author mgoin commented Feb 25, 2025 • edited Loading Uh oh! There was an error while loading. Please reload this page . Manual peak measurement shows 15% reduction in memory for fused_moe for 64k prefill

main: python moe_mem.py
Memory usage: 3586 MB
Time: 5.0750 seconds

This PR: python moe_mem.py
Memory usage: 3074 MB
Time: 5.0809 seconds

import torch
import time

from vllm.model_executor.layers.fused_moe import fused_moe

num_tokens = 64 * 1024
experts = 8
hidden_size = 4096
intermediate_size = 8192
topk = 2

torch.manual_seed(0)
x = torch.randn((num_tokens, hidden_size), device="cuda", dtype=torch.float16) / 32
w1 = torch.randn((experts, intermediate_size * 2, hidden_size), device="cuda", dtype=torch.float16) / 32
w2 = torch.randn((experts, hidden_size, intermediate_size), device="cuda", dtype=torch.float16) / 32
gating_output = torch.randn((num_tokens, experts), device="cuda", dtype=torch.float16)

# Run once to get peak memory usage
start_memory_mb = torch.cuda.max_memory_allocated() // (1024 * 1024)
_ = fused_moe(x, w1, w2, gating_output, topk, True)
end_memory_mb = torch.cuda.max_memory_allocated() // (1024 * 1024)
print(f"Memory usage: {end_memory_mb - start_memory_mb} MB")

# Benchmark performance
start = time.perf_counter()
for _ in range(100):
    x = fused_moe(x, w1, w2, gating_output, topk, True)
elapsed = time.perf_counter() - start
print(f"Time: {elapsed:.4f} seconds")

All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . simon-mo added this to DeepSeek V3/R1 Feb 25, 2025 github-project-automation bot moved this to Backlog in DeepSeek V3/R1 Feb 25, 2025 simon-mo moved this from Backlog to In review in DeepSeek V3/R1 Feb 25, 2025 hmellor moved this from In review to In progress in DeepSeek V3/R1 Feb 28, 2025 tlrmchlsmth approved these changes Mar 3, 2025 View reviewed changes Hide details View details tlrmchlsmth merged commit 19d98e0 into vllm-project : main Mar 3, 2025 60 checks passed Uh oh! There was an error while loading. Please reload this page . github-project-automation bot moved this from In progress to Done in DeepSeek V3/R1 Mar 3, 2025 mgoin deleted the fused-moe-reuse-intermediate-cache branch March 3, 2025 22:50 Alexei-V-Ivanov-AMD added a commit
to ROCm/vllm
that referenced
this pull request Mar 11, 2025 Merging in the latest merge from vllm-project to ROCm ( #472 ) … a699a11 * Fix `head_dim` not existing in all model configs (Transformers backend) ( vllm-project#14141 )
Signed-off-by: Harry Mellor <[email protected]>
* [V0][Metrics] Remove unimplemented `vllm:tokens_total` ( vllm-project#14134 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [V0][Metrics] Deprecate some KV/prefix cache metrics ( vllm-project#14136 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [V1] Simplify stats logging ( vllm-project#14082 )
Signed-off-by: Nick Hill <[email protected]>
* [WIP][[V1][Metrics] Implement max_num_generation_tokens, request_params_n, and request_params_max_tokens metrics ( vllm-project#14055 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [Bugfix] Allow shared_experts skip quantization for DeepSeekV2/V3 ( vllm-project#14100 )
Signed-off-by: mgoin <[email protected]>
* [Kernel] Optimize moe intermediate_cache usage ( vllm-project#13625 )
Signed-off-by: mgoin <[email protected]>
* [Docs] Add GPTQModel ( vllm-project#14056 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]>
* [v1] Add comments to the new ragged paged attention Pallas kernel ( vllm-project#14155 )
Signed-off-by: Xiongfei Wei <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
* [Model] Add support for GraniteMoeShared models ( vllm-project#13313 )
Signed-off-by: Travis Johnson <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [core] moe fp8 block quant tuning support ( vllm-project#14068 )
Signed-off-by: Divakar Verma <[email protected]>
* [Misc] Remove lru_cache in NvmlCudaPlatform ( vllm-project#14156 )
Signed-off-by: Cody Yu <[email protected]>
* [core] Pass all driver env vars to ray workers unless excluded ( vllm-project#14099 )
Signed-off-by: Rui Qiao <[email protected]>
* Use math.prod instead of np.prod for trivial ops ( vllm-project#14142 )
* Fix benchmark_moe.py tuning for CUDA devices ( vllm-project#14164 )
* [platform] add debug logging during inferring the device type ( vllm-project#14195 )
Signed-off-by: youkaichao <[email protected]>
* [sleep mode] error out with expandable_segments ( vllm-project#14189 )
Signed-off-by: youkaichao <[email protected]>
* [doc] add "Failed to infer device type" to faq ( vllm-project#14200 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Restrict MacOS CPU detection ( vllm-project#14210 )
Signed-off-by: mgoin <[email protected]>
* [V1][BugFix] Fix remaining sync engine client shutdown errors/hangs ( vllm-project#13869 )
Signed-off-by: Nick Hill <[email protected]>
* [V0][Metrics] Deprecate some questionable request time metrics ( vllm-project#14135 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [V1][Molmo] Fix get_multimodal_embeddings() in molmo.py ( vllm-project#14161 )
* add cutlass support for blackwell fp8 gemm ( vllm-project#13798 )
* [TPU][Profiler] Support start_profile/stop_profile in TPU worker ( vllm-project#13988 )
Signed-off-by: Siyuan Liu <[email protected]>
Co-authored-by: mgoin <[email protected]>
* Fix performance when `--generation-config` is not `None` ( vllm-project#14223 )
Signed-off-by: Harry Mellor <[email protected]>
* [Frontend] Do `prompt_logprobs` clamping for chat as well as completions ( vllm-project#14225 )
Signed-off-by: Harry Mellor <[email protected]>
* [Docs] Update Dockerfile dependency image ( vllm-project#14215 )
Signed-off-by: mgoin <[email protected]>
* [v1][Metrics] Add design doc ( vllm-project#12745 )
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
* [Security] Serialize using safetensors instead of pickle in Mooncake Pipe ( vllm-project#14228 )
Signed-off-by: KuntaiDu <[email protected]>
* Clean up unused padding_idx variables across many model definitions ( vllm-project#13240 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [ROCm] Disable a few more kernel tests that are broken on ROCm ( vllm-project#14145 )
Signed-off-by: Sage Moore <[email protected]>
* [V1][TPU] TPU multimodal model support for ragged attention ( vllm-project#14158 )
Signed-off-by: Michael Goin <[email protected]>
* [misc] announce china meetup ( vllm-project#14248 )
Signed-off-by: youkaichao <[email protected]>
* Moved numba from common requirements to cuda/rocm specific requirements ( vllm-project#14199 )
Signed-off-by: Nishidha Panpaliya <[email protected]>
* Disable GPTQ AllSpark kernels for CUDA Compiler < 12.0 ( vllm-project#14157 )
Signed-off-by: mgoin <[email protected]>
* [Bugfix] Fix gptq_marlin for deepseek-v3 ( vllm-project#13750 )
Signed-off-by: dangshunya <[email protected]>
Co-authored-by: dangshunya <[email protected]>
* [V1][Bugfix] Do not reset prefix caching metrics ( vllm-project#14235 )
* [Model] New model support for Phi-4-multimodal-instruct ( vllm-project#14119 )
* [V1] EP/TP MoE + DP Attention ( vllm-project#13931 )
* [platforms] improve rocm debugging info ( vllm-project#14257 )
* Temporarily disable test_awq_gemm_opcheck ( vllm-project#14251 )
Signed-off-by: mgoin <[email protected]>
* [Frontend] Allow return_tokens_as_token_ids to be passed as a request param ( vllm-project#14066 )
Signed-off-by: Benjamin Chislett <[email protected]>
* [Misc][V1] Avoid using `envs.VLLM_USE_V1` in mm processing ( vllm-project#14256 )
Signed-off-by: Roger Wang <[email protected]>
* [Bugfix][V1] Fix allowed_token_ids for v1 Sampler ( vllm-project#14169 )
Signed-off-by: Lu Fang <[email protected]>
* [Doc] Update nginx guide: remove privileged from vllm container run and add target GPU ID ( vllm-project#14217 )
Signed-off-by: Iacopo Poli <[email protected]>
* [Doc] [3/N] Refer code examples for common cases in dev multimodal processor ( vllm-project#14278 )
Signed-off-by: DarkLight1337 <[email protected]>
* Small update for external_launcher backend docs ( vllm-project#14288 )
* [V1][Frontend] Add Testing For V1 Runtime Parameters ( vllm-project#14159 )
Signed-off-by: [email protected] <[email protected]>
* [LoRA] Remove linear hack outside transformers backend ( vllm-project#14177 )
Signed-off-by: Isotr0py <[email protected]>
* [Misc] Add Qwen2MoeForCausalLM moe tuning support ( vllm-project#14276 )
Signed-off-by: Jee Jee Li <[email protected]>
* prefix_caching.md: Fixed typo ( vllm-project#14293 )
Signed-off-by: Daivid Savernin-Frenk <[email protected]>
* [Bugfix] Fix broken vision language example ( vllm-project#14292 )
Signed-off-by: Isotr0py <[email protected]>
* [Docs] Add Meta Slides ( vllm-project#14297 )
Signed-off-by: simon-mo <[email protected]>
* [V1][Minor] Remove obsolete FIXME comment ( vllm-project#14304 )
Signed-off-by: Nick Hill <[email protected]>
* Deprecate `best_of` Sampling Parameter in anticipation for vLLM V1 ( vllm-project#13997 )
Signed-off-by: vincent-4 <[email protected]>
Signed-off-by: Brayden Zhong <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Co-authored-by: Brayden Zhong <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
* [V1][BugFix] Fix for mixed top_k batch ( vllm-project#14301 )
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Ye Cao <[email protected]>
* [misc] Add FlashMLA as a new option of VLLM_ATTENTION_BACKEND env ( vllm-project#14267 )
* [V1][Easy] Add empty allowed_token_ids in the v1 sampler test ( vllm-project#14308 )
Signed-off-by: Lu Fang <[email protected]>
* init
Signed-off-by: Sage Moore <[email protected]>
* [Bugfix] Fix DeepSeek MTP crash when using TP1ModelRunner with CUDA graph due to shape mismatch ( vllm-project#14237 )
Signed-off-by: pyc96 <[email protected]>
* [Bugfix] Remove num_tokens_across_dp ( vllm-project#14302 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [BugFix] Fix prefix caching V0 MLA ( vllm-project#14255 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Ying Zhong <[email protected]>
* [CI/Build] Use spawn multiprocessing mode for V1 test pipeline ( vllm-project#14243 )
Signed-off-by: Russell Bryant <[email protected]>
* Add benchmark for DeepGEMM and vLLM Block FP8 Dense GEMM ( vllm-project#13917 )
Signed-off-by: mgoin <[email protected]>
* [Build] Add UV_HTTP_TIMEOUT to avoid timeout during installation ( vllm-project#13850 )
Signed-off-by: Yuan Tang <[email protected]>
* [BugFix] MLA + V1, illegal memory access and accuracy issues ( vllm-project#14253 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [misc] Mention `ray list nodes` command to troubleshoot ray issues ( vllm-project#14318 )
Signed-off-by: Rui Qiao <[email protected]>
* [Bugfix][Structured Output] Support outlines engine with reasoning outputs for DeepSeek R1 ( vllm-project#14114 )
* [V1] LoRA - Enable more V1 tests ( vllm-project#14315 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* [Bugfix][CI] ALiBi test case in xformers multi_query_kv_attention ( vllm-project#11301 )
* [Hardware] Update the flash attn tag to support Blackwell ( vllm-project#14244 )
* [Model] Update Paligemma multimodal processing with PromptUpdate ( vllm-project#14015 )
Signed-off-by: Kyle Huang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [V1][VLM][Pixtral-HF] Support Pixtral-HF on V1 ( vllm-project#14275 )
Signed-off-by: Linkun Chen <[email protected]>
* [Core] Optimizing cross-attention `QKVParallelLinear` computation ( vllm-project#12325 )
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Co-authored-by: NickLucche <[email protected]>
* [Frontend][Docs] Transcription API streaming ( vllm-project#13301 )
Signed-off-by: NickLucche <[email protected]>
* [Doc] Update reasoning with stream example to use OpenAI library ( vllm-project#14077 )
Signed-off-by: liuyanyi <[email protected]>
* [Doc] Correct beam_search using in generative_models.md ( vllm-project#14363 )
* [Kernel] [V1] Improved performance for V1 Triton (ROCm) backend ( vllm-project#14152 )
* [Bugfix][Core] fix abort_seq_group and memory leak when n>1 ( vllm-project#14326 )
Signed-off-by: courage17340 <[email protected]>
* [Core] Don't use cache during multi-modal profiling ( vllm-project#14336 )
* [Doc] Fix date typo in README.md ( vllm-project#14366 )
Signed-off-by: Jitse Klomp <[email protected]>
* [RLHF] use worker_extension_cls for compatibility with V0 and V1 ( vllm-project#14185 )
Signed-off-by: youkaichao <[email protected]>
* Reinstate `best_of` for V0 ( vllm-project#14356 )
Signed-off-by: Harry Mellor <[email protected]>
* Adding cpu inference with VXE ISA for s390x architecture ( vllm-project#12613 )
Signed-off-by: Dilip Gowda Bhagavan <[email protected]>
Signed-off-by: Rishika Kedia <[email protected]>
Co-authored-by: Rishika Kedia <[email protected]>
* Add authors to license header. ( vllm-project#14371 )
Signed-off-by: Thomas Parnell <[email protected]>
Co-authored-by: Burkhard Ringlein <[email protected]>
Co-authored-by: Jan van Lunteren <[email protected]>
* Fix mla prefill context performance ( vllm-project#13897 )
Signed-off-by: ZhongYingMatrix <[email protected]>
* [V1] Do not detokenize if sampling param detokenize is False ( vllm-project#14224 )
Signed-off-by: Himanshu Jaju <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
* [Distributed] Add enable_expert_parallel arg ( vllm-project#14305 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [CI/Build] Use uv python for docker rather than ppa:deadsnakes/ppa ( vllm-project#13569 )
Signed-off-by: mgoin <[email protected]>
* [CI] Disable spawn when running V1 Test ( vllm-project#14345 )
Signed-off-by: Thomas Parnell <[email protected]>
* [Kernel] Add needs_fixed_stride_order tag to most GEMMs ( vllm-project#14306 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [Bugfix] Fix use_direct_call condition in FusedMoE layer for ( vllm-project#14382 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* [Bug] Fix Attention when ignored in by quant_method ( vllm-project#14313 )
Signed-off-by: mgoin <[email protected]>
* [V1][Bugfix] Standardize quantized kv cache rejection for attention backends ( vllm-project#14221 )
Signed-off-by: mgoin <[email protected]>
* [Docs] Add nsight guide to profiling docs ( vllm-project#14298 )
Signed-off-by: mgoin <[email protected]>
* cleanup boolean logic
Signed-off-by: Sage Moore <[email protected]>
* [Hardware][TPU]Enable ragged paged attention kernel and resolve recompilation issue ( vllm-project#14310 )
Signed-off-by: Chengji Yao <[email protected]>
* [Doc] Fix a typo ( vllm-project#14385 )
* [Bugfix] Correctly call `cudaProfilerStop` in benchmarks script ( vllm-project#14183 )
Signed-off-by: Brayden Zhong <[email protected]>
* [Perf] Reduce MLA CPU overheads in V1 ( vllm-project#14384 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
* [FP8] Refactor apply_fp8_linear and apply_fp8_linear_generic into an object ( vllm-project#14390 )
Signed-off-by: luka <[email protected]>
* [BugFix] Illegal Memory Access in the blockwise cutlass fp8 GEMMs ( vllm-project#14396 )
* [Bugfix] Fix JambaForCausalLM LoRA ( vllm-project#14370 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Build] Add nightly wheel fallback when latest commit wheel unavailable ( vllm-project#14358 )
Signed-off-by: Isotr0py <[email protected]>
* OpenVINO: added CPU-like conditions ( vllm-project#14338 )
Signed-off-by: Ilya Lavrenov <[email protected]>
* [GH] Auto-apply multi-modality label to relevant PRs ( vllm-project#14402 )
Signed-off-by: DarkLight1337 <[email protected]>
* correct wrong markdown syntax ( vllm-project#14414 )
Signed-off-by: vincent-pli <[email protected]>
* [Bugfix] Further clean up LoRA test ( vllm-project#14422 )
Signed-off-by: Jee Jee Li <[email protected]>
* [Bugfix] Clean up multi-modal processors ( vllm-project#14417 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] Set default value of seed to None ( vllm-project#14274 )
Signed-off-by: மனோஜ்குமார் பழனிச்சாமி <[email protected]>
* [BUGFIX] Skip tokenization support for throughput benchmark ( vllm-project#12712 )
Signed-off-by: root <[email protected]>
Signed-off-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
* Fix missing `kv_caches` and `attn_metadata` in `OpenVINOCausalLM` ( vllm-project#14271 )
Signed-off-by: Harry Mellor <[email protected]>
* Use the optimized block sizes after tuning the kernel. ( vllm-project#14329 )
* [V1][Core] Support for Structured Outputs ( vllm-project#12388 )
Signed-off-by: Aaron Pham <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
* [Doc] Update prefix_caching.md to match the example image ( vllm-project#14420 )
* [Benchmarks] Make detokenization optional in benchmark scripts ( vllm-project#11697 )
Signed-off-by: Jeremy Arnold <[email protected]>
* comments
Signed-off-by: Sage Moore <[email protected]>
* [Kernel] optimize performance of gptq marlin kernel when n is small ( vllm-project#14138 )
Signed-off-by: Jinzhen Lin <[email protected]>
* [Misc] Add Phi4-MM example ( vllm-project#14343 )
Signed-off-by: Jee Jee Li <[email protected]>
* [v1] torch.compile integration explanation ( vllm-project#14437 )
Signed-off-by: youkaichao <[email protected]>
* [V1] Eagerly remove finished requests from the batch ( vllm-project#14388 )
Signed-off-by: Nick Hill <[email protected]>
* [V1][Metrics] Fix traceback with preemptions+LoRA ( vllm-project#14220 )
Signed-off-by: Mark McLoughlin <[email protected]>
* [Bugfix] Fix torch_xla which can't handle None seed introduced in vllm-project#14274 ( vllm-project#14459 )
Signed-off-by: Yarong Mu <[email protected]>
* [V1] Prompt logprobs + APC compatibility; prompt logprobs reqs cannot fill APC ( vllm-project#13949 )
* [Bugfix][V1] Handle MLA in kv_cache_interface ( vllm-project#14462 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* Revert "[Perf] Reduce MLA CPU overheads in V1 ( vllm-project#14384 )" ( vllm-project#14471 )
* [Bugfix][Disaggregated] Add a check in send_kv_caches_and_hidden_states and fix the reshape of the KVCache ( vllm-project#14369 )
Signed-off-by: Mathis Felardos <[email protected]>
* [MISC][V1] Register process killing handler only in the main thread ( vllm-project#14380 )
Signed-off-by: Cody Yu <[email protected]>
* [core] add `extra_args` to `SamplingParams` ( vllm-project#13300 )
Signed-off-by: Aviv Keshet <[email protected]>
* [CI/Build] refactor: set timezone of container to UTC ( vllm-project#12888 )
Signed-off-by: Roger Meier <[email protected]>
* Default to `generation_config` from model ( vllm-project#12622 )
Signed-off-by: Harry Mellor <[email protected]>
* [Doc]add doc for Qwen models tool calling ( vllm-project#14478 )
Signed-off-by: WangErXiao <[email protected]>
* [Doc] Added QwQ-32B to the supported models list in the reasoning out… ( vllm-project#14479 )
Signed-off-by: WangErXiao <[email protected]>
* [Bugfix] Make the deviceprofiler include LoRA memory. ( vllm-project#14469 )
Signed-off-by: Jee Jee Li <[email protected]>
* Add training doc signposting to TRL ( vllm-project#14439 )
Signed-off-by: Harry Mellor <[email protected]>
* [Build/BugFix] Fix hopper 12.8 build ( vllm-project#14354 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
* Add RLHF document ( vllm-project#14482 )
Signed-off-by: Harry Mellor <[email protected]>
* [CI/Build] Use a fixed seed to avoid flaky tests ( vllm-project#14480 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] TPU - Add tensor parallel support via Ray ( vllm-project#13618 )
Signed-off-by: Alexander Matveev <[email protected]>
* [VLM] Add TP support for Phi-4-MM ( vllm-project#14453 )
Signed-off-by: Isotr0py <[email protected]>
* [Misc] add `use_tqdm_on_load` to reduce logs ( vllm-project#14407 )
Signed-off-by: Aaron Pham <[email protected]>
* [V1][Core] Fix memory issue with logits & sampling ( vllm-project#13776 )
Signed-off-by: Roger Wang <[email protected]>
* [benchmarks] Add option to use unique jsonschema for each request ( vllm-project#14457 )
Signed-off-by: Russell Bryant <[email protected]>
* [Misc] Don't run ruff at all on 3rd party libs ( vllm-project#14493 )
Signed-off-by: DarkLight1337 <[email protected]>
* Move requirements into their own directory ( vllm-project#12547 )
Signed-off-by: Harry Mellor <[email protected]>
* [Bugfix] DeepSeek Accuracy ( vllm-project#14476 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [Bugfix] Fix profiling OOM and decouple encoder multimodal profiling ( vllm-project#14361 )
Signed-off-by: Isotr0py <[email protected]>
* Update CODEOWNERS for structured output ( vllm-project#14496 )
Signed-off-by: Russell Bryant <[email protected]>
* [Misc] Upgrade to Python 3.9 typing for additional directories ( vllm-project#14492 )
Signed-off-by: DarkLight1337 <[email protected]>
* [V1] Support bad_words in sampler ( vllm-project#13376 )
Signed-off-by: 22quinn <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
* Revert "[V1][Core] Fix memory issue with logits & sampling" ( vllm-project#14504 )
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Attention] Default to FlashMLA backend for MLA ( vllm-project#14451 )
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
* [V1][TPU] Remove unnecessary padding for running on TPU. ( vllm-project#14467 )
* [Feat] Support chunked prefill for LMCache connector ( vllm-project#14505 )
Signed-off-by: YaoJiayi <[email protected]>
* [Bugfix] Fix tqdm progress bar when SamplingParams.n > 1 ( vllm-project#12428 )
Signed-off-by: Yuchen Yan <[email protected]>
* [Bugfix] Revert QKVCrossParallelLinear usage in Mllama to keep BNB quantization work ( vllm-project#14498 )
Signed-off-by: Isotr0py <[email protected]>
* [Hardware][TPU] Fix the recompiling issue in logits processor after warmup ( vllm-project#14510 )
Signed-off-by: Chengji Yao <[email protected]>
* [Misc] Ensure out-of-tree quantization method recognize by cli args ( vllm-project#14328 )
Signed-off-by: liuyanyi <[email protected]>
* [Bugfix] Wrong requirements path - rocm ( vllm-project#14527 )
Signed-off-by: Martin Hoyer <[email protected]>
* [Feature] Consolidate performance benchmark datasets ( vllm-project#14036 )
Signed-off-by: Jennifer Zhao <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Jennifer Zhao <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Misc] Add log information for handle_process_request. ( vllm-project#14130 )
Signed-off-by: chaunceyjiang <[email protected]>
* [Docs] Mention `model_impl` arg when explaining Transformers fallback ( vllm-project#14552 )
Signed-off-by: Harry Mellor <[email protected]>
* [Frontend] support image embeds ( vllm-project#13955 )
Signed-off-by: chaunceyjiang <[email protected]>
* [Kernel] Add more dtype support for GGUF kernels ( vllm-project#14043 )
Signed-off-by: SzymonOzog <[email protected]>
Signed-off-by: SzymonOzog <[email protected]>
* [Doc] Update PaliGemma note to a warning ( vllm-project#14565 )
Signed-off-by: DarkLight1337 <[email protected]>
* V1 rocm support ( #469 )
* Initial commit for V1 successfull compilation
* Small improvement for linear
* Small improvement for linear
* making use of forward_cuda for all except ROPE in llama
---------
Co-authored-by: maleksan85 <[email protected]>
* nightly_fixed_aiter_integration_final_20250305 README update ( #470 )
* nightly_fixed_aiter_integration_final_20250305 README update (perf results only)
* Update Docker Manifest git hash
* Update Docker Manifest and added nightly_fixed_aiter_integration_final_20250305
* some more updates
* Update AITER section with example
* Updated AITER command with larger batch size and model name
* Fixing typo
* Removed --max-model-len in AITER command
* Updating AITER instructions
* typo
* Another typo
* Whitespace
* modifying whats new section
* Another typo
---------
Co-authored-by: arakowsk-amd <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
---------
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: Xiongfei Wei <[email protected]>
Signed-off-by: Travis Johnson <[email protected]>
Signed-off-by: Divakar Verma <[email protected]>
Signed-off-by: Cody Yu <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Siyuan Liu <[email protected]>
Signed-off-by: KuntaiDu <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Sage Moore <[email protected]>
Signed-off-by: Michael Goin <[email protected]>
Signed-off-by: Nishidha Panpaliya <[email protected]>
Signed-off-by: dangshunya <[email protected]>
Signed-off-by: Benjamin Chislett <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Lu Fang <[email protected]>
Signed-off-by: Iacopo Poli <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: Daivid Savernin-Frenk <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: vincent-4 <[email protected]>
Signed-off-by: Brayden Zhong <[email protected]>
Signed-off-by: pyc96 <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Kyle Huang <[email protected]>
Signed-off-by: Linkun Chen <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: liuyanyi <[email protected]>
Signed-off-by: courage17340 <[email protected]>
Signed-off-by: Jitse Klomp <[email protected]>
Signed-off-by: Dilip Gowda Bhagavan <[email protected]>
Signed-off-by: Rishika Kedia <[email protected]>
Signed-off-by: Thomas Parnell <[email protected]>
Signed-off-by: ZhongYingMatrix <[email protected]>
Signed-off-by: Himanshu Jaju <[email protected]>
Signed-off-by: Chengji Yao <[email protected]>
Signed-off-by: luka <[email protected]>
Signed-off-by: Ilya Lavrenov <[email protected]>
Signed-off-by: vincent-pli <[email protected]>
Signed-off-by: மனோஜ்குமார் பழனிச்சாமி <[email protected]>
Signed-off-by: root <[email protected]>
Signed-off-by: Aleksandr Malyshev <[email protected]>
Signed-off-by: Aaron Pham <[email protected]>
Signed-off-by: Jeremy Arnold <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]>
Signed-off-by: Yarong Mu <[email protected]>
Signed-off-by: Mathis Felardos <[email protected]>
Signed-off-by: Aviv Keshet <[email protected]>
Signed-off-by: Roger Meier <[email protected]>
Signed-off-by: WangErXiao <[email protected]>
Signed-off-by: Alexander Matveev <[email protected]>
Signed-off-by: 22quinn <[email protected]>
Signed-off-by: YaoJiayi <[email protected]>
Signed-off-by: Yuchen Yan <[email protected]>
Signed-off-by: Martin Hoyer <[email protected]>
Signed-off-by: Jennifer Zhao <[email protected]>
Signed-off-by: chaunceyjiang <[email protected]>
Signed-off-by: SzymonOzog <[email protected]>
Signed-off-by: SzymonOzog <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Mark McLoughlin <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Qubitium-ModelCloud <[email protected]>
Co-authored-by: mgoin <[email protected]>
Co-authored-by: iefgnoix <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Zhanwen Chen <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: lkchen <[email protected]>
Co-authored-by: kushanam <[email protected]>
Co-authored-by: Siyuan Liu <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Sage Moore <[email protected]>
Co-authored-by: Nishidha <[email protected]>
Co-authored-by: rainkert <[email protected]>
Co-authored-by: dangshunya <[email protected]>
Co-authored-by: Congcong Chen <[email protected]>
Co-authored-by: Benjamin Chislett <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Lu Fang <[email protected]>
Co-authored-by: Iacopo Poli <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Zhe Zhang <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: DaividFrank <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Vincent <[email protected]>
Co-authored-by: Brayden Zhong <[email protected]>
Co-authored-by: Ye Cao <[email protected]>
Co-authored-by: Serena <[email protected]>
Co-authored-by: pyc96 <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Ying Zhong <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Ce Gao <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: Pavani Majety <[email protected]>
Co-authored-by: kYLe <[email protected]>
Co-authored-by: NickLucche <[email protected]>
Co-authored-by: Yanyi Liu <[email protected]>
Co-authored-by: Irina Yuryeva <[email protected]>
Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: courage17340 <[email protected]>
Co-authored-by: Jitse Klomp <[email protected]>
Co-authored-by: Dilip Gowda Bhagavan <[email protected]>
Co-authored-by: Rishika Kedia <[email protected]>
Co-authored-by: Burkhard Ringlein <[email protected]>
Co-authored-by: Jan van Lunteren <[email protected]>
Co-authored-by: Himanshu Jaju <[email protected]>
Co-authored-by: Chengji Yao <[email protected]>
Co-authored-by: Daniel Li <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Peng Li <[email protected]>
Co-authored-by: மனோஜ்குமார் பழனிச்சாமி <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: Aaron Pham <[email protected]>
Co-authored-by: York-RDWang <[email protected]>
Co-authored-by: Jeremy Arnold <[email protected]>
Co-authored-by: Jinzhen Lin <[email protected]>
Co-authored-by: yarongmu-google <[email protected]>
Co-authored-by: afeldman-nm <[email protected]>
Co-authored-by: Mathis Felardos <[email protected]>
Co-authored-by: Aviv Keshet <[email protected]>
Co-authored-by: Roger Meier <[email protected]>
Co-authored-by: Robin <[email protected]>
Co-authored-by: Alexander Matveev <[email protected]>
Co-authored-by: 22quinn <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Jiayi Yao <[email protected]>
Co-authored-by: Yuchen Yan <[email protected]>
Co-authored-by: Martin Hoyer <[email protected]>
Co-authored-by: Jennifer Zhao <[email protected]>
Co-authored-by: Jennifer Zhao <[email protected]>
Co-authored-by: Chauncey <[email protected]>
Co-authored-by: Szymon Ożóg <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Mcirino1 <[email protected]>
Co-authored-by: arakowsk-amd <[email protected]> lulmer pushed a commit
to lulmer/vllm
that referenced
this pull request Apr 7, 2025 [Kernel] Optimize moe intermediate_cache usage ( vllm-project#13625 ) … 553034e Signed-off-by: mgoin <[email protected]>
Signed-off-by: Louis Ulmer <[email protected]> ckhordiasma mentioned this pull request Apr 17, 2025 [do not merge] pr test for nm changes into 2.20 red-hat-data-services/vllm#107 Closed shreyankg pushed a commit
to shreyankg/vllm
that referenced
this pull request May 3, 2025 [Kernel] Optimize moe intermediate_cache usage ( vllm-project#13625 ) … a0341c1 Signed-off-by: mgoin <[email protected]>
|
2025-09-07 17:52:21
|
e206b5433109d298e53451015465b2bf8f03ef0a
|
https://github.com/vllm-project/vllm/pull/13837
| false | false | false | true |
TEST: test, test, test
|
Copy link Contributor sethkimmel3 commented Feb 25, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . The deepcopy introduced in #11637 adds a lot of overhead when adding a large number of requests to an llm_engine . This adds a more efficient method of copying the XGrammarLogitsProcessor data structure to remove that overhead. cc: @mgoin @aarnphm Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link github-actions bot commented Feb 25, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the structured-output label Feb 25, 2025 aarnphm reviewed Feb 25, 2025 View reviewed changes vllm/model_executor/guided_decoding/xgrammar_decoding.py Outdated Comment on lines 362 to 364 if hasattr(self, 'token_bitmask') and self.token_bitmask is not None: new_processor.token_bitmask = xgr.allocate_token_bitmask( self.batch_size, self.config.vocab_size) Copy link Collaborator aarnphm Feb 25, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment should it be Suggested change if hasattr ( self , 'token_bitmask' ) and self . token_bitmask is not None : new_processor . token_bitmask = xgr . allocate_token_bitmask ( self . batch_size , self . config . vocab_size ) if hasattr ( self , 'token_bitmask' ) and self . token_bitmask is not None : new_processor . token_bitmask = self . token_bitmask Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions aarnphm approved these changes Feb 25, 2025 View reviewed changes Copy link Collaborator aarnphm left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment one tiny comment, if it passes the tests then LGTM. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator aarnphm commented Feb 25, 2025 @sethkimmel3 there are a few pre-commit problem can you fix this? thanks. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . sethkimmel3 added 5 commits February 25, 2025 10:43 clone test … 4f8265e Signed-off-by: Seth Kimmel <[email protected]> replace deepcopy … fbe5acf Signed-off-by: Seth Kimmel <[email protected]> ruff and small tweak … bf10cbc Signed-off-by: Seth Kimmel <[email protected]> update … 2c1a699 Signed-off-by: Seth Kimmel <[email protected]> lint … 11b4114 Signed-off-by: Seth Kimmel <[email protected]> sethkimmel3 force-pushed the clone-test branch
from a19541b to 11b4114 Compare February 25, 2025 18:43 Copy link Collaborator aarnphm commented Feb 25, 2025 I cant update the title, but can you make it to [v0][Core] Use shared context to avoid copy overhead for offline engine otherwise I think this should be ready to bring out of draft All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . sethkimmel3 changed the title Replace xgrammar deepcopy [v0][Core] Use shared context to avoid copy overhead for offline engine Feb 25, 2025 sethkimmel3 marked this pull request as ready for review February 25, 2025 18:49 sethkimmel3 requested a review
from mgoin as a code owner February 25, 2025 18:49 Copy link Contributor Author sethkimmel3 commented Feb 25, 2025 Done and done @aarnphm ! All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mgoin changed the title [v0][Core] Use shared context to avoid copy overhead for offline engine [v0][Core] Use xgrammar shared context to avoid copy overhead for offline engine Feb 25, 2025 mgoin approved these changes Feb 25, 2025 View reviewed changes mgoin added
the ready ONLY add when PR is ready to merge/full CI is needed label Feb 25, 2025 Copy link Collaborator aarnphm commented Feb 25, 2025 Thanks. Once all PR pass we can merge this All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Hide details View details DarkLight1337 merged commit e206b54 into vllm-project : main Feb 26, 2025 56 of 58 checks passed Uh oh! There was an error while loading. Please reload this page . Akshat-Tripathi pushed a commit
to krai/vllm
that referenced
this pull request Mar 3, 2025 [v0][Core] Use xgrammar shared context to avoid copy overhead for off… … 77ca08e …line engine ( vllm-project#13837 )
Signed-off-by: Seth Kimmel <[email protected]> lulmer pushed a commit
to lulmer/vllm
that referenced
this pull request Apr 7, 2025 [v0][Core] Use xgrammar shared context to avoid copy overhead for off… … c2d7cba …line engine ( vllm-project#13837 )
Signed-off-by: Seth Kimmel <[email protected]>
Signed-off-by: Louis Ulmer <[email protected]> ckhordiasma mentioned this pull request Apr 17, 2025 [do not merge] pr test for nm changes into 2.20 red-hat-data-services/vllm#107 Closed shreyankg pushed a commit
to shreyankg/vllm
that referenced
this pull request May 3, 2025 [v0][Core] Use xgrammar shared context to avoid copy overhead for off… … f4c2054 …line engine ( vllm-project#13837 )
Signed-off-by: Seth Kimmel <[email protected]>
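The change recorded in this entry (vllm-project#13837) swaps the copy.deepcopy of XGrammarLogitsProcessor for a cheap clone that shares the compiled grammar state between requests; the review above additionally suggests reusing the already-allocated token_bitmask instead of allocating a fresh one in the copy. The sketch below only illustrates that idea under assumed names (GrammarProcessor, compiled_ctx, clone); it is not the actual vLLM implementation.

import copy
import time


class GrammarProcessor:
    """Hypothetical stand-in for XGrammarLogitsProcessor, used only for illustration."""

    def __init__(self, compiled_ctx, vocab_size):
        # Heavy, read-only state (e.g. the compiled grammar): safe to share across requests.
        self.compiled_ctx = compiled_ctx
        self.vocab_size = vocab_size
        # Mutable per-request state is created lazily, so a clone stays cheap.
        self.matcher = None
        self.token_bitmask = None

    def clone(self):
        # Copy two references instead of deep-copying the whole object graph.
        return GrammarProcessor(self.compiled_ctx, self.vocab_size)


base = GrammarProcessor(compiled_ctx={"rules": list(range(5_000))}, vocab_size=32_000)

t0 = time.perf_counter()
clones = [base.clone() for _ in range(100)]
t1 = time.perf_counter()
deep_copies = [copy.deepcopy(base) for _ in range(100)]
t2 = time.perf_counter()

assert all(c.compiled_ctx is base.compiled_ctx for c in clones)
print(f"clone: {(t1 - t0) * 1e3:.2f} ms, deepcopy: {(t2 - t1) * 1e3:.2f} ms")

The point is that the expensive, read-only members are shared by reference, so adding thousands of requests to the engine no longer pays a deep copy per request.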
|
2025-09-07 17:52:25
|
6a417b8600d4d1e57698a91b71a38446e8fc5c45
|
https://github.com/vllm-project/vllm/pull/13589
| false | false | false | true |
TEST: test, test, CI
|
Copy link Contributor ajayvohra2005 commented Feb 20, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . This fixes a Neuron specific performance issue. Without this fix, Neuron performance degrades quickly when number of concurrent requests >= max_num_seqs . Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions fix neuron performance issue 3aaf6a3 Copy link github-actions bot commented Feb 20, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator WoosukKwon commented Feb 20, 2025 cc @liangfu All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . liangfu approved these changes Feb 20, 2025 View reviewed changes Copy link Contributor liangfu left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment thanks for the fix Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Hide details View details WoosukKwon merged commit 6a417b8 into vllm-project : main Feb 20, 2025 19 of 20 checks passed Uh oh! There was an error while loading. Please reload this page . Akshat-Tripathi pushed a commit
to krai/vllm
that referenced
this pull request Mar 3, 2025 fix neuron performance issue ( vllm-project#13589 ) 6b81301 lulmer pushed a commit
to lulmer/vllm
that referenced
this pull request Apr 7, 2025 fix neuron performance issue ( vllm-project#13589 ) … 353aced Signed-off-by: Louis Ulmer <[email protected]> ckhordiasma mentioned this pull request Apr 17, 2025 [do not merge] pr test for nm changes into 2.20 red-hat-data-services/vllm#107 Closed shreyankg pushed a commit
to shreyankg/vllm
that referenced
this pull request May 3, 2025 fix neuron performance issue ( vllm-project#13589 ) 500b058 liangfu mentioned this pull request May 14, 2025 Remove pre-emption logic for Neuron aws-neuron/upstreaming-to-vllm#17 Closed
|
2025-09-07 17:52:28
|
0d243f2a54fbd1c56da8a571f0899c30b6aba5d9
|
https://github.com/vllm-project/vllm/pull/13577
| false | true | false | true |
PERF: latency | TEST: test, CI, CI
|
Copy link Contributor divakar-amd commented Feb 20, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . Found better configs when comparing with rocm fork. The PR serves 2 purposes: Update with better config setting Maintain same configs b/w upstream and rocm fork Offline-latency numbers (sec) Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions update mixtral8x7B specific moe config bs perf … 44dd275 Signed-off-by: Divakar Verma <[email protected]> Copy link github-actions bot commented Feb 20, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . divakar-amd mentioned this pull request Feb 20, 2025 resolve configs diff for mixtral8x7B ROCm/vllm#437 Merged DarkLight1337 approved these changes Feb 20, 2025 View reviewed changes DarkLight1337 enabled auto-merge (squash) February 20, 2025 02:20 github-actions bot added
the ready ONLY add when PR is ready to merge/full CI is needed label Feb 20, 2025 Hide details View details DarkLight1337 merged commit 0d243f2 into vllm-project : main Feb 20, 2025 61 checks passed Uh oh! There was an error while loading. Please reload this page . xjpang pushed a commit
to xjpang/vllm
that referenced
this pull request Feb 20, 2025 [ROCm][MoE] mi300 mixtral8x7B perf for specific BS ( vllm-project#13577 ) … 1d993c1 Signed-off-by: Divakar Verma <[email protected]> Akshat-Tripathi pushed a commit
to krai/vllm
that referenced
this pull request Mar 3, 2025 [ROCm][MoE] mi300 mixtral8x7B perf for specific BS ( vllm-project#13577 ) … f684038 Signed-off-by: Divakar Verma <[email protected]> lulmer pushed a commit
to lulmer/vllm
that referenced
this pull request Apr 7, 2025 [ROCm][MoE] mi300 mixtral8x7B perf for specific BS ( vllm-project#13577 ) … 2749bea Signed-off-by: Divakar Verma <[email protected]>
Signed-off-by: Louis Ulmer <[email protected]> shreyankg pushed a commit
to shreyankg/vllm
that referenced
this pull request May 3, 2025 [ROCm][MoE] mi300 mixtral8x7B perf for specific BS ( vllm-project#13577 ) … 439c0ce Signed-off-by: Divakar Verma <[email protected]>
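For context on the entry above (vllm-project#13577): the PR only replaces tuned entries in a per-batch-size fused-MoE Triton config table for Mixtral 8x7B on MI300. The snippet below is a simplified, assumed illustration of how such a table keyed by batch size can be consulted; the file name in the comment and all numeric values are placeholders, not the tuned configs from the PR.

import json

# Assumed shape of a fused-MoE config file (e.g. something like
# "E=8,N=14336,device_name=AMD_Instinct_MI300X.json"); every value here is a placeholder.
example_table = json.loads("""
{
  "8":   {"BLOCK_SIZE_M": 16, "BLOCK_SIZE_N": 64,  "BLOCK_SIZE_K": 128, "GROUP_SIZE_M": 1, "num_warps": 4, "num_stages": 2},
  "64":  {"BLOCK_SIZE_M": 32, "BLOCK_SIZE_N": 128, "BLOCK_SIZE_K": 128, "GROUP_SIZE_M": 8, "num_warps": 8, "num_stages": 2},
  "256": {"BLOCK_SIZE_M": 64, "BLOCK_SIZE_N": 128, "BLOCK_SIZE_K": 128, "GROUP_SIZE_M": 8, "num_warps": 8, "num_stages": 2}
}
""")


def pick_config(table: dict, m: int) -> dict:
    # Use the entry tuned for the batch size closest to the actual M.
    key = min(table, key=lambda k: abs(int(k) - m))
    return table[key]


print(pick_config(example_table, 48))  # falls back to the "64" entry

Updating the entries for specific batch sizes, as this PR does, therefore changes the launch parameters only for requests whose M lands near those keys.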
|
2025-09-07 17:52:30
|
4c822298981a8f7521492075ff72659985fc4c3f
|
https://github.com/vllm-project/vllm/pull/13365
| false | true | false | true |
PERF: speedup | TEST: test, CI, CI
|
Copy link Collaborator WoosukKwon commented Feb 17, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . This PR optimizes the N-gram matching algorithm by JIT compiling it with Numba. I've observed 20-30x speedup with large batch sizes: For ShareGPT benchmark with 5K requests, the cumulative overhead reduces from 54.3 sec to 1.9 sec, which is ~2.5% of the entire running time. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 🚀 5 njhill, ywang96, LiuXiaoxuanPKU, michaelfeil, and mgoin reacted with rocket emoji All reactions 🚀 5 reactions WoosukKwon added 9 commits February 15, 2025 12:54 [V1] Get input tokens from scheduler … 8406f11 Signed-off-by: Woosuk Kwon <[email protected]> fix … 0399f09 Signed-off-by: Woosuk Kwon <[email protected]> Merge branch 'main' into v1-scheduler-input 960964a fix … c54ff6c Signed-off-by: Woosuk Kwon <[email protected]> Merge branch 'main' into v1-scheduler-input aa8ae69 comment … c833429 Signed-off-by: Woosuk Kwon <[email protected]> [V1][Spec decode] Move drafter to model runner … b42a16f Signed-off-by: Woosuk Kwon <[email protected]> Merge branch 'main' into v1-spec-decode 5f13604 [V1][Spec Decode] Optimize N-gram matching with Numba … 490df6d Signed-off-by: Woosuk Kwon <[email protected]> Copy link github-actions bot commented Feb 17, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added ci/build v1 labels Feb 17, 2025 WoosukKwon added 4 commits February 17, 2025 11:18 Merge branch 'main' into v1-spec-decode 58e0856 Merge branch 'v1-spec-decode' into v1-spec-opt 85afbe6 Merge branch 'main' into v1-spec-opt 81456ab update … c632ad4 Signed-off-by: Woosuk Kwon <[email protected]> WoosukKwon marked this pull request as ready for review February 17, 2025 23:49 WoosukKwon requested review from robertgshaw2-redhat , njhill , ywang96 , comaniac and alexm-redhat as code owners February 17, 2025 23:49 WoosukKwon added
the ready ONLY add when PR is ready to merge/full CI is needed label Feb 17, 2025 Copy link Collaborator Author WoosukKwon commented Feb 17, 2025 • edited Loading Uh oh! There was an error while loading. Please reload this page . cc @LiuXiaoxuanPKU This PR is ready. Could you please take a look? All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . WoosukKwon added 4 commits February 17, 2025 15:54 minor … 524af01 Signed-off-by: Woosuk Kwon <[email protected]> Pin numba version … ca4458d Signed-off-by: Woosuk Kwon <[email protected]> Merge branch 'main' into v1-spec-opt 11cceb4 Initialize drafter only for last rank … 8de56ec Signed-off-by: Woosuk Kwon <[email protected]> LiuXiaoxuanPKU approved these changes Feb 18, 2025 View reviewed changes Copy link Collaborator LiuXiaoxuanPKU left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment LGTM, thanks! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Hide details View details WoosukKwon merged commit 4c82229 into main Feb 18, 2025 57 of 71 checks passed Uh oh! There was an error while loading. Please reload this page . WoosukKwon deleted the v1-spec-opt branch February 18, 2025 21:20 mgoin reviewed Feb 18, 2025 View reviewed changes requirements-common.txt @@ -1,6 +1,7 @@ psutil sentencepiece # Required for LLaMA tokenizer. numpy < 2.0.0 numba == 0.60.0 # v0.61 doesn't support Python 3.9. Required for N-gram speculative decoding. Copy link Member mgoin Feb 18, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Shouldn't this be in requirements-cuda.txt rather than common? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Collaborator Author WoosukKwon Feb 18, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Oh I'm ok with either; I just thought it would be eventually used by others as well. Please feel free to submit a PR to move it to requirements-cuda.txt and probably requirements-rocm.txt . Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Contributor michaelfeil commented Feb 19, 2025 Very excited about this! 👍 1 WoosukKwon reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Collaborator Author WoosukKwon commented Feb 19, 2025 @michaelfeil Thanks! Happy to see you again :) We still have some headroom for performance: #13498 Please let us know if you are interested in working on this. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . xjpang pushed a commit
to xjpang/vllm
that referenced
this pull request Feb 20, 2025 [V1][Spec Decode] Optimize N-gram matching with Numba ( vllm-project#1… … 0c8d213 …3365 )
Signed-off-by: Woosuk Kwon <[email protected]> Akshat-Tripathi pushed a commit
to krai/vllm
that referenced
this pull request Mar 3, 2025 [V1][Spec Decode] Optimize N-gram matching with Numba ( vllm-project#1… … 1104f29 …3365 )
Signed-off-by: Woosuk Kwon <[email protected]> lulmer pushed a commit
to lulmer/vllm
that referenced
this pull request Apr 7, 2025 [V1][Spec Decode] Optimize N-gram matching with Numba ( vllm-project#1… … 3b3b1db …3365 )
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Louis Ulmer <[email protected]> shreyankg pushed a commit
to shreyankg/vllm
that referenced
this pull request May 3, 2025 [V1][Spec Decode] Optimize N-gram matching with Numba ( vllm-project#1… … 0497603 …3365 )
Signed-off-by: Woosuk Kwon <[email protected]>
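The entry above (vllm-project#13365) JIT-compiles the n-gram draft-token lookup with Numba (hence the numba==0.60.0 pin discussed in the review). Below is a small sketch of that kind of matcher with an assumed signature; the real vLLM function lives in the V1 spec-decode code and differs in its details.

import numpy as np
from numba import njit


@njit(cache=True)
def ngram_propose(tokens, n, k):
    """Return up to k draft tokens that followed the most recent earlier
    occurrence of the last n tokens, or an empty array if there is none."""
    total = tokens.shape[0]
    empty = np.empty(0, dtype=tokens.dtype)
    if total < n + 1:
        return empty
    # Scan backwards so the most recent match wins.
    for start in range(total - n - 1, -1, -1):
        match = True
        for j in range(n):
            if tokens[start + j] != tokens[total - n + j]:
                match = False
                break
        if match:
            end = min(start + n + k, total)
            return tokens[start + n:end].copy()
    return empty


tokens = np.array([5, 6, 7, 8, 9, 5, 6, 7], dtype=np.int64)
print(ngram_propose(tokens, 3, 2))  # -> [8 9]

The first call triggers compilation; subsequent calls run machine code, which is where the 20-30x speedup at large batch sizes reported in the PR comes from.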
|
2025-09-07 17:52:34
|
30172b4947c52890b808c6da3a6c7580f55cbb74
|
https://github.com/vllm-project/vllm/pull/13244
| false | false | false | true |
TEST: test, test, test
|
Copy link Member njhill commented Feb 13, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . Move the current SamplingMetadata object to a field in the persistent batch, updated only when the batch changes rather than constructed every step Keep input_batch.req_ids sized to the number of requests in the batch, so that anywhere that iterates over it doesn't need to slice (copy) the list or keep track of the separate request count. It is still updated in-place Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 WoosukKwon reacted with thumbs up emoji All reactions 👍 1 reaction njhill requested review from WoosukKwon , robertgshaw2-redhat , ywang96 , comaniac and alexm-redhat as code owners February 13, 2025 23:29 Copy link github-actions bot commented Feb 13, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the v1 label Feb 13, 2025 njhill commented Feb 13, 2025 View reviewed changes vllm/v1/worker/gpu_input_batch.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . Copy link mergify bot commented Feb 14, 2025 This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @njhill . https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
the needs-rebase label Feb 14, 2025 [V1] Optimize handling of sampling metadata and req_ids list … 7d6ee8f - Move SamplingMetadata to a field in the persistent batch, updated only when the batch changes rather than constructed every step
- Keep input_batch.req_ids sized to the number of requests in the batch, so that anywhere that iterates over it doesn't need to slice (copy) the list or keep track of the separate request count. It is still updated in-place
Signed-off-by: Nick Hill <[email protected]> njhill force-pushed the sampler-streamline branch
from 2bcf20f to 7d6ee8f Compare February 14, 2025 16:27 mergify bot removed
the needs-rebase label Feb 14, 2025 Copy link Member Author njhill commented Feb 14, 2025 @WoosukKwon this is the first step, I am working on follow-on simplification for the penalty parameters, etc. 👍 1 WoosukKwon reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . WoosukKwon self-assigned this Feb 14, 2025 njhill added
the ready ONLY add when PR is ready to merge/full CI is needed label Feb 14, 2025 Copy link Member Author njhill commented Feb 14, 2025 @WoosukKwon apologies, I am looking into the test failure. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . don't mutate "constant" sampling metadata tensors … 37d1f98 Signed-off-by: Nick Hill <[email protected]> Copy link Member Author njhill commented Feb 14, 2025 @WoosukKwon the test failure should be fixed now... the shared apply penalties code was doing in-place unsqueezes on the sampling penalty tensors - which I think is a bad thing to do but didn't cause a problem before because we were passing new slices every step. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link mergify bot commented Feb 14, 2025 This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @njhill . https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mergify bot added
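The test failure described above came from in-place unsqueezes on penalty tensors that are now shared across steps rather than re-sliced each step. A tiny illustration of why an in-place view change on a shared tensor is risky (plain torch usage, not the actual apply-penalties code):

import torch

penalties = torch.arange(4.0)        # imagine this is cached sampling metadata
holder = penalties                   # another reference to the same tensor

out = penalties.unsqueeze(0)         # out-of-place: new view, original keeps shape (4,)
assert penalties.shape == (4,) and out.shape == (1, 4)

penalties.unsqueeze_(0)              # in-place: every holder now sees shape (1, 4)
assert holder.shape == (1, 4)        # harmless when a fresh slice is passed each step, not when shared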
the needs-rebase label Feb 14, 2025 Merge remote-tracking branch 'origin/main' into sampler-streamline … f354b07 # Conflicts:
# vllm/v1/worker/gpu_input_batch.py mergify bot removed
the needs-rebase label Feb 15, 2025 Copy link Collaborator WoosukKwon commented Feb 15, 2025 Hi @njhill , do you mind if we merge #12193 first and review this PR? I'd like to prioritize the spec decode PR as it already got rebased many many times. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Copy link Member Author njhill commented Feb 15, 2025 @WoosukKwon that's fine with me. ❤️ 1 WoosukKwon reacted with heart emoji All reactions ❤️ 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . njhill added 4 commits February 14, 2025 21:49 simplify sampling metadata … 602d3b6 Signed-off-by: Nick Hill <[email protected]> Merge remote-tracking branch 'refs/remotes/origin/main' into sampler-… … 80eae4e …streamline
Signed-off-by: Nick Hill <[email protected]>
# Conflicts:
# tests/v1/worker/test_gpu_input_batch.py
# vllm/v1/sample/sampler.py group stop_token_ids with min_tokens … 57cd611 Signed-off-by: Nick Hill <[email protected]> test updates … c7e2bfd Signed-off-by: Nick Hill <[email protected]> Copy link mergify bot commented Feb 16, 2025 This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @njhill . https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 5 hidden items Load more… mergify bot removed
the needs-rebase label Feb 18, 2025 Some more small list/tuple optimizations; fix linting … d246ce5 Signed-off-by: Nick Hill <[email protected]> njhill commented Feb 18, 2025 on vllm/v1/request.py (outdated) and vllm/v1/core/scheduler.py (outdated). Member Author njhill commented Feb 18, 2025 • edited @WoosukKwon I have now rebased. #13360 partially overlaps with this (e.g. I simplified some of the min_tokens handling in this one but have refactored completely in the other one based on the new abstraction). But I think it would be fine to get this in first and I can rebase the other one if you're ok with that. Small adjustment … 5e216c7 Signed-off-by: Nick Hill <[email protected]> njhill commented Feb 18, 2025 on vllm/v1/worker/gpu_model_runner.py. njhill commented Feb 18, 2025 on vllm/v1/worker/gpu_input_batch.py (outdated). Collaborator WoosukKwon commented Feb 18, 2025 @njhill I'm not sure it's worthwhile to change from [] to (). I did a microbenchmark (import added for completeness):

import time

N = 1024

x = []  # List
start = time.perf_counter()
for i in range(N):
    x.append([])
end = time.perf_counter()
print(f"list: {(end - start) * 1000:.3f} ms")

y = []  # Tuple
start = time.perf_counter()
for i in range(N):
    y.append(())
end = time.perf_counter()
print(f"tuple: {(end - start) * 1000:.3f} ms")

I find that adding 1024 (maximum number of requests in the batch) empty lists only takes 80-90 us. While using tuple reduces this time to 30-40 us, I think the 50 us gap (in the worst case) cannot justify the extra complexity here. When the batch size is 32, the gap becomes even smaller (7 us vs 2 us). WDYT? Fix rejection sampler test … b2a43ba Signed-off-by: Nick Hill <[email protected]> Member Author njhill commented Feb 18, 2025 @WoosukKwon I agree it's not worth any extra complexity. Just might as well use () where it doesn't otherwise make any difference to the code. Let me check and revert where such changes were made. Collaborator WoosukKwon commented Feb 18, 2025 @njhill I think changing List to Sequence itself is increasing complexity? After that, we need to consider whether it's a tuple or list. I'd prefer to keep using List and [] if the performance is the only concern. Revert change related to list vs tuple … 2fbc6e1 Signed-off-by: Nick Hill <[email protected]> Member Author njhill commented Feb 18, 2025 @WoosukKwon sure, let me revert those too.
I think mostly we don't need to consider the tuple/list difference because these are args or fields that would be considered read-only. All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Revert List->Sequence changes … 1b68e03 Signed-off-by: Nick Hill <[email protected]> Copy link Member Author njhill commented Feb 18, 2025 @WoosukKwon I need to fix up some of the gpu_model_runner tests, but I'll wait for your first review to make sure you are good with the changes overall before spending time on that. ❤️ 1 WoosukKwon reacted with heart emoji All reactions ❤️ 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . WoosukKwon reviewed Feb 18, 2025 View reviewed changes Copy link Collaborator WoosukKwon left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Amazing. Looks much cleaner! 😄 Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions vllm/v1/worker/gpu_model_runner.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/v1/core/scheduler.py Comment on lines +198 to +200 del request.spec_token_ids[num_scheduled_spec_tokens:] scheduled_spec_decode_tokens[request.request_id] = ( request.spec_token_ids [:num_scheduled_spec_tokens] ) request.spec_token_ids) Copy link Collaborator WoosukKwon Feb 18, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment What is this change for? Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Member Author njhill Feb 18, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment It avoids creating a new list, just trims the existing one down to num_scheduled_spec_tokens , since any later spec token ids are essentially discarded anyhow. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 WoosukKwon reacted with thumbs up emoji All reactions 👍 1 reaction Copy link Collaborator WoosukKwon Feb 18, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Got it! Maybe worth a comment. Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 njhill reacted with thumbs up emoji All reactions 👍 1 reaction vllm/v1/sample/metadata.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/v1/worker/gpu_input_batch.py Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/v1/worker/gpu_input_batch.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . 
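The scheduler diff quoted just above trims request.spec_token_ids in place with del instead of storing a sliced copy; a tiny illustration of the difference (the values here are made up):

spec_token_ids = [11, 12, 13, 14, 15]
num_scheduled_spec_tokens = 3

sliced = spec_token_ids[:num_scheduled_spec_tokens]   # allocates a new list
del spec_token_ids[num_scheduled_spec_tokens:]        # trims the same list in place

assert sliced == spec_token_ids == [11, 12, 13]
assert sliced is not spec_token_ids                   # the slice was an extra copy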
vllm/v1/worker/gpu_input_batch.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . vllm/v1/worker/gpu_input_batch.py Outdated Show resolved Hide resolved Uh oh! There was an error while loading. Please reload this page . njhill added 2 commits February 18, 2025 07:51 Address review comments … 28a17ae Signed-off-by: Nick Hill <[email protected]> Fix up gpu_model_runner tests … 9250721 Signed-off-by: Nick Hill <[email protected]> WoosukKwon approved these changes Feb 18, 2025 View reviewed changes Copy link Collaborator WoosukKwon left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment LGTM! Very nice simplification! Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Add comment … ce3c3f4 Signed-off-by: Nick Hill <[email protected]> Hide details View details njhill merged commit 30172b4 into vllm-project : main Feb 18, 2025 44 checks passed Uh oh! There was an error while loading. Please reload this page . njhill deleted the sampler-streamline branch February 18, 2025 20:15 xjpang pushed a commit
to xjpang/vllm
that referenced
this pull request Feb 20, 2025 [V1] Optimize handling of sampling metadata and req_ids list ( vllm-pr… … d54a1e9 …oject#13244 )
Signed-off-by: Nick Hill <[email protected]> Akshat-Tripathi pushed a commit
to krai/vllm
that referenced
this pull request Mar 3, 2025 [V1] Optimize handling of sampling metadata and req_ids list ( vllm-pr… … d9b7062 …oject#13244 )
Signed-off-by: Nick Hill <[email protected]> lulmer pushed a commit
to lulmer/vllm
that referenced
this pull request Apr 7, 2025 [V1] Optimize handling of sampling metadata and req_ids list ( vllm-pr… … be846f4 …oject#13244 )
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: Louis Ulmer <[email protected]> shreyankg pushed a commit
to shreyankg/vllm
that referenced
this pull request May 3, 2025 [V1] Optimize handling of sampling metadata and req_ids list ( vllm-pr… … ff9b783 …oject#13244 )
Signed-off-by: Nick Hill <[email protected]>
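The entry above (vllm-project#13244) moves SamplingMetadata onto the persistent batch and rebuilds it only when the batch composition changes, instead of constructing it on every step. The sketch below shows that caching pattern with assumed, simplified classes; it is not the actual InputBatch/SamplingMetadata code.

from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class SamplingMetadataSketch:
    # Stand-in for the real per-batch sampling tensors.
    temperatures: List[float]
    req_ids: List[str]


@dataclass
class PersistentBatchSketch:
    req_ids: List[str] = field(default_factory=list)
    temperatures: Dict[str, float] = field(default_factory=dict)
    _sampling_metadata: Optional[SamplingMetadataSketch] = None

    def add_request(self, req_id: str, temperature: float) -> None:
        self.req_ids.append(req_id)
        self.temperatures[req_id] = temperature
        self._sampling_metadata = None  # batch changed -> rebuild lazily

    def remove_request(self, req_id: str) -> None:
        self.req_ids.remove(req_id)
        del self.temperatures[req_id]
        self._sampling_metadata = None

    @property
    def sampling_metadata(self) -> SamplingMetadataSketch:
        # Rebuilt only when the batch composition changed, not every step.
        if self._sampling_metadata is None:
            self._sampling_metadata = SamplingMetadataSketch(
                temperatures=[self.temperatures[r] for r in self.req_ids],
                req_ids=list(self.req_ids),
            )
        return self._sampling_metadata


batch = PersistentBatchSketch()
batch.add_request("a", 0.7)
batch.add_request("b", 1.0)
m1 = batch.sampling_metadata
m2 = batch.sampling_metadata              # same cached object while the batch is unchanged
assert m1 is m2
batch.remove_request("a")
assert batch.sampling_metadata is not m1  # rebuilt after the batch changed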
|
2025-09-07 17:52:38
|
5e5c8e091eacc16672a0a8265eb5cb0ece85d24b
|
https://github.com/vllm-project/vllm/pull/13236
| false | true | true | true |
PERF: TTFT, benchmark_serving, benchmark_serving | SERVING: Serving, Frontend, Frontend | TEST: test, test, Test
|
Copy link Member mgoin commented Feb 13, 2025 • edited by github-actions bot Loading Uh oh! There was an error while loading. Please reload this page . For GPTQMarlin and AWQMarlin it seems the moe_wna16 kernel is faster for experts with dozens of experts, based on testing Qwen/Qwen1.5-MoE-A2.7B-Chat-GPTQ-Int4 (60 experts), TechxGenus/DeepSeek-Coder-V2-Lite-Instruct-AWQ (64 experts), and cognitivecomputations/DeepSeek-R1-AWQ (256 experts) cc @ElizaWszola @dsikka Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 Godofnothing reacted with thumbs up emoji All reactions 👍 1 reaction Use moe_wna16 kernel by default for MoEs with many experts … bb27d51 Signed-off-by: mgoin <[email protected]> mgoin requested review from robertgshaw2-redhat and tlrmchlsmth as code owners February 13, 2025 20:03 Copy link github-actions bot commented Feb 13, 2025 👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge. 🚀 All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Fixes … 4ac97e1 Signed-off-by: mgoin <[email protected]> Copy link Member Author mgoin commented Feb 13, 2025 @jinzhen-lin please see this PR. After this, I think we could remove moe_wna16 as a larger quant method and just use it as a kernel. What do you think? All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . Fix type issue … 3e07d17 Signed-off-by: mgoin <[email protected]> Copy link Contributor dsikka commented Feb 13, 2025 • edited Loading Uh oh! There was an error while loading. Please reload this page . Thanks for taking this on. Please run and/or update the weight_loading_large tests . I believe all the tests were skipped even when enabled when I last ran them last week so just something to potentially look out for. 👍 1 mgoin reacted with thumbs up emoji All reactions 👍 1 reaction Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . mgoin added
the ready ONLY add when PR is ready to merge/full CI is needed label Feb 13, 2025 Fix weight-loading A100 test … c13deb5 Signed-off-by: mgoin <[email protected]> mgoin requested a review
from youkaichao as a code owner February 14, 2025 16:05 Copy link Member Author mgoin commented Feb 14, 2025 I fixed and ran the "Weight Loading Multiple GPU Test - Large Models", however it is failing due to unrelated compressedtensors dtype support issues. I think I can fix this by expanding the moe_wna16 method to compressedtensorsmoe, but will do in a followup All reactions Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . tlrmchlsmth approved these changes Feb 14, 2025 View reviewed changes Copy link Collaborator tlrmchlsmth left a comment There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Good to land. There is some circular import "weirdness" but it can wait for a future refactor along the lines of this RFC #8913 Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 mgoin reacted with thumbs up emoji All reactions 👍 1 reaction mgoin added
the force-merge label Feb 14, 2025 dsikka reviewed Feb 14, 2025 View reviewed changes tests/weight_loading/test_weight_loading.py @@ -12,7 +12,7 @@ "robertgshaw2/zephyr-7b-beta-channelwise-gptq") REVISION = os.environ.get("REVISION", "main") QUANTIZATION = os.environ.get("QUANTIZATION", "gptq_marlin") MIN_CAPABILITY = os.environ.get("MIN_CAPABILITY", " 89 ") MIN_CAPABILITY = os.environ.get("MIN_CAPABILITY", " 80 ") Copy link Contributor dsikka Feb 14, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment ah good catch Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions dsikka reviewed Feb 14, 2025 View reviewed changes vllm/model_executor/layers/quantization/gptq_marlin.py def __init__(self, weight_bits: int, group_size: int, desc_act: bool, is_sym: bool, lm_head_quantized: bool, dynamic: Dict[str, Dict[str, Union[int, bool]]], full_config: Dict[str, Any]) -> None: Copy link Contributor dsikka Feb 14, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment What is full_config? Can we add a comment Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . All reactions Copy link Contributor dsikka Feb 14, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Oh just the config dict, I see Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 mgoin reacted with thumbs up emoji All reactions 👍 1 reaction Copy link Member Author mgoin Feb 14, 2025 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment It is just the original config saved from from_config so we can forward to MoeWNA16Config Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . 👍 1 dsikka reacted with thumbs up emoji All reactions 👍 1 reaction mgoin added
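As described in this entry (vllm-project#13236), GPTQ-/AWQ-Marlin MoE layers now prefer the moe_wna16 kernel once a model has many experts, forwarding the original quantization config (the full_config discussed above) so a MoeWNA16Config can be built from it. The function below is only a rough sketch of that dispatch; the threshold constant, helper name, and return values are assumptions, not the exact vLLM logic.

from typing import Any, Dict

# Assumed cut-off; the PR only states the kernel wins for "dozens of experts"
# (60, 64, and 256 experts in the models it was tested on).
MANY_EXPERTS_THRESHOLD = 32


def choose_moe_method(num_experts: int, full_config: Dict[str, Any]) -> str:
    """Illustrative selector: prefer the moe_wna16 kernel for wide MoEs,
    keep the Marlin fused-MoE path for models with few experts."""
    if num_experts >= MANY_EXPERTS_THRESHOLD:
        # The original quant config is forwarded so a MoeWNA16-style config
        # can be constructed from it (mirroring the full_config argument above).
        return f"moe_wna16(bits={full_config.get('bits')}, group={full_config.get('group_size')})"
    return "marlin_fused_moe"


print(choose_moe_method(60, {"bits": 4, "group_size": 128}))  # -> moe_wna16(...)
print(choose_moe_method(8, {"bits": 4, "group_size": 128}))   # -> marlin_fused_moe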
the quantization label Feb 14, 2025 mgoin changed the title Use moe_wna16 kernel by default for MoEs with many experts [Quant][Perf] Use moe_wna16 kernel by default for MoEs with many experts Feb 14, 2025 Hide details View details simon-mo merged commit 5e5c8e0 into vllm-project : main Feb 14, 2025 35 of 37 checks passed Uh oh! There was an error while loading. Please reload this page . mgoin mentioned this pull request Feb 20, 2025 [Feature]: Add moe_wna16 kernel as a backend for CompressedTensorsWNA16MoEMethod #13575 Closed 1 task hongxiayang pushed a commit
to ROCm/vllm
that referenced
this pull request Feb 25, 2025 [MFM-2025-02-21] Merge main to llama fp8, DeepSeekV3 and PTPC-FP8 ( #445 ) … d7fefdf * [ROCM][AMD][TRITON] Halving warps number for fw_prefill to reduce spilling ( vllm-project#12713 )
Signed-off-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
* Refactor `Linear` handling in `TransformersModel` ( vllm-project#12727 )
Signed-off-by: Harry Mellor <[email protected]>
* [VLM] Add MLA with pure RoPE support for deepseek-vl2 models ( vllm-project#12729 )
* [Misc] Bump the compressed-tensors version ( vllm-project#12736 )
* [Model][Quant] Fix GLM, Fix fused module mappings for quantization ( vllm-project#12634 )
Signed-off-by: mgoin <[email protected]>
Signed-off-by: Kyle Sayers <[email protected]>
Co-authored-by: mgoin <[email protected]>
* [Doc] Update PR Reminder with link to Developer Slack ( vllm-project#12748 )
* [Bugfix] Fix OpenVINO model runner ( vllm-project#12750 )
* [V1][Misc] Shorten `FinishReason` enum and use constant strings ( vllm-project#12760 )
* [Doc] Remove performance warning for auto_awq.md ( vllm-project#12743 )
* [Bugfix] Fix 'ModuleNotFoundError: No module named 'intel_extension_for_pytorch'' for --tensor-parallel-size more than 1 ( vllm-project#12546 )
* [core][distributed] exact ray placement control ( vllm-project#12732 )
Signed-off-by: youkaichao <[email protected]>
* The code assumes WARP_SIZE to be equal to 32, which is not the case on ROCm ( #406 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* Merging PR vllm-project#12536 Merged via CLI script
* [Hardware][Intel-Gaudi] Enable FusedSDPA support for Intel Gaudi (HPU)
* Add: Support for Sparse24Bitmask Compressed Models
* [VLM] Use shared field to pass token ids to model
* [Docs] Drop duplicate [source] links
* [VLM] Qwen2.5-VL
* [VLM] Update compatibility with transformers 4.49
* [ROCm][Kernel] Using the correct warp_size value
* [Bugfix] Better FP8 supported defaults
* [Misc][Easy] Remove the space from the file name
* [Model] LoRA Support for Ultravox model ( vllm-project#11253 )
* [Bugfix] Fix the test_ultravox.py's license ( vllm-project#12806 )
Signed-off-by: Lu Fang <[email protected]>
* Improve `TransformersModel` UX ( vllm-project#12785 )
* [Misc] Remove duplicated DeepSeek V2/V3 model definition ( vllm-project#12793 )
* [Misc] Improve error message for incorrect pynvml ( vllm-project#12809 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Update w2 scale loading for GPTQMarlinMoE ( vllm-project#12757 )
* [Docs] Add Google Cloud Slides ( vllm-project#12814 )
* [Attention] Use FA3 for MLA on Hopper ( vllm-project#12807 )
Signed-off-by: Lucas Wilkinson <[email protected]>
* [misc] Reduce number of config file requests to HuggingFace ( vllm-project#12797 )
Signed-off-by: EC2 Default User <[email protected]>
Signed-off-by: <>
Co-authored-by: EC2 Default User <[email protected]>
* Update README.md 20250205_aiter ( #407 )
* Update README.md 20250205_aiter
* whitespace
* adding VLLM_USE_AITER=0 advice
* [Misc] Remove unnecessary decode call ( vllm-project#12833 )
* [Kernel] Make rotary_embedding ops more flexible with input shape ( vllm-project#12777 )
* [torch.compile] PyTorch 2.6 and nightly compatibility ( vllm-project#12393 )
Signed-off-by: youkaichao <[email protected]>
* [Doc] double quote cmake package in build.inc.md ( vllm-project#12840 )
* [Bugfix] Fix unsupported FA version check for Turing GPU ( vllm-project#12828 )
* [V1] LoRA Support ( vllm-project#10957 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* Add Bamba Model ( vllm-project#10909 )
Signed-off-by: Yu Chin Fabian Lim <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
* [MISC] Check space in the file names in the pre commit checks ( vllm-project#12804 )
Signed-off-by: Lu Fang <[email protected]>
* [misc] Revert # 12833 ( vllm-project#12857 )
Signed-off-by: <>
Co-authored-by: EC2 Default User <[email protected]>
* [Bugfix] FA2 illegal memory access ( vllm-project#12848 )
* Make vllm compatible with verl ( vllm-project#12824 )
Co-authored-by: zhangshulai <[email protected]>
* [Bugfix] Missing quant_config in deepseek embedding layer ( vllm-project#12836 )
* Prevent unecessary requests to huggingface hub ( vllm-project#12837 )
* [MISC][EASY] Break check file names into entry and args in the pre-commit hooks ( vllm-project#12880 )
Signed-off-by: Lu Fang <[email protected]>
* [Misc] Remove unnecessary detokenization in multimodal processing ( vllm-project#12868 )
* PR vllm-project#12718 ( vllm-project#12718 )
* [V1] Logprobs and prompt logprobs support ( vllm-project#9880 )
This PR is adding support for sample logprobs & prompt logprobs to vLLM v1.
New behavior:
- During model execution, model runner computes sample logprobs (if user-provided logprobs setting is not None) and prompt logprobs (if user-provided prompt_logprobs setting is not None). For both sample and prompt logprobs, the engine core returns 3 vectors: token ids, token logprob values, token ranks. Ranks reflect tokens' 1-indexed positions in the vocabulary vector after sorting the vocabulary by log probability in descending order.
- In scheduler.update_from_output(), sample and prompt logprobs are incorporated into the EngineCoreOutput data structure which is transferred to the engine client. If multiprocessing is enabled, then sample and prompt logprobs will be (de)serialized when the EngineCoreOutput data structure is (de)serialized.
- During output processing, the LogprobsProcessor transforms the triplet of token ids, token logprobs values, and token ranks into the OpenAI-compatible List[Dict[token id,Logprob]] format (for sample and prompt logprobs respectively.)
- Each Logprob instance (whether sample- or prompt-) consists of a token's log-probability, rank, and detokenized string representation. Note that logprob detokenization is handled by the LogprobsProcessor not the detokenizer.
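A minimal sketch of the triplet-to-dictionary conversion described above; the Logprob dataclass and the detokenize helper are hypothetical stand-ins, not vLLM's actual classes:
```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Logprob:
    logprob: float       # token log-probability
    rank: int            # 1-indexed rank in the sorted vocabulary
    decoded_token: str   # detokenized string representation


def detokenize(token_id: int) -> str:
    # Stand-in for tokenizer.decode([token_id]) in a real system.
    return f"<tok_{token_id}>"


def to_openai_logprobs(
    token_ids: List[List[int]],
    logprob_values: List[List[float]],
    ranks: List[List[int]],
) -> List[Dict[int, Logprob]]:
    """Fold the per-position (ids, logprobs, ranks) triplet into
    one {token_id: Logprob} dict per generated position."""
    return [
        {
            tid: Logprob(logprob=lp, rank=rk, decoded_token=detokenize(tid))
            for tid, lp, rk in zip(ids, lps, rks)
        }
        for ids, lps, rks in zip(token_ids, logprob_values, ranks)
    ]
```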
Signed-off-by: Andrew Feldman <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Co-authored-by: [email protected] <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
* [ROCm] [Feature] [Doc] [Dockerfile] [BugFix] Support Per-Token-Activation Per-Channel-Weight FP8 Quantization Inferencing ( vllm-project#12501 )
* fix rocm get_device name for moe configs ( #359 )
* fix rocm get_device name
use 'market_name'
hard-code names for mi308 & mi300
* use gfx and num_CU for device name
* using market_name
* rename MI325_OAM to MI325X
* rm (duplicate) MI300X_OAM
* rename mi308
* [V1] LM Eval With Streaming Integration Tests ( vllm-project#11590 )
* [Bugfix] Fix disagg hang caused by the prefill and decode communication issues ( vllm-project#12723 )
Signed-off-by: Lu Fang <[email protected]>
* [V1][Minor] Remove outdated comment ( vllm-project#12928 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [V1] Move KV block hashes from Request to KVCacheManager ( vllm-project#12922 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Bugfix] Fix Qwen2_5_VLForConditionalGeneration packed_modules_mapping ( vllm-project#12905 )
* [Misc] Fix typo in the example file ( vllm-project#12896 )
Signed-off-by: Zhao Ke <[email protected]>
* [Bugfix] Fix multi-round chat error when mistral tokenizer is used ( vllm-project#12859 )
Signed-off-by: Zifei Tong <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
* [bugfix] respect distributed_executor_backend in world_size=1 ( vllm-project#12934 )
Signed-off-by: youkaichao <[email protected]>
* [Misc] Add offline test for disaggregated prefill ( vllm-project#12418 )
* [V1][Minor] Move cascade attn logic outside _prepare_inputs ( vllm-project#12943 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Build] Make pypi install work on CPU platform ( vllm-project#12874 )
* [Hardware][Intel-Gaudi] Enable long-contexts + LoRA support for Intel Gaudi ( vllm-project#12812 )
Signed-off-by: Sanju C Sudhakaran <[email protected]>
* [misc] Add LoRA to benchmark_serving ( vllm-project#12898 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* [Misc] Log time consumption on weight downloading ( vllm-project#12926 )
* [CI] Resolve transformers-neuronx version conflict ( vllm-project#12925 )
* [Doc] Correct HF repository for TeleChat2 models ( vllm-project#12949 )
* [Misc] Add qwen2.5-vl BNB support ( vllm-project#12944 )
* [CI/Build] Auto-fix Markdown files ( vllm-project#12941 )
* [Bugfix] Remove unused seq_group_metadata_list from ModelInputForGPU ( vllm-project#12935 )
Signed-off-by: Shangming Cai <[email protected]>
* [bugfix] fix early import of flash attention ( vllm-project#12959 )
Signed-off-by: youkaichao <[email protected]>
* [VLM] Merged multi-modal processor for GLM4V ( vllm-project#12449 )
Signed-off-by: Jee Jee Li <[email protected]>
* [V1][Minor] Remove outdated comment ( vllm-project#12968 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [RFC] [Mistral] FP8 format ( vllm-project#10130 )
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]>
* [V1] Cache `uses_mrope` in GPUModelRunner ( vllm-project#12969 )
* [core] port pynvml into vllm codebase ( vllm-project#12963 )
Signed-off-by: youkaichao <[email protected]>
* [MISC] Always import version library first in the vllm package ( vllm-project#12979 )
Signed-off-by: Lu Fang <[email protected]>
* [core] improve error handling when wake up from sleep mode ( vllm-project#12981 )
Signed-off-by: youkaichao <[email protected]>
* [core][rlhf] add colocate example for RLHF ( vllm-project#12984 )
Signed-off-by: youkaichao <[email protected]>
* [V1] Use msgpack for core request serialization ( vllm-project#12918 )
Signed-off-by: Nick Hill <[email protected]>
* Check if selected backend is None in get_attn_backend_cls() ( vllm-project#12975 )
Signed-off-by: Yuan Tang <[email protected]>
* [core] fix sleep mode and pytorch checkpoint compatibility ( vllm-project#13001 )
Signed-off-by: youkaichao <[email protected]>
* [Doc] Add link to tool_choice tracking issue in tool_calling.md ( vllm-project#13003 )
Signed-off-by: Yuan Tang <[email protected]>
* [misc] Add retries with exponential backoff for HF file existence check ( vllm-project#13008 )
* [Bugfix] Clean up and fix multi-modal processors ( vllm-project#13012 )
Signed-off-by: DarkLight1337 <[email protected]>
* Fix seed parameter behavior in vLLM ( vllm-project#13007 )
Signed-off-by: மனோஜ்குமார் பழனிச்சாமி <[email protected]>
* Fixing the output formatting ( #414 )
* [Model] Ultravox Model: Support v0.5 Release ( vllm-project#12912 )
Signed-off-by: Farzad Abdolhosseini <[email protected]>
* [misc] Fix setup.py condition to avoid AMD from being mistaken with CPU ( vllm-project#13022 )
Signed-off-by: kevin <[email protected]>
* [V1][Minor] Move scheduler outputs to a separate file ( vllm-project#13062 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Docs] Annouce Meta Meetup ( vllm-project#13065 )
Signed-off-by: simon-mo <[email protected]>
* [Bugfix] Support missing tool parameters in mistral tokenizer ( vllm-project#12884 )
Signed-off-by: Florian Greinacher <[email protected]>
* [Benchmark] Add BurstGPT to benchmark_serving ( vllm-project#13063 )
Signed-off-by: Woosuk Kwon <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Core] Don't do platform detection at import time ( vllm-project#12933 )
Signed-off-by: Russell Bryant <[email protected]>
* [Misc] LoRA - Refactor Punica ops tests ( vllm-project#12970 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* [Bugfix]: Reasoning output bug according to the chat template change ( vllm-project#13025 )
Signed-off-by: Ce Gao <[email protected]>
* [V1][Metrics] Add GPU prefix cache hit rate % gauge ( vllm-project#12592 )
* [executor] init `local_rank` as device index ( vllm-project#13027 )
Signed-off-by: Mengqing Cao <[email protected]>
* [ROCm] Using a more precise memory profiling ( vllm-project#12624 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* [Build] Fix cuda link target of cumem_allocator in CPU env ( vllm-project#12863 )
Signed-off-by: YuhongGuo <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
* [Platform] add pre_register_and_update function ( vllm-project#12432 )
Signed-off-by: wangxiyuan <[email protected]>
* [Bugfix] fix flaky test ( vllm-project#13089 )
Signed-off-by: மனோஜ்குமார் பழனிச்சாமி <[email protected]>
* [V1][Metrics] Add several request timing histograms ( vllm-project#12644 )
Signed-off-by: Mark McLoughlin <[email protected]>
* Set `torch_dtype` in `TransformersModel` ( vllm-project#13088 )
Signed-off-by: Harry Mellor <[email protected]>
* [Misc] Fix typo at comments at metrics.py ( vllm-project#13024 )
* [Bugfix] Do not use resource module on Windows ( vllm-project#12858 ) ( vllm-project#13029 )
* [BugFix] Pop instead of del CUDA_VISIBLE_DEVICES ( vllm-project#12962 )
Signed-off-by: Hollow Man <[email protected]>
* Fix initializing GGUF weights for ColumnParallelLinear when using tensor parallel > 1 ( vllm-project#13023 )
* Add tuned moe config for qwen1.5_moe_A2.7B ( #398 )
* Add tuned moe config for qwen1.5_moe_A2.7B
* Add more sweep parameters on qwen2_moe
* Add tp = 1,2,4,8 after applying PR12838
* Rename config name by deleting "_OAM"
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
* [CI/Build][Bugfix] Fix CPU backend default threads num ( vllm-project#13077 )
* Removing non-existent parameter
* [Doc] Improve OpenVINO installation doc ( vllm-project#13102 )
Signed-off-by: Harry Mellor <[email protected]>
* [Bugfix] Guided decoding falls back to outlines when fails to import xgrammar ( vllm-project#12976 )
Signed-off-by: Yuan Tang <[email protected]>
* [Misc] Move pre-commit suggestion back to the end ( vllm-project#13114 )
Signed-off-by: Russell Bryant <[email protected]>
* [RFC][vllm-API] Support tokenizer registry for customized tokenizer in vLLM ( vllm-project#12518 )
Signed-off-by: Keyun Tong <[email protected]>
* [Model] IBM/NASA Prithvi Geospatial model ( vllm-project#12830 )
* [ci] Add more source file dependencies for some tests ( vllm-project#13123 )
Signed-off-by: <>
Co-authored-by: EC2 Default User <[email protected]>
* [Neuron][Kernel] Support Longer Sequences in NKI-based Flash PagedAttention and Improve Efficiency ( vllm-project#12921 )
Signed-off-by: Lingfan Yu <[email protected]>
* Bump helm/kind-action from 1.10.0 to 1.12.0 ( vllm-project#11612 )
* Bump actions/stale from 9.0.0 to 9.1.0 ( vllm-project#12462 )
* Bump helm/chart-testing-action from 2.6.1 to 2.7.0 ( vllm-project#12463 )
* Bump actions/setup-python from 5.3.0 to 5.4.0 ( vllm-project#12672 )
* Further reduce the HTTP calls to huggingface.co ( vllm-project#13107 )
* [Misc] AMD Build Improvements ( vllm-project#12923 )
* [Bug] [V1] Try fetching stop_reason from EngineOutput before checking the request ( vllm-project#13108 )
* [Bugfix] Fix num video tokens calculation for Qwen2-VL ( vllm-project#13148 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Frontend] Generate valid tool call IDs when using `tokenizer-mode=mistral` ( vllm-project#12332 )
* [Misc] Delete unused LoRA modules ( vllm-project#13151 )
* Introduce VLLM_CUDART_SO_PATH to allow users specify the .so path ( vllm-project#12998 )
Signed-off-by: Lu Fang <[email protected]>
* [CI/Build] Use mypy matcher for pre-commit CI job ( vllm-project#13162 )
Signed-off-by: Russell Bryant <[email protected]>
* Update Benchmark Profiling Scripts ( #417 )
* Update profiling benchmarks
* Fix linter errors
---------
Co-authored-by: AdrianAbeyta <[email protected]>
* [CORE] [QUANT] Support for GPTQModel's `dynamic` quantization per module override/control ( vllm-project#7086 )
* [Bugfix] Allow fallback to AWQ from AWQMarlin at per-layer granularity ( vllm-project#13119 )
* DS V2V3 fix for same file
* Lint
* updating manfiest ( #416 )
* [CI] Fix failing FP8 cpu offload test ( vllm-project#13170 )
Signed-off-by: mgoin <[email protected]>
* Aiter base ( #419 )
* Using upstream FA repo. Building aiter in the base docker image
* Renaming the file to match upstream naming
* [V1][Bugfix] Copy encoder input ids to fix set iteration issue during VLM abort ( vllm-project#13173 )
Signed-off-by: andoorve <[email protected]>
* [CI/Build] Ignore ruff warning up007 ( vllm-project#13182 )
Signed-off-by: Russell Bryant <[email protected]>
* [perf-benchmark] cleanup unused Docker images and volumes in H100 benchmark instance ( vllm-project#12706 )
* [NVIDIA] Support nvfp4 quantization ( vllm-project#12784 )
* [Bugfix][Example] Fix GCed profiling server for TPU ( vllm-project#12792 )
Signed-off-by: mgoin <[email protected]>
* [VLM] Implement merged multimodal processor for Mllama ( vllm-project#11427 )
* Simplify logic of locating CUDART so file path ( vllm-project#13203 )
Signed-off-by: Lu Fang <[email protected]>
* [Build] Automatically use the wheel of the base commit with Python-only build ( vllm-project#13178 )
* [Bugfix] deepseek_r1_reasoning_parser put reason content in wrong field in certain edge case ( vllm-project#13097 )
* [Frontend] Move CLI code into vllm.cmd package ( vllm-project#12971 )
* Allow Unsloth Dynamic 4bit BnB quants to work ( vllm-project#12974 )
* [CI/Build] Allow ruff to auto-fix some issues ( vllm-project#13180 )
Signed-off-by: Russell Bryant <[email protected]>
* [V1][core] Implement pipeline parallel on Ray ( vllm-project#12996 )
* [VLM] Remove input processor from clip and siglip ( vllm-project#13165 )
* [Frontend] Pass pre-created socket to uvicorn ( vllm-project#13113 )
* [V1] Clarify input processing and multimodal feature caching logic ( vllm-project#13211 )
* [VLM] Merged multi-modal processor for Molmo ( vllm-project#12966 )
* [V1][Core] Add worker_base for v1 worker ( vllm-project#12816 )
Signed-off-by: Aoyu <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: Aoyu <[email protected]>
Co-authored-by: youkaichao <[email protected]>
* [Misc] Qwen2.5-VL Optimization ( vllm-project#13155 )
* [VLM] Separate text-only and vision variants of the same model architecture ( vllm-project#13157 )
* [Bugfix] Missing Content Type returns 500 Internal Server Error ( vllm-project#13193 )
* [Frontend] Add `/v1/audio/transcriptions` OpenAI API endpoint ( vllm-project#12909 )
* Initial attempt to adjust codeowners to the ROCm fork ( #420 )
* Applying weight padding to deepseek ( #421 )
* Add label if pre-commit passes ( vllm-project#12527 )
Signed-off-by: Harry Mellor <[email protected]>
* [Model] DeepSeek Tunings ( #423 )
* fused_moe config for DSv3 on MI300X updated
* Add tuning script and post processing script
Signed-off-by: Randall Smith <[email protected]>
* Add modification to fp8_utils for tuning
Signed-off-by: Randall Smith <[email protected]>
* update tuning script and add the configs
Signed-off-by: Randall Smith <[email protected]>
* slightly better tunings
Signed-off-by: Randall Smith <[email protected]>
* benchmark_moe.py is updated to generate more accurate MoE configs and a specific MoE config for DSv3 is added
* Bug in sgl_moe_align_block_size() is fixed by Greg
* Generate fp8_w8a8 config for MI300XHF
* tunings that don't give garbage output
Signed-off-by: Randall Smith <[email protected]>
* More accurate tunings
Signed-off-by: Randall Smith <[email protected]>
* More accurate tunings and reject inaccurate configs
Signed-off-by: Randall Smith <[email protected]>
* add new tunings
Signed-off-by: Randall Smith <[email protected]>
* rename tuning script and add benchmark script to use for optimizing blockwise quant
Signed-off-by: Randall Smith <[email protected]>
* remove white space from file names
Signed-off-by: Randall Smith <[email protected]>
* remove white space from file names
Signed-off-by: Randall Smith <[email protected]>
* Remove some unnecessary changes
Signed-off-by: Randall Smith <[email protected]>
* don't use space in file names
Signed-off-by: Randall Smith <[email protected]>
* remove XHF tunings
Signed-off-by: Randall Smith <[email protected]>
* remove OAM from file name
Signed-off-by: Randall Smith <[email protected]>
* rmeove OAM from file names
Signed-off-by: Randall Smith <[email protected]>
* yapf
Signed-off-by: Randall Smith <[email protected]>
* update config name
Signed-off-by: Randall Smith <[email protected]>
* remove benchmark_moe.py changes
Signed-off-by: Randall Smith <[email protected]>
* remove is_contiguous
Signed-off-by: Randall Smith <[email protected]>
* use more recent fp8_utils.py
Signed-off-by: Randall Smith <[email protected]>
* remove is_contiguous
Signed-off-by: Randall Smith <[email protected]>
---------
Signed-off-by: Randall Smith <[email protected]>
Co-authored-by: qli88 <[email protected]>
* Optimize moe_align_block_size for deepseek_v3 ( vllm-project#12850 )
Signed-off-by: mgoin <[email protected]>
* [Kernel][Bugfix] Refactor and Fix CUTLASS 2:4 Sparse Kernels ( vllm-project#13198 )
Signed-off-by: Tyler Michael Smith <[email protected]>
* Revert "Add label if pre-commit passes" ( vllm-project#13242 )
* [ROCm] Avoid using the default stream on ROCm ( vllm-project#13238 )
Signed-off-by: Gregory Shtrasberg <[email protected]>
* [Kernel] Fix awq error when n is not divisable by 128 ( vllm-project#13227 )
* [V1] Consolidate MM cache size to vllm.envs ( vllm-project#13239 )
* [Bugfix/CI] Turn test_compressed_tensors_2of4_sparse back on ( vllm-project#13250 )
* [Bugfix][CI] Inherit codespell settings from pyproject.toml in the pre-commit-config ( vllm-project#13237 )
* [Bugfix] Offline example of disaggregated prefill ( vllm-project#13214 )
* [Misc] Remove redundant statements in scheduler.py ( vllm-project#13229 )
* Consolidate Llama model usage in tests ( vllm-project#13094 )
* Expand MLA to support most types of quantization ( vllm-project#13181 )
* [V1] LoRA - Enable Serving Usecase ( vllm-project#12883 )
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* [ROCm][V1] Add intial ROCm support to V1 ( vllm-project#12790 )
* [Bugfix][V1] GPUModelRunner._update_states should return True when there is a finished request in batch ( vllm-project#13126 )
* [WIP] TPU V1 Support Refactored ( vllm-project#13049 )
* [Frontend] Optionally remove memory buffer used for uploading to URLs in run_batch ( vllm-project#12927 )
Signed-off-by: Pooya Davoodi <[email protected]>
* [Bugfix] Fix missing parentheses ( vllm-project#13263 )
* [Misc] Log time consumption of sleep and wake-up ( vllm-project#13115 )
Signed-off-by: Jun Duan <[email protected]>
* [VLM] Keep track of whether prompt replacements have been applied ( vllm-project#13215 )
* [V1] Simplify GPUModelRunner._update_states check ( vllm-project#13265 )
* Support logit_bias in v1 Sampler ( vllm-project#13079 )
* [Core] choice-based structured output with xgrammar ( vllm-project#12632 )
* [Hardware][Gaudi][Bugfix] Fix error for guided decoding ( vllm-project#12317 )
* Removing bad config ( #425 )
* The order in the file is important. One needs to be explicitly be added to each following path for their ownership to apply ( #427 )
* [Quant][Perf] Use moe_wna16 kernel by default for MoEs with many experts ( vllm-project#13236 )
Signed-off-by: mgoin <[email protected]>
* [Core] Reduce TTFT with concurrent partial prefills ( vllm-project#10235 )
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: Prashant Gupta <[email protected]>
Co-authored-by: Prashant Gupta <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
* [V1][Core] min_p sampling support ( vllm-project#13191 )
Signed-off-by: Aoyu <[email protected]>
Co-authored-by: Aoyu <[email protected]>
* [V1][CI] Fix failed v1-test because of min_p ( vllm-project#13316 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [V1][Sampler] Don't apply temp for greedy-only ( vllm-project#13311 )
Signed-off-by: Nick Hill <[email protected]>
* [V1][PP] Fix memory profiling in PP ( vllm-project#13315 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Bugfix][AMD] Update torch_bindings so that scaled_fp4_quant isn't build on ROCm ( vllm-project#13235 )
* [Bugfix][Docs] Fix offline Whisper ( vllm-project#13274 )
* [Bugfix] Massage MLA's usage of flash attn for RoCM ( vllm-project#13310 )
* [BugFix] Don't scan entire cache dir when loading model ( vllm-project#13302 )
* [Bugfix]Fix search start_index of stop_checker ( vllm-project#13280 )
* [Bugfix] Fix qwen2.5-vl image processor ( vllm-project#13286 )
* [V1][Metrics] Add iteration_tokens_total histogram from V0 ( vllm-project#13288 )
* [AMD] [Model] DeepSeek tunings ( vllm-project#13199 )
* [V1][PP] Run engine busy loop with batch queue ( vllm-project#13064 )
* [ci/build] update flashinfer ( vllm-project#13323 )
* [Doc] [2/N] Add Fuyu E2E example for multimodal processor ( vllm-project#13331 )
* [V1][Spec Decode] Ngram Spec Decode ( vllm-project#12193 )
Signed-off-by: LiuXiaoxuanPKU <[email protected]>
* [Quant] Add `SupportsQuant` to phi3 and clip ( vllm-project#13104 )
* [Bugfix] Pin xgrammar to 0.1.11 ( vllm-project#13338 )
* avoid calling hf_list_repo_files for local model
Signed-off-by: isotr0py <[email protected]>
* annotation
Signed-off-by: isotr0py <[email protected]>
* [BugFix] Enhance test_pos_encoding to support execution on multi-devices ( vllm-project#13187 )
Signed-off-by: wchen61 <[email protected]>
* [V1] Update doc and examples for H2O-VL ( vllm-project#13349 )
Signed-off-by: Roger Wang <[email protected]>
* [ci] skip failed tests for flashinfer ( vllm-project#13352 )
Signed-off-by: youkaichao <[email protected]>
* [platform] add base class for communicators ( vllm-project#13208 )
Signed-off-by: youkaichao <[email protected]>
* [Bugfix] Fix 2 Node and Spec Decode tests ( vllm-project#13341 )
Signed-off-by: DarkLight1337 <[email protected]>
* [Docs] Change myenv to vllm. Update python_env_setup.inc.md ( vllm-project#13325 )
* [V1][BugFix] Add __init__.py to v1/spec_decode/ ( vllm-project#13359 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [V1][PP] Cache Intermediate Tensors ( vllm-project#13353 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [Bugfix][Platform][CPU] Fix cuda platform detection on CPU backend edge case ( vllm-project#13358 )
Signed-off-by: Isotr0py <[email protected]>
* [V1][BugFix] Clean up rejection sampler & Fix warning msg ( vllm-project#13362 )
Signed-off-by: Woosuk Kwon <[email protected]>
* [V1][Misc] Avoid unnecessary log output ( vllm-project#13289 )
* [Feature][Spec Decode] Simplify the use of Eagle Spec Decode ( vllm-project#12304 )
Signed-off-by: Shangming Cai <[email protected]>
* Fix spelling error in index.md ( vllm-project#13369 )
* Run v1 benchmark and integrate with PyTorch OSS benchmark database ( vllm-project#13068 )
Signed-off-by: Huy Do <[email protected]>
* [MISC] tiny fixes ( vllm-project#13378 )
* [VLM] Check required fields before initializing field config in `DictEmbeddingItems` ( vllm-project#13380 )
* [Model] Support Mamba2 (Codestral Mamba) ( vllm-project#9292 )
Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Yu Chin Fabian Lim <[email protected]>
* [Bugfix] fix xpu communicator ( vllm-project#13368 )
Signed-off-by: yan ma <[email protected]>
* [Bugfix] Fix VLLM_USE_MODELSCOPE issue ( vllm-project#13384 )
* Updating PR template to point people to the upstream repo. Updating codeowners ( #431 )
* Enabling the ROCm-vLLM CI on MI250 machines ( #432 )
* Enabling ROCm CI on MI250 machines:
- correct build target
- correct queue
Signed-off-by: Alexei V. Ivanov <[email protected]>
---------
Signed-off-by: Alexei V. Ivanov <[email protected]>
* Optimization for quantized gemm skinny sizes ( #411 )
* Optimization for quantized gemm skinny sizes
* lint fix
* Add support for bf16/fp16
* code cleanup
* code cleanup
* lint fix2
* cleanup
* Moved the logic into tuned gemm to preserve API compatibility
---------
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
* Restricting FP8 wvSplitk to MI300x ( #439 )
* Remove mi300a ( #440 )
* Removing gfx940 and gfx941 targets. These have been deprecated in favor of gfx942 for MI300X
Signed-off-by: Gregory Shtrasberg <[email protected]>
* Remove from custom kernels as well
---------
Signed-off-by: Gregory Shtrasberg <[email protected]>
* resolve diff for mixtral8x7B configs ( #437 )
Signed-off-by: Divakar Verma <[email protected]>
* Torch version bump to fix tunable ops ( #442 )
* Advance torch commit to be past pytorch/pytorch#144942 to fix tunable ops
* Make sure to use the submodule commit compatible with the main aiter commit
* bugfix: remove unused argument passed to the forward pass of ReplicatedLinear layer
Signed-off-by: vllmellm <[email protected]>
---------
Signed-off-by: Aleksandr Malyshev <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: Kyle Sayers <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Gregory Shtrasberg <[email protected]>
Signed-off-by: Lu Fang <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: EC2 Default User <[email protected]>
Signed-off-by: <>
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Yu Chin Fabian Lim <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Zhao Ke <[email protected]>
Signed-off-by: Zifei Tong <[email protected]>
Signed-off-by: Sanju C Sudhakaran <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: மனோஜ்குமார் பழனிச்சாமி <[email protected]>
Signed-off-by: Farzad Abdolhosseini <[email protected]>
Signed-off-by: kevin <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: Florian Greinacher <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Ce Gao <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: YuhongGuo <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Hollow Man <[email protected]>
Signed-off-by: Keyun Tong <[email protected]>
Signed-off-by: Lingfan Yu <[email protected]>
Signed-off-by: andoorve <[email protected]>
Signed-off-by: Aoyu <[email protected]>
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Pooya Davoodi <[email protected]>
Signed-off-by: Jun Duan <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: Prashant Gupta <[email protected]>
Signed-off-by: LiuXiaoxuanPKU <[email protected]>
Signed-off-by: isotr0py <[email protected]>
Signed-off-by: wchen61 <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Huy Do <[email protected]>
Signed-off-by: yan ma <[email protected]>
Signed-off-by: Alexei V. Ivanov <[email protected]>
Signed-off-by: Divakar Verma <[email protected]>
Signed-off-by: vllmellm <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
Co-authored-by: Kyle Sayers <[email protected]>
Co-authored-by: mgoin <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: Akash kaothalkar <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: Sanju C Sudhakaran <[email protected]>
Co-authored-by: Rahul Tuli <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Lu Fang <[email protected]>
Co-authored-by: Sumit Vij <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: EC2 Default User <[email protected]>
Co-authored-by: arakowsk-amd <[email protected]>
Co-authored-by: Jitse Klomp <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Yu Chin Fabian Lim <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: ZSL98 <[email protected]>
Co-authored-by: zhangshulai <[email protected]>
Co-authored-by: Szymon Ożóg <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: Amit Garg <[email protected]>
Co-authored-by: afeldman-nm <[email protected]>
Co-authored-by: [email protected] <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Ke Zhao <[email protected]>
Co-authored-by: zifeitong <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Shaoting <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
Co-authored-by: Jun Duan <[email protected]>
Co-authored-by: Liangfu Chen <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: Patrick von Platen <[email protected]>
Co-authored-by: mgoin <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: மனோஜ்குமார் பழனிச்சாமி <[email protected]>
Co-authored-by: Farzad Abdolhosseini <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Florian Greinacher <[email protected]>
Co-authored-by: Ce Gao <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
Co-authored-by: Yuhong Guo <[email protected]>
Co-authored-by: Mark McLoughlin <[email protected]>
Co-authored-by: Jewon Lee <[email protected]>
Co-authored-by: MoonRide303 <[email protected]>
Co-authored-by: ℍ𝕠𝕝𝕝𝕠𝕨 𝕄𝕒𝕟 <[email protected]>
Co-authored-by: sky0530 <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: Keyun Tong <[email protected]>
Co-authored-by: Christian Pinto <[email protected]>
Co-authored-by: Lingfan Yu <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Shiyan Deng <[email protected]>
Co-authored-by: bnellnm <[email protected]>
Co-authored-by: Rafael Vasquez <[email protected]>
Co-authored-by: Adrian Abeyta <[email protected]>
Co-authored-by: AdrianAbeyta <[email protected]>
Co-authored-by: Qubitium-ModelCloud <[email protected]>
Co-authored-by: Yida Wu <[email protected]>
Co-authored-by: Murali Andoorveedu <[email protected]>
Co-authored-by: Kaixi Hou <[email protected]>
Co-authored-by: LikeSundayLikeRain <[email protected]>
Co-authored-by: Daniel Han <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Aoyu <[email protected]>
Co-authored-by: Aoyu <[email protected]>
Co-authored-by: 燃 <[email protected]>
Co-authored-by: Vaibhav Jain <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: rasmith <[email protected]>
Co-authored-by: qli88 <[email protected]>
Co-authored-by: Jinzhen Lin <[email protected]>
Co-authored-by: XiaobingZhang <[email protected]>
Co-authored-by: Wang Ran (汪然) <[email protected]>
Co-authored-by: Sage Moore <[email protected]>
Co-authored-by: Kero Liang <[email protected]>
Co-authored-by: Alexander Matveev <[email protected]>
Co-authored-by: Pooya Davoodi <[email protected]>
Co-authored-by: Xu Song <[email protected]>
Co-authored-by: Yu-Zhou <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Prashant Gupta <[email protected]>
Co-authored-by: Lily Liu <[email protected]>
Co-authored-by: isotr0py <[email protected]>
Co-authored-by: wchen61 <[email protected]>
Co-authored-by: 凌 <[email protected]>
Co-authored-by: yankooo <[email protected]>
Co-authored-by: Huy Do <[email protected]>
Co-authored-by: Yu Chin Fabian Lim <[email protected]>
Co-authored-by: Yan Ma <[email protected]>
Co-authored-by: r.4ntix <[email protected]>
Co-authored-by: Alexei-V-Ivanov-AMD <[email protected]>
Co-authored-by: Hashem Hashemi <[email protected]>
Co-authored-by: vllmellm <[email protected]>
mgoin mentioned this pull request Apr 5, 2025: [Kernel] Use moe_wna16 kernel for compressed tensors wna16 moe models #16038 (Merged).
lulmer pushed a commit to lulmer/vllm that referenced this pull request Apr 7, 2025:
[Quant][Perf] Use moe_wna16 kernel by default for MoEs with many experts ( vllm-project#13236 ) … ce61da9
Signed-off-by: mgoin <[email protected]>
Signed-off-by: Louis Ulmer <[email protected]>
shreyankg pushed a commit to shreyankg/vllm that referenced this pull request May 3, 2025:
[Quant][Perf] Use moe_wna16 kernel by default for MoEs with many experts ( vllm-project#13236 ) … 4abde6f
Signed-off-by: mgoin <[email protected]>
|
2025-09-07 17:52:42
|
b9986454fe8ba80e2a109d069397b6b59aae658b
|
https://github.com/vllm-project/vllm/pull/12570
| false | true | true | true |
PERF: optimization, optimization, profile | SERVING: frontend | TEST: test, CI, CI
|
srikanthsrnvs (Contributor) commented Jan 30, 2025 (edited by github-actions bot):
Fix to AWQ quant loading of the new R1 model. The new optimized MoE kernels for a large number of experts (moe_wn16) use AWQ quant, which requires the attention layers to be in 16-bit. The current merge has broken this, and get_quant_method must return None for it to work correctly again.
srikanthsrnvs requested review from mgoin, robertgshaw2-redhat and tlrmchlsmth as code owners January 30, 2025 04:43.
github-actions bot commented Jan 30, 2025: 👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these: add the ready label to the PR, or enable auto-merge. 🚀
mgoin (Member) approved these changes Jan 31, 2025: Thank you, makes sense!
mgoin added the quantization and ready (ONLY add when PR is ready to merge/full CI is needed) labels Jan 31, 2025.
srikanthsrnvs and others added 23 commits February 3, 2025 03:14:
Fix for attention layers to remain unquantized during moe_wn16 quant method … 483b60c
Signed-off-by: Srikanth Srinivas <[email protected]> Set ?device={device} when changing tab in installation guides ( vllm… … 915fdce …-project#12560 )
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Srikanth Srinivas <[email protected]> [Misc] fix typo: add missing space in lora adapter error message ( vll… … d689505 …m-project#12564 )
Signed-off-by: Beim <[email protected]>
Signed-off-by: Srikanth Srinivas <[email protected]> [Kernel] Triton Configs for Fp8 Block Quantization ( vllm-project#11589 ) … 689bd19 Signed-off-by: [email protected] <[email protected]>
Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Signed-off-by: Srikanth Srinivas <[email protected]> [CPU][PPC] Updated torch, torchvision, torchaudio dependencies ( vllm-… … f7a4e12 …project#12555 )
Signed-off-by: npanpaliya <[email protected]>
Signed-off-by: Srikanth Srinivas <[email protected]> [V1][Log] Add max request concurrency log to V1 ( vllm-project#12569 ) … 95b49be Signed-off-by: mgoin <[email protected]>
Signed-off-by: Srikanth Srinivas <[email protected]> [Kernel] Update cutlass_scaled_mm to support 2d group (blockwise) s… … b0d7288 …caling ( vllm-project#11868 )
Signed-off-by: Srikanth Srinivas <[email protected]> [ROCm][AMD][Model] llama 3.2 support upstreaming ( vllm-project#12421 ) … 9813962 Signed-off-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Signed-off-by: Srikanth Srinivas <[email protected]> [Attention] MLA decode optimizations ( vllm-project#12528 ) … 897c8c2 Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Zhuohan Li <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Alexander Matveev <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Signed-off-by: Srikanth Srinivas <[email protected]> [Bugfix] Gracefully handle huggingface hub http error ( vllm-project#1… … c4795ce …2571 )
Signed-off-by: Srikanth Srinivas <[email protected]> Format … a5e6700 Signed-off-by: mgoin <[email protected]>
Signed-off-by: Srikanth Srinivas <[email protected]> Add favicon to docs ( vllm-project#12611 ) … 1ce860b Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Srikanth Srinivas <[email protected]> [BugFix] Fix Torch.Compile For DeepSeek ( vllm-project#12594 ) … bc9d831 Co-authored-by: simon-mo <[email protected]>
Signed-off-by: Srikanth Srinivas <[email protected]> [Git] Automatically sign-off commits ( vllm-project#12595 ) … 22b918d It's very annoying when I forgot to add `-s` in `git commit` to
sign-off, because I then need to `git rebase HEAD~1 --signoff` and `git
push -f` to fix the DCO. This PR adds a hook to sign off commits
automatically when `-s` is missing to solve this problem. The only
change from the user side is now users have to install 2 hooks, so
instead of just
```
pre-commit install
```
Now we need to
```
pre-commit install --hook-type pre-commit --hook-type commit-msg
```
Note that even if users still only install the pre-commit hook, they
won't get any error in `git commit`. Just the sign-off hook won't run.
cc @hmellor @youkaichao ---------
Signed-off-by: Cody Yu <[email protected]>
Signed-off-by: Srikanth Srinivas <[email protected]> [Docs][V1] Prefix caching design ( vllm-project#12598 ) … 00df0e4 - Create v1 design document section in docs.
- Add prefix caching design doc. @WoosukKwon @ywang96 ---------
Signed-off-by: Cody Yu <[email protected]>
Signed-off-by: Srikanth Srinivas <[email protected]> [v1][Bugfix] Add extra_keys to block_hash for prefix caching ( vllm-pr… … 44fa70d …oject#12603 )
This pr adds extra key to block hash, to generate different hash value
for two blocks with the same token string but different extra_keys in
their parent blocks. For example, it can generate different hash value
for the second block of the following two requests:
```python
request1 = make_request(
request_id=0,
prompt_token_ids=[_ for _ in range(6)],
mm_positions=[{
"offset": 0,
"length": 3
}, {
"offset": 3,
"length": 3
}],
mm_hashes=["hash1", "hash2"],
)
request2 = make_request(
request_id=1,
prompt_token_ids=[_ for _ in range(6)],
mm_positions=[{
"offset": 0,
"length": 3
}, {
"offset": 3,
"length": 3
}],
mm_hashes=["hash3", "hash2"],
)
```
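A minimal sketch of the idea (not vLLM's actual hashing code; the block size of 3 and the parent-hash chaining scheme are assumptions): because each block hash covers the parent hash, the block's token ids and its extra keys, the first blocks of request1 and request2 hash differently ("hash1" vs "hash3"), and that difference propagates to their second blocks even though those share the same token ids.
```python
from typing import Optional, Tuple


def hash_block(parent_hash: Optional[int],
               token_ids: Tuple[int, ...],
               extra_keys: Tuple[str, ...] = ()) -> int:
    # Extra keys (e.g. mm_hashes) are folded into the hash alongside the
    # parent block's hash and this block's token ids.
    return hash((parent_hash, token_ids, extra_keys))


# Second block of each request: identical token ids (3, 4, 5), but the parent
# blocks carry different mm hashes ("hash1" vs "hash3").
second_block_req1 = hash_block(hash_block(None, (0, 1, 2), ("hash1",)), (3, 4, 5), ("hash2",))
second_block_req2 = hash_block(hash_block(None, (0, 1, 2), ("hash3",)), (3, 4, 5), ("hash2",))
print(second_block_req1 != second_block_req2)  # True (up to hash collisions)
```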
---------
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: Srikanth Srinivas <[email protected]> [release] Add input step to ask for Release version ( vllm-project#12631 ) … fdd86fb Instead of having to create a new build with release version put in as
env var.
Signed-off-by: Srikanth Srinivas <[email protected]> [Bugfix] Revert MoE Triton Config Default ( vllm-project#12629 ) … c4a7c26 SUMMARY:
* previous PR for pulling in block configs also changed defaults
( https://github.com/vllm-project/vllm/pull/11589/files ) for FP8
* this broke L4 MoE since there was not enough SHM for the default
configuration
* this reverts the non-block example to the default
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Srikanth Srinivas <[email protected]> [Kernel][Quantization] Integrate block-quantized CUTLASS kernels for … … e7c98c6 …DeepSeekV3 ( vllm-project#12587 )
Integrates the block-quantized kernels introduced in vllm-project#11868 for use in linear
layers.
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Srikanth Srinivas <[email protected]> [Feature] Fix guided decoding blocking bitmask memcpy ( vllm-project#1… … d27e55d …2563 )
**[Guided decoding performance optimization]** Sending the guided
decoding bitmask in xgrammar to the GPU
(`self.token_bitmask.to(scores.device)`) is a blocking operation that
prevents the CPU from pre-launching the sampler kernels. The CPU waits
until decode is complete, then copies the bitmask over. This PR changes
the operation to async via setting `non_blocking=True`.
(Current) The CPU is blocked on a `cudaStreamSynchronize` and only
pre-empts the sampling kernels after bitmask application. Below is the
Nsys profile for one decode phase from Llama 3.1 8B.

With the optimization, this is no longer the case:

---------
Signed-off-by: Ryan N <[email protected]>
Signed-off-by: Srikanth Srinivas <[email protected]> [Doc] Improve installation signposting ( vllm-project#12575 ) … bece70b - Make device tab names more explicit
- Add comprehensive list of devices to https://docs.vllm.ai/en/latest/getting_started/installation/index.html - Add `attention` blocks to the intro of all devices that don't have
pre-built wheels/images
---------
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Srikanth Srinivas <[email protected]> [Doc] int4 w4a16 example ( vllm-project#12585 ) … 6b7e433 Based on a request by @mgoin , with @kylesayrs we have added an example
doc for int4 w4a16 quantization, following the pre-existing int8 w8a8
quantization example and the example available in
[`llm-compressor`]( https://github.com/vllm-project/llm-compressor/blob/main/examples/quantization_w4a16/llama3_example.py )
FIX #n/a (no issue created) @kylesayrs and I have discussed a couple additional improvements for the
quantization docs. We will revisit at a later date, possibly including:
- A section for "choosing the correct quantization scheme/ compression
technique"
- Additional vision or audio calibration datasets
---------
Signed-off-by: Brian Dellabetta <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Signed-off-by: Srikanth Srinivas <[email protected]> [V1] Bugfix: Validate Model Input Length ( vllm-project#12600 ) … fd9060b SUMMARY:
* avoid crashing the engine when we get an input longer than
max_model_len FIX vllm-project#12567 (*link existing issues this PR will resolve*)
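A minimal sketch of that validation, using a hypothetical helper name rather than the actual V1 engine code:
```python
from typing import List


def validate_prompt_length(prompt_token_ids: List[int], max_model_len: int) -> None:
    # Reject over-long inputs up front instead of letting them crash the engine later.
    if len(prompt_token_ids) > max_model_len:
        raise ValueError(
            f"Prompt has {len(prompt_token_ids)} tokens, which exceeds "
            f"max_model_len={max_model_len}.")
```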
Signed-off-by: Srikanth Srinivas <[email protected]>
(18 hidden timeline items omitted)
srikanthsrnvs requested review from LiuXiaoxuanPKU, KuntaiDu, DarkLight1337, ywang96 and zhuohan123 as code owners February 3, 2025 03:15.
mergify bot added the documentation (Improvements or additions to documentation), ci/build, frontend, structured-output and speculative-decoding labels Feb 3, 2025.
mergify bot commented Feb 3, 2025: This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @srikanthsrnvs. https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
mergify bot added the v1 and needs-rebase labels Feb 3, 2025.
Merge branch 'main' into fix-moe-wna16-attention … 8b5a0ea
mergify bot removed
the needs-rebase label Feb 3, 2025.
unused imports … 9d09ec0
DarkLight1337 enabled auto-merge (squash) February 3, 2025 05:11.
srikanthsrnvs (Contributor, Author) commented Feb 3, 2025: Anyone know why the Docker image building fails?
DarkLight1337 (Member) commented Feb 3, 2025: Not sure. It's also a problem on main so it's not related to this PR. We will force-merge if necessary.
youkaichao disabled auto-merge February 3, 2025 05:46.
youkaichao merged commit b998645 into vllm-project:main Feb 3, 2025 (24 of 38 checks passed).
sahelib25 pushed a commit
to krai/vllm that referenced this pull request Feb 3, 2025:
Fix for attention layers to remain unquantized during moe_wn16 quant ( vllm-project#12570 ) … 576c903
Fix to AWQ quant loading of the new R1 model
The new optimized MoE kernels for a large number of experts `moe_wn16`
uses AWQ quant which requires the attention layers to be in 16bit
The current merge has broken this, and the `get_quant_method` must
return None for it to work correctly again
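A minimal sketch of the get_quant_method behavior described above, with hypothetical class names standing in for the real vLLM layer and quant-method classes (this is not the actual vLLM code):
```python
from typing import Optional


class FusedMoE: ...      # stand-in for vLLM's fused-MoE layer class


class LinearBase: ...    # stand-in for attention/linear layers


class MoeWNA16MethodSketch:
    """Hypothetical stand-in for the MoE-WNA16 quantization method."""

    def __init__(self, config: "MoeWNA16ConfigSketch") -> None:
        self.config = config


class MoeWNA16ConfigSketch:
    def get_quant_method(self, layer: object, prefix: str) -> Optional[MoeWNA16MethodSketch]:
        if isinstance(layer, FusedMoE):
            return MoeWNA16MethodSketch(self)
        # Attention and other linear layers: returning None keeps them
        # unquantized (16-bit), which is what the AWQ-style checkpoint expects.
        return None
```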
---------
Signed-off-by: Srikanth Srinivas <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Beim <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: npanpaliya <[email protected]>
Signed-off-by: Aleksandr Malyshev <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: Cody Yu <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Ryan N <[email protected]>
Signed-off-by: Brian Dellabetta <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: Rahul Tuli <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: Vicente Herrera <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Shawn Du <[email protected]>
Signed-off-by: Kunshang Ji <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Beim <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: mgoin <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: Nishidha <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Zhuohan Li <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Alexander Matveev <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Ryan Nguyen <[email protected]>
Co-authored-by: Brian Dellabetta <[email protected]>
Co-authored-by: fade_away <[email protected]>
Co-authored-by: weilong.yu <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Eldar Kurtic <[email protected]>
Co-authored-by: Rahul Tuli <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Vicente Herrera <[email protected]>
Co-authored-by: Jinzhen Lin <[email protected]>
Co-authored-by: Shawn Du <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: youkaichao <[email protected]>
shreyankg pushed a commit to shreyankg/vllm that referenced this pull request May 3, 2025:
Fix for attention layers to remain unquantized during moe_wn16 quant ( vllm-project#12570 ) … e145287
Fix to AWQ quant loading of the new R1 model
The new optimized MoE kernels for a large number of experts `moe_wn16`
uses AWQ quant which requires the attention layers to be in 16bit
The current merge has broken this, and the `get_quant_method` must
return None for it to work correctly again
---------
Signed-off-by: Srikanth Srinivas <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Beim <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: npanpaliya <[email protected]>
Signed-off-by: Aleksandr Malyshev <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: Cody Yu <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Ryan N <[email protected]>
Signed-off-by: Brian Dellabetta <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: Rahul Tuli <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: Vicente Herrera <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Shawn Du <[email protected]>
Signed-off-by: Kunshang Ji <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Beim <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: mgoin <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: Nishidha <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Zhuohan Li <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Alexander Matveev <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Ryan Nguyen <[email protected]>
Co-authored-by: Brian Dellabetta <[email protected]>
Co-authored-by: fade_away <[email protected]>
Co-authored-by: weilong.yu <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Eldar Kurtic <[email protected]>
Co-authored-by: Rahul Tuli <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Vicente Herrera <[email protected]>
Co-authored-by: Jinzhen Lin <[email protected]>
Co-authored-by: Shawn Du <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: youkaichao <[email protected]>
|
2025-09-07 17:52:46
|
d4bc1a4d248a5d23e1f731ecb53511a9a54f5dfc
|
No PR found
| false | false | false | false |
NO_PR
|