Update README.md
README.md CHANGED
@@ -275,7 +275,7 @@ lm_eval \
 ## Inference Performance
 
 
-This model achieves up to 1.4x speedup in single-stream deployment and up to
+This model achieves up to 1.4x speedup in single-stream deployment and up to 3.0x speedup in multi-stream asynchronous deployment, depending on hardware and use-case scenario.
 The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.6.7.2, and [GuideLLM](https://github.com/neuralmagic/guidellm).
 
 <details>
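The benchmarking sentence above only names the tools, so here is a minimal sketch of how such numbers are typically reproduced: serve the model with vLLM's OpenAI-compatible server, then drive it with GuideLLM. This is not taken from the commit; `MODEL_ID` is a placeholder, and the GuideLLM flags follow its 0.1.x CLI, so check the documentation of the GuideLLM version you have installed for exact option names.

```bash
# Illustrative sketch, not part of this commit.

# 1. Start vLLM's OpenAI-compatible server for the model under test.
#    MODEL_ID is a placeholder for the model repository identifier.
vllm serve "$MODEL_ID"

# 2. Point GuideLLM at the running server to generate load and collect
#    single-stream and multi-stream latency/throughput measurements.
#    Flag names assume the GuideLLM 0.1.x CLI and may differ in other versions.
guidellm \
  --target "http://localhost:8000/v1" \
  --model "$MODEL_ID" \
  --data-type emulated \
  --data "prompt_tokens=512,generated_tokens=256"
```

The emulated data settings control the prompt and generation lengths of the synthetic requests, which is what makes the single-stream versus multi-stream comparison reproducible across hardware.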