Update README.md
README.md CHANGED
@@ -350,15 +350,3 @@ We extend our heartfelt gratitude to the following open-source projects and comm
 * ⚡ [FlashAttention](https://github.com/Dao-AILab/flash-attention) - Memory-efficient attention
 * 🚀 [FlashInfer](https://github.com/flashinfer-ai/flashinfer) - Optimized inference engine
 
-## 🌟🚀 Github Star History
-
-[](https://github.com/Tencent-Hunyuan/HunyuanImage-3.0)
-[](https://github.com/Tencent-Hunyuan/HunyuanImage-3.0)
-
-<a href="https://star-history.com/#Tencent-Hunyuan/HunyuanImage-3.0&Date">
-  <picture>
-    <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=Tencent-Hunyuan/HunyuanImage-3.0&type=Date1&theme=dark" />
-    <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=Tencent-Hunyuan/HunyuanImage-3.0&type=Date1" />
-    <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=Tencent-Hunyuan/HunyuanImage-3.0&type=Date1" />
-  </picture>
-</a>