finalf0 committed
Commit fd7f352 Β· verified Β· Parent: 18accbc

Update README.md

Files changed (1): README.md (+2 βˆ’2)
README.md CHANGED
@@ -22,10 +22,10 @@ tags:
 
 #### πŸ“Œ Pinned
 
-* [2025.01.14] πŸ”₯πŸ”₯πŸ”₯ We open source [MiniCPM-o 2.6](https://huggingface.co/openbmb/MiniCPM-o-2_6), with significant performance improvement over MiniCPM-V 2.6, and support real-time speech-to-speech conversation and multimodal live streaming. Try it now.
+* [2025.01.14] πŸ”₯πŸ”₯πŸ”₯ We open source [**MiniCPM-o 2.6**](https://huggingface.co/openbmb/MiniCPM-o-2_6), with significant performance improvement over **MiniCPM-V 2.6**, and support real-time speech-to-speech conversation and multimodal live streaming. Try it now.
 
 * [2024.08.10] πŸš€πŸš€πŸš€ MiniCPM-Llama3-V 2.5 is now fully supported by [official](https://github.com/ggerganov/llama.cpp) llama.cpp! GGUF models of various sizes are available [here](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf).
-* [2024.08.06] πŸ”₯πŸ”₯πŸ”₯ We open-source [MiniCPM-V 2.6](https://huggingface.co/openbmb/MiniCPM-V-2_6), which outperforms GPT-4V on single image, multi-image and video understanding. It advances popular features of MiniCPM-Llama3-V 2.5, and can support real-time video understanding on iPad. Try it now!
+* [2024.08.06] πŸ”₯πŸ”₯πŸ”₯ We open-source [**MiniCPM-V 2.6**](https://huggingface.co/openbmb/MiniCPM-V-2_6), which outperforms GPT-4V on single image, multi-image and video understanding. It advances popular features of MiniCPM-Llama3-V 2.5, and can support real-time video understanding on iPad. Try it now!
 * [2024.08.03] MiniCPM-Llama3-V 2.5 technical report is released! See [here](https://github.com/OpenBMB/MiniCPM-V/tree/main/docs/MiniCPM_Llama3_V_25_technical_report.pdf).
 * [2024.07.19] MiniCPM-Llama3-V 2.5 supports vLLM now! See [here](https://github.com/OpenBMB/MiniCPM-V/tree/main?tab=readme-ov-file#vllm).
 * [2024.05.28] πŸ’« We now support LoRA fine-tuning for MiniCPM-Llama3-V 2.5, using only 2 V100 GPUs! See more statistics [here](https://github.com/OpenBMB/MiniCPM-V/tree/main/finetune#model-fine-tuning-memory-usage-statistics).