Improve model card and add metadata (#1)
- Improve model card and add metadata (0d7b2dffe5726f5b9f4351085071b025b162ef02)
Co-authored-by: Niels Rogge <[email protected]>
README.md
CHANGED
@@ -1,3 +1,58 @@
---
license: apache-2.0
pipeline_tag: image-text-to-text
library_name: transformers
---

# SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models

This model, VLAA-Thinker-Qwen2VL-7B, is a vision-language model fine-tuned on the VLAA-Thinking dataset. As described in [the paper](https://huggingface.co/papers/2504.11468), it combines supervised fine-tuning (SFT) and reinforcement learning (RL) to improve the reasoning capabilities of large vision-language models. The model excels at multimodal reasoning tasks, achieving state-of-the-art performance on the OpenCompass Multimodal Reasoning Leaderboard as of April 7th, 2025.

<p align="center">
    🌐 <a href="https://ucsc-vlaa.github.io/VLAA-Thinking/" target="_blank">Project Page</a>
    • <img src="./assets/ar.svg" alt="Arxiv Logo" style="height: 1em; vertical-align: middle; margin-right: 0.3em;">
    <a href="./assets/VLAA-Thinker.pdf" target="_blank">Arxiv</a>
    • 💻 <a href="https://github.com/UCSC-VLAA/VLAA-Thinking" target="_blank">Code</a>
</p>

Both **VLAA-Thinker-Qwen2.5-3B** and **VLAA-Thinker-Qwen2.5-7B** achieve **SOTA** performance on the [OpenCompass Multimodal Reasoning Leaderboard](https://rank.opencompass.org.cn/leaderboard-multimodal-reasoning/?m=REALTIME) as of April 7th, 2025.

<img src="assets/opencompass_4b_box.png" width="640" alt="OpenCompass multimodal reasoning results, 4B-scale models" align="center" />

-----

<img src="assets/opencompass_7b_box.png" width="640" alt="OpenCompass multimodal reasoning results, 7B-scale models" align="center" />

## Quick Start 🚀

### Inference
Run `python inference.py`. Note that our model is trained with a system prompt; please ensure that it is included at inference time.
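
If you want to call the model from your own code instead of `inference.py`, the sketch below follows the standard Qwen2-VL chat-template flow in `transformers`. It is a minimal example, not the official script: the repository id, image path, and question are placeholders, and the system prompt should be copied from `inference.py` (it is not reproduced here).

```python
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration
from qwen_vl_utils import process_vision_info

# Placeholder repository id; point this at the actual model checkpoint.
MODEL_ID = "UCSC-VLAA/VLAA-Thinker-Qwen2VL-7B"

model = Qwen2VLForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(MODEL_ID)

# Copy the system prompt from inference.py; the model was trained with it.
SYSTEM_PROMPT = "..."

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "path/to/your_image.jpg"},
            {"type": "text", "text": "What is shown in this image?"},
        ],
    },
]

# Standard Qwen2-VL preprocessing: render the chat template, then pack the images.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt"
).to(model.device)

generated = model.generate(**inputs, max_new_tokens=1024)
# Strip the prompt tokens before decoding the answer.
trimmed = generated[:, inputs.input_ids.shape[1]:]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```

This assumes `transformers` (with Qwen2-VL support) and `qwen-vl-utils` are installed; `inference.py` in the repository remains the reference implementation.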
### Dataset Download
Run `bash ./utils/download_dataset.sh`, specifying the dataset root as an absolute path. The dataset should be organized as follows:
```
├── VLAA-Thinking-SFT-126K.json
├── VLAA-Thinking-GRPO-25K.json
└── images
    ├── allava_laion
    ├── arxivqa
    ├── chartqa
    ├── clevr_math
    ├── coco
    │   └── train2017
    ├── docvqa
    ├── geoqa170k
    ├── synthesis
    ├── vg
    │   ├── VG_100K
    │   └── VG_100K_2
    └── vizwiz
```
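
After the download finishes, a quick sanity check can confirm that the layout matches the tree above. This is an optional helper, not part of the released tooling; the dataset root below is a placeholder for whatever absolute path you gave `download_dataset.sh`, and the script deliberately avoids assuming anything about the JSON schema.

```python
import json
from pathlib import Path

# Placeholder: the absolute dataset root used with ./utils/download_dataset.sh.
DATASET_ROOT = Path("/absolute/path/to/VLAA-Thinking")

# Image folders expected from the tree above.
EXPECTED_IMAGE_DIRS = [
    "allava_laion", "arxivqa", "chartqa", "clevr_math", "coco/train2017",
    "docvqa", "geoqa170k", "synthesis", "vg/VG_100K", "vg/VG_100K_2", "vizwiz",
]

missing = [d for d in EXPECTED_IMAGE_DIRS if not (DATASET_ROOT / "images" / d).is_dir()]
print("Missing image folders:", missing if missing else "none")

# Report what the two annotation files contain without assuming their schema.
for name in ("VLAA-Thinking-SFT-126K.json", "VLAA-Thinking-GRPO-25K.json"):
    path = DATASET_ROOT / name
    if not path.is_file():
        print(f"{name}: not found")
        continue
    with open(path, "r", encoding="utf-8") as f:
        data = json.load(f)
    size = len(data) if isinstance(data, (list, dict)) else "unknown"
    print(f"{name}: loaded, {size} top-level entries")
```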

### Training
Code coming soon!

(Rest of the README content can be kept as is)