nielsr (HF Staff) committed
Commit b9687dc · verified · 1 Parent(s): 2bc693b

Enhance model card: Metadata, links, and usage example


This PR significantly improves the model card for **SpatialThinker-7B** by adding crucial metadata, relevant external links, and a practical usage example.

Specifically, it addresses the following:
- **Adds metadata**: Sets `license: apache-2.0`, `library_name: transformers` (enabling automated code snippets), and `pipeline_tag: image-text-to-text` (improving discoverability for multimodal tasks). The resulting front matter is shown consolidated below.
- **Updates content**: Replaces placeholder text with the paper's abstract, a detailed model description, and relevant sections from the GitHub README (Updates, Requirements, Installation, Training, Evaluation, Acknowledgements).
- **Includes links**: Adds direct links to the Hugging Face paper page ([SpatialThinker: Reinforcing 3D Reasoning in Multimodal LLMs via Spatial Rewards](https://huggingface.co/papers/2511.07403)), the project page (`https://hunarbatra.com/SpatialThinker/`), and the GitHub repository (`https://github.com/hunarbatra/SpatialThinker`).
- **Provides a usage example**: Adds a clear Python code snippet demonstrating how to load and use the model with the `transformers` library for image-text inference, adapted from common `transformers` patterns for Qwen2.5-VL models.

These enhancements will make the model more accessible, discoverable, and easier to use for the Hugging Face community.
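
For quick reference, this is the consolidated YAML front matter that results from the PR, assembled directly from the diff below (the `base_model` entry was already present; the other fields are added by this change):

```yaml
---
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
datasets:
- OX-PIXL/STVQA-7K
license: apache-2.0
library_name: transformers
pipeline_tag: image-text-to-text
---
```

The `library_name` and `pipeline_tag` fields are what enable the automated code snippets and the multimodal task filtering mentioned above.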

Files changed (1)
  1. README.md +193 -155
README.md CHANGED
@@ -1,203 +1,241 @@
  ---
- datasets:
- - OX-PIXL/STVQA-7K
  base_model:
  - Qwen/Qwen2.5-VL-7B-Instruct
  ---

- Paper: https://arxiv.org/abs/2511.07403

- # Model Card for Model ID

- <!-- Provide a quick summary of what the model is/does. -->
-
-
-
- ## Model Details

- ### Model Description

- <!-- Provide a longer summary of what this model is. -->

- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]

- ### Model Sources [optional]

- <!-- Provide the basic links for the model. -->

- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

- ## Uses

- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

- ### Direct Use

- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

- [More Information Needed]

- ### Downstream Use [optional]

- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

- [More Information Needed]

- ### Out-of-Scope Use

- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

- [More Information Needed]

- ## Bias, Risks, and Limitations

- <!-- This section is meant to convey both technical and sociotechnical limitations. -->

- [More Information Needed]

- ### Recommendations

- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

- ## How to Get Started with the Model

- Use the code below to get started with the model.

- [More Information Needed]

  ## Training Details

- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
  ### Training Procedure

- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]


- #### Training Hyperparameters

- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

- #### Speeds, Sizes, Times [optional]

- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

- [More Information Needed]

  ## Evaluation

- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in Lacoste et al. (2019).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]
  ---
  base_model:
  - Qwen/Qwen2.5-VL-7B-Instruct
+ datasets:
+ - OX-PIXL/STVQA-7K
+ license: apache-2.0
+ library_name: transformers
+ pipeline_tag: image-text-to-text
  ---

+ # SpatialThinker: Reinforcing 3D Reasoning in Multimodal LLMs via Spatial Rewards

+ <p align="center">
+   <a href="https://huggingface.co/papers/2511.07403">
+     <img src="https://img.shields.io/badge/Paper-2511.07403-b31b1b.svg" alt="Paper">
+   </a>
+   <a href="https://hunarbatra.com/SpatialThinker">
+     <img src="https://img.shields.io/badge/🌐%20Project%20Page-blue.svg" alt="Project Page">
+   </a>
+   <a href="https://github.com/hunarbatra/SpatialThinker">
+     <img src="https://img.shields.io/badge/GitHub%20Repo-black?logo=github" alt="GitHub Repo">
+   </a>
+   <a href="https://huggingface.co/collections/OX-PIXL/spatialthinker">
+     <img src="https://img.shields.io/badge/🤗%20Models%20%26%20Dataset-orange.svg" alt="Hugging Face Models">
+   </a>
+ </p>

+ ## Model Description

+ Multimodal large language models (MLLMs) have achieved remarkable progress in vision–language tasks, but they continue to struggle with spatial understanding. Existing spatial MLLMs often rely on explicit 3D inputs or architecture-specific modifications, and remain constrained by large-scale datasets or sparse supervision. To address these limitations, we introduce **SpatialThinker**, a 3D-aware MLLM trained with RL to integrate structured spatial grounding with multi-step reasoning. The model simulates human-like spatial perception by constructing a scene graph of task-relevant objects and spatial relations, and reasoning towards an answer via dense spatial rewards.

+ **SpatialThinker** consists of two key contributions:
+ 1. A data synthesis pipeline that generates **STVQA-7K**, a high-quality spatial VQA dataset.
+ 2. Online RL with a multi-objective dense spatial reward enforcing spatial grounding.

+ **SpatialThinker-7B** outperforms supervised fine-tuning and the sparse RL baseline on spatial understanding and real-world VQA benchmarks, nearly doubling the base-model gain compared to sparse RL, and surpassing GPT-4o. These results showcase the effectiveness of combining spatial supervision with reward-aligned reasoning in enabling robust 3D spatial understanding with limited data and advancing MLLMs towards human-level visual reasoning.

+ <p align="center">
+   <img src="https://github.com/hunarbatra/SpatialThinker/raw/main/assets/spatialthinker.jpg" width="60%" alt="SpatialThinker Overview">
+ </p>

+ ## Model Details

+ * **Developed by:** Hunar Batra, Haoqin Tu, Hardy Chen, Yuanze Lin, Cihang Xie, Ronald Clark
+ * **Model type:** 3D-aware Multimodal Large Language Model (MLLM)
+ * **Language(s) (NLP):** English
+ * **License:** Apache-2.0
+ * **Finetuned from model:** [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct)

+ ### Model Sources

+ * **Repository:** [https://github.com/hunarbatra/SpatialThinker](https://github.com/hunarbatra/SpatialThinker)
+ * **Paper:** [https://huggingface.co/papers/2511.07403](https://huggingface.co/papers/2511.07403)
+ * **Project Page:** [https://hunarbatra.com/SpatialThinker/](https://hunarbatra.com/SpatialThinker/)

+ ## How to Get Started with the Model

+ This model can be loaded and used directly with the Hugging Face `transformers` library.

+ First, ensure you have the necessary dependencies installed:

+ ```bash
+ pip install "transformers>=4.49.0"
+ pip install "flash-attn>=2.4.3" "vllm>=0.7.3"  # vllm 0.8.0 recommended
+ ```

+ Then, you can use the following Python code snippet for inference:

+ ```python
+ import torch
+ from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
+ from PIL import Image
+ import requests
+ from io import BytesIO

+ # Load model and processor
+ model_id = "OX-PIXL/SpatialThinker-7B"  # This is the model repository ID
+ processor = AutoProcessor.from_pretrained(model_id)
+ model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,  # Use bfloat16 for better performance on compatible GPUs
+     device_map="auto",
+ ).eval()  # Set model to evaluation mode

+ # Example image (replace with your own image path or URL)
+ image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
+ response = requests.get(image_url)
+ image = Image.open(BytesIO(response.content)).convert("RGB")

+ # Define a spatial reasoning question
+ question = "What are the spatial relationships between the car, the road, and the trees?"

+ # Construct chat messages
+ messages = [
+     {"role": "user", "content": [
+         {"type": "image", "image": image},
+         {"type": "text", "text": question}
+     ]}
+ ]

+ # Apply chat template and process inputs
+ prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)

+ # Generate response
+ with torch.no_grad():
+     output_ids = model.generate(**inputs, max_new_tokens=512, do_sample=False)  # Use suitable generation parameters
+ generated_ids = output_ids[0][inputs["input_ids"].shape[1]:]  # Drop the prompt tokens from the decoded output
+ response_text = processor.decode(generated_ids, skip_special_tokens=True)

+ print(f"Question: {question}\n")
+ print(f"Answer: {response_text}")
+ ```

+ ## Updates

+ * **[2025/11/11]** 🔥 Code base released.
+ * **[2025/11/08]** 🔥 Model Checkpoints and Dataset released.

+ ## Requirements

+ * Python 3.9+
+ * `transformers >= 4.49.0`
+ * `flash-attn >= 2.4.3`
+ * `vllm >= 0.7.3` (0.8.0 recommended)

+ ## Installation

+ ```bash
+ pip install -e .
+ ```

  ## Training Details

  ### Training Procedure

+ SpatialThinker models are trained with STVQA-7K, Dense Spatial Rewards + GRPO. Baseline models (Vanilla GRPO) are also trained with STVQA-7K.

+ #### Train **SpatialThinker Models** with STVQA-7K, Dense Spatial Rewards + GRPO

+ ```bash
+ bash scripts/spatialthinker_3b_grpo.sh
+ bash scripts/spatialthinker_7b_grpo.sh
+ ```

+ #### Train **Baseline Models** (Vanilla GRPO) with STVQA-7K

+ ```bash
+ bash scripts/qwen_2_5_3b_stvqa_vanilla_grpo.sh
+ bash scripts/qwen_2_5_7b_stvqa_vanilla_grpo.sh
+ ```

+ ### Merge Checkpoints to Hugging Face Format

+ ```bash
+ python3 scripts/model_merger.py --local_dir path_to_your_last_actor_checkpoint
+ ```

  ## Evaluation

+ To evaluate **SpatialThinker** or baseline models across spatial reasoning benchmarks, use the provided `evaluation/eval.py` script.
+
+ ### Basic Command Structure
+
+ ```bash
+ # --template options include `reasoning`, `no_reasoning`, `spatial_thinker`
+ python3 evaluation/eval.py \
+     --dataset <dataset_name> \
+     --template <prompt_template> \
+     --model_path <model_or_checkpoint> \
+     --cuda <gpu_id> \
+     --batch_size <num_samples_per_step> \
+     [--provider <inference_backend>] \
+     [--processor_name <tokenizer_or_processor>] \
+     [--custom_filename <output_name>]
+ ```
+
+ ### Example: Evaluate Across Multiple Benchmarks
+
+ ```bash
+ python3 evaluation/eval.py \
+     --dataset blink-spatial \
+     --template spatial_thinker \
+     --model_path OX-PIXL/SpatialThinker-3B \
+     --cuda 0 \
+     --batch_size 4
+ ```
+ ```bash
+ python3 evaluation/eval.py \
+     --dataset spatialbench \
+     --template spatial_thinker \
+     --model_path OX-PIXL/SpatialThinker-3B \
+     --cuda 0 \
+     --batch_size 2
+ ```
+
+ ### Example: Evaluate Using an API Provider (OpenAI / Anthropic)
+
+ ```bash
+ python3 evaluation/eval.py \
+     --dataset stvqa \
+     --template reasoning \
+     --model_path gpt-4o-2024-05-13 \
+     --provider openai \
+     --batch_size 1
+ ```
+ ```bash
+ python3 evaluation/eval.py \
+     --dataset stvqa \
+     --template reasoning \
+     --model_path claude-3-5-sonnet \
+     --provider anthropic \
+     --batch_size 1
+ ```
+
+ ### Supported Evaluation Datasets
+ `cv-bench`, `cv-bench-2D`, `cv-bench-3D`, `blink-spatial`, `blink-depth`, `blink-object`,
+ `blink-counting`, `blink-multi-view`, `blink-jigsaw`, `realworld_qa`, `spatialbench`, `mmvp`, `3dsrbench`,
+ `lego`, `spatialreasoner`, `robospatial`, `robospatial_rgb`, `stvqa`, `hallusionbench`.
+
+ ## Citation
+
+ If you find this repository useful in your project, please consider giving a ⭐ and citing:
+
+ ```bibtex
+ @misc{batra2025spatialthinkerreinforcing3dreasoning,
+   title={SpatialThinker: Reinforcing 3D Reasoning in Multimodal LLMs via Spatial Rewards},
+   author={Hunar Batra and Haoqin Tu and Hardy Chen and Yuanze Lin and Cihang Xie and Ronald Clark},
+   year={2025},
+   eprint={2511.07403},
+   archivePrefix={arXiv},
+   primaryClass={cs.CV},
+   url={https://arxiv.org/abs/2511.07403},
+ }
+ ```
+
+ ## Acknowledgements
+ This project builds upon the following open-source frameworks and works:
+ - [**EasyR1**](https://github.com/hiyouga/EasyR1) — An efficient, scalable, multi-modality RL training framework based on veRL
+ - [**LLaMA-Factory**](https://github.com/hunarbatra/LLaMA-Factory) — Unified efficient fine-tuning of 100+ LLMs & VLMs
+ - [**Qwen2.5-VL**](https://arxiv.org/abs/2502.13923) — Multimodal LLM series from the Qwen family