Enhance model card: Metadata, links, and usage example
This PR significantly improves the model card for **SpatialThinker-7B** by adding crucial metadata, relevant external links, and a practical usage example.
Specifically, it addresses the following:
- **Adds metadata**: Sets `license: apache-2.0`, `library_name: transformers` (enabling automated code snippets), and `pipeline_tag: image-text-to-text` (improving discoverability for multimodal tasks).
- **Updates content**: Replaces placeholder text with the paper's abstract, a detailed model description, and relevant sections from the GitHub README (Updates, Requirements, Installation, Training, Evaluation, Acknowledgements).
- **Includes links**: Adds direct links to the Hugging Face paper page ([SpatialThinker: Reinforcing 3D Reasoning in Multimodal LLMs via Spatial Rewards](https://huggingface.co/papers/2511.07403)), the project page (`https://hunarbatra.com/SpatialThinker/`), and the GitHub repository (`https://github.com/hunarbatra/SpatialThinker`).
- **Provides a usage example**: Adds a clear Python code snippet demonstrating how to load and use the model with the `transformers` library for image-text inference, following common `transformers` patterns for Qwen2.5-VL models.
These enhancements will make the model more accessible, discoverable, and easier to use for the Hugging Face community.

The previous model card was the auto-generated template: YAML front matter listing `base_model: Qwen/Qwen2.5-VL-7B-Instruct` and `datasets: OX-PIXL/STVQA-7K`, followed by the standard template sections left as `[More Information Needed]` placeholders. The updated model card reads as follows:

---
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
datasets:
- OX-PIXL/STVQA-7K
license: apache-2.0
library_name: transformers
pipeline_tag: image-text-to-text
---

# SpatialThinker: Reinforcing 3D Reasoning in Multimodal LLMs via Spatial Rewards

<p align="center">
  <a href="https://huggingface.co/papers/2511.07403">
    <img src="https://img.shields.io/badge/Paper-2511.07403-b31b1b.svg" alt="Paper">
  </a>
  <a href="https://hunarbatra.com/SpatialThinker">
    <img src="https://img.shields.io/badge/🌐%20Project%20Page-blue.svg" alt="Project Page">
  </a>
  <a href="https://github.com/hunarbatra/SpatialThinker">
    <img src="https://img.shields.io/badge/GitHub%20Repo-black?logo=github" alt="GitHub Repo">
  </a>
  <a href="https://huggingface.co/collections/OX-PIXL/spatialthinker">
    <img src="https://img.shields.io/badge/🤗%20Models%20%26%20Dataset-orange.svg" alt="Hugging Face Models">
  </a>
</p>

## Model Description

Multimodal large language models (MLLMs) have achieved remarkable progress in vision–language tasks, but they continue to struggle with spatial understanding. Existing spatial MLLMs often rely on explicit 3D inputs or architecture-specific modifications, and remain constrained by large-scale datasets or sparse supervision. To address these limitations, we introduce **SpatialThinker**, a 3D-aware MLLM trained with reinforcement learning (RL) to integrate structured spatial grounding with multi-step reasoning. The model simulates human-like spatial perception by constructing a scene graph of task-relevant objects and spatial relations, and reasoning towards an answer via dense spatial rewards.

**SpatialThinker** consists of two key contributions:

1. A data synthesis pipeline that generates **STVQA-7K**, a high-quality spatial VQA dataset.
2. Online RL with a multi-objective dense spatial reward enforcing spatial grounding.

**SpatialThinker-7B** outperforms supervised fine-tuning and the sparse RL baseline on spatial understanding and real-world VQA benchmarks, nearly doubling the base-model gain compared to sparse RL, and surpassing GPT-4o. These results showcase the effectiveness of combining spatial supervision with reward-aligned reasoning in enabling robust 3D spatial understanding with limited data and advancing MLLMs towards human-level visual reasoning.

<p align="center">
  <img src="https://github.com/hunarbatra/SpatialThinker/raw/main/assets/spatialthinker.jpg" width="60%" alt="SpatialThinker Overview">
</p>

## Model Details

* **Developed by:** Hunar Batra, Haoqin Tu, Hardy Chen, Yuanze Lin, Cihang Xie, Ronald Clark
* **Model type:** 3D-aware Multimodal Large Language Model (MLLM)
* **Language(s) (NLP):** English
* **License:** Apache-2.0
* **Finetuned from model:** [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct)

### Model Sources

* **Repository:** [https://github.com/hunarbatra/SpatialThinker](https://github.com/hunarbatra/SpatialThinker)
* **Paper:** [https://huggingface.co/papers/2511.07403](https://huggingface.co/papers/2511.07403)
* **Project Page:** [https://hunarbatra.com/SpatialThinker/](https://hunarbatra.com/SpatialThinker/)

## How to Get Started with the Model

This model can be loaded and used directly with the Hugging Face `transformers` library.

First, ensure you have the necessary dependencies installed:

```bash
pip install "transformers>=4.49.0"
pip install "flash-attn>=2.4.3" "vllm>=0.7.3"  # vllm 0.8.0 recommended
```

Then, you can use the following Python code snippet for inference:

```python
import torch
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from PIL import Image
import requests
from io import BytesIO

# Load model and processor
model_id = "OX-PIXL/SpatialThinker-7B"
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bfloat16 for better performance on compatible GPUs
    device_map="auto",
).eval()  # set model to evaluation mode

# Example image (replace with your own image path or URL)
image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
response = requests.get(image_url)
image = Image.open(BytesIO(response.content)).convert("RGB")

# Define a spatial reasoning question
question = "What are the spatial relationships between the car, the road, and the trees?"

# Construct chat messages
messages = [
    {"role": "user", "content": [
        {"type": "image", "image": image},
        {"type": "text", "text": question},
    ]}
]

# Apply chat template and process inputs
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

# Generate a response and decode only the newly generated tokens
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=512, do_sample=False)
generated_ids = output_ids[:, inputs["input_ids"].shape[1]:]
response_text = processor.decode(generated_ids[0], skip_special_tokens=True)

print(f"Question: {question}\n")
print(f"Answer: {response_text}")
```

## Updates

* **[2025/11/11]** 🔥 Code base released.
* **[2025/11/08]** 🔥 Model Checkpoints and Dataset released.

## Requirements

* Python 3.9+
* `transformers >= 4.49.0`
* `flash-attn >= 2.4.3`
* `vllm >= 0.7.3` (0.8.0 recommended)

## Installation

```bash
pip install -e .
```
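
The editable install above assumes you are inside a local checkout of the repository; a minimal sketch of the full flow (repository URL taken from the links above):

```bash
git clone https://github.com/hunarbatra/SpatialThinker
cd SpatialThinker
pip install -e .
```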

## Training Details

### Training Procedure

SpatialThinker models are trained on STVQA-7K with dense spatial rewards and GRPO; the baseline models are trained on the same data with vanilla GRPO (sparse rewards).
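
For intuition only, the snippet below illustrates what a multi-objective reward of this kind looks like: several per-sample reward terms (e.g. output format, answer correctness, spatial grounding) are combined into a single scalar that is fed to GRPO. The component names and weights here are hypothetical stand-ins, not the reward implementation from the SpatialThinker repository.

```python
from typing import Dict

# Illustrative sketch only: component names and weights are hypothetical stand-ins.
def combine_rewards(components: Dict[str, float], weights: Dict[str, float]) -> float:
    """Weighted sum of per-sample reward terms, yielding one scalar for GRPO."""
    return sum(weights[name] * value for name, value in components.items())

# Example per-sample scores: well-formed output, correct answer, partial spatial grounding.
components = {"format": 1.0, "answer": 1.0, "spatial_grounding": 0.6}
weights = {"format": 0.2, "answer": 0.5, "spatial_grounding": 0.3}
print(combine_rewards(components, weights))  # -> 0.88
```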

#### Train **SpatialThinker Models** with STVQA-7K, Dense Spatial Rewards + GRPO

```bash
bash scripts/spatialthinker_3b_grpo.sh
bash scripts/spatialthinker_7b_grpo.sh
```

#### Train **Baseline Models** (Vanilla GRPO) with STVQA-7K

```bash
bash scripts/qwen_2_5_3b_stvqa_vanilla_grpo.sh
bash scripts/qwen_2_5_7b_stvqa_vanilla_grpo.sh
```

### Merge Checkpoints to Hugging Face Format

```bash
python3 scripts/model_merger.py --local_dir path_to_your_last_actor_checkpoint
```
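
As a quick sanity check, the merged directory can presumably be loaded like any standard `transformers` checkpoint (a sketch; the path below is a hypothetical placeholder, and the assumption is that `model_merger.py` writes a regular Hugging Face model directory):

```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

merged_dir = "path_to_merged_hf_checkpoint"  # hypothetical placeholder path

processor = AutoProcessor.from_pretrained(merged_dir)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(merged_dir, device_map="auto")
print(model.config.architectures)  # expect a Qwen2.5-VL architecture if the merge succeeded
```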

## Evaluation

To evaluate **SpatialThinker** or baseline models across spatial reasoning benchmarks, use the provided `evaluation/eval.py` script.

### Basic Command Structure

```bash
python3 evaluation/eval.py \
    --dataset <dataset_name> \
    --template <prompt_template> \
    --model_path <model_or_checkpoint> \
    --cuda <gpu_id> \
    --batch_size <num_samples_per_step> \
    [--provider <inference_backend>] \
    [--processor_name <tokenizer_or_processor>] \
    [--custom_filename <output_name>]
```

`<prompt_template>` is a prompt template name, e.g. `reasoning`, `no_reasoning`, or `spatial_thinker`.

### Example: Evaluate Across Multiple Benchmarks

```bash
python3 evaluation/eval.py \
    --dataset blink-spatial \
    --template spatial_thinker \
    --model_path OX-PIXL/SpatialThinker-3B \
    --cuda 0 \
    --batch_size 4
```

```bash
python3 evaluation/eval.py \
    --dataset spatialbench \
    --template spatial_thinker \
    --model_path OX-PIXL/SpatialThinker-3B \
    --cuda 0 \
    --batch_size 2
```

### Example: Evaluate Using an API Provider (OpenAI / Anthropic)

```bash
python3 evaluation/eval.py \
    --dataset stvqa \
    --template reasoning \
    --model_path gpt-4o-2024-05-13 \
    --provider openai \
    --batch_size 1
```

```bash
python3 evaluation/eval.py \
    --dataset stvqa \
    --template reasoning \
    --model_path claude-3-5-sonnet \
    --provider anthropic \
    --batch_size 1
```
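
These API-backed runs presumably require provider credentials in the environment; the variable names below are the standard ones read by the OpenAI and Anthropic SDKs (an assumption; check the repository if authentication fails):

```bash
export OPENAI_API_KEY="sk-..."          # for --provider openai
export ANTHROPIC_API_KEY="sk-ant-..."   # for --provider anthropic
```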

### Supported Evaluation Datasets

`cv-bench`, `cv-bench-2D`, `cv-bench-3D`, `blink-spatial`, `blink-depth`, `blink-object`,
`blink-counting`, `blink-multi-view`, `blink-jigsaw`, `realworld_qa`, `spatialbench`, `mmvp`, `3dsrbench`,
`lego`, `spatialreasoner`, `robospatial`, `robospatial_rgb`, `stvqa`, `hallusionbench`.
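
To sweep a single checkpoint over several of these benchmarks, a simple shell loop works (a sketch using dataset names from the list above and the flags documented earlier; adjust the GPU id and batch size to your hardware):

```bash
for ds in cv-bench blink-spatial blink-depth spatialbench realworld_qa 3dsrbench; do
    python3 evaluation/eval.py \
        --dataset "$ds" \
        --template spatial_thinker \
        --model_path OX-PIXL/SpatialThinker-7B \
        --cuda 0 \
        --batch_size 4
done
```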

## Citation

If you find this repository useful in your project, please consider giving a ⭐ and citing:

```bibtex
@misc{batra2025spatialthinkerreinforcing3dreasoning,
      title={SpatialThinker: Reinforcing 3D Reasoning in Multimodal LLMs via Spatial Rewards},
      author={Hunar Batra and Haoqin Tu and Hardy Chen and Yuanze Lin and Cihang Xie and Ronald Clark},
      year={2025},
      eprint={2511.07403},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2511.07403},
}
```

## Acknowledgements

This project builds upon the following open-source frameworks and works:

- [**EasyR1**](https://github.com/hiyouga/EasyR1) — An efficient, scalable, multi-modality RL training framework based on veRL
- [**LLaMA-Factory**](https://github.com/hunarbatra/LLaMA-Factory) — Unified efficient fine-tuning of 100+ LLMs & VLMs
- [**Qwen2.5-VL**](https://arxiv.org/abs/2502.13923) — Multimodal LLM series from the Qwen family