<p align="left">
  <a href="https://github.com/fudan-zvg/spar.git">
    <img alt="GitHub Code" src="https://img.shields.io/badge/Code-spar-black?&logo=github&logoColor=white" />
  </a>
  <a href="https://arxiv.org/abs/xxx">
    <img alt="arXiv" src="https://img.shields.io/badge/arXiv-spar-red?logo=arxiv" />
  </a>
  <a href="https://fudan-zvg.github.io/spar">
    <img alt="Website" src="https://img.shields.io/badge/🌎_Website-spar-blue" />
  </a>
</p>

# 🎯 SPAR-Bench-RGBD

> A depth-enhanced version of SPAR-Bench for evaluating **3D-aware spatial reasoning** in vision-language models.

**SPAR-Bench-RGBD** extends the full SPAR-Bench with additional **depth maps**, **camera intrinsics**, and **pose information**, enabling evaluation of models with geometric or 3D awareness.

The benchmark contains **7,207 manually verified QA pairs** across 20 spatial tasks and supports single-view and multi-view inputs.
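
To make the extra annotations concrete, the sketch below back-projects a depth map into a world-space point cloud with the standard pinhole model. It is a minimal illustration, not the official tooling, and the argument names (`depth`, `K`, `cam_to_world`) are placeholders rather than the dataset's actual field names.

```python
import numpy as np

def backproject_to_world(depth, K, cam_to_world):
    """Lift an (H, W) metric depth map to world-space 3D points.

    depth:        (H, W) depth values
    K:            (3, 3) camera intrinsics
    cam_to_world: (4, 4) camera-to-world pose
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Pinhole back-projection: x = (u - cx) * z / fx, y = (v - cy) * z / fy
    z = depth
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    pts = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
    # Apply the 4x4 pose to move from camera to world coordinates
    return (pts @ cam_to_world.T)[:, :3]
```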

## 📥 Load with `datasets`

```python
from datasets import load_dataset

spar_rgbd = load_dataset("jasonzhango/SPAR-Bench-RGBD")
```
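
Once loaded, the returned `DatasetDict` can be inspected with the standard `datasets` API; this snippet assumes nothing about the schema beyond what `load_dataset` returns.

```python
print(spar_rgbd)               # splits and row counts
split = next(iter(spar_rgbd))  # name of the first split
sample = spar_rgbd[split][0]   # first record of that split
print(sample.keys())           # per-sample fields
```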

## 🕹️ Evaluation

SPAR-Bench-RGBD uses the **same evaluation protocol and metrics** as the full [SPAR-Bench](https://huggingface.co/datasets/jasonzhango/SPAR-Bench).

We provide an **evaluation pipeline** in our [GitHub repository](https://github.com/hutchinsonian/spar), built on top of [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval).
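
As a rough illustration of the protocol, multiple-choice items of this kind are commonly scored by exact-match accuracy; the sketch below is a generic stand-in with hypothetical `prediction`/`answer` fields, so please use the tasks in the repository above for official numbers.

```python
def exact_match_accuracy(predictions, answers):
    """Share of predictions matching the reference answers
    after light normalization (case and whitespace)."""
    norm = lambda s: str(s).strip().lower()
    hits = sum(norm(p) == norm(a) for p, a in zip(predictions, answers))
    return hits / max(len(answers), 1)

# Hypothetical usage -- the field names are assumptions, not the schema:
# acc = exact_match_accuracy([r["prediction"] for r in results],
#                            [r["answer"] for r in results])
```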

## 📚 BibTeX

If you find this project or dataset helpful, please consider citing our paper:

```bibtex
@article{zhang2025from,
  title={From Flatland to Space: Teaching Vision-Language Models to Perceive and Reason in 3D},
  author={Zhang, Jiahui and Chen, Yurui and Zhou, Yanpeng and Xu, Yueming and Huang, Ze and Mei, Jilin and Chen, Junhui and Yuan, Yujie and Cai, Xinyue and Huang, Guowei and Quan, Xingyue and Xu, Hang and Zhang, Li},
  year={2025},
  journal={arXiv preprint arXiv:xx},
}
```