---
license: apache-2.0
task_categories:
- question-answering
language:
- en
size_categories:
- 1M<n<10M
---

# 📦 Spatial Perception And Reasoning Dataset (SPAR-7M)

> A large-scale vision-language dataset designed for **spatial perception and reasoning**.

**SPAR-7M** contains over **7 million QA pairs** across **33 diverse spatial tasks**, generated from **4,500+ richly annotated 3D indoor scenes**. It supports **single-view**, **multi-view**, and **video-based** image inputs, and features both **perception-** and **reasoning-oriented** question types.

This dataset serves as the foundation for [SPAR-Bench](https://huggingface.co/datasets/jasonzhango/SPAR-Bench), and is suitable for **pretraining**, **multitask learning**, and **spatial grounding** research.

## 📥 Download

We provide **two versions** of the dataset:

| Version | Description |
|------------------|---------------------------------------------------------------------|
| `SPAR-7M` | RGB-only images + QA annotations |
| `SPAR-7M-RGBD` | Includes **depth maps**, **camera intrinsics**, and **pose matrices** for 3D-aware training |

You can download both versions from **Hugging Face**:

```bash
# Download SPAR-7M (default)
huggingface-cli download jasonzhango/SPAR-7M --repo-type dataset

# Download SPAR-7M-RGBD (with depth and camera parameters)
huggingface-cli download jasonzhango/SPAR-7M-RGBD --repo-type dataset
```

Both datasets are split into multiple `.tar.gz` parts due to Hugging Face file size limits. After downloading all parts, extract them with:

```bash
# For SPAR-7M
cat spar-*.tar.gz | tar -xvzf -

# For SPAR-7M-RGBD
cat spar-rgbd-*.tar.gz | tar -xvzf -
```

Alternatively, if Hugging Face is not directly accessible, you can use the [hf-mirror](https://hf-mirror.com/) download script:

```bash
wget https://hf-mirror.com/hfd/hfd.sh
chmod a+x hfd.sh
export HF_ENDPOINT=https://hf-mirror.com

./hfd.sh jasonzhango/SPAR-7M --dataset
./hfd.sh jasonzhango/SPAR-7M-RGBD --dataset
```

The dataset directory structure is:

```
spar/
├── rxr/
├── scannet/
│   ├── images/
│   │   └── scene0000_00/
│   │       ├── image_color/
│   │       ├── video_color/
│   │       ├── image_depth/      # only in SPAR-7M-RGBD
│   │       ├── video_depth/      # only in SPAR-7M-RGBD
│   │       ├── pose/             # only in SPAR-7M-RGBD
│   │       ├── video_pose/       # only in SPAR-7M-RGBD
│   │       ├── intrinsic/        # only in SPAR-7M-RGBD
│   │       └── video_idx.txt
│   └── qa_jsonl/
│       ├── train/
│       │   ├── depth_prediction_oo/
│       │   │   ├── fill/
│       │   │   │   └── fill_76837.jsonl
│       │   │   ├── select/
│       │   │   └── sentence/
│       │   ├── obj_spatial_relation_oc/
│       │   └── spatial_imagination_oo_mv/
│       └── val/
├── scannetpp/
└── structured3d/
```

Each QA task (e.g., `depth_prediction_oc`, `spatial_relation_oo_mv`) is organized by **task type**, with subfolders for the different **answer formats**:

- `fill/` — numerical or short descriptive answers
- `select/` — multiple choice
- `sentence/` — natural-language answers

A minimal sketch for reading these files (and the RGBD camera parameters) is included at the end of this card.

## 📚 Bibtex

If you find this project or dataset helpful, please consider citing our paper:

```bibtex
@article{zhang2025from,
  title={From Flatland to Space: Teaching Vision-Language Models to Perceive and Reason in 3D},
  author={Zhang, Jiahui and Chen, Yurui and Xu, Yueming and Huang, Ze and Mei, Jilin and Chen, Junhui and Zhou, Yanpeng and Yuan, Yujie and Cai, Xinyue and Huang, Guowei and Quan, Xingyue and Xu, Hang and Zhang, Li},
  year={2025},
  journal={arXiv preprint arXiv:2503.22976},
}
```
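
## 🔍 Quick Look: Reading the Data

The QA annotations are stored as JSON Lines files, one record per line, under `qa_jsonl/<split>/<task>/<answer_format>/`. The snippet below is a minimal sketch (not an official loader) for peeking at a few records from the file shown in the directory tree above; it does not assume any particular field names inside each record, only that every line is a standalone JSON object.

```python
import json
from pathlib import Path

# Path taken from the directory tree above; adjust the root to wherever you extracted the archives.
jsonl_path = Path("spar/scannet/qa_jsonl/train/depth_prediction_oo/fill/fill_76837.jsonl")

with jsonl_path.open() as f:
    for i, line in enumerate(f):
        record = json.loads(line)      # one QA pair per line
        print(sorted(record.keys()))   # inspect which fields are available
        if i >= 2:                     # only peek at the first few records
            break
```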
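
For the RGBD version, each scene folder additionally contains `image_depth/`, `pose/`, and `intrinsic/`. The sketch below assumes depth maps are stored as 16-bit PNGs in millimeters and that pose/intrinsic files are plain-text matrices; these are assumptions, so verify them against the extracted files before using them for 3D-aware training.

```python
import numpy as np
from pathlib import Path
from PIL import Image

scene = Path("spar/scannet/images/scene0000_00")  # scene name taken from the tree above

# Assumption: 16-bit PNG depth stored in millimeters; convert to meters.
depth_path = sorted((scene / "image_depth").iterdir())[0]
depth_m = np.asarray(Image.open(depth_path), dtype=np.float32) / 1000.0

# Assumption: whitespace-separated camera pose and intrinsic matrices in plain text.
pose = np.loadtxt(sorted((scene / "pose").iterdir())[0])
K = np.loadtxt(sorted((scene / "intrinsic").iterdir())[0])

print(depth_m.shape, depth_m.max(), pose.shape, K.shape)
```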