WanyueZhang committed
Commit 23151ae (verified) · 1 Parent(s): dc663bc

Update README.md

Files changed (1):
  1. README.md +106 -19
README.md CHANGED
@@ -1,40 +1,127 @@
  ---
  license: apache-2.0
  ---

- # MulSeT (Multi-view Spatial Understanding Tasks)

- Paper: [Why Do MLLMs Struggle with Spatial Understanding? A Systematic Analysis from Data to Architecture](https://arxiv.org/abs/2509.02359)

- Code: https://github.com/WanyueZhang-ai/spatial-understanding
-
- This repository currently hosts a demo version of our dataset to provide a preview. The full dataset is coming soon. Stay tuned!

  ![Dataset Overview](MulSeT_dataset_overview.png)
- *A high-level overview of MulSeT. [Download the full PDF here](MulSeT_dataset_overview.pdf).*

- ## Dataset Description

- To evaluate multi-view spatial understanding, we designed three progressively challenging tasks that require integrating information across images. All tasks are formulated as four-option multiple-choice questions.

- ### Tasks

- **Occlusion Restoration** requires understanding the relative positions between views.
- Given two views of a scene, one object in the second image is masked. The model must identify the occluded object by leveraging information from both views, testing object correspondence in different views.

- **Distance Comparison** requires intuitive spatial understanding.
- The model is asked to find the object closest to a given reference object (shared across views) based on centroid distance, assessing spatial relation inference.

- **Azimuth Transfer** requires abstract spatial imagination and viewpoint-conditioned spatial reasoning.
- Assuming an egocentric viewpoint from the first image while facing a reference object, the model must determine the relative direction of a second object in the second view.

  ### Construction Pipeline

- Thanks to the controllability of the simulator, we construct the dataset in a fully synthetic environment. Multiple indoor scenes are built in the simulator, where two images are captured per scene from different viewing angles. Each image pair contains both shared and exclusive objects, ensuring that some objects appear in only one of the two views.

- To construct each sample, we first obtain object-level scene metadata, including the 3D positions of objects in the world coordinate system. Based on this, we identify shared and exclusive objects across views. An automated script then traverses all object candidates, applying a visibility filter that retains only objects occupying at least a minimum area ratio (`min_area_ratio`) in the image. Following this, we generate the specific tasks. For **Occlusion Restoration**, we select a shared object and mask it in one view with a black rectangle bordered in red. For **Distance Comparison**, we choose a shared object and use the 3D coordinates of all objects to generate a question-answer pair about which is closest. For **Azimuth Transfer**, we select two exclusive objects and use their coordinates to compute the relative azimuth, enforcing an angular separation constraint (`angle_thresh_deg` >= 15°) to ensure sufficient directional distinction.

- ### Dataset Statistics

- Our benchmark contains over 38.2k question-answer pairs spanning more than 5,000 unique 3D scenes, with source imagery from the validation sets of AI2-THOR and replica_cad. For the following experiments, we use 30k data for training and 8.2k data for evaluation.
 
  ---
  license: apache-2.0
+ language:
+ - en
+ tags:
+ - computer-vision
+ - visual-question-answering
+ - multi-modal
+ - spatial-reasoning
+ - synthetic-data
+ pretty_name: MulSeT
+ size_categories:
+ - 100k<n<1M
  ---
 
+ # MulSeT: A Benchmark for Multi-view Spatial Understanding Tasks

+ **Paper:** [Why Do MLLMs Struggle with Spatial Understanding? A Systematic Analysis from Data to Architecture](https://arxiv.org/abs/2509.02359)

+ **Code:** [https://github.com/WanyueZhang-ai/spatial-understanding](https://github.com/WanyueZhang-ai/spatial-understanding)

  ![Dataset Overview](MulSeT_dataset_overview.png)
+ *A high-level overview of the MulSeT benchmark. The dataset challenges models to integrate information from two distinct viewpoints of a 3D scene to answer spatial reasoning questions.*
+
+ ## Dataset Summary
+
+ **MulSeT** is a benchmark designed to evaluate the multi-view spatial understanding capabilities of Multimodal Large Language Models (MLLMs). The core challenge lies in integrating visual information from two different viewpoints of a 3D scene to answer spatial questions.
+
+ All tasks are formulated as four-option multiple-choice questions, requiring reasoning beyond simple object recognition. The dataset is fully synthetic, with scenes drawn from **AI2-THOR** and **replica_cad**, which allows precise control over scene composition, object placement, and viewpoint selection.
+
+ ## Supported Tasks
+
+ The dataset is structured around three progressively challenging tasks:
+
+ * **1. Object Occlusion Restoration:** This task tests a model's ability to establish **object correspondence** across views. Given two images of a scene, an object is masked in the second image. The model must identify the occluded object by leveraging the context from the first, unoccluded view.
+
+ * **2. Relative Distance Comparison:** This task assesses a model's capacity for **spatial relation inference**. The model is presented with a reference object and must determine which of the other objects in the scene is closest to it, based on their 3D centroid distances.
+
+ * **3. Egocentric Azimuth Transfer:** This is the most challenging task, requiring **abstract spatial imagination and viewpoint transformation**. The model must adopt an egocentric perspective from the first image, facing a reference object, and then determine the relative direction (e.g., "left front", "right rear") of a target object visible in the second image.
+
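+ Because every task shares the same four-option format with a single-letter answer, predictions can be scored uniformly. The snippet below is a minimal scoring sketch, not necessarily the evaluation script used in the paper; it assumes the model replies with an option letter, as the question text requests.
+
+ ```python
+ import re
+
+ def extract_choice(response: str) -> str | None:
+     """Pull the first standalone option letter (A-D) out of a model response."""
+     match = re.search(r"\b([A-D])\b", response.strip().upper())
+     return match.group(1) if match else None
+
+ def accuracy(predictions: list[str], references: list[str]) -> float:
+     """Fraction of predictions whose extracted letter matches the gold `answer` field."""
+     correct = sum(
+         extract_choice(pred) == ref.strip().upper()
+         for pred, ref in zip(predictions, references)
+     )
+     return correct / len(references) if references else 0.0
+
+ # Example: two hypothetical model responses scored against gold answers.
+ print(accuracy(["B. Fridge", "The answer is C"], ["B", "D"]))  # 0.5
+ ```
+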
+ ## Dataset Structure
+
+ The dataset is divided into `train` and `test` splits, with each split containing data for the three tasks in separate `.jsonl` files.
+
+ ### Data Splits
+
+ The dataset contains a total of **106,794** question-answer pairs (98,620 train / 8,174 test).
+
+ | Split | Task | Number of Samples |
+ | :---- | :---------------------------------------------------------------- | ----------------: |
+ |       | Object Occlusion Restoration (`merged_mask_questions.jsonl`)       | 47,621 |
+ | Train | Relative Distance Comparison (`merged_distance_questions.jsonl`)   | 15,907 |
+ |       | Egocentric Azimuth Transfer (`merged_direction_questions.jsonl`)   | 35,092 |
+ |       | **Train Total**                                                    | **98,620** |
+ |       |                                                                    | |
+ |       | Object Occlusion Restoration (`merged_mask_questions.jsonl`)       | 3,453 |
+ | Test  | Relative Distance Comparison (`merged_distance_questions.jsonl`)   | 2,930 |
+ |       | Egocentric Azimuth Transfer (`merged_direction_questions.jsonl`)   | 1,791 |
+ |       | **Test Total**                                                     | **8,174** |
+
+ *The test set is constructed from 4 distinct scenes, each with 20 different object arrangement stages, ensuring a robust evaluation of model generalization.*
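+
+ The split files can be read with the Python standard library alone. Below is a minimal loading sketch; the file names are the ones listed in the table above, while the `train/` and `test/` directory layout and the local root path are assumptions about how a downloaded copy is organized.
+
+ ```python
+ import json
+ from pathlib import Path
+
+ # One file per task and split, named as in the table above.
+ TASK_FILES = {
+     "occlusion_restoration": "merged_mask_questions.jsonl",
+     "distance_comparison": "merged_distance_questions.jsonl",
+     "azimuth_transfer": "merged_direction_questions.jsonl",
+ }
+
+ def load_split(root: Path, split: str) -> dict[str, list[dict]]:
+     """Read every task file of one split into a dict of record lists."""
+     data = {}
+     for task, filename in TASK_FILES.items():
+         path = root / split / filename  # assumed layout: <root>/<split>/<file>
+         with path.open(encoding="utf-8") as f:
+             data[task] = [json.loads(line) for line in f if line.strip()]
+     return data
+
+ test = load_split(Path("MulSeT"), "test")  # "MulSeT" is an assumed local path
+ for task, records in test.items():
+     print(f"{task}: {len(records)} samples")
+ ```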
 
+ ### Data Fields

+ Each line in the `.jsonl` files represents a single data instance. While the exact fields vary slightly by task, the core structure is consistent.

+ **Common Fields:**
+ * `id`: (int) A unique identifier for the sample.
+ * `query`: (string) The multiple-choice question presented to the model.
+ * `answer`: (string) The correct option's letter (e.g., "A", "B").
+ * `image_1`: (string) The file path to the first image.
+ * `image_2`: (string) The file path to the second image.
+ * `options`: (list of strings/dicts) The list of possible choices for the question.
+ * `folder_dir`: (string) The directory containing the associated images for this sample.

+ **Task-Specific Fields:**
+ * **Occlusion Restoration:**
+   * `object_name`: The name of the masked object.
+   * `mask_image`: The file path to the second image with the object masked.
+ * **Azimuth Transfer:**
+   * `src_object`: The reference object for establishing the viewpoint direction.
+   * `tgt_object`: The target object whose relative direction is being queried.
+   * `src_com`, `tgt_com`, `agent_pos`: Dictionaries containing the 3D world coordinates (x, y, z) of the source object, target object, and the agent's position, respectively.
 
+ ### Data Instance Example

+ Below is an example from the **Occlusion Restoration** task (`merged_mask_questions.jsonl`):

+ ```json
+ {
+   "id": 1,
+   "folder_dir": "train453/0",
+   "pair_name": "0-1:Fridge|2|1",
+   "query": "Please combine the information from the two images and answer the following question:\nWhat is the object being masked in the second image?\nNote: The masked region is indicated by the black rectangle outlined in red in the second image.\nChoose the correct option from the list below. Only respond with the corresponding letter.\n\nA. Book\nB. Fridge\nC. Chair\nD. CreditCard",
+   "answer": "B",
+   "object_name": "Fridge",
+   "image_1": "train453/0/rgb_0.png",
+   "image_2": "train453/0/rgb_1.png",
+   "mask_image": "train453/0/rgb_1_mask_Fridge|2|1.png",
+   "options": [
+     "Book",
+     "Fridge",
+     "Chair",
+     "CreditCard"
+   ]
+ }
+ ```
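+
+ To turn such a record into model input, the image paths are resolved against the image root and paired with the `query` string. The following is a minimal sketch; the `.jsonl` path, the image root, and the use of `mask_image` (rather than the unmasked `image_2`) as the second view are assumptions based on the fields above.
+
+ ```python
+ import json
+ from pathlib import Path
+
+ from PIL import Image
+
+ def load_occlusion_sample(record: dict, image_root: Path):
+     """Return the two input views plus question and gold answer for one record."""
+     # View 1 is `image_1`; for this task the second input is the masked
+     # rendering in `mask_image` (assumption: the unmasked `image_2` is not
+     # shown to the model at question time).
+     view_1 = Image.open(image_root / record["image_1"])
+     view_2 = Image.open(image_root / record["mask_image"])
+     return view_1, view_2, record["query"], record["answer"]
+
+ # Assumed paths to a local copy of the test split and its images.
+ with open("test/merged_mask_questions.jsonl", encoding="utf-8") as f:
+     record = json.loads(f.readline())
+
+ img_1, img_2, question, gold = load_occlusion_sample(record, Path("images"))
+ print(question)
+ print("Gold answer:", gold)
+ ```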
 
  ### Construction Pipeline

+ The dataset was constructed in a fully synthetic environment, with source imagery drawn from the validation sets of AI2-THOR and replica_cad. For each sample, two images of an indoor scene are captured from different camera poses. Each scene is curated to include a mix of objects visible in both views (shared) and objects visible in only one (exclusive).

+ An automated pipeline generates the tasks from object-level metadata such as 3D positions and visibility masks, producing valid and challenging question-answer pairs. For instance, in the Azimuth Transfer task, an angular separation constraint (`angle_thresh_deg` >= 15°) is enforced to ensure that the directional choices are distinct and unambiguous. This programmatic approach ensures the dataset's scale, accuracy, and diversity.
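+
+ The angular-separation check can be illustrated from the released coordinate fields (`agent_pos`, `src_com`, `tgt_com`). The sketch below is one possible formulation; the ground-plane convention (x–z plane, y up) is an assumption and may differ from the actual generation scripts.
+
+ ```python
+ import math
+
+ ANGLE_THRESH_DEG = 15.0  # corresponds to `angle_thresh_deg` in the pipeline description
+
+ def bearing_deg(agent_pos: dict, obj_com: dict) -> float:
+     """Bearing from the agent to an object's centroid on the assumed x-z ground plane."""
+     dx = obj_com["x"] - agent_pos["x"]
+     dz = obj_com["z"] - agent_pos["z"]
+     return math.degrees(math.atan2(dx, dz)) % 360.0
+
+ def angular_separation(agent_pos: dict, src_com: dict, tgt_com: dict) -> float:
+     """Smallest angle between the agent->source and agent->target directions."""
+     diff = abs(bearing_deg(agent_pos, src_com) - bearing_deg(agent_pos, tgt_com)) % 360.0
+     return min(diff, 360.0 - diff)
+
+ def is_valid_pair(agent_pos: dict, src_com: dict, tgt_com: dict) -> bool:
+     return angular_separation(agent_pos, src_com, tgt_com) >= ANGLE_THRESH_DEG
+
+ # Example with made-up coordinates in the documented {"x", "y", "z"} dictionary format.
+ agent = {"x": 0.0, "y": 0.9, "z": 0.0}
+ fridge = {"x": 1.0, "y": 0.9, "z": 2.0}
+ chair = {"x": -1.5, "y": 0.4, "z": 1.0}
+ print(is_valid_pair(agent, fridge, chair))  # True for these coordinates
+ ```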
 
+ ## Citation
+
+ If you use the MulSeT dataset in your research, please cite our paper:

+ ```bibtex
+ @misc{zhang2025mllms,
+   title={Why Do MLLMs Struggle with Spatial Understanding? A Systematic Analysis from Data to Architecture},
+   author={Wanyue Zhang and Yibin Huang and Yangbin Xu and JingJing Huang and Helu Zhi and Shuo Ren and Wang Xu and Jiajun Zhang},
+   year={2025},
+   eprint={2509.02359},
+   archivePrefix={arXiv},
+   primaryClass={cs.CV}
+ }
+ ```