MulSeT: A Benchmark for Multi-view Spatial Understanding Tasks
Paper: Why Do MLLMs Struggle with Spatial Understanding? A Systematic Analysis from Data to Architecture
Code: https://github.com/WanyueZhang-ai/spatial-understanding
A high-level overview of the MulSeT benchmark. The dataset challenges models to integrate information from two distinct viewpoints of a 3D scene to answer spatial reasoning questions.
Dataset Summary
MulSeT is a comprehensive benchmark designed to evaluate the multi-view spatial understanding capabilities of Multimodal Large Language Models (MLLMs). The core challenge lies in integrating visual information from two different viewpoints of a 3D scene to answer complex spatial questions.
All tasks are formulated as four-option multiple-choice questions, requiring models to perform sophisticated reasoning beyond simple object recognition. The dataset is synthetically generated using the AI2-THOR and replica_cad simulation environments, allowing for precise control over scene composition, object placement, and viewpoint selection.
Supported Tasks
The dataset is structured around three progressively challenging tasks:
1. Occlusion Restoration: This task tests a model's ability to establish object correspondence across views. Given two images of a scene, an object is masked in the second image. The model must identify the occluded object by leveraging the context from the first, unoccluded view.
2. Distance Comparison: This task assesses a model's capacity for spatial relation inference. The model is presented with a reference object and must determine which of the other objects in the scene is closest to it, based on their 3D centroid distances.
3. Azimuth Transfer: This is the most challenging task, requiring abstract spatial imagination and viewpoint transformation. The model must adopt an egocentric perspective from the first image, facing a reference object, and then determine the relative direction (e.g., "left front", "right rear") of a target object visible in the second image.
Dataset Structure
The dataset is divided into `train` and `test` splits, with each split containing data for the three tasks in separate `.jsonl` files.
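Because each split is just a set of `.jsonl` files, the data can be inspected without special tooling. The sketch below assumes the files have been downloaded into a local directory named `MulSeT`; the directory name and exact layout are assumptions, so adjust the paths to match your download.

```python
import json
from pathlib import Path

# Assumed local layout: MulSeT/{train,test}/merged_*_questions.jsonl
root = Path("MulSeT")

def load_jsonl(path):
    """Read a .jsonl file into a list of dicts, one record per line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

train_mask = load_jsonl(root / "train" / "merged_mask_questions.jsonl")
print(len(train_mask))              # number of Occlusion Restoration training samples
print(train_mask[0]["query"][:80])  # preview of the first question
```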
Data Splits
The dataset contains a total of 106,814 question-answer pairs. (In our experiments, we used 10,000 data points per task as the training set rather than the full training set; for testing, the entire test set was used. A total of 38,174 data points were therefore used in our paper.)
| Split | Task | Number of Samples |
|---|---|---|
| Train | Occlusion Restoration (`merged_mask_questions.jsonl`) | 47,621 |
| Train | Distance Comparison (`merged_distance_questions.jsonl`) | 15,907 |
| Train | Azimuth Transfer (`merged_direction_questions.jsonl`) | 35,092 |
| **Train Total** | | **98,620** |
| Test | Occlusion Restoration (`merged_mask_questions.jsonl`) | 3,453 |
| Test | Distance Comparison (`merged_distance_questions.jsonl`) | 2,930 |
| Test | Azimuth Transfer (`merged_direction_questions.jsonl`) | 1,791 |
| **Test Total** | | **8,174** |
The test set is constructed from 4 distinct scenes, each with 20 different object arrangement stages, ensuring a robust evaluation of model generalization.
Data Fields
Each line in the `.jsonl` files represents a single data instance. While the exact fields vary slightly by task, the core structure is consistent.
Common Fields:
- `id`: A unique identifier for the sample.
- `query`: (string) The multiple-choice question presented to the model.
- `answer`: (string) The correct option's letter (e.g., "A", "B").
- `image_1`: (string) The file path to the first image.
- `image_2`: (string) The file path to the second image.
- `options`: (list of strings/dicts) The list of possible choices for the question.
- `folder_dir`: (string) The directory containing the associated images for this sample.
Task-Specific Fields:
- Occlusion Restoration:
  - `object_name`: (string) The name of the masked object.
  - `mask_image`: (string) The file path to the second image with the object masked.
- Distance Comparison:
  - `src_object`: (string) The reference object against which distances are measured.
  - `tgt_object`: (string) The target object, i.e., the object closest to the reference.
- Azimuth Transfer:
  - `src_object`: (string) The reference object for establishing the viewpoint direction.
  - `tgt_object`: (string) The target object whose relative direction is being queried.
  - `src_com`, `tgt_com`, `agent_pos`: (dicts) The 3D world coordinates (x, y, z) of the source object, the target object, and the agent's position, respectively.
Data Instance Example
Below is an example from the Occlusion Restoration task (`merged_mask_questions.jsonl`):
```json
{
  "id": 1,
  "folder_dir": "train453/0",
  "pair_name": "0-1:Fridge|2|1",
  "query": "Please combine the information from the two images and answer the following question:\nWhat is the object being masked in the second image?\nNote: The masked region is indicated by the black rectangle outlined in red in the second image.\nChoose the correct option from the list below. Only respond with the corresponding letter.\n\nA. Book\nB. Fridge\nC. Chair\nD. CreditCard",
  "answer": "B",
  "object_name": "Fridge",
  "image_1": "train453/0/rgb_0.png",
  "image_2": "train453/0/rgb_1.png",
  "mask_image": "train453/0/rgb_1_mask_Fridge|2|1.png",
  "options": [
    "Book",
    "Fridge",
    "Chair",
    "CreditCard"
  ]
}
```
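Since the answer is always a single option letter, evaluation reduces to letter matching. The helper below is a minimal accuracy sketch, not an official protocol; the letter-extraction regex and the shape of `predictions` (a dict mapping sample `id` to the raw model output string) are assumptions.

```python
import re

def accuracy(predictions, samples):
    """Fraction of samples whose first extracted option letter matches `answer`."""
    correct = 0
    for s in samples:
        pred = predictions.get(s["id"], "")
        m = re.search(r"\b([ABCD])\b", pred)  # take the first option letter found
        if m and m.group(1) == s["answer"]:
            correct += 1
    return correct / max(len(samples), 1)
```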
Construction Pipeline
The dataset was constructed in a fully synthetic environment, with source imagery from the validation sets of AI2-THOR and replica_cad. For each sample, two images of an indoor scene are captured from different camera poses. Each scene is carefully curated to include a mix of objects visible in both views (shared) and objects visible in only one (exclusive).
An automated pipeline was developed to generate the tasks. This pipeline leverages object-level metadata, such as 3D positions and visibility masks, to create valid and challenging question-answer pairs. For instance, in the Azimuth Transfer task, an angular separation constraint (`angle_thresh_deg >= 15°`) is enforced to ensure that the directional choices are distinct and unambiguous. This programmatic approach ensures the dataset's scale, accuracy, and diversity.
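The angular constraint can be pictured as follows. This sketch mirrors the check described above but is not the released pipeline code; it assumes a y-up coordinate system with angles measured on the ground plane (x, z) from the agent's position.

```python
import math

ANGLE_THRESH_DEG = 15.0  # minimum angular separation between two candidate objects

def is_unambiguous(agent_pos, com_a, com_b):
    """Keep a candidate pair only if the two objects are separated by at least
    ANGLE_THRESH_DEG as seen from the agent (illustrative, not pipeline code)."""
    ax, az = com_a["x"] - agent_pos["x"], com_a["z"] - agent_pos["z"]
    bx, bz = com_b["x"] - agent_pos["x"], com_b["z"] - agent_pos["z"]
    dot = ax * bx + az * bz
    norm = math.hypot(ax, az) * math.hypot(bx, bz)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / (norm + 1e-9)))))
    return angle >= ANGLE_THRESH_DEG
```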
Citation
If you use the MulSeT dataset in your research, please cite our paper:
```bibtex
@misc{zhang2025mllms,
  title={Why Do MLLMs Struggle with Spatial Understanding? A Systematic Analysis from Data to Architecture},
  author={Wanyue Zhang and Yibin Huang and Yangbin Xu and JingJing Huang and Helu Zhi and Shuo Ren and Wang Xu and Jiajun Zhang},
  year={2025},
  eprint={2509.02359},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```