Update README.md
README.md
CHANGED
---
license: apache-2.0
---

This repository currently hosts a demo version of our dataset to provide a preview. The full dataset is coming soon. Stay tuned!

## Dataset Description

To evaluate multi-view spatial understanding, we design three progressively challenging tasks that require integrating information across images. All tasks are formulated as four-option multiple-choice questions.

### Tasks

**Occlusion Restoration** requires understanding the relative positions between views.
Given two views of a scene, one object in the second image is masked. The model must identify the occluded object by leveraging information from both views, testing object correspondence across views.
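
For illustration, a minimal sketch of how a masked query image could be produced (the construction pipeline below notes that the occluder is a black rectangle bordered in red); the function name and the bounding-box source are assumptions, not the actual pipeline code:

```python
from PIL import Image, ImageDraw

def mask_object(image: Image.Image, bbox: tuple) -> Image.Image:
    """Cover an object's 2D bounding box with a black rectangle bordered in red.

    `bbox` is (x_min, y_min, x_max, y_max) in pixels; how the box is obtained
    (e.g. from instance segmentation) is an assumption here.
    """
    masked = image.copy()
    draw = ImageDraw.Draw(masked)
    draw.rectangle(bbox, fill="black", outline="red", width=3)
    return masked
```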

**Distance Comparison** requires intuitive spatial understanding.
The model is asked to find the object closest to a given reference object (shared across views) based on centroid distance, assessing spatial relation inference.
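
A minimal sketch of the centroid-distance criterion (the object representation below is illustrative, not the dataset's actual schema):

```python
import math

def closest_object(reference_centroid, candidates):
    """Return the candidate whose 3D centroid is nearest (Euclidean distance)
    to the reference object's centroid.

    `candidates` maps object names to (x, y, z) centroids; this layout is
    illustrative only.
    """
    return min(candidates, key=lambda name: math.dist(reference_centroid, candidates[name]))
```

For example, `closest_object((0.0, 0.9, 1.2), {"mug": (0.3, 0.9, 1.0), "sofa": (2.4, 0.5, -1.0)})` returns `"mug"`.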

**Azimuth Transfer** requires abstract spatial imagination and viewpoint-conditioned spatial reasoning.
Assuming an egocentric viewpoint from the first image while facing a reference object, the model must determine the relative direction of a second object in the second view.
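
A minimal sketch of how such a relative azimuth can be computed from ground-plane coordinates (the function name and sign convention are assumptions; the actual dataset script may differ):

```python
import math

def relative_azimuth_deg(camera_xz, reference_xz, target_xz):
    """Angle (degrees) from the camera->reference direction to the
    camera->target direction in the ground plane, wrapped to [-180, 180).
    How the sign maps to left/right depends on the simulator's coordinate
    conventions and is only illustrative here.
    """
    ref_angle = math.atan2(reference_xz[1] - camera_xz[1], reference_xz[0] - camera_xz[0])
    tgt_angle = math.atan2(target_xz[1] - camera_xz[1], target_xz[0] - camera_xz[0])
    delta = math.degrees(tgt_angle - ref_angle)
    return (delta + 180.0) % 360.0 - 180.0
```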

### Construction Pipeline

Thanks to the controllability of the simulator, we construct the dataset in a fully synthetic environment. Multiple indoor scenes are built in the simulator, where two images are captured per scene from different viewing angles. Each image pair contains both shared and exclusive objects, ensuring that some objects appear in only one of the two views.

To construct each sample, we first obtain object-level scene metadata, including the 3D positions of objects in the world coordinate system. Based on this, we identify shared and exclusive objects across views. An automated script then traverses all object candidates, applying a visibility filter that retains only objects occupying at least a minimum area ratio (`min_area_ratio`) in the image. Following this, we generate the specific tasks:

- **Occlusion Restoration**: we select a shared object and mask it in one view with a black rectangle bordered in red.
- **Distance Comparison**: we choose a shared object and use the 3D coordinates of all objects to generate a question-answer pair about which object is closest to it.
- **Azimuth Transfer**: we select two exclusive objects and use their coordinates to compute the relative azimuth, enforcing an angular separation constraint (`angle_thresh_deg` >= 15°) to ensure sufficient directional distinction.
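
As a rough illustration of the two numeric constraints above, here is a minimal sketch; the function names are hypothetical, and only the thresholds (`min_area_ratio`, `angle_thresh_deg`) come from the pipeline description:

```python
def is_visible_enough(object_pixel_area: int, image_pixel_area: int,
                      min_area_ratio: float) -> bool:
    """Visibility filter: keep an object only if its visible pixels cover at
    least `min_area_ratio` of the image."""
    return object_pixel_area / image_pixel_area >= min_area_ratio


def has_enough_separation(azimuth_a_deg: float, azimuth_b_deg: float,
                          angle_thresh_deg: float = 15.0) -> bool:
    """Angular separation constraint used when selecting Azimuth Transfer
    candidates: the two directions must differ by at least `angle_thresh_deg`."""
    diff = abs(azimuth_a_deg - azimuth_b_deg) % 360.0
    return min(diff, 360.0 - diff) >= angle_thresh_deg
```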

### Dataset Statistics

Our benchmark contains over 38.2k question-answer pairs spanning more than 5,000 unique 3D scenes, with source imagery from the validation sets of AI2-THOR. In our experiments, we use 30k samples for training and 8.2k for evaluation.