download_size: 305275
dataset_size: 2468355
---

# MARBLE: A Hard Benchmark for Multimodal Spatial Reasoning and Planning

[**🌐 Homepage**](https://marble-benchmark.github.io) | [**📖 Paper**](https://arxiv.org/abs/2506.22992) | [**🤗 Dataset**](https://huggingface.co/datasets/mrble/MARBLE) | [**🔗 Code**](https://github.com/eth-medical-ai-lab/multimodal-reasoning-bench)

## Introduction

MARBLE is a challenging multimodal reasoning benchmark designed to scrutinize the ability of multimodal language models (MLLMs) to reason carefully, step by step, through complex multimodal problems and environments. MARBLE is composed of two highly challenging tasks, M-PORTAL and M-CUBE, that require crafting and understanding multi-step plans under spatial, visual, and physical constraints. We find that current MLLMs perform poorly on MARBLE: all 12 advanced models we evaluate obtain near-random performance on M-PORTAL and 0% accuracy on M-CUBE. Only on simplified subtasks do some models outperform the random baseline, indicating that complex reasoning remains a challenge for existing MLLMs. Moreover, we show that perception remains a bottleneck: MLLMs occasionally fail to extract information from the visual inputs. By shedding light on these limitations, we hope that MARBLE will spur the development of the next generation of models able to reason and plan across many multimodal reasoning steps.



## Dataset Details

The benchmark consists of two datasets, M-PORTAL and M-CUBE, each of which contains two subtasks (`portal_binary` and `portal_blanks` for M-PORTAL; `cube` and `cube_easy` for M-CUBE). In addition, M-CUBE contains a simple perception task, `cube_perception`.

- M-PORTAL: multi-step spatial-planning puzzles modelled on levels from Portal 2.
  - `map_name`: Portal 2 map name.
  - `images`: images for each map ([images.zip](https://huggingface.co/datasets/mrble/MARBLE/blob/main/images.zip)).
  - `system_prompt` and `user_prompt`: instructions for the problem.
  - `answer`: the solution.

- M-CUBE: 3D cube assemblies built from six jigsaw pieces, inspired by Happy Cube puzzles.
  - `image`: image of the six jigsaw pieces.
  - `face_arrays`: the six jigsaw pieces converted to binary arrays (0 = gap, 1 = bump).
  - `question`: instructions for the Happy Cube puzzle.
  - `reference_solution`: one of the valid solutions.

## Evaluation

Please refer to [**🔗 Code**](https://github.com/eth-medical-ai-lab/multimodal-reasoning-bench) for evaluation instructions.

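Plan-correctness on M-PORTAL is reported as an F1 percentage; the exact protocol is defined in the linked code repository. Purely as an illustration of the metric, a minimal binary F1 computation looks like this (the function below is a generic sketch, not the MARBLE evaluation code):

```python
def f1_score(y_true, y_pred):
    """Binary F1: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0  # no true positives means precision or recall is zero
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy example: 2 true positives, 1 false positive, 1 false negative.
print(f1_score([1, 1, 1, 0, 0], [1, 1, 0, 1, 0]))  # 0.666...
```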
## Overall Results

Performance on M-PORTAL:

| Model | Plan-correctness (F1 %) | Fill-the-blanks (Acc %) |
| ------------------ | ----------------------- | ----------------------- |
| GPT-o3 | 6.6 | 17.6 |
| Gemini-2.5-pro | 4.7 | 16.1 |
| DeepSeek-R1-0528\* | 0.0 | 8.4 |
| Claude-3.7-Sonnet | 6.3 | 6.8 |
| DeepSeek-R1\* | 6.1 | 5.5 |
| Seed1.5-VL | 7.6 | 3.5 |
| GPT-o4-mini | 0.0 | 3.1 |
| GPT-4o | 6.5 | 0.4 |
| Llama-4-Scout | 6.5 | 0.2 |
| Qwen2.5-VL-72B | 6.6 | 0.2 |
| InternVL3-78B | 6.4 | 0.0 |
| Qwen3-235B-A22B\* | 0.0 | 0.0 |
| *Random* | *6.1* | *3e-3* |

Performance on M-CUBE:

| Model | CUBE (Acc %) | CUBE-easy (Acc %) |
| ------------------ | ------------ | ----------------- |
| GPT-o3 | 0.0 | 72.0 |
| GPT-o4-mini | 0.0 | 16.0 |
| DeepSeek-R1\* | 0.0 | 14.0 |
| Gemini-2.5-pro | 0.0 | 11.0 |
| DeepSeek-R1-0528\* | 0.0 | 8.0 |
| Claude-3.7-Sonnet | 0.0 | 7.4 |
| InternVL3-78B | 0.0 | 2.8 |
| Seed1.5-VL | 0.0 | 2.0 |
| GPT-4o | 0.0 | 2.0 |
| Qwen2.5-VL-72B | 0.0 | 2.0 |
| Llama-4-Scout | 0.0 | 1.6 |
| Qwen3-235B-A22B\* | 0.0 | 0.3 |
| *Random* | *1e-5* | *3.1* |

## Contact

- Yulun Jiang: [email protected]

## BibTeX

```bibtex
@article{jiang2025marble,
  title={MARBLE: A Hard Benchmark for Multimodal Spatial Reasoning and Planning},
  author={Jiang, Yulun and Chai, Yekun and Brbi\'c, Maria and Moor, Michael},
  journal={arXiv preprint arXiv:2506.22992},
  year={2025},
  url={https://arxiv.org/abs/2506.22992}
}
```