---
language:
- en
license: cc-by-nc-4.0
size_categories:
- 1K<n<10K
task_categories:
- image-text-to-text
tags:
- multimodality
- reasoning
configs:
- config_name: cube
  data_files:
  - split: train
    path: cube/train-*
- config_name: cube_easy
  data_files:
  - split: train
    path: cube_easy/train-*
- config_name: cube_perception
  data_files:
  - split: train
    path: cube_perception/train-*
- config_name: portal_binary
  data_files:
  - split: train
    path: portal_binary/train-*
- config_name: portal_blanks
  data_files:
  - split: train
    path: portal_blanks/train-*
dataset_info:
- config_name: cube
  features:
  - name: image
    dtype: image
  - name: face_arrays
    sequence:
      sequence:
        sequence: int64
  - name: question
    dtype: string
  - name: reference_solution
    dtype: string
  splits:
  - name: train
    num_bytes: 274367533.0
    num_examples: 1000
  download_size: 270371502
  dataset_size: 274367533.0
- config_name: cube_easy
  features:
  - name: question
    dtype: string
  - name: face_arrays
    sequence:
      sequence:
        sequence: int64
  - name: reference_solution
    dtype: string
  splits:
  - name: train
    num_bytes: 3940912
    num_examples: 1000
  download_size: 394861
  dataset_size: 3940912
- config_name: cube_perception
  features:
  - name: image
    dtype: image
  - name: question
    dtype: string
  - name: answer
    sequence:
      sequence: int64
  splits:
  - name: train
    num_bytes: 7924582.0
    num_examples: 200
  download_size: 7775024
  dataset_size: 7924582.0
- config_name: portal_binary
  features:
  - name: map_name
    dtype: string
  - name: images
    sequence: string
  - name: system_prompt
    dtype: string
  - name: user_prompt
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: train
    num_bytes: 2051248
    num_examples: 512
  download_size: 160438
  dataset_size: 2051248
- config_name: portal_blanks
  features:
  - name: map_name
    dtype: string
  - name: images
    sequence: string
  - name: system_prompt
    dtype: string
  - name: user_prompt
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: train
    num_bytes: 2468355
    num_examples: 512
  download_size: 305275
  dataset_size: 2468355
---
# MARBLE: A Hard Benchmark for Multimodal Spatial Reasoning and Planning
[**🌐 Homepage**](https://marble-benchmark.github.io) | [**📖 Paper**](https://arxiv.org/abs/2506.22992) | [**🤗 Dataset**](https://huggingface.co/datasets/mrble/MARBLE) | [**🔗 Code**](https://github.com/eth-medical-ai-lab/multimodal-reasoning-bench)
## Introduction
MARBLE is a challenging multimodal reasoning benchmark designed to scrutinize multimodal language models (MLLMs) on their ability to reason carefully, step by step, through complex multimodal problems and environments. MARBLE comprises two highly challenging tasks, M-PORTAL and M-CUBE, that require crafting and understanding multi-step plans under spatial, visual, and physical constraints. We find that current MLLMs perform poorly on MARBLE: all 12 advanced models evaluated obtain near-random performance on M-PORTAL and 0% accuracy on M-CUBE. Only on simplified subtasks do some models outperform the random baseline, indicating that complex reasoning remains a challenge for existing MLLMs. Moreover, we show that perception is still a bottleneck: MLLMs occasionally fail to extract information from the visual inputs. By shedding light on these limitations, we hope MARBLE will spur the development of the next generation of models that can reason and plan across many multimodal reasoning steps.

## Dataset Details
The benchmark consists of two datasets, M-PORTAL and M-CUBE, each with two subtasks (`portal_binary` and `portal_blanks` for M-PORTAL; `cube` and `cube_easy` for M-CUBE). In addition, M-CUBE includes a simple perception task, `cube_perception`. A minimal loading example follows the field descriptions below.
- M-PORTAL: multi-step spatial-planning puzzles modelled on levels from Portal 2.
  - `map_name`: Portal 2 map name.
  - `images`: image paths for each map (the files ship in [images.zip](https://huggingface.co/datasets/mrble/MARBLE/blob/main/images.zip)).
  - `system_prompt` and `user_prompt`: instructions for the problem.
  - `answer`: the reference solution.
- M-CUBE: assembling a 3D cube from six jigsaw pieces, inspired by Happy Cube puzzles.
  - `image`: image of the six jigsaw pieces.
  - `face_arrays`: the six pieces converted to binary arrays (0 = gap, 1 = bump).
  - `question`: instructions for the Happy Cube puzzle.
  - `reference_solution`: one of the valid solutions.
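
To make the structure concrete, here is a minimal loading sketch using the Hugging Face `datasets` library. The repo id, config names, and field names are taken from the metadata above; the variable names and the specific records printed are illustrative assumptions, not part of the official tooling:

```python
from datasets import load_dataset

# Each subtask is exposed as a separate config of mrble/MARBLE.
cube = load_dataset("mrble/MARBLE", "cube", split="train")
portal = load_dataset("mrble/MARBLE", "portal_binary", split="train")

sample = cube[0]
print(sample["question"])          # Happy Cube instructions
print(len(sample["face_arrays"]))  # six pieces, each a binary 2D array (0 = gap, 1 = bump)

row = portal[0]
print(row["map_name"])  # Portal 2 map name
print(row["images"])    # image paths as strings; the files themselves ship in images.zip
```

Note that the portal configs store `images` as a list of strings rather than decoded images, so images.zip must be downloaded and unpacked separately to view the maps.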
## Evaluation
Please refer to the [**🔗 Code**](https://github.com/eth-medical-ai-lab/multimodal-reasoning-bench) repository for evaluation scripts.
## Overall Results
Performance on M-PORTAL:

| Model | Plan-correctness (F1 %) | Fill-the-blanks (Acc %) |
| --- | --- | --- |
| GPT-o3 | 6.6 | 17.6 |
| Gemini-2.5-pro | 4.7 | 16.1 |
| DeepSeek-R1-0528\* | 0.0 | 8.4 |
| Claude-3.7-Sonnet | 6.3 | 6.8 |
| DeepSeek-R1\* | 6.1 | 5.5 |
| Seed1.5-VL | 7.6 | 3.5 |
| GPT-o4-mini | 0.0 | 3.1 |
| GPT-4o | 6.5 | 0.4 |
| Llama-4-Scout | 6.5 | 0.2 |
| Qwen2.5-VL-72B | 6.6 | 0.2 |
| InternVL3-78B | 6.4 | 0.0 |
| Qwen3-235B-A22B\* | 0.0 | 0.0 |
| *Random* | *6.1* | *3e-3* |

Performance on M-CUBE:

| Model | CUBE (Acc %) | CUBE-easy (Acc %) |
| --- | --- | --- |
| GPT-o3 | 0.0 | 72.0 |
| GPT-o4-mini | 0.0 | 16.0 |
| DeepSeek-R1\* | 0.0 | 14.0 |
| Gemini-2.5-pro | 0.0 | 11.0 |
| DeepSeek-R1-0528\* | 0.0 | 8.0 |
| Claude-3.7-Sonnet | 0.0 | 7.4 |
| InternVL3-78B | 0.0 | 2.8 |
| Seed1.5-VL | 0.0 | 2.0 |
| GPT-4o | 0.0 | 2.0 |
| Qwen2.5-VL-72B | 0.0 | 2.0 |
| Llama-4-Scout | 0.0 | 1.6 |
| Qwen3-235B-A22B\* | 0.0 | 0.3 |
| *Random* | *1e-5* | *3.1* |
## Contact
- Yulun Jiang: [email protected]
## BibTeX
```bibtex
@article{jiang2025marble,
  title={MARBLE: A Hard Benchmark for Multimodal Spatial Reasoning and Planning},
  author={Jiang, Yulun and Chai, Yekun and Brbi{\'c}, Maria and Moor, Michael},
  journal={arXiv preprint arXiv:2506.22992},
  year={2025},
  url={https://arxiv.org/abs/2506.22992}
}
```