Commit 288209e (verified) · Parent: ef77e52 · committed by yljblues and nielsr (HF Staff)

Update dataset card: task category and license (#2)

Co-authored-by: Niels Rogge <[email protected]>
Files changed (1): README.md (+36 −39)
README.md CHANGED
@@ -1,14 +1,14 @@
 ---
-license: apache-2.0
-task_categories:
-- image-to-text
 language:
 - en
+license: cc-by-nc-4.0
+size_categories:
+- 1K<n<10K
+task_categories:
+- image-text-to-text
 tags:
 - multimodality
 - reasoning
-size_categories:
-- 1K<n<10K
 configs:
 - config_name: cube
   data_files:
@@ -118,13 +118,12 @@ dataset_info:
   dataset_size: 2468355
 ---
 
-
 # MARBLE: A Hard Benchmark for Multimodal Spatial Reasoning and Planning
 
 [**🌐 Homepage**](https://marble-benchmark.github.io) | [**📖 Paper**](https://arxiv.org/abs/2506.22992) | [**🤗 Dataset**](https://huggingface.co/datasets/mrble/MARBLE) | [**🔗 Code**](https://github.com/eth-medical-ai-lab/multimodal-reasoning-bench)
 
 ## Introduction
-MARBLE is a challenging multimodal reasoning benchmark designed to scrutinize multimodal language models (MLLMs) in their ability to carefully reason step-by-step through complex multimodal problems and environments. MARBLE is composed of two highly challenging tasks, M-Portal and M-Cube, that require the crafting and understanding of multistep plans leveraging spatial, visual, and physical constraints. We find that current MLLMs perform poorly on MARBLE&mdash;all the 12 advanced models obtain near-random performance on M-Portal and 0\% accuracy on M-Cube. Only in simplified subtasks some models outperform the random baseline, indicating that complex reasoning is still a challenge for existing MLLMs. Moreover, we show that perception remains a bottleneck, where MLLMs occasionally fail to extract information from the visual inputs. By shedding a light on the limitations of MLLMs, we hope that MARBLE will spur the development of the next generation of models with the ability to reason and plan across many, multimodal reasoning steps.
+MARBLE is a challenging multimodal reasoning benchmark designed to scrutinize the ability of multimodal language models (MLLMs) to reason carefully, step by step, through complex multimodal problems and environments. MARBLE comprises two highly challenging tasks, M-Portal and M-Cube, which require crafting and understanding multi-step plans under spatial, visual, and physical constraints. We find that current MLLMs perform poorly on MARBLE: all 12 advanced models obtain near-random performance on M-Portal and 0% accuracy on M-Cube. Only in simplified subtasks do some models outperform the random baseline, indicating that complex reasoning remains a challenge for existing MLLMs. Moreover, we show that perception remains a bottleneck: MLLMs occasionally fail to extract information from the visual inputs. By shedding light on the limitations of MLLMs, we hope that MARBLE will spur the development of the next generation of models able to reason and plan across many multimodal reasoning steps.
 
 ![Alt text](overview.png)
 
@@ -150,40 +149,38 @@ Please refer to [**🔗 Code**](https://github.com/eth-medical-ai-lab/multimodal
 
 ## Overall Results
 Performance on M-PORTAL:
-| Model | Plan-correctness (F1 %) | Fill-the-blanks (Acc %) |
-| ------------------ | ----------------------- | ----------------------- |
-| GPT-o3 | 6.6 | 17.6 |
-| Gemini-2.5-pro | 4.7 | 16.1 |
-| DeepSeek-R1-0528\* | 0.0 | 8.4 |
-| Claude-3.7-Sonnet | 6.3 | 6.8 |
-| DeepSeek-R1\* | 6.1 | 5.5 |
-| Seed1.5-VL | 7.6 | 3.5 |
-| GPT-o4-mini | 0.0 | 3.1 |
-| GPT-4o | 6.5 | 0.4 |
-| Llama-4-Scout | 6.5 | 0.2 |
-| Qwen2.5-VL-72B | 6.6 | 0.2 |
-| InternVL3-78B | 6.4 | 0.0 |
-| Qwen3-235B-A22B\* | 0.0 | 0.0 |
-| *Random* | *6.1* | *3e-3* |
+| Model | Plan-correctness (F1 %) | Fill-the-blanks (Acc %) |
+| --- | --- | --- |
+| GPT-o3 | 6.6 | 17.6 |
+| Gemini-2.5-pro | 4.7 | 16.1 |
+| DeepSeek-R1-0528\* | 0.0 | 8.4 |
+| Claude-3.7-Sonnet | 6.3 | 6.8 |
+| DeepSeek-R1\* | 6.1 | 5.5 |
+| Seed1.5-VL | 7.6 | 3.5 |
+| GPT-o4-mini | 0.0 | 3.1 |
+| GPT-4o | 6.5 | 0.4 |
+| Llama-4-Scout | 6.5 | 0.2 |
+| Qwen2.5-VL-72B | 6.6 | 0.2 |
+| InternVL3-78B | 6.4 | 0.0 |
+| Qwen3-235B-A22B\* | 0.0 | 0.0 |
+| *Random* | *6.1* | *3e-3* |
 
 Performance on M-CUBE:
-| Model | CUBE (Acc %) | CUBE-easy (Acc %) |
-| ------------------ | ------------ | ----------------- |
-| GPT-o3 | 0.0 | 72.0 |
-| GPT-o4-mini | 0.0 | 16.0 |
-| DeepSeek-R1\* | 0.0 | 14.0 |
-| Gemini-2.5-pro | 0.0 | 11.0 |
-| DeepSeek-R1-0528\* | 0.0 | 8.0 |
-| Claude-3.7-Sonnet | 0.0 | 7.4 |
-| InternVL3-78B | 0.0 | 2.8 |
-| Seed1.5-VL | 0.0 | 2.0 |
-| GPT-4o | 0.0 | 2.0 |
-| Qwen2.5-VL-72B | 0.0 | 2.0 |
-| Llama-4-Scout | 0.0 | 1.6 |
-| Qwen3-235B-A22B\* | 0.0 | 0.3 |
-| *Random* | *1e-5* | *3.1* |
-
-
+| Model | CUBE (Acc %) | CUBE-easy (Acc %) |
+| --- | --- | --- |
+| GPT-o3 | 0.0 | 72.0 |
+| GPT-o4-mini | 0.0 | 16.0 |
+| DeepSeek-R1\* | 0.0 | 14.0 |
+| Gemini-2.5-pro | 0.0 | 11.0 |
+| DeepSeek-R1-0528\* | 0.0 | 8.0 |
+| Claude-3.7-Sonnet | 0.0 | 7.4 |
+| InternVL3-78B | 0.0 | 2.8 |
+| Seed1.5-VL | 0.0 | 2.0 |
+| GPT-4o | 0.0 | 2.0 |
+| Qwen2.5-VL-72B | 0.0 | 2.0 |
+| Llama-4-Scout | 0.0 | 1.6 |
+| Qwen3-235B-A22B\* | 0.0 | 0.3 |
+| *Random* | *1e-5* | *3.1* |
 
 ## Contact
 - Yulun Jiang: [email protected]
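
Since the updated card's YAML defines a `cube` config with parquet data files, a minimal loading sketch may help readers of this card. This is not from the card itself: the repo id `mrble/MARBLE` is taken from the dataset link above, the config name `cube` from the YAML front matter, and no particular split or column schema is assumed.

```python
# Minimal sketch: load the "cube" config of the MARBLE dataset card shown above
# with the Hugging Face `datasets` library. The repo id and config name come from
# the card; split names and columns are not specified in this excerpt, so we
# inspect them instead of assuming a schema.
from datasets import load_dataset

ds = load_dataset("mrble/MARBLE", "cube")  # downloads the parquet files declared under `configs`

for split_name, split in ds.items():
    print(split_name, split.num_rows, split.column_names)
```

The YAML in this diff is truncated after the `cube` config, so additional configs (for example, one covering M-Portal) may exist in the full card; only `cube` is visible here.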