Update README.md
README.md CHANGED
@@ -14,22 +14,21 @@ size_categories:
 license: apache-2.0
 pretty_name: mmiq
 ---
-# Dataset Card for "MMIQ"
+# Dataset Card for "MM-IQ"
 
-- [Dataset Description](https://huggingface.co/datasets/huanqia/MMIQ/blob/main/README.md#dataset-description)
-- [Paper Information](https://huggingface.co/datasets/huanqia/MMIQ/blob/main/README.md#paper-information)
-- [Dataset Examples](https://huggingface.co/datasets/huanqia/MMIQ/blob/main/README.md#dataset-examples)
-- [Leaderboard](https://huggingface.co/datasets/huanqia/MMIQ/blob/main/README.md#leaderboard)
-- [Dataset Usage](https://huggingface.co/datasets/huanqia/MMIQ/blob/main/README.md#dataset-usage)
-- [Data Downloading](https://huggingface.co/datasets/huanqia/MMIQ/blob/main/README.md#data-downloading)
-- [Data Format](https://huggingface.co/datasets/huanqia/MMIQ/blob/main/README.md#data-format)
-- [Automatic Evaluation](https://huggingface.co/datasets/huanqia/MMIQ/blob/main/README.md#automatic-evaluation)
-- [
-- [Citation](https://huggingface.co/datasets/huanqia/MMIQ/blob/main/README.md#citation)
+- [Dataset Description](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#dataset-description)
+- [Paper Information](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#paper-information)
+- [Dataset Examples](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#dataset-examples)
+- [Leaderboard](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#leaderboard)
+- [Dataset Usage](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#dataset-usage)
+- [Data Downloading](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#data-downloading)
+- [Data Format](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#data-format)
+- [Automatic Evaluation](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#automatic-evaluation)
+- [Citation](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#citation)
 
 ## Dataset Description
 
-**
+**MM-IQ** is a new benchmark designed to evaluate the intelligence of MLLMs through multiple reasoning patterns that demand abstract reasoning abilities. It encompasses **three input formats, six problem configurations, and eight reasoning patterns**. With **2,710 samples**, MM-IQ is the largest and most comprehensive abstract visual reasoning (AVR) benchmark for evaluating the intelligence of MLLMs, **3x and 10x** larger than the two recent benchmarks MARVEL and MathVista-IQTest, respectively. By focusing on AVR problems, MM-IQ provides a targeted assessment of the cognitive capabilities of MLLMs, contributing to a more comprehensive understanding of their strengths and limitations in the pursuit of AGI.
 <img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/MMIQ_distribution.png" style="zoom:50%;" />
 
 
@@ -44,7 +43,7 @@ pretty_name: mmiq
 
 ## Dataset Examples
 
-Examples of our MMIQ:
+Examples of our MM-IQ:
 1. Logical Operation Reasoning
 
 <p>Prompt: Choose the most appropriate option from the given four choices to fill in the question mark, so that it presents a certain regularity:</p>
@@ -87,7 +86,7 @@ Examples of our MMIQ:
 
 ## Leaderboard
 
-🏆 The leaderboard for the *MMIQ* (2,710 problems) is available [here](https://acechq.github.io/MMIQ-benchmark/#leaderboard).
+🏆 The leaderboard for the *MM-IQ* (2,710 problems) is available [here](https://acechq.github.io/MMIQ-benchmark/#leaderboard).
 
 
 ## Dataset Usage
@@ -100,13 +99,13 @@ You can download this dataset by the following command (make sure that you have
 ```python
 from datasets import load_dataset
 
-dataset = load_dataset("huanqia/MMIQ")
+dataset = load_dataset("huanqia/MM-IQ")
 ```
 
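Note: when no `split` argument is given, `load_dataset` returns a `DatasetDict` keyed by split name, so indexing it with an integer will fail until a split is selected. A minimal sketch, assuming the samples live in a single `test` split (the split name is an assumption, not something this card confirms):

```python
from datasets import load_dataset

dataset = load_dataset("huanqia/MM-IQ")
print(dataset)  # prints the available splits and their sizes

# select a concrete split before indexing by position
# ("test" is an assumed split name; check the printout above)
test_set = dataset["test"]
print(test_set[0])
```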
 Here are some examples of how to access the downloaded dataset:
 
 ```python
-# print the first example on the MMIQ dataset
+# print the first example on the MM-IQ dataset
 print(dataset[0])
 print(dataset[0]['data_id']) # print the problem id
 print(dataset[0]['question']) # print the question text
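For the same reason, the `dataset[0]` indexing above presumes `dataset` already refers to a single split. A short iteration sketch under that assumption, using only the `data_id` and `question` fields shown in the snippet:

```python
# peek at the first few problems (assumes `dataset` is a single split, not a DatasetDict)
for example in dataset.select(range(3)):
    print(example["data_id"], "|", example["question"][:80])
```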
@@ -135,10 +134,10 @@ The dataset is provided in json format and contains the following attributes:
 
 ## Citation
 
-If you use the **MMIQ** dataset in your work, please kindly cite the paper using this BibTeX:
+If you use the **MM-IQ** dataset in your work, please cite the paper using this BibTeX:
 ```
-@misc{
-    title = {
+@misc{cai2025mm-iq,
+    title = {MM-IQ: Benchmarking Human-Like Abstraction and Reasoning in Multimodal Models},
     author = {Huanqia Cai and Yijun Yang and Winston Hu},
     month = {January},
     year = {2025}