Update README.md

README.md

<a href="https://huggingface.co/spaces/HumanEval-V/HumanEval-V-Benchmark-Viewer">🤗 Dataset Viewer</a>
</p>

**HumanEval-V** is a novel benchmark designed to evaluate the diagram understanding and reasoning capabilities of Large Multimodal Models (LMMs) in programming contexts. Unlike existing benchmarks, HumanEval-V focuses on coding tasks that require sophisticated visual reasoning over complex diagrams, pushing the boundaries of LMMs' ability to comprehend and process visual information. The dataset includes **253 human-annotated Python coding tasks**, each featuring a critical, self-explanatory diagram with minimal textual clues. These tasks require LMMs to generate Python code based on the visual context and a predefined function signature.
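
For quick inspection, the tasks can be loaded with the 🤗 `datasets` library. This is a minimal sketch only; the dataset ID below is assumed from this repository's namespace, so adjust it if the actual path differs.

```python
from datasets import load_dataset

# Assumed dataset ID; replace with the actual repository path if it differs.
ds = load_dataset("HumanEval-V/HumanEval-V-Benchmark")
print(ds)  # lists the available splits and the fields of each task record
```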

<div style="text-align: center;">
<img src="task_example.png" alt="" width="650"/>
</div>

## Key Features

- **Complex diagram understanding** that is indispensable for solving the coding tasks.
- **Real-world problem contexts** with diverse diagram types and spatial reasoning challenges.
- **Code generation tasks** that move beyond multiple-choice or short-answer questions to evaluate deeper visual and logical reasoning capabilities.
- **Two-stage evaluation pipeline** that separates diagram description generation from code implementation for a more accurate visual reasoning assessment (sketched after this list).
- **Handcrafted test cases** for rigorous execution-based evaluation through the **pass@k** metric (estimator shown below).
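
A minimal sketch of the two-stage idea, assuming nothing about the official harness: the model first verbalizes the diagram, then code is generated from that description plus the function signature, so visual understanding can be assessed separately from coding ability. Both callables here are hypothetical stand-ins for model calls.

```python
from typing import Callable

def two_stage_solve(
    image_path: str,
    function_signature: str,
    describe_diagram: Callable[[str], str],   # stage 1: LMM turns the image into text
    implement: Callable[[str, str], str],     # stage 2: coder model writes the body
) -> str:
    """Run the two stages in sequence and return the generated code."""
    description = describe_diagram(image_path)
    return implement(description, function_signature)
```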
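
The pass@k metric itself is the standard unbiased estimator introduced with the original HumanEval benchmark (Chen et al., 2021): generate n samples per task, count the c that pass all test cases, and estimate the probability that at least one of k drawn samples passes. A self-contained sketch:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generations of which c are
    correct, passes all test cases."""
    if n - c < k:
        return 1.0  # fewer than k failing samples, so every k-subset passes
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# Example: 20 generations per task, 3 of them correct.
print(round(pass_at_k(n=20, c=3, k=1), 3))   # 0.15
print(round(pass_at_k(n=20, c=3, k=10), 3))  # 0.895
```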

<div style="text-align: center;">
<img src="task_type_and_capability_aspects.png" alt="" width="1000"/>
</div>

## Dataset Structure

Each task in the dataset consists of the following fields: