    dtype: string
  splits:
  - name: test
    num_bytes: 32915902
    num_examples: 253
  download_size: 32012630
  dataset_size: 32915902
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
license: apache-2.0
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- code
size_categories:
- n<1K
---

## HumanEval-V: Benchmarking High-Level Visual Reasoning with Complex Diagrams in Coding Tasks

<p align="left">
<a href="https://arxiv.org/abs/2410.12381">📄 Paper</a> •
<a href="https://humaneval-v.github.io">🏠 Home Page</a> •
<a href="https://github.com/HumanEval-V/HumanEval-V-Benchmark">💻 GitHub Repository</a> •
<a href="https://humaneval-v.github.io/#leaderboard">🏆 Leaderboard</a> •
<a href="https://huggingface.co/spaces/HumanEval-V/HumanEval-V-Benchmark-Viewer">🤗 Dataset Viewer</a>
</p>

<div style="text-align: center;">
<img src="task_example.png" alt="" width="650"/>
</div>

HumanEval-V consists of 253 human-annotated Python coding tasks, each pairing a crucial diagram that provides the essential visual context for solving the problem with a predefined function signature that outlines the input-output structure. LMMs are expected to generate code solutions from the diagram and the function signature. To verify the correctness of these solutions, every task is accompanied by carefully crafted test cases for execution-based pass@k evaluation.

<div style="text-align: center;">
<img src="task_type_and_capability_aspects.png" alt="" width="1000"/>
</div>

## Dataset Structure

Each task in the dataset consists of the following fields:

- **qid**: A unique identifier for each coding task (e.g., _q1_, with mutated versions like _q1-2_ and _q1-3_).
- **diagram**: A single diagram that provides the essential visual context required to solve the task.
- **function_signature**: The necessary imports and the function signature that the LMM must complete.
- **test_script**: The test cases used to validate the correctness of the generated code.
- **ground_truth_solution**: The human-annotated code solution for the task.
- **ground_truth_diagram_description**: The human-annotated description of the diagram.
- **task_type**: The type of the task, one of six categories, as shown in **Figure 2**.
- **capability_aspects**: The capabilities required to understand the diagram, spanning seven dimensions and their sub-aspects, as shown in **Figure 3**.

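As a rough illustration of how these fields fit together, a task's `function_signature` plus a model's completion can be concatenated and executed against its `test_script`. The strings below are hypothetical stand-ins, not a real task from the benchmark; the official evaluation harness lives in the GitHub repository.

```python
# Hypothetical example: the signature, completion, and tests below are
# illustrative stand-ins, not actual HumanEval-V data.
function_signature = "def count_nodes(edges: list[tuple[int, int]]) -> int:\n"
completion = "    return len({u for e in edges for u in e})\n"
test_script = "assert count_nodes([(0, 1), (1, 2)]) == 3\n"

program = function_signature + completion + "\n" + test_script
namespace = {}
exec(program, namespace)  # raises AssertionError if the solution fails
print("passed")
```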
88 |
+
## Usage
|
89 |
+
You can easily load the dataset using the Hugging Face `datasets` library.
|
90 |
+
|
91 |
+
```python
|
92 |
+
from datasets import load_dataset
|
93 |
+
humaneval_v = load_dataset("HumanEval-V/HumanEval-V-Benchmark", split="test")
|
94 |
+
```
|
95 |
+
|
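Generated solutions are scored with execution-based pass@k. A minimal sketch of the standard unbiased pass@k estimator introduced with the original HumanEval benchmark (Chen et al., 2021); this helper is an assumption for illustration, not code from this repository:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total samples generated per task, c: samples passing all tests.
    Computes 1 - C(n-c, k) / C(n, k).
    """
    if n - c < k:
        # Fewer failing samples than k: every size-k draw contains a pass.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(10, 5, 1))  # 0.5
```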

## Citation

```bibtex
@article{zhang2024humanevalv,
  title={HumanEval-V: Benchmarking High-Level Visual Reasoning with Complex Diagrams in Coding Tasks},
  author={Zhang, Fengji and Wu, Linquan and Bai, Huiyu and Lin, Guancheng and Li, Xiao and Yu, Xiao and Wang, Yue and Chen, Bei and Keung, Jacky},
  journal={arXiv preprint arXiv:2410.12381},
  year={2024}
}
```