Update README.md

zfj1998 committed · verified
Commit 8a3465c · 1 parent: e1a16c0

Files changed (1): README.md (+11 −1)
README.md CHANGED
@@ -63,16 +63,26 @@ size_categories:
  <a href="https://huggingface.co/spaces/HumanEval-V/HumanEval-V-Benchmark-Viewer">🤗 Dataset Viewer</a>
  </p>
 
+ **HumanEval-V** is a novel benchmark designed to evaluate the diagram understanding and reasoning capabilities of Large Multimodal Models (LMMs) in programming contexts. Unlike existing benchmarks, HumanEval-V focuses on coding tasks that require sophisticated visual reasoning over complex diagrams, pushing the boundaries of LMMs' ability to comprehend and process visual information. The dataset includes **253 human-annotated Python coding tasks**, each featuring a critical, self-explanatory diagram with minimal textual clues. These tasks require LMMs to generate Python code based on the visual context and predefined function signatures.
+
+
  <div style="text-align: center;">
  <img src="task_example.png" alt="" width="650"/>
  </div>
 
- HumanEval-V consists of 253 human-annotated Python coding tasks, each featuring a crucial diagram that provides essential visual context for solving the problem and a predefined function signature that outlines the input-output structure. LMMs are expected to generate code solutions based on the diagram and function signature. To ensure the accuracy of the solutions, each task is accompanied by carefully crafted test cases for execution-based pass@k evaluation.
+ ## Key Features
+ - **Complex diagram understanding** that is indispensable for solving coding tasks.
+ - **Real-world problem contexts** with diverse diagram types and spatial reasoning challenges.
+ - **Code generation tasks**, moving beyond multiple-choice or short-answer questions to evaluate deeper visual and logical reasoning capabilities.
+ - **Two-stage evaluation pipeline** that separates diagram description generation from code implementation for a more accurate assessment of visual reasoning.
+ - **Handcrafted test cases** for rigorous execution-based evaluation through the **pass@k** metric.
+
 
  <div style="text-align: center;">
  <img src="task_type_and_capability_aspects.png" alt="" width="1000"/>
  </div>
 
+
  ## Dataset Structure
  Each task in the dataset consists of the following fields:
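
For readers who want to inspect the tasks directly, here is a minimal loading sketch using the 🤗 `datasets` library. The repo id `HumanEval-V/HumanEval-V-Benchmark` is an assumption based on the organization name, and the actual field names should be read from the printed features rather than from this sketch.

```python
# Minimal loading sketch; the repo id is an assumption, not confirmed here.
from datasets import load_dataset

ds = load_dataset("HumanEval-V/HumanEval-V-Benchmark")
print(ds)  # available splits and row counts

split = list(ds.keys())[0]
print(ds[split].features)  # the actual per-task fields
```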
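
The two-stage evaluation pipeline named in the key features separates diagram description from code generation. The sketch below only illustrates that flow: both helper functions are hypothetical stubs, not part of the benchmark's tooling.

```python
# Hypothetical two-stage flow: stage 1 describes the diagram with an LMM,
# stage 2 implements the predefined function signature from that description.
# Both helpers are stubs; wire them to a real model API of your choice.

def describe_diagram(image_path: str) -> str:
    """Stage 1 (stub): ask a multimodal model for a detailed diagram description."""
    raise NotImplementedError("call your LMM here")

def implement_from_description(description: str, signature: str) -> str:
    """Stage 2 (stub): ask a (possibly text-only) coder model to fill in the body."""
    raise NotImplementedError("call your code model here")

def two_stage_solve(image_path: str, signature: str) -> str:
    description = describe_diagram(image_path)
    return implement_from_description(description, signature)
```

Splitting the stages this way makes it easier to tell whether a failure comes from reading the diagram or from writing the code, which is what the card means by a more accurate visual reasoning assessment.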
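
The pass@k metric referenced above is conventionally computed with the unbiased estimator from Chen et al. (2021); a reference implementation follows, though the benchmark's own scoring script may differ in details.

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: 1 - C(n-c, k) / C(n, k), where n samples were
    generated per task and c of them passed all test cases."""
    if n - c < k:
        return 1.0  # every size-k subset contains a passing sample
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# Example: 20 samples per task, 5 of which pass -> estimated pass@1 = 0.25
print(pass_at_k(20, 5, 1))
```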