---
language:
  - en
license: apache-2.0
size_categories:
  - n<1K
task_categories:
  - image-text-to-text
dataset_info:
  features:
    - name: qid
      dtype: string
    - name: ground_truth_solution
      dtype: string
    - name: ground_truth_diagram_description
      dtype: string
    - name: test_script
      dtype: string
    - name: function_signature
      dtype: string
    - name: diagram
      dtype: image
    - name: capability_aspects
      struct:
        - name: Common Sense
          sequence: string
        - name: Data Structures
          sequence: string
        - name: Dynamic Patterns
          sequence: string
        - name: Geometric Objects
          sequence: string
        - name: Mathematical Operations
          sequence: string
        - name: Spatial Transformations
          sequence: string
        - name: Topological Relations
          sequence: string
    - name: task_type
      dtype: string
  splits:
    - name: test
      num_bytes: 32915902
      num_examples: 253
  download_size: 32012630
  dataset_size: 32915902
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
tags:
  - code
---

# HumanEval-V: Benchmarking High-Level Visual Reasoning with Complex Diagrams in Coding Tasks

📄 Paper | 🏠 Home Page | 💻 GitHub Repository | 🏆 Leaderboard | 🤗 Dataset Viewer

HumanEval-V is a novel benchmark designed to evaluate the diagram understanding and reasoning capabilities of Large Multimodal Models (LMMs) in programming contexts. Unlike existing benchmarks, HumanEval-V focuses on coding tasks that require sophisticated visual reasoning over complex diagrams, pushing the boundaries of LMMs' ability to comprehend and process visual information. The dataset includes 253 human-annotated Python coding tasks, each featuring a critical, self-explanatory diagram with minimal textual clues. These tasks require LMMs to generate Python code based on the visual context and predefined function signatures.

Key features:

- Complex diagram understanding that is indispensable for solving the coding tasks.
- Real-world problem contexts with diverse diagram types and spatial reasoning challenges.
- Code generation tasks, moving beyond multiple-choice or short-answer questions to evaluate deeper visual and logical reasoning capabilities.
- A two-stage evaluation pipeline that separates diagram description generation from code implementation, for a more accurate assessment of visual reasoning.
- Handcrafted test cases for rigorous execution-based evaluation through the pass@k metric (a minimal pass@k estimator sketch follows this list).
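
For reference, pass@k is commonly computed with the unbiased estimator introduced for the original HumanEval benchmark. The snippet below is a minimal sketch of that estimator, not the official evaluation code shipped with this benchmark (see the GitHub repository for that).

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n = samples per task, c = samples that pass, k = budget."""
    if n - c < k:
        return 1.0
    # 1 - C(n - c, k) / C(n, k), computed as a stable running product
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 10 sampled solutions for a task, 3 pass the test script
print(pass_at_k(n=10, c=3, k=1))  # 0.3
```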

## Dataset Structure

Each task in the dataset consists of the following fields:

- `qid`: A unique identifier for each coding task (e.g., q1, with mutated versions like q1-2, q1-3).
- `diagram`: A single diagram that provides the essential visual context required to solve the task.
- `function_signature`: The necessary imports and the function signature that the LMM must complete.
- `test_script`: The test cases used to validate the correctness of the generated code (an execution-check sketch follows this list).
- `ground_truth_solution`: The human-annotated code solution for the task.
- `ground_truth_diagram_description`: The human-annotated description of the diagram.
- `task_type`: The type of the task, which falls into one of six categories, as shown in Figure 2.
- `capability_aspects`: The capabilities required to understand the diagram, organized into seven dimensions with their sub-aspects, as shown in Figure 3.
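
To make the execution-based checking concrete, here is one possible way to run a candidate completion against a task's `test_script` in a subprocess. This is only an illustrative sketch, assuming the test script exits with a non-zero status on failure; the official harness in the GitHub repository handles sandboxing and pass@k aggregation.

```python
import subprocess
import sys
import tempfile

def run_candidate(candidate_code: str, test_script: str, timeout: float = 30.0) -> bool:
    """Return True if the candidate solution passes the task's test script."""
    # Concatenate the candidate solution with the test script into one program
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_code + "\n\n" + test_script)
        path = f.name
    try:
        # Assumes the test script exits non-zero (e.g., via assertions) when a test fails
        result = subprocess.run([sys.executable, path], capture_output=True, timeout=timeout)
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False
```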

## Usage

You can load the dataset with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

humaneval_v = load_dataset("HumanEval-V/HumanEval-V-Benchmark", split="test")
```
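
Each record can then be inspected directly; with the default configuration, the `diagram` field is decoded by `datasets` into a PIL image. The prompt assembly below is only an illustrative sketch, not a prescribed prompting format for the benchmark.

```python
task = humaneval_v[0]
print(task["qid"], task["task_type"])
print(task["capability_aspects"])           # seven capability dimensions with their sub-aspects

task["diagram"].save(f"{task['qid']}.png")  # persist the diagram to pass to an LMM

# One possible textual prompt to pair with the diagram (illustrative only)
prompt = (
    "Complete the function below so that it solves the task shown in the diagram.\n\n"
    + task["function_signature"]
)
```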

## Citation

```bibtex
@article{zhang2024humanevalv,
  title={HumanEval-V: Benchmarking High-Level Visual Reasoning with Complex Diagrams in Coding Tasks},
  author={Zhang, Fengji and Wu, Linquan and Bai, Huiyu and Lin, Guancheng and Li, Xiao and Yu, Xiao and Wang, Yue and Chen, Bei and Keung, Jacky},
  journal={arXiv preprint arXiv:2410.12381},
  year={2024}
}
```