Nicole-Yi committed on
Commit beded4a · verified · 1 Parent(s): 5330487

Update README.md

Files changed (1)
  1. README.md +159 -1
README.md CHANGED
@@ -9,4 +9,162 @@ tags:
  - agent
  size_categories:
  - n<1K
- ---
+ ---
+
+ # Dataset Card for **GitTaskBench**
+
+ ## Dataset Details
+
+ ### Dataset Description
+ **GitTaskBench** is a benchmark dataset designed to evaluate the capabilities of code-based intelligent agents in solving real-world tasks by leveraging GitHub repositories.
+ It contains **54 representative tasks** across **7 domains**, carefully curated to reflect real-world complexity and economic value. Each task is associated with a fixed GitHub repository to ensure reproducibility and fairness in evaluation.
+
+ - **Curated by:** QuantaAlpha Research Team
+ - **Funded by [optional]:** Not specified
+ - **Shared by [optional]:** GitTaskBench Team
+ - **Language(s):** Primarily English (task descriptions, documentation)
+ - **License:** [Specify license chosen, e.g., `cc-by-nc-sa-4.0`]
+
+ ### Dataset Sources
+ - **Repository:** [GitTaskBench GitHub](https://github.com/QuantaAlpha/GitTaskBench)
+ - **Paper:** [arXiv:2508.18993](https://arxiv.org/abs/2508.18993)
+ - **Organization:** [Team Homepage](https://quantaalpha.github.io)
+
+ ---
+
+ ## Uses
+
+ ### Direct Use
+ - Evaluating LLM-based agents (e.g., RepoMaster, SWE-Agent, Aider, OpenHands); a minimal evaluation-loop sketch follows this list.
+ - Benchmarking repository-level reasoning and execution.
+ - Training and testing frameworks for real-world software engineering tasks.
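+
+ The sketch below shows how the benchmark could be wired into such an evaluation loop. It is only an illustrative outline: `run_agent` and `check_output` are hypothetical placeholders for your agent framework and a task-specific checker (they are not part of this dataset or of the official GitTaskBench harness), and it assumes the single `train` split shown in the Usage Example section below.
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("Nicole-Yi/GitTaskBench")["train"]
+
+ def run_agent(task: dict) -> str:
+     """Hypothetical: have your agent solve one task against its fixed GitHub
+     repository and return the path of the produced output artifact."""
+     raise NotImplementedError
+
+ def check_output(task: dict, output_path: str) -> bool:
+     """Hypothetical: apply the task's evaluation protocol (see `evaluation_metric`)."""
+     raise NotImplementedError
+
+ # Run every task and count how many pass their task-specific check
+ passed = 0
+ for task in dataset:
+     passed += check_output(task, run_agent(task))
+
+ print(f"Tasks passed: {passed}/{len(dataset)}")
+ ```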
+
+ ### Out-of-Scope Use
+ - Not intended for personal data processing.
+ - Not designed as a dataset for training NLP models directly.
+ - Not suitable for commercial applications requiring private/sensitive datasets.
+
+ ---
+
+ ## Dataset Structure
+
+ - **Tasks:** 54 total, spanning 7 domains.
+ - **Domains include:**
+   - Image Processing
+   - Video Processing
+   - Speech Processing
+   - Physiological Signals Processing
+   - Security and Privacy
+   - Web Scraping
+   - Office Document Processing
+
+ Each task specifies:
+ - Input requirements (file types, formats).
+ - Output expectations.
+ - Evaluation metrics (task-specific, e.g., accuracy thresholds, PSNR for image quality, Hasler-Bülthoff metric for video); a minimal PSNR sketch follows this list.
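+
+ To make the image-quality example concrete, here is a minimal PSNR computation in NumPy. It is a generic illustration of the metric only; the actual GitTaskBench scoring scripts, thresholds, and preprocessing are task-specific and live in the benchmark repository.
+
+ ```python
+ import numpy as np
+
+ def psnr(reference: np.ndarray, output: np.ndarray, max_value: float = 255.0) -> float:
+     """Peak signal-to-noise ratio (in dB) between a reference image and an output image."""
+     mse = np.mean((reference.astype(np.float64) - output.astype(np.float64)) ** 2)
+     if mse == 0:
+         return float("inf")  # identical images
+     return 10.0 * np.log10(max_value ** 2 / mse)
+
+ # Toy example: a reference image and a slightly noisy copy of it
+ reference = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
+ noisy = np.clip(reference + np.random.normal(0.0, 5.0, reference.shape), 0, 255).astype(np.uint8)
+ print(f"PSNR: {psnr(reference, noisy):.2f} dB")
+ ```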
+
+ ---
+
+ ## Usage Example
+
+ You can easily load the dataset using the 🤗 Datasets library:
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the full dataset
+ dataset = load_dataset("Nicole-Yi/GitTaskBench")
+
+ # Inspect the dataset structure
+ print(dataset)
+
+ # Access one task example (the dataset ships as a single "train" split, as shown below)
+ print(dataset["train"][0])
+ ```
+
+ ### Example Output
+ ```
+ DatasetDict({
+     train: Dataset({
+         features: ['task_id', 'domain', 'description', 'input_format', 'output_requirement', 'evaluation_metric'],
+         num_rows: 54
+     })
+ })
+ ```
+
+ Each task entry contains the following fields (a short filtering example follows this list):
+ - **task_id**: Unique task identifier (e.g., `Trafilatura_01`)
+ - **domain**: Task domain (e.g., Image Processing, Speech Processing)
+ - **description**: Natural language description of the task
+ - **input_format**: Expected input file type/format
+ - **output_requirement**: Required output specification
+ - **evaluation_metric**: Evaluation protocol and pass/fail criteria
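+
+ Because every entry carries these fields, you can slice the benchmark with the standard 🤗 Datasets API. The snippet below is a small illustration that assumes the `train` split and the field and domain strings shown above.
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("Nicole-Yi/GitTaskBench")["train"]
+
+ # Keep only one domain (assumes domain strings match the "Dataset Structure" list)
+ web_tasks = dataset.filter(lambda task: task["domain"] == "Web Scraping")
+ print(web_tasks["task_id"])
+
+ # Look up a single task by its identifier
+ task = next(t for t in dataset if t["task_id"] == "Trafilatura_01")
+ print(task["description"])
+ ```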
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+ Current agent benchmarks often lack real-world grounding. GitTaskBench fills this gap by focusing on **practical, repository-driven tasks** that mirror how developers solve real problems using GitHub projects.
+
+ ### Source Data
+
+ #### Data Collection and Processing
+ - Selected **GitHub repositories** that match strict criteria (stability, completeness, reproducibility).
+ - Curated real-world tasks mapped to fixed repositories.
+ - Defined consistent evaluation protocols across tasks.
+
+ #### Who are the source data producers?
+ - Source repositories come from **open-source GitHub projects**.
+ - The benchmark was curated by the QuantaAlpha team (researchers from CAS, Tsinghua, PKU, CMU, HKUST, etc.).
+
+ ### Annotations
+ - Task-specific evaluation metrics are provided as annotations.
+ - No human-labeled data annotations beyond the benchmark definitions.
+
+ #### Personal and Sensitive Information
+ - The dataset does **not** include personally identifiable information.
+ - The selected repositories exclude sensitive or private data.
+
+ ---
+
+ ## Bias, Risks, and Limitations
+ - **Bias:** Repository and task selection may reflect research biases toward specific domains.
+ - **Risk:** The benchmark assumes GitHub accessibility; tasks may become less relevant if the underlying repositories change in the future.
+ - **Limitation:** Tasks are curated and fixed; not all real-world cases are covered.
+
+ ### Recommendations
+ - Use this benchmark for real-world evaluation of agents.
+ - Ensure compliance with licensing before redistribution.
+
+ ---
+
+ ## Citation
+ If you use GitTaskBench, please cite the paper:
+
+ **BibTeX:**
+ ```bibtex
+ @misc{ni2025gittaskbench,
+   title={GitTaskBench: A Benchmark for Code Agents Solving Real-World Tasks Through Code Repository Leveraging},
+   author={Ziyi Ni and Huacan Wang and Shuo Zhang and Shuo Lu and Ziyang He and Wang You and Zhenheng Tang and Yuntao Du and Bill Sun and Hongzhang Liu and Sen Hu and Ronghao Chen and Bo Li and Xin Li and Chen Hu and Binxing Jiao and Daxin Jiang and Pin Lyu},
+   year={2025},
+   eprint={2508.18993},
+   archivePrefix={arXiv},
+   primaryClass={cs.SE},
+   url={https://arxiv.org/abs/2508.18993},
+ }
+ ```
+
+ ---
+
+ ## More Information
+ - **Maintainer:** QuantaAlpha Research Team
+ - **Contact:** See [GitTaskBench GitHub Issues](https://github.com/QuantaAlpha/GitTaskBench/issues)
+
+ ---
+
+ ✨ **Key Features**:
+ - Multi-modal tasks (vision, speech, text, signals).
+ - Repository-level evaluation.
+ - Real-world relevance (PDF extraction, video colorization, speech analysis, etc.).
+ - Extensible design for new tasks.