---
license: mit
task_categories:
- text-classification
- feature-extraction
language:
- en
tags:
- software-engineering
- testing
- performance
- llm-serving
- vllm
- benchmarking
- ml-evaluation
pretty_name: vLLM PR Test Classification
size_categories:
- n<1K
configs:
- config_name: default
data_files:
- split: train
path: data/*
---
# vLLM PR Test Classification Dataset
## 🎯 Overview
This dataset contains **98 vLLM project commits** with their corresponding Pull Request (PR) timeline data and comprehensive test type classifications. It provides insights into testing patterns in a major LLM serving infrastructure project.
## 📊 Dataset Description
### Purpose
This dataset was created by analyzing vLLM project PR timelines to:
- Identify different types of testing and benchmarking activities
- Understand testing patterns in LLM infrastructure development
- Provide labeled data for ML models to classify test types in software PRs
- Enable research on performance optimization trends in LLM serving
### Test Categories
Each commit is classified across four test categories:
| Category | Description | Keywords | Prevalence |
|----------|-------------|----------|------------|
| **LM Evaluation** | Language model evaluation tests | `lm_eval`, `gsm8k`, `mmlu`, `hellaswag`, `truthfulqa` | 25.5% |
| **Performance** | Performance benchmarking tests | `TTFT`, `throughput`, `latency`, `ITL`, `TPOT`, `tok/s` | 81.6% |
| **Serving** | Serving functionality tests | `vllm serve`, `API server`, `frontend`, `online serving` | 53.1% |
| **General Test** | General testing activities | `CI`, `pytest`, `unittest`, `buildkite`, `fastcheck` | 96.9% |
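These flags are derived by keyword matching against the PR timeline text. The sketch below shows how such labels could be reproduced; the keyword lists are abbreviated for illustration and are not the exact lists used to build the dataset.

```python
import re

# Abbreviated keyword lists for illustration; the dataset was built with fuller lists
CATEGORY_KEYWORDS = {
    'has_lm_eval': ['lm_eval', 'lm-eval', 'gsm8k', 'mmlu', 'hellaswag', 'truthfulqa'],
    'has_performance': ['ttft', 'throughput', 'latency', 'itl', 'tpot', 'tok/s'],
    'has_serving': ['vllm serve', 'api server', 'frontend', 'online serving'],
    'has_general_test': ['ci', 'pytest', 'unittest', 'buildkite', 'fastcheck'],
}

def classify_timeline(text: str) -> dict:
    """Flag each category whose keywords appear (case-insensitive, word-bounded)."""
    lowered = text.lower()
    return {
        category: any(
            re.search(r'\b' + re.escape(keyword) + r'\b', lowered)
            for keyword in keywords
        )
        for category, keywords in CATEGORY_KEYWORDS.items()
    }

print(classify_timeline("Improved TTFT and throughput; added pytest coverage"))
# {'has_lm_eval': False, 'has_performance': True, 'has_serving': False, 'has_general_test': True}
```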
## 📈 Dataset Statistics
### Overall Distribution
- **Total commits**: 98
- **Multi-category commits**: 76 (77.6%)
- **Average test types per commit**: 2.57
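These aggregates can be re-derived directly from the boolean flags. A minimal sketch, using the placeholder dataset path from the usage examples below:

```python
from datasets import load_dataset

ds = load_dataset("your-username/vllm-pr-test-classification", split="train")
flags = ['has_lm_eval', 'has_performance', 'has_serving', 'has_general_test']

# Number of test categories present on each commit
counts = [sum(int(row[flag]) for flag in flags) for row in ds]

print(f"Total commits: {len(ds)}")                                        # 98
print(f"Multi-category commits: {sum(c >= 2 for c in counts)}")           # expected: 76
print(f"Average test types per commit: {sum(counts) / len(counts):.2f}")  # expected: 2.57
```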
### Detailed Keyword Frequency
#### Top Performance Keywords (80 commits)
- `throughput`: 241 mentions
- `latency`: 191 mentions
- `profiling`: 114 mentions
- `TTFT` (Time To First Token): 114 mentions
- `ITL` (Inter-token Latency): 114 mentions
- `TPOT` (Time Per Output Token): 108 mentions
- `optimization`: 87 mentions
- `tok/s` (tokens per second): 66 mentions
#### Top LM Evaluation Keywords (25 commits)
- `gsm8k`: 62 mentions
- `lm_eval`: 33 mentions
- `lm-eval`: 9 mentions
- `mmlu`: 3 mentions
- `humaneval`: 1 mention
#### Top Serving Keywords (52 commits)
- `frontend`: 181 mentions
- `serving`: 74 mentions
- `api server`: 42 mentions
- `vllm serve`: 23 mentions
- `online serving`: 19 mentions
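Mention counts like these can be recomputed by scanning `timeline_text`; exact figures may differ slightly depending on the matching rules. A minimal sketch:

```python
import re
from datasets import load_dataset

ds = load_dataset("your-username/vllm-pr-test-classification", split="train")

def count_mentions(keyword: str) -> int:
    """Total case-insensitive occurrences of `keyword` across all PR timelines."""
    pattern = re.compile(re.escape(keyword), re.IGNORECASE)
    return sum(len(pattern.findall(row['timeline_text'])) for row in ds)

for keyword in ['throughput', 'latency', 'gsm8k', 'frontend']:
    print(f"{keyword}: {count_mentions(keyword)} mentions")
```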
## 🗂️ Data Schema
```python
{
    'commit_hash': str,       # Git commit SHA-1 hash (40 chars)
    'pr_url': str,            # GitHub PR URL (e.g., https://github.com/vllm-project/vllm/pull/12601)
    'has_lm_eval': bool,      # True if commit contains LM evaluation tests
    'has_performance': bool,  # True if commit contains performance benchmarks
    'has_serving': bool,      # True if commit contains serving tests
    'has_general_test': bool, # True if commit contains general tests
    'test_details': str,      # Pipe-separated test keywords (e.g., "PERF: ttft, throughput | TEST: ci, pytest")
    'timeline_text': str,     # Full PR timeline text with comments, reviews, and commit messages
    'extracted_at': str       # ISO timestamp when data was extracted
}
```
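The `test_details` string groups matched keywords by category, with `CATEGORY:` prefixes separated by `|`. A small parser sketch, assuming that format holds for every row:

```python
def parse_test_details(details: str) -> dict:
    """Split 'PERF: ttft, throughput | TEST: ci, pytest' into a dict of keyword lists."""
    parsed = {}
    for group in details.split('|'):
        category, sep, keywords = group.strip().partition(':')
        if sep:  # skip malformed groups without a category prefix
            parsed[category.strip()] = [kw.strip() for kw in keywords.split(',') if kw.strip()]
    return parsed

print(parse_test_details("PERF: ttft, throughput | TEST: ci, pytest"))
# {'PERF': ['ttft', 'throughput'], 'TEST': ['ci', 'pytest']}
```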
## 💻 Usage Examples
### Basic Loading
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("your-username/vllm-pr-test-classification")
# Explore the data
print(f"Total examples: {len(dataset['train'])}")
print(f"Features: {dataset['train'].features}")
print(f"First example: {dataset['train'][0]}")
```
### Filtering Examples
```python
# Get commits with performance benchmarks
perf_commits = dataset['train'].filter(lambda x: x['has_performance'])
print(f"Performance commits: {len(perf_commits)}")
# Get commits with LM evaluation
lm_eval_commits = dataset['train'].filter(lambda x: x['has_lm_eval'])
print(f"LM evaluation commits: {len(lm_eval_commits)}")
# Get commits with multiple test types
multi_test = dataset['train'].filter(
    lambda x: sum([x['has_lm_eval'], x['has_performance'],
                   x['has_serving'], x['has_general_test']]) >= 3
)
print(f"Commits with 3+ test types: {len(multi_test)}")
```
### Analysis Example
```python
import pandas as pd
# Convert to pandas for analysis
df = dataset['train'].to_pandas()
# Analyze test type combinations
test_combinations = df[['has_lm_eval', 'has_performance', 'has_serving', 'has_general_test']]
combination_counts = test_combinations.value_counts()
print("Most common test combinations:")
print(combination_counts.head())
# Extract performance metrics mentioned
perf_df = df[df['has_performance']]
print(f"\nCommits mentioning specific metrics:")
print(f"TTFT mentions: {perf_df['test_details'].str.contains('TTFT').sum()}")
print(f"Throughput mentions: {perf_df['test_details'].str.contains('throughput', case=False).sum()}")
```
### Text Classification Training
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Train a classifier to identify test types from PR text
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=4,
    problem_type="multi_label_classification"
)

# Prepare for multi-label classification
def preprocess_function(examples):
    # Create multi-label targets (floats, as expected by BCEWithLogitsLoss)
    labels = []
    for i in range(len(examples['commit_hash'])):
        labels.append([
            float(examples['has_lm_eval'][i]),
            float(examples['has_performance'][i]),
            float(examples['has_serving'][i]),
            float(examples['has_general_test'][i])
        ])
    # Tokenize timeline text
    tokenized = tokenizer(
        examples['timeline_text'],
        truncation=True,
        padding='max_length',
        max_length=512
    )
    tokenized['labels'] = labels
    return tokenized
```
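Wiring the pieces above into a training run follows the standard `Trainer` pattern. This sketch assumes the `dataset`, `preprocess_function`, and `model` defined earlier; the hyperparameters are illustrative defaults, not tuned values.

```python
from transformers import Trainer, TrainingArguments

# Tokenize and drop the raw columns so only model inputs remain
tokenized = dataset['train'].map(
    preprocess_function,
    batched=True,
    remove_columns=dataset['train'].column_names,
)
splits = tokenized.train_test_split(test_size=0.2, seed=42)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="vllm-test-classifier",  # illustrative output path
        num_train_epochs=3,
        per_device_train_batch_size=8,
        learning_rate=2e-5,
    ),
    train_dataset=splits['train'],
    eval_dataset=splits['test'],
)
trainer.train()
print(trainer.evaluate())
```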
## 🔍 Sample Data
### Example 1: Performance-focused commit
```json
{
    "commit_hash": "fc542144c4477ffec1d3de6fa43e54f8fb5351e8",
    "pr_url": "https://github.com/vllm-project/vllm/pull/12563",
    "has_lm_eval": false,
    "has_performance": true,
    "has_serving": false,
    "has_general_test": true,
    "test_details": "PERF: tok/s, optimization | TEST: CI",
    "timeline_text": "[Guided decoding performance optimization]..."
}
```
### Example 2: Comprehensive testing commit
```json
{
    "commit_hash": "aea94362c9bdd08ed2b346701bdc09d278e85f66",
    "pr_url": "https://github.com/vllm-project/vllm/pull/12287",
    "has_lm_eval": true,
    "has_performance": true,
    "has_serving": true,
    "has_general_test": true,
    "test_details": "LM_EVAL: lm_eval, gsm8k | PERF: TTFT, ITL | SERVING: vllm serve | TEST: test, CI",
    "timeline_text": "[Frontend][V1] Online serving performance improvements..."
}
```
## 🛠️ Potential Use Cases
1. **Test Type Classification**: Train models to automatically classify test types in software PRs
2. **Testing Pattern Analysis**: Study how different test types correlate in infrastructure projects
3. **Performance Optimization Research**: Analyze performance testing trends in LLM serving systems
4. **CI/CD Insights**: Understand continuous integration patterns in ML infrastructure projects
5. **Documentation Generation**: Generate test documentation from PR timelines
6. **Code Review Automation**: Build tools to automatically suggest relevant tests based on PR content
## 📚 Source
This dataset was extracted from the [vLLM project](https://github.com/vllm-project/vllm) GitHub repository PR timelines. vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs.
## 🔄 Updates
- **v1.0.0** (2025-01): Initial release with 98 commits
## 📜 License
This dataset is released under the MIT License, consistent with the vLLM project's licensing.
## 📖 Citation
If you use this dataset in your research or applications, please cite:
```bibtex
@dataset{vllm_pr_test_classification_2025,
  title={vLLM PR Test Classification Dataset},
  author={vLLM Community Contributors},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/your-username/vllm-pr-test-classification},
  note={A dataset of 98 vLLM commits with test type classifications extracted from GitHub PR timelines}
}
```
## 🤝 Contributing
If you'd like to contribute to this dataset or report issues:
1. Open an issue on the Hugging Face dataset repository
2. Submit improvements via pull requests
3. Share your use cases and findings
## ⚠️ Limitations
- Dataset size is limited to 98 commits
- Timeline text may be truncated for very long PR discussions
- Classification is based on keyword matching, which may miss context-dependent references
- Dataset represents a snapshot from a specific period of vLLM development
## 🙏 Acknowledgments
Thanks to the vLLM project maintainers and contributors for their open-source work that made this dataset possible.