---
license: cc-by-4.0
---

Dataset Description

This repository contains a dataset stored in Parquet format and partitioned by application task. The data is organized under the parquet_ds/ directory, where each partition corresponds to a specific application task ID:

parquet_ds/
 ├── tasks=4/
 ├── tasks=5/
 ├── tasks=6/
 ├── tasks=7/
 ├── tasks=9/
 ├── tasks=10/
 ├── tasks=11/
 ├── tasks=13/
 ├── tasks=14/
 ├── tasks=15/
 └── tasks=16/

Task IDs and Applications

Each task ID represents one application type:

  • 4: Data Serving
  • 5: Redis
  • 6: Web Search
  • 7: Graph Analytics
  • 9: Data Analytics
  • 10: MLPerf
  • 11: HBase
  • 13: Alluxio
  • 14: Minio
  • 15: TPC-C
  • 16: Flink
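The task-ID-to-application mapping above can be kept as a small lookup table when labeling results. A minimal sketch (the `TASK_APPS` and `task_name` names are illustrative helpers, not part of the dataset):

```python
# Mapping of partition task IDs to application names, as listed above.
TASK_APPS = {
    4: "Data Serving",
    5: "Redis",
    6: "Web Search",
    7: "Graph Analytics",
    9: "Data Analytics",
    10: "MLPerf",
    11: "HBase",
    13: "Alluxio",
    14: "Minio",
    15: "TPC-C",
    16: "Flink",
}

def task_name(task_id: int) -> str:
    """Return the application name for a partition task ID."""
    return TASK_APPS[task_id]
```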

Loading the Dataset

You can load the dataset with PyArrow:

import pyarrow.dataset as ds

dataset = ds.dataset("parquet_ds", format="parquet", partitioning="hive")
print(dataset.schema)

Schema

Each record contains the following fields:

  • perf_ori (double): Normalized performance level between 0 and 1.
  • workload (double): Numerical identifier assigned to workload level.
  • tr_self: VM metrics for the target application.
  • lin_self: Linux Perf metrics for the target application.
  • td_self: Top-Down analysis for the target application.
  • tr_oth: VM metrics for co-located neighbor VMs.
  • lin_oth: Linux Perf metrics for neighbor VMs.
  • td_oth: Top-Down analysis for neighbor VMs.
  • tasks (int32): Application task ID (partition key).

Coverage

Total coverage: 317 days of traces.

Citation

If you use this dataset in your research, please cite it as:

@misc{cloudformer2025,
  title        = {CloudFormer: An Attention-based Performance Prediction for Public Clouds with Unknown Workload},
  author       = {Shahbazinia, Amirhossein and Huang, Darong and Costero, Luis and Atienza, David},
  howpublished = {arXiv preprint arXiv:2509.03394},
  year         = {2025},
  url          = {https://arxiv.org/abs/2509.03394}
}