Dataset metadata:
* Modalities: Tabular, Text
* Formats: parquet
* Size: < 1K
* Libraries: Datasets, pandas
* License: apache-2.0

Grigori Fursin committed · Commit cdacbf2 · unverified · 1 parent: 5d60467

Add version 2.0 with benchmark name, version, and timestamp

Files changed (4):
1. README.md (+54 -54)
2. data.json (+0 -0)
3. data.parquet (+2 -2)
4. processor.py → process.py (+53 -1)
README.md CHANGED
---
license: apache-2.0
---

# Preparing OpenMLPerf dataset

To process the semi-raw MLPerf data into the OpenMLPerf dataset, run the following commands:

```bash
# Decompress and untar the raw files
bzip2 -d semi-raw-mlperf-data.tar.bz2
tar xvf semi-raw-mlperf-data.tar

# Create a virtual environment
python -m venv .venv

# Activate the virtual environment
source .venv/bin/activate

# Install the required packages
pip install -r requirements.txt

# Run the processing script
python process.py
```

The processed dataset is saved as both `data.json` and `data.parquet` in the `OpenMLPerf-dataset` directory.
The `data.json` file contains the processed data as human-readable JSON, while `data.parquet` stores the same data in a more compact format that is more efficient for storage and processing.
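
Once generated, the Parquet file can be loaded for a quick sanity check, e.g. with pandas (a minimal sketch; the path follows the layout described above, and the printed columns depend on the processed schema):

```python
import pandas as pd

# Load the processed OpenMLPerf dataset from the Parquet file
df = pd.read_parquet("OpenMLPerf-dataset/data.parquet")

# Basic sanity checks: row/column counts and available fields
print(df.shape)
print(df.columns.tolist())
```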

# Preprocessing raw MLPerf results using MLCommons CMX

We preprocess official raw MLPerf data, such as [inference v5.0](https://github.com/mlcommons/inference_results_v5.0),
into a semi-raw format compatible with the `process.py` script, using the [MLCommons CM/CMX automation framework](https://arxiv.org/abs/2406.16791).
This is done through the ["import mlperf results"](https://github.com/mlcommons/ck/tree/master/cmx4mlops/repo/flex.task/import-mlperf-results)
automation action, which we plan to document in more detail soon.

# License and Copyright

This project is licensed under the [Apache License 2.0](LICENSE.md).

© 2025 FlexAI

Portions of the data were adapted from the following MLCommons repositories,
which are also licensed under the Apache 2.0 license:

* [mlcommons@inference_results_v5.0](https://github.com/mlcommons/inference_results_v5.0)
* [mlcommons@inference_results_v4.1](https://github.com/mlcommons/inference_results_v4.1)
* [mlcommons@inference_results_v4.0](https://github.com/mlcommons/inference_results_v4.0)
* [mlcommons@inference_results_v3.1](https://github.com/mlcommons/inference_results_v3.1)

# Authors and maintainers

[Daniel Altunay](https://www.linkedin.com/in/daltunay) and [Grigori Fursin](https://cKnowledge.org/gfursin) (FCS Labs)
data.json CHANGED
The diff for this file is too large to render. See raw diff
 
data.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:845063b17e66072eb21e17c650f7745b49da8f12eb2e591827c823c041121588
-size 44318
+oid sha256:a2d83fecee4037e9a4e10db98d79096c383319739d9a86b7d5e38b29e0fa054b
+size 44347
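
Both versions are Git LFS pointer files: the repository tracks only the `oid` SHA-256 hash and the blob `size`, while the actual Parquet data lives in LFS storage.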
processor.py → process.py RENAMED
@@ -1,4 +1,6 @@
-"""Data processing module for MLPerf benchmark data."""
+"""
+Data processing module for MLPerf benchmark data.
+"""
 
 import glob
 import json
 
@@ -6,6 +8,7 @@ import logging
 import os
 import re
 from collections import defaultdict
+from datetime import datetime
 
 import polars as pl
 from datasets import Dataset
 
@@ -135,6 +138,10 @@ def load_raw_data(base_path: str = "semi-raw-mlperf-data") -> pl.DataFrame:
         "system.host_processor_frequency": "system.cpu.frequency",
         "system.host_processor_caches": "system.cpu.caches",
         "system.host_processor_vcpu_count": "system.cpu.vcpu_count",
+        "benchmark_name": "benchmark.name",
+        "benchmark_version": "benchmark.version",
+        "datetime_last_commit": "datetime",
+        "debug_uid": "debug_uid",
     }
 
     for old_name, new_name in rename_map.items():
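
These new `rename_map` entries give the v2.0 columns (benchmark name, version, timestamp) dotted names consistent with the rest of the schema. The loop body is not shown in this hunk; below is a minimal sketch of how such a rename loop is typically applied to a polars frame (the example data is hypothetical):

```python
import polars as pl

# Hypothetical input frame with raw column names
df = pl.DataFrame({"benchmark_name": ["llama2-70b"], "benchmark_version": ["v5.0"]})

rename_map = {
    "benchmark_name": "benchmark.name",
    "benchmark_version": "benchmark.version",
}

# Rename only the columns that are actually present in the frame
for old_name, new_name in rename_map.items():
    if old_name in df.columns:
        df = df.rename({old_name: new_name})

print(df.columns)  # ['benchmark.name', 'benchmark.version']
```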
 
@@ -183,6 +190,39 @@ def find_similar_configurations(
     return df.filter(mask)
 
 
+def convert_datetime_to_iso(value: str) -> str | None:
+    """Convert a datetime string to ISO 8601 format."""
+    if not value or value in ["", "N/A", "null"]:
+        MISSING_VALUES["datetime_values"].add(str(value))
+        return None
+
+    try:
+        # Handle formats like "2025/04/03_22:56:53"
+        if "/" in value and "_" in value:
+            # Replace / with - and _ with T for ISO format
+            iso_value = value.replace("/", "-").replace("_", "T")
+            # Validate by parsing
+            datetime.fromisoformat(iso_value)
+            return iso_value
+
+        # Try to parse other common formats and convert to ISO;
+        # add more format patterns as needed
+        for fmt in ["%Y-%m-%d %H:%M:%S", "%Y/%m/%d %H:%M:%S", "%Y-%m-%dT%H:%M:%S"]:
+            try:
+                dt = datetime.strptime(value, fmt)
+                return dt.isoformat()
+            except ValueError:
+                continue
+
+        # If no format matches, log it as a missing value
+        MISSING_VALUES["datetime_values"].add(str(value))
+        return None
+
+    except Exception:
+        MISSING_VALUES["datetime_values"].add(str(value))
+        return None
+
+
 def convert_memory_to_gb(value: str) -> float | None:
     """Convert memory string to GB."""
     if value is None:
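
As a usage illustration, `convert_datetime_to_iso` behaves as follows on a few representative inputs (the values are illustrative, matching the formats handled above):

```python
# Raw MLPerf-style timestamp: "/" and "_" are mapped to "-" and "T"
print(convert_datetime_to_iso("2025/04/03_22:56:53"))  # 2025-04-03T22:56:53

# Space-separated timestamps are handled by the strptime fallback
print(convert_datetime_to_iso("2025-04-03 22:56:53"))  # 2025-04-03T22:56:53

# Unparseable values are recorded in MISSING_VALUES and return None
print(convert_datetime_to_iso("N/A"))  # None
```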
 
@@ -382,6 +422,17 @@ def normalize_memory_values(df: pl.DataFrame) -> pl.DataFrame:
     )
 
 
+def normalize_datetime_values(df: pl.DataFrame) -> pl.DataFrame:
+    """Convert datetime values to ISO 8601 format."""
+    if "datetime" in df.columns:
+        return df.with_columns(
+            pl.col("datetime")
+            .map_elements(convert_datetime_to_iso, return_dtype=str)
+            .alias("datetime")
+        )
+    return df
+
+
 def add_vendor_columns(df: pl.DataFrame) -> pl.DataFrame:
     """Add vendor columns based on model names."""
     return df.with_columns(
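
`normalize_datetime_values` applies the converter element-wise via polars' `map_elements`, keeping the column as strings (`return_dtype=str`) and leaving frames without a `datetime` column untouched. A small sketch with hypothetical data:

```python
import polars as pl

df = pl.DataFrame({"datetime": ["2025/04/03_22:56:53", "N/A"]})
df = normalize_datetime_values(df)
print(df["datetime"].to_list())  # ['2025-04-03T22:56:53', None]
```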
 
@@ -575,6 +626,7 @@ def process_data(base_path: str = "semi-raw-mlperf-data") -> pl.DataFrame:
         load_raw_data(base_path)
         .pipe(clean_string_values)
         .pipe(normalize_memory_values)
+        .pipe(normalize_datetime_values)
         .pipe(cast_columns)
         .pipe(add_vendor_columns)
         .pipe(normalize_interconnect_values)
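
End to end, the updated pipeline can be driven roughly as below (a sketch, assuming `process.py` exposes `process_data` as shown in this diff; the output paths follow the README):

```python
from process import process_data

# Run the full pipeline, now including ISO 8601 datetime normalization
df = process_data("semi-raw-mlperf-data")

# Persist the processed dataset in both formats
df.write_json("OpenMLPerf-dataset/data.json")
df.write_parquet("OpenMLPerf-dataset/data.parquet")
```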