---
dataset_info:
  features:
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 763529114
      num_examples: 138316
  download_size: 463402166
  dataset_size: 763529114
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

This dataset was created from DCLM-Baseline (`mlfoundations/dclm-baseline-1.0`). It is a small subset of 138,316 samples intended to be used as calibration data for mlx-lm quantizations.
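
To use it, load the `train` split with `datasets` and read the `text` column. A minimal sketch:

```python
from datasets import load_dataset

# The subset has a single `text` column, one document per row.
calib = load_dataset("mlx-community/dclm-baseline-1.0-138k", split="train")
print(calib[0]["text"][:200])
```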

The script used to create the dataset:

```python
import datasets

# Two shards of DCLM-Baseline give roughly 138k documents.
files = [
    "global-shard_01_of_10/local-shard_0_of_10/shard_00000009_processed.jsonl.zst",
    "global-shard_01_of_10/local-shard_0_of_10/shard_00000000_processed.jsonl.zst",
]
ds = datasets.load_dataset("mlfoundations/dclm-baseline-1.0", data_files=files)
ds = ds["train"]

# Drop every column except `text`.
extra_columns = [name for name in ds.column_names if name != "text"]
ds = ds.remove_columns(extra_columns)
ds.push_to_hub("mlx-community/dclm-baseline-1.0-138k")
```
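
Only the `text` column is kept, which keeps the download compact (about 463 MB for roughly 764 MB of text). For calibration one usually draws a fixed number of samples rather than the whole set; a minimal sketch, where the sample count and character cap are illustrative choices, not prescribed values:

```python
import random

from datasets import load_dataset

# Sample a few hundred calibration texts; 256 and 8192 are illustrative.
ds = load_dataset("mlx-community/dclm-baseline-1.0-138k", split="train")
random.seed(0)
indices = random.sample(range(len(ds)), k=256)
calib_texts = [ds[i]["text"][:8192] for i in indices]
```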