Update README.md
README.md CHANGED
@@ -15,3 +15,24 @@ configs:
- split: train
  path: data/train-*
---

This dataset was created from [DCLM](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0).
It is a small subset of 138,000 samples intended to be used as calibration data for mlx-lm quantizations.

The script used to create the dataset:

```python
import datasets

# The two DCLM-baseline shards that make up the ~138k-sample subset.
files = [
    "global-shard_01_of_10/local-shard_0_of_10/shard_00000009_processed.jsonl.zst",
    "global-shard_01_of_10/local-shard_0_of_10/shard_00000000_processed.jsonl.zst",
]
ds = datasets.load_dataset("mlfoundations/dclm-baseline-1.0", data_files=files)
ds = ds["train"]

# Keep only the "text" column; drop every other feature.
feats = list(ds.features.keys())
feats.pop(feats.index("text"))
ds = ds.remove_columns(feats)

ds.push_to_hub("mlx-community/dclm-baseline-1.0-138k")
```
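To consume the subset as calibration data, the `text` column can be sampled directly with `datasets`. A minimal sketch, assuming the column layout produced by the script above; the sample size and seed here are arbitrary illustrations, not part of the original script:

```python
import datasets

# Load the published subset and draw a few hundred texts for calibration.
ds = datasets.load_dataset("mlx-community/dclm-baseline-1.0-138k", split="train")
calibration_texts = ds.shuffle(seed=0).select(range(512))["text"]  # 512 is an arbitrary sample size
```

The resulting list of strings can then be fed to whatever calibration loop the quantization tooling expects.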