bhatta1 committed (verified) · Commit b5c2e83 · Parent: a59104c

Update README.md

Files changed (1): README.md (+2 −2)
README.md CHANGED

@@ -11,7 +11,7 @@
 ## What is it?
 - Recipe for producing a state-of-the-art LLM pre-training dataset with `10+ Trillion` tokens, derived from [FineWeb V1.1.0](https://huggingface.co/datasets/HuggingFaceFW/fineweb)
 - Evaluation results showing more than `2%` average improvement (across multiple random seeds) over FineWeb V1.1.0 on common benchmarks for a `7B` parameter ablation model
-- [Data Prep Kit](https://github.com/IBM/data-prep-kit) [Notebook](https://github.com/data-prep-kit/data-prep-kit/blob/dev/recipes/GneissWeb/GneissWeb.ipynb) for reproducing the annotations and filters on top of FineWeb and [Notebook](https://github.com/ian-cho/data-prep-kit/blob/dev/transforms/universal/bloom/bloom_python.ipynb) for applying a bloom filter on FineWeb to quickly reproduce an approximate version of GneissWeb (without annotations or filters)
+- [Data Prep Kit](https://github.com/IBM/data-prep-kit) [Notebook](https://github.com/data-prep-kit/data-prep-kit/blob/dev/recipes/GneissWeb/GneissWeb.ipynb) for reproducing the annotations and filters on top of FineWeb and [Notebook](https://github.com/data-prep-kit/data-prep-kit/blob/dev/transforms/universal/bloom/bloom_python_HuggingFace.ipynb) for applying a bloom filter on FineWeb to quickly reproduce an approximate version of GneissWeb (without annotations or filters)
 - Details in the [Blog](https://research.ibm.com/blog/gneissweb-for-granite-training) and [Paper](https://huggingface.co/datasets/ibm-granite/GneissWeb/blob/main/GneissWebPaper_Feb21_2025.pdf)
 - Ablation models with `7B` parameters pre-trained on `350B` tokens of [GneissWeb](https://huggingface.co/ibm-granite/GneissWeb.7B_ablation_model_on_350B_GneissWeb.seed1), [FineWeb](https://huggingface.co/ibm-granite/GneissWeb.7B_ablation_model_on_350B_FineWeb.seed1) V1.1.0, and [FineWeb.Edu](https://huggingface.co/ibm-granite/GneissWeb.7B_ablation_model_on_350B_FineWeb.Edu.seed1). For each dataset, three ablation models trained on subsets of 350 billion tokens with different random seeds are being released
 - Gneiss, pronounced "nice" (naɪs), is a durable metamorphic rock, just like IBM’s open-source [Granite](https://huggingface.co/ibm-granite) models trained from it
@@ -103,7 +103,7 @@ Given that training models of size `7 Billion` parameters requires a lot more compute

 4. [Notebook](https://github.com/data-prep-kit/data-prep-kit/blob/dev/recipes/GneissWeb/GneissWeb.ipynb) to recreate GneissWeb using the methods described above

-5. [Notebook](https://github.com/ian-cho/data-prep-kit/blob/dev/transforms/universal/bloom/bloom_python.ipynb) to recreate GneissWeb using a bloom filter built on the document ids of GneissWeb
+5. [Notebook](https://github.com/data-prep-kit/data-prep-kit/blob/dev/transforms/universal/bloom/bloom_python_HuggingFace.ipynb) to recreate GneissWeb using a bloom filter built on the document ids of GneissWeb

 6. [Blog](https://research.ibm.com/blog/gneissweb-for-granite-training) and [Paper](https://arxiv.org/abs/2502.14907)
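The notebook link updated in both hunks implements a shortcut reproduction: membership-test each FineWeb document id against a bloom filter built from the ids of documents that survived the GneissWeb recipe. Below is a minimal sketch of that idea, not the linked notebook's implementation: the toy `BloomFilter` class and its hashing scheme, the placeholder id list, and the `sample-10BT` FineWeb config are all illustrative assumptions; the released filter artifact and its hashing are defined in the notebook itself.

```python
# Sketch: approximate GneissWeb by keeping only FineWeb documents whose
# ids hit a bloom filter of GneissWeb document ids (hypothetical setup).
import hashlib
from datasets import load_dataset


class BloomFilter:
    """Toy bloom filter over string document ids (m bits, k hash probes)."""

    def __init__(self, m: int, k: int):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8 + 1)

    def _positions(self, item: str):
        # Derive k bit positions by salting a SHA-256 hash of the id.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item: str) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: str) -> bool:
        return all((self.bits[p // 8] >> (p % 8)) & 1 for p in self._positions(item))


# Hypothetical: in practice the filter over GneissWeb document ids would be
# loaded from the released artifact rather than rebuilt from a toy id list.
gneissweb_ids = ["<urn:uuid:00000000-0000-0000-0000-000000000000>"]
bloom = BloomFilter(m=10**8, k=7)
for doc_id in gneissweb_ids:
    bloom.add(doc_id)

# Stream FineWeb and keep only documents whose id hits the filter.
fineweb = load_dataset(
    "HuggingFaceFW/fineweb", name="sample-10BT", split="train", streaming=True
)
approx_gneissweb = (row for row in fineweb if row["id"] in bloom)
```

Because bloom filters admit rare false positives (at a rate set by `m` and `k` relative to the number of stored ids) and none of the annotations or filters are re-run, this yields only an approximate version of GneissWeb, exactly as the README bullet notes.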