Languages:
English
bhatta1 committed
Commit a7a5dec · verified · 1 parent: 4b91f06

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -1,8 +1,8 @@
- **Model Summary**
+ **Dataset Summary**
 
  Recently, IBM has introduced GneissWeb; a large dataset yielding around 10 trillion tokens that caters to the data quality and quantity requirements of training LLMs. The models trained using GneissWeb dataset outperform those trained on FineWeb 1.1.0 by 2.14 percentage points in terms of average score computed on a set of 11 commonly used benchmarks
 
- In order to be able to reproduce GneissWeb, we provide here a [Bloom filter](https://dl.acm.org/doi/10.1145/362686.362692) representing all the document ids of FineWeb 1.1.0 whose documents are part of GneissWeb. it is of size 28GB and is of the [rbloom](https://github.com/KenanHanke/rbloom) family of Bloom filters. It is to be probed with the id column of FineWeb 1.1.0 or of Common Crawl.
+
 
 
       **Developers**: IBM Research
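
The paragraph removed in this commit describes probing the rbloom-family Bloom filter with the `id` column of FineWeb 1.1.0. A minimal sketch of what such a probe might look like follows; the filename `gneissweb.bloom` is a hypothetical placeholder, and the SHA-256-based hash is only an assumption borrowed from rbloom's documented pattern — the real filter must be loaded with the exact hash function it was built with.

```python
from hashlib import sha256

def id_hash(obj) -> int:
    """Hash a document id to a signed 128-bit int, the shape rbloom's
    custom hash functions are expected to return (assumed scheme)."""
    digest = sha256(str(obj).encode("utf-8")).digest()
    return int.from_bytes(digest[:16], "big", signed=True)

# Probing the downloaded filter would then look roughly like this
# (requires the rbloom package and the ~28 GB filter file on disk):
#
#   from rbloom import Bloom
#   bf = Bloom.load("gneissweb.bloom", id_hash)  # hypothetical local path
#   doc_id = "<a FineWeb 1.1.0 id>"
#   if doc_id in bf:
#       ...  # this FineWeb document is part of GneissWeb
```

A Bloom filter can report false positives but never false negatives, so a negative probe definitively excludes a document, while a positive probe includes it with high probability.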