**Combining GneissWeb Components into a Winning Recipe**

There are various ways to combine the key ingredients into a recipe, including deciding which components to include and in what order, as well as designing ensemble filtering rules that use multiple quality annotators. We performed rigorous ablations, combining the key ingredients in multiple variations and sequences, with the aim of maximizing downstream task performance under the constraint of retaining at least 10T tokens from FineWeb-V1.1.0.

Figure 19: Key ingredients selected for building the GneissWeb recipe

The GneissWeb recipe illustrated in Figure 1 produces the highest performance gain. The recipe consists of first applying exact substring deduplication, then computing category and quality annotations, and finally applying the ensemble quality filter, as shown in Figure 1. We obtain the GneissWeb dataset of 10T tokens by applying this recipe to the 15T tokens in the 96 snapshots of FineWeb-V1.1.0. We prepared GneissWeb using a version of IBM Data Prep Kit which will be released in open source in the future.
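
At a high level, the recipe is a three-stage pipeline. Below is a minimal sketch of that structure in Python; it is an illustration under assumed interfaces, not the Data Prep Kit implementation, and the function and parameter names (`gneissweb_recipe`, `dedup`, `annotate`, `keep`) are hypothetical.

```python
# Minimal sketch of the GneissWeb recipe stages (illustrative only; the
# actual IBM Data Prep Kit transforms and their APIs may differ).
from typing import Callable, Iterable, List

Document = dict  # e.g. {"text": "...", "annotations": {...}} (hypothetical schema)

def gneissweb_recipe(
    docs: Iterable[Document],
    dedup: Callable[[Iterable[Document]], List[Document]],
    annotate: Callable[[Document], Document],
    keep: Callable[[Document], bool],
) -> List[Document]:
    """Exact substring dedup -> category/quality annotation -> ensemble filter."""
    deduped = dedup(docs)                        # stage 1: exact substring deduplication
    annotated = [annotate(d) for d in deduped]   # stage 2: compute category/quality annotations
    return [d for d in annotated if keep(d)]     # stage 3: apply ensemble quality filter
```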

Equipped with the fastText classifiers, the category-aware readability score filter, and the category-aware extreme-tokenized documents filter, we perform ablations over various ensemble filtering rules. We first select the thresholds for the category-aware readability score filter and the category-aware extreme-tokenized filter as discussed in the sections above. Then, for a given ensemble filtering rule, we tune the thresholds of the fastText classifiers such that at least 10T tokens are retained from the 15T tokens of FineWeb-V1.1.0.
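
The sketch below illustrates one simple way such a threshold could be picked: take the score cutoff at which the retained token count first reaches the target budget. This is a simplified, single-threshold illustration; the actual tuning is over the thresholds of the fastText classifiers jointly under a given ensemble rule, and all names here are hypothetical.

```python
# Simplified sketch: choose a quality-score cutoff that retains at least
# a target fraction of tokens (e.g. 10T of the 15T in FineWeb-V1.1.0).
# In the actual recipe the fastText classifier thresholds are tuned
# jointly under the ensemble rule; this single-score version is illustrative.

def tune_threshold(docs, target_fraction=10 / 15):
    """docs: iterable of (quality_score, num_tokens) pairs (hypothetical)."""
    docs = sorted(docs, reverse=True)              # highest-scoring documents first
    budget = target_fraction * sum(n for _, n in docs)
    kept = 0
    for score, n_tokens in docs:
        kept += n_tokens
        if kept >= budget:
            return score                           # keep documents with score >= threshold
    return float("-inf")                           # keep everything if target is unreachable
```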

Specifically, we consider the following two ensemble aggregation rules, using the notation:

A: Custom-built fastText quality filter

B: Custom-built category-aware readability score quality filter, leveraging the custom-built fastText category classifier

C: Custom-built category-aware extreme-tokenized quality filter, leveraging the custom-built fastText category classifier

GneissWeb Recipe:

Exact substring deduplication → ((A AND B) OR (A AND C))

GneissWeb ensemble filtering rule: A document is retained if either the fastText combination and the category-aware readability score filter agree, or the fastText combination and the category-aware extreme-tokenized filter agree. Here the fastText combination is the logical OR of the fastText classifiers, i.e., a document passes if either of the fastText classifiers agrees. See the detailed rule in Figure 1.

Recipe 2:

Exact substring deduplication → (A AND B AND C)

Ensemble filtering rule 2: A document is retained if either of the fastText classifiers agrees, the category-aware readability score filter agrees, and the category-aware extreme-tokenized filter agrees. Note that this rule is equivalent to sequentially applying the filters (in arbitrary order). Both rules are sketched in code below.
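
Expressed as code, the two aggregation rules are simple boolean predicates over per-document filter decisions. The sketch below assumes each document carries boolean annotations from the filters; the field names are hypothetical placeholders, not the dataset's actual schema.

```python
# Sketch of the two ensemble aggregation rules as boolean predicates.
# Annotation keys are hypothetical placeholders for the per-document
# filter decisions described above.

def fasttext_combination(doc: dict) -> bool:
    """A: logical OR of the fastText quality classifiers."""
    return doc["fasttext_1_ok"] or doc["fasttext_2_ok"]

def gneissweb_rule(doc: dict) -> bool:
    """GneissWeb rule: (A AND B) OR (A AND C)."""
    a = fasttext_combination(doc)
    b = doc["readability_ok"]         # B: category-aware readability score filter
    c = doc["extreme_tokenized_ok"]   # C: category-aware extreme-tokenized filter
    return (a and b) or (a and c)

def rule_2(doc: dict) -> bool:
    """Rule 2: A AND B AND C (equivalent to applying the filters sequentially)."""
    return (
        fasttext_combination(doc)
        and doc["readability_ok"]
        and doc["extreme_tokenized_ok"]
    )
```

Note that the GneissWeb rule factors as A AND (B OR C): a document must pass the fastText combination and at least one of the two category-aware filters.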

Figure 20 shows the average eval score on high-signal tasks as well as extended tasks for these filtering rules, along with the FineWeb-V1.1.0 baseline. We observe that the GneissWeb ensemble filtering rule outperforms the other rule on both high-signal and extended tasks.

Figure 20: Comparison of ablations at 7 billion model size for 100 billion tokens

**Conclusion and Future Work**

This blog presents the GneissWeb dataset, produced by IBM Research using an internal version of IBM Data Prep Kit. GneissWeb consists of 96 Common Crawl snapshots and outperforms some state-of-the-art datasets of comparable size. We continue to perform further data ablation experiments and plan to open-source the recipe via IBM Data Prep Kit. We are currently processing the latest 7 snapshots, which we aim to include in GneissWeb after conducting further evaluations and verifications.

  **Dataset Summary**