bhatta1 committed f3f4470 (verified) · 1 parent: 3ed65d8

Update README.md

Files changed (1): README.md (+1 -1)

README.md CHANGED
@@ -198,7 +198,7 @@ Guided by the distributions of TokensPerChar and TokensPerByte in different cate
 
 In Figure 18, we show the progression of accuracy with training on the High Signal Tasks for a 1.4 billion parameter model trained for 35 billion tokens. We see that for both datasets compared, accuracy increases over the course of training, and the accuracy of the dataset with the Extreme_tokenized quality filter, at 52.78, is higher than the baseline at 51.94.
 
-
+<img src="Extreme.png" alt="Extreme.png" style="width:1000px;"/>
 
 **Figure 18:** Ablation experiment comparing the Extreme_tokenized filter against the FineWeb.V1.1 baseline at the 1.4 billion model size for 35 billion tokens
 
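
The commit embeds Extreme.png, the plot behind Figure 18. As a rough illustration of how an accuracy-versus-training-tokens comparison like this could be rendered, here is a minimal matplotlib sketch; the CSV file name, column names, and evaluation log are assumptions for illustration only and are not part of this commit or dataset.

```python
# Hypothetical sketch: plot High Signal Task accuracy vs. training tokens for the
# Extreme_tokenized-filtered run and the FineWeb.V1.1 baseline (Figure 18 style).
# "high_signal_eval.csv" and the column names are assumed, not from the source.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("high_signal_eval.csv")  # assumed periodic-evaluation log

plt.figure(figsize=(8, 5))
plt.plot(df["tokens_b"], df["baseline_acc"], label="FineWeb.V1.1 baseline")
plt.plot(df["tokens_b"], df["extreme_acc"], label="Extreme_tokenized filter")
plt.xlabel("Training tokens (billions)")
plt.ylabel("High Signal Task accuracy (%)")
plt.title("1.4B parameter model, 35B tokens")
plt.legend()
plt.savefig("Extreme.png", dpi=150, bbox_inches="tight")
```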