bhatta1 committed · Commit 90ac41b · verified · 1 Parent(s): 13e98ad

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -105,7 +105,7 @@ In Fig 9, we show the progression of accuracy with training for High Signal Task

 **Figure 9:** Average evaluation score on High-Signal tasks versus the number of tokens for 1.4 Billion parameter models. The model trained on GneissWeb consistently outperforms the ones trained on FineWeb.V1.1 and FineWeb-Edu-score-2.

- At 3 and 7 Billion Model Size with 100 Billion Tokens
+ **At 3 and 7 Billion Model Size with 100 Billion Tokens**

 Given that training models of size 3 and 7 Billion parameters requires a lot more compute, and so does evaluation, we have limited training to 100 billion tokens. We see that the 7 Billion parameter models do better than the 3 Billion parameter models. We also see that the models trained on GneissWeb outperform the models trained on FineWeb.V1.1 and FineWeb-Edu-score-2.