Flyben committed on
Commit 05feffe (verified)
1 Parent(s): e026d60

Upload 13 files
BHC_Test1/README.md ADDED
@@ -0,0 +1,202 @@
+ ---
+ base_model: /home/bingxing2/ailab/gaoben/models/Mistral-7B-Instruct/Mistral-7B-Instruct-v0.2/AI-ModelScope/Mistral-7B-Instruct-v0___2
+ library_name: peft
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
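
The card leaves this section unfilled. A minimal sketch of how an adapter like this is typically loaded with PEFT, assuming the adapter were published under the hypothetical repo id `Flyben/BHC_Test1` and using the public Hub id of the Mistral base (the committed `adapter_config.json` records only a local base-model path):

```python
# Sketch only: load the LoRA adapter on top of its Mistral-7B-Instruct-v0.2 base.
# "Flyben/BHC_Test1" is an assumed repo id; substitute the actual adapter location.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed Hub equivalent of the local base path
adapter_id = "Flyben/BHC_Test1"                 # hypothetical adapter repo id

tokenizer = AutoTokenizer.from_pretrained(adapter_id)  # this commit ships its own tokenizer files
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)    # attach the LoRA weights

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```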
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.15.1
BHC_Test1/adapter_config.json ADDED
@@ -0,0 +1,39 @@
+ {
+ "alpha_pattern": {},
+ "auto_mapping": null,
+ "base_model_name_or_path": "/home/bingxing2/ailab/gaoben/models/Mistral-7B-Instruct/Mistral-7B-Instruct-v0.2/AI-ModelScope/Mistral-7B-Instruct-v0___2",
+ "bias": "none",
+ "corda_config": null,
+ "eva_config": null,
+ "exclude_modules": null,
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 32,
+ "lora_bias": false,
+ "lora_dropout": 0.0,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "r": 16,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": [
+ "o_proj",
+ "gate_proj",
+ "q_proj",
+ "v_proj",
+ "up_proj",
+ "down_proj",
+ "k_proj"
+ ],
+ "task_type": "CAUSAL_LM",
+ "trainable_token_indices": null,
+ "use_dora": false,
+ "use_rslora": false
+ }
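
This config corresponds to a `peft.LoraConfig` targeting all seven attention and MLP projections of the Mistral architecture. A sketch (not the original training script) of a config that would serialize to roughly the JSON above, with unlisted fields left at their defaults:

```python
# Sketch only: a LoraConfig matching the committed adapter_config.json.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,            # LoRA rank
    lora_alpha=32,   # scaling numerator (effective scale alpha/r = 2)
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[  # all attention and MLP projections
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```

At r=16 over these seven projections, Mistral-7B yields roughly 42M trainable adapter parameters, which at 16-bit precision is consistent with the ~84 MB `adapter_model.safetensors` recorded below.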
BHC_Test1/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3c3c6253a9f2e2b59bb23380c6de138cadd49183138bcfc62923a4858f9540d9
+ size 83946192
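
The large binaries in this commit are stored as Git LFS pointers like the one above. A small sketch of checking a downloaded copy of the adapter weights against the sha256 recorded in the pointer:

```python
# Sketch only: verify adapter_model.safetensors against the LFS pointer's sha256.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

assert sha256_of("adapter_model.safetensors") == (
    "3c3c6253a9f2e2b59bb23380c6de138cadd49183138bcfc62923a4858f9540d9"
)
```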
BHC_Test1/latest ADDED
@@ -0,0 +1 @@
+ global_step3982
BHC_Test1/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0886b5e6b4eb6c54d008834760837138a75d96ac8156628b1654cc847af0e990
+ size 14244
BHC_Test1/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4b7c8d9564d916465ba3eebeffbbcae87e1948f601895c602a2242e200667fcc
+ size 1064
BHC_Test1/special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
+ {
+ "bos_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": "</s>",
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
BHC_Test1/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
BHC_Test1/tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
+ size 493443
BHC_Test1/tokenizer_config.json ADDED
@@ -0,0 +1,47 @@
+ {
+ "add_bos_token": true,
+ "add_eos_token": false,
+ "add_prefix_space": null,
+ "added_tokens_decoder": {
+ "0": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "additional_special_tokens": [],
+ "bos_token": "<s>",
+ "chat_template": "{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if message['role'] == 'user' %}{{ '[INST] ' + message['content'] + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ message['content'] + eos_token}}{% else %}{{ raise_exception('Only user and assistant roles are supported!') }}{% endif %}{% endfor %}",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "</s>",
+ "extra_special_tokens": {},
+ "legacy": true,
+ "model_max_length": 1000000000000000019884624838656,
+ "pad_token": "</s>",
+ "padding_side": "right",
+ "sp_model_kwargs": {},
+ "spaces_between_special_tokens": false,
+ "split_special_tokens": false,
+ "tokenizer_class": "LlamaTokenizer",
+ "unk_token": "<unk>",
+ "use_default_system_prompt": false
+ }
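
The `chat_template` above is the standard Mistral instruct template: it enforces alternating user/assistant turns (starting with user) and renders user turns inside `[INST] ... [/INST]`, with each assistant turn followed by the EOS token. A sketch of what it produces, again assuming the hypothetical repo id used earlier:

```python
# Sketch only: render a conversation with the chat_template from tokenizer_config.json.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Flyben/BHC_Test1")  # hypothetical repo id

messages = [
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
    {"role": "user", "content": "How are you?"},
]
print(tok.apply_chat_template(messages, tokenize=False))
# Expected: <s>[INST] Hi [/INST]Hello!</s>[INST] How are you? [/INST]
```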
BHC_Test1/trainer_state.json ADDED
@@ -0,0 +1,3234 @@
1
+ {
2
+ "best_global_step": null,
3
+ "best_metric": null,
4
+ "best_model_checkpoint": null,
5
+ "epoch": 20.837148463047743,
6
+ "eval_steps": 500,
7
+ "global_step": 4000,
8
+ "is_hyper_param_search": false,
9
+ "is_local_process_zero": true,
10
+ "is_world_process_zero": true,
11
+ "log_history": [
12
+ {
13
+ "epoch": 0.052321778940483975,
14
+ "grad_norm": 0.5172898769378662,
15
+ "learning_rate": 4.9999695642048685e-05,
16
+ "loss": 1.2268,
17
+ "num_input_tokens_seen": 55264,
18
+ "step": 10
19
+ },
20
+ {
21
+ "epoch": 0.10464355788096795,
22
+ "grad_norm": 0.2547127306461334,
23
+ "learning_rate": 4.9998643550002796e-05,
24
+ "loss": 0.163,
25
+ "num_input_tokens_seen": 111136,
26
+ "step": 20
27
+ },
28
+ {
29
+ "epoch": 0.15696533682145192,
30
+ "grad_norm": 0.19801495969295502,
31
+ "learning_rate": 4.999683999797514e-05,
32
+ "loss": 0.1584,
33
+ "num_input_tokens_seen": 167232,
34
+ "step": 30
35
+ },
36
+ {
37
+ "epoch": 0.2092871157619359,
38
+ "grad_norm": 0.17607145011425018,
39
+ "learning_rate": 4.999428504018057e-05,
40
+ "loss": 0.1571,
41
+ "num_input_tokens_seen": 222368,
42
+ "step": 40
43
+ },
44
+ {
45
+ "epoch": 0.2616088947024199,
46
+ "grad_norm": 0.123909592628479,
47
+ "learning_rate": 4.999097875342117e-05,
48
+ "loss": 0.1535,
49
+ "num_input_tokens_seen": 280080,
50
+ "step": 50
51
+ },
52
+ {
53
+ "epoch": 0.31393067364290383,
54
+ "grad_norm": 0.15530619025230408,
55
+ "learning_rate": 4.998692123708403e-05,
56
+ "loss": 0.1592,
57
+ "num_input_tokens_seen": 336144,
58
+ "step": 60
59
+ },
60
+ {
61
+ "epoch": 0.36625245258338784,
62
+ "grad_norm": 0.08514747023582458,
63
+ "learning_rate": 4.998211261313822e-05,
64
+ "loss": 0.1581,
65
+ "num_input_tokens_seen": 392928,
66
+ "step": 70
67
+ },
68
+ {
69
+ "epoch": 0.4185742315238718,
70
+ "grad_norm": 0.08711712062358856,
71
+ "learning_rate": 4.997655302613111e-05,
72
+ "loss": 0.1479,
73
+ "num_input_tokens_seen": 448288,
74
+ "step": 80
75
+ },
76
+ {
77
+ "epoch": 0.4708960104643558,
78
+ "grad_norm": 0.14576469361782074,
79
+ "learning_rate": 4.997024264318406e-05,
80
+ "loss": 0.1574,
81
+ "num_input_tokens_seen": 504112,
82
+ "step": 90
83
+ },
84
+ {
85
+ "epoch": 0.5232177894048398,
86
+ "grad_norm": 0.14594055712223053,
87
+ "learning_rate": 4.9963181653987373e-05,
88
+ "loss": 0.1532,
89
+ "num_input_tokens_seen": 559824,
90
+ "step": 100
91
+ },
92
+ {
93
+ "epoch": 0.5755395683453237,
94
+ "grad_norm": 1.3401381969451904,
95
+ "learning_rate": 4.99553702707946e-05,
96
+ "loss": 0.1527,
97
+ "num_input_tokens_seen": 615856,
98
+ "step": 110
99
+ },
100
+ {
101
+ "epoch": 0.6278613472858077,
102
+ "grad_norm": 0.21902425587177277,
103
+ "learning_rate": 4.9946808728416143e-05,
104
+ "loss": 0.1464,
105
+ "num_input_tokens_seen": 672448,
106
+ "step": 120
107
+ },
108
+ {
109
+ "epoch": 0.6801831262262917,
110
+ "grad_norm": 0.24769121408462524,
111
+ "learning_rate": 4.993749728421224e-05,
112
+ "loss": 0.1545,
113
+ "num_input_tokens_seen": 727840,
114
+ "step": 130
115
+ },
116
+ {
117
+ "epoch": 0.7325049051667757,
118
+ "grad_norm": 0.2848469018936157,
119
+ "learning_rate": 4.992743621808518e-05,
120
+ "loss": 0.1448,
121
+ "num_input_tokens_seen": 784496,
122
+ "step": 140
123
+ },
124
+ {
125
+ "epoch": 0.7848266841072596,
126
+ "grad_norm": 0.31721895933151245,
127
+ "learning_rate": 4.991662583247092e-05,
128
+ "loss": 0.1458,
129
+ "num_input_tokens_seen": 840784,
130
+ "step": 150
131
+ },
132
+ {
133
+ "epoch": 0.8371484630477436,
134
+ "grad_norm": 0.21474598348140717,
135
+ "learning_rate": 4.9905066452329964e-05,
136
+ "loss": 0.1445,
137
+ "num_input_tokens_seen": 897920,
138
+ "step": 160
139
+ },
140
+ {
141
+ "epoch": 0.8894702419882276,
142
+ "grad_norm": 0.5110652446746826,
143
+ "learning_rate": 4.9892758425137643e-05,
144
+ "loss": 0.1471,
145
+ "num_input_tokens_seen": 953184,
146
+ "step": 170
147
+ },
148
+ {
149
+ "epoch": 0.9417920209287116,
150
+ "grad_norm": 0.29877740144729614,
151
+ "learning_rate": 4.987970212087363e-05,
152
+ "loss": 0.1464,
153
+ "num_input_tokens_seen": 1009168,
154
+ "step": 180
155
+ },
156
+ {
157
+ "epoch": 0.9941137998691956,
158
+ "grad_norm": 0.16597020626068115,
159
+ "learning_rate": 4.986589793201081e-05,
160
+ "loss": 0.1407,
161
+ "num_input_tokens_seen": 1065200,
162
+ "step": 190
163
+ },
164
+ {
165
+ "epoch": 1.0418574231523872,
166
+ "grad_norm": 0.2527844309806824,
167
+ "learning_rate": 4.985134627350353e-05,
168
+ "loss": 0.1492,
169
+ "num_input_tokens_seen": 1115736,
170
+ "step": 200
171
+ },
172
+ {
173
+ "epoch": 1.0941792020928711,
174
+ "grad_norm": 0.19597752392292023,
175
+ "learning_rate": 4.9836047582775084e-05,
176
+ "loss": 0.1431,
177
+ "num_input_tokens_seen": 1171656,
178
+ "step": 210
179
+ },
180
+ {
181
+ "epoch": 1.146500981033355,
182
+ "grad_norm": 0.23703065514564514,
183
+ "learning_rate": 4.9820002319704576e-05,
184
+ "loss": 0.1434,
185
+ "num_input_tokens_seen": 1229224,
186
+ "step": 220
187
+ },
188
+ {
189
+ "epoch": 1.198822759973839,
190
+ "grad_norm": 0.19536039233207703,
191
+ "learning_rate": 4.98032109666131e-05,
192
+ "loss": 0.14,
193
+ "num_input_tokens_seen": 1285992,
194
+ "step": 230
195
+ },
196
+ {
197
+ "epoch": 1.251144538914323,
198
+ "grad_norm": 0.3111051321029663,
199
+ "learning_rate": 4.978567402824924e-05,
200
+ "loss": 0.1355,
201
+ "num_input_tokens_seen": 1341256,
202
+ "step": 240
203
+ },
204
+ {
205
+ "epoch": 1.3034663178548072,
206
+ "grad_norm": 0.19106437265872955,
207
+ "learning_rate": 4.97673920317739e-05,
208
+ "loss": 0.1425,
209
+ "num_input_tokens_seen": 1396392,
210
+ "step": 250
211
+ },
212
+ {
213
+ "epoch": 1.355788096795291,
214
+ "grad_norm": 0.30303388833999634,
215
+ "learning_rate": 4.9748365526744423e-05,
216
+ "loss": 0.1343,
217
+ "num_input_tokens_seen": 1454088,
218
+ "step": 260
219
+ },
220
+ {
221
+ "epoch": 1.408109875735775,
222
+ "grad_norm": 0.3765646815299988,
223
+ "learning_rate": 4.972859508509816e-05,
224
+ "loss": 0.1343,
225
+ "num_input_tokens_seen": 1509656,
226
+ "step": 270
227
+ },
228
+ {
229
+ "epoch": 1.460431654676259,
230
+ "grad_norm": 0.3608955442905426,
231
+ "learning_rate": 4.9708081301135155e-05,
232
+ "loss": 0.1377,
233
+ "num_input_tokens_seen": 1565976,
234
+ "step": 280
235
+ },
236
+ {
237
+ "epoch": 1.512753433616743,
238
+ "grad_norm": 0.19421738386154175,
239
+ "learning_rate": 4.9686824791500396e-05,
240
+ "loss": 0.1381,
241
+ "num_input_tokens_seen": 1621544,
242
+ "step": 290
243
+ },
244
+ {
245
+ "epoch": 1.565075212557227,
246
+ "grad_norm": 0.6034450531005859,
247
+ "learning_rate": 4.96648261951652e-05,
248
+ "loss": 0.1339,
249
+ "num_input_tokens_seen": 1677624,
250
+ "step": 300
251
+ },
252
+ {
253
+ "epoch": 1.6173969914977109,
254
+ "grad_norm": 0.28776681423187256,
255
+ "learning_rate": 4.964208617340803e-05,
256
+ "loss": 0.1351,
257
+ "num_input_tokens_seen": 1733192,
258
+ "step": 310
259
+ },
260
+ {
261
+ "epoch": 1.669718770438195,
262
+ "grad_norm": 0.15169182419776917,
263
+ "learning_rate": 4.961860540979464e-05,
264
+ "loss": 0.1406,
265
+ "num_input_tokens_seen": 1788648,
266
+ "step": 320
267
+ },
268
+ {
269
+ "epoch": 1.7220405493786788,
270
+ "grad_norm": 0.24536730349063873,
271
+ "learning_rate": 4.9594384610157483e-05,
272
+ "loss": 0.1354,
273
+ "num_input_tokens_seen": 1844360,
274
+ "step": 330
275
+ },
276
+ {
277
+ "epoch": 1.774362328319163,
278
+ "grad_norm": 0.4901825487613678,
279
+ "learning_rate": 4.9569424502574544e-05,
280
+ "loss": 0.1318,
281
+ "num_input_tokens_seen": 1900888,
282
+ "step": 340
283
+ },
284
+ {
285
+ "epoch": 1.8266841072596467,
286
+ "grad_norm": 0.33420756459236145,
287
+ "learning_rate": 4.954372583734741e-05,
288
+ "loss": 0.1324,
289
+ "num_input_tokens_seen": 1956840,
290
+ "step": 350
291
+ },
292
+ {
293
+ "epoch": 1.8790058862001309,
294
+ "grad_norm": 0.24992604553699493,
295
+ "learning_rate": 4.951728938697872e-05,
296
+ "loss": 0.1352,
297
+ "num_input_tokens_seen": 2013624,
298
+ "step": 360
299
+ },
300
+ {
301
+ "epoch": 1.9313276651406148,
302
+ "grad_norm": 0.3095349669456482,
303
+ "learning_rate": 4.9490115946148985e-05,
304
+ "loss": 0.1332,
305
+ "num_input_tokens_seen": 2069768,
306
+ "step": 370
307
+ },
308
+ {
309
+ "epoch": 1.9836494440810988,
310
+ "grad_norm": 0.35465294122695923,
311
+ "learning_rate": 4.946220633169266e-05,
312
+ "loss": 0.128,
313
+ "num_input_tokens_seen": 2125768,
314
+ "step": 380
315
+ },
316
+ {
317
+ "epoch": 2.0313930673642906,
318
+ "grad_norm": 0.2656111419200897,
319
+ "learning_rate": 4.943356138257359e-05,
320
+ "loss": 0.1356,
321
+ "num_input_tokens_seen": 2176424,
322
+ "step": 390
323
+ },
324
+ {
325
+ "epoch": 2.0837148463047743,
326
+ "grad_norm": 0.16828453540802002,
327
+ "learning_rate": 4.940418195985983e-05,
328
+ "loss": 0.1332,
329
+ "num_input_tokens_seen": 2232200,
330
+ "step": 400
331
+ },
332
+ {
333
+ "epoch": 2.1360366252452585,
334
+ "grad_norm": 0.22813495993614197,
335
+ "learning_rate": 4.9374068946697695e-05,
336
+ "loss": 0.1296,
337
+ "num_input_tokens_seen": 2289432,
338
+ "step": 410
339
+ },
340
+ {
341
+ "epoch": 2.1883584041857422,
342
+ "grad_norm": 0.30772924423217773,
343
+ "learning_rate": 4.934322324828529e-05,
344
+ "loss": 0.1288,
345
+ "num_input_tokens_seen": 2345576,
346
+ "step": 420
347
+ },
348
+ {
349
+ "epoch": 2.2406801831262264,
350
+ "grad_norm": 0.29362913966178894,
351
+ "learning_rate": 4.931164579184523e-05,
352
+ "loss": 0.1307,
353
+ "num_input_tokens_seen": 2400696,
354
+ "step": 430
355
+ },
356
+ {
357
+ "epoch": 2.29300196206671,
358
+ "grad_norm": 0.21813659369945526,
359
+ "learning_rate": 4.9279337526596814e-05,
360
+ "loss": 0.1292,
361
+ "num_input_tokens_seen": 2456552,
362
+ "step": 440
363
+ },
364
+ {
365
+ "epoch": 2.3453237410071943,
366
+ "grad_norm": 0.3129786550998688,
367
+ "learning_rate": 4.924629942372748e-05,
368
+ "loss": 0.1329,
369
+ "num_input_tokens_seen": 2512808,
370
+ "step": 450
371
+ },
372
+ {
373
+ "epoch": 2.397645519947678,
374
+ "grad_norm": 0.19762465357780457,
375
+ "learning_rate": 4.9212532476363596e-05,
376
+ "loss": 0.1261,
377
+ "num_input_tokens_seen": 2569016,
378
+ "step": 460
379
+ },
380
+ {
381
+ "epoch": 2.4499672988881622,
382
+ "grad_norm": 0.30003225803375244,
383
+ "learning_rate": 4.917803769954062e-05,
384
+ "loss": 0.124,
385
+ "num_input_tokens_seen": 2625208,
386
+ "step": 470
387
+ },
388
+ {
389
+ "epoch": 2.502289077828646,
390
+ "grad_norm": 0.2731872797012329,
391
+ "learning_rate": 4.9142816130172596e-05,
392
+ "loss": 0.1285,
393
+ "num_input_tokens_seen": 2680824,
394
+ "step": 480
395
+ },
396
+ {
397
+ "epoch": 2.55461085676913,
398
+ "grad_norm": 0.25849199295043945,
399
+ "learning_rate": 4.9106868827020955e-05,
400
+ "loss": 0.1323,
401
+ "num_input_tokens_seen": 2737304,
402
+ "step": 490
403
+ },
404
+ {
405
+ "epoch": 2.6069326357096143,
406
+ "grad_norm": 0.24799242615699768,
407
+ "learning_rate": 4.907019687066271e-05,
408
+ "loss": 0.1299,
409
+ "num_input_tokens_seen": 2793432,
410
+ "step": 500
411
+ },
412
+ {
413
+ "epoch": 2.659254414650098,
414
+ "grad_norm": 0.1981252282857895,
415
+ "learning_rate": 4.9032801363458e-05,
416
+ "loss": 0.1281,
417
+ "num_input_tokens_seen": 2850008,
418
+ "step": 510
419
+ },
420
+ {
421
+ "epoch": 2.711576193590582,
422
+ "grad_norm": 0.40522608160972595,
423
+ "learning_rate": 4.8994683429516896e-05,
424
+ "loss": 0.1309,
425
+ "num_input_tokens_seen": 2905304,
426
+ "step": 520
427
+ },
428
+ {
429
+ "epoch": 2.763897972531066,
430
+ "grad_norm": 0.2624233067035675,
431
+ "learning_rate": 4.895584421466565e-05,
432
+ "loss": 0.1271,
433
+ "num_input_tokens_seen": 2961112,
434
+ "step": 530
435
+ },
436
+ {
437
+ "epoch": 2.81621975147155,
438
+ "grad_norm": 0.28190159797668457,
439
+ "learning_rate": 4.8916284886412214e-05,
440
+ "loss": 0.1222,
441
+ "num_input_tokens_seen": 3017208,
442
+ "step": 540
443
+ },
444
+ {
445
+ "epoch": 2.868541530412034,
446
+ "grad_norm": 0.21756145358085632,
447
+ "learning_rate": 4.887600663391122e-05,
448
+ "loss": 0.1288,
449
+ "num_input_tokens_seen": 3074216,
450
+ "step": 550
451
+ },
452
+ {
453
+ "epoch": 2.920863309352518,
454
+ "grad_norm": 0.33730047941207886,
455
+ "learning_rate": 4.883501066792814e-05,
456
+ "loss": 0.1267,
457
+ "num_input_tokens_seen": 3129784,
458
+ "step": 560
459
+ },
460
+ {
461
+ "epoch": 2.973185088293002,
462
+ "grad_norm": 0.24758093059062958,
463
+ "learning_rate": 4.8793298220802963e-05,
464
+ "loss": 0.1288,
465
+ "num_input_tokens_seen": 3187048,
466
+ "step": 570
467
+ },
468
+ {
469
+ "epoch": 3.0209287115761936,
470
+ "grad_norm": 0.15806905925273895,
471
+ "learning_rate": 4.87508705464131e-05,
472
+ "loss": 0.1316,
473
+ "num_input_tokens_seen": 3238608,
474
+ "step": 580
475
+ },
476
+ {
477
+ "epoch": 3.0732504905166778,
478
+ "grad_norm": 0.28546327352523804,
479
+ "learning_rate": 4.8707728920135744e-05,
480
+ "loss": 0.1256,
481
+ "num_input_tokens_seen": 3294400,
482
+ "step": 590
483
+ },
484
+ {
485
+ "epoch": 3.1255722694571615,
486
+ "grad_norm": 0.3583323359489441,
487
+ "learning_rate": 4.866387463880947e-05,
488
+ "loss": 0.1223,
489
+ "num_input_tokens_seen": 3350560,
490
+ "step": 600
491
+ },
492
+ {
493
+ "epoch": 3.1778940483976457,
494
+ "grad_norm": 0.21477927267551422,
495
+ "learning_rate": 4.861930902069531e-05,
496
+ "loss": 0.1192,
497
+ "num_input_tokens_seen": 3406256,
498
+ "step": 610
499
+ },
500
+ {
501
+ "epoch": 3.2302158273381294,
502
+ "grad_norm": 0.3112389147281647,
503
+ "learning_rate": 4.8574033405437094e-05,
504
+ "loss": 0.1209,
505
+ "num_input_tokens_seen": 3461680,
506
+ "step": 620
507
+ },
508
+ {
509
+ "epoch": 3.2825376062786136,
510
+ "grad_norm": 0.22634848952293396,
511
+ "learning_rate": 4.8528049154021186e-05,
512
+ "loss": 0.1318,
513
+ "num_input_tokens_seen": 3517984,
514
+ "step": 630
515
+ },
516
+ {
517
+ "epoch": 3.3348593852190973,
518
+ "grad_norm": 0.2349083423614502,
519
+ "learning_rate": 4.848135764873557e-05,
520
+ "loss": 0.1264,
521
+ "num_input_tokens_seen": 3573376,
522
+ "step": 640
523
+ },
524
+ {
525
+ "epoch": 3.3871811641595815,
526
+ "grad_norm": 0.29488080739974976,
527
+ "learning_rate": 4.843396029312832e-05,
528
+ "loss": 0.1238,
529
+ "num_input_tokens_seen": 3630544,
530
+ "step": 650
531
+ },
532
+ {
533
+ "epoch": 3.439502943100065,
534
+ "grad_norm": 0.28836989402770996,
535
+ "learning_rate": 4.838585851196537e-05,
536
+ "loss": 0.124,
537
+ "num_input_tokens_seen": 3686432,
538
+ "step": 660
539
+ },
540
+ {
541
+ "epoch": 3.4918247220405494,
542
+ "grad_norm": 0.2634967267513275,
543
+ "learning_rate": 4.833705375118772e-05,
544
+ "loss": 0.1212,
545
+ "num_input_tokens_seen": 3741552,
546
+ "step": 670
547
+ },
548
+ {
549
+ "epoch": 3.544146500981033,
550
+ "grad_norm": 0.30187705159187317,
551
+ "learning_rate": 4.828754747786796e-05,
552
+ "loss": 0.1225,
553
+ "num_input_tokens_seen": 3797760,
554
+ "step": 680
555
+ },
556
+ {
557
+ "epoch": 3.5964682799215173,
558
+ "grad_norm": 0.31589755415916443,
559
+ "learning_rate": 4.823734118016616e-05,
560
+ "loss": 0.1236,
561
+ "num_input_tokens_seen": 3854704,
562
+ "step": 690
563
+ },
564
+ {
565
+ "epoch": 3.6487900588620015,
566
+ "grad_norm": 0.24722936749458313,
567
+ "learning_rate": 4.818643636728515e-05,
568
+ "loss": 0.1154,
569
+ "num_input_tokens_seen": 3910544,
570
+ "step": 700
571
+ },
572
+ {
573
+ "epoch": 3.701111837802485,
574
+ "grad_norm": 0.20096950232982635,
575
+ "learning_rate": 4.813483456942515e-05,
576
+ "loss": 0.119,
577
+ "num_input_tokens_seen": 3966448,
578
+ "step": 710
579
+ },
580
+ {
581
+ "epoch": 3.7534336167429694,
582
+ "grad_norm": 0.22696824371814728,
583
+ "learning_rate": 4.808253733773775e-05,
584
+ "loss": 0.1277,
585
+ "num_input_tokens_seen": 4022880,
586
+ "step": 720
587
+ },
588
+ {
589
+ "epoch": 3.805755395683453,
590
+ "grad_norm": 0.2600644826889038,
591
+ "learning_rate": 4.8029546244279346e-05,
592
+ "loss": 0.1245,
593
+ "num_input_tokens_seen": 4079280,
594
+ "step": 730
595
+ },
596
+ {
597
+ "epoch": 3.8580771746239373,
598
+ "grad_norm": 0.3461725115776062,
599
+ "learning_rate": 4.797586288196378e-05,
600
+ "loss": 0.1245,
601
+ "num_input_tokens_seen": 4136096,
602
+ "step": 740
603
+ },
604
+ {
605
+ "epoch": 3.910398953564421,
606
+ "grad_norm": 0.20807071030139923,
607
+ "learning_rate": 4.792148886451456e-05,
608
+ "loss": 0.1238,
609
+ "num_input_tokens_seen": 4192832,
610
+ "step": 750
611
+ },
612
+ {
613
+ "epoch": 3.962720732504905,
614
+ "grad_norm": 0.25355812907218933,
615
+ "learning_rate": 4.7866425826416316e-05,
616
+ "loss": 0.1249,
617
+ "num_input_tokens_seen": 4248368,
618
+ "step": 760
619
+ },
620
+ {
621
+ "epoch": 4.010464355788097,
622
+ "grad_norm": 0.28272193670272827,
623
+ "learning_rate": 4.781067542286561e-05,
624
+ "loss": 0.1245,
625
+ "num_input_tokens_seen": 4299232,
626
+ "step": 770
627
+ },
628
+ {
629
+ "epoch": 4.062786134728581,
630
+ "grad_norm": 0.39174169301986694,
631
+ "learning_rate": 4.7754239329721274e-05,
632
+ "loss": 0.1216,
633
+ "num_input_tokens_seen": 4356192,
634
+ "step": 780
635
+ },
636
+ {
637
+ "epoch": 4.115107913669065,
638
+ "grad_norm": 0.2807229459285736,
639
+ "learning_rate": 4.769711924345397e-05,
640
+ "loss": 0.1195,
641
+ "num_input_tokens_seen": 4411648,
642
+ "step": 790
643
+ },
644
+ {
645
+ "epoch": 4.167429692609549,
646
+ "grad_norm": 0.24898973107337952,
647
+ "learning_rate": 4.763931688109524e-05,
648
+ "loss": 0.1174,
649
+ "num_input_tokens_seen": 4467568,
650
+ "step": 800
651
+ },
652
+ {
653
+ "epoch": 4.219751471550032,
654
+ "grad_norm": 0.3005022704601288,
655
+ "learning_rate": 4.7580833980185816e-05,
656
+ "loss": 0.1251,
657
+ "num_input_tokens_seen": 4522624,
658
+ "step": 810
659
+ },
660
+ {
661
+ "epoch": 4.272073250490517,
662
+ "grad_norm": 0.3487663269042969,
663
+ "learning_rate": 4.7521672298723495e-05,
664
+ "loss": 0.1182,
665
+ "num_input_tokens_seen": 4578640,
666
+ "step": 820
667
+ },
668
+ {
669
+ "epoch": 4.324395029431001,
670
+ "grad_norm": 0.19975335896015167,
671
+ "learning_rate": 4.7461833615110194e-05,
672
+ "loss": 0.1211,
673
+ "num_input_tokens_seen": 4633712,
674
+ "step": 830
675
+ },
676
+ {
677
+ "epoch": 4.3767168083714845,
678
+ "grad_norm": 0.2742094397544861,
679
+ "learning_rate": 4.740131972809856e-05,
680
+ "loss": 0.1208,
681
+ "num_input_tokens_seen": 4690160,
682
+ "step": 840
683
+ },
684
+ {
685
+ "epoch": 4.429038587311968,
686
+ "grad_norm": 0.49232161045074463,
687
+ "learning_rate": 4.734013245673788e-05,
688
+ "loss": 0.1213,
689
+ "num_input_tokens_seen": 4747104,
690
+ "step": 850
691
+ },
692
+ {
693
+ "epoch": 4.481360366252453,
694
+ "grad_norm": 0.3197735548019409,
695
+ "learning_rate": 4.727827364031936e-05,
696
+ "loss": 0.1137,
697
+ "num_input_tokens_seen": 4803376,
698
+ "step": 860
699
+ },
700
+ {
701
+ "epoch": 4.5336821451929366,
702
+ "grad_norm": 0.43819674849510193,
703
+ "learning_rate": 4.721574513832091e-05,
704
+ "loss": 0.1163,
705
+ "num_input_tokens_seen": 4859840,
706
+ "step": 870
707
+ },
708
+ {
709
+ "epoch": 4.58600392413342,
710
+ "grad_norm": 0.21390356123447418,
711
+ "learning_rate": 4.715254883035119e-05,
712
+ "loss": 0.121,
713
+ "num_input_tokens_seen": 4916272,
714
+ "step": 880
715
+ },
716
+ {
717
+ "epoch": 4.638325703073905,
718
+ "grad_norm": 0.30768075585365295,
719
+ "learning_rate": 4.708868661609314e-05,
720
+ "loss": 0.1194,
721
+ "num_input_tokens_seen": 4971728,
722
+ "step": 890
723
+ },
724
+ {
725
+ "epoch": 4.690647482014389,
726
+ "grad_norm": 0.34879791736602783,
727
+ "learning_rate": 4.702416041524683e-05,
728
+ "loss": 0.1223,
729
+ "num_input_tokens_seen": 5027680,
730
+ "step": 900
731
+ },
732
+ {
733
+ "epoch": 4.742969260954872,
734
+ "grad_norm": 0.23973870277404785,
735
+ "learning_rate": 4.695897216747183e-05,
736
+ "loss": 0.1225,
737
+ "num_input_tokens_seen": 5083392,
738
+ "step": 910
739
+ },
740
+ {
741
+ "epoch": 4.795291039895356,
742
+ "grad_norm": 0.33110499382019043,
743
+ "learning_rate": 4.689312383232883e-05,
744
+ "loss": 0.1248,
745
+ "num_input_tokens_seen": 5140112,
746
+ "step": 920
747
+ },
748
+ {
749
+ "epoch": 4.847612818835841,
750
+ "grad_norm": 0.2787615954875946,
751
+ "learning_rate": 4.682661738922078e-05,
752
+ "loss": 0.1204,
753
+ "num_input_tokens_seen": 5195072,
754
+ "step": 930
755
+ },
756
+ {
757
+ "epoch": 4.8999345977763245,
758
+ "grad_norm": 0.20854991674423218,
759
+ "learning_rate": 4.6759454837333376e-05,
760
+ "loss": 0.1181,
761
+ "num_input_tokens_seen": 5251408,
762
+ "step": 940
763
+ },
764
+ {
765
+ "epoch": 4.952256376716808,
766
+ "grad_norm": 0.3582795262336731,
767
+ "learning_rate": 4.6691638195574963e-05,
768
+ "loss": 0.118,
769
+ "num_input_tokens_seen": 5307776,
770
+ "step": 950
771
+ },
772
+ {
773
+ "epoch": 5.0,
774
+ "grad_norm": 0.18520788848400116,
775
+ "learning_rate": 4.662316950251584e-05,
776
+ "loss": 0.1192,
777
+ "num_input_tokens_seen": 5359280,
778
+ "step": 960
779
+ },
780
+ {
781
+ "epoch": 5.052321778940484,
782
+ "grad_norm": 0.2574616074562073,
783
+ "learning_rate": 4.655405081632699e-05,
784
+ "loss": 0.1179,
785
+ "num_input_tokens_seen": 5415968,
786
+ "step": 970
787
+ },
788
+ {
789
+ "epoch": 5.104643557880968,
790
+ "grad_norm": 0.3383731544017792,
791
+ "learning_rate": 4.648428421471822e-05,
792
+ "loss": 0.1137,
793
+ "num_input_tokens_seen": 5471840,
794
+ "step": 980
795
+ },
796
+ {
797
+ "epoch": 5.156965336821452,
798
+ "grad_norm": 0.3150075376033783,
799
+ "learning_rate": 4.641387179487569e-05,
800
+ "loss": 0.1179,
801
+ "num_input_tokens_seen": 5528224,
802
+ "step": 990
803
+ },
804
+ {
805
+ "epoch": 5.209287115761936,
806
+ "grad_norm": 0.2862718999385834,
807
+ "learning_rate": 4.634281567339885e-05,
808
+ "loss": 0.1117,
809
+ "num_input_tokens_seen": 5583680,
810
+ "step": 1000
811
+ },
812
+ {
813
+ "epoch": 5.2616088947024195,
814
+ "grad_norm": 0.32703572511672974,
815
+ "learning_rate": 4.627111798623688e-05,
816
+ "loss": 0.12,
817
+ "num_input_tokens_seen": 5638640,
818
+ "step": 1010
819
+ },
820
+ {
821
+ "epoch": 5.313930673642904,
822
+ "grad_norm": 0.3088448941707611,
823
+ "learning_rate": 4.619878088862443e-05,
824
+ "loss": 0.1134,
825
+ "num_input_tokens_seen": 5694208,
826
+ "step": 1020
827
+ },
828
+ {
829
+ "epoch": 5.366252452583388,
830
+ "grad_norm": 0.3116937279701233,
831
+ "learning_rate": 4.612580655501683e-05,
832
+ "loss": 0.1178,
833
+ "num_input_tokens_seen": 5749696,
834
+ "step": 1030
835
+ },
836
+ {
837
+ "epoch": 5.418574231523872,
838
+ "grad_norm": 0.3191389739513397,
839
+ "learning_rate": 4.605219717902476e-05,
840
+ "loss": 0.1136,
841
+ "num_input_tokens_seen": 5805264,
842
+ "step": 1040
843
+ },
844
+ {
845
+ "epoch": 5.470896010464356,
846
+ "grad_norm": 0.31546375155448914,
847
+ "learning_rate": 4.5977954973348294e-05,
848
+ "loss": 0.1167,
849
+ "num_input_tokens_seen": 5861616,
850
+ "step": 1050
851
+ },
852
+ {
853
+ "epoch": 5.52321778940484,
854
+ "grad_norm": 0.2683027386665344,
855
+ "learning_rate": 4.590308216971038e-05,
856
+ "loss": 0.1113,
857
+ "num_input_tokens_seen": 5918816,
858
+ "step": 1060
859
+ },
860
+ {
861
+ "epoch": 5.575539568345324,
862
+ "grad_norm": 0.26271599531173706,
863
+ "learning_rate": 4.582758101878977e-05,
864
+ "loss": 0.1113,
865
+ "num_input_tokens_seen": 5975184,
866
+ "step": 1070
867
+ },
868
+ {
869
+ "epoch": 5.6278613472858074,
870
+ "grad_norm": 0.2612985372543335,
871
+ "learning_rate": 4.5751453790153325e-05,
872
+ "loss": 0.1143,
873
+ "num_input_tokens_seen": 6030736,
874
+ "step": 1080
875
+ },
876
+ {
877
+ "epoch": 5.680183126226292,
878
+ "grad_norm": 0.29255977272987366,
879
+ "learning_rate": 4.567470277218786e-05,
880
+ "loss": 0.1159,
881
+ "num_input_tokens_seen": 6086848,
882
+ "step": 1090
883
+ },
884
+ {
885
+ "epoch": 5.732504905166776,
886
+ "grad_norm": 0.31331729888916016,
887
+ "learning_rate": 4.55973302720313e-05,
888
+ "loss": 0.1114,
889
+ "num_input_tokens_seen": 6142544,
890
+ "step": 1100
891
+ },
892
+ {
893
+ "epoch": 5.7848266841072595,
894
+ "grad_norm": 0.24986866116523743,
895
+ "learning_rate": 4.551933861550333e-05,
896
+ "loss": 0.1173,
897
+ "num_input_tokens_seen": 6199856,
898
+ "step": 1110
899
+ },
900
+ {
901
+ "epoch": 5.837148463047743,
902
+ "grad_norm": 0.26383090019226074,
903
+ "learning_rate": 4.5440730147035516e-05,
904
+ "loss": 0.1166,
905
+ "num_input_tokens_seen": 6255488,
906
+ "step": 1120
907
+ },
908
+ {
909
+ "epoch": 5.889470241988228,
910
+ "grad_norm": 0.28010377287864685,
911
+ "learning_rate": 4.5361507229600784e-05,
912
+ "loss": 0.1148,
913
+ "num_input_tokens_seen": 6311696,
914
+ "step": 1130
915
+ },
916
+ {
917
+ "epoch": 5.941792020928712,
918
+ "grad_norm": 0.2781098783016205,
919
+ "learning_rate": 4.528167224464245e-05,
920
+ "loss": 0.1152,
921
+ "num_input_tokens_seen": 6368064,
922
+ "step": 1140
923
+ },
924
+ {
925
+ "epoch": 5.994113799869195,
926
+ "grad_norm": 0.35192057490348816,
927
+ "learning_rate": 4.520122759200256e-05,
928
+ "loss": 0.1087,
929
+ "num_input_tokens_seen": 6424048,
930
+ "step": 1150
931
+ },
932
+ {
933
+ "epoch": 6.041857423152387,
934
+ "grad_norm": 0.408779114484787,
935
+ "learning_rate": 4.512017568984982e-05,
936
+ "loss": 0.1094,
937
+ "num_input_tokens_seen": 6475464,
938
+ "step": 1160
939
+ },
940
+ {
941
+ "epoch": 6.094179202092871,
942
+ "grad_norm": 0.5394303798675537,
943
+ "learning_rate": 4.503851897460686e-05,
944
+ "loss": 0.1005,
945
+ "num_input_tokens_seen": 6531112,
946
+ "step": 1170
947
+ },
948
+ {
949
+ "epoch": 6.1465009810333555,
950
+ "grad_norm": 0.4771885871887207,
951
+ "learning_rate": 4.4956259900877005e-05,
952
+ "loss": 0.107,
953
+ "num_input_tokens_seen": 6587352,
954
+ "step": 1180
955
+ },
956
+ {
957
+ "epoch": 6.198822759973839,
958
+ "grad_norm": 0.4851493835449219,
959
+ "learning_rate": 4.4873400941370506e-05,
960
+ "loss": 0.1093,
961
+ "num_input_tokens_seen": 6643608,
962
+ "step": 1190
963
+ },
964
+ {
965
+ "epoch": 6.251144538914323,
966
+ "grad_norm": 0.47404423356056213,
967
+ "learning_rate": 4.4789944586830196e-05,
968
+ "loss": 0.1082,
969
+ "num_input_tokens_seen": 6700616,
970
+ "step": 1200
971
+ },
972
+ {
973
+ "epoch": 6.303466317854807,
974
+ "grad_norm": 0.4265010356903076,
975
+ "learning_rate": 4.470589334595662e-05,
976
+ "loss": 0.1088,
977
+ "num_input_tokens_seen": 6756344,
978
+ "step": 1210
979
+ },
980
+ {
981
+ "epoch": 6.355788096795291,
982
+ "grad_norm": 0.2776981592178345,
983
+ "learning_rate": 4.462124974533261e-05,
984
+ "loss": 0.1124,
985
+ "num_input_tokens_seen": 6813144,
986
+ "step": 1220
987
+ },
988
+ {
989
+ "epoch": 6.408109875735775,
990
+ "grad_norm": 0.3706200122833252,
991
+ "learning_rate": 4.453601632934737e-05,
992
+ "loss": 0.1095,
993
+ "num_input_tokens_seen": 6868280,
994
+ "step": 1230
995
+ },
996
+ {
997
+ "epoch": 6.460431654676259,
998
+ "grad_norm": 0.4983665943145752,
999
+ "learning_rate": 4.4450195660119965e-05,
1000
+ "loss": 0.1114,
1001
+ "num_input_tokens_seen": 6924296,
1002
+ "step": 1240
1003
+ },
1004
+ {
1005
+ "epoch": 6.5127534336167425,
1006
+ "grad_norm": 0.38131120800971985,
1007
+ "learning_rate": 4.4363790317422314e-05,
1008
+ "loss": 0.1141,
1009
+ "num_input_tokens_seen": 6980392,
1010
+ "step": 1250
1011
+ },
1012
+ {
1013
+ "epoch": 6.565075212557227,
1014
+ "grad_norm": 0.24885649979114532,
1015
+ "learning_rate": 4.427680289860163e-05,
1016
+ "loss": 0.1128,
1017
+ "num_input_tokens_seen": 7036056,
1018
+ "step": 1260
1019
+ },
1020
+ {
1021
+ "epoch": 6.617396991497711,
1022
+ "grad_norm": 0.4561365246772766,
1023
+ "learning_rate": 4.4189236018502356e-05,
1024
+ "loss": 0.1149,
1025
+ "num_input_tokens_seen": 7092152,
1026
+ "step": 1270
1027
+ },
1028
+ {
1029
+ "epoch": 6.669718770438195,
1030
+ "grad_norm": 0.548459529876709,
1031
+ "learning_rate": 4.410109230938755e-05,
1032
+ "loss": 0.1079,
1033
+ "num_input_tokens_seen": 7147096,
1034
+ "step": 1280
1035
+ },
1036
+ {
1037
+ "epoch": 6.722040549378679,
1038
+ "grad_norm": 0.46682003140449524,
1039
+ "learning_rate": 4.4012374420859786e-05,
1040
+ "loss": 0.1061,
1041
+ "num_input_tokens_seen": 7202584,
1042
+ "step": 1290
1043
+ },
1044
+ {
1045
+ "epoch": 6.774362328319163,
1046
+ "grad_norm": 0.32285481691360474,
1047
+ "learning_rate": 4.392308501978148e-05,
1048
+ "loss": 0.1098,
1049
+ "num_input_tokens_seen": 7258552,
1050
+ "step": 1300
1051
+ },
1052
+ {
1053
+ "epoch": 6.826684107259647,
1054
+ "grad_norm": 0.3166508078575134,
1055
+ "learning_rate": 4.383322679019472e-05,
1056
+ "loss": 0.1119,
1057
+ "num_input_tokens_seen": 7315768,
1058
+ "step": 1310
1059
+ },
1060
+ {
1061
+ "epoch": 6.87900588620013,
1062
+ "grad_norm": 0.35454803705215454,
1063
+ "learning_rate": 4.3742802433240625e-05,
1064
+ "loss": 0.1107,
1065
+ "num_input_tokens_seen": 7371352,
1066
+ "step": 1320
1067
+ },
1068
+ {
1069
+ "epoch": 6.931327665140615,
1070
+ "grad_norm": 0.2988300919532776,
1071
+ "learning_rate": 4.3651814667078086e-05,
1072
+ "loss": 0.1085,
1073
+ "num_input_tokens_seen": 7427800,
1074
+ "step": 1330
1075
+ },
1076
+ {
1077
+ "epoch": 6.983649444081099,
1078
+ "grad_norm": 0.3938722014427185,
1079
+ "learning_rate": 4.35602662268021e-05,
1080
+ "loss": 0.1064,
1081
+ "num_input_tokens_seen": 7484616,
1082
+ "step": 1340
1083
+ },
1084
+ {
1085
+ "epoch": 7.031393067364291,
1086
+ "grad_norm": 0.45549729466438293,
1087
+ "learning_rate": 4.346815986436158e-05,
1088
+ "loss": 0.1041,
1089
+ "num_input_tokens_seen": 7534896,
1090
+ "step": 1350
1091
+ },
1092
+ {
1093
+ "epoch": 7.083714846304774,
1094
+ "grad_norm": 0.6130596995353699,
1095
+ "learning_rate": 4.337549834847655e-05,
1096
+ "loss": 0.0977,
1097
+ "num_input_tokens_seen": 7591328,
1098
+ "step": 1360
1099
+ },
1100
+ {
1101
+ "epoch": 7.136036625245258,
1102
+ "grad_norm": 0.6437245607376099,
1103
+ "learning_rate": 4.328228446455498e-05,
1104
+ "loss": 0.0979,
1105
+ "num_input_tokens_seen": 7647760,
1106
+ "step": 1370
1107
+ },
1108
+ {
1109
+ "epoch": 7.188358404185743,
1110
+ "grad_norm": 0.5056234002113342,
1111
+ "learning_rate": 4.3188521014609054e-05,
1112
+ "loss": 0.0994,
1113
+ "num_input_tokens_seen": 7704672,
1114
+ "step": 1380
1115
+ },
1116
+ {
1117
+ "epoch": 7.240680183126226,
1118
+ "grad_norm": 0.5103983879089355,
1119
+ "learning_rate": 4.309421081717091e-05,
1120
+ "loss": 0.1001,
1121
+ "num_input_tokens_seen": 7761104,
1122
+ "step": 1390
1123
+ },
1124
+ {
1125
+ "epoch": 7.29300196206671,
1126
+ "grad_norm": 0.47358736395835876,
1127
+ "learning_rate": 4.299935670720794e-05,
1128
+ "loss": 0.0926,
1129
+ "num_input_tokens_seen": 7818224,
1130
+ "step": 1400
1131
+ },
1132
+ {
1133
+ "epoch": 7.345323741007194,
1134
+ "grad_norm": 0.6868699789047241,
1135
+ "learning_rate": 4.290396153603755e-05,
1136
+ "loss": 0.0975,
1137
+ "num_input_tokens_seen": 7874208,
1138
+ "step": 1410
1139
+ },
1140
+ {
1141
+ "epoch": 7.3976455199476785,
1142
+ "grad_norm": 0.39971715211868286,
1143
+ "learning_rate": 4.280802817124149e-05,
1144
+ "loss": 0.1012,
1145
+ "num_input_tokens_seen": 7930496,
1146
+ "step": 1420
1147
+ },
1148
+ {
1149
+ "epoch": 7.449967298888162,
1150
+ "grad_norm": 0.6476240754127502,
1151
+ "learning_rate": 4.271155949657959e-05,
1152
+ "loss": 0.1033,
1153
+ "num_input_tokens_seen": 7985552,
1154
+ "step": 1430
1155
+ },
1156
+ {
1157
+ "epoch": 7.502289077828646,
1158
+ "grad_norm": 0.5134351253509521,
1159
+ "learning_rate": 4.261455841190314e-05,
1160
+ "loss": 0.0966,
1161
+ "num_input_tokens_seen": 8041568,
1162
+ "step": 1440
1163
+ },
1164
+ {
1165
+ "epoch": 7.554610856769131,
1166
+ "grad_norm": 0.6232761144638062,
1167
+ "learning_rate": 4.2517027833067685e-05,
1168
+ "loss": 0.1001,
1169
+ "num_input_tokens_seen": 8098656,
1170
+ "step": 1450
1171
+ },
1172
+ {
1173
+ "epoch": 7.606932635709614,
1174
+ "grad_norm": 0.4906207025051117,
1175
+ "learning_rate": 4.241897069184537e-05,
1176
+ "loss": 0.1072,
1177
+ "num_input_tokens_seen": 8154032,
1178
+ "step": 1460
1179
+ },
1180
+ {
1181
+ "epoch": 7.659254414650098,
1182
+ "grad_norm": 0.6239527463912964,
1183
+ "learning_rate": 4.2320389935836836e-05,
1184
+ "loss": 0.1006,
1185
+ "num_input_tokens_seen": 8210032,
1186
+ "step": 1470
1187
+ },
1188
+ {
1189
+ "epoch": 7.711576193590582,
1190
+ "grad_norm": 1.2038999795913696,
1191
+ "learning_rate": 4.2221288528382584e-05,
1192
+ "loss": 0.1015,
1193
+ "num_input_tokens_seen": 8265296,
1194
+ "step": 1480
1195
+ },
1196
+ {
1197
+ "epoch": 7.763897972531066,
1198
+ "grad_norm": 0.365824431180954,
1199
+ "learning_rate": 4.212166944847392e-05,
1200
+ "loss": 0.0973,
1201
+ "num_input_tokens_seen": 8321840,
1202
+ "step": 1490
1203
+ },
1204
+ {
1205
+ "epoch": 7.81621975147155,
1206
+ "grad_norm": 0.4954347610473633,
1207
+ "learning_rate": 4.2021535690663414e-05,
1208
+ "loss": 0.1015,
1209
+ "num_input_tokens_seen": 8378064,
1210
+ "step": 1500
1211
+ },
1212
+ {
1213
+ "epoch": 7.868541530412034,
1214
+ "grad_norm": 0.5742694139480591,
1215
+ "learning_rate": 4.192089026497484e-05,
1216
+ "loss": 0.0997,
1217
+ "num_input_tokens_seen": 8434384,
1218
+ "step": 1510
1219
+ },
1220
+ {
1221
+ "epoch": 7.920863309352518,
1222
+ "grad_norm": 0.48765209317207336,
1223
+ "learning_rate": 4.181973619681276e-05,
1224
+ "loss": 0.1003,
1225
+ "num_input_tokens_seen": 8490496,
1226
+ "step": 1520
1227
+ },
1228
+ {
1229
+ "epoch": 7.973185088293002,
1230
+ "grad_norm": 0.4735221266746521,
1231
+ "learning_rate": 4.171807652687151e-05,
1232
+ "loss": 0.1051,
1233
+ "num_input_tokens_seen": 8545600,
1234
+ "step": 1530
1235
+ },
1236
+ {
1237
+ "epoch": 8.020928711576193,
1238
+ "grad_norm": 0.35401248931884766,
1239
+ "learning_rate": 4.1615914311043855e-05,
1240
+ "loss": 0.0967,
1241
+ "num_input_tokens_seen": 8595848,
1242
+ "step": 1540
1243
+ },
1244
+ {
1245
+ "epoch": 8.073250490516678,
1246
+ "grad_norm": 0.7497507929801941,
1247
+ "learning_rate": 4.151325262032908e-05,
1248
+ "loss": 0.0834,
1249
+ "num_input_tokens_seen": 8652536,
1250
+ "step": 1550
1251
+ },
1252
+ {
1253
+ "epoch": 8.125572269457162,
1254
+ "grad_norm": 0.7235395908355713,
1255
+ "learning_rate": 4.1410094540740726e-05,
1256
+ "loss": 0.0804,
1257
+ "num_input_tokens_seen": 8708952,
1258
+ "step": 1560
1259
+ },
1260
+ {
1261
+ "epoch": 8.177894048397645,
1262
+ "grad_norm": 0.9055391550064087,
1263
+ "learning_rate": 4.1306443173213785e-05,
1264
+ "loss": 0.085,
1265
+ "num_input_tokens_seen": 8765688,
1266
+ "step": 1570
1267
+ },
1268
+ {
1269
+ "epoch": 8.23021582733813,
1270
+ "grad_norm": 0.8378714919090271,
1271
+ "learning_rate": 4.1202301633511506e-05,
1272
+ "loss": 0.0813,
1273
+ "num_input_tokens_seen": 8822376,
1274
+ "step": 1580
1275
+ },
1276
+ {
1277
+ "epoch": 8.282537606278613,
1278
+ "grad_norm": 0.6385327577590942,
1279
+ "learning_rate": 4.109767305213173e-05,
1280
+ "loss": 0.0831,
1281
+ "num_input_tokens_seen": 8878456,
1282
+ "step": 1590
1283
+ },
1284
+ {
1285
+ "epoch": 8.334859385219097,
1286
+ "grad_norm": 0.8356183767318726,
1287
+ "learning_rate": 4.0992560574212764e-05,
1288
+ "loss": 0.0893,
1289
+ "num_input_tokens_seen": 8934088,
1290
+ "step": 1600
1291
+ },
1292
+ {
1293
+ "epoch": 8.387181164159582,
1294
+ "grad_norm": 0.8791753649711609,
1295
+ "learning_rate": 4.0886967359438885e-05,
1296
+ "loss": 0.087,
1297
+ "num_input_tokens_seen": 8990120,
1298
+ "step": 1610
1299
+ },
1300
+ {
1301
+ "epoch": 8.439502943100065,
1302
+ "grad_norm": 0.6865050792694092,
1303
+ "learning_rate": 4.078089658194533e-05,
1304
+ "loss": 0.0873,
1305
+ "num_input_tokens_seen": 9045848,
1306
+ "step": 1620
1307
+ },
1308
Training log (steps 1630-4000, logged every 10 steps):

| step | epoch | loss | grad_norm | learning_rate | num_input_tokens_seen |
| --- | --- | --- | --- | --- | --- |
| 1630 | 8.49182472204055 | 0.0855 | 0.7075687050819397 | 4.0674351430222864e-05 | 9102456 |
| 1640 | 8.544146500981034 | 0.0887 | 0.7803745269775391 | 4.0567335107021986e-05 | 9158712 |
| 1650 | 8.596468279921517 | 0.0826 | 0.6807383894920349 | 4.0459850829256604e-05 | 9215496 |
| 1660 | 8.648790058862001 | 0.0922 | 0.777777373790741 | 4.035190182790738e-05 | 9269720 |
| 1670 | 8.701111837802486 | 0.0862 | 0.6249901056289673 | 4.024349134792453e-05 | 9325624 |
| 1680 | 8.753433616742969 | 0.0836 | 0.7085604667663574 | 4.0134622648130394e-05 | 9380984 |
| 1690 | 8.805755395683454 | 0.0862 | 0.9964501261711121 | 4.0025299001121365e-05 | 9437320 |
| 1700 | 8.858077174623936 | 0.0926 | 1.0215933322906494 | 3.991552369316958e-05 | 9492888 |
| 1710 | 8.910398953564421 | 0.0893 | 0.7546895146369934 | 3.9805300024124125e-05 | 9550504 |
| 1720 | 8.962720732504906 | 0.0948 | 0.8502705693244934 | 3.969463130731183e-05 | 9606120 |
| 1730 | 9.010464355788097 | 0.0893 | 0.5291112065315247 | 3.9583520869437666e-05 | 9656472 |
| 1740 | 9.062786134728581 | 0.063 | 0.6509256958961487 | 3.9471972050484764e-05 | 9712712 |
| 1750 | 9.115107913669064 | 0.0602 | 1.1916331052780151 | 3.9359988203614e-05 | 9768584 |
| 1760 | 9.167429692609549 | 0.0643 | 0.8077856302261353 | 3.924757269506319e-05 | 9824360 |
| 1770 | 9.219751471550033 | 0.0668 | 1.0554853677749634 | 3.913472890404593e-05 | 9880328 |
| 1780 | 9.272073250490516 | 0.0633 | 0.8842382431030273 | 3.9021460222649986e-05 | 9936248 |
| 1790 | 9.324395029431 | 0.0625 | 0.9562489986419678 | 3.890777005573537e-05 | 9992088 |
| 1800 | 9.376716808371485 | 0.0718 | 0.8844836950302124 | 3.8793661820831915e-05 | 10048520 |
| 1810 | 9.429038587311968 | 0.0646 | 1.2325676679611206 | 3.867913894803663e-05 | 10104776 |
| 1820 | 9.481360366252453 | 0.0657 | 0.7227656245231628 | 3.8564204879910535e-05 | 10160888 |
| 1830 | 9.533682145192937 | 0.0691 | 0.9736649990081787 | 3.844886307137519e-05 | 10216920 |
| 1840 | 9.58600392413342 | 0.0667 | 1.0427885055541992 | 3.833311698960888e-05 | 10273880 |
| 1850 | 9.638325703073905 | 0.0699 | 0.9749732613563538 | 3.8216970113942284e-05 | 10329752 |
| 1860 | 9.690647482014388 | 0.072 | 0.6216735243797302 | 3.8100425935754025e-05 | 10387128 |
| 1870 | 9.742969260954872 | 0.0697 | 0.8047035932540894 | 3.798348795836562e-05 | 10443144 |
| 1880 | 9.795291039895357 | 0.0709 | 0.793222188949585 | 3.786615969693621e-05 | 10497784 |
| 1890 | 9.84761281883584 | 0.0716 | 0.7294532656669617 | 3.7748444678356886e-05 | 10553416 |
| 1900 | 9.899934597776324 | 0.0695 | 1.014732003211975 | 3.7630346441144656e-05 | 10610168 |
| 1910 | 9.952256376716809 | 0.0731 | 1.22804856300354 | 3.7511868535336134e-05 | 10665912 |
| 1920 | 10.0 | 0.0652 | 0.7104299068450928 | 3.7393014522380734e-05 | 10717424 |
| 1930 | 10.052321778940485 | 0.0435 | 0.9654238224029541 | 3.7273787975033686e-05 | 10772656 |
| 1940 | 10.104643557880967 | 0.0425 | 1.2412711381912231 | 3.7154192477248614e-05 | 10829200 |
| 1950 | 10.156965336821452 | 0.0376 | 0.6960983276367188 | 3.7034231624069796e-05 | 10884064 |
| 1960 | 10.209287115761937 | 0.0487 | 1.214837908744812 | 3.691390902152412e-05 | 10939824 |
| 1970 | 10.26160889470242 | 0.043 | 0.7800888419151306 | 3.679322828651265e-05 | 10996128 |
| 1980 | 10.313930673642904 | 0.0454 | 1.8462598323822021 | 3.667219304670193e-05 | 11053280 |
| 1990 | 10.366252452583387 | 0.0479 | 1.075537085533142 | 3.655080694041495e-05 | 11109664 |
| 2000 | 10.418574231523872 | 0.0458 | 0.7474583983421326 | 3.642907361652172e-05 | 11166016 |
| 2010 | 10.470896010464356 | 0.0518 | 1.0032005310058594 | 3.6306996734329656e-05 | 11221648 |
| 2020 | 10.523217789404839 | 0.0501 | 1.0080397129058838 | 3.618457996347352e-05 | 11277952 |
| 2030 | 10.575539568345324 | 0.0476 | 1.130409598350525 | 3.606182698380515e-05 | 11334272 |
| 2040 | 10.627861347285808 | 0.0515 | 0.9204115867614746 | 3.593874148528284e-05 | 11389760 |
| 2050 | 10.680183126226291 | 0.0477 | 0.8044779896736145 | 3.58153271678604e-05 | 11446144 |
| 2060 | 10.732504905166776 | 0.0488 | 1.4428260326385498 | 3.5691587741375934e-05 | 11502320 |
| 2070 | 10.78482668410726 | 0.0565 | 0.9769074320793152 | 3.5567526925440353e-05 | 11559392 |
| 2080 | 10.837148463047743 | 0.051 | 0.6837288737297058 | 3.5443148449325545e-05 | 11615824 |
| 2090 | 10.889470241988228 | 0.0528 | 1.1239960193634033 | 3.5318456051852264e-05 | 11671968 |
| 2100 | 10.941792020928713 | 0.0479 | 1.2493700981140137 | 3.519345348127775e-05 | 11727632 |
| 2110 | 10.994113799869195 | 0.0446 | 1.0371160507202148 | 3.506814449518306e-05 | 11784032 |
| 2120 | 11.041857423152388 | 0.0392 | 0.6289274096488953 | 3.494253286036011e-05 | 11835760 |
| 2130 | 11.094179202092871 | 0.0311 | 1.0801132917404175 | 3.481662235269844e-05 | 11891584 |
| 2140 | 11.146500981033356 | 0.0253 | 0.7098966836929321 | 3.469041675707173e-05 | 11947824 |
| 2150 | 11.198822759973838 | 0.0313 | 0.9716883897781372 | 3.4563919867224e-05 | 12003328 |
| 2160 | 11.251144538914323 | 0.0339 | 1.6546238660812378 | 3.4437135485655575e-05 | 12060512 |
| 2170 | 11.303466317854808 | 0.0344 | 0.8873046040534973 | 3.4310067423508815e-05 | 12117584 |
| 2180 | 11.35578809679529 | 0.0274 | 1.2070609331130981 | 3.418271950045352e-05 | 12172512 |
| 2190 | 11.408109875735775 | 0.0268 | 1.0395190715789795 | 3.405509554457211e-05 | 12227744 |
| 2200 | 11.46043165467626 | 0.0363 | 1.2013176679611206 | 3.392719939224453e-05 | 12283776 |
| 2210 | 11.512753433616743 | 0.0339 | 1.1089166402816772 | 3.379903488803304e-05 | 12340288 |
| 2220 | 11.565075212557227 | 0.0325 | 1.4544755220413208 | 3.3670605884566484e-05 | 12396368 |
| 2230 | 11.61739699149771 | 0.0333 | 1.0953842401504517 | 3.3541916242424606e-05 | 12451872 |
| 2240 | 11.669718770438195 | 0.0336 | 0.7939156889915466 | 3.341296983002193e-05 | 12507776 |
| 2250 | 11.72204054937868 | 0.0357 | 1.114825963973999 | 3.3283770523491535e-05 | 12564320 |
| 2260 | 11.774362328319162 | 0.0356 | 1.0780022144317627 | 3.3154322206568475e-05 | 12620912 |
| 2270 | 11.826684107259647 | 0.0318 | 0.9889864325523376 | 3.302462877047307e-05 | 12676464 |
| 2280 | 11.879005886200131 | 0.039 | 1.8913953304290771 | 3.2894694113793935e-05 | 12731408 |
| 2290 | 11.931327665140614 | 0.0408 | 0.854158341884613 | 3.27645221423708e-05 | 12787552 |
| 2300 | 11.983649444081099 | 0.034 | 0.7944401502609253 | 3.263411676917704e-05 | 12843808 |
| 2310 | 12.03139306736429 | 0.03 | 1.3224999904632568 | 3.250348191420214e-05 | 12895184 |
| 2320 | 12.083714846304774 | 0.0219 | 0.8005937933921814 | 3.237262150433379e-05 | 12951408 |
| 2330 | 12.136036625245259 | 0.0181 | 1.3628501892089844 | 3.224153947323987e-05 | 13007776 |
| 2340 | 12.188358404185742 | 0.0183 | 0.7954509258270264 | 3.21102397612502e-05 | 13064144 |
| 2350 | 12.240680183126226 | 0.0183 | 0.8565235733985901 | 3.1978726315238094e-05 | 13120320 |
| 2360 | 12.293001962066711 | 0.017 | 0.7555674910545349 | 3.1847003088501726e-05 | 13177168 |
| 2370 | 12.345323741007194 | 0.0206 | 0.7122445106506348 | 3.1715074040645275e-05 | 13232784 |
| 2380 | 12.397645519947678 | 0.0194 | 0.9428816437721252 | 3.158294313745992e-05 | 13287312 |
| 2390 | 12.449967298888161 | 0.0165 | 1.027761459350586 | 3.145061435080461e-05 | 13343616 |
| 2400 | 12.502289077828646 | 0.0192 | 0.9591146111488342 | 3.1318091658486655e-05 | 13398656 |
| 2410 | 12.55461085676913 | 0.0202 | 2.0098116397857666 | 3.1185379044142225e-05 | 13453888 |
| 2420 | 12.606932635709613 | 0.0184 | 1.243646502494812 | 3.105248049711651e-05 | 13511168 |
| 2430 | 12.659254414650098 | 0.0215 | 0.7831906676292419 | 3.091940001234386e-05 | 13567168 |
| 2440 | 12.711576193590583 | 0.0192 | 0.6232236623764038 | 3.078614159022767e-05 | 13623200 |
| 2450 | 12.763897972531066 | 0.0222 | 1.3829624652862549 | 3.065270923652015e-05 | 13678880 |
| 2460 | 12.81621975147155 | 0.0159 | 0.9216393232345581 | 3.051910696220188e-05 | 13734624 |
| 2470 | 12.868541530412035 | 0.0248 | 1.1284723281860352 | 3.0385338783361283e-05 | 13790576 |
| 2480 | 12.920863309352518 | 0.0227 | 1.4107064008712769 | 3.025140872107386e-05 | 13845984 |
| 2490 | 12.973185088293002 | 0.0265 | 0.7538278102874756 | 3.0117320801281335e-05 | 13902400 |
| 2500 | 13.020928711576193 | 0.0195 | 0.9580565690994263 | 2.9983079054670627e-05 | 13953344 |
| 2510 | 13.073250490516678 | 0.0107 | 0.4210267961025238 | 2.9848687516552725e-05 | 14009424 |
| 2520 | 13.125572269457162 | 0.013 | 0.534055233001709 | 2.9714150226741312e-05 | 14064880 |
| 2530 | 13.177894048397645 | 0.0095 | 0.8242612481117249 | 2.9579471229431394e-05 | 14120896 |
| 2540 | 13.23021582733813 | 0.0125 | 1.1644961833953857 | 2.944465457307771e-05 | 14176512 |
| 2550 | 13.282537606278613 | 0.0106 | 0.682830274105072 | 2.930970431027304e-05 | 14232608 |
| 2560 | 13.334859385219097 | 0.012 | 0.7904958724975586 | 2.9174624497626353e-05 | 14289360 |
| 2570 | 13.387181164159582 | 0.0124 | 0.8092711567878723 | 2.903941919564091e-05 | 14346096 |
| 2580 | 13.439502943100065 | 0.0132 | 0.4784017503261566 | 2.8904092468592187e-05 | 14401872 |
| 2590 | 13.49182472204055 | 0.0101 | 0.5194114446640015 | 2.8768648384405695e-05 | 14458864 |
| 2600 | 13.544146500981034 | 0.0135 | 0.6601864099502563 | 2.863309101453469e-05 | 14515664 |
| 2610 | 13.596468279921517 | 0.0138 | 0.9567685723304749 | 2.8497424433837833e-05 | 14572256 |
| 2620 | 13.648790058862001 | 0.0132 | 0.5563291311264038 | 2.836165272045663e-05 | 14627248 |
| 2630 | 13.701111837802486 | 0.0134 | 0.9716143608093262 | 2.8225779955692905e-05 | 14683728 |
| 2640 | 13.753433616742969 | 0.0154 | 0.9606854915618896 | 2.8089810223886076e-05 | 14740864 |
| 2650 | 13.805755395683454 | 0.0121 | 1.01091730594635 | 2.79537476122904e-05 | 14796176 |
| 2660 | 13.858077174623936 | 0.0119 | 0.6134788990020752 | 2.781759621095209e-05 | 14852304 |
| 2670 | 13.910398953564421 | 0.0188 | 1.1731514930725098 | 2.7681360112586403e-05 | 14908624 |
| 2680 | 13.962720732504906 | 0.0153 | 0.7572017908096313 | 2.7545043412454568e-05 | 14964784 |
| 2690 | 14.010464355788097 | 0.0093 | 0.2877230942249298 | 2.7408650208240733e-05 | 15016112 |
| 2700 | 14.062786134728581 | 0.006 | 1.2057104110717773 | 2.7272184599928723e-05 | 15072240 |
| 2710 | 14.115107913669064 | 0.0082 | 0.2993405759334564 | 2.7135650689678873e-05 | 15128432 |
| 2720 | 14.167429692609549 | 0.0052 | 0.4322413504123688 | 2.6999052581704643e-05 | 15185232 |
| 2730 | 14.219751471550033 | 0.0066 | 0.4944108724594116 | 2.6862394382149308e-05 | 15241040 |
| 2740 | 14.272073250490516 | 0.0088 | 0.6533095836639404 | 2.672568019896248e-05 | 15297904 |
| 2750 | 14.324395029431 | 0.0056 | 0.316057026386261 | 2.6588914141776626e-05 | 15355584 |
| 2760 | 14.376716808371485 | 0.0029 | 0.502487063407898 | 2.6452100321783585e-05 | 15410592 |
| 2770 | 14.429038587311968 | 0.0109 | 0.5012995004653931 | 2.6315242851610923e-05 | 15466448 |
| 2780 | 14.481360366252453 | 0.0057 | 0.8451622128486633 | 2.6178345845198328e-05 | 15522816 |
| 2790 | 14.533682145192937 | 0.009 | 0.33364975452423096 | 2.6041413417673966e-05 | 15578672 |
| 2800 | 14.58600392413342 | 0.0089 | 0.3638412058353424 | 2.590444968523074e-05 | 15635408 |
| 2810 | 14.638325703073905 | 0.008 | 0.5961637496948242 | 2.5767458765002606e-05 | 15691648 |
| 2820 | 14.690647482014388 | 0.0081 | 0.7401494979858398 | 2.5630444774940765e-05 | 15748032 |
| 2830 | 14.742969260954872 | 0.0071 | 0.18349328637123108 | 2.5493411833689907e-05 | 15803232 |
| 2840 | 14.795291039895357 | 0.0078 | 1.5877436399459839 | 2.5356364060464398e-05 | 15859120 |
| 2850 | 14.84761281883584 | 0.0089 | 1.041685938835144 | 2.521930557492444e-05 | 15915872 |
| 2860 | 14.899934597776324 | 0.0088 | 0.6710309982299805 | 2.5082240497052267e-05 | 15973472 |
| 2870 | 14.952256376716809 | 0.0069 | 0.9839669466018677 | 2.494517294702826e-05 | 16029920 |
| 2880 | 15.0 | 0.0098 | 1.3188862800598145 | 2.4808107045107123e-05 | 16080272 |
| 2890 | 15.052321778940485 | 0.0037 | 0.41307705640792847 | 2.4671046911494025e-05 | 16136752 |
| 2900 | 15.104643557880967 | 0.0032 | 0.8025826811790466 | 2.453399666622072e-05 | 16191920 |
| 2910 | 15.156965336821452 | 0.0028 | 0.2138717621564865 | 2.4396960429021738e-05 | 16246912 |
| 2920 | 15.209287115761937 | 0.0058 | 0.16525974869728088 | 2.4259942319210498e-05 | 16303520 |
| 2930 | 15.26160889470242 | 0.005 | 0.22977286577224731 | 2.412294645555555e-05 | 16359888 |
| 2940 | 15.313930673642904 | 0.0048 | 1.8685427904129028 | 2.39859769561567e-05 | 16416512 |
| 2950 | 15.366252452583387 | 0.0056 | 0.7653654217720032 | 2.3849037938321235e-05 | 16473664 |
| 2960 | 15.418574231523872 | 0.0072 | 0.21836823225021362 | 2.3712133518440176e-05 | 16529312 |
| 2970 | 15.470896010464356 | 0.0074 | 0.6065989136695862 | 2.3575267811864543e-05 | 16586048 |
| 2980 | 15.523217789404839 | 0.0037 | 0.21767759323120117 | 2.34384449327816e-05 | 16642560 |
| 2990 | 15.575539568345324 | 0.0039 | 0.35699793696403503 | 2.330166899409124e-05 | 16699248 |
| 3000 | 15.627861347285808 | 0.0067 | 0.46648091077804565 | 2.3164944107282333e-05 | 16755952 |
| 3010 | 15.680183126226291 | 0.0061 | 0.8464080691337585 | 2.3028274382309097e-05 | 16811536 |
| 3020 | 15.732504905166776 | 0.0046 | 0.30818283557891846 | 2.2891663927467604e-05 | 16867824 |
| 3030 | 15.78482668410726 | 0.0041 | 0.5573921799659729 | 2.2755116849272274e-05 | 16924080 |
| 3040 | 15.837148463047743 | 0.0065 | 0.5058010816574097 | 2.2618637252332398e-05 | 16979728 |
| 3050 | 15.889470241988228 | 0.0047 | 0.4849563241004944 | 2.2482229239228785e-05 | 17035488 |
| 3060 | 15.941792020928713 | 0.0054 | 0.10410932451486588 | 2.234589691039046e-05 | 17091072 |
| 3070 | 15.994113799869195 | 0.0043 | 0.31193605065345764 | 2.2209644363971337e-05 | 17147328 |
| 3080 | 16.041857423152386 | 0.0045 | 0.15345972776412964 | 2.2073475695727096e-05 | 17198200 |
| 3090 | 16.09417920209287 | 0.0042 | 0.8098782896995544 | 2.193739499889201e-05 | 17254408 |
| 3100 | 16.146500981033356 | 0.0049 | 0.6010912656784058 | 2.1801406364055958e-05 | 17311304 |
| 3110 | 16.19882275997384 | 0.001 | 0.0812903568148613 | 2.1665513879041418e-05 | 17368152 |
| 3120 | 16.251144538914325 | 0.0037 | 0.08344978094100952 | 2.1529721628780593e-05 | 17423480 |
| 3130 | 16.303466317854806 | 0.0016 | 0.3543494641780853 | 2.1394033695192645e-05 | 17478984 |
| 3140 | 16.35578809679529 | 0.0019 | 0.683672308921814 | 2.125845415706097e-05 | 17535592 |
| 3150 | 16.408109875735775 | 0.0012 | 0.07661443203687668 | 2.1122987089910577e-05 | 17591960 |
| 3160 | 16.46043165467626 | 0.004 | 0.21092914044857025 | 2.0987636565885606e-05 | 17648504 |
| 3170 | 16.512753433616744 | 0.003 | 0.8990269303321838 | 2.0852406653626916e-05 | 17705240 |
| 3180 | 16.565075212557225 | 0.0028 | 0.22914479672908783 | 2.0717301418149742e-05 | 17760392 |
| 3190 | 16.61739699149771 | 0.0033 | 1.7169655561447144 | 2.058232492072157e-05 | 17816744 |
| 3200 | 16.669718770438195 | 0.0021 | 0.07557539641857147 | 2.044748121874e-05 | 17872760 |
| 3210 | 16.72204054937868 | 0.0017 | 0.1961260586977005 | 2.0312774365610783e-05 | 17928696 |
| 3220 | 16.774362328319164 | 0.0011 | 0.6838825941085815 | 2.0178208410626006e-05 | 17984232 |
| 3230 | 16.82668410725965 | 0.0022 | 0.12144844979047775 | 2.0043787398842347e-05 | 18040712 |
| 3240 | 16.87900588620013 | 0.0029 | 0.3987827003002167 | 1.9909515370959493e-05 | 18097016 |
| 3250 | 16.931327665140614 | 0.0015 | 0.10082973539829254 | 1.9775396363198654e-05 | 18152776 |
| 3260 | 16.9836494440811 | 0.002 | 0.020283468067646027 | 1.9641434407181285e-05 | 18208456 |
| 3270 | 17.03139306736429 | 0.0006 | 0.23080122470855713 | 1.950763352980782e-05 | 18259784 |
| 3280 | 17.083714846304776 | 0.0003 | 0.00946098379790783 | 1.9373997753136695e-05 | 18316008 |
| 3290 | 17.136036625245257 | 0.0008 | 0.033656761050224304 | 1.9240531094263388e-05 | 18372696 |
| 3300 | 17.188358404185742 | 0.0009 | 0.03647719696164131 | 1.9107237565199716e-05 | 18428488 |
| 3310 | 17.240680183126226 | 0.0004 | 0.030296266078948975 | 1.8974121172753192e-05 | 18484120 |
| 3320 | 17.29300196206671 | 0.0004 | 0.043923936784267426 | 1.8841185918406594e-05 | 18539976 |
| 3330 | 17.345323741007196 | 0.0008 | 0.038960348814725876 | 1.870843579819771e-05 | 18596792 |
| 3340 | 17.397645519947677 | 0.001 | 0.010129265487194061 | 1.8575874802599162e-05 | 18652776 |
| 3350 | 17.44996729888816 | 0.0004 | 0.04958868771791458 | 1.8443506916398485e-05 | 18709320 |
| 3360 | 17.502289077828646 | 0.0018 | 2.036484956741333 | 1.8311336118578355e-05 | 18766376 |
| 3370 | 17.55461085676913 | 0.0015 | 0.10162217170000076 | 1.8179366382196944e-05 | 18822440 |
| 3380 | 17.606932635709615 | 0.0011 | 0.41068536043167114 | 1.8047601674268522e-05 | 18877976 |
| 3390 | 17.6592544146501 | 0.0015 | 0.35680681467056274 | 1.7916045955644207e-05 | 18934728 |
| 3400 | 17.71157619359058 | 0.0004 | 0.16506262123584747 | 1.7784703180892882e-05 | 18990088 |
| 3410 | 17.763897972531066 | 0.0004 | 0.05834071710705757 | 1.7653577298182327e-05 | 19046728 |
| 3420 | 17.81621975147155 | 0.0022 | 0.14426672458648682 | 1.752267224916055e-05 | 19101672 |
| 3430 | 17.868541530412035 | 0.0007 | 0.2213674634695053 | 1.7391991968837272e-05 | 19159128 |
| 3440 | 17.92086330935252 | 0.0005 | 0.6867311000823975 | 1.726154038546569e-05 | 19215448 |
| 3450 | 17.973185088293 | 0.0013 | 0.022834990173578262 | 1.713132142042434e-05 | 19270328 |
| 3460 | 18.020928711576193 | 0.0004 | 0.017958860844373703 | 1.7001338988099264e-05 | 19321096 |
| 3470 | 18.073250490516678 | 0.0001 | 0.013346249237656593 | 1.68715969957663e-05 | 19377144 |
| 3480 | 18.125572269457162 | 0.0009 | 0.016560234129428864 | 1.6742099343473674e-05 | 19433080 |
| 3490 | 18.177894048397647 | 0.0003 | 0.3799145519733429 | 1.6612849923924723e-05 | 19489176 |
| 3500 | 18.230215827338128 | 0.0003 | 0.9833922982215881 | 1.6483852622360923e-05 | 19544920 |
| 3510 | 18.282537606278613 | 0.0008 | 0.017415842041373253 | 1.635511131644505e-05 | 19600888 |
| 3520 | 18.334859385219097 | 0.0002 | 0.05391722172498703 | 1.6226629876144657e-05 | 19656168 |
| 3530 | 18.387181164159582 | 0.0005 | 0.32104507088661194 | 1.609841216361574e-05 | 19711224 |
| 3540 | 18.439502943100067 | 0.0004 | 0.019494058564305305 | 1.597046203308662e-05 | 19767768 |
| 3550 | 18.491824722040548 | 0.0001 | 0.03266040235757828 | 1.584278333074208e-05 | 19824616 |
| 3560 | 18.544146500981032 | 0.0007 | 0.024637416005134583 | 1.571537989460779e-05 | 19880024 |
| 3570 | 18.596468279921517 | 0.0003 | 0.03031347133219242 | 1.5588255554434883e-05 | 19936504 |
| 3580 | 18.648790058862 | 0.0006 | 0.005317925941199064 | 1.5461414131584873e-05 | 19992136 |
| 3590 | 18.701111837802486 | 0.0035 | 0.024280209094285965 | 1.533485943891478e-05 | 20049128 |
| 3600 | 18.75343361674297 | 0.0002 | 0.9302756190299988 | 1.5208595280662497e-05 | 20106488 |
| 3610 | 18.805755395683452 | 0.0003 | 0.013046924024820328 | 1.5082625452332433e-05 | 20162536 |
| 3620 | 18.858077174623936 | 0.0003 | 0.01642036624252796 | 1.4956953740581454e-05 | 20219032 |
| 3630 | 18.91039895356442 | 0.0003 | 0.01480098720639944 | 1.4831583923104999e-05 | 20275880 |
| 3640 | 18.962720732504906 | 0.0006 | 0.05232972651720047 | 1.4706519768523597e-05 | 20332264 |
| 3650 | 19.0104643557881 | 0.0001 | 0.008723029866814613 | 1.458176503626949e-05 | 20382464 |
| 3660 | 19.06278613472858 | 0.0005 | 0.35439035296440125 | 1.4457323476473738e-05 | 20438720 |
| 3670 | 19.115107913669064 | 0.0001 | 0.02343558706343174 | 1.4333198829853394e-05 | 20493616 |
| 3680 | 19.16742969260955 | 0.0003 | 0.014349430799484253 | 1.420939482759907e-05 | 20550000 |
| 3690 | 19.219751471550033 | 0.0001 | 0.044981323182582855 | 1.4085915191262832e-05 | 20606144 |
| 3700 | 19.272073250490518 | 0.0001 | 0.023957155644893646 | 1.396276363264629e-05 | 20662720 |
| 3710 | 19.324395029431 | 0.0002 | 0.015040691941976547 | 1.3839943853689024e-05 | 20718992 |
| 3720 | 19.376716808371484 | 0.0001 | 0.011137389577925205 | 1.3717459546357284e-05 | 20776096 |
| 3730 | 19.429038587311968 | 0.0003 | 0.059972431510686874 | 1.3595314392533083e-05 | 20831584 |
| 3740 | 19.481360366252453 | 0.0006 | 0.003549647517502308 | 1.3473512063903432e-05 | 20887408 |
| 3750 | 19.533682145192937 | 0.0003 | 0.01864522323012352 | 1.335205622185003e-05 | 20944080 |
| 3760 | 19.586003924133422 | 0.0001 | 0.013788777403533459 | 1.3230950517339141e-05 | 21000576 |
| 3770 | 19.638325703073903 | 0.0001 | 0.007708332501351833 | 1.3110198590811918e-05 | 21056608 |
| 3780 | 19.690647482014388 | 0.0001 | 0.008117050863802433 | 1.2989804072074918e-05 | 21112528 |
| 3790 | 19.742969260954872 | 0.0001 | 0.005804878659546375 | 1.2869770580191051e-05 | 21169104 |
| 3800 | 19.795291039895357 | 0.0021 | 0.021145416423678398 | 1.2750101723370683e-05 | 21225440 |
| 3810 | 19.84761281883584 | 0.0014 | 0.031260546296834946 | 1.2630801098863284e-05 | 21281952 |
| 3820 | 19.899934597776323 | 0.0001 | 0.024749331176280975 | 1.2511872292849236e-05 | 21338448 |
| 3830 | 19.952256376716807 | 0.0001 | 0.11673085391521454 | 1.2393318880332062e-05 | 21394640 |
| 3840 | 20.0 | 0.0003 | 0.07176396250724792 | 1.2275144425030902e-05 | 21445504 |
| 3850 | 20.052321778940485 | 0.0004 | 0.024757077917456627 | 1.2157352479273465e-05 | 21503072 |
| 3860 | 20.10464355788097 | 0.0001 | 0.016589034348726273 | 1.2039946583889225e-05 | 21559312 |
| 3870 | 20.15696533682145 | 0.0001 | 0.004230449441820383 | 1.1922930268102949e-05 | 21616032 |
| 3880 | 20.209287115761935 | 0.0003 | 0.007515770383179188 | 1.1806307049428616e-05 | 21671872 |
| 3890 | 20.26160889470242 | 0.0001 | 0.0041547054424881935 | 1.1690080433563716e-05 | 21727616 |
| 3900 | 20.313930673642904 | 0.0 | 0.0042037139646708965 | 1.157425391428384e-05 | 21784400 |
| 3910 | 20.36625245258339 | 0.0001 | 0.024101046845316887 | 1.145883097333767e-05 | 21841584 |
| 3920 | 20.418574231523873 | 0.0 | 0.01927500218153 | 1.1343815080342279e-05 | 21897120 |
| 3930 | 20.470896010464354 | 0.0001 | 0.005595839582383633 | 1.1229209692678921e-05 | 21952320 |
| 3940 | 20.52321778940484 | 0.0001 | 0.00780284171923995 | 1.1115018255389006e-05 | 22008432 |
| 3950 | 20.575539568345324 | 0.0001 | 0.0051256874576210976 | 1.1001244201070606e-05 | 22063664 |
| 3960 | 20.62786134728581 | 0.0 | 0.0060654510743916035 | 1.088789094977522e-05 | 22119488 |
| 3970 | 20.680183126226293 | 0.0 | 0.019679056480526924 | 1.077496190890502e-05 | 22175568 |
| 3980 | 20.732504905166774 | 0.0 | 0.02322162687778473 | 1.0662460473110384e-05 | 22231472 |
| 3990 | 20.78482668410726 | 0.0007 | 0.001980294706299901 | 1.0550390024187906e-05 | 22287120 |
| 4000 | 20.837148463047743 | 0.0001 | 0.3102468252182007 | 1.0438753930978643e-05 | 22342736 |
+ }
3212
+ ],
3213
+ "logging_steps": 10,
3214
+ "max_steps": 5730,
3215
+ "num_input_tokens_seen": 22342736,
3216
+ "num_train_epochs": 30,
3217
+ "save_steps": 100,
3218
+ "stateful_callbacks": {
3219
+ "TrainerControl": {
3220
+ "args": {
3221
+ "should_epoch_stop": false,
3222
+ "should_evaluate": false,
3223
+ "should_log": false,
3224
+ "should_save": true,
3225
+ "should_training_stop": false
3226
+ },
3227
+ "attributes": {}
3228
+ }
3229
+ },
3230
+ "total_flos": 9.588523420509798e+17,
3231
+ "train_batch_size": 2,
3232
+ "trial_name": null,
3233
+ "trial_params": null
3234
+ }
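The file above follows the Hugging Face `Trainer` checkpoint-state layout, in which the per-step records live under a `log_history` key. A minimal sketch for pulling the loss curve back out of such a file (the local path and the `log_history` key are assumptions based on that standard layout, not something shown in this upload):

```python
import json

# Assumed local path after downloading the checkpoint; adjust as needed.
with open("BHC_Test1/trainer_state.json") as f:
    state = json.load(f)

# Keep only records that carry a training loss (other record types may lack it).
train_log = [rec for rec in state["log_history"] if "loss" in rec]

for rec in train_log:
    print(rec["step"], rec["epoch"], rec["loss"], rec["learning_rate"])
```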
BHC_Test1/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:70a1a9572fede9132d2f690b06d50636ee37349818605d61d0d34856cabc787c
+ size 7544
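`training_args.bin` is stored as a Git LFS pointer: the three lines above record the pointer spec version, the SHA-256 object id, and the byte size of the real blob, which lives out-of-band. After fetching the blob (e.g. with `git lfs pull`), a quick integrity check against the pointer might look like the following sketch (the local path is an assumption):

```python
import hashlib
import os

# Assumed path to the fetched blob in a local checkout.
path = "BHC_Test1/training_args.bin"

sha = hashlib.sha256()
with open(path, "rb") as f:
    # Hash in 1 MiB chunks so large blobs don't need to fit in memory.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha.update(chunk)

# Both expected values come straight from the pointer file above.
assert sha.hexdigest() == "70a1a9572fede9132d2f690b06d50636ee37349818605d61d0d34856cabc787c"
assert os.path.getsize(path) == 7544
```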
BHC_Test1/zero_to_fp32.py ADDED
@@ -0,0 +1,760 @@
+ #!/usr/bin/env python
+
+ # Copyright (c) Microsoft Corporation.
+ # SPDX-License-Identifier: Apache-2.0
+
+ # DeepSpeed Team
+
+ # This script extracts fp32 consolidated weights from ZeRO stage 1, 2 and 3 DeepSpeed checkpoints. It gets
+ # copied into the top level checkpoint dir, so the user can easily do the conversion at any point in
+ # the future. Once extracted, the weights don't require DeepSpeed and can be used in any
+ # application.
+ #
+ # example:
+ # python zero_to_fp32.py . output_dir/
+ # or
+ # python zero_to_fp32.py . output_dir/ --safe_serialization
+
+ import argparse
+ import torch
+ import glob
+ import math
+ import os
+ import re
+ import gc
+ import json
+ import numpy as np
+ from tqdm import tqdm
+ from collections import OrderedDict
+ from dataclasses import dataclass
+
+ # while this script doesn't use deepspeed to recover data, since the checkpoints are pickled with
+ # DeepSpeed data structures it has to be available in the current python environment.
+ from deepspeed.utils import logger
+ from deepspeed.checkpoint.constants import (DS_VERSION, OPTIMIZER_STATE_DICT, SINGLE_PARTITION_OF_FP32_GROUPS,
+                                             FP32_FLAT_GROUPS, ZERO_STAGE, PARTITION_COUNT, PARAM_SHAPES, BUFFER_NAMES,
+                                             FROZEN_PARAM_SHAPES, FROZEN_PARAM_FRAGMENTS)
+
+
+ @dataclass
+ class zero_model_state:
+     buffers: dict
+     param_shapes: dict
+     shared_params: list
+     ds_version: int
+     frozen_param_shapes: dict
+     frozen_param_fragments: dict
+
+
+ debug = 0
+
+ # load to cpu
+ device = torch.device('cpu')
+
+
+ def atoi(text):
+     return int(text) if text.isdigit() else text
+
+
+ def natural_keys(text):
+     '''
+     alist.sort(key=natural_keys) sorts in human order
+     http://nedbatchelder.com/blog/200712/human_sorting.html
+     (See Toothy's implementation in the comments)
+     '''
+     return [atoi(c) for c in re.split(r'(\d+)', text)]
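+ # e.g. sorted(['rank_2', 'rank_10'], key=natural_keys) keeps 'rank_2' before
+ # 'rank_10', whereas a plain lexicographic sort would put 'rank_10' first.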
+
+
+ def get_model_state_file(checkpoint_dir, zero_stage):
+     if not os.path.isdir(checkpoint_dir):
+         raise FileNotFoundError(f"Directory '{checkpoint_dir}' doesn't exist")
+
+     # there should be only one file
+     if zero_stage <= 2:
+         file = os.path.join(checkpoint_dir, "mp_rank_00_model_states.pt")
+     elif zero_stage == 3:
+         file = os.path.join(checkpoint_dir, "zero_pp_rank_0_mp_rank_00_model_states.pt")
+
+     if not os.path.exists(file):
+         raise FileNotFoundError(f"can't find model states file at '{file}'")
+
+     return file
+
+
+ def get_checkpoint_files(checkpoint_dir, glob_pattern):
+     # XXX: need to test that this simple glob rule works for multi-node setup too
+     ckpt_files = sorted(glob.glob(os.path.join(checkpoint_dir, glob_pattern)), key=natural_keys)
+
+     if len(ckpt_files) == 0:
+         raise FileNotFoundError(f"can't find {glob_pattern} files in directory '{checkpoint_dir}'")
+
+     return ckpt_files
+
+
+ def get_optim_files(checkpoint_dir):
+     return get_checkpoint_files(checkpoint_dir, "*_optim_states.pt")
+
+
+ def get_model_state_files(checkpoint_dir):
+     return get_checkpoint_files(checkpoint_dir, "*_model_states.pt")
+
+
+ def parse_model_states(files):
+     zero_model_states = []
+     for file in files:
+         state_dict = torch.load(file, map_location=device, weights_only=False)
+
+         if BUFFER_NAMES not in state_dict:
+             raise ValueError(f"{file} is not a model state checkpoint")
+         buffer_names = state_dict[BUFFER_NAMES]
+         if debug:
+             print("Found buffers:", buffer_names)
+
+         # recover just the buffers while restoring them to fp32 if they were saved in fp16
+         buffers = {k: v.float() for k, v in state_dict["module"].items() if k in buffer_names}
+         param_shapes = state_dict[PARAM_SHAPES]
+
+         # collect parameters that are included in param_shapes
+         param_names = []
+         for s in param_shapes:
+             for name in s.keys():
+                 param_names.append(name)
+
+         # update with frozen parameters
+         frozen_param_shapes = state_dict.get(FROZEN_PARAM_SHAPES, None)
+         if frozen_param_shapes is not None:
+             if debug:
+                 print(f"Found frozen_param_shapes: {frozen_param_shapes}")
+             param_names += list(frozen_param_shapes.keys())
+
+         # handle shared params
+         shared_params = [[k, v] for k, v in state_dict["shared_params"].items()]
+
+         ds_version = state_dict.get(DS_VERSION, None)
+
+         frozen_param_fragments = state_dict.get(FROZEN_PARAM_FRAGMENTS, None)
+
+         z_model_state = zero_model_state(buffers=buffers,
+                                          param_shapes=param_shapes,
+                                          shared_params=shared_params,
+                                          ds_version=ds_version,
+                                          frozen_param_shapes=frozen_param_shapes,
+                                          frozen_param_fragments=frozen_param_fragments)
+         zero_model_states.append(z_model_state)
+
+     return zero_model_states
+
+
148
+ def parse_optim_states(files, ds_checkpoint_dir):
+     total_files = len(files)
+     state_dicts = []
+     for f in tqdm(files, desc='Loading checkpoint shards'):
+         state_dict = torch.load(f, map_location=device, mmap=True, weights_only=False)
+         # immediately discard the two potentially huge optimizer states, as we only care about the
+         # fp32 master weights; also handle the case where they were already removed by another helper script
+         state_dict["optimizer_state_dict"].pop("optimizer_state_dict", None)
+         state_dicts.append(state_dict)
+
+     if ZERO_STAGE not in state_dicts[0][OPTIMIZER_STATE_DICT]:
+         raise ValueError(f"{files[0]} is not a zero checkpoint")
+     zero_stage = state_dicts[0][OPTIMIZER_STATE_DICT][ZERO_STAGE]
+     world_size = state_dicts[0][OPTIMIZER_STATE_DICT][PARTITION_COUNT]
+
+     # For ZeRO-2 each param group can have different partition_count as data parallelism for expert
+     # parameters can be different from data parallelism for non-expert parameters. So we can just
+     # use the max of the partition_count to get the dp world_size.
+
+     if isinstance(world_size, list):
+         world_size = max(world_size)
+
+     if world_size != total_files:
+         raise ValueError(
+             f"Expected {world_size} of '*_optim_states.pt' under '{ds_checkpoint_dir}' but found {total_files} files. "
+             "Possibly due to an overwrite of an old checkpoint, or a checkpoint didn't get saved by one or more processes."
+         )
+
+     # the groups are named differently in each stage
+     if zero_stage <= 2:
+         fp32_groups_key = SINGLE_PARTITION_OF_FP32_GROUPS
+     elif zero_stage == 3:
+         fp32_groups_key = FP32_FLAT_GROUPS
+     else:
+         raise ValueError(f"unknown zero stage {zero_stage}")
+
+     fp32_flat_groups = [state_dicts[i][OPTIMIZER_STATE_DICT][fp32_groups_key] for i in range(len(state_dicts))]
+     return zero_stage, world_size, fp32_flat_groups
+
+
+ def _get_fp32_state_dict_from_zero_checkpoint(ds_checkpoint_dir, exclude_frozen_parameters):
+     """
+     Returns fp32 state_dict reconstructed from ds checkpoint
+
+     Args:
+         - ``ds_checkpoint_dir``: path to the deepspeed checkpoint folder (where the optimizer files are)
+
+     """
+     print(f"Processing zero checkpoint '{ds_checkpoint_dir}'")
+
+     optim_files = get_optim_files(ds_checkpoint_dir)
+     zero_stage, world_size, fp32_flat_groups = parse_optim_states(optim_files, ds_checkpoint_dir)
+     print(f"Detected checkpoint of type zero stage {zero_stage}, world_size: {world_size}")
+
+     model_files = get_model_state_files(ds_checkpoint_dir)
+
+     zero_model_states = parse_model_states(model_files)
+     print(f'Parsing checkpoint created by deepspeed=={zero_model_states[0].ds_version}')
+
+     if zero_stage <= 2:
+         return _get_fp32_state_dict_from_zero2_checkpoint(world_size, fp32_flat_groups, zero_model_states,
+                                                           exclude_frozen_parameters)
+     elif zero_stage == 3:
+         return _get_fp32_state_dict_from_zero3_checkpoint(world_size, fp32_flat_groups, zero_model_states,
+                                                           exclude_frozen_parameters)
+
+
+ def _zero2_merge_frozen_params(state_dict, zero_model_states):
+     if zero_model_states[0].frozen_param_shapes is None or len(zero_model_states[0].frozen_param_shapes) == 0:
+         return
+
+     frozen_param_shapes = zero_model_states[0].frozen_param_shapes
+     frozen_param_fragments = zero_model_states[0].frozen_param_fragments
+
+     if debug:
+         num_elem = sum(s.numel() for s in frozen_param_shapes.values())
+         print(f'rank 0: {FROZEN_PARAM_SHAPES}.numel = {num_elem}')
+
+     wanted_params = len(frozen_param_shapes)
+     wanted_numel = sum(s.numel() for s in frozen_param_shapes.values())
+     avail_numel = sum([p.numel() for p in frozen_param_fragments.values()])
+     print(f'Frozen params: Have {avail_numel} numels to process.')
+     print(f'Frozen params: Need {wanted_numel} numels in {wanted_params} params')
+
+     total_params = 0
+     total_numel = 0
+     for name, shape in frozen_param_shapes.items():
+         total_params += 1
+         unpartitioned_numel = shape.numel()
+         total_numel += unpartitioned_numel
+
+         state_dict[name] = frozen_param_fragments[name]
+
+         if debug:
+             print(f"{name} full shape: {shape} unpartitioned numel {unpartitioned_numel} ")
+
+     print(f"Reconstructed Frozen fp32 state dict with {total_params} params {total_numel} elements")
+
+
+ def _has_callable(obj, fn):
+     attr = getattr(obj, fn, None)
+     return callable(attr)
+
+
+ def _zero2_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states):
+     param_shapes = zero_model_states[0].param_shapes
+
+     # Reconstruction protocol:
+     #
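+     # (sketched from the code below) Each rank's checkpoint stores one contiguous fp32 partition
+     # per param group. Concatenating the ranks' partitions in rank order rebuilds each full flat
+     # group, and the individual params are then sliced out of it sequentially, in the order
+     # recorded in param_shapes, with final offsets aligned to a multiple of 2*world_size.
+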
+     if debug:
+         for i in range(world_size):
+             for j in range(len(fp32_flat_groups[0])):
+                 print(f"{FP32_FLAT_GROUPS}[{i}][{j}].shape={fp32_flat_groups[i][j].shape}")
+
+     # XXX: memory usage doubles here (zero2)
+     num_param_groups = len(fp32_flat_groups[0])
+     merged_single_partition_of_fp32_groups = []
+     for i in range(num_param_groups):
+         merged_partitions = [sd[i] for sd in fp32_flat_groups]
+         full_single_fp32_vector = torch.cat(merged_partitions, 0)
+         merged_single_partition_of_fp32_groups.append(full_single_fp32_vector)
+     avail_numel = sum(
+         [full_single_fp32_vector.numel() for full_single_fp32_vector in merged_single_partition_of_fp32_groups])
+
+     if debug:
+         wanted_params = sum([len(shapes) for shapes in param_shapes])
+         wanted_numel = sum([sum(shape.numel() for shape in shapes.values()) for shapes in param_shapes])
+         # not asserting if there is a mismatch due to possible padding
+         print(f"Have {avail_numel} numels to process.")
+         print(f"Need {wanted_numel} numels in {wanted_params} params.")
+
+     # params
+     # XXX: for huge models that can't fit into the host's RAM we will have to recode this to support
+     # an out-of-core computing solution
+     total_numel = 0
+     total_params = 0
+     for shapes, full_single_fp32_vector in zip(param_shapes, merged_single_partition_of_fp32_groups):
+         offset = 0
+         avail_numel = full_single_fp32_vector.numel()
+         for name, shape in shapes.items():
+
+             unpartitioned_numel = shape.numel() if _has_callable(shape, 'numel') else math.prod(shape)
+             total_numel += unpartitioned_numel
+             total_params += 1
+
+             if debug:
+                 print(f"{name} full shape: {shape} unpartitioned numel {unpartitioned_numel} ")
+             state_dict[name] = full_single_fp32_vector.narrow(0, offset, unpartitioned_numel).view(shape)
+             offset += unpartitioned_numel
+
+         # Z2 started to align to 2*world_size to improve nccl performance. Therefore both offset and
+         # avail_numel can differ by anywhere between 0..2*world_size. Due to two unrelated complex
+         # paddings performed in the code it's almost impossible to predict the exact numbers w/o the
+         # live optimizer object, so we are checking that the numbers are within the right range
+         align_to = 2 * world_size
+
+         def zero2_align(x):
+             return align_to * math.ceil(x / align_to)
+
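+         # e.g. with world_size=4, align_to=8: zero2_align(13) == 16 and zero2_align(16) == 16,
+         # so in a well-formed group offset and avail_numel meet at the same aligned boundary
+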
+         if debug:
+             print(f"original offset={offset}, avail_numel={avail_numel}")
+
+         offset = zero2_align(offset)
+         avail_numel = zero2_align(avail_numel)
+
+         if debug:
+             print(f"aligned offset={offset}, avail_numel={avail_numel}")
+
+         # Sanity check
+         if offset != avail_numel:
+             raise ValueError(f"consumed {offset} numels out of {avail_numel} - something is wrong")
+
+     print(f"Reconstructed fp32 state dict with {total_params} params {total_numel} elements")
+
+
+ def _get_fp32_state_dict_from_zero2_checkpoint(world_size, fp32_flat_groups, zero_model_states,
+                                                exclude_frozen_parameters):
+     state_dict = OrderedDict()
+
+     # buffers
+     buffers = zero_model_states[0].buffers
+     state_dict.update(buffers)
+     if debug:
+         print(f"added {len(buffers)} buffers")
+
+     if not exclude_frozen_parameters:
+         _zero2_merge_frozen_params(state_dict, zero_model_states)
+
+     _zero2_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states)
+
+     # recover shared parameters
+     for pair in zero_model_states[0].shared_params:
+         if pair[1] in state_dict:
+             state_dict[pair[0]] = state_dict[pair[1]]
+
+     return state_dict
+
+
+ def zero3_partitioned_param_info(unpartitioned_numel, world_size):
+     remainder = unpartitioned_numel % world_size
+     padding_numel = (world_size - remainder) if remainder else 0
+     partitioned_numel = math.ceil(unpartitioned_numel / world_size)
+     return partitioned_numel, padding_numel
+
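+ # Worked example: unpartitioned_numel=10 with world_size=4 yields partitioned_numel=3 and
+ # padding_numel=2, i.e. 4 ranks * 3 numels = 12 slots holding the 10 real values plus 2 of padding.
+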
+
+ def _zero3_merge_frozen_params(state_dict, world_size, zero_model_states):
+     if zero_model_states[0].frozen_param_shapes is None or len(zero_model_states[0].frozen_param_shapes) == 0:
+         return
+
+     if debug:
+         for i in range(world_size):
+             num_elem = sum(s.numel() for s in zero_model_states[i].frozen_param_fragments.values())
+             print(f'rank {i}: {FROZEN_PARAM_SHAPES}.numel = {num_elem}')
+
+     frozen_param_shapes = zero_model_states[0].frozen_param_shapes
+     wanted_params = len(frozen_param_shapes)
+     wanted_numel = sum(s.numel() for s in frozen_param_shapes.values())
+     avail_numel = sum([p.numel() for p in zero_model_states[0].frozen_param_fragments.values()]) * world_size
+     print(f'Frozen params: Have {avail_numel} numels to process.')
+     print(f'Frozen params: Need {wanted_numel} numels in {wanted_params} params')
+
+     total_params = 0
+     total_numel = 0
+     for name, shape in zero_model_states[0].frozen_param_shapes.items():
+         total_params += 1
+         unpartitioned_numel = shape.numel()
+         total_numel += unpartitioned_numel
+
+         param_frags = tuple(model_state.frozen_param_fragments[name] for model_state in zero_model_states)
+         state_dict[name] = torch.cat(param_frags, 0).narrow(0, 0, unpartitioned_numel).view(shape)
+
+         partitioned_numel, partitioned_padding_numel = zero3_partitioned_param_info(unpartitioned_numel, world_size)
+
+         if debug:
+             print(
+                 f"Frozen params: {total_params} {name} full shape: {shape} partition0 numel={partitioned_numel} partitioned_padding_numel={partitioned_padding_numel}"
+             )
+
+     print(f"Reconstructed Frozen fp32 state dict with {total_params} params {total_numel} elements")
+
+
+ class GatheredTensor:
+     """
+     A pseudo tensor that collects partitioned weights.
+     It is more memory efficient when there are multiple groups.
+     """
+
+     def __init__(self, flat_groups, flat_groups_offset, offset, partitioned_numel, shape):
+         self.flat_groups = flat_groups
+         self.flat_groups_offset = flat_groups_offset
+         self.offset = offset
+         self.partitioned_numel = partitioned_numel
+         self.shape = shape
+         self.dtype = self.flat_groups[0][0].dtype
+
+     def contiguous(self):
+         """
+         Merge partitioned weights from flat_groups into a single tensor.
+         """
+         end_idx = self.offset + self.partitioned_numel
+         world_size = len(self.flat_groups)
+         pad_flat_param_chunks = []
+
+         for rank_i in range(world_size):
+             # for each rank, we need to collect weights from the related group/groups
+             flat_groups_at_rank_i = self.flat_groups[rank_i]
+             start_group_id = None
+             end_group_id = None
+             for group_id in range(len(self.flat_groups_offset)):
+                 if self.flat_groups_offset[group_id] <= self.offset < self.flat_groups_offset[group_id + 1]:
+                     start_group_id = group_id
+                 if self.flat_groups_offset[group_id] < end_idx <= self.flat_groups_offset[group_id + 1]:
+                     end_group_id = group_id
+                     break
+             # collect weights from the related group/groups
+             for group_id in range(start_group_id, end_group_id + 1):
+                 flat_tensor = flat_groups_at_rank_i[group_id]
+                 start_offset = self.offset - self.flat_groups_offset[group_id]
+                 end_offset = min(end_idx, self.flat_groups_offset[group_id + 1]) - self.flat_groups_offset[group_id]
+                 pad_flat_param_chunks.append(flat_tensor[start_offset:end_offset])
+
+         # collect weights from all ranks
+         pad_flat_param = torch.cat(pad_flat_param_chunks, dim=0)
+         param = pad_flat_param[:self.shape.numel()].view(self.shape).contiguous()
+         return param
+
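+ # Illustrative use (key name hypothetical): state_dict["lm_head.weight"] costs no extra memory
+ # until .contiguous() is called on it, at which point the per-rank partitions are concatenated
+ # and trimmed to the parameter's true shape.
+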
+
+ def _zero3_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states):
+     param_shapes = zero_model_states[0].param_shapes
+     avail_numel = sum([flat_group.numel() for flat_group in fp32_flat_groups[0]]) * world_size
+
+     # Reconstruction protocol: For zero3 we need to zip the partitions together at the boundary of
+     # each param, re-consolidating each param, while dealing with padding if any
+
+     # merge list of dicts, preserving order
+     param_shapes = {k: v for d in param_shapes for k, v in d.items()}
+
+     if debug:
+         for i in range(world_size):
+             for j in range(len(fp32_flat_groups[0])):
+                 print(f"{FP32_FLAT_GROUPS}[{i}][{j}].shape={fp32_flat_groups[i][j].shape}")
+
+     wanted_params = len(param_shapes)
+     wanted_numel = sum(shape.numel() for shape in param_shapes.values())
+     # not asserting if there is a mismatch due to possible padding
+     print(f"Trainable params: Have {avail_numel} numels to process.")
+     print(f"Trainable params: Need {wanted_numel} numels in {wanted_params} params.")
+
+     # params
+     # XXX: for huge models that can't fit into the host's RAM we will have to recode this to support
+     # an out-of-core computing solution
+     offset = 0
+     total_numel = 0
+     total_params = 0
+     flat_groups_offset = [0] + list(np.cumsum([flat_tensor.numel() for flat_tensor in fp32_flat_groups[0]]))
+     for name, shape in tqdm(param_shapes.items(), desc='Gathering sharded weights'):
+         unpartitioned_numel = shape.numel()
+         total_numel += unpartitioned_numel
+         total_params += 1
+         partitioned_numel, partitioned_padding_numel = zero3_partitioned_param_info(unpartitioned_numel, world_size)
+
+         if debug:
+             print(
+                 f"Trainable params: {total_params} {name} full shape: {shape} partition0 numel={partitioned_numel} partitioned_padding_numel={partitioned_padding_numel}"
+             )
+
+         # memory efficient tensor
+         tensor = GatheredTensor(fp32_flat_groups, flat_groups_offset, offset, partitioned_numel, shape)
+         state_dict[name] = tensor
+         offset += partitioned_numel
+
+     offset *= world_size
+
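+     # `offset` accumulated per-rank partition sizes, while `avail_numel` counts all ranks,
+     # hence the multiplication by world_size before the two are compared below
+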
+     # Sanity check
+     if offset != avail_numel:
+         raise ValueError(f"consumed {offset} numels out of {avail_numel} - something is wrong")
+
+     print(f"Reconstructed Trainable fp32 state dict with {total_params} params {total_numel} elements")
+
+
+ def _get_fp32_state_dict_from_zero3_checkpoint(world_size, fp32_flat_groups, zero_model_states,
+                                                exclude_frozen_parameters):
+     state_dict = OrderedDict()
+
+     # buffers
+     buffers = zero_model_states[0].buffers
+     state_dict.update(buffers)
+     if debug:
+         print(f"added {len(buffers)} buffers")
+
+     if not exclude_frozen_parameters:
+         _zero3_merge_frozen_params(state_dict, world_size, zero_model_states)
+
+     _zero3_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states)
+
+     # recover shared parameters
+     for pair in zero_model_states[0].shared_params:
+         if pair[1] in state_dict:
+             state_dict[pair[0]] = state_dict[pair[1]]
+
+     return state_dict
+
+
+ def to_torch_tensor(state_dict, return_empty_tensor=False):
+     """
+     Convert a state_dict of GatheredTensor entries to plain torch tensors
+     """
+     torch_state_dict = {}
+     converted_tensors = {}
+     for name, tensor in state_dict.items():
+         tensor_id = id(tensor)
+         if tensor_id in converted_tensors:  # shared tensors
+             shared_tensor = torch_state_dict[converted_tensors[tensor_id]]
+             torch_state_dict[name] = shared_tensor
+         else:
+             converted_tensors[tensor_id] = name
+             if return_empty_tensor:
+                 torch_state_dict[name] = torch.empty(tensor.shape, dtype=tensor.dtype)
+             else:
+                 torch_state_dict[name] = tensor.contiguous()
+     return torch_state_dict
+
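+ # Note that tied weights are deduplicated above by object identity: if two names reference the
+ # same GatheredTensor (e.g., hypothetically, an output head tied to the input embedding), the
+ # tensor is materialized once and both names share the result.
+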
+
+ def get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir,
+                                              tag=None,
+                                              exclude_frozen_parameters=False,
+                                              lazy_mode=False):
+     """
+     Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated state_dict that can be loaded with
+     ``load_state_dict()`` and used for training without DeepSpeed or shared with others, for example
+     via a model hub.
+
+     Args:
+         - ``checkpoint_dir``: path to the desired checkpoint folder
+         - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in 'latest' file. e.g., ``global_step14``
+         - ``exclude_frozen_parameters``: exclude frozen parameters
+         - ``lazy_mode``: get state_dict in lazy mode. It returns a dict of pseudo tensors instead of torch tensors, which is more memory efficient.
+           Convert a pseudo tensor to a torch tensor with ``.contiguous()``
+
+     Returns:
+         - pytorch ``state_dict``
+
+     A typical usage might be ::
+
+         from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint
+         # do the training and checkpoint saving
+         state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir) # already on cpu
+         model = model.cpu() # move to cpu
+         model.load_state_dict(state_dict)
+         # submit to model hub or save the model to share with others
+
+     In this example the ``model`` will no longer be usable in the deepspeed context of the same
+     application. i.e. you will need to re-initialize the deepspeed engine, since
+     ``model.load_state_dict(state_dict)`` will remove all the deepspeed magic from it.
+
+     If you want it all done for you, use ``load_state_dict_from_zero_checkpoint`` instead.
+
+     Note: the above usage may not work if your application doesn't have sufficient free CPU memory.
+     You may need to use the offline approach using the ``zero_to_fp32.py`` script that is saved with
+     the checkpoint. Or you can load the state_dict in lazy mode ::
+
+         from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint
+         state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, lazy_mode=True) # not on cpu
+         for name, lazy_tensor in state_dict.items():
+             tensor = lazy_tensor.contiguous() # to cpu
+             print(name, tensor)
+             # del tensor to release memory if it is no longer in use
+     """
+     if tag is None:
+         latest_path = os.path.join(checkpoint_dir, 'latest')
+         if os.path.isfile(latest_path):
+             with open(latest_path, 'r') as fd:
+                 tag = fd.read().strip()
+         else:
+             raise ValueError(f"Unable to find 'latest' file at {latest_path}")
+
+     ds_checkpoint_dir = os.path.join(checkpoint_dir, tag)
+
+     if not os.path.isdir(ds_checkpoint_dir):
+         raise FileNotFoundError(f"Directory '{ds_checkpoint_dir}' doesn't exist")
+
+     state_dict = _get_fp32_state_dict_from_zero_checkpoint(ds_checkpoint_dir, exclude_frozen_parameters)
+     if lazy_mode:
+         return state_dict
+     else:
+         return to_torch_tensor(state_dict)
+
+
+ def convert_zero_checkpoint_to_fp32_state_dict(checkpoint_dir,
+                                                output_dir,
+                                                max_shard_size="5GB",
+                                                safe_serialization=False,
+                                                tag=None,
+                                                exclude_frozen_parameters=False):
+     """
+     Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated ``state_dict`` file that can be
+     loaded with ``torch.load(file)`` + ``load_state_dict()`` and used for training without DeepSpeed.
+
+     Args:
+         - ``checkpoint_dir``: path to the desired checkpoint folder. (one that contains the tag-folder, like ``global_step14``)
+         - ``output_dir``: directory to the pytorch fp32 state_dict output files
+         - ``max_shard_size``: the maximum size for a checkpoint before being sharded, default value is 5GB
+         - ``safe_serialization``: whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`).
+         - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in the file named ``latest`` in the checkpoint folder, e.g., ``global_step14``
+         - ``exclude_frozen_parameters``: exclude frozen parameters
+     """
+
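+     # A typical programmatic call, mirroring the CLI entry point at the bottom of this file
+     # (paths illustrative):
+     #   convert_zero_checkpoint_to_fp32_state_dict("path/checkpoint-12", "path/checkpoint-12-output/",
+     #                                              safe_serialization=True)
+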
+     # Dependency pre-check
+     if safe_serialization:
+         try:
+             from safetensors.torch import save_file
+         except ImportError:
+             print('If you want to use `safe_serialization`, please `pip install safetensors`')
+             raise
+     if max_shard_size is not None:
+         try:
+             from huggingface_hub import split_torch_state_dict_into_shards
+         except ImportError:
+             print('If you want to use `max_shard_size`, please `pip install huggingface_hub`')
+             raise
+
+     # Convert zero checkpoint to state_dict
+     state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir,
+                                                           tag,
+                                                           exclude_frozen_parameters,
+                                                           lazy_mode=True)
+
+     # Shard the model if it is too big.
+     weights_name = "model.safetensors" if safe_serialization else "pytorch_model.bin"
+     if max_shard_size is not None:
+         filename_pattern = weights_name.replace(".bin", "{suffix}.bin").replace(".safetensors", "{suffix}.safetensors")
+         # a memory-efficient approach for sharding
+         empty_state_dict = to_torch_tensor(state_dict, return_empty_tensor=True)
+         state_dict_split = split_torch_state_dict_into_shards(empty_state_dict,
+                                                               filename_pattern=filename_pattern,
+                                                               max_shard_size=max_shard_size)
+     else:
+         from collections import namedtuple
+         StateDictSplit = namedtuple("StateDictSplit", ["is_sharded", "filename_to_tensors"])
+         state_dict_split = StateDictSplit(is_sharded=False,
+                                           filename_to_tensors={weights_name: list(state_dict.keys())})
+
+     # Save the model by shard
+     os.makedirs(output_dir, exist_ok=True)
+     filename_to_tensors = state_dict_split.filename_to_tensors.items()
+     for shard_file, tensors in tqdm(filename_to_tensors, desc="Saving checkpoint shards"):
+         shard_state_dict = {tensor_name: state_dict[tensor_name] for tensor_name in tensors}
+         shard_state_dict = to_torch_tensor(shard_state_dict)
+         output_path = os.path.join(output_dir, shard_file)
+         if safe_serialization:
+             save_file(shard_state_dict, output_path, metadata={"format": "pt"})
+         else:
+             torch.save(shard_state_dict, output_path)
+         # release the memory of the current shard
+         for tensor_name in list(shard_state_dict.keys()):
+             del state_dict[tensor_name]
+             del shard_state_dict[tensor_name]
+         del shard_state_dict
+         gc.collect()
+
+     # Save index if sharded
+     if state_dict_split.is_sharded:
+         index = {
+             "metadata": state_dict_split.metadata,
+             "weight_map": state_dict_split.tensor_to_filename,
+         }
+         save_index_file = "model.safetensors.index.json" if safe_serialization else "pytorch_model.bin.index.json"
+         save_index_file = os.path.join(output_dir, save_index_file)
+         with open(save_index_file, "w", encoding="utf-8") as f:
+             content = json.dumps(index, indent=2, sort_keys=True) + "\n"
+             f.write(content)
+
+
+ def load_state_dict_from_zero_checkpoint(model, checkpoint_dir, tag=None):
+     """
+     1. Put the provided model to cpu
+     2. Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated ``state_dict``
+     3. Load it into the provided model
+
+     Args:
+         - ``model``: the model object to update
+         - ``checkpoint_dir``: path to the desired checkpoint folder. (one that contains the tag-folder, like ``global_step14``)
+         - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in the file named ``latest`` in the checkpoint folder, e.g., ``global_step14``
+
+     Returns:
+         - ``model``: modified model
+
+     Make sure you have plenty of CPU memory available before you call this function. If you don't
+     have enough use the ``zero_to_fp32.py`` utility to do the conversion. You will find it
+     conveniently placed for you in the checkpoint folder.
+
+     A typical usage might be ::
+
+         from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint
+         model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir)
+         # submit to model hub or save the model to share with others
+
+     Note, that once this was run, the ``model`` will no longer be usable in the deepspeed context
+     of the same application. i.e. you will need to re-initialize the deepspeed engine, since
+     ``model.load_state_dict(state_dict)`` will remove all the deepspeed magic from it.
+
+     """
+     logger.info("Extracting fp32 weights")
+     state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag)
+
+     logger.info("Overwriting model with fp32 weights")
+     model = model.cpu()
+     model.load_state_dict(state_dict, strict=False)
+
+     return model
+
+
+ if __name__ == "__main__":
+     parser = argparse.ArgumentParser()
+     parser.add_argument("checkpoint_dir",
+                         type=str,
+                         help="path to the desired checkpoint folder, e.g., path/checkpoint-12")
+     parser.add_argument("output_dir",
+                         type=str,
+                         help="directory to the pytorch fp32 state_dict output files "
+                         "(e.g. path/checkpoint-12-output/)")
+     parser.add_argument(
+         "--max_shard_size",
+         type=str,
+         default="5GB",
+         help="The maximum size for a checkpoint before being sharded. Checkpoint shards will then each be of a size "
+         "lower than this. If expressed as a string, it needs to be digits followed by a unit (like `5MB`). "
+         "We default it to 5GB in order for models to be able to run easily on free-tier google colab instances "
+         "without CPU OOM issues.")
+     parser.add_argument(
+         "--safe_serialization",
+         default=False,
+         action='store_true',
+         help="Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`).")
+     parser.add_argument("-t",
+                         "--tag",
+                         type=str,
+                         default=None,
+                         help="checkpoint tag used as a unique identifier for checkpoint. e.g., global_step1")
+     parser.add_argument("--exclude_frozen_parameters", action='store_true', help="exclude frozen parameters")
+     parser.add_argument("-d", "--debug", action='store_true', help="enable debug")
+     args = parser.parse_args()
+
+     debug = args.debug
+
+     convert_zero_checkpoint_to_fp32_state_dict(args.checkpoint_dir,
+                                                args.output_dir,
+                                                max_shard_size=args.max_shard_size,
+                                                safe_serialization=args.safe_serialization,
+                                                tag=args.tag,
+                                                exclude_frozen_parameters=args.exclude_frozen_parameters)