lapp0 committed (verified)
Commit d0cbd31 · 1 Parent(s): b10d829

Training in progress, step 5000

README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 base_model: gpt2
 datasets:
-- distily/synth_gpt2_t1_seq_1M
+- togethercomputer/RedPajama-Data-V2
 library_name: Distily
 license: creativeml-openrail-m
 tags:
@@ -18,7 +18,7 @@ model-index:
 
 Distilled with [Distily](https://github.com/lapp0/distily) library
 using teacher model [gpt2](https://huggingface.co/gpt2)
-on dataset [distily/synth_gpt2_t1_seq_1M](https://huggingface.co/datasets/distily/synth_gpt2_t1_seq_1M).
+on dataset [togethercomputer/RedPajama-Data-V2](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2).
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment.
@@ -78,13 +78,13 @@ GPT2LMHeadModel(
 
 # Resource Usage
 
-- Max Train VRAM Use: 15.7135 GB
-- Available VRAM: 23.4329 GB
+- Max Train VRAM Use: 15.7182 GB
+- Available VRAM: 23.6429 GB
 - GPUs:
   - 1x NVIDIA GeForce RTX 4090
-- CPUs: 64
-- CPU Memory: 251.7299 GB
-- CPU Memory Bandwidth: 1600 GB/s
+- CPUs: 32
+- CPU Memory: 61.9353 GB
+- CPU Memory Bandwidth: 800 GB/s
 
 # Distillation (Teacher -> Student) Architecture Difference:
 
@@ -115,10 +115,10 @@ GPT2LMHeadModel(
 <br/>
 
 # Train Dataset
-Trained on 681,027,436 tokens from the [distily/synth_gpt2_t1_seq_1M](https://huggingface.co/datasets/distily/synth_gpt2_t1_seq_1M) dataset.
+Trained on 640,432,662 tokens from the [togethercomputer/RedPajama-Data-V2](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2) dataset.
 
 - Num Samples: `998,000`
-- Subset: `None`
+- Subset: `sample`
 - Split: `train`
 
 
@@ -154,7 +154,7 @@ The following hyperparameters were used during training:
 - eval_batch_size: `8`
 - seed: `42`
 - optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
-- lr_scheduler_type: `constant`
+- lr_scheduler_type: `polynomial`
 - num_epochs: `1.0`
 - distillation_objective: `DistillationObjective(
     logits_loss_component=LossComponent(
@@ -172,7 +172,7 @@ The following hyperparameters were used during training:
     projector='orthogonal'
   )
 )`
-- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7a7afdb74e80>`
+- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7f3ee68c4be0>`
 - student_model_name_or_path: `None`
 - student_config_name_or_path: `distilbert/distilgpt2`
 - student_model_config: `None`
@@ -182,14 +182,15 @@ The following hyperparameters were used during training:
 - teacher_model_name_or_path: `gpt2`
 - teacher_load_in_8bit: `False`
 - teacher_load_in_4bit: `False`
-- dataset_uri: `distily/synth_gpt2_t1_seq_1M`
-- dataset_subset: `None`
+- dataset_uri: `togethercomputer/RedPajama-Data-V2`
+- dataset_subset: `sample`
 - dataset_split: `train`
-- dataset_column_name: `text`
+- dataset_column_name: `raw_content`
 - dataset_sample_size: `1000000`
 - dataset_test_size: `0.002`
-- dataset_shuffle: `True`
+- dataset_shuffle: `False`
 - dataset_shuffle_seed: `42`
+- dataset_trust_remote_code: `False`
 - gradient_accumulation_steps: `1`
 - weight_decay: `0.0`
 - max_grad_norm: `1.0`
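
For illustration only, and not part of this commit: a minimal sketch of how the training corpus described by the updated `dataset_*` fields above could be loaded with the Hugging Face `datasets` library. The arguments simply mirror the card values; Distily's actual data pipeline may differ.

```python
from datasets import load_dataset

# Minimal sketch based on the card fields above; an assumption, not Distily's actual pipeline.
ds = load_dataset(
    "togethercomputer/RedPajama-Data-V2",  # dataset_uri
    name="sample",                         # dataset_subset: `sample`
    split="train",                         # dataset_split: `train`
    # trust_remote_code=True may be required with newer `datasets` versions,
    # since this dataset ships a loading script.
)
print(ds[0]["raw_content"][:200])          # dataset_column_name: `raw_content`
```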
logs/dataset_column_name=raw_content, dataset_split=train, dataset_subset=sample, dataset_uri=togethercomputer_RedPajama-Data-V2/completed.flag ADDED
File without changes
logs/dataset_split=train, dataset_subset=None, dataset_trust_remote_code=True, dataset_uri=Skylion007_openwebtext/events.out.tfevents.1725796463.0b434856d812 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:606b5b78386d438a7281ba5cd927acbbab6ddc144b9e5db6e7f7c7af76de85b7
+size 162094
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a1a292cdeff20913a9e61397f7e618e7519c47cb9bf60bbc8759459422b0bfaa
+oid sha256:19233e3bb76d887efe22c96f07164d9798fe33d4a24377496890a382f4e5373a
 size 163832792
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:df27e837bc207577c331df3b67b7c8b3301c875f04f12f54132150840661b885
-size 5560
+oid sha256:b47ff358d893d714321d24fe3a57959efbf53fc8124c51a71a9cd91ef56abe87
+size 5496