lapp0 committed
Commit bd80e69 · verified · 1 Parent(s): b5047d4

End of training

README.md CHANGED

@@ -1,7 +1,7 @@
 ---
 base_model: gpt2
 datasets:
-- distily/c4_multilingual_1M
+- wikimedia/wikipedia
 library_name: Distily
 license: creativeml-openrail-m
 tags:
@@ -18,7 +18,7 @@ model-index:
 
 Distilled with [Distily](https://github.com/lapp0/distily) library
 using teacher model [gpt2](https://huggingface.co/gpt2)
-on dataset [distily/c4_multilingual_1M](https://huggingface.co/datasets/distily/c4_multilingual_1M).
+on dataset [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia).
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment.
@@ -78,7 +78,7 @@ GPT2LMHeadModel(
 
 # Resource Usage
 
-- Max Train VRAM Use: 15.6998 GB
+- Max Train VRAM Use: 15.7005 GB
 - Available VRAM: 23.6429 GB
 - GPUs:
   - 1x NVIDIA GeForce RTX 4090
@@ -115,10 +115,10 @@ GPT2LMHeadModel(
 <br/>
 
 # Train Dataset
-Trained on 448,520,445 tokens from the [distily/c4_multilingual_1M](https://huggingface.co/datasets/distily/c4_multilingual_1M) dataset.
+Trained on 525,568,142 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
 
 - Num Samples: `998,000`
-- Subset: `None`
+- Subset: `20231101.en`
 - Split: `train`
 
 
@@ -172,7 +172,7 @@ The following hyperparameters were used during training:
     projector='orthogonal'
   )
 )`
-- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7fb774b15c60>`
+- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7fb7667bf880>`
 - student_model_name_or_path: `None`
 - student_config_name_or_path: `distilbert/distilgpt2`
 - student_model_config: `None`
@@ -182,8 +182,8 @@ The following hyperparameters were used during training:
 - teacher_model_name_or_path: `gpt2`
 - teacher_load_in_8bit: `False`
 - teacher_load_in_4bit: `False`
-- dataset_uri: `distily/c4_multilingual_1M`
-- dataset_subset: `None`
+- dataset_uri: `wikimedia/wikipedia`
+- dataset_subset: `20231101.en`
 - dataset_split: `train`
 - dataset_column_name: `text`
 - dataset_sample_size: `1000000`
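
The `dataset_*` values in the hunk above fully specify the new data pipeline. As a reading aid, here is a minimal sketch of how those values map onto the Hugging Face `datasets` API; this is not Distily's actual loading code, and only the uri/subset/split/column/sample-size values come from the card itself:

```python
from datasets import load_dataset

# dataset_uri + dataset_subset + dataset_split from the updated card.
# The old config used distily/c4_multilingual_1M with subset `None`.
dataset = load_dataset(
    "wikimedia/wikipedia",  # dataset_uri
    "20231101.en",          # dataset_subset
    split="train",          # dataset_split
)

# dataset_sample_size: `1000000` -> cap at the first 1M examples
sample = dataset.select(range(1_000_000))

# dataset_column_name: `text` -> the column fed to the tokenizer
texts = sample["text"]
```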
logs/events.out.tfevents.1725703850.0b434856d812 ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e1e7d3c0f9942fd6cb2190954865b976b4a2d2cbba13a20b82dd88d55f86a8b8
+size 529
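
The added file is a Git LFS pointer rather than the log itself. A sketch of one way to inspect the underlying TensorBoard event file after cloning the repo with LFS, assuming the `tensorboard` package is installed (which scalar tags exist depends on what Distily logged):

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Path matches the file added in this commit.
acc = EventAccumulator("logs/events.out.tfevents.1725703850.0b434856d812")
acc.Reload()          # parse the event file
print(acc.Tags())     # list the scalar/tensor tags recorded during training
```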