Training in progress, step 5000
- README.md +13 -11
- logs/dataset_split=train, dataset_subset=None, dataset_uri=distily_c4_multilingual_1M, lr_scheduler_kwargs=None, lr_scheduler_type=constant/completed.flag +0 -0
- logs/dataset_split=train, dataset_subset=None, dataset_uri=distily_synth_gpt2_t1_seq_1M, lr_scheduler_kwargs=None, lr_scheduler_type=constant/events.out.tfevents.1725780802.32a7ca12a2ec +3 -0
- model.safetensors +1 -1
- training_args.bin +1 -1
README.md
CHANGED
@@ -1,7 +1,7 @@
 ---
 base_model: gpt2
 datasets:
-- distily/…
+- distily/c4_multilingual_1M
 library_name: Distily
 license: creativeml-openrail-m
 tags:
@@ -18,7 +18,7 @@ model-index:
 
 Distilled with [Distily](https://github.com/lapp0/distily) library
 using teacher model [gpt2](https://huggingface.co/gpt2)
-on dataset [distily/…
+on dataset [distily/c4_multilingual_1M](https://huggingface.co/datasets/distily/c4_multilingual_1M).
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment.
@@ -78,13 +78,13 @@ GPT2LMHeadModel(
 
 # Resource Usage
 
-- Max Train VRAM Use: 15.…
-- Available VRAM: 23.…
+- Max Train VRAM Use: 15.7135 GB
+- Available VRAM: 23.6497 GB
 - GPUs:
   - 1x NVIDIA GeForce RTX 4090
-- CPUs: …
-- CPU Memory: …
-- CPU Memory Bandwidth: …
+- CPUs: 28
+- CPU Memory: 62.6429 GB
+- CPU Memory Bandwidth: 700 GB/s
 
 # Distillation (Teacher -> Student) Architecture Difference:
 
@@ -115,7 +115,7 @@ GPT2LMHeadModel(
 <br/>
 
 # Train Dataset
-Trained on …
+Trained on 448,494,678 tokens from the [distily/c4_multilingual_1M](https://huggingface.co/datasets/distily/c4_multilingual_1M) dataset.
 
 - Num Samples: `998,000`
 - Subset: `None`
@@ -154,7 +154,7 @@ The following hyperparameters were used during training:
 - eval_batch_size: `8`
 - seed: `42`
 - optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
-- lr_scheduler_type: `…`
+- lr_scheduler_type: `constant`
 - num_epochs: `1.0`
 - distillation_objective: `DistillationObjective(
     logits_loss_component=LossComponent(
@@ -172,7 +172,7 @@ The following hyperparameters were used during training:
         projector='orthogonal'
     )
 )`
-- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at …>`
+- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7f5a6ef93eb0>`
 - student_model_name_or_path: `None`
 - student_config_name_or_path: `distilbert/distilgpt2`
 - student_model_config: `None`
@@ -182,12 +182,14 @@ The following hyperparameters were used during training:
 - teacher_model_name_or_path: `gpt2`
 - teacher_load_in_8bit: `False`
 - teacher_load_in_4bit: `False`
-- dataset_uri: `distily/…`
+- dataset_uri: `distily/c4_multilingual_1M`
 - dataset_subset: `None`
 - dataset_split: `train`
 - dataset_column_name: `text`
 - dataset_sample_size: `1000000`
 - dataset_test_size: `0.002`
+- dataset_shuffle: `False`
+- dataset_shuffle_seed: `42`
 - gradient_accumulation_steps: `1`
 - weight_decay: `0.0`
 - max_grad_norm: `1.0`
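The `distillation_objective` above lists per-component losses (this excerpt shows a `logits_loss_component` and an `'orthogonal'` projector). As a point of reference only, here is a minimal sketch of the kind of forward-KL logits loss such a component computes; it is not Distily's implementation, and the temperature argument is purely illustrative.

```python
# Illustrative sketch of a logits distillation loss (not Distily's actual code).
import torch
import torch.nn.functional as F

def logits_kl_loss(student_logits: torch.Tensor,
                   teacher_logits: torch.Tensor,
                   temperature: float = 1.0) -> torch.Tensor:
    """KL(teacher || student), averaged over all token positions."""
    # Flatten (batch, seq, vocab) -> (batch * seq, vocab) so "batchmean"
    # averages over every token position.
    s = F.log_softmax(student_logits / temperature, dim=-1).flatten(0, 1)
    t = F.softmax(teacher_logits / temperature, dim=-1).flatten(0, 1)
    # Scale by T^2 to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2

# Toy shapes: batch of 2, sequence length 8, GPT-2 vocabulary size.
student_logits = torch.randn(2, 8, 50257)
teacher_logits = torch.randn(2, 8, 50257)
print(logits_kl_loss(student_logits, teacher_logits))
```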
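Two scheduler entries may look inconsistent at first glance: `lr_scheduler_type` is `constant`, yet `lr_scheduler` prints as a `torch.optim.lr_scheduler.LambdaLR` object. In `transformers`, the constant schedule is built as a `LambdaLR` whose multiplier is fixed at 1.0, so both entries describe the same schedule. A minimal sketch follows; the optimizer settings are copied from the card, but the learning rate itself is not shown in this excerpt and is a placeholder.

```python
# Sketch: the "constant" schedule is a LambdaLR with a fixed 1.0 multiplier.
import torch
from transformers import get_constant_schedule

model = torch.nn.Linear(8, 8)                      # stand-in for the student model
optimizer = torch.optim.Adam(model.parameters(),
                             lr=1e-4,              # placeholder learning rate
                             betas=(0.9, 0.999),
                             eps=1e-08)
scheduler = get_constant_schedule(optimizer)
print(type(scheduler).__name__)                    # LambdaLR

for _ in range(3):
    optimizer.step()
    scheduler.step()
    print(scheduler.get_last_lr())                 # stays at the initial lr
```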
logs/dataset_split=train, dataset_subset=None, dataset_uri=distily_c4_multilingual_1M, lr_scheduler_kwargs=None, lr_scheduler_type=constant/completed.flag
ADDED
Empty file (no content).
logs/dataset_split=train, dataset_subset=None, dataset_uri=distily_synth_gpt2_t1_seq_1M, lr_scheduler_kwargs=None, lr_scheduler_type=constant/events.out.tfevents.1725780802.32a7ca12a2ec
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ef314cf2cadfd73585fcc87c1ed68b63268243696e6cafbb8a827314a4d26180
+size 162378
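The three added lines are not the event data itself but a Git LFS pointer: `version` identifies the pointer spec, `oid sha256:` is the content hash of the real file, and `size` is its byte count. Cloning without LFS leaves only these pointers in place; downloading through the Hub resolves them to the actual payload. A sketch under that assumption (the repo id below is a placeholder, not this repository's actual id):

```python
# Sketch: resolve an LFS-backed repo file to its real payload via the Hub cache.
# "your-namespace/your-repo" is a placeholder repo id; substitute the real one.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="your-namespace/your-repo",
    filename="model.safetensors",   # any LFS-tracked file in the repo
)
print(local_path)                   # local cache path of the downloaded file
```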
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:…
+oid sha256:50308effaacee57541e7d560a1bcd9d85f92603822704cc46f49769fd010bd5e
 size 163832792
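The updated `model.safetensors` is the student checkpoint itself; the pointer's `size 163832792` (about 164 MB) is roughly consistent with an ~82M-parameter distilgpt2-sized student stored in a 16-bit dtype, though the exact dtype is not recorded in this diff. A quick inspection sketch, assuming the file has already been downloaded locally:

```python
# Sketch: inspect a locally downloaded safetensors checkpoint.
from safetensors.torch import load_file

state_dict = load_file("model.safetensors")   # local path is assumed; adjust as needed
total_params = sum(t.numel() for t in state_dict.values())
print(f"{len(state_dict)} tensors, {total_params:,} parameters")
print(next(iter(state_dict.values())).dtype)  # storage dtype of one tensor
```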
training_args.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:…
+oid sha256:389128090549d767e1ec04c936f05094ed97b08c4bc5a59e39a861b902387d21
 size 5496
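`training_args.bin` is the training-arguments object the Trainer pickles alongside checkpoints with `torch.save`. A hedged inspection sketch: unpickling it requires the training library (here, Distily) and its arguments class to be importable, and the attribute names below are standard `TrainingArguments` fields assumed, not confirmed, by this diff.

```python
# Sketch: inspect the pickled training arguments. weights_only=False is needed
# because the file stores a pickled Python object, not plain tensors.
import torch

args = torch.load("training_args.bin", weights_only=False)
print(type(args).__name__)
print(args.lr_scheduler_type, args.num_train_epochs, args.weight_decay)
```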