willtensora committed
Commit 8061e72 · verified · 1 Parent(s): a64b160

End of training

Files changed (3):
  1. README.md +21 -17
  2. generation_config.json +2 -3
  3. pytorch_model.bin +2 -2
README.md CHANGED
@@ -1,11 +1,11 @@
 ---
 library_name: transformers
-base_model: fxmarty/tiny-llama-fast-tokenizer
+base_model: katuni4ka/tiny-random-qwen1.5-moe
 tags:
 - axolotl
 - generated_from_trainer
 model-index:
-- name: b1c9c4ec-ffa2-429d-9c5b-90b5979c502d
+- name: e61e89f0-854a-4922-8d25-dae435e91af0
   results: []
 ---
@@ -17,20 +17,21 @@ should probably proofread and complete it, then remove this comment. -->
 
 axolotl version: `0.4.1`
 ```yaml
-base_model: fxmarty/tiny-llama-fast-tokenizer
+base_model: katuni4ka/tiny-random-qwen1.5-moe
 batch_size: 32
 bf16: true
 chat_template: tokenizer_default_fallback_alpaca
 datasets:
 - data_files:
-  - fc6136aac03f618a_train_data.json
+  - 95544452e61c7393_train_data.json
   ds_type: json
   format: custom
-  path: /workspace/input_data/fc6136aac03f618a_train_data.json
+  path: /workspace/input_data/95544452e61c7393_train_data.json
   type:
-    field_instruction: text
-    field_output: title
-    format: '{instruction}'
+    field_input: input
+    field_instruction: instruction
+    field_output: output
+    format: '{instruction} {input}'
     no_input_format: '{instruction}'
     system_format: '{system}'
     system_prompt: ''
@@ -39,7 +40,7 @@ flash_attention: true
 gpu_memory_limit: 80GiB
 gradient_checkpointing: true
 group_by_length: true
-hub_model_id: willtensora/b1c9c4ec-ffa2-429d-9c5b-90b5979c502d
+hub_model_id: willtensora/e61e89f0-854a-4922-8d25-dae435e91af0
 hub_strategy: checkpoint
 learning_rate: 0.0002
 logging_steps: 10
@@ -55,15 +56,13 @@ sample_packing: false
 save_steps: 40
 save_total_limit: 1
 sequence_len: 2048
-special_tokens:
-  pad_token: </s>
-tokenizer_type: LlamaTokenizerFast
+tokenizer_type: Qwen2TokenizerFast
 train_on_inputs: false
 trust_remote_code: true
 val_set_size: 0.1
 wandb_entity: ''
 wandb_mode: online
-wandb_name: fxmarty/tiny-llama-fast-tokenizer-/workspace/input_data/fc6136aac03f618a_train_data.json
+wandb_name: katuni4ka/tiny-random-qwen1.5-moe-/workspace/input_data/95544452e61c7393_train_data.json
 wandb_project: Gradients-On-Demand
 wandb_run: your_name
 wandb_runid: default
@@ -74,9 +73,11 @@ xformers_attention: true
 
 </details><br>
 
-# b1c9c4ec-ffa2-429d-9c5b-90b5979c502d
+# e61e89f0-854a-4922-8d25-dae435e91af0
 
-This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the None dataset.
+This model is a fine-tuned version of [katuni4ka/tiny-random-qwen1.5-moe](https://huggingface.co/katuni4ka/tiny-random-qwen1.5-moe) on the None dataset.
+It achieves the following results on the evaluation set:
+- Loss: 11.6281
 
 ## Model description
 
@@ -105,13 +106,16 @@ The following hyperparameters were used during training:
 - total_eval_batch_size: 32
 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
-- training_steps: 18
+- lr_scheduler_warmup_steps: 2
+- training_steps: 40
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| No log | 0.0071 | 1 | 10.3739 |
+| No log | 0.0031 | 1 | 11.9223 |
+| 11.7325 | 0.0629 | 20 | 11.6783 |
+| 11.6304 | 0.1258 | 40 | 11.6281 |
 
 
 ### Framework versions
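
For reference, a minimal sketch of loading the checkpoint this commit pushes, using the `hub_model_id` from the config above (assuming the repo is public on the Hub; the prompt text is a made-up placeholder, not from the training data):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id comes from hub_model_id in the axolotl config above.
repo_id = "willtensora/e61e89f0-854a-4922-8d25-dae435e91af0"

# trust_remote_code=True mirrors the training config; the tokenizer should
# resolve to the Qwen2TokenizerFast named under tokenizer_type.
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

# Prompt follows the config's '{instruction} {input}' format; the text itself
# is a hypothetical example.
prompt = "Summarize the following text. The quick brown fox jumps over the lazy dog."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```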
generation_config.json CHANGED
@@ -1,8 +1,7 @@
 {
   "_from_model_config": true,
-  "bos_token_id": 0,
+  "bos_token_id": 151643,
   "do_sample": true,
-  "eos_token_id": 1,
-  "pad_token_id": 1,
+  "eos_token_id": 151643,
   "transformers_version": "4.46.0"
 }
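
The new ids swap the Llama-style bos/eos (0/1) for Qwen's `<|endoftext|>` id, and `pad_token_id` is dropped entirely. A quick sketch of checking what transformers will actually use at generation time (repo id assumed as above):

```python
from transformers import GenerationConfig

# Reads generation_config.json from the repo named above (assumed public).
gen_cfg = GenerationConfig.from_pretrained(
    "willtensora/e61e89f0-854a-4922-8d25-dae435e91af0"
)

# After this commit, bos/eos should both print 151643 (Qwen's <|endoftext|>),
# and pad_token_id should be None because the key was removed.
print(gen_cfg.bos_token_id, gen_cfg.eos_token_id, gen_cfg.pad_token_id)
```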
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8ecbabedee28483af8dce99f4dd8fe36ef9c6c66877e669db930fe3569128330
-size 2071661
+oid sha256:244c30fce0d5c4892e3b25d25e50c952fa49cb08493bb32684f850179545a7e3
+size 19817334
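
Since `pytorch_model.bin` is stored as a Git LFS pointer, the sha256 above can be verified against a local download; a minimal sketch using `huggingface_hub` (repo id assumed as above):

```python
import hashlib

from huggingface_hub import hf_hub_download

# Downloads the resolved LFS object, not the pointer file itself.
path = hf_hub_download(
    repo_id="willtensora/e61e89f0-854a-4922-8d25-dae435e91af0",
    filename="pytorch_model.bin",
)

# Hash in chunks so the checkpoint never has to fit in memory at once.
digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

# Expected to match the oid in the new pointer:
# 244c30fce0d5c4892e3b25d25e50c952fa49cb08493bb32684f850179545a7e3
print(digest.hexdigest())
```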