Built with Axolotl

Axolotl config (axolotl version 0.6.0):

base_model: Qwen/Qwen2.5-7B
hub_model_id: sumuks/purple-wintermute-0.2-7b
trust_remote_code: true

load_in_8bit: false
load_in_4bit: false
strict: false
bf16: true
hf_use_auth_token: true

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_layer_norm: true
liger_fused_linear_cross_entropy: true

datasets:
  - path: sumuks/openreview_wintermute_0.2_training_data
    type: completion
    field: text
dataset_prepared_path: .axolotl_cache_data/wintermute_0.2
shuffle_merged_datasets: true
# dataset_exact_deduplication: true
val_set_size: 0.005
output_dir: ./../../outputs/purple-wintermute-0.2-7b
push_dataset_to_hub: sumuks/purple_wintermute_0.2_training_data_in_progress

sequence_length: 2048
sample_packing: true
pad_to_sequence_len: true

adapter: lora
lora_r: 256
lora_alpha: 32
lora_dropout: 0.05
peft_use_rslora: true
lora_target_linear: true

gradient_accumulation_steps: 4
micro_batch_size: 16
eval_batch_size: 1
num_epochs: 3
learning_rate: 5e-5
warmup_ratio: 0.05
evals_per_epoch: 5
saves_per_epoch: 5
gradient_checkpointing: true
lr_scheduler: cosine
optimizer: paged_adamw_8bit

profiler_steps: 100
save_safetensors: true
train_on_inputs: true
wandb_project: wintermute 
wandb_name: purple-wintermute-0.2-7b
deepspeed: deepspeed_configs/zero1.json
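
For reference, the LoRA block in this config (lora_r=256, lora_alpha=32, rsLoRA, all linear layers targeted) corresponds roughly to the PEFT LoraConfig sketched below. This is only an illustration of how the adapter is parameterized, not the exact object Axolotl builds internally; note that with use_rslora the effective scaling is lora_alpha / sqrt(r) = 32 / 16 = 2.0 rather than the standard lora_alpha / r = 0.125.

```python
# Approximate PEFT equivalent of the LoRA settings above (illustrative sketch,
# not the exact config Axolotl constructs internally).
from peft import LoraConfig

lora_config = LoraConfig(
    r=256,                        # lora_r
    lora_alpha=32,                # lora_alpha
    lora_dropout=0.05,            # lora_dropout
    use_rslora=True,              # peft_use_rslora: scale adapters by alpha/sqrt(r) instead of alpha/r
    target_modules="all-linear",  # lora_target_linear: adapt every linear projection
    task_type="CAUSAL_LM",
)
```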

purple-wintermute-0.2-7b

This model is a LoRA adapter for Qwen/Qwen2.5-7B, fine-tuned on the sumuks/openreview_wintermute_0.2_training_data dataset. It achieves the following results on the evaluation set:

  • Loss: 1.3961
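
Since this repository contains a PEFT LoRA adapter rather than merged weights, one way to run it is to load the Qwen2.5-7B base model and attach the adapter with peft. A minimal inference sketch (the prompt is illustrative; assumes transformers, peft, and enough GPU memory for the 7B base in bf16):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-7B"
adapter_id = "sumuks/purple-wintermute-0.2-7b"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # training ran in bf16
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter
model.eval()

# The training data is completion-style plain text, so prompt with raw text.
inputs = tokenizer("This paper proposes", return_tensors="pt").to(base.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```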

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 16
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 256
  • total_eval_batch_size: 4
  • optimizer: paged_adamw_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 389
  • num_epochs: 3
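
The derived values follow directly from the config: the effective train batch size is micro_batch_size × gradient_accumulation_steps × num_devices, and the warmup step count comes from applying warmup_ratio to the total number of optimizer steps (roughly 2,595 steps per epoch over 3 epochs, judging from the results table below). A quick sanity check in Python:

```python
# Sanity-check the derived hyperparameters using values from the config and results table.
micro_batch_size = 16
gradient_accumulation_steps = 4
num_devices = 4
print(micro_batch_size * gradient_accumulation_steps * num_devices)  # total_train_batch_size: 256

eval_batch_size = 1
print(eval_batch_size * num_devices)                                 # total_eval_batch_size: 4

# warmup_ratio 0.05 over ~3 epochs x ~2595 optimizer steps per epoch (step count near
# epoch 1.0 in the table below) lands at roughly the reported 389 warmup steps.
steps_per_epoch = 2595
print(round(0.05 * 3 * steps_per_epoch))  # ~389
```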

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0004 | 1    | 2.6905          |
| 1.6977        | 0.2002 | 519  | 1.8454          |
| 1.5955        | 0.4004 | 1038 | 1.7875          |
| 1.4268        | 0.6006 | 1557 | 1.7164          |
| 1.2613        | 0.8008 | 2076 | 1.6061          |
| 1.1526        | 1.0012 | 2595 | 1.5174          |
| 1.0637        | 1.2014 | 3114 | 1.4811          |
| 1.0251        | 1.4015 | 3633 | 1.4466          |
| 0.9791        | 1.6017 | 4152 | 1.4230          |
| 0.9609        | 1.8019 | 4671 | 1.4072          |
| 1.0291        | 2.0023 | 5190 | 1.3994          |
| 0.917         | 2.2025 | 5709 | 1.4018          |
| 0.9306        | 2.4027 | 6228 | 1.3995          |
| 0.8935        | 2.6029 | 6747 | 1.3963          |
| 0.9343        | 2.8031 | 7266 | 1.3961          |

Framework versions

  • PEFT 0.14.0
  • Transformers 4.47.1
  • Pytorch 2.5.1
  • Datasets 3.2.0
  • Tokenizers 0.21.0