SFT-Qwen2.5-Coder-3B_v6

This model is a fine-tuned version of Qwen/Qwen2.5-Coder-3B-Instruct. The training dataset is not specified. It achieves the following results on the evaluation set:

  • Loss: 0.9536

Model description

More information needed

Intended uses & limitations

More information needed
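
Although detailed usage notes are still missing, this repository holds a PEFT adapter, so a reasonable starting point is to load it on top of the base model. The following is a minimal sketch assuming the standard PEFT/Transformers loading path; the dtype, prompt, and generation settings are illustrative assumptions, not documented choices.

```python
# Minimal inference sketch. Assumptions: bfloat16, chat template,
# max_new_tokens=256 — none of these are documented in this card.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-Coder-3B-Instruct"
adapter_id = "j05hr3d/SFT-Qwen2.5-Coder-3B_v6"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the adapter

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```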

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reconstruction sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 2
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 8
  • optimizer: paged AdamW 8-bit (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.03
  • num_epochs: 3
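
These hyperparameters map directly onto Transformers' TrainingArguments. The sketch below reconstructs that configuration; the dataset, preprocessing, LoRA rank/targets, and sequence length are not recorded in this card, so those parts are placeholders.

```python
# Reconstruction sketch of the training setup. Everything marked as an
# assumption (dataset name, "text" column, LoRA r/alpha, max_length) is
# NOT documented in this card.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_id = "Qwen/Qwen2.5-Coder-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# LoRA settings are assumptions; the card does not record them.
model = get_peft_model(model, LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32))

args = TrainingArguments(
    output_dir="SFT-Qwen2.5-Coder-3B_v6",
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 2 x 4 = effective train batch size 8
    optim="paged_adamw_8bit",       # betas/epsilon left at their defaults
    lr_scheduler_type="linear",
    warmup_ratio=0.03,
    num_train_epochs=3,
    seed=42,
)

# "your/dataset" is a placeholder — the training data is unspecified.
dataset = load_dataset("your/dataset")

def tokenize(batch):
    # Assumes a "text" column and a 1024-token cap; both are guesses.
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(
    tokenize, batched=True, remove_columns=dataset["train"].column_names
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```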

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|---------------|--------|------|-----------------|
| 1.064         | 0.2974 | 20   | 1.0908          |
| 0.8857        | 0.5948 | 40   | 1.0361          |
| 0.8355        | 0.8922 | 60   | 1.0029          |
| 0.7792        | 1.1784 | 80   | 0.9860          |
| 0.8453        | 1.4758 | 100  | 0.9692          |
| 0.738         | 1.7732 | 120  | 0.9574          |
| 0.7202        | 2.0595 | 140  | 0.9543          |
| 0.7281        | 2.3569 | 160  | 0.9545          |
| 0.5788        | 2.6543 | 180  | 0.9579          |
| 0.6853        | 2.9517 | 200  | 0.9536          |

Framework versions

  • PEFT 0.18.0
  • Transformers 4.57.1
  • Pytorch 2.8.0+cu126
  • Datasets 4.4.1
  • Tokenizers 0.22.1