Model Details

This section provides general details about the model and the approach used to fine-tune it.

Model Description

This is the model card of Qwen2.5-7B-Instruct-Finetuned-Benign, a fine-tuned version of Qwen/Qwen2.5-7B-Instruct trained on the Alpaca dataset. Training used 4-bit (low-bit) precision and gradient accumulation to keep memory usage low.

  • Developed by: Punya Syon Pandey
  • Finetuned from model: Qwen/Qwen2.5-7B-Instruct
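
The checkpoint can be loaded like any other Qwen2.5-Instruct model. Below is a minimal usage sketch, assuming the standard transformers chat-template API; the prompt text is illustrative, and running this requires downloading the full model weights.

```python
# Hypothetical usage sketch: load the fine-tuned checkpoint with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "psyonp/Qwen2.5-7B-Instruct-Finetuned-Benign"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build an instruction-style prompt via the tokenizer's chat template.
messages = [{"role": "user", "content": "Give three tips for staying healthy."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate and decode only the newly produced tokens.
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```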

Training Details

This section provides technical details of the fine-tuning setup.

Training Data

The model was fine-tuned on the Alpaca dataset: https://huggingface.co/datasets/tatsu-lab/alpaca
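
The setup described in this card (Alpaca data, 4-bit loading, PEFT, the hyperparameters listed below) might be reproduced roughly as follows. This is a sketch under assumptions: the card does not state which trainer or which PEFT adapter configuration was used, so the LoRA settings and the use of trl's SFTTrainer here are guesses, not the author's exact method.

```python
# Hypothetical reproduction sketch; LoRA r/alpha and the choice of
# trl's SFTTrainer are assumptions not stated in the model card.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import SFTConfig, SFTTrainer

base_model_id = "Qwen/Qwen2.5-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Load the base model in 4-bit precision (load in 4-bit: True).
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)

# Attach lightweight LoRA adapters (use PEFT: True); r/alpha are assumed values.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

dataset = load_dataset("tatsu-lab/alpaca", split="train")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    args=SFTConfig(
        learning_rate=2.0e-5,
        num_train_epochs=1,
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,  # effective batch size 2 x 8 = 16
        output_dir="qwen2.5-7b-instruct-finetuned-benign",
    ),
)
trainer.train()
```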

Training Hyperparameters

  • Load in 4-bit: True
  • Use PEFT: True
  • Learning rate: 2.0e-5
  • Number of epochs: 1
  • Train batch size: 2
  • Gradient accumulation steps: 8
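
With a per-device batch size of 2 and 8 gradient-accumulation steps, gradients are accumulated over 16 examples before each optimizer update:

```python
# Effective batch size seen by the optimizer:
# per-device train batch size x gradient accumulation steps.
train_batch_size = 2
gradient_accumulation_steps = 8
effective_batch_size = train_batch_size * gradient_accumulation_steps
print(effective_batch_size)  # → 16
```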