Model Details
This section provides general details about the model and the approach used to fine-tune it.
Model Description
This is the model card of Llama-3.1-8B-Instruct-Finetuned-Benign, a fine-tuned version of meta-llama/Llama-3.1-8B-Instruct trained on the Alpaca dataset. Training used 4-bit quantization and gradient accumulation to keep memory usage low.
- Developed by: Punya Syon Pandey
- Finetuned from model: meta-llama/Llama-3.1-8B-Instruct
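
The checkpoint can be loaded locally with transformers. A minimal inference sketch (the prompt and generation settings are illustrative, and it assumes the fine-tuned weights are loadable as a standalone causal LM; if the repo ships PEFT adapters instead, load the base model and apply them with peft):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "psyonp/Llama-3.1-8B-Instruct-Finetuned-Benign"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Illustrative prompt (the first instruction in the Alpaca dataset).
messages = [{"role": "user", "content": "Give three tips for staying healthy."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```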
Training Details
This section includes technical details about the model finetuning specifications.
Training Data
The model was finetuned on the Stanford Alpaca instruction-following dataset: https://huggingface.co/datasets/tatsu-lab/alpaca
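
For reference, the dataset can be loaded with the datasets library; the column names below are those published by tatsu-lab/alpaca:

```python
from datasets import load_dataset

# Stanford Alpaca: ~52k instruction/input/output examples.
dataset = load_dataset("tatsu-lab/alpaca", split="train")
print(dataset.column_names)       # ['instruction', 'input', 'output', 'text']
print(dataset[0]["instruction"])  # "Give three tips for staying healthy."
```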
Training Hyperparameters
- Load-in-4-bit: True
- Use PEFT: True
- Learning Rate: 2.0e-5
- Number of Epochs: 1
- Train Batch Size: 2
- Gradient Accumulation Steps: 8
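
The exact training script is not published with this card; the sketch below shows one plausible way to wire the listed hyperparameters into a QLoRA-style run with transformers, peft, and trl. The LoRA rank/alpha and the output directory are assumptions, not values from this card:

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from trl import SFTConfig, SFTTrainer

base_model = "meta-llama/Llama-3.1-8B-Instruct"

# Load-in-4-bit: True
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)

# Use PEFT: True (rank/alpha are illustrative assumptions, not from the card)
peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")

args = SFTConfig(
    output_dir="llama-3.1-8b-instruct-finetuned-benign",  # assumed name
    learning_rate=2.0e-5,            # Learning Rate
    num_train_epochs=1,              # Number of Epochs
    per_device_train_batch_size=2,   # Train Batch Size
    gradient_accumulation_steps=8,   # Gradient Accumulation Steps
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=load_dataset("tatsu-lab/alpaca", split="train"),
    peft_config=peft_config,
)
trainer.train()
```

With a per-device batch size of 2 and 8 gradient accumulation steps, the effective batch size per optimizer step is 16.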