sabersalehk/Llama3-SimPO
Safetensors · llama

This is an aligned model based on princeton-nlp/Llama-3-Base-8B-SFT. It was fine-tuned on the UltraFeedback dataset using the Simple Preference Optimization (SimPO) loss, with training run for a single epoch.
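
For reference, the sketch below illustrates the SimPO objective used for this kind of preference fine-tuning: a reference-free loss computed from length-normalized sequence log-probabilities of the chosen and rejected responses, with a reward scale beta and a target margin gamma. The function name and hyperparameter values are illustrative only, not the exact training configuration of this model.

```python
# Minimal sketch of the SimPO loss on a batch of preference pairs.
# Inputs are per-example average (length-normalized) token log-probabilities
# of the chosen and rejected responses under the policy model.
import torch
import torch.nn.functional as F

def simpo_loss(avg_logp_chosen: torch.Tensor,
               avg_logp_rejected: torch.Tensor,
               beta: float = 2.0,     # illustrative value
               gamma: float = 1.0) -> torch.Tensor:  # illustrative value
    # Implicit reward is the length-normalized log-probability scaled by beta;
    # the loss is a logistic loss on the reward difference minus the margin gamma.
    margin = beta * (avg_logp_chosen - avg_logp_rejected) - gamma
    return -F.logsigmoid(margin).mean()
```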

Downloads last month: 7
Safetensors · Model size: 8.03B params · Tensor type: BF16
Inference Providers
This model is not currently available via any of the supported Inference Providers, and it cannot be deployed to the HF Inference API because the model has no library tag.
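
Although hosted inference is unavailable, the weights can still be used locally. The sketch below assumes they load with the standard transformers causal LM API (the card has no library tag, so this is an assumption); the model id and BF16 dtype come from this card, while the prompt and generation settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sabersalehk/Llama3-SimPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Explain preference optimization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```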

Model tree for sabersalehk/Llama3-SimPO
Base model: princeton-nlp/Llama-3-Base-8B-SFT → Finetuned (27) → this model

Dataset used to train sabersalehk/Llama3-SimPO: UltraFeedback