This model is a fine-tuned version of meta-llama/Llama-3.2-1B-Instruct on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.6094
- Perplexity: 5.0000
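As a quick usage sketch (the repo id below is a placeholder, since this card does not state the model's Hub name), the checkpoint can be loaded with the Transformers `pipeline` API:

```python
from transformers import pipeline

# Placeholder repo id: substitute the actual Hub name of this fine-tuned model.
generator = pipeline(
    "text-generation",
    model="your-username/llama-3.2-1b-instruct-tuned",
)

out = generator("Explain what fine-tuning a language model means.", max_new_tokens=64)
print(out[0]["generated_text"])
```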
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
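The concrete values are not listed in this card. Purely as an illustrative sketch, a `TrainingArguments` setup consistent with the evaluation cadence in the results table below (evaluation every 333 steps over roughly three epochs) might look like this; every value is an assumption, not the recorded configuration:

```python
from transformers import TrainingArguments

# Hypothetical configuration: only the evaluation/logging cadence is inferred
# from the results table below; all other values are placeholders.
training_args = TrainingArguments(
    output_dir="llama-3.2-1b-instruct-tuned",
    num_train_epochs=3,               # assumed; the table logs up to epoch ~2.4
    eval_strategy="steps",
    eval_steps=333,                   # matches the 333-step evaluation interval
    logging_steps=333,
    per_device_train_batch_size=4,    # placeholder
    learning_rate=2e-5,               # placeholder
)
```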
### Training results

| Training Loss | Epoch  | Step | Validation Loss | Perplexity |
|:-------------:|:------:|:----:|:---------------:|:----------:|
| No log        | 0      | 0    | 4.5699          | 96.5354    |
| No log        | 0.6011 | 333  | 1.7253          | 5.6141     |
| 1.8003        | 1.2022 | 666  | 1.6644          | 5.2825     |
| 1.8003        | 1.8032 | 999  | 1.6342          | 5.1252     |
| 1.6299        | 2.4043 | 1332 | 1.6094          | 5.0000     |
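The perplexity column is simply the exponential of the validation loss (exp(1.6094) ≈ 5.00); the following snippet verifies this against every logged row:

```python
import math

# Perplexity above is exp(validation loss); confirm each logged row.
rows = [(4.5699, 96.5354), (1.7253, 5.6141), (1.6644, 5.2825),
        (1.6342, 5.1252), (1.6094, 5.0000)]
for loss, ppl in rows:
    assert abs(math.exp(loss) - ppl) < 0.01 * ppl  # within 1% of the reported value
```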