# Uploaded model
- Developed by: ppopiolek
- License: apache-2.0
- Finetuned from model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
This Llama-architecture model was trained 2x faster with Unsloth and Hugging Face's TRL library.
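The merged checkpoint can be loaded like any other Hugging Face causal LM. Below is a minimal inference sketch, assuming the model keeps the chat template of the TinyLlama-1.1B-Chat-v1.0 base; the prompt is just an illustration.

```python
# Minimal inference sketch (assumes the merged model inherits the
# TinyLlama/TinyLlama-1.1B-Chat-v1.0 chat template).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ppopiolek/tinyllama_merged_test"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain what a merged LoRA checkpoint is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
# Strip the prompt tokens and print only the generated continuation.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```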
# Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 37.68 |
| AI2 Reasoning Challenge (25-shot) | 37.20 |
| HellaSwag (10-shot)               | 61.32 |
| MMLU (5-shot)                     | 25.70 |
| TruthfulQA (0-shot)               | 38.72 |
| Winogrande (5-shot)               | 61.25 |
| GSM8k (5-shot)                    |  1.90 |
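These scores follow the Open LLM Leaderboard setup: normalized accuracy for ARC-Challenge and HellaSwag, accuracy for MMLU, Winogrande, and GSM8k, and mc2 for TruthfulQA. Below is a rough sketch of scoring one task locally with EleutherAI's lm-evaluation-harness; the task name, harness version, and batch size are assumptions and may not match the leaderboard's exact configuration, so expect small differences from the table above.

```python
# Sketch: score the merged model on 25-shot ARC-Challenge with
# EleutherAI's lm-evaluation-harness (pip install lm-eval).
# The leaderboard pins its own harness version and prompts, so local
# numbers may differ slightly from those reported here.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=ppopiolek/tinyllama_merged_test",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size=8,
)
print(results["results"]["arc_challenge"])
```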