Paper: [Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time](https://arxiv.org/abs/2203.05482) (arXiv:2203.05482)
This is a merge of pre-trained language models created using mergekit.
This model was merged using the Linear merge method: the merged weights are a per-tensor linear combination of the input models' parameters (see the sketch after the model list).
The following models were included in the merge:
* Sao10K/L3-8B-Stheno-v3.2 with the LoRA kik41/lora-sarcasm-more-llama-3-8b-v2 applied
* FuseAI/FuseChat-Llama-3.1-8B-SFT with the LoRA kik41/lora-type-expository-llama-3-8b-v2 applied
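For intuition, a linear merge is an element-wise weighted combination of matching parameter tensors. The sketch below is illustrative rather than mergekit's actual implementation; the `merge_state_dicts` helper is a hypothetical name, and the `normalize` flag mirrors the configuration further down:

```python
import torch

def merge_state_dicts(state_dicts, weights, normalize=False):
    """Linearly combine matching parameter tensors from several models.

    Illustrative sketch (not mergekit's code): with normalize=True the
    weights are rescaled to sum to 1 (a plain average for equal weights);
    with normalize=False, as in the config below, they are used as-is.
    """
    if normalize:
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = {}
    for name, tensor in state_dicts[0].items():
        acc = torch.zeros_like(tensor, dtype=torch.float32)
        for w, sd in zip(weights, state_dicts):
            acc += w * sd[name].to(torch.float32)  # accumulate in float32
        merged[name] = acc.to(torch.bfloat16)      # store as bfloat16, matching `dtype`
    return merged
```

Note that with both weights set to 1.0 and `normalize: false`, as configured here, this reduces to an element-wise sum of the two models' tensors rather than their mean.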
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Sao10K/L3-8B-Stheno-v3.2+kik41/lora-sarcasm-more-llama-3-8b-v2
    parameters:
      weight: 1.0
  - model: FuseAI/FuseChat-Llama-3.1-8B-SFT+kik41/lora-type-expository-llama-3-8b-v2
    parameters:
      weight: 1.0
merge_method: linear
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```
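Assuming the configuration above is saved as `config.yaml`, the merge can be reproduced with mergekit's standard command-line entry point (the file and output directory names are placeholders):

```bash
# Run the linear merge described in config.yaml and write the
# merged model to ./merged-model.
mergekit-yaml config.yaml ./merged-model
```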
The table below summarizes the model's Open LLM Leaderboard evaluation results:
| Metric | Value (%) |
|---|---|
| Average | 27.83 |
| IFEval (0-shot) | 71.80 |
| BBH (3-shot) | 34.79 |
| MATH Lvl 5 (4-shot) | 17.22 |
| GPQA (0-shot) | 4.03 |
| MuSR (0-shot) | 8.82 |
| MMLU-PRO (5-shot) | 30.33 |
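The merged checkpoint loads like any other Llama-3-family causal LM. A minimal usage sketch with transformers, where the local path is a placeholder for wherever the merged weights were written, not a published model id:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./merged-model"  # placeholder path to the merged weights
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16)

prompt = "Explain weight averaging in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```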