A finetune of the DPO Bagel model (https://huggingface.co/jondurbin/nontoxic-bagel-34b-v0.2) on the MetaMathFewshot dataset (https://huggingface.co/datasets/abacusai/MetaMathFewshot).

Evaluation Results

Metrics reported: Average, ARC, HellaSwag, MMLU, TruthfulQA, Winogrande, GSM8K

For comparison, the GSM8K score for the original nontoxic-bagel-34b-v0.2 model was 58.45, and its average score was 74.69.
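Since the finetune targets GSM8K-style math reasoning, here is a minimal sketch of how a few-shot evaluation prompt for such a benchmark is typically assembled. The exact template used by MetaMathFewshot is an assumption; `build_prompt` and the worked example below are purely illustrative.

```python
# Sketch: assembling a few-shot math prompt in the style commonly used for
# GSM8K-like evaluation. The template (Question:/Answer: pairs separated by
# blank lines) is an assumption, not the dataset's confirmed format.

FEWSHOT_EXAMPLES = [
    ("Natalia sold clips to 48 of her friends in April, and then she sold "
     "half as many clips in May. How many clips did Natalia sell altogether?",
     "In May she sold 48 / 2 = 24 clips, so altogether she sold "
     "48 + 24 = 72 clips. The answer is 72."),
]

def build_prompt(question: str, examples=FEWSHOT_EXAMPLES) -> str:
    """Concatenate worked examples before the target question, leaving the
    final Answer: open for the model to complete."""
    parts = [f"Question: {q}\nAnswer: {a}" for q, a in examples]
    parts.append(f"Question: {question}\nAnswer:")
    return "\n\n".join(parts)

prompt = build_prompt("A book costs 12 dollars. How much do 5 books cost?")
```

The resulting string can be fed to the model's tokenizer and `generate` call; scoring then checks the generated continuation for the final numeric answer.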

Model: abacusai/MM-Orc-Vic-bagel-34b-c1000 (34.4B params, FP16 safetensors)