
# Mistral-Nemo-Gutenberg-Doppel-12B

mistralai/Mistral-Nemo-Instruct-2407 fine-tuned on jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.
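
A minimal usage sketch with the Hugging Face `transformers` library; the prompt and generation settings below are illustrative examples, not recommendations from the model author.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
    device_map="auto",
)

# Example prompt in the style the Gutenberg DPO datasets target (long-form prose).
messages = [
    {"role": "user", "content": "Write the opening paragraph of a gothic novel."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```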

## Method

ORPO-tuned on a single RTX 3090 for 3 epochs, following the approach described in *Fine-tune Llama 3 with ORPO*. A hedged training sketch follows.
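
Below is a minimal sketch of what such an ORPO run could look like with TRL's `ORPOTrainer` on the two Gutenberg preference datasets. All hyperparameters are assumptions for illustration, not the values actually used for this model; fully fine-tuning a 12B model on a single 24 GB RTX 3090 would in practice also require a parameter-efficient method such as QLoRA, omitted here for brevity.

```python
from datasets import concatenate_datasets, load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "mistralai/Mistral-Nemo-Instruct-2407"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Both datasets use the standard prompt/chosen/rejected preference format;
# select those columns so the two splits can be concatenated safely.
cols = ["prompt", "chosen", "rejected"]
dataset = concatenate_datasets([
    load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train").select_columns(cols),
    load_dataset("nbeerbower/gutenberg2-dpo", split="train").select_columns(cols),
])

config = ORPOConfig(
    output_dir="Mistral-Nemo-Gutenberg-Doppel-12B",
    num_train_epochs=3,             # matches the 3 epochs stated above
    per_device_train_batch_size=1,  # assumed, for a 24 GB card
    gradient_accumulation_steps=8,  # assumed
    learning_rate=5e-6,             # assumed
    beta=0.1,                       # ORPO's lambda weighting; assumed
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,  # `tokenizer=` on older TRL versions
)
trainer.train()
```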

