Llama 3.1 Base, continually pretrained for 0.5 epochs (2,100 steps at a total batch size of 64) on the same 1.5 GB private dataset that underpins Iambe.
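
A minimal sketch of what that continued-pretraining run could look like with the Hugging Face `Trainer`, assuming a plain-text corpus on disk; the dataset path, learning rate, context length, and batch split below are illustrative guesses, not details from this card.

```python
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "meta-llama/Llama-3.1-8B"  # Llama 3.1 Base
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Hypothetical stand-in for the 1.5 GB private dataset that underpins Iambe.
dataset = load_dataset("text", data_files={"train": "private_corpus.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=8192)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="llama31-continued-pretrain",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=16,  # 4 x 16 = total batch size 64 (assumed split)
    max_steps=2100,                  # ~0.5 epochs of this corpus
    bf16=True,
    logging_steps=50,
    save_steps=500,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```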

Mostly a proof of concept, but outputs are better than expected. It'd likely be quite good with some instruction tuning.


Why do this? I have a niche use case where I cannot increase compute beyond 8B, and L3/L3.1 are the only models in this size class that meet my needs for logic. However, both L3 and L3.1 have the damn repetition/token-overconfidence problem, and this run is meant to disrupt that certainty without disrupting the model's ability to function.
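
One quick way to see whether the certainty was actually disrupted is to compare next-token confidence (top-token probability and entropy) between the base model and this checkpoint. This probe is not from the card, just a hedged sketch; the prompt is arbitrary.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def next_token_confidence(model_id: str, prompt: str):
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # logits for the next token
    probs = torch.softmax(logits.float(), dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum()
    return probs.max().item(), entropy.item()

# Compare the base model against this checkpoint on the same prompt.
for mid in ["meta-llama/Llama-3.1-8B",
            "athirdpath/Llama-3.1-Base_NSFW-pretrained_e-0.5"]:
    p_top, h = next_token_confidence(mid, "Once upon a time,")
    print(f"{mid}: top-token p={p_top:.3f}, entropy={h:.2f} nats")
```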

By the way, I think it's the lm_head that is causing the looping, but it might be the embeddings being too separated. I'm not going to pay for two more runs to test them separately, though :p
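
A cheaper, training-free way to poke at that hypothesis (not something done in this card) would be to measure how far `lm_head` and `embed_tokens` drifted from the base weights, and optionally transplant the base `lm_head` back into the tuned model to see whether the looping returns. Sketch below, assuming both checkpoints fit in memory.

```python
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B", torch_dtype=torch.bfloat16
)
tuned = AutoModelForCausalLM.from_pretrained(
    "athirdpath/Llama-3.1-Base_NSFW-pretrained_e-0.5", torch_dtype=torch.bfloat16
)

def mean_row_cosine(a: torch.Tensor, b: torch.Tensor) -> float:
    # Average per-row cosine similarity between two weight matrices.
    return torch.nn.functional.cosine_similarity(
        a.float(), b.float(), dim=-1
    ).mean().item()

print("embed_tokens similarity to base:",
      mean_row_cosine(base.model.embed_tokens.weight, tuned.model.embed_tokens.weight))
print("lm_head similarity to base:     ",
      mean_row_cosine(base.lm_head.weight, tuned.lm_head.weight))

# Ablation: keep the tuned body but restore the base lm_head, then generate
# and check whether the repetition comes back.
tuned.lm_head.weight.data.copy_(base.lm_head.weight.data)
```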
