I just wanted to test the thesven/Llama3-8B-SFT-code_bagel-bnb-4bit model that someone trained on a small subset of my code_bagel dataset, so I used gguf-my-repo to quantize it.

THIS IS NOT MY MODEL

Update: I tested it and wasn't very impressed. Look forward to my own training run coming in a little over a month.

https://x.com/dudeman6790/status/1793638914353508549

Model details:
- Format: GGUF (Q8_0, 8-bit)
- Model size: 8.03B params
- Architecture: llama
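
If you want to try the quant locally, here is a minimal sketch using llama-cpp-python, assuming the repo contains a single Q8_0 .gguf file; the glob pattern below is a guess, not the exact filename.

```python
# Minimal sketch: load the Q8_0 GGUF with llama-cpp-python and run a prompt.
# Assumes `pip install llama-cpp-python huggingface_hub`; the filename glob is
# an assumption -- replace it with the actual .gguf file listed in this repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="rombodawg/Llama3-8B-SFT-code_bagel-bnb-4bit-Q8_0-GGUF",
    filename="*q8_0.gguf",   # glob pattern; resolves to the Q8_0 file in the repo
    n_ctx=8192,              # Llama 3 8B context window
)

output = llm(
    "Write a Python function that reverses a string.",
    max_tokens=256,
    echo=False,
)
print(output["choices"][0]["text"])
```

At 8.03B params, the Q8_0 file should weigh in at roughly 8.5 GB, so plan memory accordingly.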

