# Fine-Tuned Llama-3-8B Model

This model is a fine-tuned version of NousResearch/Meta-Llama-3-8B using LoRA and 8-bit quantization.

## Usage

To load the model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ubiodee/Test_Plutus"

# Download the weights and tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
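
The snippet above loads the weights in full precision. As a minimal sketch of an alternative, assuming the `bitsandbytes` package is installed and a CUDA GPU is available, the same checkpoint can be loaded with 8-bit quantization to reduce memory, then used for text generation (the prompt below is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "ubiodee/Test_Plutus"

# Assumption: bitsandbytes is installed and a CUDA device is available.
# Loading in 8-bit substantially reduces GPU memory versus full precision.
quant_config = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=quant_config,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Run a simple generation pass; replace the prompt with your own input.
prompt = "Explain what a Plutus smart contract is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Whether 8-bit inference is worthwhile depends on your hardware; since the checkpoint is stored as F32 tensors, the plain `from_pretrained` call shown earlier remains the default path.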
## Model Details

- Format: Safetensors
- Model size: 8.03B params
- Tensor type: F32