Llama-3.2-1B-Instruct LoRA Instruction Classifier

Model Description

  • Base Model: Llama-3.2-1B-Instruct
  • Adapter Method: LoRA (Low-Rank Adaptation)
  • Task: Instruction classification into 10 labels

Usage

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("Turalll/llama-1b-lora-instruct-classifier")

# Load the base model (requires access to the Llama-3.2-1B-Instruct weights)
base_model = AutoModelForSequenceClassification.from_pretrained("path_to_llama-3.2-1B-Instruct_base_model", num_labels=10)

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "Turalll/llama-1b-lora-instruct-classifier")
model.eval()

# Pick a device and move the model onto it
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Example inference
text = "Your input text here"


# Map class IDs to label names
id2label = {
    0: "Health and Wellbeing",
    1: "Cinema",
    2: "Environmental Science",
    3: "Software Development",
    4: "Fashion",
    5: "Career Development",
    6: "Culinary Guide",
    7: "Cybersecurity",
    8: "Economics",
    9: "Music"
}

# Tokenize the input
inputs = tokenizer(
    text,
    padding="max_length",
    truncation=True,
    max_length=128,
    return_tensors="pt"
)

# Move inputs to the same device as the model
inputs = {k: v.to(device) for k, v in inputs.items()}

# Get predictions
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
    predicted_class_id = logits.argmax(dim=-1).item()

# Map the predicted class ID to its label
predicted_label = id2label[predicted_class_id]

print(f"Predicted label: {predicted_label}")
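The final argmax-and-map step can be sketched without loading the model at all. The logits below are made-up placeholder values, used only to show how raw scores become a probability and a label:

```python
import math

# Hypothetical logits for one input, one score per class (made-up values)
logits = [0.1, 2.3, -1.0, 4.2, 0.0, 0.5, -0.3, 1.1, 0.2, -2.0]

id2label = {
    0: "Health and Wellbeing", 1: "Cinema", 2: "Environmental Science",
    3: "Software Development", 4: "Fashion", 5: "Career Development",
    6: "Culinary Guide", 7: "Cybersecurity", 8: "Economics", 9: "Music",
}

# Softmax turns raw logits into a probability distribution over the 10 labels
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Argmax picks the highest-scoring class; the map turns it into a label name
predicted_class_id = max(range(len(logits)), key=probs.__getitem__)
predicted_label = id2label[predicted_class_id]
print(f"Predicted label: {predicted_label} ({probs[predicted_class_id]:.2%})")
```

Reporting the softmax probability alongside the label is a cheap way to expose the model's confidence, which can be useful for thresholding low-certainty predictions.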
