# English Mental Health Chatbot - LoRA Adapter (Mistral-Small-Instruct-2409)

This repository contains a LoRA adapter for `unsloth/Mistral-Small-Instruct-2409`, fine-tuned with the Unsloth training library on an English-language mental health dataset sourced from Kaggle.

The adapter is not a standalone model: it must be loaded on top of the base model `unsloth/Mistral-Small-Instruct-2409`.

## 🔧 How to Use

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Mistral-Small-Instruct-2409",
    device_map="auto",
    torch_dtype="auto",
)

# Attach the LoRA adapter
model = PeftModel.from_pretrained(base, "thantsan/mental_health_finetuned")

# Load the tokenizer shipped with the adapter
tokenizer = AutoTokenizer.from_pretrained("thantsan/mental_health_finetuned")

# Run inference
prompt = "How can I manage anxiety before an exam?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
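A raw-string prompt works, but Mistral instruct models are trained on prompts wrapped in `[INST] ... [/INST]` tags, so formatting the input that way usually gives better responses. The helper below is a minimal sketch of that format; it assumes the adapter was trained with the standard Mistral chat template (when in doubt, prefer `tokenizer.apply_chat_template`, which reads the template bundled with the tokenizer):

```python
# Minimal sketch of the Mistral instruct prompt format. The function name and
# the optional system-message handling are illustrative, not part of this repo.
def format_mistral_prompt(user_message, system_message=None):
    """Wrap a user message in Mistral's [INST] ... [/INST] instruction tags.

    Mistral has no dedicated system role, so a system message is commonly
    prepended to the first user turn.
    """
    if system_message:
        user_message = f"{system_message}\n\n{user_message}"
    return f"[INST] {user_message} [/INST]"

prompt = format_mistral_prompt("How can I manage anxiety before an exam?")
# -> "[INST] How can I manage anxiety before an exam? [/INST]"
```

Pass the formatted string to the tokenizer exactly as in the snippet above, in place of the plain prompt.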
