---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: peft
tags:
  - LoRA
  - PEFT
  - TinyLlama
  - RantAI
  - mental-health
license: apache-2.0
---

# 🧠 Rant AI - Emotionally Intelligent Chat Model

Rant AI is a lightweight, fine-tuned conversational model designed to detect emotional distress and provide a safe outlet for people to express themselves. It builds on TinyLlama-1.1B-Chat-v1.0 using LoRA adapters, which keeps it efficient to run in low-resource environments.


## 💬 What Does It Do?

Rant AI is trained to:

- Understand emotionally heavy or depressive content
- Respond empathetically
- Encourage users to open up more
- Suggest supportive action (e.g., reaching out, self-care)

It is not a therapist or a diagnostic tool, but rather a friendly AI companion to help users feel heard.


## 🔧 Model Details

- Base Model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- Fine-tuning Method: LoRA (Low-Rank Adaptation)
- Framework: PEFT
- Adapter Type: Causal LM (Language Model)
- Languages: English
- License: Apache 2.0
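
Because the adapter only stores the low-rank LoRA update matrices, it is much smaller than the 1.1B-parameter base model, and its configuration can be inspected without downloading the base weights. A minimal sketch using PEFT's `PeftConfig`; the printed values are what you would expect from the details above:

```python
from peft import PeftConfig

# Read only the adapter configuration from the Hub
config = PeftConfig.from_pretrained("abhina1857/rant_ai")

print(config.base_model_name_or_path)  # expected: TinyLlama/TinyLlama-1.1B-Chat-v1.0
print(config.peft_type)                # expected: LORA
print(config.task_type)                # expected: CAUSAL_LM
```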

## 🛠️ Usage

Install the dependencies if they are not already available:

```bash
pip install transformers peft accelerate
```

Load the base model, attach the LoRA adapter, and generate a response:

```python
import re

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_repo = "abhina1857/rant_ai"

# Load the base model and attach the Rant AI LoRA adapter
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, adapter_repo)

# Prompt format used during fine-tuning
prompt = """### question:
i have no reason to live

### Rant AI:"""


def is_distress_prompt(prompt):
    """Return True if the prompt contains phrases indicating acute distress."""
    pattern = r"\b(i want to die|i wanna die|kill myself|suicidal|no reason to live|life is over|i'm suicidal|i want to disappear)\b"
    return bool(re.search(pattern, prompt, re.IGNORECASE))


if is_distress_prompt(prompt):
    # Skip generation and show a fixed safety message for high-risk prompts
    print(
        "I'm really sorry you're feeling this way. You're not alone. "
        "Please consider talking to someone who can help — you deserve support. "
        "You can call a helpline or reach out to someone you trust."
    )
else:
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=100,
        temperature=0.8,
        top_p=0.95,
        repetition_penalty=1.15,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
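
If you want to serve Rant AI without keeping `peft` as an inference-time dependency, the LoRA weights can be folded into the base model. A minimal sketch, assuming the `model` and `tokenizer` objects from the snippet above; the output directory name `rant_ai_merged` is only an example:

```python
# Merge the LoRA adapter into the base weights and save a standalone checkpoint
merged_model = model.merge_and_unload()
merged_model.save_pretrained("rant_ai_merged")
tokenizer.save_pretrained("rant_ai_merged")
```

The merged checkpoint can then be loaded directly with `AutoModelForCausalLM.from_pretrained("rant_ai_merged")`, at the cost of storing a full copy of the base model.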