---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: peft
tags:
- LoRA
- PEFT
- TinyLlama
- RantAI
- mental-health
license: apache-2.0
---
# 🧠 Rant AI - Emotionally Intelligent Chat Model
Rant AI is a lightweight, fine-tuned conversational model designed to detect emotional distress and provide a safe outlet for people to express themselves. It builds on the [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) model using [LoRA](https://huggingface.co/docs/peft/index) adapters, making it efficient to run in low-resource environments.
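Because only the small LoRA adapter sits on top of the frozen 1.1B base, the stack loads comfortably in half precision on modest hardware. A minimal loading sketch, assuming `torch` is available and `accelerate` is installed for `device_map="auto"`:
```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the 1.1B base in float16 to roughly halve memory, then attach the adapter.
base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    torch_dtype=torch.float16,
    device_map="auto",  # places layers on GPU if one is available
)
model = PeftModel.from_pretrained(base, "abhina1857/rant_ai")
```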
---
## 💬 What Does It Do?
Rant AI is trained to:
- Understand emotionally heavy or depressive content
- Respond empathetically
- Encourage users to open up more
- Suggest supportive actions (e.g., reaching out to others, self-care)

It is *not* a therapist or a diagnostic tool, but rather a friendly AI companion that helps users feel heard.
---
## 🔧 Model Details
- **Base Model:** `TinyLlama/TinyLlama-1.1B-Chat-v1.0`
- **Fine-tuning Method:** LoRA (Low-Rank Adaptation)
- **Framework:** PEFT
- **Task Type:** Causal LM (text generation)
- **Languages:** English
- **License:** Apache 2.0
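If you want a single standalone checkpoint with no PEFT dependency at inference time, the LoRA weights can be folded into the base model with PEFT's `merge_and_unload()`. A sketch; the output directory name is just an example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
merged = PeftModel.from_pretrained(base, "abhina1857/rant_ai").merge_and_unload()

# Save the merged model plus tokenizer so it loads like any plain HF checkpoint.
merged.save_pretrained("rant_ai-merged")
AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0").save_pretrained("rant_ai-merged")
```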
---
## 🛠️ Usage
```python
# !pip install transformers peft accelerate  # if not already installed
import re

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_repo = "abhina1857/rant_ai"

# Load the base model, then attach the Rant AI LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, adapter_repo)

prompt = """### question:
i have no reason to live
### Rant AI:"""

# Simple keyword check that routes clearly distressed prompts to a fixed
# supportive message instead of the model.
def is_distress_prompt(text):
    return bool(re.search(
        r"\b(i want to die|i wanna die|kill myself|suicidal|no reason to live|"
        r"life is over|i'm suicidal|i want to disappear)\b",
        text,
        re.IGNORECASE,
    ))

if is_distress_prompt(prompt):
    print("I'm really sorry you're feeling this way. You're not alone. "
          "Please consider talking to someone who can help — you deserve support. "
          "You can call a helpline or reach out to someone you trust.")
else:
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=100,
        temperature=0.8,
        top_p=0.95,
        repetition_penalty=1.15,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
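For application code, the steps above can be wrapped in a small helper that formats the `### question:` / `### Rant AI:` prompt, runs the distress check first, and returns only the newly generated tokens. A sketch reusing `tokenizer`, `model`, and `is_distress_prompt` from above; the function name is illustrative:
```python
def rant_ai_reply(user_text: str) -> str:
    # Route clearly distressed inputs to a fixed supportive message.
    if is_distress_prompt(user_text):
        return ("I'm really sorry you're feeling this way. You're not alone. "
                "Please consider calling a helpline or reaching out to someone you trust.")
    prompt = f"### question:\n{user_text}\n### Rant AI:"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=100,
        temperature=0.8,
        top_p=0.95,
        repetition_penalty=1.15,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Decode only the tokens generated after the prompt.
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()

print(rant_ai_reply("i've been feeling really low lately"))
```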