# 😄 Emotion Classifier — Fine-tuned DistilBERT
This is a DistilBERT-based model fine-tuned for emotion classification on the `dair-ai/emotion` dataset. It predicts one of six emotion labels for a given text:

- sadness
- joy
- love
- anger
- fear
- surprise
## 🧠 Model Details
- Base model: `distilbert-base-uncased`
- Fine-tuned on: `dair-ai/emotion`
- Task: Multi-class, single-label text classification
- Framework: 🤗 Transformers
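The exact training configuration for this checkpoint is not published in this card. As a rough guide only, a minimal fine-tuning run on `dair-ai/emotion` could look like the sketch below; the batch size, epoch count, and learning rate are illustrative assumptions, not the values used for this model:

```python
# Hypothetical reproduction sketch. The card does not publish the training
# hyperparameters, so batch size, epochs, and learning rate below are
# illustrative defaults, not the values used for this checkpoint.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("dair-ai/emotion")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # Dynamic padding is handled later by the data collator.
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=6
)

args = TrainingArguments(
    output_dir="emotion-distilbert-finetuned",
    per_device_train_batch_size=32,  # assumed
    num_train_epochs=3,              # assumed
    learning_rate=2e-5,              # assumed
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorWithPadding(tokenizer),
)
trainer.train()
```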
## 📊 Dataset
Dataset: `dair-ai/emotion`

| Split      | Samples |
|------------|---------|
| Train      | 16,000  |
| Validation | 2,000   |
| Test       | 2,000   |
| Total      | 20,000  |
Classes (labels):

- 0 → sadness
- 1 → joy
- 2 → love
- 3 → anger
- 4 → fear
- 5 → surprise
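If you want to verify the split sizes and the label mapping above yourself, the 🤗 `datasets` library exposes both directly:

```python
# Inspect the splits and the integer → label mapping directly from the dataset.
from datasets import load_dataset

ds = load_dataset("dair-ai/emotion")
print({split: ds[split].num_rows for split in ds})
# {'train': 16000, 'validation': 2000, 'test': 2000}
print(ds["train"].features["label"].names)
# ['sadness', 'joy', 'love', 'anger', 'fear', 'surprise']
```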
## 🚀 Usage
You can use the model directly with the 🤗 Transformers `pipeline`:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="shivvamm/emotion-distilbert-finetuned", top_k=None)

text = "I feel hopeful and excited about the future."
results = classifier(text)
print(results)
```
Example output (truncated; with `top_k=None` the pipeline returns a score for each of the six labels, ranked by confidence):

```python
[[{'label': 'joy', 'score': 0.9876}, ...]]
```
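If you prefer not to use the `pipeline` helper (for example, to control batching or device placement yourself), the same prediction can be reproduced with the tokenizer and model directly. This is a standard Transformers pattern, not code taken from this repository; only the model id comes from this card:

```python
# Standard Transformers inference without the pipeline helper.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "shivvamm/emotion-distilbert-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("I feel hopeful and excited about the future.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities and map the top class id to its label name.
probs = torch.softmax(logits, dim=-1)[0]
label = model.config.id2label[int(probs.argmax())]
print(label, round(float(probs.max()), 4))
```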
## ✅ Intended Use
- Social media and product review emotion analysis
- Sentiment & psychological tone detection
- Personal journaling or well-being apps
- Academic research in NLP and affective computing
## ⚠️ Limitations
- English-language text only
- Assigns a single dominant emotion per text
- May misclassify mixed emotions, sarcasm, or idioms
## 📄 Citation
```bibtex
@misc{shivvamm2025emotion,
  title={Emotion Classification using DistilBERT},
  author={Shivvamm},
  year={2025},
  howpublished={\url{https://huggingface.co/shivvamm/emotion-distilbert-finetuned}},
  note={Fine-tuned on the dair-ai/emotion dataset}
}
```
## 👤 Author

Shivvamm

- Model: `shivvamm/emotion-distilbert-finetuned`
- License: MIT

💡 Fine-tuned with 🤗 Hugging Face Transformers and Accelerate.