# ⭐ Text Classification Model — DistilBERT
A lightweight and efficient DistilBERT-based text classification model designed for binary text classification tasks such as sentiment analysis, opinion detection, or simple natural-language classification projects.
This repository contains:
- ✔ A clean inference script (`model.py`)
- ✔ A training script for fine-tuning (`train.py`)
- ✔ Configuration files (`config.json`)
- ✔ A model card (`model_card.md`)
- ✔ Example input samples (`example_inputs.txt`)
- ✔ `requirements.txt` for dependencies
## 🚀 Features
- Built on DistilBERT, optimized for speed and smaller memory footprint compared to full BERT.
- Easy to fine-tune on any binary text dataset.
- Works on CPU, GPU, and cloud platforms.
- Minimal and beginner-friendly structure.
## 📂 Repository Structure

```
.
├── README.md
├── requirements.txt
├── model.py
├── train.py
├── config.json
├── model_card.md
└── example_inputs.txt
```
## 🛠 Installation

Create a virtual environment (recommended) and install the dependencies:

```bash
python -m venv venv
source venv/bin/activate   # Unix/macOS
venv\Scripts\activate      # Windows
pip install -r requirements.txt
```
## 🔍 Inference Example

```python
from model import load_model_and_predict, model_predict

model, tokenizer = load_model_and_predict(load_only=True)
output = model_predict("This movie was fantastic!", model, tokenizer)
print(output)
```
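If you want to see how a helper like this can be put together, here is a minimal sketch using the Hugging Face `transformers` library. The function names mirror the example above, but the internals are an assumption for illustration, not the repository's actual `model.py`; until the model is fine-tuned, the classification head is randomly initialized and the scores are not meaningful.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification


def load_model_and_predict(load_only=True, model_name="distilbert-base-uncased"):
    # Load the tokenizer and a 2-label classification head on top of DistilBERT.
    # NOTE: on the base checkpoint the head is randomly initialized; replace
    # model_name with a fine-tuned checkpoint for real predictions.
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
    model.eval()
    return model, tokenizer


def model_predict(text, model, tokenizer):
    # Tokenize, run a forward pass, and return the argmax label with its probability.
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)[0]
    label = int(probs.argmax())
    return {"label": label, "score": float(probs[label])}
```

The returned dictionary keeps the output JSON-friendly, which makes it easy to wrap this helper in an API endpoint later.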
## 🧠 Model Details
- Model name: text-classification-distilbert
- Base Model: distilbert-base-uncased
- Task: Binary Text Classification
- Language: English
- License: Apache-2.0
## 🧪 Training

```bash
python train.py
```
## 📌 Example Inputs

```
I really like this!
This is absolutely terrible.
```
## ⚠️ Limitations
- Binary labels only
- Performance depends on training data
- Possible dataset bias