# Fine-tuned BERT on Yelp Reviews (5-class classification)
This model is `bert-base-uncased` fine-tuned on the Yelp Review Full dataset for 5-class sentiment classification (1 to 5 stars).
## Training Details
- Framework: Hugging Face Transformers + Ray Train
- Hardware: 3 GPU workers with Ray
- Model: `bert-base-uncased`
- Dataset subset: 20,000 training samples, 5,000 validation samples
- Epochs: 10
- Batch size: 16 (train), 32 (eval)
- Optimizer: AdamW (lr=2e-5, weight decay=0.01)
- Mixed precision: FP16 enabled
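With data-parallel training, the per-device batch size multiplies across workers. A minimal sketch of the arithmetic, assuming the batch size of 16 listed above is per worker rather than global (the card does not say which):

```python
# Hedged sketch: assumes the train batch size (16) is per device and is
# replicated across all 3 Ray data-parallel workers.
num_workers = 3
per_device_train_batch = 16

effective_batch = num_workers * per_device_train_batch
print("Effective global batch size:", effective_batch)  # -> 48
```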
## Evaluation Results
On the validation split:
- Accuracy: 61.9%
- F1 (weighted): 0.62
- Precision: 0.62
- Recall: 0.62
- Eval loss: 2.84
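For reference, "weighted" F1 averages the per-class F1 scores weighted by each class's support (number of true samples). A pure-Python sketch of that computation, using toy labels rather than the actual validation outputs:

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Weighted-average F1: per-class F1 weighted by class support."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for c in set(y_true):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += (support[c] / total) * f1
    return score

# Toy example with the 5 star classes (labels 0-4); not the real validation data.
y_true = [0, 1, 2, 3, 4, 4]
y_pred = [0, 1, 2, 3, 3, 4]
print(round(weighted_f1(y_true, y_pred), 3))  # -> 0.833
```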
## Usage
```python
from transformers import BertTokenizer, BertForSequenceClassification
import torch

model = BertForSequenceClassification.from_pretrained("AdhamEhab/fine-tuned-bert-yelp")
tokenizer = BertTokenizer.from_pretrained("AdhamEhab/fine-tuned-bert-yelp")

text = "The food was amazing and the service was excellent!"
inputs = tokenizer(text, return_tensors="pt")

# Inference only: disable gradient tracking
with torch.no_grad():
    outputs = model(**inputs)

pred = outputs.logits.argmax(dim=-1).item()
print("Predicted star rating:", pred + 1)  # labels are 0-4 -> map to 1-5
```
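To turn the raw logits into per-class confidences, apply a softmax. The sketch below uses made-up logits for illustration; in practice the values would come from `outputs.logits[0].tolist()` in the snippet above:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for the 5 star classes (labels 0-4), not real model output.
logits = [-1.2, -0.5, 0.1, 1.8, 3.0]
probs = softmax(logits)
star = probs.index(max(probs)) + 1  # map label 0-4 to star 1-5
print(f"Predicted {star} stars with confidence {max(probs):.2f}")
```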
## Model tree

Base model: `google-bert/bert-base-uncased`