---
license: apache-2.0
language: en
datasets:
- sst2
---
# T5-base fine-tuned for Sentiment Analysis

Google's T5-base fine-tuned on the SST-2 dataset for the sentiment analysis downstream task.
## Details of T5

The T5 model was presented in *Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer* by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu.
## Model fine-tuning

The model was fine-tuned for 10 epochs with standard hyperparameters.
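As a minimal sketch of the text-to-text setup, SST-2 classification examples can be cast into input/target string pairs for T5. This assumes GLUE-style fields (`sentence`, `label`) and the `sentiment:` task prefix used in the inference example in this card; it is not the exact training script.

```python
# Sketch: converting SST-2 examples into T5 text-to-text pairs.
# Assumes GLUE-style fields ("sentence", "label") and the "sentiment:"
# prefix used elsewhere in this card.
LABELS = {0: "negative", 1: "positive"}

def to_text2text(example):
    # T5 consumes and produces plain text, so the class label
    # becomes the literal target string the model learns to generate.
    return {
        "input_text": "sentiment: " + example["sentence"],
        "target_text": LABELS[example["label"]],
    }

pair = to_text2text({"sentence": "This movie is awesome", "label": 1})
# pair == {"input_text": "sentiment: This movie is awesome", "target_text": "positive"}
```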
## Validation set metrics

|              | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| negative     | 1.00      | 1.00   | 1.00     | 428     |
| positive     | 1.00      | 1.00   | 1.00     | 444     |
| accuracy     |           |        | 1.00     | 872     |
| macro avg    | 1.00      | 1.00   | 1.00     | 872     |
| weighted avg | 1.00      | 1.00   | 1.00     | 872     |
## Model in Action

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-finetune-sst2")
model = T5ForConditionalGeneration.from_pretrained("t5-finetune-sst2")

def get_sentiment(text):
    # T5 is text-to-text: prefix the input with the task name used at fine-tuning time
    inputs = tokenizer("sentiment: " + text, max_length=128, truncation=True, return_tensors="pt").input_ids
    preds = model.generate(inputs)
    # Decode the generated token ids back into the label string
    return tokenizer.batch_decode(preds, skip_special_tokens=True)

get_sentiment("This movie is awesome")
# Output: ['positive']
```
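To score the model against metrics like the table above, the decoded label strings can be mapped back to SST-2's integer labels. This is a hypothetical helper, not part of the original card; unrecognized outputs are mapped to -1 so they count as errors rather than crashing evaluation.

```python
# Map decoded T5 outputs ("negative"/"positive") back to SST-2 integer labels
# so predictions can be compared against the dataset's gold labels.
LABEL_TO_ID = {"negative": 0, "positive": 1}

def decode_to_ids(decoded_preds):
    # Normalize whitespace/case before lookup; anything else becomes -1.
    return [LABEL_TO_ID.get(p.strip().lower(), -1) for p in decoded_preds]

ids = decode_to_ids(["positive", "negative", "positive"])
# ids == [1, 0, 1]
```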
This model card is based on "mrm8488/t5-base-finetuned-imdb-sentiment" by Manuel Romero (@mrm8488).