Irony at aequa-tech

Cite this work

@inproceedings{arthur2023debunker,
  title={Debunker Assistant: a support for detecting online misinformation},
  author={Arthur, Thomas Edward Capozzi Lupi and Cignarella, Alessandra Teresa and Frenda, Simona and Lai, Mirko and Stranisci, Marco Antonio and Urbinati, Alessandra and others},
  booktitle={Proceedings of the Ninth Italian Conference on Computational Linguistics (CLiC-it 2023)},
  volume={3596},
  pages={1--5},
  year={2023},
  editor={Boschetti, Federico and Lebani, Gianluca E. and Magnini, Bernardo and Novielli, Nicole}
}

Model Description

This model is a fine-tuned version of the AlBERTo Italian language model for irony detection.

Training Details

Training Data

Training Hyperparameters

  • learning_rate: 2e-5
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam
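
A minimal sketch of how these hyperparameters could map onto a Hugging Face Trainer setup is shown below. The base checkpoint name is taken from the usage snippet further down; the toy dataset is a placeholder, since the training data is not specified in this card, and Trainer's built-in optimizer is AdamW, the closest match to the Adam listed above.

from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
    set_seed,
)

set_seed(42)  # seed from the hyperparameter list above

base = "m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

# Toy stand-in for the (unspecified) training data: 1 = ironic, 0 = not ironic
train_data = Dataset.from_dict({
    "text": ["Che bella giornata, piove da sei ore.", "Oggi piove."],
    "label": [1, 0],
})
train_data = train_data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
)

# Hyperparameters from the list above; everything else is a Trainer default
# (note: Trainer's default optimizer is AdamW, not plain Adam)
args = TrainingArguments(
    output_dir="irony-it",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_data, tokenizer=tokenizer)
trainer.train()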

Evaluation

Testing Data

The model was tested on the IronITA test set, obtaining the following results:

Metrics and Results

  • macro F1: 0.79
  • accuracy: 0.79
  • precision of positive class: 0.77
  • recall of positive class: 0.84
  • F1 of positive class: 0.80
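
These figures can be recomputed from gold and predicted labels with scikit-learn (an assumption for illustration; the original evaluation script is not part of this card), treating the ironic class as positive:

from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# y_true / y_pred: placeholder gold and predicted labels (1 = ironic, 0 = not ironic)
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1]

print("macro F1:            ", f1_score(y_true, y_pred, average="macro"))
print("accuracy:            ", accuracy_score(y_true, y_pred))
print("precision (positive):", precision_score(y_true, y_pred, pos_label=1))
print("recall (positive):   ", recall_score(y_true, y_pred, pos_label=1))
print("F1 (positive):       ", f1_score(y_true, y_pred, pos_label=1))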

Framework versions

  • Transformers 4.30.2
  • PyTorch 2.1.2
  • Datasets 2.19.0
  • Accelerate 0.30.0
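
To reproduce this environment, the pinned versions above can be installed with pip (assuming a standard Python setup; the PyTorch package is named torch):

pip install transformers==4.30.2 torch==2.1.2 datasets==2.19.0 accelerate==0.30.0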

How to use this model:

from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# Load the fine-tuned classifier and the original AlBERTo tokenizer
model = AutoModelForSequenceClassification.from_pretrained("aequa-tech/irony-it", num_labels=2)
tokenizer = AutoTokenizer.from_pretrained("m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0")
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)

# "Take a joy. Now put it down, it's not yours." (an ironic Italian quip)
classifier("Prendi una gioia. Ora posala, che non è tua.")