---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
language:
- en
- ar
- bg
- de
- el
- it
- pl
- ro
- uk
tags:
- subjectivity-detection
- news-articles
viewer: true
pretty_name: 'CLEF 2025 CheckThat! Lab - Task 1: Subjectivity in News Articles'
size_categories:
- 1K<n<10K
configs:
- config_name: arabic
data_files:
- split: train
path:
- "data/arabic/train_ar.tsv"
- split: dev
path:
- "data/arabic/dev_ar.tsv"
- split: dev_test
path:
- "data/arabic/dev_test_ar.tsv"
- split: test
path:
- "data/arabic/test_ar_unlabeled.tsv"
sep: "\t"
- config_name: bulgarian
data_files:
- split: train
path:
- "data/bulgarian/train_bg.tsv"
- split: dev
path:
- "data/bulgarian/dev_bg.tsv"
- split: dev_test
path:
- "data/bulgarian/dev_test_bg.tsv"
sep: "\t"
- config_name: english
data_files:
- split: train
path:
- "data/english/train_en.tsv"
- split: dev
path:
- "data/english/dev_en.tsv"
- split: dev_test
path:
- "data/english/dev_test_en.tsv"
- split: test
path:
- "data/english/test_en_unlabeled.tsv"
sep: "\t"
- config_name: german
data_files:
- split: train
path:
- "data/german/train_de.tsv"
- split: dev
path:
- "data/german/dev_de.tsv"
- split: dev_test
path:
- "data/german/dev_test_de.tsv"
- split: test
path:
- "data/german/test_de_unlabeled.tsv"
sep: "\t"
- config_name: greek
data_files:
- split: test
path:
- "data/greek/test_gr_unlabeled.tsv"
sep: "\t"
- config_name: italian
data_files:
- split: train
path:
- "data/italian/train_it.tsv"
- split: dev
path:
- "data/italian/dev_it.tsv"
- split: dev_test
path:
- "data/italian/dev_test_it.tsv"
- split: test
path:
- "data/italian/test_it_unlabeled.tsv"
sep: "\t"
- config_name: multilingual
data_files:
- split: dev_test
path:
- "data/multilingual/dev_test_multilingual.tsv"
- split: test
path:
- "data/multilingual/test_multilingual_unlabeled.tsv"
sep: "\t"
- config_name: polish
data_files:
- split: test
path:
- "data/polish/test_pol_unlabeled.tsv"
sep: "\t"
- config_name: romanian
data_files:
- split: test
path:
- "data/romanian/test_ro_unlabeled.tsv"
sep: "\t"
- config_name: ukrainian
data_files:
- split: test
path:
- "data/ukrainian/test_ukr_unlabeled.tsv"
sep: "\t"
---
# CLEF-2025 CheckThat! Lab Task 1: Subjectivity in News Articles
Systems are challenged to distinguish whether a sentence from a news article expresses the subjective view of its author or instead presents an objective view of the covered topic.
This is a binary classification task in which systems have to identify whether a text sequence (a sentence or a paragraph) is subjective (**SUBJ**) or objective (**OBJ**).
The task comprises three settings:
- **Monolingual**: train and test on data in a given language L
- **Multilingual**: train and test on data comprising several languages
- **Zero-shot**: train on several languages and test on unseen languages
## Datasets statistics
* **English**
- train: 830 sentences, 532 OBJ, 298 SUBJ
- dev: 462 sentences, 222 OBJ, 240 SUBJ
- dev-test: 484 sentences, 362 OBJ, 122 SUBJ
* **Italian**
- train: 1613 sentences, 1231 OBJ, 382 SUBJ
- dev: 667 sentences, 490 OBJ, 177 SUBJ
  - dev-test: 513 sentences, 377 OBJ, 136 SUBJ
* **German**
- train: 800 sentences, 492 OBJ, 308 SUBJ
- dev: 491 sentences, 317 OBJ, 174 SUBJ
  - dev-test: 337 sentences, 226 OBJ, 111 SUBJ
* **Bulgarian**
- train: 729 sentences, 406 OBJ, 323 SUBJ
- dev: 467 sentences, 175 OBJ, 139 SUBJ
  - dev-test: 250 sentences, 143 OBJ, 107 SUBJ
- test: TBA
* **Arabic**
- train: 2,446 sentences, 1391 OBJ, 1055 SUBJ
- dev: 742 sentences, 266 OBJ, 201 SUBJ
  - dev-test: 748 sentences, 425 OBJ, 323 SUBJ
## Input Data Format
The data are provided as TSV files with three columns:
> sentence_id <TAB> sentence <TAB> label
Where:
* sentence_id: the id of the sentence within its news article
* sentence: the sentence's text
* label: *OBJ* or *SUBJ*
**Note:** For English, the training and development (validation) sets will also include a fourth column, "solved_conflict", whose boolean value reflects whether the annotators had a strong disagreement.
**Examples:**
> b9e1635a-72aa-467f-86d6-f56ef09f62c3 Gone are the days when they led the world in recession-busting SUBJ
>
> f99b5143-70d2-494a-a2f5-c68f10d09d0a The trend is expected to reverse as soon as next month. OBJ
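Since the values are tab-separated and unquoted, the files can be read with the standard `csv` module. The sketch below is illustrative, not the official loader; the helper name `read_subjectivity_tsv` and the sample id are ours, and the column names follow the description above.

```python
import csv
import io

def read_subjectivity_tsv(lines):
    """Parse a Task 1 TSV file (an iterable of lines) into a list of dicts.

    Labeled files carry sentence_id, sentence, and label; the English
    train/dev files additionally carry a solved_conflict column, which
    DictReader picks up automatically from the header row.
    """
    reader = csv.DictReader(lines, delimiter="\t", quoting=csv.QUOTE_NONE)
    return list(reader)

# Example on an in-memory sample (the id is synthetic, for illustration):
sample = io.StringIO(
    "sentence_id\tsentence\tlabel\n"
    "id-1\tThe trend is expected to reverse as soon as next month.\tOBJ\n"
)
rows = read_subjectivity_tsv(sample)
```

`QUOTE_NONE` matters here: sentences may contain quote characters that should be taken literally rather than treated as CSV quoting.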
## Output Data Format
The output must be a TSV file with two columns: sentence_id and label.
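A minimal way to emit such a file, again with the `csv` module. This is a sketch under the assumption that the output carries a header row mirroring the input files; the helper name `write_predictions` and the sample ids are ours.

```python
import csv
import io

def write_predictions(pairs, out):
    """Write (sentence_id, label) pairs as a two-column TSV.

    Assumes a header row, mirroring the input files; drop the first
    writerow call if the scorer expects headerless output.
    """
    writer = csv.writer(out, delimiter="\t", lineterminator="\n",
                        quoting=csv.QUOTE_NONE)
    writer.writerow(["sentence_id", "label"])
    for sentence_id, label in pairs:
        writer.writerow([sentence_id, label])

# Example with synthetic ids:
buf = io.StringIO()
write_predictions([("id-1", "OBJ"), ("id-2", "SUBJ")], buf)
```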
## Evaluation Metrics
This task is evaluated as a classification task using the macro-averaged F1 score. Additional reported metrics include the precision, recall, and F1 of the SUBJ class, alongside their macro-averaged counterparts.
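For intuition, macro-F1 is the unweighted mean of the per-class F1 scores, so OBJ and SUBJ count equally regardless of class imbalance. A pure-Python sketch (the official scorer is in the GitLab repository linked below; this function is ours):

```python
def f1_macro(gold, pred, labels=("OBJ", "SUBJ")):
    """Macro-averaged F1: the unweighted mean of per-class F1 scores."""
    scores = []
    for label in labels:
        # Count true positives, false positives, false negatives per class.
        tp = sum(1 for g, p in zip(gold, pred) if g == label and p == label)
        fp = sum(1 for g, p in zip(gold, pred) if g != label and p == label)
        fn = sum(1 for g, p in zip(gold, pred) if g == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        scores.append(f1)
    return sum(scores) / len(scores)
```

For example, `f1_macro(["OBJ", "OBJ", "SUBJ", "SUBJ"], ["OBJ", "SUBJ", "SUBJ", "SUBJ"])` averages an OBJ F1 of 2/3 and a SUBJ F1 of 4/5, giving 11/15.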
## Scorers
The code base with the scorer script is available on the original GitLab repository - [clef2025-checkthat-lab-task1](https://gitlab.com/checkthat_lab/clef2025-checkthat-lab/-/tree/main/task1).
To evaluate the output of your model, which must follow the required output format, run the script below:
> python evaluate.py -g dev_truth.tsv -p dev_predicted.tsv
where dev_predicted.tsv is your model's output on the dev set and dev_truth.tsv is the gold-label file provided by the organizers.
The script can also be used to validate the format of a submission: simply pass the provided test file as the gold data.
## Baselines
The code base with the script to train the baseline model is provided in the original GitLab repository - [clef2025-checkthat-lab-task1](https://gitlab.com/checkthat_lab/clef2025-checkthat-lab/-/tree/main/task1).
The script can be run as follows:
> python baseline.py -trp train_data.tsv -ttp dev_data.tsv
where train_data.tsv is the file used for training and dev_data.tsv is the file on which to make predictions.
The baseline is a logistic regression classifier trained on multilingual Sentence-BERT representations of the data.
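The recipe can be sketched in a few lines of scikit-learn. Random vectors stand in for the multilingual Sentence-BERT embeddings so the sketch runs without the sentence-transformers dependency; in practice you would replace them with `model.encode(sentences)` from an embedding model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for Sentence-BERT embeddings of the train and dev sentences.
train_X = rng.normal(size=(8, 16))
train_y = ["OBJ", "SUBJ"] * 4          # gold labels for the train split
dev_X = rng.normal(size=(3, 16))

# The baseline: a logistic regression classifier over the embeddings.
clf = LogisticRegression(max_iter=1000).fit(train_X, train_y)
preds = clf.predict(dev_X)             # one OBJ/SUBJ label per dev sentence
```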
## Leaderboard
The leaderboard is available in the original GitLab repository - [clef2025-checkthat-lab-task1](https://gitlab.com/checkthat_lab/clef2025-checkthat-lab/-/tree/main/task1).
## Related Work
The dataset was used in [AI Wizards at CheckThat! 2025: Enhancing Transformer-Based Embeddings with Sentiment for Subjectivity Detection in News Articles](https://huggingface.co/papers/2507.11764).
Information regarding the annotation guidelines can be found in the following papers:
> Federico Ruggeri, Francesco Antici, Andrea Galassi, Aikaterini Korre, Arianna Muti, Alberto Barrón-Cedeño, _[On the Definition of Prescriptive Annotation Guidelines for Language-Agnostic Subjectivity Detection](https://ceur-ws.org/Vol-3370/paper10.pdf)_, in: Proceedings of Text2Story, the Sixth Workshop on Narrative Extraction From Texts, CEUR-WS.org, 2023, Vol 3370, pp. 103-111
> Francesco Antici, Andrea Galassi, Federico Ruggeri, Katerina Korre, Arianna Muti, Alessandra Bardi, Alice Fedotova, Alberto Barrón-Cedeño, _[A Corpus for Sentence-level Subjectivity Detection on English News Articles](https://arxiv.org/abs/2305.18034)_, in: Proceedings of Joint International Conference on Computational Linguistics, Language Resources and Evaluation (COLING-LREC), 2024
> Suwaileh, Reem, Maram Hasanain, Fatema Hubail, Wajdi Zaghouani, and Firoj Alam. "ThatiAR: Subjectivity Detection in Arabic News Sentences." arXiv preprint arXiv:2406.05559 (2024).
## Credits
### ECIR 2025
Alam, F. et al. (2025). The CLEF-2025 CheckThat! Lab: Subjectivity, Fact-Checking, Claim Normalization, and Retrieval. In: Hauff, C., et al. Advances in Information Retrieval. ECIR 2025. Lecture Notes in Computer Science, vol 15576. Springer, Cham. https://doi.org/10.1007/978-3-031-88720-8_68
```bibtex
@InProceedings{10.1007/978-3-031-88720-8_68,
author="Alam, Firoj
and Stru{\ss}, Julia Maria
and Chakraborty, Tanmoy
and Dietze, Stefan
and Hafid, Salim
and Korre, Katerina
and Muti, Arianna
and Nakov, Preslav
and Ruggeri, Federico
and Schellhammer, Sebastian
and Setty, Vinay
and Sundriyal, Megha
and Todorov, Konstantin
and V., Venktesh",
editor="Hauff, Claudia
and Macdonald, Craig
and Jannach, Dietmar
and Kazai, Gabriella
and Nardini, Franco Maria
and Pinelli, Fabio
and Silvestri, Fabrizio
and Tonellotto, Nicola",
title="The CLEF-2025 CheckThat! Lab: Subjectivity, Fact-Checking, Claim Normalization, and Retrieval",
booktitle="Advances in Information Retrieval",
year="2025",
publisher="Springer Nature Switzerland",
address="Cham",
pages="467--478",
isbn="978-3-031-88720-8",
}
```
### CLEF 2025 LNCS
```bibtex
@InProceedings{clef-checkthat:2025-lncs,
author = {
Alam, Firoj
and Struß, Julia Maria
and Chakraborty, Tanmoy
and Dietze, Stefan
and Hafid, Salim
and Korre, Katerina
and Muti, Arianna
and Nakov, Preslav
and Ruggeri, Federico
and Schellhammer, Sebastian
and Setty, Vinay
and Sundriyal, Megha
and Todorov, Konstantin
and Venktesh, V
},
title = {Overview of the {CLEF}-2025 {CheckThat! Lab}: Subjectivity, Fact-Checking, Claim Normalization, and Retrieval},
editor = {
Carrillo-de-Albornoz, Jorge and
Gonzalo, Julio and
Plaza, Laura and
García Seco de Herrera, Alba and
Mothe, Josiane and
Piroi, Florina and
Rosso, Paolo and
Spina, Damiano and
Faggioli, Guglielmo and
Ferro, Nicola
},
booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the Sixteenth International Conference of the CLEF Association (CLEF 2025)},
year = {2025}
}
```
### CLEF 2025 CEUR papers
```bibtex
@proceedings{clef2025-workingnotes,
editor = "Faggioli, Guglielmo and
Ferro, Nicola and
Rosso, Paolo and
Spina, Damiano",
title = "Working Notes of CLEF 2025 - Conference and Labs of the Evaluation Forum",
booktitle = "Working Notes of CLEF 2025 - Conference and Labs of the Evaluation Forum",
series = "CLEF~2025",
address = "Madrid, Spain",
year = 2025
}
```
### Task 1 overview paper
```bibtex
@inproceedings{clef-checkthat:2025:task1,
  title = {Overview of the {CLEF-2025 CheckThat!} Lab Task 1 on Subjectivity in News Articles},
author = {
Ruggeri, Federico and
Muti, Arianna and
Korre, Katerina and
Stru{\ss}, Julia Maria and
Siegel, Melanie and
Wiegand, Michael and
Alam, Firoj and
Biswas, Rafiul and
Zaghouani, Wajdi and
Nawrocka, Maria and
Ivasiuk, Bogdan and
Razvan, Gogu and
Mihail, Andreiana
},
crossref = {clef2025-workingnotes}
}
``` |