CheckThat! Lab at CLEF 2024

Task 2: Subjectivity

Definition

Systems are challenged to distinguish whether a sentence from a news article expresses the subjective view of its author or instead presents an objective view of the covered topic.

This is a binary classification task in which systems have to identify whether a text sequence (a sentence or a paragraph) is subjective or objective.

The task is offered in five languages: Arabic, Bulgarian, English, German, and Italian, as well as in a multilingual setting.

Information regarding the annotation guidelines can be found in the following paper: On the Definition of Prescriptive Annotation Guidelines for Language-Agnostic Subjectivity Detection.
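
To make the setup concrete, the sketch below trains a simple TF-IDF plus logistic-regression classifier on toy sentences. The SUBJ/OBJ label names and the example data are assumptions for illustration, not the official dataset format, and this is a baseline sketch rather than a competitive system.

```python
# Minimal illustrative baseline for binary subjectivity classification.
# The sentences and the SUBJ/OBJ labels below are toy assumptions,
# not the official dataset format.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_sentences = [
    "The committee's decision is a disgrace to democracy.",  # subjective
    "The committee approved the proposal by a 7-2 vote.",    # objective
    "Honestly, the new policy is doomed to fail.",           # subjective
    "The policy takes effect on 1 January.",                 # objective
]
train_labels = ["SUBJ", "OBJ", "SUBJ", "OBJ"]

# Word uni- and bigram TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_sentences, train_labels)

print(model.predict(["In my view, the verdict was absurd."]))
```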

Datasets

Evaluation

The official evaluation measure is the macro-averaged F1 over the two classes.
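
Macro-averaged F1 computes the F1 score for each of the two classes separately and takes their unweighted mean, so both classes count equally regardless of class imbalance. A minimal sketch with scikit-learn, assuming SUBJ/OBJ label names:

```python
from sklearn.metrics import f1_score

# Toy gold and predicted labels; the SUBJ/OBJ names are assumptions.
gold = ["SUBJ", "OBJ", "OBJ", "SUBJ", "OBJ"]
pred = ["SUBJ", "SUBJ", "OBJ", "SUBJ", "OBJ"]

# Official metric: unweighted mean of the per-class F1 scores.
print(f1_score(gold, pred, average="macro"))

# Per-class F1, e.g. to report the SUBJ F1 shown on the leaderboards below.
print(f1_score(gold, pred, average=None, labels=["SUBJ", "OBJ"]))
```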

Submission

Scorers, Format Checkers, and Baseline Scripts

All scripts can be found in the CheckThat! Lab Task 2 repository on GitLab.

Submission guidelines

  • Create only one account per team and submit through that account only.
  • The last file submitted to the leaderboard will be considered as the final submission.
  • For subtask 2A, there are 5 languages (Arabic, Bulgarian, English, German, and Italian). In addition, we define a multilingual evaluation scenario whose training and evaluation splits are balanced sub-samples of all five languages.
  • The name of each output file has to be subtask2A_[LANG].tsv, where LANG can be arabic, bulgarian, english, german, italian, or multilingual.
  • Make sure to use .tsv as the file extension; otherwise, you will get an error on the leaderboard.
  • Valid submission file names are thus subtask2A_arabic.tsv, subtask2A_bulgarian.tsv, subtask2A_english.tsv, subtask2A_german.tsv, subtask2A_italian.tsv, and subtask2A_multilingual.tsv.
  • Zip each .tsv into an archive with the same base name, e.g., subtask2A_arabic.zip, and submit it through the Codalab page (see the packaging sketch after this list).
  • If you participate in more than one language, you must make a separate submission for each language.
  • Submit your team name with each submission and fill out the questionnaire (the link will be provided once the evaluation cycle has started) to provide some details on your approach; we need this information for the overview paper. Your team name must EXACTLY match the one used during the CLEF registration.
  • You are allowed a maximum of 200 submissions per day for each subtask.
  • We will keep the leaderboard private until the end of the submission period; hence, results will not be visible upon submission. All results will be released after the evaluation period.
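
Below is a minimal packaging sketch for the submission file. The TSV column layout (sentence_id, label) is an assumption for illustration; the format checker in the task repository defines the authoritative format.

```python
# Minimal sketch for packaging a submission. The TSV column layout
# (sentence_id, label) is an assumption; the format checker in the task
# repository defines the authoritative format.
import csv
import zipfile

predictions = {"s1": "SUBJ", "s2": "OBJ"}  # hypothetical id -> label pairs

with open("subtask2A_english.tsv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["sentence_id", "label"])
    for sentence_id, label in predictions.items():
        writer.writerow([sentence_id, label])

# The zip archive must carry the same base name as the TSV it contains.
with zipfile.ZipFile("subtask2A_english.zip", "w") as zf:
    zf.write("subtask2A_english.tsv")
```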

Submission Site

The submission is done through the Codalab platform at https://codalab.lisn.upsaclay.fr/competitions/18809

Leaderboard

Multilingual

Rank  Team         Macro F1  SUBJ F1
-     nullpointer  0.7121    0.69
1     Hybrinfox    0.6849    0.63
2     (baseline)   0.6697    0.66
3     IAI Group    0.6292    0.67

* Submissions without a rank (-) were submitted after the deadline.

Arabic

Rank  Team              Macro F1  SUBJ F1
1     IAI Group         0.4947    0.46
2     nullpointer       0.4908    0.37
3     (baseline)        0.4852    0.40
4     SemanticCuetSync  0.4804    0.31
5     ToniRodriguez     0.4645    0.27
6     Hybrinfox         0.4551    0.25
7     JUNLP             0.3623    0.00

Bulgarian

Rank  Team         Macro F1  SUBJ F1
1     (baseline)   0.7531    0.73
2     nullpointer  0.7169    0.69
3     Hybrinfox    0.7147    0.65
4     IAI Group    0.5824    0.65
5     JUNLP        0.3639    0.00

English

Rank  Team              Macro F1  SUBJ F1
1     Hybrinfox         0.7442    0.60
2     ToniRodriguez     0.7372    0.58
3     SSN-NLP           0.7120    0.54
4     Checker Hacker    0.7081    0.54
5     JK_PCIC_UNAM      0.7079    0.55
6     SINAI             0.7035    0.53
7     FactFinders       0.6955    0.51
8     Vigilantes        0.6955    0.52
8     eevvgg            0.6955    0.52
9     nullpointer       0.6893    0.54
10    Indigo            0.6388    0.47
11    (baseline)        0.6346    0.45
12    SemanticCuetSync  0.6265    0.43
13    JUNLP             0.5598    0.36
14    CLaC-2            0.4500    0.37
15    IAI Group         0.4491    0.39

German

Rank  Team         Macro F1  SUBJ F1
1     nullpointer  0.7908    0.73
2     IAI Group    0.7302    0.66
3     (baseline)   0.6994    0.63
4     Hybrinfox    0.6968    0.57

Italian

Rank  Team          Macro F1  SUBJ F1
1     JK_PCIC_UNAM  0.7917    0.69
2     Hybrinfox     0.7838    0.68
3     nullpointer   0.7430    0.64
4     (baseline)    0.6503    0.52
5     IAI Group     0.5862    0.49

Organizers

  • Julia Maria Struß, University of Applied Sciences Potsdam, Germany
  • Federico Ruggeri, Università di Bologna, Italy
  • Alberto Barrón-Cedeño, Università di Bologna, Italy

Arabic data

  • Firoj Alam, Qatar Computing Research Institute, HBKU, Qatar
  • Reem Suwaileh, HBKU, Qatar
  • Maram Hasanain, Qatar Computing Research Institute, HBKU, Qatar
  • Fatema Ahmed, Qatar Computing Research Institute, HBKU, Qatar
  • Wajdi Zaghouani, HBKU, Qatar

Bulgarian data

  • Dimitar Dimitrov, Sofia University, Bulgaria
  • Preslav Nakov, Mohamed bin Zayed University of Artificial Intelligence, UAE
  • Ivan Koychev, Sofia University, Bulgaria
  • Georgi Pachov, Sofia University, Bulgaria
  • Dimitrina Zlatkova, Sofia University, Bulgaria

English data

  • Francesco Antici, Università di Bologna, Italy
  • Alessandra Bardi, Università di Bologna, Italy
  • Alice Fedotova, Università di Bologna, Italy
  • Katerina Korre, Università di Bologna, Italy
  • Arianna Muti, Università di Bologna, Italy
  • Luca Bolognini, Università di Bologna, Italy
  • Elena Palmieri, Università di Bologna, Italy
  • Giulia Grundler, Università di Bologna, Italy

German data

  • Julia Maria Struß, University of Applied Sciences Potsdam, Germany
  • Juliane Köhler, University of Applied Sciences Potsdam, Germany
  • Melanie Siegel, Darmstadt University of Applied Sciences, Germany
  • Michael Wiegand, University of Klagenfurt, Austria
  • Katja Ebermanns, University of Applied Sciences Potsdam, Germany

Italian data

  • Francesco Antici, Università di Bologna, Italy
  • Andrea Galassi, Università di Bologna, Italy
  • Alessandra Bardi, Università di Bologna, Italy
  • Alice Fedotova, Università di Bologna, Italy
  • Arianna Muti, Università di Bologna, Italy
  • Luca Bolognini, Università di Bologna, Italy
  • Elena Palmieri, Università di Bologna, Italy
  • Giulia Grundler, Università di Bologna, Italy

Note: The Italian training and validation datasets are partially derived from the SubjectivITA corpus.

Contact

For queries, please join the Slack channel.

Alternatively, please send an email to: clef-factcheck@googlegroups.com