CLEF 2024 - CheckThat! Lab

Check-worthiness, Subjectivity, Persuasion, Roles, Authorities, and Adversarial Robustness

Tasks

  • Task 1: Check-worthiness estimation
  • Task 2: Subjectivity
  • Task 3: Persuasion techniques
  • Task 4: Detecting hero, villain, and victim from memes
  • Task 5: Rumor verification using evidence from authorities
  • Task 6: Robustness of credibility assessment with adversarial examples

Registration

Please register to participate in the CheckThat! Lab tasks.

Important Dates

  • November 2023: Lab registration opens
  • January 2024: Release of the training materials
  • 22 April 2024: Lab registration closes
  • 29 April 2024: Beginning of the evaluation cycle (test sets release)
  • 6 May 2024 (23:59 AoE): End of the evaluation cycle (run submission)
  • 31 May 2024: Deadline for the submission of working notes
  • 10 June 2024: Submission of Condensed Lab Overviews [LNCS]
  • 21 June 2024: Camera-ready copies of Condensed Lab Overviews [LNCS] due
  • 24 June 2024: Notification of acceptance of working notes
  • 8 July 2024: Deadline for submission of camera-ready working notes
  • 22-26 July 2024: Preview of working notes
  • 9-12 September 2024: CLEF 2024 Conference in Grenoble, France

Recent Updates

  • 30 April 2024: The test sets have been released. We are entering the test stage!
  • January 2024: Training datasets released for all tasks.
  • 21 September 2023: The 2023 cycle of the lab has come to an end. We now launch the 2024 cycle.

Organisers

PC chairs

  • Julia Maria Struß, University of Applied Sciences Potsdam, Germany
  • Fatima Haouari, Qatar University, Qatar
  • Tamer Elsayed, Qatar University, Qatar

Communication chair

  • Firoj Alam, Qatar Computing Research Institute, HBKU, Qatar

Citations

Several papers are associated with this edition of the lab. So far, the ECIR 2024 paper is the only one published. Details of the papers specific to each task, as well as an overall lab overview, will be posted here as they appear.

BibTeX entries for each paper are included below. For your convenience, the bib file (currently under construction) is available as well.

ECIR 2024

Barrón-Cedeño, A. et al. (2024). The CLEF-2024 CheckThat! Lab: Check-Worthiness, Subjectivity, Persuasion, Roles, Authorities, and Adversarial Robustness. In: Goharian, N., et al. Advances in Information Retrieval. ECIR 2024. Lecture Notes in Computer Science, vol 14612. Springer, Cham. https://doi.org/10.1007/978-3-031-56069-9_62

@InProceedings{10.1007/978-3-031-56069-9_62,
    author="Barr{\'o}n-Cede{\~{n}}o, Alberto
        and Alam, Firoj
        and Chakraborty, Tanmoy
        and Elsayed, Tamer
        and Nakov, Preslav
        and Przyby{\l}a, Piotr
        and Stru{\ss}, Julia Maria
        and Haouari, Fatima
        and Hasanain, Maram
        and Ruggeri, Federico
        and Song, Xingyi
        and Suwaileh, Reem",
    editor="Goharian, Nazli
        and Tonellotto, Nicola
        and He, Yulan
        and Lipani, Aldo
        and McDonald, Graham
        and Macdonald, Craig
        and Ounis, Iadh",
    title="The CLEF-2024 CheckThat! Lab: Check-Worthiness, Subjectivity, 
Persuasion, Roles, Authorities, and Adversarial Robustness",
    booktitle="Advances in Information Retrieval",
    year="2024",
    publisher="Springer Nature Switzerland",
    address="Cham",
    pages="449--458",
    abstract="The first five editions of the CheckThat! lab focused on the main 
tasks of the information verification pipeline: check-worthiness, evidence 
retrieval and pairing, and verification. Since the 2023 edition, it has been 
focusing on new problems that can support the research and decision making 
during the verification process. In this new edition, we focus on new problems 
and ---for the first time--- we propose six tasks in fifteen languages (Arabic, 
Bulgarian, English, Dutch, French, Georgian, German, Greek, Italian, Polish, 
Portuguese, Russian, Slovene, Spanish, and code-mixed Hindi-English): Task 1 
estimation of check-worthiness (the only task that has been present in all 
CheckThat! editions), Task 2 identification of subjectivity (a follow up of 
CheckThat! 2023 edition), Task 3 identification of persuasion (a follow up of 
SemEval 2023), Task 4 detection of hero, villain, and victim from memes (a 
follow up of CONSTRAINT 2022), Task 5 Rumor Verification using Evidence from 
Authorities (a first), and Task 6 robustness of credibility assessment with 
adversarial examples (a first). These tasks represent challenging classification 
and retrieval problems at the document and at the span level, including 
multilingual and multimodal settings.",
    isbn="978-3-031-56069-9"
}

CLEF 2024 LNCS Overview paper

TBA

CLEF 2024 Task 1 overview paper

TBA

CLEF 2024 Task 2 overview paper

TBA

CLEF 2024 Task 3 overview paper

TBA

CLEF 2024 Task 4 overview paper

TBA

CLEF 2024 Task 5 overview paper

TBA

CLEF 2024 Task 6 overview paper

TBA

CLEF 2024

This edition of the CheckThat! Lab is held within CLEF 2024, as part of the CLEF Initiative.