Citations
Several papers are associated with this edition of the lab. So far, the
ECIR 2024 overview paper is the only one published. Details for the papers
specific to each task, as well as an overall overview, will be posted here as
they come out. BibTeX entries for each paper are included below. For your
convenience, the bib file (currently under construction) is available as well.
ECIR 2024
Barrón-Cedeño, A., et al. (2024). The CLEF-2024 CheckThat! Lab: Check-Worthiness,
Subjectivity, Persuasion, Roles, Authorities, and Adversarial Robustness. In:
Goharian, N., et al. (eds.) Advances in Information Retrieval. ECIR 2024. Lecture
Notes in Computer Science, vol. 14612. Springer, Cham.
https://doi.org/10.1007/978-3-031-56069-9_62
@InProceedings{10.1007/978-3-031-56069-9_62,
author="Barr{\'o}n-Cede{\~{n}}o, Alberto
and Alam, Firoj
and Chakraborty, Tanmoy
and Elsayed, Tamer
and Nakov, Preslav
and Przyby{\l}a, Piotr
and Stru{\ss}, Julia Maria
and Haouari, Fatima
and Hasanain, Maram
and Ruggeri, Federico
and Song, Xingyi
and Suwaileh, Reem",
editor="Goharian, Nazli
and Tonellotto, Nicola
and He, Yulan
and Lipani, Aldo
and McDonald, Graham
and Macdonald, Craig
and Ounis, Iadh",
title="The CLEF-2024 CheckThat! Lab: Check-Worthiness, Subjectivity,
Persuasion, Roles, Authorities, and Adversarial Robustness",
booktitle="Advances in Information Retrieval",
year="2024",
publisher="Springer Nature Switzerland",
address="Cham",
pages="449--458",
abstract="The first five editions of the CheckThat! lab focused on the main
tasks of the information verification pipeline: check-worthiness, evidence
retrieval and pairing, and verification. Since the 2023 edition, it has been
focusing on new problems that can support the research and decision making
during the verification process. In this new edition, we focus on new problems
and ---for the first time--- we propose six tasks in fifteen languages (Arabic,
Bulgarian, English, Dutch, French, Georgian, German, Greek, Italian, Polish,
Portuguese, Russian, Slovene, Spanish, and code-mixed Hindi-English): Task 1
estimation of check-worthiness (the only task that has been present in all
CheckThat! editions), Task 2 identification of subjectivity (a follow up of
CheckThat! 2023 edition), Task 3 identification of persuasion (a follow up of
SemEval 2023), Task 4 detection of hero, villain, and victim from memes (a
follow up of CONSTRAINT 2022), Task 5 Rumor Verification using Evidence from
Authorities (a first), and Task 6 robustness of credibility assessment with
adversarial examples (a first). These tasks represent challenging classification
and retrieval problems at the document and at the span level, including
multilingual and multimodal settings.",
isbn="978-3-031-56069-9"
}