CheckThat! Lab at CLEF 2023





Task 1: Check-Worthiness in Multimodal and Unimodal Content


The aim of this task is to determine whether a claim in a tweet is worth fact-checking. Typical approaches to making this decision either rely on the judgments of professional fact-checkers or ask human annotators several auxiliary questions, such as “does it contain a verifiable factual claim?” and “is it harmful?”, before deciding on the final check-worthiness label.

This year we offer two kinds of data, which translate to two subtasks:

  • Subtask 1A (Multimodal): The tweets to be judged include both a text snippet and an image.
  • Subtask 1B (Unimodal - Text): The tweets to be judged contain only text.

Subtask 1A is offered in Arabic and English, whereas Subtask 1B is offered in Arabic, English and Spanish.

Datasets

  • Subtask 1A (Multimodal): Arabic
  • Subtask 1A (Multimodal): English
  • Subtask 1B (Unimodal - Text): Arabic
  • Subtask 1B (Unimodal - Text): English TBA
  • Subtask 1B (Unimodal - Text): Spanish (including tweets and transcriptions)


This is a binary classification task. The official evaluation metric is F_1 over the positive class.
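As a minimal sketch, the official metric can be computed from gold and predicted labels as follows. The label name `"Yes"` for the positive (check-worthy) class is illustrative, not the task's official label string:

```python
def f1_positive(gold, pred, positive="Yes"):
    """F1 score over the positive class for a binary classification task.

    gold, pred: equal-length sequences of labels.
    positive: the label treated as the positive (check-worthy) class.
    """
    tp = sum(1 for g, p in zip(gold, pred) if g == positive and p == positive)
    fp = sum(1 for g, p in zip(gold, pred) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, pred) if g == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


# Example: 2 true positives, 1 false positive, 1 false negative
# -> precision = recall = 2/3, so F1 = 2/3.
print(f1_positive(["Yes", "No", "Yes", "Yes"], ["Yes", "Yes", "No", "Yes"]))
```

The same number can be obtained with `sklearn.metrics.f1_score(gold, pred, pos_label="Yes")`; the standalone version above only illustrates what “F1 over the positive class” means.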


Scorers, Format Checkers, and Baseline Scripts

All scripts can be found in the main repository for the lab, CheckThat! Lab Task 1.

Submission guidelines

Will be available soon.

Task Organizers

  • Firoj Alam, Qatar Computing Research Institute, HBKU
  • Alberto Barrón-Cedeño, Università di Bologna, Italy
  • Gullal S. Cheema, TIB – Leibniz Information Centre for Science and Technology
  • Sherzod Hakimov, University of Potsdam
  • Maram Hasanain, Qatar Computing Research Institute, HBKU
  • Chengkai Li, The University of Texas at Arlington
  • Rubén Miguez, Newtral, Spain
  • Hamdy Mubarak, Qatar Computing Research Institute, HBKU
  • Preslav Nakov, Mohamed bin Zayed University of Artificial Intelligence
  • Gautam Kishore Shahi, University of Duisburg-Essen
  • Wajdi Zaghouani, HBKU