CLEF 2025 - CheckThat! Lab


Subjectivity, Fact-Checking, Claim Extraction & Normalization, and Retrieval

Lab programme

The poster session will take place between CheckThat!’s Oral Sessions 1 and 2. Oral presentations are allocated 12 minutes, followed by 3 minutes for Q&A.

Plenary Talk Overview — Thursday 11th September

Time Speaker Title
11:30 – 13:15 Firoj Alam CheckThat! Lab Overview

Oral Session 1 — Thursday 11th September, 14:15–15:45 (Chair: Julia Maria Struß)

Time Task Title
14:15 – 15:00 CheckThat! Lab Overview
15:00 – 15:15 1 CEA-LIST at CheckThat! 2025: Evaluating LLMs as Detectors of Bias and Opinion in Text
Akram Elbouanani, Evan Dufraisse, Aboubacar Tuo and Adrian Popescu
15:15 – 15:30 1 XplaiNLP at CheckThat! 2025: Multilingual Subjectivity Detection with Finetuned Transformers and Prompt-Based Inference with Large Language Models
Ariana Sahitaj, Jiaao Li, Pia Wenzel Neves, Fedor Splitt, Premtim Sahitaj, Charlott Jakob, Veronika Solopova and Vera Schmitt
15:30 – 15:45 2 Factiverse and IAI at CheckThat! 2025: Adaptive ICL for Claim Extraction
Pratuat Amatya, Vinay Setty

Poster Session — Thursday 11th September, 15:45–16:30

Task Title
4 ClimateSense at CheckThat! 2025: Combining Fine-tuned Large Language Models and Conventional Machine Learning Models for Subjectivity and Scientific Web Discourse
Grégoire Burel, Pasquale Lisena, Enrico Daga, Raphaël Troncy and Harith Alani
3 Fraunhofer SIT at CheckThat! 2025: Multi-Instance Evidence Pooling for Numerical Claim Verification
André Runewicz, Paul Moritz Ranly, Inna Vogel and Martin Steinebach
4 Deep Retrieval at CheckThat! 2025: Identifying Scientific Papers from Implicit Social Media Mentions via Hybrid Retrieval and Re-Ranking
Pascal J. Sager, Ashwini Kamaraj, Benjamin F. Grewe and Thilo Stadelmann
1 JU_NLP at CheckThat! 2025: A Confidence-guided Transformer-based Approach for Multilingual Subjectivity Classification
Srijani Debnath and Dipankar Das
2 UNH at Check That! 2025 Task 2: Fine-tuning Vs Prompting
Joe Wilder, Nikhil Kadapala, Yanji Xu, Mohammed Alsaadi, Mitchell Rogers, Palash Agrawal, Adam Hassick and Laura Dietz
2 TIFIN at CheckThat! 2025: X-VERIFY — Multi-lingual NLI-based Fact Checking with Condensed Evidence
Manan Sharma, Arya Suneesh, Manish Jain, Pawan K. Rajpoot, Prasanna Devadiga, Bharatdeep Hazarika, Ashish Shrivastva, Kishan Gurumurthy, Anshuman B Suresh and Aditya U Baliga
1 AI Wizards at CheckThat! 2025: Enhancing Transformer-Based Embeddings with Sentiment for Subjectivity Detection in News Articles
Matteo Fasulo, Luca Babboni and Luca Tedeschini
4 JU_NLP at CheckThat! 2025: Leveraging Hybrid Embeddings for Multi-Label Classification in Scientific Social Media Discourse
Srijani Debnath and Dipankar Das
3 SINAI-UGPLN at CheckThat! 2025: Meta-Ensemble Strategies for Numerical Claim Verification in English
Mariuxi del Carmen Toapanta-Bernabé, Miguel Ángel Garcia-Cumbreras, L. Alfonso Ureña-López, Denisse Desiree Mora-Intriago and Carla Tatiana Bernal-García
1 cepanca_UNAM at CheckThat! 2025: A Language-driven BERT Approach for Detection of Subjectivity in News
Ivan Diaz, Jessica Barco, Joana Hernández, Edgar Lee-Romero and Gemma Bel-Enguix
4 Bridging social media, scientific discourse, and scientific literature
Parth Manish Thapliyal, Ritesh Sunil Chavan, Samridh Samridh, Chaoyuan Zuo and Ritwik Banerjee
4 DS@GT at CheckThat! 2025: Ensemble Methods for Detection of Scientific Discourse on Social Media
Ayush Parikh, Hoang Thanh Thanh Truong, Jeanette Schofield and Maximilian Heil
4 DS@GT at CheckThat! 2025: Exploring Retrieval and Reranking Pipelines for Scientific Claim Source Retrieval on Social Media Discourse
Jeanette Schofield, Shuyu Tian, Hoang Thanh Thanh Truong and Maximilian Heil
4 ATOM at CheckThat! 2025: Retrieve the Implicit — Scientific Evidence Retrieval
Moritz Staudinger, Alaa El-Ebshihy, Wojciech Kusa, Florina Piroi and Allan Hanbury
1 Arcturus at CheckThat! 2025: DeBERTa-v3-base for Multilingual Subjectivity Detection in News Articles
Aditya Aditya, Rahul Jambulkar and Sukomal Pal
2 DS@GT at CheckThat! 2025: A Simple Retrieval-First, LLM-Backed Framework for Claim Normalization
Aleksandar Pramov, Jiangqin Ma and Bina Patel
1 DS@GT at CheckThat! 2025: Detecting Subjectivity via Transfer-Learning and Corrective Data Augmentation
Maximilian Heil and Dionne Bang

Oral Session 2 — Thursday 11th September, 16:30–18:00 (Chair: Konstantin Todorov)

Time Task Title
16:30 – 16:45 2 dfkinit2b at CheckThat! 2025: Leveraging LLMs and Ensemble of Methods for Multilingual Claim Normalization
Tatiana Anikina, Ivan Vykopal, Sebastian Kula, Ravi Kiran Chikkala, Natalia Skachkova, Jing Yang, Veronika Solopova, Vera Schmitt and Simon Ostermann
16:45 – 17:00 3 DS@GT at CheckThat! 2025: Evaluating Context and Tokenization Strategies for Numerical Fact Verification
Maximilian Heil and Aleksandar Pramov
17:00 – 17:15 3 SINAI-UGPLN at CheckThat! 2025: Meta-Ensemble Strategies for Numerical Claim Verification in English
Mariuxi del Carmen Toapanta-Bernabé, Miguel Ángel Garcia-Cumbreras, L. Alfonso Ureña-López, Denisse Desiree Mora-Intriago and Carla Tatiana Bernal-García
17:15 – 17:30 3 LIS at CheckThat! 2025: Multi-Stage Open-Source Large Language Models for Fact-Checking Numerical Claims
Quy Thanh Le, Ismail Badache, Aznam Yacoub and Maamar El Amine Hamri
17:30 – 17:45 4 TurQUaz at CheckThat! 2025: Debating Large Language Models for Scientific Web Discourse Detection
Tarık Saraç, Selin Mergen and Mücahid Kutlu
17:45 – 18:00 4 Claim2Source at CheckThat! 2025: Zero-Shot Style Transfer for Scientific Claim-Source Retrieval
Tobias Schreieder and Michael Färber

Oral Session 3 — Friday 12th September, 11:30–13:00 (Chair: Firoj Alam)

Time Title
11:30 – 12:15 Invited talk: Automated detection of disinformation campaigns targeting brands. Turning automated fact-checking into a profitable business, by Rubén Míguez Pérez
12:15 – 12:25 What’s next for CheckThat!?
12:25 – 12:40 Open Discussion

Invited Talk

Speaker: Rubén Míguez


Rubén Míguez holds a PhD in Telecommunications Engineering from the University of Vigo and an MBA from the School of Industrial Organization. He started his career in research at the University of Vigo, specializing in intelligent systems. In 2018, he joined Newtral as a product leader and head of technology, where he focused on leveraging AI to combat misinformation. He is also the founder of a tech startup and has received several accolades, including the National University Entrepreneur Award and the Antonio Palacios Award for Innovation. Rubén has served as a mentor on platforms such as Startup Pirates and has presented at various national and international events. Rubén Míguez is the CTO and co-founder of TrueFlag.ai, a Newtral spin-off dedicated to applying automated fact-checking for brand protection. TrueFlag is the first multilingual and multimodal SaaS platform designed to detect and prevent disinformation campaigns in real-time.

Title: Automated detection of disinformation campaigns targeting brands. Turning automated fact-checking into a profitable business
Abstract:

In this talk, Rubén Míguez will share his work at Trueflag.ai, a cutting-edge startup focused on leveraging automated fact-checking technologies to detect targeted disinformation campaigns on social media directed at major brands. He will discuss how the original technology developed for fact-checkers at Newtral has evolved into a new AI framework—combining open-source and proprietary models—designed to identify harmful narratives affecting brands across industries such as energy and banking. The presentation will outline the key challenges that lie ahead, highlight the solutions already deployed, and trace the transition from a monolithic, BERT-based architecture to a multi-agent framework for disinformation detection. In addition, we’ll explore the core business use cases and scientific objectives driving the development of this technology, offering the audience a broader perspective on what it takes to transform automated fact-checking into a viable and impactful business model—while addressing the real-world needs of fact-checkers and researchers alike.

Tasks

  • Task 1: Subjectivity
  • Task 2: Claim Extraction & Normalization
  • Task 3: Fact-Checking Numerical Claims
  • Task 4: Scientific Web Discourse

Registration

Please register to participate in the CheckThat! Lab tasks.

Important Dates

  • November 2024: Lab registration opens
  • December 2024: Release of the training materials
  • 25 April 2025: Lab registration closes
  • 2 May 2025: Beginning of the evaluation cycle (test sets release)
  • 10 May 2025 (23:59 AOE): End of the evaluation cycle (run submission)
  • 30 May 2025 (23:59 CEST): Deadline for the submission of working notes [CEUR-WS]
  • 30 May – 27 June 2025: Review process of participant papers
  • 9 June 2025: Submission of Condensed Lab Overviews [LNCS]
  • 16 June 2025: Notification of Acceptance for Condensed Lab Overviews [LNCS]
  • 23 June 2025: Camera Ready Copy of Condensed Lab Overviews [LNCS] due
  • 27 June 2025: Notification of Acceptance for Participant Papers [CEUR-WS]
  • 7 July 2025: Camera Ready Copy of Participant Papers and Extended Lab Overviews [CEUR-WS] due
  • 21-25 July 2025: CEUR-WS Working Notes Preview for Checking by Authors and Lab Organizers
  • 9-12 September 2025: CLEF 2025 Conference in Madrid, Spain

Recent Updates

  • 28 Aug, 2025: Updated programme.
  • 29 May, 2025: Gold labels for all test datasets are available in the Git repository.
  • 15 May, 2025: Leaderboard made public.
  • 2 May, 2025: Submission URLs are available. Please check task specific page.
  • 25 Apr, 2025: An online session will be conducted on 2nd May from 10:00 to 11:00 AM (UTC+3) to provide a walkthrough of the CodaLab system, submission process, and working notes.
  • 24 Apr, 2025: Test data release for task 1.
  • 27 Mar, 2025: Training data release for task 4.
  • 4 Feb, 2025: Training data release for task 1.
  • 20 Jan, 2025: Training data release for task 3 (Spanish).
  • 20 Jan, 2025: Training data release for task 2.
  • 12 Jan, 2025: Training data release for task 3 (English).
  • 7 Sept, 2024: Website is up.

Organisers

  • General:
  • Task 1: Federico Ruggeri, Università di Bologna, Italy
  • Task 2:
    • Preslav Nakov, MBZUAI, UAE
    • Tanmoy Chakraborty, Indian Institute of Technology Delhi, India
  • Task 3: Vinay Setty, University of Stavanger, Norway
  • Task 4: Stefan Dietze, GESIS - Leibniz Institute for the Social Sciences, Cologne, Germany

PC chairs

Communication chair

  • Firoj Alam, Qatar Computing Research Institute, HBKU, Qatar

Citations

The BibTeX entries are included below and in this bib file.

ECIR 2025

@InProceedings{10.1007/978-3-031-88720-8_68,
  author="Alam, Firoj
  and Stru{\ss}, Julia Maria
  and Chakraborty, Tanmoy
  and Dietze, Stefan
  and Hafid, Salim
  and Korre, Katerina
  and Muti, Arianna
  and Nakov, Preslav
  and Ruggeri, Federico
  and Schellhammer, Sebastian
  and Setty, Vinay
  and Sundriyal, Megha
  and Todorov, Konstantin
  and V., Venktesh",
editor="Hauff, Claudia
  and Macdonald, Craig
  and Jannach, Dietmar
  and Kazai, Gabriella
  and Nardini, Franco Maria
  and Pinelli, Fabio
  and Silvestri, Fabrizio
  and Tonellotto, Nicola",
title="The CLEF-2025 CheckThat! Lab: Subjectivity, Fact-Checking, Claim Normalization, and Retrieval",
booktitle="Advances in Information Retrieval",
year="2025",
publisher="Springer Nature Switzerland",
address="Cham",
pages="467--478",
isbn="978-3-031-88720-8",
}

CLEF 2025 LNCS

@InProceedings{clef-checkthat:2025-lncs,
  author = {
    Alam, Firoj
    and Struß, Julia Maria      
    and Chakraborty, Tanmoy
    and Dietze, Stefan
    and Hafid, Salim
    and Korre, Katerina
    and Muti, Arianna
    and Nakov, Preslav
    and Ruggeri, Federico
    and Schellhammer, Sebastian
    and Setty, Vinay
    and Sundriyal, Megha
    and Todorov, Konstantin
    and Venktesh, V
  },
  title = {Overview of the {CLEF}-2025 {CheckThat! Lab}: Subjectivity, Fact-Checking, Claim Normalization, and Retrieval},
  editor = {
    Carrillo-de-Albornoz, Jorge and
    Gonzalo, Julio and
    Plaza, Laura and
    García Seco de Herrera, Alba and
    Mothe, Josiane and
    Piroi, Florina and
    Rosso, Paolo and
    Spina, Damiano and
    Faggioli, Guglielmo and
    Ferro, Nicola
  },
  booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the Sixteenth International Conference of the CLEF Association (CLEF 2025)},
  year = {2025}
}

CLEF 2025 CEUR papers

Below are the task-specific overview papers. Each uses a crossref to the proceedings entry to include the volume information:

@proceedings{clef2025-workingnotes,
    editor = "Faggioli, Guglielmo and
    Ferro, Nicola and
    Rosso, Paolo and
    Spina, Damiano",
    title = "Working Notes of CLEF 2025 - Conference and Labs of the Evaluation Forum",
    booktitle = "Working Notes of CLEF 2025 - Conference and Labs of the Evaluation Forum",
    series = "CLEF~2025",
    address = "Madrid, Spain",
    year = 2025
}

Task 1 overview paper

@inproceedings{clef-checkthat:2025:task1,
  title     = {Overview of the {CLEF-2025 CheckThat!} Lab Task 1 on Subjectivity in News Article},
  author    = {
    Ruggeri, Federico and
    Muti, Arianna and
    Korre, Katerina and
    Stru{\ss}, Julia Maria and
    Siegel, Melanie and
    Wiegand, Michael and
    Alam, Firoj and
    Biswas, Rafiul and
    Zaghouani, Wajdi and
    Nawrocka, Maria and
    Ivasiuk, Bogdan and
    Razvan, Gogu and
    Mihail, Andreiana
  },
  crossref  = {clef2025-workingnotes}
}

Task 2 overview paper

@inproceedings{clef-checkthat:2025:task2,
  title     = {Overview of the {CLEF-2025 CheckThat!} Lab Task 2 on Claim Normalization},
  author    = {
    Sundriyal, Megha and
    Chakraborty, Tanmoy and
    Nakov, Preslav
  },
  crossref  = {clef2025-workingnotes}
}

Task 3 overview paper

@inproceedings{clef-checkthat:2025:task3,
  title     = {Overview of the {CLEF-2025 CheckThat!} Lab Task 3 on Fact-Checking Numerical Claims},
  author    = {
    Venktesh, V. and
    Setty, Vinay and
    Anand, Avishek and
    Hasanain, Maram and
    Bendou, Boushra and
    Bouamor, Houda and
    Alam, Firoj and
    Iturra-Bocaz, Gabriel and
    Galuščáková, Petra
  },
  crossref  = {clef2025-workingnotes}
}

Task 4 overview paper

@inproceedings{clef-checkthat:2025:task4,
  title     = {Overview of the {CLEF-2025 CheckThat!} Lab Task 4 on Scientific Web Discourse},
  author    = {
    Hafid, Salim and
    Kartal, Yavuz Selim and
    Schellhammer, Sebastian and
    Boland, Katarina and
    Dimitrov, Dimitar and
    Bringay, Sandra and
    Todorov, Konstantin and
    Dietze, Stefan
  },
  crossref  = {clef2025-workingnotes}
}

CLEF 2025

This edition of the CheckThat! lab is held within CLEF 2025, part of the CLEF initiative.