The CLEF-2026 CheckThat! Lab: Advancing Multilingual Fact-Checking
- URL: http://arxiv.org/abs/2602.09516v1
- Date: Tue, 10 Feb 2026 08:20:18 GMT
- Title: The CLEF-2026 CheckThat! Lab: Advancing Multilingual Fact-Checking
- Authors: Julia Maria Struß, Sebastian Schellhammer, Stefan Dietze, Venktesh V, Vinay Setty, Tanmoy Chakraborty, Preslav Nakov, Avishek Anand, Primakov Chungkham, Salim Hafid, Dhruv Sahnan, Konstantin Todorov,
- Abstract summary: The CheckThat! lab aims to advance the development of innovative technologies combating disinformation and manipulation efforts in online communication. In this year's edition, the verification pipeline is at the center again with the following tasks. These tasks represent challenging classification and retrieval problems as well as generation challenges at the document and span level, including multilingual settings.
- Score: 58.93871662838964
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The CheckThat! lab aims to advance the development of innovative technologies combating disinformation and manipulation efforts in online communication across a multitude of languages and platforms. While in early editions the focus has been on core tasks of the verification pipeline (check-worthiness, evidence retrieval, and verification), in the past three editions, the lab added additional tasks linked to the verification process. In this year's edition, the verification pipeline is at the center again with the following tasks: Task 1 on source retrieval for scientific web claims (a follow-up of the 2025 edition), Task 2 on fact-checking numerical and temporal claims, which adds a reasoning component to the 2025 edition, and Task 3, which expands the verification pipeline with generation of full-fact-checking articles. These tasks represent challenging classification and retrieval problems as well as generation challenges at the document and span level, including multilingual settings.
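The abstract describes a verification pipeline built from core stages such as check-worthiness estimation, evidence retrieval, and claim verification. The following is a minimal, self-contained sketch of how those three stages fit together; every function name, heuristic, and data item here is an illustrative assumption for exposition, not the lab's actual systems, tasks, or datasets.

```python
# Toy sketch of a three-stage verification pipeline (check-worthiness ->
# evidence retrieval -> verification), as named in the abstract.
# All heuristics and data below are hypothetical placeholders.

def is_check_worthy(claim: str) -> bool:
    """Toy check-worthiness filter: flag claims containing digits,
    loosely echoing Task 2's focus on numerical/temporal claims."""
    return any(ch.isdigit() for ch in claim)

def retrieve_evidence(claim: str, corpus: list[str]) -> list[str]:
    """Toy retrieval: rank corpus passages by word overlap with the claim."""
    claim_words = set(claim.lower().split())
    scored = [(len(claim_words & set(p.lower().split())), p) for p in corpus]
    return [p for score, p in sorted(scored, reverse=True) if score > 0]

def verify(claim: str, evidence: list[str]) -> str:
    """Toy verdict: 'supported' if the top-ranked passage shares
    enough words with the claim, else 'refuted'."""
    if not evidence:
        return "not enough info"
    overlap = set(claim.lower().split()) & set(evidence[0].lower().split())
    return "supported" if len(overlap) >= 3 else "refuted"

# Hypothetical mini-corpus and claim for illustration only.
corpus = [
    "The Eiffel Tower is 330 metres tall as of 2022.",
    "Paris is the capital of France.",
]
claim = "The Eiffel Tower is 330 metres tall."
if is_check_worthy(claim):
    print(verify(claim, retrieve_evidence(claim, corpus)))  # prints "supported"
```

Real systems replace each toy heuristic with a learned model (e.g. a classifier for check-worthiness, dense retrieval for evidence, and an NLI-style model for the verdict), but the stage interfaces remain the same.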
Related papers
- The CLEF-2025 CheckThat! Lab: Subjectivity, Fact-Checking, Claim Normalization, and Retrieval [47.46368856874347]
The CheckThat! lab aims to advance the development of technologies designed to identify and counteract online disinformation. Since the 2023 edition, the lab has expanded its scope to address auxiliary tasks that support research and decision-making in verification. In the 2025 edition, the lab revisits core verification tasks while also considering auxiliary challenges.
arXiv Detail & Related papers (2025-03-19T02:06:07Z)
- GenAI Content Detection Task 2: AI vs. Human -- Academic Essay Authenticity Challenge [12.076440946525434]
The Academic Essay Authenticity Challenge was organized as part of the GenAI Content Detection shared tasks collocated with COLING 2025. This challenge focuses on detecting machine-generated vs. human-authored essays for academic purposes. The challenge involves two languages: English and Arabic. This paper outlines the task formulation, details the dataset construction process, and explains the evaluation framework.
arXiv Detail & Related papers (2024-12-24T08:33:44Z)
- Text Generation: A Systematic Literature Review of Tasks, Evaluation, and Challenges [7.140449861888235]
This review categorizes works in text generation into five main tasks.
For each task, we review their relevant characteristics, sub-tasks, and specific challenges.
Our investigation shows nine prominent challenges common to all tasks and sub-tasks in recent text generation publications.
arXiv Detail & Related papers (2024-05-24T14:38:11Z)
- FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios [87.12753459582116]
A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models.
We propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models.
arXiv Detail & Related papers (2023-07-25T14:20:51Z)
- Recent Advances in Direct Speech-to-text Translation [58.692782919570845]
We categorize the existing research work into three directions based on the main challenges -- modeling burden, data scarcity, and application issues.
For the challenge of data scarcity, recent work resorts to many sophisticated techniques, such as data augmentation, pre-training, knowledge distillation, and multilingual modeling.
We analyze and summarize the application issues, which include real-time, segmentation, named entity, gender bias, and code-switching.
arXiv Detail & Related papers (2023-06-20T16:14:27Z)
- FETA: A Benchmark for Few-Sample Task Transfer in Open-Domain Dialogue [70.65782786401257]
This work explores conversational task transfer by introducing FETA: a benchmark for few-sample task transfer in open-domain dialogue.
FETA contains two underlying sets of conversations upon which there are 10 and 7 tasks annotated, enabling the study of intra-dataset task transfer.
We utilize three popular language models and three learning algorithms to analyze the transferability between 132 source-target task pairs.
arXiv Detail & Related papers (2022-05-12T17:59:00Z)
- DialFact: A Benchmark for Fact-Checking in Dialogue [56.63709206232572]
We construct DialFact, a benchmark dataset of 22,245 annotated conversational claims, paired with pieces of evidence from Wikipedia.
We find that existing fact-checking models trained on non-dialogue data like FEVER fail to perform well on our task.
We propose a simple yet data-efficient solution to effectively improve fact-checking performance in dialogue.
arXiv Detail & Related papers (2021-10-15T17:34:35Z)
- Overview of CheckThat! 2020: Automatic Identification and Verification of Claims in Social Media [26.60148306714383]
We present an overview of the third edition of the CheckThat! Lab at CLEF 2020.
The lab featured five tasks in two different languages: English and Arabic.
We describe the tasks setup, the evaluation results, and a summary of the approaches used by the participants.
arXiv Detail & Related papers (2020-07-15T21:19:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.