CheckThat! at CLEF 2020: Enabling the Automatic Identification and
Verification of Claims in Social Media
- URL: http://arxiv.org/abs/2001.08546v1
- Date: Tue, 21 Jan 2020 06:47:11 GMT
- Title: CheckThat! at CLEF 2020: Enabling the Automatic Identification and
Verification of Claims in Social Media
- Authors: Alberto Barron-Cedeno, Tamer Elsayed, Preslav Nakov, Giovanni Da San
Martino, Maram Hasanain, Reem Suwaileh, and Fatima Haouari
- Abstract summary: CheckThat! proposes four complementary tasks and a related task from previous lab editions.
The evaluation is carried out using mean average precision or precision at rank k for ranking tasks, and F1 for classification tasks.
- Score: 28.070608555714752
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We describe the third edition of the CheckThat! Lab, which is part of the
2020 Cross-Language Evaluation Forum (CLEF). CheckThat! proposes four
complementary tasks and a related task from previous lab editions, offered in
English, Arabic, and Spanish. Task 1 asks to predict which tweets in a Twitter
stream are worth fact-checking. Task 2 asks to determine whether a claim posted
in a tweet can be verified using a set of previously fact-checked claims. Task
3 asks to retrieve text snippets from a given set of Web pages that would be
useful for verifying a target tweet's claim. Task 4 asks to predict the
veracity of a target tweet's claim using a set of Web pages and potentially
useful snippets in them. Finally, the lab offers a fifth task that asks to
predict the check-worthiness of the claims made in English political debates
and speeches. CheckThat! features a full evaluation framework. The evaluation
is carried out using mean average precision or precision at rank k for ranking
tasks, and F1 for classification tasks.
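The metrics above can be sketched in plain Python. This is an illustrative re-implementation of the standard definitions of precision at rank k, (mean) average precision, and F1, not the lab's official scorer:

```python
def precision_at_k(ranked_labels, k):
    """Fraction of relevant items among the top-k results of a ranked list.

    ranked_labels: list of 0/1 relevance labels, in ranked order."""
    return sum(ranked_labels[:k]) / k

def average_precision(ranked_labels):
    """Average of P@k taken at each rank k where a relevant item appears."""
    hits, total = 0, 0.0
    for rank, rel in enumerate(ranked_labels, start=1):
        if rel:
            hits += 1
            total += hits / rank
    return total / hits if hits else 0.0

def mean_average_precision(ranked_runs):
    """MAP: mean of average precision over a set of ranked lists (queries)."""
    return sum(average_precision(r) for r in ranked_runs) / len(ranked_runs)

def f1(y_true, y_pred):
    """F1 score for binary classification labels (0/1)."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

For example, a ranked list with relevance labels `[1, 0, 1]` has P@2 = 0.5 and average precision (1/1 + 2/3) / 2 = 5/6.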
Related papers
- Findings of the WMT 2022 Shared Task on Translation Suggestion [63.457874930232926]
We report the result of the first edition of the WMT shared task on Translation Suggestion.
The task aims to provide alternatives for specific words or phrases given entire documents generated by machine translation (MT).
It consists of two sub-tasks, namely, naive translation suggestion and translation suggestion with hints.
arXiv Detail & Related papers (2022-11-30T03:48:36Z)
- DialFact: A Benchmark for Fact-Checking in Dialogue [56.63709206232572]
We construct DialFact, a benchmark dataset of 22,245 annotated conversational claims, paired with pieces of evidence from Wikipedia.
We find that existing fact-checking models trained on non-dialogue data like FEVER fail to perform well on our task.
We propose a simple yet data-efficient solution to effectively improve fact-checking performance in dialogue.
arXiv Detail & Related papers (2021-10-15T17:34:35Z)
- Overview of the CLEF-2019 CheckThat!: Automatic Identification and Verification of Claims [26.96108180116284]
The CheckThat! lab featured two tasks in two different languages: English and Arabic.
The most successful approaches to Task 1 used various neural networks and logistic regression.
Learning-to-rank was used by the highest scoring runs for subtask A.
arXiv Detail & Related papers (2021-09-25T16:08:09Z)
- Zero-Shot Information Extraction as a Unified Text-to-Triple Translation [56.01830747416606]
We cast a suite of information extraction tasks into a text-to-triple translation framework.
We formalize the task as a translation between task-specific input text and output triples.
We study the zero-shot performance of this framework on open information extraction.
arXiv Detail & Related papers (2021-09-23T06:54:19Z)
- Overview of the CLEF-2021 CheckThat! Lab on Detecting Check-Worthy Claims, Previously Fact-Checked Claims, and Fake News [21.574997165145486]
We describe the fourth edition of the CheckThat! Lab, part of the 2021 Conference and Labs of the Evaluation Forum (CLEF).
The lab evaluates technology supporting tasks related to factuality, and covers Arabic, Bulgarian, English, Spanish, and Turkish.
arXiv Detail & Related papers (2021-09-23T06:10:36Z)
- Accenture at CheckThat! 2020: If you say so: Post-hoc fact-checking of claims using transformer-based models [0.0]
We introduce the strategies used by the Accenture Team for the CLEF 2020 CheckThat! Lab, Task 1, on English and Arabic.
This shared task evaluated whether a claim in social media text should be professionally fact checked.
We utilized BERT and RoBERTa models to identify claims in social media text that a professional fact-checker should review.
arXiv Detail & Related papers (2020-09-05T01:44:11Z)
- Overview of CheckThat! 2020: Automatic Identification and Verification of Claims in Social Media [26.60148306714383]
We present an overview of the third edition of the CheckThat! Lab at CLEF 2020.
The lab featured five tasks in two different languages: English and Arabic.
We describe the task setup, the evaluation results, and a summary of the approaches used by the participants.
arXiv Detail & Related papers (2020-07-15T21:19:32Z)
- Stance Prediction and Claim Verification: An Arabic Perspective [0.0]
This work explores the application of textual entailment in news claim verification and stance prediction using a new corpus in Arabic.
The publicly available corpus comes in two versions: one consisting of 4,547 true and false claims, and one consisting of 3,786 (claim, evidence) pairs.
arXiv Detail & Related papers (2020-05-21T01:17:46Z)
- Adversarial Transfer Learning for Punctuation Restoration [58.2201356693101]
Adversarial multi-task learning is introduced to learn task invariant knowledge for punctuation prediction.
Experiments are conducted on IWSLT2011 datasets.
arXiv Detail & Related papers (2020-04-01T06:19:56Z)
- Claim Check-Worthiness Detection as Positive Unlabelled Learning [53.24606510691877]
Claim check-worthiness detection is a critical component of fact checking systems.
We illuminate a central challenge in claim check-worthiness detection underlying all of these tasks.
Our best performing method is a unified approach which automatically corrects for this using a variant of positive unlabelled learning.
arXiv Detail & Related papers (2020-03-05T16:06:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listings) and is not responsible for any consequences of its use.