Overview of Factify5WQA: Fact Verification through 5W Question-Answering
- URL: http://arxiv.org/abs/2410.04236v1
- Date: Sat, 5 Oct 2024 17:28:18 GMT
- Title: Overview of Factify5WQA: Fact Verification through 5W Question-Answering
- Authors: Suryavardan Suresh, Anku Rani, Parth Patwa, Aishwarya Reganti, Vinija Jain, Aman Chadha, Amitava Das, Amit Sheth, Asif Ekbal
- Abstract summary: The Factify5WQA task aims to increase research towards automated fake news detection.
The best performing team posted an accuracy of 69.56%, which is a near 35% improvement over the baseline.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Researchers have found that fake news spreads many times faster than real news. This is a major problem, especially in today's world, where social media is the key source of news for much of the younger population. Fact verification thus becomes an important task, and many media sites contribute to the cause. Manual fact verification is tedious given the volume of fake news online. The Factify5WQA shared task aims to increase research towards automated fake news detection by providing a dataset with an aspect-based, question-answering-based fact verification method. Each claim and its supporting document is associated with 5W questions that help compare the two information sources. Performance is measured objectively by comparing the generated answers against reference answers using BLEU score, followed by the accuracy of the final classification. Submissions to the task used custom training setups and pre-trained language models, among others. The best performing team posted an accuracy of 69.56%, a near 35% improvement over the baseline.
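The two-stage evaluation described in the abstract (BLEU over the 5W answers, then accuracy over the final label) can be sketched as follows. This is a minimal illustration, not the shared task's official scorer: the BLEU here is a bare-bones sentence-level variant without smoothing, and the Support/Neutral/Refute label set and the example answers are assumptions.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(reference, candidate, max_n=4):
    """Simplified sentence-level BLEU: clipped n-gram precisions up to
    max_n combined with uniform weights, plus the standard brevity penalty."""
    ref, cand = reference.split(), candidate.split()
    if not cand:
        return 0.0
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(ngrams(cand, n))
        if not cand_ngrams:
            continue  # candidate shorter than n tokens
        ref_ngrams = Counter(ngrams(ref, n))
        clipped = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        precisions.append(clipped / sum(cand_ngrams.values()))
    if not precisions or min(precisions) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / len(precisions))
    brevity = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return brevity * geo_mean

def label_accuracy(gold_labels, pred_labels):
    """Accuracy of the final claim classification (the Support /
    Neutral / Refute label set is an assumption here)."""
    correct = sum(g == p for g, p in zip(gold_labels, pred_labels))
    return correct / len(gold_labels)

# Hypothetical 5W answers for one claim, scored question by question.
gold_answers = {"who": "the president", "when": "on monday"}
pred_answers = {"who": "the president", "when": "monday"}
answer_bleu = {w: sentence_bleu(gold_answers[w], pred_answers[w])
               for w in gold_answers}
```

In practice an established implementation such as NLTK's `sentence_bleu` or sacreBLEU would be used; the sketch above only shows the shape of the metric pair the task reports.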
Related papers
- Prompt-and-Align: Prompt-Based Social Alignment for Few-Shot Fake News Detection [50.07850264495737]
"Prompt-and-Align" (P&A) is a novel prompt-based paradigm for few-shot fake news detection.
We show that P&A sets a new state of the art for few-shot fake news detection performance by significant margins.
arXiv Detail & Related papers (2023-09-28T13:19:43Z) - Findings of Factify 2: Multimodal Fake News Detection [36.34201719103715]
We present the outcome of the Factify 2 shared task, which provides a multi-modal fact verification and satire news dataset.
The data calls for a comparison based approach to the task by pairing social media claims with supporting documents, with both text and image, divided into 5 classes based on multi-modal relations.
The highest F1 score averaged for all five classes was 81.82%.
arXiv Detail & Related papers (2023-07-19T22:14:49Z) - ManiTweet: A New Benchmark for Identifying Manipulation of News on Social Media [74.93847489218008]
We present a novel task, identifying manipulation of news on social media, which aims to detect manipulation in social media posts and identify manipulated or inserted information.
To study this task, we have proposed a data collection schema and curated a dataset called ManiTweet, consisting of 3.6K pairs of tweets and corresponding articles.
Our analysis demonstrates that this task is highly challenging, with large language models (LLMs) yielding unsatisfactory performance.
arXiv Detail & Related papers (2023-05-23T16:40:07Z) - FACTIFY-5WQA: 5W Aspect-based Fact Verification through Question Answering [3.0401523614135333]
A human fact-checker generally follows several logical steps to verify the veracity of a claim.
It is necessary to have an aspect-based (delineating which part(s) are true and which are false) explainable system.
In this paper, we propose a 5W framework for question-answer-based fact explainability.
arXiv Detail & Related papers (2023-05-07T16:52:21Z) - Factify 2: A Multimodal Fake News and Satire News Dataset [36.34201719103715]
We provide a multi-modal fact-checking dataset called FACTIFY 2, improving Factify 1 by using new data sources and adding satire articles.
Similar to FACTIFY 1.0, we have three broad categories - support, no-evidence, and refute, with sub-categories based on the entailment of visual and textual data.
We also provide a BERT and Vision Transformer based baseline, which achieves a 65% F1 score on the test set.
arXiv Detail & Related papers (2023-04-08T03:14:19Z) - Discord Questions: A Computational Approach To Diversity Analysis in News Coverage [84.55145223950427]
We propose a new framework to assist readers in identifying source differences and gaining an understanding of news coverage diversity.
The framework is based on the generation of Discord Questions: questions with a diverse answer pool.
arXiv Detail & Related papers (2022-11-09T16:37:55Z) - Fake News Detection: Experiments and Approaches beyond Linguistic Features [0.0]
Credibility information and metadata associated with the news article have been used for improved results.
The experiments also show how modelling justification or evidence can lead to improved results.
arXiv Detail & Related papers (2021-09-27T10:00:44Z) - A Study of Fake News Reading and Annotating in Social Media Context [1.0499611180329804]
We present an eye-tracking study in which we let 44 lay participants casually read through a social media feed containing posts with news articles, some of which were fake.
In a second run, we asked the participants to decide on the truthfulness of these articles.
We also describe a follow-up qualitative study with a similar scenario but this time with 7 expert fake news annotators.
arXiv Detail & Related papers (2021-09-26T08:11:17Z) - FaVIQ: FAct Verification from Information-seeking Questions [77.7067957445298]
We construct a large-scale fact verification dataset called FaVIQ using information-seeking questions posed by real users.
Our claims are verified to be natural, contain little lexical bias, and require a complete understanding of the evidence for verification.
arXiv Detail & Related papers (2021-07-05T17:31:44Z) - Generating Fact Checking Briefs [97.82546239639964]
We investigate how to increase the accuracy and efficiency of fact checking by providing information about the claim before performing the check.
We develop QABriefer, a model that generates a set of questions conditioned on the claim, searches the web for evidence, and generates answers.
We show that fact checking with briefs -- in particular QABriefs -- increases the accuracy of crowdworkers by 10% while slightly decreasing the time taken.
arXiv Detail & Related papers (2020-11-10T23:02:47Z) - Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.