Overview of the CLAIMSCAN-2023: Uncovering Truth in Social Media through
Claim Detection and Identification of Claim Spans
- URL: http://arxiv.org/abs/2310.19267v1
- Date: Mon, 30 Oct 2023 04:57:41 GMT
- Title: Overview of the CLAIMSCAN-2023: Uncovering Truth in Social Media through
Claim Detection and Identification of Claim Spans
- Authors: Megha Sundriyal and Md Shad Akhtar and Tanmoy Chakraborty
- Abstract summary: Social media platforms have become a haven for those who disseminate false information, propaganda, and fake news.
It has become crucial to automatically identify social media posts that make such claims, check their veracity, and differentiate between credible and false claims.
The primary objectives centered on two crucial tasks: Task A, determining whether a social media post constitutes a claim, and Task B, precisely identifying the words or phrases within the post that form the claim.
- Score: 36.21314290592325
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: The rapid development of online social media platforms has enabled a significant increase in content creation and information exchange, which has been highly advantageous. However, these platforms have also become a haven for those who disseminate false information, propaganda, and fake news. Claims are essential in shaping our perceptions of the world, but sadly, they are frequently used by spreaders of false information to deceive people. To address this problem, social media giants employ content moderators to filter out fake news, but the sheer volume of information makes it difficult to identify fake news effectively. It has therefore become crucial to automatically identify social media posts that make such claims, check their veracity, and differentiate between credible and false claims. In response, we presented CLAIMSCAN at the 2023 Forum for Information Retrieval Evaluation (FIRE'2023). The primary objectives centered on two crucial tasks: Task A, determining whether a social media post constitutes a claim, and Task B, precisely identifying the words or phrases within the post that form the claim. Task A received 40 registrations, demonstrating strong interest and engagement in this timely challenge, while Task B attracted participation from 28 teams, highlighting its significance in the digital era of misinformation.
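The two tasks map naturally onto standard NLP formulations: Task A is binary sequence classification over the whole post, and Task B is token-level span extraction. Below is a minimal, hypothetical sketch of that framing with Hugging Face Transformers; the checkpoints and label scheme are placeholders, not the official CLAIMSCAN baselines or data format.

```python
# Hypothetical framing of the two CLAIMSCAN tasks (placeholder checkpoints, not official baselines).
from transformers import pipeline

# Task A: claim detection as binary sequence classification (claim vs. non-claim).
claim_detector = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # placeholder; a real system is fine-tuned on claim labels
)

# Task B: claim span identification as token classification (e.g., BIO tags over the post).
span_tagger = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",  # placeholder; a real system is trained to tag claim spans
    aggregation_strategy="simple",
)

post = "Breaking: vitamin C cures the common cold, doctors confirm."
print(claim_detector(post))  # [{'label': ..., 'score': ...}] -> would be mapped to claim / non-claim
print(span_tagger(post))     # grouped token spans -> candidate claim phrase(s)
```

In this framing, Task A reduces to a single per-post label, while Task B additionally requires aligning predicted token tags back to character offsets in the original post.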
Related papers
- WSDMS: Debunk Fake News via Weakly Supervised Detection of Misinforming Sentences with Contextualized Social Wisdom [13.92421433941043]
We investigate a novel task in the field of fake news debunking, which involves detecting sentence-level misinformation.
Inspired by the Multiple Instance Learning (MIL) approach, we propose a model called Weakly Supervised Detection of Misinforming Sentences (WSDMS).
We evaluate WSDMS on three real-world benchmarks and demonstrate that it outperforms existing state-of-the-art baselines in debunking fake news at both the sentence and article levels.
arXiv Detail & Related papers (2023-10-25T12:06:55Z)
- Debunking Disinformation: Revolutionizing Truth with NLP in Fake News Detection [7.732570307576947]
The Internet and social media have altered how individuals access news in the age of instantaneous information distribution.
Fake news is rapidly spreading on digital platforms, which has a negative impact on the media ecosystem.
Natural Language Processing has emerged as a potent weapon in the growing war against disinformation.
arXiv Detail & Related papers (2023-08-30T21:25:31Z) - ManiTweet: A New Benchmark for Identifying Manipulation of News on Social Media [74.93847489218008]
We present a novel task, identifying manipulation of news on social media, which aims to detect manipulation in social media posts and identify manipulated or inserted information.
To study this task, we have proposed a data collection schema and curated a dataset called ManiTweet, consisting of 3.6K pairs of tweets and corresponding articles.
Our analysis demonstrates that this task is highly challenging, with large language models (LLMs) yielding unsatisfactory performance.
arXiv Detail & Related papers (2023-05-23T16:40:07Z)
- Adherence to Misinformation on Social Media Through Socio-Cognitive and Group-Based Processes [79.79659145328856]
We argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation.
We make the case that polarization and misinformation adherence are closely tied.
arXiv Detail & Related papers (2022-06-30T12:34:24Z)
- Stance Detection with BERT Embeddings for Credibility Analysis of Information on Social Media [1.7616042687330642]
We propose a model for detecting fake news using stance as one of the features along with the content of the article.
Our work interprets the content through automatic feature extraction and by assessing the relevance of individual text pieces.
The experiment conducted on the real-world dataset indicates that our model outperforms the previous work and enables fake news detection with an accuracy of 95.32%.
arXiv Detail & Related papers (2021-05-21T10:46:43Z)
- Misinfo Belief Frames: A Case Study on Covid & Climate News [49.979419711713795]
We propose a formalism for understanding how readers perceive the reliability of news and the impact of misinformation.
We introduce the Misinfo Belief Frames (MBF) corpus, a dataset of 66k inferences over 23.5k headlines.
Our results using large-scale language modeling to predict misinformation frames show that machine-generated inferences can influence readers' trust in news headlines.
arXiv Detail & Related papers (2021-04-18T09:50:11Z)
- TIB's Visual Analytics Group at MediaEval '20: Detecting Fake News on Corona Virus and 5G Conspiracy [9.66022279280394]
Fake news on social media has become a hot research topic because it negatively impacts public discourse around real news.
The FakeNews task at MediaEval 2020 tackles this problem by creating a challenge to automatically detect tweets containing misinformation.
We present a simple approach that uses BERT embeddings and a shallow neural network for classifying tweets using only text (a rough illustrative sketch follows this entry).
arXiv Detail & Related papers (2021-01-10T11:52:17Z)
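The TIB approach above (contextual BERT embeddings fed into a small feed-forward head) could look roughly like the sketch below; the checkpoint, mean pooling, and layer sizes are assumptions, and the head is untrained here, so this illustrates the architecture rather than reproducing the actual MediaEval system.

```python
# Rough sketch (not the TIB system): pooled BERT embeddings + a shallow classifier head.
# Checkpoint, mean pooling, and layer sizes are illustrative assumptions; the head is untrained.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

classifier = nn.Sequential(   # "shallow" head: a single hidden layer
    nn.Linear(768, 128),
    nn.ReLU(),
    nn.Linear(128, 2),        # e.g., misinformation vs. other
)

def classify(tweets):
    batch = tokenizer(tweets, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state  # (batch, seq_len, 768)
    pooled = hidden.mean(dim=1)                       # mean-pool token embeddings into one vector per tweet
    return classifier(pooled).softmax(dim=-1)         # class probabilities

print(classify(["5G towers are spreading the corona virus!"]))
```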
- The Role of the Crowd in Countering Misinformation: A Case Study of the COVID-19 Infodemic [15.885290526721544]
We focus on tweets related to the COVID-19 pandemic, analyzing the spread of misinformation, professional fact checks, and the crowd response to popular misleading claims about COVID-19.
We train a classifier to create a novel dataset of 155,468 COVID-19-related tweets, containing 33,237 false claims and 33,413 refuting arguments.
We observe that the surge in misinformation tweets results in a quick response and a corresponding increase in tweets that refute such misinformation.
arXiv Detail & Related papers (2020-11-11T13:48:44Z)
- Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z)
- Misinformation Has High Perplexity [55.47422012881148]
We propose to leverage perplexity to debunk false claims in an unsupervised manner.
First, we extract reliable evidence from scientific and news sources according to sentence similarity to the claims.
Second, we prime a language model with the extracted evidence and evaluate the correctness of the given claims based on their perplexity scores at debunking time (a rough illustrative sketch follows this entry).
arXiv Detail & Related papers (2020-06-08T15:13:44Z)
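As a hedged illustration of the perplexity idea above (not the authors' implementation), one can condition a causal language model on retrieved evidence and compute the perplexity of the claim tokens alone; the GPT-2 checkpoint, the evidence snippet, and any decision threshold are assumptions.

```python
# Illustrative sketch: score a claim by its perplexity under a language model primed with evidence.
# Model choice, evidence text, and thresholding are assumptions, not the paper's setup.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def claim_perplexity(evidence: str, claim: str) -> float:
    """Perplexity of the claim tokens, conditioned on the evidence as a prefix."""
    prefix_ids = tokenizer(evidence + " ", return_tensors="pt").input_ids
    claim_ids = tokenizer(claim, return_tensors="pt").input_ids
    input_ids = torch.cat([prefix_ids, claim_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : prefix_ids.shape[1]] = -100   # ignore the evidence prefix in the loss
    with torch.no_grad():
        loss = model(input_ids, labels=labels).loss   # mean negative log-likelihood over claim tokens
    return float(torch.exp(loss))

evidence = "Peer-reviewed studies report that radio waves cannot carry or transmit viruses."
for claim in ["5G networks spread the corona virus.", "Radio waves cannot transmit viruses."]:
    print(claim, "->", round(claim_perplexity(evidence, claim), 1))
# Under this scheme, the claim contradicted by the evidence should receive the higher perplexity.
```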
This list is automatically generated from the titles and abstracts of the papers on this site.