Supporting verification of news articles with automated search for
semantically similar articles
- URL: http://arxiv.org/abs/2103.15581v1
- Date: Mon, 29 Mar 2021 12:56:59 GMT
- Title: Supporting verification of news articles with automated search for
semantically similar articles
- Authors: Vishwani Gupta and Katharina Beckh and Sven Giesselbach and Dennis
Wegener and Tim Wirtz
- Abstract summary: We propose an evidence retrieval approach to handle fake news.
The learning task is formulated as an unsupervised machine learning problem.
We find that our approach is agnostic to concept drifts, i.e. the machine learning task is independent of the hypotheses in a text.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fake information poses one of the major threats to society in the
21st century. Identifying misinformation has become a key challenge due to the
amount of fake news published daily. Yet, no established approach addresses
the dynamics and versatility of fake news editorials. Instead of classifying
content, we propose an evidence retrieval approach to handle fake news. The
learning task is formulated as an unsupervised machine learning problem. For
validation purposes, we provide the user with a set of news articles from
reliable news sources supporting the hypothesis of the queried news article,
and the final decision is left to the user. Technically, we propose a two-step
process: (i) Aggregation-step: with information extracted from the given text,
we query for similar content from reliable news sources. (ii) Refining-step:
we narrow the supporting evidence down by measuring the semantic distance of
the text to the collection from step (i). The distance is calculated based on
Word2Vec and the Word Mover's Distance. In our experiments, only content below
a certain distance threshold is considered supporting evidence. We find that
our approach is agnostic to concept drifts, i.e. the machine learning task is
independent of the hypotheses in a text. This makes it highly adaptable in
times when fake news is as diverse as classical news. Our pipeline offers
possibilities for further analysis, such as investigating bias and differences
in news reporting.
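The abstract names the ingredients of the Refining-step (Word2Vec embeddings, the Word Mover's Distance, and a distance threshold) without giving implementation details, so the following is a minimal sketch of how that step could look. It assumes gensim with pretrained word2vec-google-news-300 vectors, simple whitespace tokenization with stopword removal, and an illustrative threshold of 1.0; none of these specifics come from the paper.

```python
# Sketch of the Refining-step: keep only candidate articles whose
# Word Mover's Distance (WMD) to the queried article is below a threshold.
# Assumptions not taken from the paper: pretrained Google News Word2Vec
# vectors, whitespace tokenization with stopword removal, threshold = 1.0.
# Requires gensim (plus POT or pyemd for WMD support).
import gensim.downloader as api
from gensim.parsing.preprocessing import STOPWORDS

# Pretrained Word2Vec embeddings (the paper uses Word2Vec; the exact model
# chosen here is an assumption).
word_vectors = api.load("word2vec-google-news-300")


def tokenize(text: str) -> list[str]:
    """Lowercase, split on whitespace, and drop stopwords."""
    return [tok for tok in text.lower().split() if tok not in STOPWORDS]


def refine(query_article: str, candidates: list[str], threshold: float = 1.0):
    """Return (distance, article) pairs below the threshold, closest first."""
    query_tokens = tokenize(query_article)
    evidence = []
    for article in candidates:
        distance = word_vectors.wmdistance(query_tokens, tokenize(article))
        if distance < threshold:
            evidence.append((distance, article))
    return sorted(evidence)
```

In the full pipeline, `candidates` would be the articles retrieved from reliable news sources in the Aggregation-step; anything that survives the threshold is presented to the user as supporting evidence.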
Related papers
- Adapting Fake News Detection to the Era of Large Language Models [48.5847914481222]
We study the interplay between machine-(paraphrased) real news, machine-generated fake news, human-written fake news, and human-written real news.
Our experiments reveal an interesting pattern that detectors trained exclusively on human-written articles can indeed perform well at detecting machine-generated fake news, but not vice versa.
arXiv Detail & Related papers (2023-11-02T08:39:45Z)
- WSDMS: Debunk Fake News via Weakly Supervised Detection of Misinforming Sentences with Contextualized Social Wisdom [13.92421433941043]
We investigate a novel task in the field of fake news debunking, which involves detecting sentence-level misinformation.
Inspired by the Multiple Instance Learning (MIL) approach, we propose a model called Weakly Supervised Detection of Misinforming Sentences (WSDMS).
We evaluate WSDMS on three real-world benchmarks and demonstrate that it outperforms existing state-of-the-art baselines in debunking fake news at both the sentence and article levels.
arXiv Detail & Related papers (2023-10-25T12:06:55Z)
- Prompt-and-Align: Prompt-Based Social Alignment for Few-Shot Fake News Detection [50.07850264495737]
"Prompt-and-Align" (P&A) is a novel prompt-based paradigm for few-shot fake news detection.
We show that P&A sets a new state-of-the-art for few-shot fake news detection by significant margins.
arXiv Detail & Related papers (2023-09-28T13:19:43Z)
- ManiTweet: A New Benchmark for Identifying Manipulation of News on Social Media [74.93847489218008]
We present a novel task, identifying manipulation of news on social media, which aims to detect manipulation in social media posts and identify manipulated or inserted information.
To study this task, we have proposed a data collection schema and curated a dataset called ManiTweet, consisting of 3.6K pairs of tweets and corresponding articles.
Our analysis demonstrates that this task is highly challenging, with large language models (LLMs) yielding unsatisfactory performance.
arXiv Detail & Related papers (2023-05-23T16:40:07Z)
- TieFake: Title-Text Similarity and Emotion-Aware Fake News Detection [15.386007761649251]
We propose a novel Title-Text similarity and emotion-aware Fake news detection (TieFake) method by jointly modeling the multi-modal context information and the author sentiment.
Specifically, we employ BERT and ResNeSt to learn the representations for text and images, and utilize a publisher emotion extractor to capture the author's subjective emotion in the news content.
arXiv Detail & Related papers (2023-04-19T04:47:36Z)
- Multiverse: Multilingual Evidence for Fake News Detection [71.51905606492376]
Multiverse is a new feature based on multilingual evidence that can be used for fake news detection.
The hypothesis that cross-lingual evidence can serve as a feature for fake news detection is confirmed.
arXiv Detail & Related papers (2022-11-25T18:24:17Z)
- Faking Fake News for Real Fake News Detection: Propaganda-loaded Training Data Generation [105.20743048379387]
We propose a novel framework for generating training examples informed by the known styles and strategies of human-authored propaganda.
Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles.
Our experimental results show that fake news detectors trained on PropaNews are better at detecting human-written disinformation by 3.62 - 7.69% F1 score on two public datasets.
arXiv Detail & Related papers (2022-03-10T14:24:19Z)
- A Study of Fake News Reading and Annotating in Social Media Context [1.0499611180329804]
We present an eye-tracking study in which we let 44 lay participants casually read through a social media feed containing posts with news articles, some of which were fake.
In a second run, we asked the participants to decide on the truthfulness of these articles.
We also describe a follow-up qualitative study with a similar scenario but this time with 7 expert fake news annotators.
arXiv Detail & Related papers (2021-09-26T08:11:17Z)
- Stance Detection with BERT Embeddings for Credibility Analysis of Information on Social Media [1.7616042687330642]
We propose a model for detecting fake news using stance as one of the features along with the content of the article.
Our work interprets the content with automatic feature extraction and the relevance of the text pieces.
The experiment conducted on the real-world dataset indicates that our model outperforms the previous work and enables fake news detection with an accuracy of 95.32%.
arXiv Detail & Related papers (2021-05-21T10:46:43Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.