FakeNewsLab: Experimental Study on Biases and Pitfalls Preventing us from Distinguishing True from False News
- URL: http://arxiv.org/abs/2110.11729v2
- Date: Sat, 8 Oct 2022 09:51:04 GMT
- Title: FakeNewsLab: Experimental Study on Biases and Pitfalls Preventing us from Distinguishing True from False News
- Authors: Giancarlo Ruffo, Alfonso Semeraro
- Abstract summary: This work highlights a series of pitfalls that can influence human annotators when building false news datasets.
It also challenges the common rationale behind AI tools that suggest users read the full article before re-sharing.
- Score: 0.2741266294612776
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The posting and spreading of misinformation on social media is ignited by personal decisions about the truthfulness of news, decisions that can cause wide and deep cascades at large scale within minutes. When individuals are exposed to information, they usually take a few seconds to decide whether the content (or the source) is reliable, and possibly to share it. Although the opportunity to verify a rumour is often just one click away, many users fail to make a correct evaluation. We studied this phenomenon with a web-based questionnaire completed by 7,298 volunteers, in which participants were asked to mark 20 news items as true or false. Interestingly, false news is correctly identified more frequently than true news, but, surprisingly, showing the full article instead of just the title does not increase overall accuracy. Displaying the original source of the news may even mislead users in some cases, while a genuine wisdom of the crowd can positively assist individuals' ability to classify items correctly. Finally, participants whose browsing activity suggests parallel fact-checking perform better and report being young adults. This work highlights a series of pitfalls that can influence human annotators when building false news datasets, which in turn fuel research on automated fake news detection; furthermore, these findings challenge the common rationale behind AI tools that suggest users read the full article before re-sharing.
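A side note on the wisdom-of-the-crowd finding: the mechanism at work is the classic Condorcet jury intuition, where independent annotators who are individually better than chance become collectively far more accurate under majority voting. The sketch below is not from the paper, and the 60% individual accuracy is an illustrative assumption.

```python
# Minimal Monte Carlo sketch of majority voting among independent annotators.
# Not from the paper; the 60% individual accuracy is an illustrative assumption.
import random

def majority_vote_accuracy(p_individual: float, n_voters: int, trials: int = 10_000) -> float:
    """Estimate how often a simple-majority vote labels an item correctly."""
    correct = 0
    for _ in range(trials):
        votes = sum(random.random() < p_individual for _ in range(n_voters))
        if votes > n_voters // 2:  # strict majority; n_voters is kept odd
            correct += 1
    return correct / trials

if __name__ == "__main__":
    random.seed(42)
    for n in (1, 5, 21, 101):
        print(f"{n:>3} annotators -> {majority_vote_accuracy(0.60, n):.3f}")
```

Accuracy climbs steeply with crowd size as long as errors are independent; correlated errors (for example, a shared bias toward believing certain sources) erode the effect, which is why the paper stresses a "genuine" wisdom of the crowd.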
Related papers
- Adapting Fake News Detection to the Era of Large Language Models [48.5847914481222]
We study the interplay between machine-paraphrased real news, machine-generated fake news, human-written fake news, and human-written real news.
Our experiments reveal an interesting pattern that detectors trained exclusively on human-written articles can indeed perform well at detecting machine-generated fake news, but not vice versa.
arXiv Detail & Related papers (2023-11-02T08:39:45Z)
- Unveiling the Hidden Agenda: Biases in News Reporting and Consumption [59.55900146668931]
We build a six-year dataset on the Italian vaccine debate and adopt a Bayesian latent space model to identify narrative and selection biases.
We found a nonlinear relationship between biases and engagement, with higher engagement for extreme positions.
Analysis of news consumption on Twitter reveals common audiences among news outlets with similar ideological positions.
arXiv Detail & Related papers (2023-01-14T18:58:42Z)
- Who Shares Fake News? Uncovering Insights from Social Media Users' Post Histories [0.0]
We propose that social-media users' own post histories are an underused resource for studying fake-news sharing.
We identify cues that distinguish fake-news sharers, predict those most likely to share fake news, and identify promising constructs to build interventions.
arXiv Detail & Related papers (2022-03-20T14:26:20Z)
- Faking Fake News for Real Fake News Detection: Propaganda-loaded Training Data Generation [105.20743048379387]
We propose a novel framework for generating training examples informed by the known styles and strategies of human-authored propaganda.
Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles.
Our experimental results show that fake news detectors trained on PropaNews are 3.62% to 7.69% better in F1 score at detecting human-written disinformation on two public datasets.
arXiv Detail & Related papers (2022-03-10T14:24:19Z)
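The NLI-guided validity check in the PropaNews entry above can be sketched as follows. This covers only the filtering idea, not the authors' self-critical sequence training loop; the model name, threshold, and example texts are assumptions for illustration.

```python
# Sketch: keep a generated sentence only if the source text entails it,
# scored with an off-the-shelf NLI model. Model choice and threshold are
# assumptions, not the paper's setup.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"
tok = AutoTokenizer.from_pretrained(MODEL)
nli = AutoModelForSequenceClassification.from_pretrained(MODEL)

def entailment_prob(premise: str, hypothesis: str) -> float:
    """Probability that the premise entails the hypothesis."""
    inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = nli(**inputs).logits
    # roberta-large-mnli label order: 0 contradiction, 1 neutral, 2 entailment
    return logits.softmax(dim=-1)[0, 2].item()

source = "The city council approved the new budget on Tuesday."
generated = "The council approved the budget this week."
print(entailment_prob(source, generated) >= 0.7)  # keep the sentence?
```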
- A Study of Fake News Reading and Annotating in Social Media Context [1.0499611180329804]
We present an eye-tracking study in which we let 44 lay participants casually read through a social media feed containing posts with news articles, some of which were fake.
In a second run, we asked the participants to decide on the truthfulness of these articles.
We also describe a follow-up qualitative study with a similar scenario but this time with 7 expert fake news annotators.
arXiv Detail & Related papers (2021-09-26T08:11:17Z)
- Profiling Fake News Spreaders on Social Media through Psychological and Motivational Factors [26.942545715296983]
We study the characteristics and motivational factors of fake news spreaders on social media.
We then perform a series of experiments to determine whether fake news spreaders exhibit different characteristics than other users.
arXiv Detail & Related papers (2021-08-24T20:27:38Z)
- User Preference-aware Fake News Detection [61.86175081368782]
Existing fake news detection algorithms focus on mining news content for deceptive signals.
We propose a new framework, UPFD, which simultaneously captures various signals from user preferences by joint content and graph modeling.
arXiv Detail & Related papers (2021-04-25T21:19:24Z)
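As a rough illustration of the UPFD entry above, joint content-and-graph modeling can be approximated by fusing a news-content embedding with a pooled embedding of the users who shared the story. This is a one-layer stand-in, not the paper's implementation; all dimensions and layer choices are made-up assumptions.

```python
# Minimal sketch of fusing a content signal with a graph-derived signal
# (mean-pooled sharer embeddings), in the spirit of UPFD's joint modeling.
# Dimensions and architecture are illustrative assumptions.
import torch
import torch.nn as nn

class ContentGraphFusion(nn.Module):
    def __init__(self, content_dim: int = 768, user_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(content_dim + user_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # real vs. fake
        )

    def forward(self, news_emb: torch.Tensor, user_embs: torch.Tensor) -> torch.Tensor:
        # news_emb: (batch, content_dim); user_embs: (batch, n_users, user_dim)
        graph_signal = user_embs.mean(dim=1)  # crude stand-in for a GNN layer
        return self.fuse(torch.cat([news_emb, graph_signal], dim=-1))

model = ContentGraphFusion()
logits = model(torch.randn(4, 768), torch.randn(4, 10, 64))
print(logits.shape)  # torch.Size([4, 2])
```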
- Early Detection of Fake News by Utilizing the Credibility of News, Publishers, and Users Based on Weakly Supervised Learning [23.96230360460216]
We propose a novel Structure-aware Multi-head Attention Network (SMAN), which combines the news content, publishing, and reposting relations of publishers and users.
SMAN can detect fake news in 4 hours with an accuracy of over 91%, which is much faster than the state-of-the-art models.
arXiv Detail & Related papers (2020-12-08T05:53:33Z)
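The structure-aware attention in the SMAN entry above can be loosely sketched with standard multi-head attention, letting a news-content vector attend over publisher and reposting-user representations. The wiring and dimensions below are assumptions, not the paper's architecture.

```python
# Loose sketch: a news-content query attends over publisher/user vectors.
# Dimensions and the exact wiring are assumptions, not SMAN's architecture.
import torch
import torch.nn as nn

d_model = 64
attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)

news = torch.randn(8, 1, d_model)        # one content vector per article (query)
relations = torch.randn(8, 12, d_model)  # publisher + reposting-user vectors (keys/values)
fused, weights = attn(news, relations, relations)
print(fused.shape)  # torch.Size([8, 1, 64])
```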
- Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z)
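One standard way to alleviate the selection bias described in the entry above is inverse propensity weighting; the generic sketch below is not necessarily the paper's estimator, and the synthetic data is purely illustrative.

```python
# Generic inverse-propensity-weighting (IPW) sketch for selection bias.
# Synthetic data; not the paper's estimator or dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                             # user attributes
selected = rng.random(1000) < 1 / (1 + np.exp(-X[:, 0]))   # biased inclusion in the observed sample
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)      # outcome correlated with selection

propensity = LogisticRegression().fit(X, selected).predict_proba(X)[:, 1]
weights = 1.0 / propensity[selected]                       # up-weight under-represented users
print(f"naive rate: {y[selected].mean():.3f}  "
      f"IPW-corrected: {np.average(y[selected], weights=weights):.3f}  "
      f"true rate: {y.mean():.3f}")
```

The naive rate over the observed (sharing) sample is biased because inclusion and outcome share a common cause; re-weighting by inverse propensities pulls the estimate back toward the population rate.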
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss the interactions among user engagement, mental models, trust, and performance measures in the explanation process.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
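The entry above does not specify its four interpretable algorithms, but a typical interpretable setup of that kind pairs TF-IDF features with logistic regression, whose per-token weights double as the explanation shown to users. The toy data below is a placeholder, not the study's dataset.

```python
# Typical interpretable-detector recipe: TF-IDF + logistic regression,
# with coefficients serving as token-level explanations. Toy data only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["shocking cure doctors hide", "council approves annual budget",
         "miracle pill melts fat away", "court publishes full ruling"]
labels = [1, 0, 1, 0]  # 1 = fake, 0 = real (placeholder labels)

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

tokens = np.array(vec.get_feature_names_out())
top = np.argsort(clf.coef_[0])[-3:]  # strongest pushes toward the "fake" class
print("most fake-indicative tokens:", tokens[top])
```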
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences.