Human Brains Can't Detect Fake News: A Neuro-Cognitive Study of Textual
Disinformation Susceptibility
- URL: http://arxiv.org/abs/2207.08376v1
- Date: Mon, 18 Jul 2022 04:31:07 GMT
- Title: Human Brains Can't Detect Fake News: A Neuro-Cognitive Study of Textual
Disinformation Susceptibility
- Authors: Cagri Arisoy, Anuradha Mandal and Nitesh Saxena
- Abstract summary: "Fake news" is arguably one of the most significant threats on the Internet.
Fake news attacks hinge on whether Internet users perceive a fake news article/snippet to be legitimate after reading it.
We investigate the neural underpinnings relevant to fake/real news through EEG.
- Score: 2.131521514043068
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The spread of digital disinformation (aka "fake news") is arguably one of the
most significant threats on the Internet, capable of causing individual and
societal harm at large scales. The susceptibility to fake news attacks hinges
on whether Internet users perceive a fake news article/snippet to be legitimate
after reading it. In this paper, we attempt to garner an in-depth understanding
of users' susceptibility to text-centric fake news attacks via a
neuro-cognitive methodology. We investigate the neural underpinnings relevant
to fake/real news through EEG. We run an experiment with human users to pursue
a thorough investigation of users' perception and cognitive processing of
fake/real news. We analyze the neural activity associated with the fake/real
news detection task for different categories of news articles. Our results show
there may be no statistically significant or automatically inferable
differences in the way the human brain processes the fake vs. real news, while
marked differences are observed when people are presented with (real/fake) news vs.
the resting state, and even between some different categories of fake news. This
neuro-cognitive finding may help to explain users' susceptibility to fake news
attacks, as also confirmed by the behavioral analysis. In other words, the
fake news articles may seem almost indistinguishable from the real news
articles in both behavioral and neural domains. Our work serves to dissect the
fundamental neural phenomena underlying fake news attacks and explains users'
susceptibility to these attacks through the limits of human biology. We believe
this could be a notable insight for researchers and practitioners, suggesting
that human detection of fake news might be ineffective, which may
also have an adverse impact on the design of automated detection approaches
that crucially rely upon human labeling of text articles for building training
models.
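To make the kind of contrast described above concrete, below is a minimal sketch of a per-channel fake-vs-real comparison on EEG band power, assuming pre-segmented epoch arrays (`fake_epochs` and `real_epochs` are hypothetical names) with an equal number of paired trials per participant. It is only an illustration in the spirit of the abstract's statistical analysis, not the authors' actual pipeline.
```python
import numpy as np
from scipy.signal import welch
from scipy.stats import ttest_rel

def band_power(epochs, fs=256.0, band=(8.0, 13.0)):
    """Mean spectral power in a frequency band.

    epochs: array of shape (n_trials, n_channels, n_samples).
    Returns an array of shape (n_trials, n_channels).
    """
    freqs, psd = welch(epochs, fs=fs, nperseg=min(256, epochs.shape[-1]), axis=-1)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[..., mask].mean(axis=-1)

def compare_conditions(fake_epochs, real_epochs, fs=256.0, alpha=0.05):
    """Paired t-test per channel on band power, Bonferroni corrected.

    Assumes both conditions have the same number of trials (paired design).
    """
    fake_pow = band_power(fake_epochs, fs)   # (n_trials, n_channels)
    real_pow = band_power(real_epochs, fs)
    t_vals, p_vals = ttest_rel(fake_pow, real_pow, axis=0)
    n_channels = fake_pow.shape[1]
    significant = p_vals < (alpha / n_channels)  # Bonferroni correction
    return t_vals, p_vals, significant

# Synthetic example: 40 trials, 14 channels, 2-second epochs at 256 Hz.
rng = np.random.default_rng(0)
fake_epochs = rng.normal(size=(40, 14, 512))
real_epochs = rng.normal(size=(40, 14, 512))
_, p, sig = compare_conditions(fake_epochs, real_epochs)
print("channels with a significant fake-vs-real difference:", int(sig.sum()))
```
With random data, as here, no channel should survive correction; the abstract's finding is essentially that the fake-vs-real contrast behaves similarly in real recordings, while news-vs-rest contrasts do show marked differences.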
Related papers
- Which linguistic cues make people fall for fake news? A comparison of
cognitive and affective processing [21.881235152669564]
Linguistic cues (e.g. adverbs, personal pronouns, positive emotion words, negative emotion words) are important characteristics of any text.
We compare the role of linguistic cues across both cognitive processing (related to careful thinking) and affective processing (related to unconscious automatic evaluations).
We find that users engage more in cognitive processing for longer fake news articles, while affective processing is more pronounced for fake news written in analytic words. (A toy cue-counting sketch appears after this list.)
arXiv Detail & Related papers (2023-12-02T11:06:14Z)
- Adapting Fake News Detection to the Era of Large Language Models [48.5847914481222]
We study the interplay between machine-(paraphrased) real news, machine-generated fake news, human-written fake news, and human-written real news.
Our experiments reveal an interesting pattern: detectors trained exclusively on human-written articles can indeed perform well at detecting machine-generated fake news, but not vice versa. (A minimal sketch of this cross-source evaluation appears after this list.)
arXiv Detail & Related papers (2023-11-02T08:39:45Z)
- Fake News Detection and Behavioral Analysis: Case of COVID-19 [0.22940141855172028]
"Infodemic" due to spread of fake news regarding the pandemic has been a global issue.
Readers could mistake fake news for real news, and consequently have less access to authentic information.
It is challenging to accurately identify fake news data in social media posts.
arXiv Detail & Related papers (2023-05-25T13:42:08Z)
- Faking Fake News for Real Fake News Detection: Propaganda-loaded Training Data Generation [105.20743048379387]
We propose a novel framework for generating training examples informed by the known styles and strategies of human-authored propaganda.
Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles.
Our experimental results show that fake news detectors trained on PropaNews are better at detecting human-written disinformation by 3.62 - 7.69% F1 score on two public datasets.
arXiv Detail & Related papers (2022-03-10T14:24:19Z)
- Combining Machine Learning with Knowledge Engineering to detect Fake News in Social Networks-a survey [0.7120858995754653]
In news media and social media, information spreads at high speed but without accuracy checks; hence, a detection mechanism should be able to classify news fast enough to tackle the dissemination of fake news.
In this paper we present what fake news is, the importance of fake news, its overall impact on different areas, different ways to detect fake news on social media, and existing detection algorithms that can help overcome the issue.
arXiv Detail & Related papers (2022-01-20T07:43:15Z)
- A Study of Fake News Reading and Annotating in Social Media Context [1.0499611180329804]
We present an eye-tracking study, in which we let 44 lay participants casually read through a social media feed containing posts with news articles, some of which were fake.
In a second run, we asked the participants to decide on the truthfulness of these articles.
We also describe a follow-up qualitative study with a similar scenario but this time with 7 expert fake news annotators.
arXiv Detail & Related papers (2021-09-26T08:11:17Z)
- Profiling Fake News Spreaders on Social Media through Psychological and Motivational Factors [26.942545715296983]
We study the characteristics and motivational factors of fake news spreaders on social media.
We then perform a series of experiments to determine whether fake news spreaders exhibit different characteristics from other users.
arXiv Detail & Related papers (2021-08-24T20:27:38Z)
- User Preference-aware Fake News Detection [61.86175081368782]
Existing fake news detection algorithms focus on mining news content for deceptive signals.
We propose a new framework, UPFD, which simultaneously captures various signals from user preferences by joint content and graph modeling.
arXiv Detail & Related papers (2021-04-25T21:19:24Z)
- How does Truth Evolve into Fake News? An Empirical Study of Fake News Evolution [55.27685924751459]
We present the Fake News Evolution dataset: a new dataset tracking the fake news evolution process.
Our dataset is composed of 950 paired samples, each of which consists of articles representing the truth, the fake news, and the evolved fake news.
We observe features during the evolution, namely disinformation techniques, text similarity, top-10 keywords, classification accuracy, parts of speech, and sentiment properties.
arXiv Detail & Related papers (2021-03-10T09:01:34Z)
- Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
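For the linguistic-cues paper listed first above, the toy sketch referenced in that entry is given below; the word lists and the `count_cues` helper are hypothetical illustrations of how such cue frequencies could be extracted, not the lexicons or pipeline used in that study.
```python
import re
from collections import Counter

# Hypothetical, abbreviated cue lexicons for illustration only; the actual
# study relies on established psycholinguistic dictionaries, not toy lists.
CUE_LEXICONS = {
    "adverbs": {"really", "very", "clearly", "obviously", "allegedly"},
    "personal_pronouns": {"i", "we", "you", "he", "she", "they"},
    "positive_emotion": {"good", "great", "love", "hope", "win"},
    "negative_emotion": {"bad", "fear", "hate", "lie", "threat"},
}

def count_cues(text: str) -> dict:
    """Return each cue category's frequency, normalized by article length."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return {category: sum(counts[w] for w in words) / total
            for category, words in CUE_LEXICONS.items()}

print(count_cues("They clearly love this great win, but we fear it is a lie."))
```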
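Similarly, the cross-source pattern described in the "Adapting Fake News Detection to the Era of Large Language Models" entry can be sketched as below; the TF-IDF plus logistic-regression detector and the tiny in-line corpora are stand-ins assumed only for illustration, not the detectors or datasets used in that paper.
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

# Tiny placeholder corpora (0 = real, 1 = fake); a real experiment would load
# labeled human-written and machine-generated news datasets instead.
human_texts = [
    "official report confirms the new transit budget was approved",
    "shocking miracle cure that doctors do not want you to know about",
]
human_labels = [0, 1]
machine_texts = [
    "breaking: leaked study proves the election results were fabricated",
    "city council approves annual budget after a routine public hearing",
]
machine_labels = [1, 0]

# Train exclusively on human-written articles...
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression(max_iter=1000))
detector.fit(human_texts, human_labels)

# ...then evaluate on machine-generated/paraphrased articles to probe transfer.
preds = detector.predict(machine_texts)
print("F1 on machine-generated news:", f1_score(machine_labels, preds))
```
The reverse direction (training on machine-generated text and testing on human-written fake news) is where that paper reports the weaker transfer.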