Diverse Misinformation: Impacts of Human Biases on Detection of
Deepfakes on Networks
- URL: http://arxiv.org/abs/2210.10026v3
- Date: Sat, 13 Jan 2024 18:26:49 GMT
- Title: Diverse Misinformation: Impacts of Human Biases on Detection of
Deepfakes on Networks
- Authors: Juniper Lovato, Laurent Hébert-Dufresne, Jonathan St-Onge, Randall
Harp, Gabriela Salazar Lopez, Sean P. Rogers, Ijaz Ul Haq and Jeremiah
Onaolapo
- Abstract summary: We call "diverse misinformation" the complex relationships between human biases and demographics represented in misinformation.
We find that accuracy varies by demographics, and participants are generally better at classifying videos that match them.
Our model suggests that diverse contacts might provide "herd correction" where friends can protect each other.
- Score: 1.5910150494847917
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social media platforms often assume that users can self-correct against
misinformation. However, social media users are not equally susceptible to all
misinformation as their biases influence what types of misinformation might
thrive and who might be at risk. We call "diverse misinformation" the complex
relationships between human biases and demographics represented in
misinformation. To investigate how users' biases impact their susceptibility
and their ability to correct each other, we analyze classification of deepfakes
as a type of diverse misinformation. We chose deepfakes as a case study for
three reasons: 1) their classification as misinformation is more objective; 2)
we can control the demographics of the personas presented; 3) deepfakes are a
real-world concern with associated harms that must be better understood. Our
paper presents an observational survey (N=2,016) where participants are exposed
to videos and asked questions about their attributes, not knowing some might be
deepfakes. Our analysis investigates the extent to which different users are
duped and which perceived demographics of deepfake personas tend to mislead. We
find that accuracy varies by demographics, and participants are generally
better at classifying videos that match them. We extrapolate from these results
to understand the potential population-level impacts of these biases using a
mathematical model of the interplay between diverse misinformation and crowd
correction. Our model suggests that diverse contacts might provide "herd
correction" where friends can protect each other. Altogether, human biases and
the attributes of misinformation matter greatly, but having a diverse social
group may help reduce susceptibility to misinformation.
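The population-level model is only described qualitatively above. As a minimal toy sketch of the general idea (not the paper's actual mathematical model), the simulation below assumes agents on a network with a binary demographic, a higher deepfake-detection accuracy when the video's persona matches the viewer's demographic, and a simple "herd correction" rule in which a fooled agent is rescued if a majority of its contacts classified the video correctly. All parameter values and rules here are hypothetical.

```python
import random

# Toy sketch of "herd correction" (hypothetical parameters, not the
# paper's model). Each agent has a binary demographic; the deepfake
# presents one persona. Agents detect matched-demographic deepfakes
# more accurately, and a fooled agent is corrected when a majority of
# its sampled contacts classified the video correctly.

random.seed(0)

N = 2000                           # number of agents
K = 8                              # contacts sampled per agent
P_MATCH, P_MISMATCH = 0.6, 0.45    # hypothetical detection accuracies

def simulate(diverse_contacts: bool) -> float:
    demos = [random.randint(0, 1) for _ in range(N)]
    persona = 0  # demographic presented by the deepfake

    # Step 1: independent classification, biased by demographic match.
    correct = [random.random() < (P_MATCH if d == persona else P_MISMATCH)
               for d in demos]

    # Step 2: crowd correction. Contacts are drawn from the whole
    # population (diverse) or only from the agent's own group.
    corrected = list(correct)
    for i in range(N):
        pool = list(range(N)) if diverse_contacts else \
               [j for j in range(N) if demos[j] == demos[i]]
        contacts = random.sample(pool, K)
        if sum(correct[j] for j in contacts) > K // 2:
            corrected[i] = True  # a correct majority protects the agent
    return sum(corrected) / N

print(f"homophilous contacts: {simulate(False):.2f} accurate")
print(f"diverse contacts:     {simulate(True):.2f} accurate")
```

In this sketch, correction only ever flips fooled agents to correct, so diverse contacts help the group whose demographic mismatches the persona by exposing them to more accurate classifiers; the paper's model treats the interplay between diverse misinformation and crowd correction in full generality.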
Related papers
- Correcting misinformation on social media with a large language model [14.69780455372507]
Real-world misinformation, which is often multimodal, can mislead through diverse tactics such as conflating correlation with causation.
Such misinformation is severely understudied, challenging to address, and harms various social domains, particularly on social media.
We propose MUSE, an LLM augmented with access to and credibility evaluation of up-to-date information (a generic sketch of this retrieve-and-assess pattern appears after this list).
arXiv Detail & Related papers (2024-03-17T10:59:09Z)
- Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs [67.51906565969227]
We study the unintended side-effects of persona assignment on the ability of LLMs to perform basic reasoning tasks.
Our study covers 24 reasoning datasets, 4 LLMs, and 19 diverse personas (e.g. an Asian person) spanning 5 socio-demographic groups.
arXiv Detail & Related papers (2023-11-08T18:52:17Z)
- Unveiling the Hidden Agenda: Biases in News Reporting and Consumption [59.55900146668931]
We build a six-year dataset on the Italian vaccine debate and adopt a Bayesian latent space model to identify narrative and selection biases.
We found a nonlinear relationship between biases and engagement, with higher engagement for extreme positions.
Analysis of news consumption on Twitter reveals common audiences among news outlets with similar ideological positions.
arXiv Detail & Related papers (2023-01-14T18:58:42Z)
- Folk Models of Misinformation on Social Media [10.667165962654996]
We identify at least five folk models that conceptualize misinformation as either: political (counter)argumentation, out-of-context narratives, inherently fallacious information, external propaganda, or simply entertainment.
We use the rich conceptualizations embodied in these folk models to uncover how social media users minimize adverse reactions to misinformation encounters in their everyday lives.
arXiv Detail & Related papers (2022-07-26T00:40:26Z)
- Adherence to Misinformation on Social Media Through Socio-Cognitive and Group-Based Processes [79.79659145328856]
We argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation.
We make the case that polarization and misinformation adherence are closely tied.
arXiv Detail & Related papers (2022-06-30T12:34:24Z)
- Characterizing User Susceptibility to COVID-19 Misinformation on Twitter [40.0762273487125]
This study attempts to answer who constitutes the population vulnerable to online misinformation during the pandemic.
We distinguish different types of users, ranging from social bots to humans with various levels of engagement with COVID-related misinformation.
We then identify users' online features and situational predictors that correlate with their susceptibility to COVID-19 misinformation.
arXiv Detail & Related papers (2021-09-20T13:31:15Z)
- Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z)
- Information Consumption and Social Response in a Segregated Environment: the Case of Gab [74.5095691235917]
This work provides a characterization of the interaction patterns within Gab around the COVID-19 topic.
We find that there are no strong statistical differences in the social response to questionable and reliable content.
Our results provide insights toward understanding coordinated inauthentic behavior and the early warning of information operations.
arXiv Detail & Related papers (2020-06-03T11:34:25Z)
- Contrastive Examples for Addressing the Tyranny of the Majority [83.93825214500131]
We propose to create a balanced training dataset, consisting of the original dataset plus new data points in which the group memberships are intervened.
We show that current generative adversarial networks are a powerful tool for learning these data points, called contrastive examples.
arXiv Detail & Related papers (2020-04-14T14:06:44Z)
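The MUSE entry above describes an LLM augmented with retrieval and credibility evaluation of up-to-date information. The fragment below is a generic, hypothetical sketch of that retrieve-then-assess-then-correct pattern, not MUSE's actual pipeline; the `retrieve` and `draft_correction` helpers are stubs invented for illustration, with `draft_correction` standing in for an LLM call.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical sketch of a retrieve/assess/correct loop in the spirit
# of the MUSE entry above; none of these helpers come from the paper.

@dataclass
class Evidence:
    source: str
    text: str
    credibility: float  # 0..1, e.g. from a source-reliability list

def retrieve(claim: str) -> List[Evidence]:
    """Stub: fetch up-to-date documents relevant to the claim."""
    return [Evidence("example.org", "Fact-check text about the claim.", 0.9)]

def draft_correction(claim: str, evidence: List[Evidence]) -> str:
    """Stub: an LLM call would condition on claim + evidence here."""
    cited = "; ".join(e.source for e in evidence)
    return f"This claim is disputed by: {cited}."

def correct(claim: str, min_credibility: float = 0.7) -> Optional[str]:
    # Keep only evidence from sufficiently credible sources before
    # drafting a correction; abstain if nothing credible is retrieved.
    evidence = [e for e in retrieve(claim) if e.credibility >= min_credibility]
    return draft_correction(claim, evidence) if evidence else None

print(correct("Ice cream sales cause drowning."))
```

Filtering evidence by a credibility threshold before drafting, and abstaining when nothing credible is found, is one plausible way to ground corrections; the actual design choices in MUSE may differ.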