Adapting Security Warnings to Counter Online Disinformation
- URL: http://arxiv.org/abs/2008.10772v6
- Date: Mon, 16 Aug 2021 19:01:42 GMT
- Title: Adapting Security Warnings to Counter Online Disinformation
- Authors: Ben Kaiser, Jerry Wei, Eli Lucherini, Kevin Lee, J. Nathan Matias,
Jonathan Mayer
- Abstract summary: We adapt methods and results from the information security warning literature to design effective disinformation warnings.
We found that users routinely ignore contextual warnings, but users notice interstitial warnings.
We found that a warning's design could effectively inform users or convey a risk of harm.
- Score: 6.592035021489205
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Disinformation is proliferating on the internet, and platforms are responding
by attaching warnings to content. There is little evidence, however, that these
warnings help users identify or avoid disinformation. In this work, we adapt
methods and results from the information security warning literature in order
to design and evaluate effective disinformation warnings. In an initial
laboratory study, we used a simulated search task to examine contextual and
interstitial disinformation warning designs. We found that users routinely
ignore contextual warnings, but users notice interstitial warnings -- and
respond by seeking information from alternative sources. We then conducted a
follow-on crowdworker study with eight interstitial warning designs. We
confirmed a significant impact on user information-seeking behavior, and we
found that a warning's design could effectively inform users or convey a risk
of harm. We also found, however, that neither user comprehension nor fear of
harm moderated behavioral effects. Our work provides evidence that
disinformation warnings can -- when designed well -- help users identify and
avoid disinformation. We show a path forward for designing effective warnings,
and we contribute repeatable methods for evaluating behavioral effects. We also
surface a possible dilemma: disinformation warnings might be able to inform
users and guide behavior, but the behavioral effects might result from user
experience friction, not informed decision making.
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z) - Banal Deception Human-AI Ecosystems: A Study of People's Perceptions of LLM-generated Deceptive Behaviour [11.285775969393566]
Large language models (LLMs) can provide users with false, inaccurate, or misleading information.
We investigate people's perceptions of ChatGPT-generated deceptive behaviour.
arXiv Detail & Related papers (2024-06-12T16:36:06Z) - Modeling User Preferences via Brain-Computer Interfacing [54.3727087164445]
We use Brain-Computer Interfacing technology to infer users' preferences, their attentional correlates towards visual content, and their associations with affective experience.
We link these to relevant applications, such as information retrieval, personalized steering of generative models, and crowdsourcing population estimates of affective experiences.
arXiv Detail & Related papers (2024-05-15T20:41:46Z) - Measuring Strategization in Recommendation: Users Adapt Their Behavior to Shape Future Content [66.71102704873185]
We test for user strategization by conducting a lab experiment and survey.
We find strong evidence of strategization across outcome metrics, including participants' dwell time and use of "likes."
Our findings suggest that platforms cannot ignore the effect of their algorithms on user behavior.
arXiv Detail & Related papers (2024-05-09T07:36:08Z) - Poisoning Federated Recommender Systems with Fake Users [48.70867241987739]
Federated recommendation is a prominent use case within federated learning, yet it remains susceptible to various attacks.
We introduce a novel fake user based poisoning attack named PoisonFRS to promote the attacker-chosen targeted item.
Experiments on multiple real-world datasets demonstrate that PoisonFRS can effectively promote the attacker-chosen item to a large portion of genuine users.
arXiv Detail & Related papers (2024-02-18T16:34:12Z) - Can Sensitive Information Be Deleted From LLMs? Objectives for Defending
Against Extraction Attacks [73.53327403684676]
We propose an attack-and-defense framework for studying the task of deleting sensitive information directly from model weights.
We study direct edits to model weights because this approach should guarantee that particular deleted information is never extracted by future prompt attacks.
We show that even state-of-the-art model editing methods such as ROME struggle to truly delete factual information from models like GPT-J, as our whitebox and blackbox attacks can recover "deleted" information from an edited model 38% of the time.
arXiv Detail & Related papers (2023-09-29T17:12:43Z) - Hiding Visual Information via Obfuscating Adversarial Perturbations [47.315523613407244]
We propose an adversarial visual information hiding method to protect the visual privacy of data.
Specifically, the method generates obfuscating adversarial perturbations to obscure the visual information of the data.
Experimental results on the recognition and classification tasks demonstrate that the proposed method can effectively hide visual information.
arXiv Detail & Related papers (2022-09-30T08:23:26Z) - Informing Users: Effects of Notification Properties and User
Characteristics on Sharing Attitudes [5.371337604556311]
Information sharing on social networks is ubiquitous, intuitive, and occasionally accidental.
People may be unaware of the potential negative consequences of disclosures, such as reputational damages.
We investigate how to aid informed sharing decisions and associate them with the potential outcomes via notifications.
arXiv Detail & Related papers (2022-07-05T20:39:02Z) - Meaningful Context, a Red Flag, or Both? Users' Preferences for Enhanced
Misinformation Warnings on Twitter [6.748225062396441]
This study proposes user-tailored improvements in the soft moderation of misinformation on social media.
We ran an A/B evaluation of Twitter's original warning tags in a 337-participant usability study.
The majority of the participants preferred the enhancements as a nudge toward recognizing and avoiding misinformation.
arXiv Detail & Related papers (2022-05-02T22:47:49Z) - PROVENANCE: An Intermediary-Free Solution for Digital Content
Verification [3.82273842587301]
Provenance warns users when the content they are looking at may be misinformation or disinformation.
It is also designed to improve media literacy among its users.
Unlike similar plugins, which require human experts to provide evaluations, Provenance's state-of-the-art technology does not require human input.
arXiv Detail & Related papers (2021-11-16T21:42:23Z) - Characterizing User Susceptibility to COVID-19 Misinformation on Twitter [40.0762273487125]
This study attempts to answer who constitutes the population vulnerable to online misinformation during the pandemic.
We distinguish different types of users, ranging from social bots to humans with varying levels of engagement with COVID-related misinformation.
We then identify users' online features and situational predictors that correlate with their susceptibility to COVID-19 misinformation.
arXiv Detail & Related papers (2021-09-20T13:31:15Z)