Informing Users: Effects of Notification Properties and User
Characteristics on Sharing Attitudes
- URL: http://arxiv.org/abs/2207.02292v1
- Date: Tue, 5 Jul 2022 20:39:02 GMT
- Authors: Yefim Shulman, Agnieszka Kitkowska, Joachim Meyer
- Abstract summary: Information sharing on social networks is ubiquitous, intuitive, and occasionally accidental.
People may be unaware of the potential negative consequences of disclosures, such as reputational damages.
We investigate how to aid informed sharing decisions and associate them with the potential outcomes via notifications.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Information sharing on social networks is ubiquitous, intuitive, and
occasionally accidental. However, people may be unaware of the potential
negative consequences of disclosures, such as reputational damages. Yet, people
use social networks to disclose information about themselves or others, advised
only by their own experiences and the context-invariant informed consent
mechanism. In two online experiments (N=515 and N=765), we investigated how to
aid informed sharing decisions and associate them with the potential outcomes
via notifications. Based on the measurements of sharing attitudes, our results
showed that the effectiveness of informing the users via notifications may
depend on the timing, content, and layout of the notifications, as well as on
the users' curiosity and rational cognitive style, motivating information
processing. Furthermore, positive emotions may result in disregard of important
information. We discuss the implications for user privacy and
self-presentation. We provide recommendations on privacy-supporting system
design and suggest directions for further research.
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- "I don't trust them": Exploring Perceptions of Fact-checking Entities for Flagging Online Misinformation [3.6754294738197264]
We conducted an online study with 655 US participants to explore user perceptions of eight categories of fact-checking entities across two misinformation topics.
Our results hint at the need for further exploring fact-checking entities that may be perceived as neutral, as well as the potential for incorporating multiple assessments in such labels.
arXiv Detail & Related papers (2024-10-01T17:01:09Z)
- Banal Deception Human-AI Ecosystems: A Study of People's Perceptions of LLM-generated Deceptive Behaviour [11.285775969393566]
Large language models (LLMs) can provide users with false, inaccurate, or misleading information.
We investigate people's perceptions of ChatGPT-generated deceptive behaviour.
arXiv Detail & Related papers (2024-06-12T16:36:06Z)
- Rethinking the Evaluation of Dialogue Systems: Effects of User Feedback on Crowdworkers and LLMs [57.16442740983528]
In ad-hoc retrieval, evaluation relies heavily on user actions, including implicit feedback.
The role of user feedback in annotators' assessments of turns in a conversation has been little studied.
We focus on how the evaluation of task-oriented dialogue systems (TDSs) is affected by considering user feedback, explicit or implicit, as provided through the follow-up utterance of the turn being evaluated.
arXiv Detail & Related papers (2024-04-19T16:45:50Z)
- Adherence to Misinformation on Social Media Through Socio-Cognitive and Group-Based Processes [79.79659145328856]
We argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation.
We make the case that polarization and misinformation adherence are closely tied.
arXiv Detail & Related papers (2022-06-30T12:34:24Z)
- Characterizing User Susceptibility to COVID-19 Misinformation on Twitter [40.0762273487125]
This study attempts to answer who constitutes the population vulnerable to online misinformation during the pandemic.
We distinguish different types of users, ranging from social bots to humans with various levels of engagement with COVID-related misinformation.
We then identify users' online features and situational predictors that correlate with their susceptibility to COVID-19 misinformation.
arXiv Detail & Related papers (2021-09-20T13:31:15Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in users' tendency to engage with both types of content, with a slight preference for questionable content, which may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z)
- Adapting Security Warnings to Counter Online Disinformation [6.592035021489205]
We adapt methods and results from the information security warning literature to design effective disinformation warnings.
We found that users routinely ignore contextual warnings, but users notice interstitial warnings.
We found that a warning's design could effectively inform users or convey a risk of harm.
arXiv Detail & Related papers (2020-08-25T01:10:57Z)
- Information Consumption and Social Response in a Segregated Environment: the Case of Gab [74.5095691235917]
This work provides a characterization of the interaction patterns within Gab around the COVID-19 topic.
We find that there are no strong statistical differences in the social response to questionable and reliable content.
Our results provide insights toward understanding coordinated inauthentic behavior and the early warning of information operations.
arXiv Detail & Related papers (2020-06-03T11:34:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences.