Personal Moderation Configurations on Facebook: Exploring the Role of
FoMO, Social Media Addiction, Norms, and Platform Trust
- URL: http://arxiv.org/abs/2401.05603v2
- Date: Sun, 3 Mar 2024 21:42:24 GMT
- Title: Personal Moderation Configurations on Facebook: Exploring the Role of
FoMO, Social Media Addiction, Norms, and Platform Trust
- Authors: Shagun Jhaver
- Abstract summary: Fear of missing out (FoMO) and social media addiction make Facebook users more vulnerable to content-based harms.
Trust in Facebook's moderation systems also significantly affects users' engagement with personal moderation.
- Score: 1.7223564681760166
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Personal moderation tools on social media platforms let users control their
news feeds by configuring acceptable toxicity thresholds for their feed content
or muting inappropriate accounts. This research examines how four critical
psychosocial factors - fear of missing out (FoMO), social media addiction,
subjective norms, and trust in moderation systems - shape Facebook users'
configuration of these tools. Findings from a nationally representative sample
of 1,061 participants show that FoMO and social media addiction make Facebook
users more vulnerable to content-based harms by reducing their likelihood of
adopting personal moderation tools to hide inappropriate posts. In contrast,
descriptive and injunctive norms positively influence the use of these tools.
Further, trust in Facebook's moderation systems also significantly affects
users' engagement with personal moderation. This analysis highlights
qualitatively different pathways through which FoMO and social media addiction
make affected users disproportionately unsafe and offers design and policy
solutions to address this challenge.
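The abstract does not spell out the statistical model, but survey findings of this kind are typically estimated with a regression of tool adoption on the psychosocial scales. Below is a minimal, hypothetical sketch of such an analysis; the data, variable names, and effect sizes are all invented for illustration, simulated so that FoMO and addiction reduce adoption while norms and trust increase it, matching the reported directions.
```python
# Hypothetical sketch only: the paper's actual model specification is not
# given in this summary. Logistic regression of moderation-tool adoption
# on the four psychosocial factors, with fully simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "fomo": rng.normal(size=n),
    "addiction": rng.normal(size=n),
    "descriptive_norms": rng.normal(size=n),
    "injunctive_norms": rng.normal(size=n),
    "platform_trust": rng.normal(size=n),
})

# Simulate adoption so the signs match the reported findings: FoMO and
# addiction reduce adoption; norms and trust increase it. (Invented sizes.)
score = (-0.6 * df["fomo"] - 0.5 * df["addiction"]
         + 0.4 * df["descriptive_norms"] + 0.4 * df["injunctive_norms"]
         + 0.3 * df["platform_trust"])
df["uses_moderation_tool"] = rng.binomial(1, 1 / (1 + np.exp(-score)))

model = smf.logit(
    "uses_moderation_tool ~ fomo + addiction + descriptive_norms"
    " + injunctive_norms + platform_trust",
    data=df,
).fit(disp=False)
print(model.params.round(2))  # recovered signs should match the simulation
```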
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- How Decentralization Affects User Agency on Social Platforms [0.0]
We investigate the promise of decentralization as an alternative to walled-garden platforms.
We describe user-driven content moderation through blocks as an expression of agency on Bluesky, a decentralized social platform.
arXiv Detail & Related papers (2024-06-13T12:15:15Z)
- Personalized Content Moderation and Emergent Outcomes [0.0]
Social media platforms have implemented automated content moderation tools to preserve community norms and mitigate online hate and harassment.
Recently, these platforms have started to offer Personalized Content Moderation (PCM), granting users control over moderation settings or aligning algorithms with individual user preferences.
Our study reveals that PCM leads to asymmetric information loss (AIL), potentially impeding the development of a shared understanding among users.
arXiv Detail & Related papers (2024-05-15T18:07:36Z)
- A User-Driven Framework for Regulating and Auditing Social Media [94.70018274127231]
We propose that algorithmic filtering should be regulated with respect to a flexible, user-driven baseline.
We require that the feeds a platform filters contain informational content "similar" to their respective baseline feeds.
We present an auditing procedure that checks whether a platform honors this requirement.
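As a rough, hypothetical sketch of the audit idea just described: a filtered feed passes if it stays "similar" to its baseline feed. The paper defines its own notion of informational similarity; here it is approximated with bag-of-words cosine similarity, and the threshold tau is an invented audit parameter.
```python
# Toy approximation of the audit check; not the paper's actual procedure.
from collections import Counter
import math

def bow_cosine(feed_a: list[str], feed_b: list[str]) -> float:
    """Cosine similarity between two feeds, each a list of post texts."""
    ca = Counter(w for post in feed_a for w in post.lower().split())
    cb = Counter(w for post in feed_b for w in post.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca.keys() & cb.keys())
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else 0.0

def audit(filtered_feed, baseline_feed, tau=0.5):
    """Pass if the platform's filtered feed stays 'similar' to the baseline."""
    return bow_cosine(filtered_feed, baseline_feed) >= tau

baseline = ["city council votes on new housing plan",
            "storm warning issued for coast"]
filtered = ["council approves housing plan",
            "coastal storm warning issued"]
print(audit(filtered, baseline))  # True under this toy similarity measure
```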
arXiv Detail & Related papers (2023-04-20T17:53:34Z)
- Trust and Believe -- Should We? Evaluating the Trustworthiness of Twitter Users [5.695742189917657]
Fake news on social media is a major problem with far-reaching negative repercussions on both individuals and society.
In this work, we create a model that we hope will instill trust in social network communities.
Our model analyses the behaviour of 50,000 politicians on Twitter and assigns an influence score for each evaluated user.
arXiv Detail & Related papers (2022-10-27T06:57:19Z)
- SOK: Seeing and Believing: Evaluating the Trustworthiness of Twitter Users [4.609388510200741]
Currently, there is no automated way of determining which news or users are credible and which are not.
In this work, we created a model which analysed the behaviour of 50,000 politicians on Twitter.
We classified the political Twitter users as either trusted or untrusted using random forest, multilayer perceptron, and support vector machine.
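The summary names three standard classifiers; a minimal scikit-learn sketch of that setup follows. The behavioural features and trusted/untrusted labels here are random stand-ins, since the paper's actual Twitter-derived features are not described in this summary.
```python
# Illustrative sketch only: X and y are synthetic stand-ins for the paper's
# behavioural features and trust labels. Model choices mirror the three
# classifiers named in the abstract summary.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 10))                 # stand-in behavioural features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # stand-in trusted/untrusted labels

for name, clf in [
    ("random forest", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("multilayer perceptron", MLPClassifier(hidden_layer_sizes=(32,),
                                            max_iter=500, random_state=0)),
    ("support vector machine", SVC(kernel="rbf")),
]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```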
arXiv Detail & Related papers (2021-07-16T17:39:32Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the moderation pursued by Twitter significantly reduces questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content that may reflect a dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
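The summary does not say which principled approach the authors use. One standard technique for selection bias of this kind is inverse propensity scoring (IPS), sketched below with invented variables: the probability of a user being observed is modeled from their attributes, and estimates over observed users are reweighted by its inverse.
```python
# IPS is a standard selection-bias correction, shown here as a generic
# illustration -- not necessarily the paper's method. All data is simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
attrs = rng.normal(size=(n, 3))                              # stand-in user attributes
observed = rng.binomial(1, 1 / (1 + np.exp(-attrs[:, 0])))   # biased observation
shares_fake = rng.binomial(1, 0.2 + 0.1 * (attrs[:, 1] > 0)) # stand-in sharing behavior

# 1. Model the probability that a user is observed (the selection mechanism).
prop = LogisticRegression().fit(attrs, observed).predict_proba(attrs)[:, 1]

# 2. Estimate the sharing rate over observed users, reweighted by 1/propensity
#    so over-observed users do not dominate the estimate.
mask = observed == 1
naive = shares_fake[mask].mean()
ips = np.average(shares_fake[mask], weights=1 / prop[mask])
print(f"naive estimate: {naive:.3f}, IPS-corrected: {ips:.3f}")
```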
arXiv Detail & Related papers (2020-10-20T19:37:04Z)
- Assessing the Severity of Health States based on Social Media Posts [62.52087340582502]
We propose a multiview learning framework that models both textual content and contextual information to assess the severity of a user's health state.
The diverse NLU views demonstrate effectiveness on both tasks, as well as on individual diseases, in assessing a user's health.
arXiv Detail & Related papers (2020-09-21T03:45:14Z)
- Quantifying the Vulnerabilities of the Online Public Square to Adversarial Manipulation Tactics [43.98568073610101]
We use a social media model to quantify the impacts of several adversarial manipulation tactics on the quality of content.
We find that the presence of influential accounts, a hallmark of social media, exacerbates the vulnerabilities of online communities to manipulation.
These insights suggest countermeasures that platforms could employ to increase the resilience of social media users to manipulation.
arXiv Detail & Related papers (2019-07-13T21:12:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.