The Design and Operation of Digital Platform under Sociotechnical Folk
Theories
- URL: http://arxiv.org/abs/2305.03291v1
- Date: Fri, 5 May 2023 05:47:37 GMT
- Title: The Design and Operation of Digital Platform under Sociotechnical Folk
Theories
- Authors: Jordan W. Suchow, Lea Burton, Vahid Ashrafimoghari
- Abstract summary: We consider the problem of how a platform designer, owner, or operator can improve the design and operation of a digital platform.
We do so in the context of Reddit, a social media platform whose owners and administrators make extensive use of shadowbanning.
We develop a computational cognitive model of users' folk theories about the antecedents and consequences of shadowbanning.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of how a platform designer, owner, or operator can
improve the design and operation of a digital platform by leveraging a
computational cognitive model that represents users' folk theories about a
platform as a sociotechnical system. We do so in the context of Reddit, a
social media platform whose owners and administrators make extensive use of
shadowbanning, a non-transparent content moderation mechanism that filters a
user's posts and comments so that they cannot be seen by fellow community
members or the public. After demonstrating that the design and operation of
Reddit have led to an abundance of spurious suspicions of shadowbanning in cases
where the mechanism was not in fact invoked, we develop a computational cognitive
model of users' folk theories about the antecedents and consequences of
shadowbanning that predicts when users will attribute their on-platform
observations to a shadowban. The model is then used to evaluate the capacity of
interventions available to a platform designer, owner, and operator to reduce
the incidence of these false suspicions. We conclude by considering the
implications of this approach for the design and operation of digital platforms
at large.
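The attribution problem the abstract describes can be illustrated with a minimal Bayesian sketch. This is a hypothetical toy model, not the paper's actual cognitive model: it assumes a user who sees no engagement on k consecutive posts weighs two explanations, a shadowban (posts invisible to others) versus ordinary low engagement, and applies Bayes' rule. The function name, prior, and likelihood values are all illustrative assumptions.

```python
def shadowban_posterior(k, prior=0.01, p_silence_normal=0.3):
    """Posterior P(shadowban | k consecutive posts with no engagement).

    prior            -- assumed base rate of shadowbans (illustrative)
    p_silence_normal -- assumed chance a non-banned post gets no engagement
    A shadowbanned post is assumed to receive no engagement with certainty.
    """
    like_ban = 1.0                       # silence is certain under a ban
    like_normal = p_silence_normal ** k  # silence by chance otherwise
    evidence = prior * like_ban + (1 - prior) * like_normal
    return prior * like_ban / evidence

# As silent posts accumulate, the posterior rises toward certainty,
# which is one way spurious shadowban suspicions can arise even when
# the true base rate of shadowbanning is low.
for k in (1, 3, 6):
    print(k, round(shadowban_posterior(k), 3))
```

Under these toy assumptions, a handful of silent posts is enough to push the posterior above 0.9, which mirrors the abstract's point that platform design choices (e.g., opaque filtering) can generate false suspicions that interventions might reduce.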
Related papers
- How Decentralization Affects User Agency on Social Platforms
We investigate how decentralization might offer a promising alternative to walled-garden platforms.
We describe user-driven content moderation through blocks as an expression of agency on Bluesky, a decentralized social platform.
arXiv Detail & Related papers (2024-06-13T12:15:15Z)
- Designing a Dashboard for Transparency and Control of Conversational AI
We present an end-to-end prototype, connecting interpretability techniques with user experience design.
Our results suggest that users appreciate seeing internal states, which helped them expose biased behavior and increased their sense of control.
arXiv Detail & Related papers (2024-06-12T05:20:16Z)
- Content Moderation Justice and Fairness on Social Media: Comparisons Across Different Contexts and Platforms
We conduct an online experiment on 200 American social media users of Reddit and Twitter.
We find that retributive moderation delivers higher justice and fairness for commercially moderated platforms in cases of illegal violations.
We discuss the opportunities for platform policymaking to improve moderation system design.
arXiv Detail & Related papers (2024-03-09T22:50:06Z)
- User Strategization and Trustworthy Algorithms
We show that user strategization can actually help platforms in the short term.
We then show that it corrupts platforms' data and ultimately hurts their ability to make counterfactual decisions.
arXiv Detail & Related papers (2023-12-29T16:09:42Z)
- Rethinking People Analytics With Inverse Transparency by Design
We propose a new design approach for workforce analytics we refer to as inverse transparency by design.
We find that architectural changes are made without inhibiting core functionality.
We conclude that inverse transparency by design is a promising approach to realize accepted and responsible people analytics.
arXiv Detail & Related papers (2023-05-16T21:37:35Z)
- A User-Driven Framework for Regulating and Auditing Social Media
We propose that algorithmic filtering should be regulated with respect to a flexible, user-driven baseline.
We require that the feeds a platform filters contain informational content "similar" to that of their respective baseline feeds.
We present an auditing procedure that checks whether a platform honors this requirement.
arXiv Detail & Related papers (2023-04-20T17:53:34Z)
- Explainable Abuse Detection as Intent Classification and Slot Filling
We introduce the concept of policy-aware abuse detection, abandoning the unrealistic expectation that systems can reliably learn which phenomena constitute abuse from inspecting the data alone.
We show how architectures for intent classification and slot filling can be used for abuse detection, while providing a rationale for model decisions.
arXiv Detail & Related papers (2022-10-06T03:33:30Z)
- Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z)
- The Challenge of Understanding What Users Want: Inconsistent Preferences and Engagement Optimization
We develop a model of media consumption where users have inconsistent preferences.
We show how our model of users' preference inconsistencies produces phenomena that are familiar from everyday experience.
arXiv Detail & Related papers (2022-02-23T20:45:31Z)
- Trustworthy Transparency by Design
We propose a transparency framework for software design, incorporating research on user trust and experience.
Our framework enables developing software that incorporates transparency in its design.
arXiv Detail & Related papers (2021-03-19T12:34:01Z)
- Setting the Record Straighter on Shadow Banning
Shadow banning occurs when an online social network limits the visibility of some of its users without their awareness.
Twitter declares that it does not use such a practice, sometimes citing "bugs" to justify restrictions on some users.
This paper is the first to address the plausibility of shadow banning on a major online platform, adopting both a statistical and a graph-topological approach.
arXiv Detail & Related papers (2020-12-09T15:17:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.