Regulating algorithmic filtering on social media
- URL: http://arxiv.org/abs/2006.09647v4
- Date: Tue, 2 Nov 2021 12:07:05 GMT
- Title: Regulating algorithmic filtering on social media
- Authors: Sarah H. Cen and Devavrat Shah
- Abstract summary: Social media platforms have the ability to influence users' perceptions and decisions, from their dining choices to their voting preferences.
Many have called for regulations on filtering algorithms, but designing and enforcing them remains challenging.
We find that there are conditions under which the regulation does not place a high performance cost on the platform.
- Score: 14.873907857806357
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: By filtering the content that users see, social media platforms have the
ability to influence users' perceptions and decisions, from their dining
choices to their voting preferences. This influence has drawn scrutiny, with
many calling for regulations on filtering algorithms, but designing and
enforcing regulations remains challenging. In this work, we examine three
questions. First, given a regulation, how would one design an audit to enforce
it? Second, does the audit impose a performance cost on the platform? Third,
how does the audit affect the content that the platform is incentivized to
filter? In response, we propose a method such that, given a regulation, an
auditor can test whether that regulation is met with only black-box access to
the filtering algorithm. We then turn to the platform's perspective. The
platform's goal is to maximize an objective function while meeting the regulation.
We find that there are conditions under which the regulation does not place a
high performance cost on the platform and, notably, that content diversity can
play a key role in aligning the interests of the platform and regulators.
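As a rough illustration of what black-box auditing of a filtering algorithm could look like, here is a minimal sketch in which an auditor queries the filter for two comparable users and tests whether their feed distributions stay within a regulator-chosen tolerance. The `filter_feed` callable, the total-variation statistic, and the `epsilon` threshold are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def audit_black_box(filter_feed, user_a, user_b, n_trials=1000, epsilon=0.1, seed=0):
    """Toy black-box audit: query the platform's filter repeatedly for two
    comparable users and test whether their feed distributions differ by more
    than a tolerance (total variation over item frequencies).

    `filter_feed(user, rng)` is assumed to return a list of item ids; the
    signature, statistic, and epsilon are illustrative, not from the paper.
    """
    rng = np.random.default_rng(seed)
    counts_a, counts_b = {}, {}
    for _ in range(n_trials):
        for item in filter_feed(user_a, rng):
            counts_a[item] = counts_a.get(item, 0) + 1
        for item in filter_feed(user_b, rng):
            counts_b[item] = counts_b.get(item, 0) + 1
    items = set(counts_a) | set(counts_b)
    total_a = sum(counts_a.values()) or 1
    total_b = sum(counts_b.values()) or 1
    tv = 0.5 * sum(abs(counts_a.get(i, 0) / total_a - counts_b.get(i, 0) / total_b)
                   for i in items)
    return tv <= epsilon  # True: regulation judged satisfied at this tolerance
```

Note that the auditor never inspects the algorithm's internals; it only observes inputs and output feeds, which is what makes the audit black-box.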
Related papers
- Tracing Influence at Scale: A Contrastive Learning Approach to Linking
Public Comments and Regulator Responses [22.240224575601644]
U.S. Federal Regulators receive over one million comment letters each year from businesses, interest groups, and members of the public, all advocating for changes to proposed regulations.
Measuring the impact of specific comments is challenging because regulators are required to respond to comments but need not specify which comments they are addressing.
We propose a simple yet effective solution: an iterative contrastive method that trains a neural model to match text from public comments to the responses written by regulators.
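A minimal sketch of the contrastive matching idea, assuming matched comment-response pairs are available within a batch; the InfoNCE-style loss below is a generic stand-in for the paper's iterative method.

```python
import torch
import torch.nn.functional as F

def contrastive_matching_loss(comment_emb, response_emb, temperature=0.07):
    """InfoNCE-style loss for aligning comment embeddings with the embeddings
    of the regulator responses they match (a generic sketch, not the paper's
    exact recipe).

    comment_emb, response_emb: (batch, dim) tensors where row i of each is a
    known matching pair; the other rows serve as in-batch negatives.
    """
    c = F.normalize(comment_emb, dim=-1)
    r = F.normalize(response_emb, dim=-1)
    logits = c @ r.T / temperature    # pairwise cosine similarities
    labels = torch.arange(c.size(0))  # row i matches column i
    return F.cross_entropy(logits, labels)
```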
arXiv Detail & Related papers (2023-11-24T23:32:13Z)
- A User-Driven Framework for Regulating and Auditing Social Media [94.70018274127231]
We propose that algorithmic filtering should be regulated with respect to a flexible, user-driven baseline.
We require that the feeds a platform filters contain "similar" informational content to their respective baseline feeds.
We present an auditing procedure that checks whether a platform honors this requirement.
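To make the "similar informational content" requirement concrete, here is a toy check, assuming each feed is summarized as a topic-frequency vector; the summary and tolerance are illustrative stand-ins for the paper's baseline construction.

```python
import numpy as np

def feeds_similar(filtered_topics, baseline_topics, tolerance=0.15):
    """Check whether a filtered feed carries 'similar' informational content
    to its user-chosen baseline feed. Both feeds are summarized here as
    nonempty topic-frequency vectors and compared with total variation
    distance; this summary is an illustrative assumption, not the paper's
    definition of informational content.
    """
    p = np.asarray(filtered_topics, dtype=float)
    q = np.asarray(baseline_topics, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    return 0.5 * np.abs(p - q).sum() <= tolerance
```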
arXiv Detail & Related papers (2023-04-20T17:53:34Z)
- Mathematical Framework for Online Social Media Auditing [5.384630221560811]
Social media platforms (SMPs) leverage algorithmic filtering (AF) as a means of selecting the content that constitutes a user's feed with the aim of maximizing their rewards.
Selectively choosing the content shown in a user's feed can exert some degree of influence, minor or major, on the user's decision-making.
We mathematically formalize this framework and use it to construct a data-driven statistical auditing procedure, with sample complexity guarantees, that keeps AF from deflecting users' beliefs over time.
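To give a flavor of what a sample complexity guarantee looks like for such an audit, a standard Hoeffding bound gives the number of i.i.d. samples needed to estimate a bounded audit statistic to a given precision; this generic bound is used only for illustration and is not the paper's specific guarantee.

```python
import math

def samples_needed(tolerance, confidence):
    """Hoeffding-style sample complexity for an audit statistic bounded in
    [0, 1]: with n >= ln(2/delta) / (2 * tolerance**2) i.i.d. samples, the
    empirical mean is within `tolerance` of the truth with probability at
    least `confidence`. A textbook bound, not the paper's exact result.
    """
    delta = 1.0 - confidence
    return math.ceil(math.log(2.0 / delta) / (2.0 * tolerance ** 2))

# e.g., samples_needed(0.05, 0.95) -> 738 audited feed samples
```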
arXiv Detail & Related papers (2022-09-12T19:04:14Z)
- Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest [70.02478301291264]
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z)
- Modeling Content Creator Incentives on Algorithm-Curated Platforms [76.53541575455978]
We study how algorithmic choices affect the existence and character of (Nash) equilibria in exposure games.
We propose tools for numerically finding equilibria in exposure games, and illustrate results of an audit on the MovieLens and LastFM datasets.
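A minimal sketch of numerically searching for a pure Nash equilibrium by iterated best responses over a finite strategy grid; the `exposure` payoff callable and the random candidate grid are assumptions for illustration, not the paper's tools.

```python
import numpy as np

def best_response_dynamics(n_creators, dim, exposure, steps=200, seed=0):
    """Search for a pure Nash equilibrium of an exposure game by iterated
    best responses over a finite set of candidate content vectors.
    `exposure(strategies, i)` is assumed to return creator i's exposure
    given everyone's content; this search is illustrative scaffolding.
    """
    rng = np.random.default_rng(seed)
    candidates = rng.normal(size=(64, dim))  # finite strategy set
    strategies = candidates[rng.integers(64, size=n_creators)]
    for _ in range(steps):
        changed = False
        for i in range(n_creators):
            payoffs = []
            for c in candidates:
                trial = strategies.copy()
                trial[i] = c
                payoffs.append(exposure(trial, i))
            best = candidates[int(np.argmax(payoffs))]
            if not np.allclose(best, strategies[i]):
                strategies[i], changed = best, True
        if not changed:          # no creator wants to deviate
            return strategies    # pure equilibrium found
    return None                  # dynamics did not converge
```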
arXiv Detail & Related papers (2022-06-27T08:16:59Z)
- An Empirical Investigation of Personalization Factors on TikTok [77.34726150561087]
Despite the importance of TikTok's algorithm to the platform's success and content distribution, little work has been done on the empirical analysis of the algorithm.
Using a sock-puppet audit methodology with a custom algorithm we developed, we tested and analysed the effect of the language and location used to access TikTok.
We identify that the follow-feature has the strongest influence, followed by the like-feature and video view rate.
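A small sketch of how sock-puppet conditions for such an audit might be enumerated so that feeds can later be compared across accounts that differ in exactly one factor; the helper below is hypothetical scaffolding, not the study's custom tooling.

```python
import itertools

def sock_puppet_conditions(languages, locations):
    """Enumerate controlled sock-puppet conditions for a personalization
    audit: each (language, location) pair gets one fresh automated account,
    and feeds are later compared across pairs that differ in exactly one
    factor. Purely illustrative, not the study's actual setup.
    """
    return [{"language": lang, "location": loc, "account": f"puppet-{i}"}
            for i, (lang, loc) in enumerate(itertools.product(languages, locations))]

# e.g., sock_puppet_conditions(["en", "de"], ["US", "DE"]) yields four
# accounts; comparing en/US against en/DE isolates the location factor.
```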
arXiv Detail & Related papers (2022-01-28T17:40:00Z)
- Obvious Manipulability of Voting Rules [105.35249497503527]
The Gibbard-Satterthwaite theorem states that no unanimous and non-dictatorial voting rule is strategyproof.
We revisit voting rules and consider a weaker notion of strategyproofness called not obvious manipulability.
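For intuition, a toy plurality election shows the kind of profitable misreporting that the Gibbard-Satterthwaite theorem says cannot be fully ruled out; the paper's weaker "not obvious manipulability" notion is more subtle than this sketch captures.

```python
def plurality_winner(profile):
    """Plurality rule: the candidate with the most first-place votes wins,
    ties broken alphabetically. A profile is a list of preference rankings."""
    tally = {}
    for ranking in profile:
        tally[ranking[0]] = tally.get(ranking[0], 0) + 1
    return min(tally, key=lambda c: (-tally[c], c))

# Voter 2 truthfully ranks a > b > c, but a cannot win. Misreporting b
# first changes the winner from c to b, an outcome voter 2 prefers.
truthful    = [["c", "b", "a"], ["c", "a", "b"], ["a", "b", "c"], ["b", "a", "c"]]
manipulated = [["c", "b", "a"], ["c", "a", "b"], ["b", "a", "c"], ["b", "a", "c"]]
assert plurality_winner(truthful) == "c"
assert plurality_winner(manipulated) == "b"
```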
arXiv Detail & Related papers (2021-11-03T02:41:48Z)
- Disinformation, Stochastic Harm, and Costly Filtering: A Principal-Agent Analysis of Regulating Social Media Platforms [2.9747815715612713]
The spread of disinformation on social media platforms such as Facebook is harmful to society.
Filtering disinformation is costly, whether through implementing filtering algorithms or employing manual filtering effort.
Since the costs of harmful content are borne by other entities, the platform has no incentive to filter at a socially-optimal level.
arXiv Detail & Related papers (2021-06-17T23:27:43Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media platforms that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the moderation pursued by Twitter produces a significant reduction in questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content, which may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.