A User-Driven Framework for Regulating and Auditing Social Media
- URL: http://arxiv.org/abs/2304.10525v1
- Date: Thu, 20 Apr 2023 17:53:34 GMT
- Title: A User-Driven Framework for Regulating and Auditing Social Media
- Authors: Sarah H. Cen, Aleksander Madry, Devavrat Shah
- Abstract summary: We propose that algorithmic filtering should be regulated with respect to a flexible, user-driven baseline.
We require that the feeds a platform filters contain "similar" informational content to their respective baseline feeds.
We present an auditing procedure that checks whether a platform honors this requirement.
- Score: 94.70018274127231
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: People form judgments and make decisions based on the information that they
observe. A growing portion of that information is not only provided, but
carefully curated by social media platforms. Although lawmakers largely agree
that platforms should not operate without any oversight, there is little
consensus on how to regulate social media. There is consensus, however, that
creating a strict, global standard of "acceptable" content is untenable (e.g.,
in the US, it is incompatible with Section 230 of the Communications Decency
Act and the First Amendment).
In this work, we propose that algorithmic filtering should be regulated with
respect to a flexible, user-driven baseline. We provide a concrete framework
for regulating and auditing a social media platform according to such a
baseline. In particular, we introduce the notion of a baseline feed: the
content that a user would see without filtering (e.g., on Twitter, this could
be the chronological timeline). We require that the feeds a platform filters
contain "similar" informational content as their respective baseline feeds, and
we design a principled way to measure similarity. This approach is motivated by
related suggestions that regulations should increase user agency. We present an
auditing procedure that checks whether a platform honors this requirement.
Notably, the audit needs only black-box access to a platform's filtering
algorithm, and it does not access or infer private user information. We provide
theoretical guarantees on the strength of the audit. We further show that
requiring closeness between filtered and baseline feeds does not impose a large
performance cost, nor does it create echo chambers.
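As a rough sketch of how such an audit could be instantiated (an illustration only: the embedding-based similarity, the threshold tau, and the Hoeffding-style confidence bound below are assumptions for exposition, not the paper's actual definitions), the auditor samples matched (baseline, filtered) feed pairs and lower-bounds their mean informational similarity:

```python
import numpy as np

def feed_embedding(items: np.ndarray) -> np.ndarray:
    # items: (n_items, d) matrix of per-item content embeddings;
    # a feed's informational content is summarized by the mean embedding.
    return items.mean(axis=0)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def audit(baseline_feeds, filtered_feeds, tau=0.5, delta=0.05):
    """Pass the platform iff a (1 - delta) lower confidence bound on the mean
    baseline-vs-filtered similarity clears the threshold tau. Needs only
    black-box samples of feeds, no private user information."""
    sims = np.array([cosine(feed_embedding(b), feed_embedding(f))
                     for b, f in zip(baseline_feeds, filtered_feeds)])
    n = len(sims)
    # One-sided Hoeffding bound for a mean of [-1, 1]-valued similarities.
    lcb = sims.mean() - 2.0 * np.sqrt(np.log(1.0 / delta) / (2.0 * n))
    return lcb >= tau

# Toy check: 200 simulated users, 20 items per feed, 64-dim embeddings,
# with "filtering" that only mildly perturbs each item.
rng = np.random.default_rng(0)
base = [rng.normal(size=(20, 64)) for _ in range(200)]
filt = [b + 0.1 * rng.normal(size=b.shape) for b in base]
print(audit(base, filt))  # True: filtered feeds stay close to baselines
```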
Related papers
- Content Moderation on Social Media in the EU: Insights From the DSA Transparency Database [0.0]
The Digital Services Act (DSA) requires large social media platforms in the EU to provide clear and specific information whenever they restrict access to certain content.
Statements of Reasons (SoRs) are collected in the DSA Transparency Database to ensure transparency and scrutiny of content moderation decisions.
We empirically analyze 156 million SoRs within an observation period of two months to provide an early look at content moderation decisions of social media platforms in the EU.
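The summary does not specify the analysis pipeline; below is a minimal sketch of the kind of aggregation such a study might run over an SoR export (the file name and the column names platform_name, created_at, and automated_detection are hypothetical, not the Transparency Database's exact schema):

```python
import pandas as pd

# Hypothetical export of Statements of Reasons (SoRs) to CSV.
sors = pd.read_csv("sors_sample.csv", parse_dates=["created_at"])

# Moderation volume per platform per day over the observation period.
daily = (sors.groupby(["platform_name", sors["created_at"].dt.date])
             .size().rename("n_decisions").reset_index())

# Share of decisions flagged as fully automated, per platform
# (assumes a boolean automated_detection column).
auto_share = sors.groupby("platform_name")["automated_detection"].mean()

print(daily.head())
print(auto_share.sort_values(ascending=False))
```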
arXiv Detail & Related papers (2023-12-07T16:56:19Z) - Explainable Abuse Detection as Intent Classification and Slot Filling [66.80201541759409]
We introduce the concept of policy-aware abuse detection, abandoning the unrealistic expectation that systems can reliably learn which phenomena constitute abuse from inspecting the data alone.
We show how architectures for intent classification and slot filling can be used for abuse detection, while providing a rationale for model decisions.
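As a hedged sketch of what a joint intent-classification-and-slot-filling model looks like in general (placeholder BiLSTM encoder, dimensions, and label counts; not the authors' architecture), a shared encoder feeds one sequence-level head and one per-token head:

```python
import torch
import torch.nn as nn

class JointIntentSlotModel(nn.Module):
    """One encoder, two heads: a post-level intent (e.g., the policy category
    of the abuse) and per-token slots (the spans that justify the decision)."""
    def __init__(self, vocab_size: int, n_intents: int, n_slots: int, d: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d)
        self.encoder = nn.LSTM(d, d, batch_first=True, bidirectional=True)
        self.intent_head = nn.Linear(2 * d, n_intents)  # pooled over tokens
        self.slot_head = nn.Linear(2 * d, n_slots)      # one label per token

    def forward(self, token_ids: torch.Tensor):
        h, _ = self.encoder(self.embed(token_ids))       # (B, T, 2d)
        intent_logits = self.intent_head(h.mean(dim=1))  # (B, n_intents)
        slot_logits = self.slot_head(h)                  # (B, T, n_slots)
        return intent_logits, slot_logits

# Toy usage: a batch of 2 posts, 10 tokens each.
model = JointIntentSlotModel(vocab_size=5000, n_intents=4, n_slots=9)
intent_logits, slot_logits = model(torch.randint(0, 5000, (2, 10)))
print(intent_logits.shape, slot_logits.shape)  # (2, 4) and (2, 10, 9)
```

The slot head is what supplies the rationale: the predicted spans point at the phrases that triggered the intent label.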
arXiv Detail & Related papers (2022-10-06T03:33:30Z) - Mathematical Framework for Online Social Media Auditing [5.384630221560811]
Social media platforms (SMPs) leverage algorithmic filtering (AF) as a means of selecting the content that constitutes a user's feed with the aim of maximizing their rewards.
Selectively choosing the content shown in a user's feed can exert influence, minor or major, over the user's decision-making.
We mathematically formalize this framework and use it to construct a data-driven statistical auditing procedure that prevents AF from deflecting users' beliefs over time, along with sample complexity guarantees.
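The paper's exact procedure and guarantees are not reproduced in this summary; as a generic illustration of where such sample complexity guarantees come from, a Hoeffding-style bound says that estimating a [0, 1]-bounded quantity (e.g., the rate at which filtering shifts beliefs) to accuracy eps with confidence 1 - delta needs n >= ln(2/delta) / (2 * eps^2) samples:

```python
import math

def samples_needed(eps: float, delta: float) -> int:
    # Two-sided Hoeffding bound for the mean of [0, 1]-valued observations.
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

for eps in (0.1, 0.05, 0.01):
    print(eps, samples_needed(eps, delta=0.05))
# 0.1 -> 185   0.05 -> 738   0.01 -> 18445
```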
arXiv Detail & Related papers (2022-09-12T19:04:14Z) - Having your Privacy Cake and Eating it Too: Platform-supported Auditing
of Social Media Algorithms for Public Interest [70.02478301291264]
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z) - Disinformation, Stochastic Harm, and Costly Filtering: A Principal-Agent
Analysis of Regulating Social Media Platforms [2.9747815715612713]
The spread of disinformation on social media platforms such as Facebook is harmful to society.
Filtering disinformation is costly, requiring both the implementation of filtering algorithms and manual filtering effort.
Since the costs of harmful content are borne by other entities, the platform has no incentive to filter at a socially optimal level.
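A toy numerical illustration of this incentive misalignment (invented cost parameters, not the paper's principal-agent model): the platform picks a filtering level f in [0, 1] and pays its own costs, while the harm from unfiltered disinformation falls on society.

```python
import numpy as np

c, r, h = 1.0, 0.5, 4.0         # filtering cost, lost engagement, social harm
f = np.linspace(0.0, 1.0, 101)  # candidate filtering levels

platform_cost = c * f + r * f               # platform ignores the harm term
social_cost = c * f + r * f + h * (1 - f)   # society also bears the harm

print("platform-optimal f:", f[np.argmin(platform_cost)])  # 0.0 -> no filtering
print("socially optimal f:", f[np.argmin(social_cost)])    # 1.0 -> full filtering
```

Because every cost term the platform sees is increasing in f, it filters nothing, while the socially optimal level here is full filtering; this gap is exactly what a regulator must close.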
arXiv Detail & Related papers (2021-06-17T23:27:43Z) - News consumption and social media regulations policy [70.31753171707005]
We analyze two social media platforms that enforce opposite moderation policies, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content, which may reflect a dissing/endorsement dynamic.
arXiv Detail & Related papers (2021-06-07T19:26:32Z) - Second layer data governance for permissioned blockchains: the privacy
management challenge [58.720142291102135]
In pandemic situations, such as the COVID-19 and Ebola outbreaks, sharing health data is crucial to containing mass infection and reducing the number of deaths.
In this context, permissioned blockchain technology emerges to empower users to exercise their rights, providing data ownership, transparency, and security through an immutable, unified, and distributed database governed by smart contracts.
arXiv Detail & Related papers (2020-10-22T13:19:38Z) - Regulating algorithmic filtering on social media [14.873907857806357]
Social media platforms have the ability to influence users' perceptions and decisions, from their dining choices to their voting preferences.
Many are calling for regulation of filtering algorithms, but designing and enforcing regulations remains challenging.
We find that there are conditions under which the regulation does not place a high performance cost on the platform.
arXiv Detail & Related papers (2020-06-17T04:14:20Z) - Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of content produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms, like Facebook, may elicit the emergence of echo chambers.
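A minimal synthetic sketch of this kind of measurement (invented leanings and interaction network, not the paper's data or estimator): correlate each user's leaning with the mean leaning of their interaction neighbors; a correlation near +1 indicates echo-chamber-like segregation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# Each user gets an inferred leaning in [-1, 1] on a controversial topic.
leaning = np.where(rng.random(n) < 0.5, -1.0, 1.0) * rng.random(n)

# Homophilous interaction network: like-minded pairs interact far more often.
edges = []
for _ in range(5000):
    i, j = rng.integers(0, n, size=2)
    p = 0.9 if leaning[i] * leaning[j] > 0 else 0.1
    if i != j and rng.random() < p:
        edges.append((i, j))

# Average neighbor leaning for each user with at least one interaction.
nbr_sum, deg = np.zeros(n), np.zeros(n)
for i, j in edges:
    nbr_sum[i] += leaning[j]; deg[i] += 1
    nbr_sum[j] += leaning[i]; deg[j] += 1
mask = deg > 0
# Strongly positive correlation here signals echo-chamber-like structure.
print(np.corrcoef(leaning[mask], nbr_sum[mask] / deg[mask])[0, 1])
```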
arXiv Detail & Related papers (2020-04-20T20:00:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.