Disinformation, Stochastic Harm, and Costly Filtering: A Principal-Agent
Analysis of Regulating Social Media Platforms
- URL: http://arxiv.org/abs/2106.09847v1
- Date: Thu, 17 Jun 2021 23:27:43 GMT
- Title: Disinformation, Stochastic Harm, and Costly Filtering: A Principal-Agent
Analysis of Regulating Social Media Platforms
- Authors: Shehroze Khan and James R. Wright
- Abstract summary: The spread of disinformation on social media platforms such as Facebook is harmful to society.
Filtering disinformation is costly, not only because of the expense of implementing filtering algorithms or employing manual filtering effort, but also because removing viral content reduces user growth and advertising revenue.
Since the costs of harmful content are borne by other entities, the platform has no incentive to filter at a socially-optimal level.
- Score: 2.9747815715612713
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The spread of disinformation on social media platforms such as Facebook is
harmful to society. This harm can take the form of a gradual degradation of
public discourse; but it can also take the form of sudden dramatic events such
as the recent insurrection on Capitol Hill. The platforms themselves are in the
best position to prevent the spread of disinformation, as they have the best
access to relevant data and the expertise to use it. However, filtering
disinformation is costly, not only for implementing filtering algorithms or
employing manual filtering effort, but also because removing such highly viral
content impacts user growth and thus potential advertising revenue. Since the
costs of harmful content are borne by other entities, the platform has no
incentive to filter at a socially-optimal level. This problem
is similar to the problem of environmental regulation, in which the costs of
adverse events are not directly borne by a firm, the mitigation effort of a
firm is not observable, and the causal link between a harmful consequence and a
specific failure is difficult to prove. In the environmental regulation domain,
one solution to this issue is to perform costly monitoring to ensure that the
firm takes adequate precautions according to a specified rule. However,
classifying disinformation is performative, and thus a fixed rule becomes less
effective over time. Encoding our domain as a Markov decision process, we
demonstrate that no penalty based on a static rule, no matter how large, can
incentivize adequate filtering by the platform. Penalties based on an adaptive
rule can incentivize optimal effort, but counterintuitively, only if the
regulator sufficiently overreacts to harmful events by requiring a
greater-than-optimal level of filtering.
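The following toy simulation is an illustrative sketch, not the authors' formal model: it assumes a myopic platform, a linear harm probability, and geometric decay of the static rule's detection power (all parameter values and functional forms are invented for illustration). It shows why, under such assumptions, the expected penalty from a static rule shrinks as classification drifts, so the platform's chosen filtering effort eventually collapses regardless of the fine's size.

```python
# Illustrative sketch (not the paper's formal MDP): a static rule's power to
# attribute harmful events to the platform decays over time because
# classifying disinformation is performative. As detection decays, the
# expected fine shrinks and the platform's cost-minimizing effort collapses.
PENALTY = 1_000.0   # fine when a harmful event is attributed under the rule (assumed)
FILTER_COST = 5.0   # platform's cost per unit of filtering effort (assumed)
DECAY = 0.8         # per-period decay of the static rule's detection power (assumed)

def harm_probability(effort: float) -> float:
    """Chance of a harmful event in one period; decreasing in filtering effort."""
    return 0.5 - 0.4 * effort

def best_response(detection: float) -> float:
    """Myopic platform picks effort in [0, 1] minimizing cost plus expected fine."""
    candidates = [i / 100 for i in range(101)]
    return min(
        candidates,
        key=lambda e: FILTER_COST * e + harm_probability(e) * detection * PENALTY,
    )

for t in range(0, 31, 5):
    detection = DECAY ** t  # static rule drifts out of date
    effort = best_response(detection)
    print(f"t={t:2d}  detection={detection:.3f}  chosen effort={effort:.2f}")
```

In this toy setting the chosen effort drops from 1.0 to 0.0 once detection decays far enough, no matter how large PENALTY is set. The paper's adaptive-rule result corresponds to the regulator raising the required filtering level after each harmful event, and its key finding is that this incentivizes optimal effort only when the required level overshoots the social optimum.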
Related papers
- On the Use of Proxies in Political Ad Targeting [49.61009579554272]
We show that major political advertisers circumvented mitigations by targeting proxy attributes.
Our findings have crucial implications for the ongoing discussion on the regulation of political advertising.
arXiv Detail & Related papers (2024-10-18T17:15:13Z)
- A User-Driven Framework for Regulating and Auditing Social Media [94.70018274127231]
We propose that algorithmic filtering should be regulated with respect to a flexible, user-driven baseline.
We require that the feeds a platform filters contain "similar" informational content to their respective baseline feeds.
We present an auditing procedure that checks whether a platform honors this requirement (a toy sketch of such a similarity check appears after this list).
arXiv Detail & Related papers (2023-04-20T17:53:34Z)
- Explainable Abuse Detection as Intent Classification and Slot Filling [66.80201541759409]
We introduce the concept of policy-aware abuse detection, abandoning the unrealistic expectation that systems can reliably learn which phenomena constitute abuse from inspecting the data alone.
We show how architectures for intent classification and slot filling can be used for abuse detection, while providing a rationale for model decisions.
arXiv Detail & Related papers (2022-10-06T03:33:30Z)
- Mathematical Framework for Online Social Media Auditing [5.384630221560811]
Social media platforms (SMPs) leverage algorithmic filtering (AF) as a means of selecting the content that constitutes a user's feed with the aim of maximizing their rewards.
Selectively choosing the content shown on a user's feed can exert minor or major influence on the user's decision-making.
We mathematically formalize this framework and use it to construct a data-driven statistical auditing procedure, with sample complexity guarantees, that prevents AF from deflecting users' beliefs over time (a back-of-the-envelope sample-size sketch appears after this list).
arXiv Detail & Related papers (2022-09-12T19:04:14Z)
- Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest [70.02478301291264]
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z)
- Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset [46.156169284961045]
We offer an approach to filtering grounded in law, which has directly addressed the tradeoffs in filtering material.
First, we gather and make available the Pile of Law, a 256GB dataset of open-source English-language legal and administrative data.
Second, we distill the legal norms that governments have developed to constrain the inclusion of toxic or private content into actionable lessons.
Third, we show how the Pile of Law offers researchers the opportunity to learn such filtering rules directly from the data.
arXiv Detail & Related papers (2022-07-01T06:25:15Z)
- Second layer data governance for permissioned blockchains: the privacy management challenge [58.720142291102135]
In pandemic situations, such as the COVID-19 and Ebola outbreaks, sharing health data is crucial to curbing mass infection and reducing the number of deaths.
In this context, permissioned blockchain technology has emerged to empower users to exercise their rights, providing data ownership, transparency, and security through an immutable, unified, and distributed database governed by smart contracts.
arXiv Detail & Related papers (2020-10-22T13:19:38Z)
- Regulating algorithmic filtering on social media [14.873907857806357]
Social media platforms have the ability to influence users' perceptions and decisions, from their dining choices to their voting preferences.
Many are calling for regulations on filtering algorithms, but designing and enforcing them remains challenging.
We find that there are conditions under which the regulation does not place a high performance cost on the platform.
arXiv Detail & Related papers (2020-06-17T04:14:20Z)
- ETHOS: an Online Hate Speech Detection Dataset [6.59720246184989]
We present 'ETHOS', a textual dataset with two variants: binary and multi-label, based on YouTube and Reddit comments validated using the Figure-Eight crowdsourcing platform.
Our key assumption is that even the small amount of labelled data gained from such a time-consuming process can guarantee the presence of hate speech in the examined material.
arXiv Detail & Related papers (2020-06-11T08:59:57Z)
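For the user-driven auditing framework above (A User-Driven Framework for Regulating and Auditing Social Media), here is a minimal sketch of the kind of similarity check it calls for. The topic labels, the total-variation distance, and the tolerance are illustrative assumptions, not the paper's actual audit statistic.

```python
# Toy audit: does a platform-filtered feed carry a "similar" topic mix to the
# user's baseline feed? Similarity is measured here (as an assumption) by
# total-variation distance between empirical topic distributions.
from collections import Counter

def topic_distribution(feed: list[str]) -> dict[str, float]:
    """Empirical topic frequencies of a feed, given one topic label per item."""
    counts = Counter(feed)
    total = sum(counts.values())
    return {topic: n / total for topic, n in counts.items()}

def audit(filtered_feed: list[str], baseline_feed: list[str], tol: float = 0.2) -> bool:
    """Pass iff the feeds' topic distributions are within `tol` in total variation."""
    p = topic_distribution(filtered_feed)
    q = topic_distribution(baseline_feed)
    topics = set(p) | set(q)
    tv = 0.5 * sum(abs(p.get(t, 0.0) - q.get(t, 0.0)) for t in topics)
    return tv <= tol

# The filtered feed over-represents one topic relative to the user's baseline.
baseline = ["news", "sports", "news", "local", "sports", "news"]
filtered = ["news", "news", "news", "news", "sports", "news"]
print(audit(filtered, baseline))  # False under the illustrative 0.2 tolerance
```

A real audit would define the baseline feed, the notion of informational similarity, and the tolerance far more carefully than this sketch does.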
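For the Mathematical Framework for Online Social Media Auditing entry, here is a back-of-the-envelope sketch of the flavor of sample-complexity guarantee mentioned. The Hoeffding bound, the [0, 1]-bounded belief-shift model, and the chosen tolerances are assumptions for illustration, not the paper's result.

```python
# How many sampled users does an auditor need so that the empirical mean
# belief shift is within eps of the true mean with probability >= 1 - delta?
# Standard Hoeffding bound for i.i.d. samples bounded in [0, 1] (assumed model).
import math

def hoeffding_sample_size(eps: float, delta: float) -> int:
    """Smallest n with P(|empirical mean - true mean| > eps) <= delta for [0, 1] samples."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

# Roughly 1,060 sampled users suffice to estimate the average belief shift
# to within 0.05 with 99% confidence under these illustrative assumptions.
print(hoeffding_sample_size(eps=0.05, delta=0.01))
```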