Mathematical Framework for Online Social Media Auditing
- URL: http://arxiv.org/abs/2209.05550v2
- Date: Tue, 20 Feb 2024 08:37:21 GMT
- Title: Mathematical Framework for Online Social Media Auditing
- Authors: Wasim Huleihel and Yehonathan Refael
- Abstract summary: Social media platforms (SMPs) leverage algorithmic filtering (AF) as a means of selecting the content that constitutes a user's feed with the aim of maximizing their rewards.
Selectively choosing the contents to be shown on the user's feed may yield a certain extent of influence, either minor or major, on the user's decision-making.
We mathematically formalize this framework and utilize it to construct a data-driven statistical auditing procedure that keeps AF from deflecting users' beliefs over time, along with sample complexity guarantees.
- Score: 5.384630221560811
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Social media platforms (SMPs) leverage algorithmic filtering (AF) as a means
of selecting the content that constitutes a user's feed with the aim of
maximizing their rewards. Selectively choosing the contents to be shown on the
user's feed may yield a certain extent of influence, either minor or major, on
the user's decision-making, compared to what it would have been under a
natural/fair content selection. As we have witnessed over the past decade,
algorithmic filtering can cause detrimental side effects, ranging from biasing
individual decisions to shaping those of society as a whole, for example,
diverting users' attention from whether to get the COVID-19 vaccine or inducing
the public to choose a presidential candidate. The government's constant
attempts to regulate the adverse effects of AF are often complicated, due to
bureaucracy, legal affairs, and financial considerations. On the other hand,
SMPs seek to monitor their own algorithmic activities to avoid being fined for
exceeding the allowable threshold. In this paper, we mathematically formalize
this framework and utilize it to construct a data-driven statistical auditing
procedure that keeps AF from deflecting users' beliefs over time, along with
sample complexity guarantees. This state-of-the-art algorithm can be used
either by authorities acting as external regulators or by SMPs for
self-auditing.
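The paper's procedure is not reproduced here, but the core idea of a data-driven statistical audit with sample complexity guarantees can be illustrated with a minimal sketch. The function names, the scalar "belief" statistic, and the Hoeffding-style sample-size bound below are illustrative assumptions, not the authors' actual construction:

```python
import math

def hoeffding_sample_size(epsilon, delta):
    """Number of user samples needed so that the empirical mean of a
    [0, 1]-bounded belief statistic deviates from its true mean by more
    than epsilon with probability at most delta (Hoeffding's inequality).
    This is a generic bound, not the paper's specific guarantee."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

def audit_belief_shift(filtered_beliefs, baseline_beliefs, threshold):
    """Hypothetical audit: flag the platform if the mean belief measured
    under the filtered feed drifts from the mean under a natural/fair
    baseline feed by more than the allowed threshold."""
    mean_filtered = sum(filtered_beliefs) / len(filtered_beliefs)
    mean_baseline = sum(baseline_beliefs) / len(baseline_beliefs)
    return abs(mean_filtered - mean_baseline) > threshold
```

For example, auditing with tolerance epsilon = 0.1 at confidence delta = 0.05 would require 185 samples under this bound; a regulator or a self-auditing SMP would then compare the two empirical means and flag any drift beyond the regulatory threshold.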
Related papers
- Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z) - Influence of the algorithm's reliability and transparency in the user's
decision-making process [0.0]
We conduct an online empirical study with 61 participants to find out how the change in transparency and reliability of an algorithm could impact users' decision-making process.
The results indicate that people show at least moderate confidence in the algorithm's decisions even when its reliability is poor.
arXiv Detail & Related papers (2023-07-13T03:13:49Z) - Algorithms, Incentives, and Democracy [0.0]
We show how optimal classification by an algorithm designer can affect the distribution of behavior in a population.
We then look at the effect of democratizing the rewards and punishments, or stakes, to the algorithmic classification to consider how a society can potentially stem (or facilitate!) predatory classification.
arXiv Detail & Related papers (2023-07-05T14:22:01Z) - Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z) - A User-Driven Framework for Regulating and Auditing Social Media [94.70018274127231]
We propose that algorithmic filtering should be regulated with respect to a flexible, user-driven baseline.
We require that the feeds a platform filters contain "similar" informational content as their respective baseline feeds.
We present an auditing procedure that checks whether a platform honors this requirement.
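The abstract does not specify how feed "similarity" is measured; one common way to make such a requirement concrete is a total variation distance between the topic mixtures of the filtered and baseline feeds. The sketch below is a hedged illustration of that general idea, with hypothetical names and a tolerance parameter that is not from the paper:

```python
def total_variation(p, q):
    """Total variation distance between two distributions over the same
    topic set, each given as a {topic: probability} dict."""
    topics = set(p) | set(q)
    return 0.5 * sum(abs(p.get(t, 0.0) - q.get(t, 0.0)) for t in topics)

def feeds_similar(filtered_dist, baseline_dist, tolerance):
    """Hypothetical audit check: the filtered feed passes if its topic
    mix stays within the tolerance of the user-driven baseline feed."""
    return total_variation(filtered_dist, baseline_dist) <= tolerance
```

A feed that shows 60% politics where the baseline shows 40% would have a total variation distance of at least 0.2 and would fail an audit with a tighter tolerance.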
arXiv Detail & Related papers (2023-04-20T17:53:34Z) - Having your Privacy Cake and Eating it Too: Platform-supported Auditing
of Social Media Algorithms for Public Interest [70.02478301291264]
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z) - Modeling Content Creator Incentives on Algorithm-Curated Platforms [76.53541575455978]
We study how algorithmic choices affect the existence and character of (Nash) equilibria in exposure games.
We propose tools for numerically finding equilibria in exposure games, and illustrate results of an audit on the MovieLens and LastFM datasets.
arXiv Detail & Related papers (2022-06-27T08:16:59Z) - Algorithmic Fairness and Vertical Equity: Income Fairness with IRS Tax
Audit Models [73.24381010980606]
This study examines issues of algorithmic fairness in the context of systems that inform tax audit selection by the IRS.
We show how the use of more flexible machine learning methods for selecting audits may affect vertical equity.
Our results have implications for the design of algorithmic tools across the public sector.
arXiv Detail & Related papers (2022-06-20T16:27:06Z) - Learning to be Fair: A Consequentialist Approach to Equitable
Decision-Making [21.152377319502705]
We present an alternative framework for designing equitable algorithms.
In our approach, one first elicits stakeholder preferences over the space of possible decisions.
We then optimize over the space of decision policies, making trade-offs in a way that maximizes the elicited utility.
arXiv Detail & Related papers (2021-09-18T00:30:43Z) - Disinformation, Stochastic Harm, and Costly Filtering: A Principal-Agent
Analysis of Regulating Social Media Platforms [2.9747815715612713]
The spread of disinformation on social media platforms such as Facebook is harmful to society.
Filtering disinformation is costly, both for implementing filtering algorithms and for employing manual filtering effort.
Since the costs of harmful content are borne by other entities, the platform has no incentive to filter at a socially-optimal level.
arXiv Detail & Related papers (2021-06-17T23:27:43Z) - Regulating algorithmic filtering on social media [14.873907857806357]
Social media platforms have the ability to influence users' perceptions and decisions, from their dining choices to their voting preferences.
Many are calling for regulations on filtering algorithms, but designing and enforcing them remains challenging.
We find that there are conditions under which the regulation does not place a high performance cost on the platform.
arXiv Detail & Related papers (2020-06-17T04:14:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.