The right to audit and power asymmetries in algorithm auditing
- URL: http://arxiv.org/abs/2302.08301v1
- Date: Thu, 16 Feb 2023 13:57:41 GMT
- Title: The right to audit and power asymmetries in algorithm auditing
- Authors: Aleksandra Urman, Ivan Smirnov, Jana Lasser
- Abstract summary: We elaborate on the challenges and asymmetries mentioned by Sandvig at the IC2S2 2021.
We also contribute a discussion of the asymmetries that were not covered by Sandvig.
We discuss the implications these asymmetries have for algorithm auditing research.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we engage with and expand on the keynote talk about the Right
to Audit given by Prof. Christian Sandvig at the IC2S2 2021 through a critical
reflection on power asymmetries in the algorithm auditing field. We elaborate
on the challenges and asymmetries mentioned by Sandvig - such as those related
to legal issues and the disparity between early-career and senior researchers.
We also contribute a discussion of the asymmetries that were not covered by
Sandvig but that we find critically important: those related to other
disparities between researchers, to incentive structures around access to
data from companies, to the targets of auditing, and to users and their
rights. We also discuss the implications these asymmetries have for algorithm
auditing research, such as Western-centrism and a lack of diversity of
perspectives.
While we focus on the field of algorithm auditing specifically, we suggest some
of the discussed asymmetries affect Computational Social Science more generally
and need to be reflected on and addressed.
Related papers
- Responsible AI Considerations in Text Summarization Research: A Review of Current Practices
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z)
- Perspectives on Large Language Models for Relevance Judgment
It has been claimed that large language models (LLMs) can assist with relevance judgments.
It is not clear whether automated judgments can reliably be used in evaluations of retrieval systems.
arXiv Detail & Related papers (2023-04-13T13:08:38Z)
- A Gold Standard Dataset for the Reviewer Assignment Problem
A "similarity score" is a numerical estimate of a reviewer's expertise in reviewing a given paper.
Our dataset consists of 477 self-reported expertise scores provided by 58 researchers.
For the task of ordering two papers in terms of their relevance for a reviewer, the error rates range from 12%-30% in easy cases to 36%-43% in hard cases.
arXiv Detail & Related papers (2023-03-23T16:15:03Z)
- Algorithmic Fairness in Business Analytics: Directions for Research and Practice
This paper offers a forward-looking, BA-focused review of algorithmic fairness.
We first review the state-of-the-art research on sources and measures of bias, as well as bias mitigation algorithms.
We then provide a detailed discussion of the utility-fairness relationship, emphasizing that the frequent assumption of a trade-off between these two constructs is often mistaken or short-sighted.
arXiv Detail & Related papers (2022-07-22T10:21:38Z)
- Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z)
- A Validity Perspective on Evaluating the Justified Use of Data-driven Decision-making Algorithms
We apply the lens of validity to re-examine challenges in problem formulation and data issues that jeopardize the justifiability of using predictive algorithms.
We demonstrate how these validity considerations could distill into a series of high-level questions intended to promote and document reflections on the legitimacy of the predictive task and the suitability of the data.
arXiv Detail & Related papers (2022-06-30T02:22:31Z)
- Fairness in Recommender Systems: Research Landscape and Future Directions
We review the concepts and notions of fairness that were put forward in the area in the recent past.
We present an overview of how research in this field is currently operationalized.
Overall, our analysis of recent works points to certain research gaps.
arXiv Detail & Related papers (2022-05-23T08:34:25Z)
- Algorithmic audits of algorithms, and the law
We focus on external audits that are conducted by interacting with the user side of the target algorithm.
The legal framework in which these audits take place is mostly ambiguous to researchers developing them.
This article highlights the relation of current audits with law, in order to structure the growing field of algorithm auditing.
arXiv Detail & Related papers (2022-02-15T14:20:53Z)
- Problematic Machine Behavior: A Systematic Literature Review of Algorithm Audits
This review follows PRISMA guidelines, screening over 500 English-language articles to yield 62 algorithm audit studies.
The studies are synthesized and organized primarily by the behavior studied (discrimination, distortion, exploitation, and misjudgement).
The paper concludes by offering the common ingredients of successful audits, and discussing algorithm auditing in the context of broader research.
arXiv Detail & Related papers (2021-02-03T19:21:11Z)
- Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing
Algorithmic auditing can have effects that may harm the very populations these measures are meant to protect.
We demonstrate a set of five ethical concerns in the particular case of auditing commercial facial processing technology.
We go further to provide tangible illustrations of these concerns, and conclude by reflecting on what these concerns mean for the role of the algorithmic audit and the fundamental product limitations they reveal.
arXiv Detail & Related papers (2020-01-03T20:03:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.