Problematic Machine Behavior: A Systematic Literature Review of
Algorithm Audits
- URL: http://arxiv.org/abs/2102.04256v1
- Date: Wed, 3 Feb 2021 19:21:11 GMT
- Title: Problematic Machine Behavior: A Systematic Literature Review of
Algorithm Audits
- Authors: Jack Bandy
- Abstract summary: This review follows PRISMA guidelines in a review of over 500 English articles that yielded 62 algorithm audit studies.
The studies are synthesized and organized primarily by behavior (discrimination, distortion, exploitation, and misjudgement).
The paper concludes by offering the common ingredients of successful audits, and discussing algorithm auditing in the context of broader research.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While algorithm audits are growing rapidly in commonality and public
importance, relatively little scholarly work has gone toward synthesizing prior
work and strategizing future research in the area. This systematic literature
review aims to do just that, following PRISMA guidelines in a review of over
500 English articles that yielded 62 algorithm audit studies. The studies are
synthesized and organized primarily by behavior (discrimination, distortion,
exploitation, and misjudgement), with codes also provided for domain (e.g.
search, vision, advertising, etc.), organization (e.g. Google, Facebook,
Amazon, etc.), and audit method (e.g. sock puppet, direct scrape,
crowdsourcing, etc.). The review shows how previous audit studies have exposed
public-facing algorithms exhibiting problematic behavior, such as search
algorithms culpable of distortion and advertising algorithms culpable of
discrimination. Based on the studies reviewed, it also suggests some behaviors
(e.g. discrimination on the basis of intersectional identities), domains (e.g.
advertising algorithms), methods (e.g. code auditing), and organizations (e.g.
Twitter, TikTok, LinkedIn) that call for future audit attention. The paper
concludes by offering the common ingredients of successful audits, and
discussing algorithm auditing in the context of broader research working toward
algorithmic justice.
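The audit methods coded in the review (sock puppet, direct scrape, crowdsourcing) can be illustrated with a minimal sock-puppet sketch. Everything below is a hypothetical stand-in: `hypothetical_search`, the personas, and the personalization rule are invented for illustration, whereas a real sock-puppet audit would drive instrumented accounts or browsers against a live platform and compare what each persona is shown.

```python
# Minimal sock-puppet audit sketch. All names here (hypothetical_search,
# the personas, the injected ad) are hypothetical stand-ins for a real
# audited system; a real audit queries a live platform instead.
def hypothetical_search(query, persona):
    """Stand-in for a platform's personalized ranking algorithm."""
    results = [f"{query}-result-{i}" for i in range(5)]
    if persona["interest"] == "politics":
        # Simulated personalization: inject a targeted item for this persona.
        results.insert(0, f"{query}-political-ad")
    return results

def run_sock_puppet_audit(query, personas):
    """Issue the same query under controlled personas and diff the outputs."""
    per_persona = {p["name"]: hypothetical_search(query, p) for p in personas}
    all_items = set().union(*per_persona.values())
    # Items shown to some personas but not others signal personalization.
    divergent = {item for item in all_items
                 if any(item not in shown for shown in per_persona.values())}
    return per_persona, divergent

personas = [
    {"name": "neutral", "interest": "none"},
    {"name": "partisan", "interest": "politics"},
]
shown, divergent = run_sock_puppet_audit("election", personas)
print(sorted(divergent))
```

The key design point of the method survives the toy setting: by controlling every input except the persona, any divergence in outputs can be attributed to the algorithm's treatment of that persona rather than to confounds.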
Related papers
- Whither Bias Goes, I Will Go: An Integrative, Systematic Review of Algorithmic Bias Mitigation [1.0470286407954037]
Concerns have been raised that machine learning (ML) models may be biased and perpetuate or exacerbate inequality.
We present a four-stage model of developing ML assessments and applying bias mitigation methods.
arXiv Detail & Related papers (2024-10-21T02:32:14Z)
- On the Detection of Reviewer-Author Collusion Rings From Paper Bidding [71.43634536456844]
Collusion rings pose a major threat to the peer-review systems of computer science conferences.
One approach to solving this problem is to detect the colluding reviewers from their manipulated bids.
No research has yet established that detecting collusion rings is even possible.
arXiv Detail & Related papers (2024-02-12T18:12:09Z)
- A Gold Standard Dataset for the Reviewer Assignment Problem [117.59690218507565]
"Similarity score" is a numerical estimate of the expertise of a reviewer in reviewing a paper.
Our dataset consists of 477 self-reported expertise scores provided by 58 researchers.
For the task of ordering two papers in terms of their relevance for a reviewer, the error rates range from 12%-30% in easy cases to 36%-43% in hard cases.
arXiv Detail & Related papers (2023-03-23T16:15:03Z)
- The right to audit and power asymmetries in algorithm auditing [68.8204255655161]
We elaborate on the challenges and asymmetries mentioned by Sandvig at IC2S2 2021.
We also contribute a discussion of the asymmetries that were not covered by Sandvig.
We discuss the implications these asymmetries have for algorithm auditing research.
arXiv Detail & Related papers (2023-02-16T13:57:41Z)
- Language Model Decoding as Likelihood-Utility Alignment [54.70547032876017]
We introduce a taxonomy that groups decoding strategies based on their implicit assumptions about how well the model's likelihood is aligned with the task-specific notion of utility.
Specifically, by analyzing the correlation between the likelihood and the utility of predictions across a diverse set of tasks, we provide the first empirical evidence supporting the proposed taxonomy.
arXiv Detail & Related papers (2022-10-13T17:55:51Z)
- Developing a Philosophical Framework for Fair Machine Learning: Lessons From The Case of Algorithmic Collusion [0.0]
As machine learning algorithms are applied in new contexts, the harms and injustices that result are qualitatively different.
The existing research paradigm in machine learning which develops metrics and definitions of fairness cannot account for these qualitatively different types of injustice.
I propose an ethical framework for researchers and practitioners in machine learning seeking to develop and apply fairness metrics.
arXiv Detail & Related papers (2022-07-05T16:21:56Z)
- Algorithmic Fairness and Vertical Equity: Income Fairness with IRS Tax Audit Models [73.24381010980606]
This study examines issues of algorithmic fairness in the context of systems that inform tax audit selection by the IRS.
We show how the use of more flexible machine learning methods for selecting audits may affect vertical equity.
Our results have implications for the design of algorithmic tools across the public sector.
arXiv Detail & Related papers (2022-06-20T16:27:06Z)
- Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
arXiv Detail & Related papers (2021-09-13T18:10:52Z)
- Everyday algorithm auditing: Understanding the power of everyday users in surfacing harmful algorithmic behaviors [8.360589318502816]
We propose and explore the concept of everyday algorithm auditing, a process in which users detect, understand, and interrogate problematic machine behaviors.
We argue that everyday users are powerful in surfacing problematic machine behaviors that may elude detection via more centrally-organized forms of auditing.
arXiv Detail & Related papers (2021-05-06T21:50:47Z)
- Algorithmic Fairness [11.650381752104298]
It is crucial to develop AI algorithms that are not only accurate but also objective and fair.
Recent studies have shown that algorithmic decision-making may be inherently prone to unfairness.
arXiv Detail & Related papers (2020-01-21T19:01:38Z)