The Managerial Effects of Algorithmic Fairness Activism
- URL: http://arxiv.org/abs/2012.02393v1
- Date: Fri, 4 Dec 2020 04:11:31 GMT
- Title: The Managerial Effects of Algorithmic Fairness Activism
- Authors: Bo Cowgill, Fabrizio Dell'Acqua and Sandra Matz
- Abstract summary: We randomly expose business decision-makers to arguments used in AI fairness activism.
Arguments emphasizing the inescapability of algorithmic bias lead managers to abandon AI for manual review by humans.
Emphasis on status quo comparisons yields opposite effects.
- Score: 1.942960499551692
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: How do ethical arguments affect AI adoption in business? We randomly expose
business decision-makers to arguments used in AI fairness activism. Arguments
emphasizing the inescapability of algorithmic bias lead managers to abandon AI
for manual review by humans and report greater expectations about lawsuits and
negative PR. These effects persist even when AI lowers gender and racial
disparities and when engineering investments to address AI fairness are
feasible. Emphasis on status quo comparisons yields opposite effects. We also
measure the effects of "scientific veneer" in AI ethics arguments. Scientific
veneer changes managerial behavior but does not asymmetrically benefit
favorable (versus critical) AI activism.
Related papers
- AI Debate Aids Assessment of Controversial Claims [86.47978525513236]
We study whether AI debate can guide biased judges toward the truth by having two AI systems debate opposing sides of controversial COVID-19 factuality claims. In our human study, we find that debate - where two AI advisor systems present opposing evidence-based arguments - consistently improves judgment accuracy and confidence calibration. In our AI judge study, we find that AI judges with human-like personas achieve even higher accuracy (78.5%) than human judges (70.1%) and default AI judges without personas (69.8%).
arXiv Detail & Related papers (2025-06-02T19:01:53Z) - Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z) - Biased AI can Influence Political Decision-Making [64.9461133083473]
This paper presents two experiments investigating the effects of partisan bias in AI language models on political decision-making.
We found that participants exposed to politically biased models were significantly more likely to adopt opinions and make decisions aligning with the AI's bias.
arXiv Detail & Related papers (2024-10-08T22:56:00Z) - Rolling in the deep of cognitive and AI biases [1.556153237434314]
We argue that there is an urgent need to understand AI as a sociotechnical system, inseparable from the conditions in which it is designed, developed and deployed.
We address this critical issue by following a radical new methodology under which human cognitive biases become core entities in our AI fairness overview.
We introduce a new mapping from human cognitive biases to AI biases, and we detect relevant fairness intensities and inter-dependencies.
arXiv Detail & Related papers (2024-07-30T21:34:04Z) - Epistemic Power in AI Ethics Labor: Legitimizing Located Complaints [0.7252027234425334]
This paper is based on 75 interviews with technologists including researchers, developers, open source contributors, and activists.
I show how some AI ethics practices have reached toward authority from automation and quantification, while those based on richly embodied and situated lived experience have not.
arXiv Detail & Related papers (2024-02-13T02:07:03Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Acceleration AI Ethics, the Debate between Innovation and Safety, and Stability AI's Diffusion versus OpenAI's Dall-E [0.0]
This presentation responds by reconfiguring ethics as an innovation accelerator.
The work of ethics is embedded in AI development and application, instead of functioning from outside.
arXiv Detail & Related papers (2022-12-04T14:54:13Z) - AI Ethics Issues in Real World: Evidence from AI Incident Database [0.6091702876917279]
We identify 13 application areas which often see unethical use of AI, with intelligent service robots, language/vision models and autonomous driving taking the lead.
Ethical issues appear in 8 different forms, from inappropriate use and racial discrimination to physical safety risks and unfair algorithms.
arXiv Detail & Related papers (2022-06-15T16:25:57Z) - Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor a commonly accepted way of measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z) - From the Ground Truth Up: Doing AI Ethics from Practice to Principles [0.0]
Recent AI ethics has focused on applying abstract principles downward to practice.
This paper moves in the other direction.
Ethical insights are generated from the lived experiences of AI-designers working on tangible human problems.
arXiv Detail & Related papers (2022-01-05T15:33:33Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help bridge these stakeholder perspectives by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.