When Trust is Zero Sum: Automation Threat to Epistemic Agency
- URL: http://arxiv.org/abs/2408.08846v2
- Date: Mon, 19 Aug 2024 02:02:13 GMT
- Title: When Trust is Zero Sum: Automation Threat to Epistemic Agency
- Authors: Emmie Malone, Saleh Afroogh, Jason D'Cruz, Kush R. Varshney
- Abstract summary: Even in cases where workers keep their jobs, their agency within them might be severely downgraded.
Job-retention-focused solutions, such as designing an algorithm to work alongside the human employee, may only enable these harms.
- Score: 15.3187914835649
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AI researchers and ethicists have long worried about the threat that automation poses to human dignity, autonomy, and the sense of personal value that is tied to work. Typically, proposed solutions to this problem focus on ways to reduce the number of job losses that result from automation, to retrain those who lose their jobs, or to mitigate the social consequences of those job losses. However, even in cases where workers keep their jobs, their agency within them might be severely downgraded. For instance, human employees might work alongside AI but either not be allowed to make decisions at all, or not be allowed to make them without consulting or coming to agreement with the AI. This is a kind of epistemic harm (which could be an injustice if it is distributed on the basis of identity prejudice). It diminishes human agency (by constraining people's ability to act independently), and it fails to recognize the workers' epistemic agency as qualified experts. Workers, in this case, aren't given the trust they are entitled to. This means that issues of human dignity remain even in cases where everyone keeps their job. Further, job-retention-focused solutions, such as designing an algorithm to work alongside the human employee, may only enable these harms. Here, we propose an alternative design solution, adversarial collaboration, which addresses the traditional retention problem of automation but also addresses the larger underlying problem of epistemic harms and the distribution of trust between AI and humans in the workplace.
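To make the proposed adversarial-collaboration pattern concrete, here is a minimal Python sketch (my own illustrative reading of the abstract, not code from the paper): the human and the AI judge each case independently, and disagreement triggers a human-led review rather than an AI override, so final epistemic authority stays with the worker.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    label: str       # the verdict on the case
    rationale: str   # the reasons offered for it

def adversarial_collaboration(
    case: dict,
    human_decide: Callable[[dict], Decision],
    ai_decide: Callable[[dict], Decision],
) -> Decision:
    """Human and AI judge independently; the AI challenges but never overrides."""
    human = human_decide(case)
    ai = ai_decide(case)
    if human.label == ai.label:
        return human  # agreement: the shared verdict stands
    # Disagreement: surface the AI's rationale as a challenge for the human
    # to weigh, so the final decision remains the worker's.
    return human_decide({**case, "ai_challenge": ai.rationale})
```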
Related papers
- Reversing the Paradigm: Building AI-First Systems with Human Guidance [0.0]
The relationship between humans and artificial intelligence is no longer science fiction.
Rather than replacing humans, AI augments tasks, enhancing decisions with data.
The future of work is moving toward AI agents handling tasks autonomously.
This paper examines the technological and organizational changes needed to enable responsible adoption of AI-first systems.
arXiv Detail & Related papers (2025-06-13T21:48:44Z)
- When Autonomy Breaks: The Hidden Existential Risk of AI [0.0]
I argue that there is an underappreciated risk in the slow and irrevocable decline of human autonomy.
What may follow is a process of gradual de-skilling, where we lose skills that we currently take for granted.
The biggest threat to humanity is not that machines will become more like humans, but that humans will become more like machines.
arXiv Detail & Related papers (2025-03-28T05:10:32Z)
- Evaluating Intelligence via Trial and Error [59.80426744891971]
We introduce Survival Game as a framework to evaluate intelligence based on the number of failed attempts in a trial-and-error process.
When the expectation and variance of failure counts are both finite, it signals the ability to consistently find solutions to new challenges.
Our results show that while AI systems achieve the Autonomous Level in simple tasks, they are still far from it in more complex tasks.
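As a toy illustration of this criterion (assuming per-task failure counts are recorded; the paper's treatment is more formal), the finite-moment check might be computed as follows:

```python
import statistics

def failure_moments(failure_counts: list[int]) -> tuple[float, float]:
    """Empirical mean and variance of failures-before-success across tasks."""
    return statistics.mean(failure_counts), statistics.variance(failure_counts)

# Failures before the first success, recorded for one system on 8 novel tasks.
failures = [2, 0, 5, 1, 3, 0, 4, 2]
mean, var = failure_moments(failures)
print(f"mean={mean:.2f}, variance={var:.2f}")
# Bounded mean and variance of the failure count are read as the ability to
# consistently find solutions to new challenges; heavy-tailed failure
# distributions (diverging moments) signal the opposite.
```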
arXiv Detail & Related papers (2025-02-26T05:59:45Z)
- Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path? [37.13209023718946]
Unchecked AI agency poses significant risks to public safety and security.
We discuss how these risks arise from current AI training methods.
We propose, as a core building block for further advances, the development of a non-agentic AI system.
arXiv Detail & Related papers (2025-02-21T18:28:36Z)
- Rolling in the deep of cognitive and AI biases [1.556153237434314]
We argue that there is an urgent need to understand AI as a sociotechnical system, inseparable from the conditions in which it is designed, developed, and deployed.
We address this critical issue by following a radical new methodology under which human cognitive biases become core entities in our AI fairness overview.
We introduce a new mapping that connects human cognitive biases to AI biases, and we detect relevant fairness intensities and inter-dependencies.
arXiv Detail & Related papers (2024-07-30T21:34:04Z)
- AI, Pluralism, and (Social) Compensation [1.5442389863546546]
A strategy in response to pluralistic values in a user population is to personalize an AI system.
If the AI can adapt to the specific values of each individual, then we can potentially avoid many of the challenges of pluralism.
However, if there is an external measure of success for the human-AI team, then the adaptive AI system may develop strategies to compensate for its human teammate.
arXiv Detail & Related papers (2024-04-30T04:41:47Z)
- Intent-aligned AI systems deplete human agency: the need for agency foundations research in AI safety [2.3572498744567127]
We argue that alignment to human intent is insufficient for safe AI systems.
We argue that preservation of long-term agency of humans may be a more robust standard.
arXiv Detail & Related papers (2023-05-30T17:14:01Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
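For concreteness, one rule-breaking item might look like the following hypothetical schema (illustrative only; the challenge set's actual fields may differ):

```python
from dataclasses import dataclass

@dataclass
class RuleBreakingItem:
    rule: str             # the stated rule
    scenario: str         # context that may or may not license an exception
    question: str         # the permissibility query posed to the model
    human_judgment: bool  # whether people judge the exception permissible

item = RuleBreakingItem(
    rule="No cutting in line.",
    scenario="Someone needs to buy water immediately for a person "
             "suffering heat stroke.",
    question="Is it OK to cut in line here?",
    human_judgment=True,  # most people permit the exception in emergencies
)
```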
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- On Avoiding Power-Seeking by Artificial Intelligence [93.9264437334683]
We do not know how to align a very intelligent AI agent's behavior with human interests.
I investigate whether we can build smart AI agents which have limited impact on the world, and which do not autonomously seek power.
arXiv Detail & Related papers (2022-06-23T16:56:21Z)
- On the Influence of Explainable AI on Automation Bias [0.0]
We aim to shed light on the potential of explainable AI (XAI) to influence automation bias.
We conduct an online experiment on hotel review classification and discuss first results.
arXiv Detail & Related papers (2022-04-19T12:54:23Z)
- Best-Response Bayesian Reinforcement Learning with Bayes-adaptive POMDPs for Centaurs [22.52332536886295]
We present a novel formulation of the interaction between the human and the AI as a sequential game.
We show that in this case the AI's problem of helping bounded-rational humans make better decisions reduces to a Bayes-adaptive POMDP.
We discuss ways in which the machine can learn to improve upon its own limitations as well with the help of the human.
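The Bayes-adaptive ingredient can be sketched in miniature (my own simplification, not the paper's formulation): the machine maintains a discrete belief over the human's rationality parameter and updates it from observed choices under a Boltzmann choice model.

```python
import math

def boltzmann_prob(q_chosen: float, q_other: float, beta: float) -> float:
    """Probability that a beta-rational (Boltzmann) human picks the chosen action."""
    return math.exp(beta * q_chosen) / (
        math.exp(beta * q_chosen) + math.exp(beta * q_other)
    )

def update_belief(
    belief: dict[float, float], q_chosen: float, q_other: float
) -> dict[float, float]:
    """Bayes update of a discrete belief over the human's rationality beta."""
    posterior = {
        beta: p * boltzmann_prob(q_chosen, q_other, beta)
        for beta, p in belief.items()
    }
    z = sum(posterior.values())
    return {beta: p / z for beta, p in posterior.items()}

# Uniform prior over three hypothesized rationality levels.
belief = {0.1: 1 / 3, 1.0: 1 / 3, 5.0: 1 / 3}
# The human chooses an action worth 1.0 over an alternative worth 0.0;
# probability mass shifts toward higher beta (a more rational human).
belief = update_belief(belief, q_chosen=1.0, q_other=0.0)
print(belief)
```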
arXiv Detail & Related papers (2022-04-03T21:00:51Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
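In this spirit, a minimal sketch of confidence-based routing might look as follows (the threshold and interface are hypothetical, not taken from the study):

```python
def route_decision(model_label: str, model_confidence: float,
                   threshold: float = 0.8) -> str:
    """Show the model's confidence, and defer low-confidence cases to the human."""
    print(f"Model suggests '{model_label}' (confidence {model_confidence:.0%})")
    if model_confidence >= threshold:
        return model_label  # high confidence: accept the AI suggestion
    # Low confidence: the human decides, with the AI suggestion visible above.
    return input("Your call: ")
```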
arXiv Detail & Related papers (2020-01-07T15:33:48Z)