Autonomy by Design: Preserving Human Autonomy in AI Decision-Support
- URL: http://arxiv.org/abs/2506.23952v3
- Date: Wed, 09 Jul 2025 09:55:27 GMT
- Title: Autonomy by Design: Preserving Human Autonomy in AI Decision-Support
- Authors: Stefan Buijsman, Sarah E. Carter, Juan Pablo Bermúdez
- Abstract summary: We analyze how AI decision-support systems affect two key components of domain-specific autonomy. We develop a constructive framework for autonomy-preserving AI support systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: AI systems increasingly support human decision-making across domains of professional, skill-based, and personal activity. While previous work has examined how AI might affect human autonomy globally, the effects of AI on domain-specific autonomy -- the capacity for self-governed action within defined realms of skill or expertise -- remain understudied. We analyze how AI decision-support systems affect two key components of domain-specific autonomy: skilled competence (the ability to make informed judgments within one's domain) and authentic value-formation (the capacity to form genuine domain-relevant values and preferences). By engaging with prior investigations and analyzing empirical cases across medical, financial, and educational domains, we demonstrate how the absence of reliable failure indicators and the potential for unconscious value shifts can erode domain-specific autonomy both immediately and over time. We then develop a constructive framework for autonomy-preserving AI support systems. We propose specific socio-technical design patterns -- including careful role specification, implementation of defeater mechanisms, and support for reflective practice -- that can help maintain domain-specific autonomy while leveraging AI capabilities. This framework provides concrete guidance for developing AI systems that enhance rather than diminish human agency within specialized domains of action.
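To make the proposed design patterns more tangible, here is a minimal sketch of how a defeater mechanism combined with explicit role specification might look in a decision-support pipeline. The class names, thresholds, and reliability signals (`DefeaterGate`, `reliability_score`, `out_of_distribution`) are illustrative assumptions and are not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Recommendation:
    """An AI suggestion plus the signals needed to judge its reliability."""
    action: str
    reliability_score: float   # assumed calibrated confidence in [0, 1]
    out_of_distribution: bool  # input unlike the training data?
    rationale: str             # human-readable grounds for the suggestion

@dataclass
class DefeaterGate:
    """Hypothetical defeater mechanism: blocks recommendations whose
    failure indicators suggest the expert should judge independently."""
    min_reliability: float = 0.8
    prompts_for_reflection: list = field(default_factory=list)

    def review(self, rec: Recommendation) -> Optional[str]:
        """Return the recommendation only if no defeater fires; otherwise
        return None and record a reflection prompt for the human expert."""
        defeaters = []
        if rec.reliability_score < self.min_reliability:
            defeaters.append("low model reliability on this case")
        if rec.out_of_distribution:
            defeaters.append("case appears unlike the system's training data")

        if defeaters:
            # Role specification: the system steps back to an advisory role
            # and asks the expert to form their own judgment first.
            self.prompts_for_reflection.append(
                f"Defeaters present ({'; '.join(defeaters)}). "
                "Please record your own assessment before viewing the AI rationale."
            )
            return None
        return f"{rec.action} (rationale: {rec.rationale})"

# Example: a low-reliability case is withheld and a reflection prompt is logged.
gate = DefeaterGate(min_reliability=0.85)
text = gate.review(Recommendation("order follow-up MRI", 0.62, False, "lesion pattern match"))
# text is None here; gate.prompts_for_reflection asks for an independent judgment first.
```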
Related papers
- The case for delegated AI autonomy for Human AI teaming in healthcare [3.441725960809854]
We propose an advanced approach to integrating artificial intelligence (AI) into healthcare: autonomous decision support. This approach allows the AI algorithm to act autonomously for a subset of patient cases whilst serving a supportive role in other subsets of patient cases based on defined delegation criteria. It ensures safe handling of patient cases and potentially reduces clinician review time, whilst being mindful of AI tool limitations.
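A minimal sketch of how such delegation criteria could be encoded, assuming a routine-case threshold on calibrated confidence and complexity; the field names and cutoffs are hypothetical, not the authors' criteria.

```python
from dataclasses import dataclass

@dataclass
class PatientCase:
    complexity: float         # assumed scale: 0 (routine) .. 1 (highly complex)
    ai_confidence: float      # assumed calibrated model confidence in [0, 1]
    novel_presentation: bool  # outside the validated case mix?

def delegate(case: PatientCase) -> str:
    """Hypothetical delegation criteria: the AI acts autonomously only on
    routine, high-confidence cases; everything else goes to the clinician
    with the AI in a supportive (advisory) role."""
    if (case.ai_confidence >= 0.95
            and case.complexity <= 0.3
            and not case.novel_presentation):
        return "autonomous: AI decision logged for retrospective audit"
    return "supportive: clinician decides, AI provides an evidence summary"
```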
arXiv Detail & Related papers (2025-03-24T15:26:54Z) - AI Automatons: AI Systems Intended to Imitate Humans [54.19152688545896]
There is a growing proliferation of AI systems designed to mimic people's behavior, work, abilities, likenesses, or humanness. The research, design, deployment, and availability of such AI systems have prompted growing concerns about a wide range of possible legal, ethical, and other social impacts.
arXiv Detail & Related papers (2025-03-04T03:55:38Z) - A Measure for Level of Autonomy Based on Observable System Behavior [0.0]
We present a potential measure for predicting the level of autonomy of a system from its observable actions.
We also present an algorithm incorporating the proposed measure.
The measure and algorithm are significant for researchers and practitioners interested in a method for blind comparison of autonomous systems at runtime.
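The paper's actual measure is not reproduced here; as a stand-in illustration, the snippet below scores the level of autonomy as the fraction of observed actions the system initiated without a human command, which is one simple way to derive such a measure from observable behavior alone.

```python
from typing import Iterable, Tuple

# Each observed action is (timestamp_seconds, was_human_commanded).
def autonomy_level(actions: Iterable[Tuple[float, bool]]) -> float:
    """Assumed proxy measure: share of actions the system self-initiated.
    Returns a value in [0, 1]; 0 = fully teleoperated, 1 = fully autonomous."""
    actions = list(actions)
    if not actions:
        return 0.0
    self_initiated = sum(1 for _, commanded in actions if not commanded)
    return self_initiated / len(actions)

# Example: 3 of 4 observed actions were self-initiated -> prints 0.75
print(autonomy_level([(0.0, True), (1.2, False), (2.5, False), (4.0, False)]))
```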
arXiv Detail & Related papers (2024-07-20T20:34:20Z) - Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Beyond Recommender: An Exploratory Study of the Effects of Different AI Roles in AI-Assisted Decision Making [48.179458030691286]
We examine three AI roles: Recommender, Analyzer, and Devil's Advocate.
Our results show each role's distinct strengths and limitations in task performance, reliance appropriateness, and user experience.
These insights offer valuable implications for designing AI assistants with adaptive functional roles according to different situations.
arXiv Detail & Related papers (2024-03-04T07:32:28Z) - A Quantitative Autonomy Quantification Framework for Fully Autonomous Robotic Systems [0.0]
This paper focuses on the fully autonomous mode and proposes a quantitative autonomy assessment framework based on task requirements.
The framework provides not only a tool for quantifying autonomy, but also a regulatory interface and common language for autonomous systems developers and users.
arXiv Detail & Related papers (2023-11-03T14:26:53Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts of intelligence drawn from different disciplines.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z) - Improving Competence for Reliable Autonomy [0.0]
We propose a method for improving the competence of a system over the course of its deployment.
We specifically focus on a class of semi-autonomous systems known as competence-aware systems.
Our method exploits human feedback gathered during deployment to identify important state features missing from the system's initial model.
arXiv Detail & Related papers (2020-07-23T01:31:28Z) - Learning to Optimize Autonomy in Competence-Aware Systems [32.3596917475882]
We propose an introspective model of autonomy that is learned and updated online through experience.
We define a competence-aware system (CAS) that explicitly models its own proficiency at different levels of autonomy and the available human feedback.
We analyze the convergence properties of CAS and provide experimental results for robot delivery and autonomous driving domains.
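A heavily reduced sketch of the competence-aware idea, assuming a discrete set of autonomy levels, binary human feedback, and a Laplace-smoothed proficiency estimate; the original CAS formulation is richer than this.

```python
from collections import defaultdict

class CompetenceAwareSystem:
    """Toy competence-aware controller: learns per-level proficiency online
    from human feedback and picks the most autonomous level it can justify.
    The levels, threshold, and update rule are illustrative assumptions."""

    LEVELS = ["no_autonomy", "supervised", "unsupervised"]

    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold
        self.successes = defaultdict(int)
        self.trials = defaultdict(int)

    def record_feedback(self, level: str, success: bool) -> None:
        """Human feedback: did acting at `level` go well on this episode?"""
        self.trials[level] += 1
        self.successes[level] += int(success)

    def proficiency(self, level: str) -> float:
        """Laplace-smoothed estimate of success probability at `level`."""
        return (self.successes[level] + 1) / (self.trials[level] + 2)

    def choose_level(self) -> str:
        """Most autonomous level whose estimated proficiency meets the bar."""
        for level in reversed(self.LEVELS):
            if self.proficiency(level) >= self.threshold:
                return level
        return self.LEVELS[0]
```

Starting each level from an uninformative prior means the sketch defaults to the least autonomous level until accumulated feedback justifies delegating more authority to the system.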
arXiv Detail & Related papers (2020-03-17T14:31:45Z) - Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.