Safeguarding Autonomy: a Focus on Machine Learning Decision Systems
- URL: http://arxiv.org/abs/2503.22023v1
- Date: Thu, 27 Mar 2025 22:31:16 GMT
- Title: Safeguarding Autonomy: a Focus on Machine Learning Decision Systems
- Authors: Paula Subías-Beltrán, Oriol Pujol, Itziar de Lecuona
- Abstract summary: We focus on the different stages of the ML pipeline to identify the potential effects on ML end-users' autonomy. We propose a related question for each detected impact, offering guidance for identifying possible focus points to respect ML end-users' autonomy in decision-making.
- Score: 0.16385815610837165
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As global discourse on AI regulation gains momentum, this paper focuses on delineating the impact of ML on autonomy and fostering awareness. Respect for autonomy is a basic principle in bioethics that establishes persons as decision-makers. While the concept of autonomy in the context of ML appears in several European normative publications, it remains a theoretical concept that has yet to be widely accepted in ML practice. Our contribution is to bridge the theoretical and practical gap by encouraging the practical application of autonomy in decision-making within ML practice by identifying the conditioning factors that currently prevent it. Consequently, we focus on the different stages of the ML pipeline to identify the potential effects on ML end-users' autonomy. To improve its practical utility, we propose a related question for each detected impact, offering guidance for identifying possible focus points to respect ML end-users' autonomy in decision-making.
Related papers
- Autonomy by Design: Preserving Human Autonomy in AI Decision-Support [0.0]
We analyze how AI decision-support systems affect two key components of domain-specific autonomy. We develop a constructive framework for autonomy-preserving AI support systems.
arXiv Detail & Related papers (2025-06-30T15:20:10Z) - Scaling and Beyond: Advancing Spatial Reasoning in MLLMs Requires New Recipes [84.1059652774853]
Multimodal Large Language Models (MLLMs) have demonstrated impressive performance in general vision-language tasks. Recent studies have exposed critical limitations in their spatial reasoning capabilities. This deficiency in spatial reasoning significantly constrains MLLMs' ability to interact effectively with the physical world.
arXiv Detail & Related papers (2025-04-21T11:48:39Z) - A Framework for a Capability-driven Evaluation of Scenario Understanding for Multimodal Large Language Models in Autonomous Driving [15.24721920935653]
Multimodal large language models (MLLMs) hold the potential to enhance autonomous driving. Their integration into autonomous driving systems shows promising results in isolated proof-of-concept applications. This paper proposes a holistic framework for a capability-driven evaluation of MLLMs in autonomous driving.
arXiv Detail & Related papers (2025-03-14T13:43:26Z) - Causality Is Key to Understand and Balance Multiple Goals in Trustworthy ML and Foundation Models [91.24296813969003]
This paper advocates integrating causal methods into machine learning to navigate the trade-offs among key principles of trustworthy ML.
We argue that a causal approach is essential for balancing multiple competing objectives in both trustworthy ML and foundation models.
arXiv Detail & Related papers (2025-02-28T14:57:33Z) - Autotelic Reinforcement Learning: Exploring Intrinsic Motivations for Skill Acquisition in Open-Ended Environments [1.104960878651584]
This paper presents a comprehensive overview of autotelic Reinforcement Learning (RL), emphasizing the role of intrinsic motivations in the open-ended formation of skill repertoires.
We delineate the distinctions between knowledge-based and competence-based intrinsic motivations, illustrating how these concepts inform the development of autonomous agents capable of generating and pursuing self-defined goals.
arXiv Detail & Related papers (2025-02-06T14:37:46Z) - Metacognition for Unknown Situations and Environments (MUSE) [3.2020845462590697]
We propose the Metacognition for Unknown Situations and Environments (MUSE) framework.
MUSE integrates metacognitive processes, specifically self-awareness and self-regulation, into autonomous agents.
Agents show significant improvements in self-awareness and self-regulation.
arXiv Detail & Related papers (2024-11-20T18:41:03Z) - Look Before You Decide: Prompting Active Deduction of MLLMs for Assumptive Reasoning [77.72128397088409]
We show that most prevalent MLLMs can be easily fooled by the introduction of a presupposition into the question.
We also propose a novel reinforcement learning paradigm to encourage the model to actively perform composite deduction.
arXiv Detail & Related papers (2024-04-19T15:53:27Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - LanguageMPC: Large Language Models as Decision Makers for Autonomous Driving [87.1164964709168]
This work employs Large Language Models (LLMs) as a decision-making component for complex autonomous driving scenarios.
Extensive experiments demonstrate that our proposed method not only consistently surpasses baseline approaches in single-vehicle tasks, but also helps handle complex driving behaviors, even multi-vehicle coordination.
arXiv Detail & Related papers (2023-10-04T17:59:49Z) - Rational Decision-Making Agent with Internalized Utility Judgment [91.80700126895927]
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications.
This paper proposes RadAgent, which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning.
Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks.
arXiv Detail & Related papers (2023-08-24T03:11:45Z) - Fairness: from the ethical principle to the practice of Machine Learning development as an ongoing agreement with stakeholders [0.0]
This paper clarifies why bias cannot be completely mitigated in Machine Learning (ML).
It proposes an end-to-end methodology to translate the ethical principle of justice and fairness into the practice of ML development.
arXiv Detail & Related papers (2023-03-22T20:58:32Z) - Non-Determinism and the Lawlessness of Machine Learning Code [43.662736664344095]
We show that the effects of non-determinism, and consequently its implications for the law, become clearer from the perspective of reasoning about ML outputs as distributions over possible outcomes.
We conclude with a brief discussion of what ML can do to constrain the potentially harm-inducing effects of non-determinism.
arXiv Detail & Related papers (2022-06-23T17:05:34Z) - Self-directed Machine Learning [86.3709575146414]
In education science, self-directed learning has been shown to be more effective than passive teacher-guided learning.
We introduce the principal concept of Self-directed Machine Learning (SDML) and propose a framework for SDML.
Our proposed SDML process benefits from self task selection, self data selection, self model selection, self optimization strategy selection and self evaluation metric selection.
arXiv Detail & Related papers (2022-01-04T18:32:06Z) - On the Morality of Artificial Intelligence [154.69452301122175]
We propose conceptual and practical principles and guidelines for Machine Learning research and deployment.
We insist on concrete actions that can be taken by practitioners to pursue a more ethical and moral practice of ML aimed at using AI for social good.
arXiv Detail & Related papers (2019-12-26T23:06:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.