Consent as a Foundation for Responsible Autonomy
- URL: http://arxiv.org/abs/2203.11420v1
- Date: Tue, 22 Mar 2022 02:25:27 GMT
- Title: Consent as a Foundation for Responsible Autonomy
- Authors: Munindar P. Singh
- Abstract summary: This paper focuses on a dynamic aspect of responsible autonomy, namely, to make intelligent agents be responsible at run time.
It considers settings where decision making by agents impinges upon the outcomes perceived by other agents.
- Score: 15.45515784064555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper focuses on a dynamic aspect of responsible autonomy, namely, to make intelligent agents be responsible at run time. That is, it considers settings where decision making by agents impinges upon the outcomes perceived by other agents. For an agent to act responsibly, it must accommodate the desires and other attitudes of its users and, through other agents, of their users.
The contribution of this paper is twofold. First, it provides a conceptual analysis of consent, its benefits and misuses, and how understanding consent can help achieve responsible autonomy. Second, it outlines challenges for AI (in particular, for agents and multiagent systems) that merit investigation to form a basis for modeling consent in multiagent systems and applying consent to achieve responsible autonomy.
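To make the idea of modeling consent in a multiagent system concrete, here is a minimal Python sketch. It is not from the paper: the `Consent` record, its fields, and the agent behavior are all hypothetical, but they capture properties commonly associated with consent that the abstract's analysis gestures at, namely that consent is scoped, time-limited, revocable, and checked before an agent acts on another party's behalf.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Consent:
    """A revocable grant from one agent to another, bounded in scope and time."""
    grantor: str
    grantee: str
    scope: frozenset            # actions the grantee may perform
    expires: datetime
    revoked: bool = False

    def permits(self, actor: str, action: str) -> bool:
        # Valid only for the named grantee, within scope, before expiry,
        # and while not revoked.
        return (actor == self.grantee
                and action in self.scope
                and datetime.now() < self.expires
                and not self.revoked)

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.grants: list[Consent] = []    # consents this agent has received

    def receive_consent(self, consent: Consent) -> None:
        self.grants.append(consent)

    def act(self, action: str, on_behalf_of: str) -> str:
        # A responsible agent acts for another party only under valid consent.
        if any(c.grantor == on_behalf_of and c.permits(self.name, action)
               for c in self.grants):
            return f"{self.name} performs {action!r} for {on_behalf_of}"
        return f"{self.name} declines {action!r}: no valid consent from {on_behalf_of}"

# Alice consents to Bob sharing her calendar for one hour.
bob = Agent("bob")
bob.receive_consent(Consent(grantor="alice", grantee="bob",
                            scope=frozenset({"share_calendar"}),
                            expires=datetime.now() + timedelta(hours=1)))
print(bob.act("share_calendar", on_behalf_of="alice"))   # permitted
print(bob.act("share_contacts", on_behalf_of="alice"))   # declined: out of scope
```

Revocation and expiry are what make this a run-time notion: the same request can be permitted now and rightly declined a minute later, which is the dynamic aspect of responsible autonomy the abstract emphasizes.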
Related papers
- Agency Is Frame-Dependent [94.91580596320331]
Agency is a system's capacity to steer outcomes toward a goal.
We argue that agency is fundamentally frame-dependent.
We conclude that any basic science of agency requires frame-dependence.
arXiv Detail & Related papers (2025-02-06T08:34:57Z)
- Fully Autonomous AI Agents Should Not be Developed [58.88624302082713]
This paper argues that fully autonomous AI agents should not be developed.
In support of this position, we build from prior scientific literature and current product marketing to delineate different AI agent levels.
Our analysis reveals that risks to people increase with the autonomy of a system.
arXiv Detail & Related papers (2025-02-04T19:00:06Z)
- Agentic AI: Autonomy, Accountability, and the Algorithmic Society [0.2209921757303168]
Agentic Artificial Intelligence (AI) can autonomously pursue long-term goals, make decisions, and execute complex, multi-turn tasks.
This transition from advisory roles to proactive execution challenges established legal, economic, and creative frameworks.
We explore challenges in three interrelated domains: creativity and intellectual property, legal and ethical considerations, and competitive effects.
arXiv Detail & Related papers (2025-02-01T03:14:59Z)
- Delegating Responsibilities to Intelligent Autonomous Systems: Challenges and Benefits [1.7205106391379026]
As AI systems operate with autonomy and adaptability, the traditional boundaries of moral responsibility in techno-social systems are being challenged.
This paper explores the evolving discourse on the delegation of responsibilities to intelligent autonomous agents and the ethical implications of such practices.
arXiv Detail & Related papers (2024-11-06T18:40:38Z)
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z)
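To make the SCM idea in the entry above concrete, here is a toy but-for test in Python. This is not the authors' framework: the loan-decision model, variable names, and intervention helper are all invented. It only illustrates the general pattern such frameworks build on: evaluate the model, force one decision-maker's variable via a do-intervention, and check whether the outcome flips.

```python
# A toy structural causal model of a human-AI loan decision: the AI issues
# a recommendation and the human makes the final call. Everything here is
# invented for illustration.

def ai_recommend(applicant_risk: float) -> bool:
    return applicant_risk < 0.5                 # AI approves low-risk applicants

def human_decide(ai_approves: bool, override: bool) -> bool:
    return (not ai_approves) if override else ai_approves

def outcome(applicant_risk: float, override: bool, do: dict | None = None) -> bool:
    """Evaluate the model, optionally forcing variables via do-interventions."""
    do = do or {}
    ai = do.get("ai", ai_recommend(applicant_risk))
    return do.get("human", human_decide(ai, override))

def but_for_responsible(var: str, alt_value: bool,
                        applicant_risk: float, override: bool) -> bool:
    """var is but-for responsible if intervening on it flips the outcome."""
    actual = outcome(applicant_risk, override)
    counterfactual = outcome(applicant_risk, override, do={var: alt_value})
    return actual != counterfactual

# High-risk applicant (0.8): the AI recommends rejection, the human concurs.
# Either party acting differently would have flipped the outcome.
print(but_for_responsible("ai", True, applicant_risk=0.8, override=False))    # True
print(but_for_responsible("human", True, applicant_risk=0.8, override=False)) # True
```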
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Unravelling Responsibility for AI [0.8836921728313208]
It is widely acknowledged that we need to establish where responsibility lies for the outputs and impacts of AI-enabled systems.
This paper draws upon central distinctions in philosophy and law to clarify the concept of responsibility for AI.
arXiv Detail & Related papers (2023-08-04T13:12:17Z)
- Towards Socially Intelligent Agents with Mental State Transition and Human Utility [97.01430011496576]
We propose to incorporate a mental state and utility model into dialogue agents.
The hybrid mental state extracts information from both the dialogue and event observations.
The utility model is a ranking model that learns human preferences from a crowd-sourced social commonsense dataset.
arXiv Detail & Related papers (2021-03-12T00:06:51Z)
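The utility model in the entry above is described as a ranking model over human preferences. As a generic illustration (not the paper's model or dataset), here is a minimal Bradley-Terry-style pairwise ranking sketch in Python; the features and training pairs are invented.

```python
import math

# Toy pairwise ranking: learn a scoring function so that responses humans
# preferred rank above dispreferred ones (Bradley-Terry-style loss).

def score(weights: list[float], features: list[float]) -> float:
    return sum(w * f for w, f in zip(weights, features))

def train(pairs: list[tuple[list[float], list[float]]],
          n_features: int, lr: float = 0.1, epochs: int = 200) -> list[float]:
    """pairs: (preferred_features, dispreferred_features) tuples."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for better, worse in pairs:
            # P(better beats worse) = sigmoid(score difference);
            # take a gradient-ascent step on the log-likelihood.
            margin = score(w, better) - score(w, worse)
            p = 1.0 / (1.0 + math.exp(-margin))
            for i in range(n_features):
                w[i] += lr * (1.0 - p) * (better[i] - worse[i])
    return w

# Two crude features per response: politeness, relevance.
pairs = [([0.9, 0.8], [0.2, 0.7]),   # polite beats curt
         ([0.6, 0.9], [0.5, 0.1])]   # relevant beats off-topic
w = train(pairs, n_features=2)
print(score(w, [0.8, 0.8]) > score(w, [0.1, 0.2]))  # True: ranks as humans would
```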
- Human Perceptions on Moral Responsibility of AI: A Case Study in AI-Assisted Bail Decision-Making [8.688778020322758]
We measure people's perceptions of eight different notions of moral responsibility concerning AI and human agents.
We show that AI agents are held causally responsible and blamed similarly to human agents for an identical task.
We find that people expect both AI and human decision-makers and advisors to justify their decisions regardless of their nature.
arXiv Detail & Related papers (2021-02-01T04:07:38Z)
- Hiding Behind Machines: When Blame Is Shifted to Artificial Agents [0.0]
This article focuses on the responsibility of agents who decide on our behalf.
We investigate whether the production of moral outcomes by an agent is systematically judged differently when the agent is artificial and not human.
arXiv Detail & Related papers (2021-01-27T14:50:02Z)
- Learning Latent Representations to Influence Multi-Agent Interaction [65.44092264843538]
We propose a reinforcement learning-based framework for learning latent representations of an agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
arXiv Detail & Related papers (2020-11-12T19:04:26Z)
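As a rough, heavily simplified illustration of the entry above (not the paper's method): the ego agent conditions its policy on a summary of the other agent's recent behavior. Here the learned latent encoder is replaced by a hand-coded action-frequency vector, and the reinforcement learning loop is omitted entirely.

```python
# The learned latent encoder is replaced here by a hand-coded
# action-frequency vector; the RL training loop is omitted.

def encode(opponent_actions: list[int], n_actions: int = 2) -> list[float]:
    """Stand-in 'latent': empirical frequency of each opponent action."""
    total = max(len(opponent_actions), 1)
    return [opponent_actions.count(a) / total for a in range(n_actions)]

def policy(latent: list[float]) -> int:
    # Key the ego agent's action to the opponent's most frequent choice,
    # a crude stand-in for a policy conditioned on the latent.
    return max(range(len(latent)), key=lambda a: latent[a])

history = [0, 0, 1, 0]        # opponent has mostly played action 0
z = encode(history)
print(z, "->", policy(z))     # [0.75, 0.25] -> 0
```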
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.