Consent as a Foundation for Responsible Autonomy
- URL: http://arxiv.org/abs/2203.11420v1
- Date: Tue, 22 Mar 2022 02:25:27 GMT
- Title: Consent as a Foundation for Responsible Autonomy
- Authors: Munindar P. Singh
- Abstract summary: This paper focuses on a dynamic aspect of responsible autonomy, namely, to make intelligent agents responsible at run time.
It considers settings where decision making by agents impinges upon the outcomes perceived by other agents.
- Score: 15.45515784064555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper focuses on a dynamic aspect of responsible autonomy, namely, to
make intelligent agents responsible at run time. That is, it considers
settings where decision making by agents impinges upon the outcomes perceived
by other agents. For an agent to act responsibly, it must accommodate the
desires and other attitudes of its users and, through other agents, of their
users.
The contribution of this paper is twofold. First, it provides a conceptual
analysis of consent, its benefits and misuses, and how understanding consent
can help achieve responsible autonomy. Second, it outlines challenges for AI
(in particular, for agents and multiagent systems) that merit investigation to
form a basis for modeling consent in multiagent systems and applying consent
to achieve responsible autonomy.
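The abstract's second contribution calls for modeling consent in multiagent systems. As a rough, hypothetical sketch of what a run-time consent check might look like (this is not the paper's formalism; the Agent, Action, and act_responsibly names and the permitted_actions preference set are invented for illustration), consider an agent that performs an action only when every agent whose outcomes the action impinges upon has granted consent:

```python
# Hypothetical illustration only; the paper offers a conceptual analysis of
# consent, not an implementation. All names below are invented for the example.
from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    # Action kinds this agent's user has consented to (the user's preferences).
    permitted_actions: set = field(default_factory=set)

    def grants_consent(self, action_kind: str) -> bool:
        """Grant consent only for actions the user has permitted."""
        return action_kind in self.permitted_actions


@dataclass
class Action:
    kind: str
    actor: Agent
    affected: list  # agents whose perceived outcomes the action impinges upon


def act_responsibly(action: Action) -> bool:
    """Perform the action only if every affected agent consents."""
    refusing = [a.name for a in action.affected if not a.grants_consent(action.kind)]
    if refusing:
        print(f"{action.actor.name}: consent withheld by {refusing}; not acting.")
        return False
    print(f"{action.actor.name}: all affected agents consent; doing '{action.kind}'.")
    return True


if __name__ == "__main__":
    alice = Agent("alice", permitted_actions={"share_calendar"})
    bob = Agent("bob")  # bob's user has consented to nothing
    assistant = Agent("assistant")

    act_responsibly(Action("share_calendar", assistant, [alice]))       # proceeds
    act_responsibly(Action("share_calendar", assistant, [alice, bob]))  # blocked
```

A fuller treatment would make consent revocable and scoped, and would propagate it through other agents to their users, as the abstract indicates.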
Related papers
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios; a toy SCM sketch appears after this list.
arXiv Detail & Related papers (2024-11-05T17:17:45Z)
- Exploration and Persuasion [58.87314871998078]
We show how to incentivize self-interested agents to explore when they prefer to exploit.
Consider a population of self-interested agents that make decisions under uncertainty.
They "explore" to acquire new information and "exploit" this information to make good decisions.
Agents prefer to exploit because exploration is costly, while its benefits are spread over many agents in the future.
arXiv Detail & Related papers (2024-10-22T15:13:13Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Moral Responsibility for AI Systems [8.919993498343159]
Moral responsibility for an outcome of an agent who performs some action is commonly taken to involve both a causal condition and an epistemic condition.
This paper presents a formal definition of both conditions within the framework of causal models.
arXiv Detail & Related papers (2023-10-27T10:37:47Z)
- Unravelling Responsibility for AI [0.8836921728313208]
It is widely acknowledged that we need to establish where responsibility lies for the outputs and impacts of AI-enabled systems.
This paper draws upon central distinctions in philosophy and law to clarify the concept of responsibility for AI.
arXiv Detail & Related papers (2023-08-04T13:12:17Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Towards Socially Intelligent Agents with Mental State Transition and Human Utility [97.01430011496576]
We propose to incorporate a mental state and utility model into dialogue agents.
The hybrid mental state extracts information from both the dialogue and event observations.
The utility model is a ranking model that learns human preferences from a crowd-sourced social commonsense dataset.
arXiv Detail & Related papers (2021-03-12T00:06:51Z)
- Human Perceptions on Moral Responsibility of AI: A Case Study in AI-Assisted Bail Decision-Making [8.688778020322758]
We measure people's perceptions of eight different notions of moral responsibility concerning AI and human agents.
We show that AI agents are held causally responsible and blamed similarly to human agents for an identical task.
We find that people expect both AI and human decision-makers and advisors to justify their decisions regardless of their nature.
arXiv Detail & Related papers (2021-02-01T04:07:38Z)
- Hiding Behind Machines: When Blame Is Shifted to Artificial Agents [0.0]
This article focuses on the responsibility of agents who decide on our behalf.
We investigate whether the production of moral outcomes by an agent is systematically judged differently when the agent is artificial and not human.
arXiv Detail & Related papers (2021-01-27T14:50:02Z)
- Learning Latent Representations to Influence Multi-Agent Interaction [65.44092264843538]
We propose a reinforcement learning-based framework for learning latent representations of an agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
arXiv Detail & Related papers (2020-11-12T19:04:26Z)
- Responsible AI and Its Stakeholders [14.129366395072026]
We discuss three notions of responsibility (i.e., blameworthiness, accountability, and liability) for all stakeholders, including AI, and suggest the roles of jurisdiction and the general public in this matter.
arXiv Detail & Related papers (2020-04-23T19:27:19Z)
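Following the pointer in the Causal Responsibility Attribution entry above, here is a toy, hypothetical sketch in the spirit of SCM-based attribution (it is not the cited framework): the outcome is a deterministic function of an AI recommendation and a human override, and an agent is flagged as responsible when intervening on its variable would flip the actual outcome. The variable names and the simple but-for attribution rule are assumptions made for this example.

```python
# Hypothetical toy structural causal model for a human-AI decision.
# Not the cited framework; the attribution rule here is a simple but-for test.

def outcome(ai_recommendation: bool, human_override: bool) -> bool:
    """Endogenous variable: the AI's recommendation stands unless overridden."""
    return ai_recommendation and not human_override


def counterfactually_responsible(ai_recommendation: bool, human_override: bool) -> dict:
    """Flag an agent if intervening on its variable flips the actual outcome."""
    actual = outcome(ai_recommendation, human_override)
    return {
        "ai": outcome(not ai_recommendation, human_override) != actual,
        "human": outcome(ai_recommendation, not human_override) != actual,
    }


if __name__ == "__main__":
    # The AI recommends the action and the human does not override it.
    print(counterfactually_responsible(ai_recommendation=True, human_override=False))
    # {'ai': True, 'human': True}: either agent alone could have changed the outcome.
```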