Responsibility in a Multi-Value Strategic Setting
- URL: http://arxiv.org/abs/2410.17229v1
- Date: Tue, 22 Oct 2024 17:51:13 GMT
- Title: Responsibility in a Multi-Value Strategic Setting
- Authors: Timothy Parker, Umberto Grandi, Emiliano Lorini
- Abstract summary: Responsibility is a key notion in multi-agent systems and in creating safe, reliable and ethical AI.
We present a model for responsibility attribution in a multi-agent, multi-value setting.
We show how considerations of responsibility can help an agent to select strategies that are in line with its values.
- Abstract: Responsibility is a key notion in multi-agent systems and in creating safe, reliable and ethical AI. However, most previous work on responsibility has only considered responsibility for single outcomes. In this paper we present a model for responsibility attribution in a multi-agent, multi-value setting. We also expand our model to cover responsibility anticipation, demonstrating how considerations of responsibility can help an agent to select strategies that are in line with its values. In particular we show that non-dominated regret-minimising strategies reliably minimise an agent's expected degree of responsibility.
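The abstract's central claim is that non-dominated regret-minimising strategies reliably minimise an agent's expected degree of responsibility. The following is a minimal illustrative sketch of what selecting such a strategy looks like; the payoff table and strategy names are toy assumptions, not the paper's formal model.

```python
# Toy payoff table (assumed for illustration): rows are the agent's
# strategies, columns are possible states of the other agents/world.
payoffs = {
    "s1": [3, 1, 4],
    "s2": [2, 2, 2],
    "s3": [1, 3, 0],
}

n_states = len(next(iter(payoffs.values())))
# Best achievable payoff in each state (column maxima).
best = [max(p[j] for p in payoffs.values()) for j in range(n_states)]

def max_regret(strategy):
    """Worst-case regret: largest shortfall vs. the best payoff per state."""
    return max(best[j] - payoffs[strategy][j] for j in range(n_states))

def dominated(s):
    """True if another strategy is at least as good in every state and
    strictly better in at least one."""
    return any(
        all(payoffs[t][j] >= payoffs[s][j] for j in range(n_states))
        and any(payoffs[t][j] > payoffs[s][j] for j in range(n_states))
        for t in payoffs if t != s
    )

# Restrict to non-dominated strategies, then pick one minimising max regret.
candidates = [s for s in payoffs if not dominated(s)]
choice = min(candidates, key=max_regret)
```

Filtering dominated strategies first matters: a dominated strategy can never be preferable under any state, so the regret minimiser is chosen only among strategies that remain defensible.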
Related papers
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z)
- Responsibility-aware Strategic Reasoning in Probabilistic Multi-Agent Systems [1.7819574476785418]
Responsibility plays a key role in the development and deployment of trustworthy autonomous systems.
We introduce the logic PATL+R, a variant of Probabilistic Alternating-time Temporal Logic.
We present an approach to synthesise joint strategies that satisfy an outcome specified in PATL+R.
arXiv Detail & Related papers (2024-10-31T18:49:12Z)
- Measuring Responsibility in Multi-Agent Systems [1.5883812630616518]
We introduce a family of quantitative measures of responsibility in multi-agent planning.
We ascribe responsibility to agents for a given outcome by probabilistically linking agents' behaviours to that outcome through three metrics.
An entropy-based measurement of responsibility is the first to capture the causal responsibility properties of outcomes over time.
arXiv Detail & Related papers (2024-10-31T18:45:34Z)
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Agent-Oriented Planning in Multi-Agent Systems [54.429028104022066]
We propose a novel framework for agent-oriented planning in multi-agent systems, leveraging a fast task decomposition and allocation process.
We integrate a feedback loop into the proposed framework to further enhance the effectiveness and robustness of such a problem-solving process.
arXiv Detail & Related papers (2024-10-03T04:07:51Z)
- Criticality and Safety Margins for Reinforcement Learning [53.10194953873209]
We seek to define a criticality framework with both a quantifiable ground truth and a clear significance to users.
We introduce true criticality as the expected drop in reward when an agent deviates from its policy for n consecutive random actions.
We also introduce the concept of proxy criticality, a low-overhead metric that has a statistically monotonic relationship to true criticality.
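The definition above, true criticality as the expected drop in return when an agent deviates from its policy for n consecutive random actions, can be estimated by Monte Carlo rollouts. Below is a minimal sketch on a toy one-dimensional environment; the environment, the go-right policy, and all constants are assumptions made here for illustration, not the paper's setup.

```python
import random

# Toy corridor: the agent starts at some cell, the goal is cell GOAL.
# The (assumed) policy always moves right; reaching the goal pays GOAL_REWARD,
# each step costs STEP_COST.
GOAL, STEP_COST, GOAL_REWARD, HORIZON = 5, -1.0, 10.0, 20

def step(pos, action):
    """Move left (-1) or right (+1), clamped to [0, GOAL]."""
    return max(0, min(GOAL, pos + action))

def rollout(start, deviate_for=0, seed=None):
    """Return the episode return; the first `deviate_for` actions are random,
    after which the agent resumes its policy (always move right)."""
    rng = random.Random(seed)
    pos, total = start, 0.0
    for t in range(HORIZON):
        if pos == GOAL:
            return total + GOAL_REWARD
        action = rng.choice([-1, 1]) if t < deviate_for else 1
        pos = step(pos, action)
        total += STEP_COST
    return total

def true_criticality(state, n, samples=2000):
    """Expected drop in return from n consecutive random actions at `state`."""
    base = rollout(state)  # deterministic policy return
    dev = sum(rollout(state, deviate_for=n, seed=i)
              for i in range(samples)) / samples
    return base - dev
```

With n = 0 the deviation rollout coincides with the policy rollout, so criticality is zero; larger n yields a larger expected return drop, matching the intuition that longer random deviations are more costly.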
arXiv Detail & Related papers (2024-09-26T21:00:45Z)
- Multi-Agent Imitation Learning: Value is Easy, Regret is Hard [52.31989962031179]
We study a multi-agent imitation learning (MAIL) problem where we take the perspective of a learner attempting to coordinate a group of agents.
Most prior work in MAIL essentially reduces the problem to matching the behavior of the expert within the support of the demonstrations.
While doing so is sufficient to drive the value gap between the learner and the expert to zero under the assumption that agents are non-strategic, it does not guarantee robustness to deviations by strategic agents.
arXiv Detail & Related papers (2024-06-06T16:18:20Z)
- Unravelling Responsibility for AI [0.8836921728313208]
It is widely acknowledged that we need to establish where responsibility lies for the outputs and impacts of AI-enabled systems.
This paper draws upon central distinctions in philosophy and law to clarify the concept of responsibility for AI.
arXiv Detail & Related papers (2023-08-04T13:12:17Z)
- Anticipating Responsibility in Multiagent Planning [9.686474898346392]
Responsibility anticipation is the process of determining whether the actions of an individual agent may cause it to be responsible for a particular outcome.
This can be used in a multi-agent planning setting to allow agents to anticipate responsibility in the plans they consider.
arXiv Detail & Related papers (2023-07-31T13:58:49Z)
- Modeling Bounded Rationality in Multi-Agent Simulations Using Rationally Inattentive Reinforcement Learning [85.86440477005523]
We study more human-like RL agents which incorporate an established model of human-irrationality, the Rational Inattention (RI) model.
RIRL models the cost of cognitive information processing using mutual information.
We show that using RIRL yields a rich spectrum of new equilibrium behaviors that differ from those found under rational assumptions.
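RIRL's cost of cognitive information processing is the mutual information between states and actions, I(S;A) = Σ p(s,a) log(p(s,a) / (p(s)p(a))). A minimal sketch of that quantity follows; the joint distribution below is a toy assumption, not taken from the paper.

```python
import math

# Assumed joint distribution p(state, action) over two states and two actions.
joint = {
    ("s0", "a0"): 0.4, ("s0", "a1"): 0.1,
    ("s1", "a0"): 0.1, ("s1", "a1"): 0.4,
}

def marginal(index):
    """Marginalise the joint over the other variable (0 = state, 1 = action)."""
    out = {}
    for key, p in joint.items():
        out[key[index]] = out.get(key[index], 0.0) + p
    return out

p_s, p_a = marginal(0), marginal(1)

def mutual_information():
    """I(S;A) in nats: how much the chosen action reveals about the state."""
    return sum(
        p * math.log(p / (p_s[s] * p_a[a]))
        for (s, a), p in joint.items() if p > 0
    )
```

A higher mutual information means the policy attends more closely to the state; charging for it, as RIRL does, pushes agents toward cheaper, less state-sensitive (more "inattentive") behaviour.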
arXiv Detail & Related papers (2022-01-18T20:54:00Z)
- Degrees of individual and groupwise backward and forward responsibility in extensive-form games with ambiguity, and their application to social choice problems [0.0]
We present several different quantitative responsibility metrics that assess responsibility degrees in units of probability.
We use a framework based on an adapted version of extensive-form game trees and an axiomatic approach.
We find that while most properties one might desire of such responsibility metrics can be fulfilled by some variant, an optimal metric that clearly outperforms others has yet to be found.
arXiv Detail & Related papers (2020-07-09T13:19:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.