Measuring Responsibility in Multi-Agent Systems
- URL: http://arxiv.org/abs/2411.00887v1
- Date: Thu, 31 Oct 2024 18:45:34 GMT
- Title: Measuring Responsibility in Multi-Agent Systems
- Authors: Chunyan Mu, Nir Oren
- Abstract summary: We introduce a family of quantitative measures of responsibility in multi-agent planning.
We ascribe responsibility to agents for a given outcome by linking behaviours to responsibility through probabilities, using three metrics.
An entropy-based measurement of responsibility is the first to capture the causal responsibility properties of outcomes over time.
- Score: 1.5883812630616518
- License:
- Abstract: We introduce a family of quantitative measures of responsibility in multi-agent planning, building upon the concepts of causal responsibility proposed by Parker et al. [ParkerGL23]. These concepts are formalised within a variant of probabilistic alternating-time temporal logic. Unlike existing approaches, our framework ascribes responsibility to agents for a given outcome by linking probabilities between behaviours and responsibility through three metrics, including an entropy-based measurement of responsibility. This latter measure is the first to capture the causal responsibility properties of outcomes over time, offering an asymptotic measurement that reflects the difficulty of achieving these outcomes. Our approach provides a fresh understanding of responsibility in multi-agent systems, illuminating both the qualitative and quantitative aspects of agents' roles in achieving or preventing outcomes.
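As a rough, minimal sketch of how an entropy-based responsibility measure can be read (an illustration only, not the paper's definition, which is stated within its probabilistic alternating-time temporal logic), assume each agent $i$ carries a normalised responsibility weight $R_i(\omega) \ge 0$ for an outcome $\omega$, with $\sum_i R_i(\omega) = 1$; the entropy of this weight distribution then indicates how evenly responsibility for $\omega$ is spread across the agents.

```latex
% Illustrative sketch only: R_i(\omega) is an assumed normalised
% responsibility weight for agent i and outcome \omega, not the
% paper's formal construction.
\[
  H(\omega) \;=\; -\sum_{i \in \mathit{Agt}} R_i(\omega)\,\log_2 R_i(\omega)
\]
% H(\omega) is low when responsibility concentrates on a few agents
% and high when it is diffused across the whole coalition.
```

The measure described in the abstract goes further than this static picture: it is asymptotic over time and reflects how difficult the outcome is to achieve or prevent.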
Related papers
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z)
- Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation.
We propose methods tailored to the unique properties of perception and decision-making.
We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
arXiv Detail & Related papers (2024-11-03T17:32:00Z)
- Responsibility-aware Strategic Reasoning in Probabilistic Multi-Agent Systems [1.7819574476785418]
Responsibility plays a key role in the development and deployment of trustworthy autonomous systems.
We introduce the logic PATL+R, a variant of Probabilistic Alternating-time Temporal Logic.
We present an approach to synthesise joint strategies that satisfy an outcome specified in PATL+R.
arXiv Detail & Related papers (2024-10-31T18:49:12Z)
- Responsibility in a Multi-Value Strategic Setting [12.143925288392166]
Responsibility is a key notion in multi-agent systems and in creating safe, reliable and ethical AI.
We present a model for responsibility attribution in a multi-agent, multi-value setting.
We show how considerations of responsibility can help an agent to select strategies that are in line with its values.
arXiv Detail & Related papers (2024-10-22T17:51:13Z)
- Agent-Oriented Planning in Multi-Agent Systems [54.429028104022066]
We propose a novel framework for agent-oriented planning in multi-agent systems, leveraging a fast task decomposition and allocation process.
We integrate a feedback loop into the proposed framework to further enhance the effectiveness and robustness of such a problem-solving process.
arXiv Detail & Related papers (2024-10-03T04:07:51Z)
- Attributing Responsibility in AI-Induced Incidents: A Computational Reflective Equilibrium Framework for Accountability [13.343937277604892]
The pervasive integration of Artificial Intelligence (AI) has introduced complex challenges for responsibility and accountability when incidents involve AI-enabled systems.
This work proposes a coherent and ethically acceptable responsibility attribution framework for all stakeholders.
arXiv Detail & Related papers (2024-04-25T18:11:03Z)
- Neural Interaction Energy for Multi-Agent Trajectory Prediction [55.098754835213995]
We introduce a framework called Multi-Agent Trajectory prediction via neural interaction Energy (MATE).
MATE assesses the interactive motion of agents by employing neural interaction energy.
To bolster temporal stability, we introduce two constraints: inter-agent interaction constraint and intra-agent motion constraint.
arXiv Detail & Related papers (2024-04-25T12:47:47Z)
- Anticipating Responsibility in Multiagent Planning [9.686474898346392]
Responsibility anticipation is the process of determining whether the actions of an individual agent may cause it to be responsible for a particular outcome.
This can be used in a multi-agent planning setting to allow agents to anticipate responsibility in the plans they consider.
arXiv Detail & Related papers (2023-07-31T13:58:49Z)
- On the Complexity of Multi-Agent Decision Making: From Learning in Games to Partial Monitoring [105.13668993076801]
A central problem in the theory of multi-agent reinforcement learning (MARL) is to understand what structural conditions and algorithmic principles lead to sample-efficient learning guarantees.
We study this question in a general framework for interactive decision making with multiple agents.
We show that characterizing the statistical complexity for multi-agent decision making is equivalent to characterizing the statistical complexity of single-agent decision making.
arXiv Detail & Related papers (2023-05-01T06:46:22Z)
- Policy Diagnosis via Measuring Role Diversity in Cooperative Multi-agent RL [107.58821842920393]
We quantify agents' behavior differences and relate them to policy performance via Role Diversity.
We find that the error bound in MARL can be decomposed into three parts that have a strong relation to the role diversity.
The decomposed factors can significantly impact policy optimization along three popular directions.
arXiv Detail & Related papers (2022-06-01T04:58:52Z)
- Degrees of individual and groupwise backward and forward responsibility in extensive-form games with ambiguity, and their application to social choice problems [0.0]
We present several different quantitative responsibility metrics that assess responsibility degrees in units of probability.
We use a framework based on an adapted version of extensive-form game trees and an axiomatic approach.
We find that while most properties one might desire of such responsibility metrics can be fulfilled by some variant, an optimal metric that clearly outperforms the others has yet to be found; a toy sketch of a probability-valued responsibility degree appears after this list.
arXiv Detail & Related papers (2020-07-09T13:19:13Z)
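Several of the papers above, including the entry directly preceding and the headline paper itself, express responsibility degrees in units of probability. The following is a minimal sketch of that idea under assumed modelling choices: two agents with independent mixed strategies, a single bad outcome that occurs only if nobody intervenes, and pivotality (whether unilaterally switching one's action would flip the outcome) as the responsibility criterion. The agent names, strategies, and helper functions are hypothetical and do not come from any of the listed papers.

```python
# Minimal illustrative sketch of a probability-valued responsibility degree.
# Assumptions (not taken from any paper above): two agents choose actions
# independently; the bad outcome occurs iff nobody intervenes; an agent's
# degree of responsibility is the probability that the bad outcome occurs
# AND the agent was pivotal (switching its own action would have flipped it).
from itertools import product

AGENTS = ["A", "B"]
ACTIONS = ["intervene", "ignore"]

# Assumed mixed strategies: probability of choosing "intervene".
strategy = {"A": 0.2, "B": 0.5}

def bad_outcome(profile):
    """The bad outcome occurs iff every agent ignores."""
    return all(action == "ignore" for action in profile.values())

def prob(profile):
    """Probability of a joint action profile under independent mixing."""
    p = 1.0
    for agent, action in profile.items():
        p *= strategy[agent] if action == "intervene" else 1 - strategy[agent]
    return p

def pivotal(agent, profile):
    """True if unilaterally switching this agent's action flips the outcome."""
    flipped = dict(profile)
    flipped[agent] = "ignore" if profile[agent] == "intervene" else "intervene"
    return bad_outcome(profile) != bad_outcome(flipped)

def responsibility_degree(agent):
    """Probability mass of profiles where the bad outcome occurs and the agent is pivotal."""
    total = 0.0
    for actions in product(ACTIONS, repeat=len(AGENTS)):
        profile = dict(zip(AGENTS, actions))
        if bad_outcome(profile) and pivotal(agent, profile):
            total += prob(profile)
    return total

for agent in AGENTS:
    print(agent, round(responsibility_degree(agent), 3))
```

Under these assumptions both agents receive degree 0.4: the bad outcome only arises when both ignore (probability 0.8 × 0.5 = 0.4), and in that profile either agent alone could have prevented it.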