Causality, Responsibility and Blame in Team Plans
- URL: http://arxiv.org/abs/2005.10297v1
- Date: Wed, 20 May 2020 18:21:19 GMT
- Title: Causality, Responsibility and Blame in Team Plans
- Authors: Natasha Alechina, Joseph Y. Halpern, and Brian Logan
- Abstract summary: We show how team plans can be represented in terms of structural equations.
We then apply the definitions of causality introduced by Halpern [2015] and degree of responsibility and blame introduced by Chockler and Halpern [2004] to determine the agent(s) who caused the failure.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many objectives can be achieved (or may be achieved more effectively) only by
a group of agents executing a team plan. If a team plan fails, it is often of
interest to determine what caused the failure, the degree of responsibility of
each agent for the failure, and the degree of blame attached to each agent. We
show how team plans can be represented in terms of structural equations, and
then apply the definitions of causality introduced by Halpern [2015] and degree
of responsibility and blame introduced by Chockler and Halpern [2004] to
determine the agent(s) who caused the failure and what their degree of
responsibility/blame is. We also prove new results on the complexity of
computing causality and degree of responsibility and blame, showing that they
can be determined in polynomial time for many team plans of interest.
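The abstract's core idea can be illustrated with a small sketch. The code below is not from the paper: the plan (two agents whose tasks must both succeed) and all variable names are illustrative. It encodes the team outcome as a structural equation and applies a simple but-for counterfactual test in the spirit of Halpern [2015]: a variable is a cause of the failure if intervening on it alone flips the outcome.

```python
# Minimal sketch (illustrative, not the paper's construction): a team plan
# encoded as a structural equation, with a but-for causality check.

def success(a1_done, a2_done):
    # Structural equation for the team outcome: the plan succeeds
    # only if both agents complete their tasks.
    return a1_done and a2_done

# Actual context: agent 1 failed its task, agent 2 completed its task,
# so the team plan failed.
actual = {"a1_done": False, "a2_done": True}

def is_but_for_cause(var, context):
    """Check whether intervening on `var` alone changes the outcome."""
    flipped = dict(context)
    flipped[var] = not flipped[var]
    return success(**context) != success(**flipped)

causes = [v for v in actual if is_but_for_cause(v, actual)]
# Agent 1's failure is a but-for cause of the plan failure; agent 2's
# action is not, since flipping it alone cannot make the plan succeed.
print(causes)  # → ['a1_done']
```

In the Chockler-Halpern framework, a but-for cause (no contingency needed) receives degree of responsibility 1; when the outcome only flips under a contingency changing k other variables, the responsibility drops to 1/(k+1). Enumerating contingencies is expensive in general, which is why the paper's polynomial-time results for common team-plan structures matter.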
Related papers
- KnowAgent: Knowledge-Augmented Planning for LLM-Based Agents [54.09074527006576]
Large Language Models (LLMs) have demonstrated great potential in complex reasoning tasks, yet they fall short when tackling more sophisticated challenges.
This inadequacy primarily stems from the lack of built-in action knowledge in language agents.
We introduce KnowAgent, a novel approach designed to enhance the planning capabilities of LLMs by incorporating explicit action knowledge.
arXiv Detail & Related papers (2024-03-05T16:39:12Z)
- On Catastrophic Inheritance of Large Foundation Models [56.169678293678885]
Large foundation models (LFMs) are achieving impressive performance, yet serious concerns have been raised about their opaque and poorly understood potential.
We propose to identify a neglected issue deeply rooted in LFMs: Catastrophic Inheritance.
We discuss the challenges behind this issue and propose UIM, a framework to understand the catastrophic inheritance of LFMs from both pre-training and downstream adaptation.
arXiv Detail & Related papers (2024-02-02T21:21:55Z)
- Responsibility in Extensive Form Games [1.4104545468525629]
Two different forms of responsibility, counterfactual and seeing-to-it, have been extensively discussed in philosophy and AI.
This paper proposes a definition of seeing-to-it responsibility for extensive form games that amalgamates the two modalities.
It shows that although these two forms of responsibility are not enough to ascribe responsibility in each possible situation, this gap does not exist if higher-order responsibility is taken into account.
arXiv Detail & Related papers (2023-12-12T10:41:17Z)
- Anticipating Responsibility in Multiagent Planning [9.686474898346392]
Responsibility anticipation is a process of determining if the actions of an individual agent may cause it to be responsible for a particular outcome.
This can be used in a multi-agent planning setting to allow agents to anticipate responsibility in the plans they consider.
arXiv Detail & Related papers (2023-07-31T13:58:49Z)
- Adaptation and Communication in Human-Robot Teaming to Handle Discrepancies in Agents' Beliefs about Plans [13.637799815698559]
We provide an online execution algorithm based on Monte Carlo Tree Search for the agent to plan its action.
We show that our agent is better equipped to work in teams without the guarantee of a shared mental model.
arXiv Detail & Related papers (2023-07-07T03:05:34Z)
- On the Complexity of Multi-Agent Decision Making: From Learning in Games to Partial Monitoring [105.13668993076801]
A central problem in the theory of multi-agent reinforcement learning (MARL) is to understand what structural conditions and algorithmic principles lead to sample-efficient learning guarantees.
We study this question in a general framework for interactive decision making with multiple agents.
We show that characterizing the statistical complexity for multi-agent decision making is equivalent to characterizing the statistical complexity of single-agent decision making.
arXiv Detail & Related papers (2023-05-01T06:46:22Z)
- Fault-Tolerant Offline Multi-Agent Path Planning [5.025654873456756]
We study a novel graph path planning problem for multiple agents that may crash at runtime, and block part of the workspace.
In our setting, agents can detect neighboring crashed agents and change their paths at runtime. The objective is then to prepare a set of paths and switching rules for each agent, ensuring that all correct agents reach their destinations without collisions or deadlocks.
arXiv Detail & Related papers (2022-11-25T05:58:32Z)
- Formalizing the Problem of Side Effect Regularization [81.97441214404247]
We propose a formal criterion for side effect regularization via the assistance game framework.
In these games, the agent solves a partially observable Markov decision process.
We show that this POMDP is solved by trading off the proxy reward with the agent's ability to achieve a range of future tasks.
arXiv Detail & Related papers (2022-06-23T16:36:13Z)
- Coach-assisted Multi-Agent Reinforcement Learning Framework for Unexpected Crashed Agents [120.91291581594773]
We present a formal formulation of a cooperative multi-agent reinforcement learning system with unexpected crashes.
We propose a coach-assisted multi-agent reinforcement learning framework, which introduces a virtual coach agent to adjust the crash rate during training.
To the best of our knowledge, this work is the first to study unexpected crashes in multi-agent systems.
arXiv Detail & Related papers (2022-03-16T08:22:45Z)
- Collaborative Human-Agent Planning for Resilience [5.2123460114614435]
We investigate whether people can collaborate with agents by providing their knowledge to an agent using linear temporal logic (LTL) at run-time.
We presented 24 participants with baseline plans for situations in which a planner had limitations, and asked the participants for workarounds for these limitations.
Results show that participants' constraints improved the expected return of the plans by 10%.
arXiv Detail & Related papers (2021-04-29T03:21:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.