Causality, Responsibility and Blame in Team Plans
- URL: http://arxiv.org/abs/2005.10297v1
- Date: Wed, 20 May 2020 18:21:19 GMT
- Title: Causality, Responsibility and Blame in Team Plans
- Authors: Natasha Alechina, Joseph Y. Halpern, and Brian Logan
- Abstract summary: We show how team plans can be represented in terms of structural equations.
We then apply the definitions of causality introduced by Halpern [2015] and degree of responsibility and blame introduced by Chockler and Halpern [2004] to determine the agent(s) who caused the failure.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many objectives can be achieved (or may be achieved more effectively) only by
a group of agents executing a team plan. If a team plan fails, it is often of
interest to determine what caused the failure, the degree of responsibility of
each agent for the failure, and the degree of blame attached to each agent. We
show how team plans can be represented in terms of structural equations, and
then apply the definitions of causality introduced by Halpern [2015] and degree
of responsibility and blame introduced by Chockler and Halpern [2004] to
determine the agent(s) who caused the failure and what their degree of
responsibility/blame is. We also prove new results on the complexity of
computing causality and degree of responsibility and blame, showing that they
can be determined in polynomial time for many team plans of interest.
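The paper's core idea, representing a team plan as structural equations and then applying the Halpern [2015] definition of causality and the Chockler-Halpern [2004] degree of responsibility, can be illustrated with a minimal sketch. The example below is hypothetical and heavily simplified: a two-agent plan whose success equation is `success = t1 AND t2`, with a brute-force responsibility computation of 1/(k+1), where k is the size of the smallest contingency (set of other variables flipped, with the failure still holding) under which flipping an agent's variable changes the outcome. The variable names, the plan, and the simplifications are ours, not the paper's.

```python
from itertools import combinations

# Hypothetical structural equation for a two-agent team plan:
# the plan succeeds only if both agents complete their tasks.
def success(ctx):
    return ctx["t1"] and ctx["t2"]

# Actual context: agent 1 performed its task, agent 2 did not,
# so the plan failed.
actual = {"t1": True, "t2": False}

def responsibility(var, ctx):
    """Simplified Chockler-Halpern [2004] degree of responsibility:
    1/(k+1), where k is the minimal number of OTHER variables that
    must be flipped (while the failure still holds) before flipping
    `var` changes the outcome; 0 if no such contingency exists,
    i.e. `var` is not a cause of the failure."""
    others = [v for v in ctx if v != var]
    for k in range(len(others) + 1):
        for changed in combinations(others, k):
            alt = dict(ctx)
            for w in changed:
                alt[w] = not alt[w]          # apply the contingency
            flipped = dict(alt)
            flipped[var] = not flipped[var]  # intervene on var
            # The contingency must preserve the actual outcome, and
            # the intervention on var must then change it.
            if success(alt) == success(ctx) and success(flipped) != success(ctx):
                return 1 / (k + 1)
    return 0

print({v: responsibility(v, actual) for v in actual})
```

Here agent 2's omitted task is the sole cause of the failure and carries full responsibility (1.0), while agent 1 carries none: flipping `t1` under the one available contingency (`t2 = True`) would already undo the failure, so no valid witness exists. This brute-force search is exponential in general, which is what makes the paper's polynomial-time results for many team plans of interest notable.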
Related papers
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z) - Measuring Responsibility in Multi-Agent Systems [1.5883812630616518]
We introduce a family of quantitative measures of responsibility in multi-agent planning.
We ascribe responsibility to agents for a given outcome via three metrics that link the probabilities of behaviours to responsibility.
An entropy-based measurement of responsibility is the first to capture the causal responsibility properties of outcomes over time.
arXiv Detail & Related papers (2024-10-31T18:45:34Z) - Responsibility in a Multi-Value Strategic Setting [12.143925288392166]
Responsibility is a key notion in multi-agent systems and in creating safe, reliable and ethical AI.
We present a model for responsibility attribution in a multi-agent, multi-value setting.
We show how considerations of responsibility can help an agent to select strategies that are in line with its values.
arXiv Detail & Related papers (2024-10-22T17:51:13Z) - Agent-Oriented Planning in Multi-Agent Systems [54.429028104022066]
We propose a novel framework for agent-oriented planning in multi-agent systems, leveraging a fast task decomposition and allocation process.
We integrate a feedback loop into the proposed framework to further enhance the effectiveness and robustness of such a problem-solving process.
arXiv Detail & Related papers (2024-10-03T04:07:51Z) - On the Resilience of LLM-Based Multi-Agent Collaboration with Faulty Agents [58.79302663733703]
Large language model-based multi-agent systems have shown great abilities across various tasks due to the collaboration of expert agents. The impact of clumsy or even malicious agents (those who frequently make errors in their tasks) on the overall performance of the system remains underexplored. This paper investigates the resilience of various system structures under faulty agents on different downstream tasks.
arXiv Detail & Related papers (2024-08-02T03:25:20Z) - KnowAgent: Knowledge-Augmented Planning for LLM-Based Agents [54.09074527006576]
Large Language Models (LLMs) have demonstrated great potential in complex reasoning tasks, yet they fall short when tackling more sophisticated challenges.
This inadequacy primarily stems from the lack of built-in action knowledge in language agents.
We introduce KnowAgent, a novel approach designed to enhance the planning capabilities of LLMs by incorporating explicit action knowledge.
arXiv Detail & Related papers (2024-03-05T16:39:12Z) - On Catastrophic Inheritance of Large Foundation Models [51.41727422011327]
Large foundation models (LFMs) achieve impressive performance, yet serious concerns have been raised about their poorly understood capabilities and potential risks.
We propose to identify a neglected issue deeply rooted in LFMs: Catastrophic Inheritance.
We discuss the challenges behind this issue and propose UIM, a framework to understand the catastrophic inheritance of LFMs from both pre-training and downstream adaptation.
arXiv Detail & Related papers (2024-02-02T21:21:55Z) - Unravelling Responsibility for AI [0.8472029664133528]
It is widely acknowledged that we need to establish where responsibility lies for the outputs and impacts of AI-enabled systems. This paper presents a conceptual framework of responsibility, accompanied by a graphical notation and general methodology. It unravels the concept of responsibility to clarify that there are different possibilities of who is responsible for AI.
arXiv Detail & Related papers (2023-08-04T13:12:17Z) - Anticipating Responsibility in Multiagent Planning [9.686474898346392]
Responsibility anticipation is the process of determining whether the actions of an individual agent may make it responsible for a particular outcome.
This can be used in a multi-agent planning setting to allow agents to anticipate responsibility in the plans they consider.
arXiv Detail & Related papers (2023-07-31T13:58:49Z) - Adaptation and Communication in Human-Robot Teaming to Handle Discrepancies in Agents' Beliefs about Plans [13.637799815698559]
We provide an online execution algorithm based on Monte Carlo Tree Search for the agent to plan its action.
We show that our agent is better equipped to work in teams without the guarantee of a shared mental model.
arXiv Detail & Related papers (2023-07-07T03:05:34Z) - Formalizing the Problem of Side Effect Regularization [81.97441214404247]
We propose a formal criterion for side effect regularization via the assistance game framework.
In these games, the agent solves a partially observable Markov decision process.
We show that this POMDP is solved by trading off the proxy reward with the agent's ability to achieve a range of future tasks.
arXiv Detail & Related papers (2022-06-23T16:36:13Z) - Coach-assisted Multi-Agent Reinforcement Learning Framework for Unexpected Crashed Agents [120.91291581594773]
We present a formal formulation of a cooperative multi-agent reinforcement learning system with unexpected crashes.
We propose a coach-assisted multi-agent reinforcement learning framework, which introduces a virtual coach agent to adjust the crash rate during training.
To the best of our knowledge, this work is the first to study unexpected crashes in multi-agent systems.
arXiv Detail & Related papers (2022-03-16T08:22:45Z) - Collaborative Human-Agent Planning for Resilience [5.2123460114614435]
We investigate whether people can collaborate with agents by providing their knowledge to an agent using linear temporal logic (LTL) at run-time.
We present 24 participants with baseline plans for situations in which a planner had limitations, and asked the participants for workarounds for these limitations.
Results show that participants' constraints improved the expected return of the plans by 10%.
arXiv Detail & Related papers (2021-04-29T03:21:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.