Causal Explanations for Sequential Decision-Making in Multi-Agent
Systems
- URL: http://arxiv.org/abs/2302.10809v4
- Date: Wed, 14 Feb 2024 18:28:52 GMT
- Title: Causal Explanations for Sequential Decision-Making in Multi-Agent
Systems
- Authors: Balint Gyevnar, Cheng Wang, Christopher G. Lucas, Shay B. Cohen,
Stefano V. Albrecht
- Abstract summary: CEMA is a framework for creating causal natural language explanations of an agent's decisions in sequential multi-agent systems.
We show CEMA correctly identifies the causes behind the agent's decisions, even when a large number of other agents are present.
We show via a user study that CEMA's explanations have a positive effect on participants' trust in autonomous vehicles.
- Score: 31.674391914683888
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present CEMA (Causal Explanations in Multi-Agent systems), a framework for
creating causal natural language explanations of an agent's decisions in
dynamic sequential multi-agent systems to build more trustworthy autonomous
agents. Unlike prior work that assumes a fixed causal structure, CEMA only
requires a probabilistic model for forward-simulating the state of the system.
Using such a model, CEMA simulates counterfactual worlds that identify the
salient causes behind the agent's decisions. We evaluate CEMA on the task of
motion planning for autonomous driving and test it in diverse simulated
scenarios. We show that CEMA correctly and robustly identifies the causes
behind the agent's decisions, even when a large number of other agents are
present, and show via a user study that CEMA's explanations have a positive
effect on participants' trust in autonomous vehicles and are rated as high as
high-quality baseline explanations elicited from other participants. We release
the collected explanations with annotations as the HEADD dataset.
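The abstract's core mechanism (forward-simulate counterfactual worlds with a probabilistic model, then score which interventions flip the agent's decision) can be illustrated with a toy sketch. Everything below is hypothetical: the braking rule, the `gap` threshold, and the intervention names are stand-ins, not CEMA's actual model.

```python
import random

def forward_simulate(state, intervention=None, rng=None):
    """Roll a toy probabilistic model of an ego vehicle one step forward.
    state: dict with ego speed and gap to the lead vehicle (metres).
    intervention: optional dict overriding state variables (a counterfactual)."""
    s = dict(state)
    if intervention:
        s.update(intervention)
    noise = (rng.random() - 0.5) if rng else 0.0
    # Ego brakes when the (noisy) gap falls below a safety threshold.
    return "brake" if s["gap"] + noise < 10.0 else "keep_speed"

def salient_causes(state, decision, candidate_interventions, n=200, seed=0):
    """CEMA-style attribution sketch: a variable counts as a salient cause
    when intervening on it flips the decision in most simulated worlds."""
    rng = random.Random(seed)
    scores = {}
    for name, intervention in candidate_interventions.items():
        flipped = sum(
            forward_simulate(state, intervention, rng) != decision
            for _ in range(n)
        )
        scores[name] = flipped / n
    return scores

state = {"ego_speed": 12.0, "gap": 8.0}   # lead vehicle is close
decision = forward_simulate(state)        # ego decides to brake
causes = salient_causes(
    state,
    decision,
    {
        "lead_vehicle_far": {"gap": 40.0},  # counterfactual: large gap
        "ego_slower": {"ego_speed": 5.0},   # counterfactual: irrelevant here
    },
)
# In this toy model only the gap intervention flips the decision, so the
# close lead vehicle is identified as the salient cause of braking.
```

The point of the sketch is the interface, matching the abstract's claim: no fixed causal graph is supplied, only a forward simulator, and causes are read off from counterfactual rollouts.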
Related papers
- Extending Structural Causal Models for Use in Autonomous Embodied Systems [5.309950889075669]
We present a case study in which we describe a module-based autonomous driving system composed of structural causal models (SCMs).
The first of these is SCM contexts; the remainder are three new variable categories, two of which are based on functional programming monads.
We conclude by presenting an example application of the causal capabilities of the autonomous driving system.
arXiv Detail & Related papers (2024-06-03T14:47:05Z) - BET: Explaining Deep Reinforcement Learning through The Error-Prone
Decisions [7.139669387895207]
We propose a novel self-interpretable structure, named Backbone Extract Tree (BET), to better explain the agent's behavior.
At a high level, BET hypothesizes that states in which the agent consistently executes uniform decisions exhibit a reduced propensity for errors.
We show BET's superiority over existing self-interpretable models in terms of explanation fidelity.
arXiv Detail & Related papers (2024-01-14T11:45:05Z) - Interactive Autonomous Navigation with Internal State Inference and
Interactivity Estimation [58.21683603243387]
We propose three auxiliary tasks with relational-temporal reasoning and integrate them into the standard Deep Learning framework.
These auxiliary tasks provide additional supervision signals to infer the behavior patterns of other interactive agents.
Our approach achieves robust and state-of-the-art performance in terms of standard evaluation metrics.
arXiv Detail & Related papers (2023-11-27T18:57:42Z) - On Imperfect Recall in Multi-Agent Influence Diagrams [57.21088266396761]
Multi-agent influence diagrams (MAIDs) are a popular game-theoretic model based on Bayesian networks.
We show how to solve MAIDs with forgetful and absent-minded agents using mixed policies and two types of correlated equilibrium.
We also describe applications of MAIDs to Markov games and team situations, where imperfect recall is often unavoidable.
arXiv Detail & Related papers (2023-07-11T07:08:34Z) - On the Complexity of Multi-Agent Decision Making: From Learning in Games
to Partial Monitoring [105.13668993076801]
A central problem in the theory of multi-agent reinforcement learning (MARL) is to understand what structural conditions and algorithmic principles lead to sample-efficient learning guarantees.
We study this question in a general framework for interactive decision making with multiple agents.
We show that characterizing the statistical complexity for multi-agent decision making is equivalent to characterizing the statistical complexity of single-agent decision making.
arXiv Detail & Related papers (2023-05-01T06:46:22Z) - Discovering Agents [10.751378433775606]
Causal models of agents have been used to analyse the safety aspects of machine learning systems.
This paper proposes the first formal causal definition of agents -- roughly that agents are systems that would adapt their policy if their actions influenced the world in a different way.
arXiv Detail & Related papers (2022-08-17T15:13:25Z) - Learning Causal Models of Autonomous Agents using Interventions [11.351235628684252]
We extend the analysis of an agent assessment module that lets an AI system execute high-level instruction sequences in simulators.
We show that such a primitive query-response capability is sufficient to efficiently derive a user-interpretable causal model of the system.
arXiv Detail & Related papers (2021-08-21T21:33:26Z) - Multi-Agent Imitation Learning with Copulas [102.27052968901894]
Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions.
In this paper, we propose to use copula, a powerful statistical tool for capturing dependence among random variables, to explicitly model the correlation and coordination in multi-agent systems.
Our proposed model is able to separately learn marginals that capture the local behavioral patterns of each individual agent, as well as a copula function that solely and fully captures the dependence structure among agents.
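The separation this entry describes (per-agent marginals plus a copula that captures only the dependence between agents) can be sketched with a Gaussian copula. The marginals below (a continuous acceleration and a discrete lane offset) and the correlation value are invented for illustration, not taken from the paper.

```python
import math
import random

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def inv_accel(u):
    """Hypothetical marginal for agent 0: uniform acceleration in [0, 2] m/s^2."""
    return 2.0 * u

def inv_lane(u):
    """Hypothetical marginal for agent 1: discrete lane offset in {-1, 0, 1}."""
    return -1 if u < 0.3 else (0 if u < 0.7 else 1)

def sample_joint_actions(rho, n, seed=0):
    """Gaussian-copula sketch: draw correlated latent normals, map them to
    uniforms with the normal CDF (the copula), then push each uniform
    through that agent's own marginal inverse CDF."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho**2) * rng.gauss(0.0, 1.0)
        samples.append((inv_accel(norm_cdf(z1)), inv_lane(norm_cdf(z2))))
    return samples

# Strong coordination (rho = 0.9): hard acceleration by agent 0 tends to
# co-occur with a positive lane offset by agent 1, while each agent's
# marginal behavior is modeled independently of the copula.
actions = sample_joint_actions(rho=0.9, n=1000)
```

Swapping either marginal leaves the dependence structure untouched, which is exactly the modularity the entry claims for the copula factorization.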
arXiv Detail & Related papers (2021-07-10T03:49:41Z) - CausalCity: Complex Simulations with Agency for Causal Discovery and
Reasoning [68.74447489372037]
We present a high-fidelity simulation environment that is designed for developing algorithms for causal discovery and counterfactual reasoning.
A core component of our work is to introduce "agency", such that it is simple to define and create complex scenarios.
We perform experiments with three state-of-the-art methods to create baselines and highlight the affordances of this environment.
arXiv Detail & Related papers (2021-06-25T00:21:41Z) - A Formal Framework for Reasoning about Agents' Independence in
Self-organizing Multi-agent Systems [0.7734726150561086]
This paper proposes a logic-based framework of self-organizing multi-agent systems.
We show that the computational complexity of verifying such a system remains close to the domain of standard ATL.
We also show how we can use our framework to model a constraint satisfaction problem.
arXiv Detail & Related papers (2021-05-17T07:32:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.