Causal Explanations for Sequential Decision-Making in Multi-Agent
Systems
- URL: http://arxiv.org/abs/2302.10809v4
- Date: Wed, 14 Feb 2024 18:28:52 GMT
- Title: Causal Explanations for Sequential Decision-Making in Multi-Agent
Systems
- Authors: Balint Gyevnar, Cheng Wang, Christopher G. Lucas, Shay B. Cohen,
Stefano V. Albrecht
- Abstract summary: CEMA is a framework for creating causal natural language explanations of an agent's decisions in sequential multi-agent systems.
We show CEMA correctly identifies the causes behind the agent's decisions, even when a large number of other agents are present.
We show via a user study that CEMA's explanations have a positive effect on participants' trust in autonomous vehicles.
- Score: 31.674391914683888
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present CEMA: Causal Explanations in Multi-Agent systems; a framework for
creating causal natural language explanations of an agent's decisions in
dynamic sequential multi-agent systems to build more trustworthy autonomous
agents. Unlike prior work that assumes a fixed causal structure, CEMA only
requires a probabilistic model for forward-simulating the state of the system.
Using such a model, CEMA simulates counterfactual worlds that identify the
salient causes behind the agent's decisions. We evaluate CEMA on the task of
motion planning for autonomous driving and test it in diverse simulated
scenarios. We show that CEMA correctly and robustly identifies the causes
behind the agent's decisions, even when a large number of other agents are
present, and show via a user study that CEMA's explanations have a positive
effect on participants' trust in autonomous vehicles and are rated as highly as
high-quality baseline explanations elicited from other participants. We release
the collected explanations with annotations as the HEADD dataset.
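The abstract's core mechanism, using a probabilistic forward-simulation model to generate counterfactual worlds and checking which interventions would change the agent's decision, can be illustrated with a short sketch. This is a minimal, hypothetical illustration rather than CEMA's actual implementation; all names and parameters (score_causes, intervene, forward_simulate, plan, n_samples) are assumptions introduced here.

```python
"""Hypothetical sketch of counterfactual cause scoring in the spirit of CEMA.
Not the paper's actual code or API; all names are illustrative assumptions."""

from typing import Callable, Dict, List


def score_causes(
    state: Dict,                                # factual world state (hypothetical encoding)
    candidate_causes: List[str],                # features to intervene on, e.g. "vehicle_2_present"
    intervene: Callable[[Dict, str], Dict],     # builds a counterfactual state for a given cause
    forward_simulate: Callable[[Dict], Dict],   # probabilistic model rolling the system forward
    plan: Callable[[Dict], str],                # the agent's decision given a simulated state
    n_samples: int = 50,
) -> Dict[str, float]:
    """Estimate how salient each candidate cause is for the factual decision.

    Each cause is scored by the fraction of counterfactual rollouts in which
    the agent's decision differs from the factual one after intervening on it.
    """
    factual_decision = plan(forward_simulate(state))
    scores: Dict[str, float] = {}
    for cause in candidate_causes:
        changed = 0
        for _ in range(n_samples):
            cf_state = intervene(state, cause)             # counterfactual world
            cf_decision = plan(forward_simulate(cf_state)) # re-plan under that world
            if cf_decision != factual_decision:
                changed += 1
        scores[cause] = changed / n_samples                # higher = more salient cause
    return scores
```

In this sketch, salience is a simple frequency of decision changes across sampled rollouts; repeated sampling stands in for the stochasticity of the forward-simulation model, and no fixed causal graph is required, which mirrors the abstract's claim that only a forward-simulation model is needed.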
Related papers
- Linguistic Fuzzy Information Evolution with Random Leader Election Mechanism for Decision-Making Systems [58.67035332062508]
Linguistic fuzzy information evolution is crucial in understanding information exchange among agents.
Different agent weights may lead to different convergence results in the classic DeGroot model.
This paper proposes three new models of linguistic fuzzy information dynamics.
arXiv Detail & Related papers (2024-10-19T18:15:24Z) - Agent-as-a-Judge: Evaluate Agents with Agents [61.33974108405561]
We introduce the Agent-as-a-Judge framework, wherein agentic systems are used to evaluate agentic systems.
This is an organic extension of the LLM-as-a-Judge framework, incorporating agentic features that enable intermediate feedback for the entire task-solving process.
We present DevAI, a new benchmark of 55 realistic automated AI development tasks.
arXiv Detail & Related papers (2024-10-14T17:57:02Z) - On the Resilience of Multi-Agent Systems with Malicious Agents [58.79302663733702]
This paper investigates the resilience of multi-agent system structures under malicious agents.
We devise two methods, AutoTransform and AutoInject, to transform any agent into a malicious one.
We show that two defense methods, introducing a mechanism for each agent to challenge others' outputs or adding an agent to review and correct messages, can enhance system resilience.
arXiv Detail & Related papers (2024-08-02T03:25:20Z) - BET: Explaining Deep Reinforcement Learning through The Error-Prone
Decisions [7.139669387895207]
We propose a novel self-interpretable structure, named Backbone Extract Tree (BET), to better explain the agent's behavior.
At a high level, BET hypothesizes that states in which the agent consistently executes uniform decisions exhibit a reduced propensity for errors.
We show BET's superiority over existing self-interpretable models in terms of explanation fidelity.
arXiv Detail & Related papers (2024-01-14T11:45:05Z) - On Imperfect Recall in Multi-Agent Influence Diagrams [57.21088266396761]
Multi-agent influence diagrams (MAIDs) are a popular game-theoretic model based on Bayesian networks.
We show how to solve MAIDs with forgetful and absent-minded agents using mixed policies and two types of correlated equilibrium.
We also describe applications of MAIDs to Markov games and team situations, where imperfect recall is often unavoidable.
arXiv Detail & Related papers (2023-07-11T07:08:34Z) - Discovering Agents [10.751378433775606]
Causal models of agents have been used to analyse the safety aspects of machine learning systems.
This paper proposes the first formal causal definition of agents -- roughly that agents are systems that would adapt their policy if their actions influenced the world in a different way.
arXiv Detail & Related papers (2022-08-17T15:13:25Z) - Differential Assessment of Black-Box AI Agents [29.98710357871698]
We propose a novel approach to differentially assess black-box AI agents that have drifted from their previously known models.
We leverage sparse observations of the drifted agent's current behavior and knowledge of its initial model to generate an active querying policy.
Empirical evaluation shows that our approach is much more efficient than re-learning the agent model from scratch.
arXiv Detail & Related papers (2022-03-24T17:48:58Z) - Learning Causal Models of Autonomous Agents using Interventions [11.351235628684252]
We extend the analysis of an agent assessment module that lets an AI system execute high-level instruction sequences in simulators.
We show that such a primitive query-response capability is sufficient to efficiently derive a user-interpretable causal model of the system.
arXiv Detail & Related papers (2021-08-21T21:33:26Z) - Multi-Agent Imitation Learning with Copulas [102.27052968901894]
Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions.
In this paper, we propose to use copula, a powerful statistical tool for capturing dependence among random variables, to explicitly model the correlation and coordination in multi-agent systems.
Our proposed model is able to separately learn marginals that capture the local behavioral patterns of each individual agent, as well as a copula function that solely and fully captures the dependence structure among agents.
arXiv Detail & Related papers (2021-07-10T03:49:41Z) - CausalCity: Complex Simulations with Agency for Causal Discovery and
Reasoning [68.74447489372037]
We present a high-fidelity simulation environment that is designed for developing algorithms for causal discovery and counterfactual reasoning.
A core component of our work is the introduction of agency, which makes it simple to define and create complex scenarios.
We perform experiments with three state-of-the-art methods to create baselines and highlight the affordances of this environment.
arXiv Detail & Related papers (2021-06-25T00:21:41Z) - A Formal Framework for Reasoning about Agents' Independence in
Self-organizing Multi-agent Systems [0.7734726150561086]
This paper proposes a logic-based framework of self-organizing multi-agent systems.
We show that the computational complexity of verifying such a system remains close to the domain of standard ATL.
We also show how we can use our framework to model a constraint satisfaction problem.
arXiv Detail & Related papers (2021-05-17T07:32:43Z)