Toward Policy Explanations for Multi-Agent Reinforcement Learning
- URL: http://arxiv.org/abs/2204.12568v1
- Date: Tue, 26 Apr 2022 20:07:08 GMT
- Title: Toward Policy Explanations for Multi-Agent Reinforcement Learning
- Authors: Kayla Boggess, Sarit Kraus, and Lu Feng
- Abstract summary: We present novel methods to generate two types of policy explanations for MARL.
Experimental results on three MARL domains demonstrate the scalability of our methods.
A user study shows that the generated explanations significantly improve user performance and increase subjective ratings on metrics such as user satisfaction.
- Score: 18.33682005623418
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Advances in multi-agent reinforcement learning (MARL) enable sequential
decision making for a range of exciting multi-agent applications such as
cooperative AI and autonomous driving. Explaining agent decisions is crucial
for improving system transparency, increasing user satisfaction, and
facilitating human-agent collaboration. However, existing works on explainable
reinforcement learning mostly focus on the single-agent setting and are not
suitable for addressing challenges posed by multi-agent environments. We
present novel methods to generate two types of policy explanations for MARL:
(i) policy summarization about the agent cooperation and task sequence, and
(ii) language explanations to answer queries about agent behavior. Experimental
results on three MARL domains demonstrate the scalability of our methods. A
user study shows that the generated explanations significantly improve user
performance and increase subjective ratings on metrics such as user
satisfaction.
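The abstract only names the two explanation types; the sketch below is a rough, hypothetical illustration (not the paper's actual algorithm) of how a task-sequence and cooperation summary, plus a simple "when" query answer, could be extracted from recorded rollouts of a trained joint policy. The trajectory format, field names, and the domain example are all assumptions.
```python
# Illustrative sketch only: summarize which agents worked on which tasks (and in
# what order) from recorded rollouts, and answer a simple behavior query from the
# same trace. The rollout schema below is an assumption, not the paper's method.
from collections import defaultdict

def summarize_rollout(steps):
    """steps: list of dicts like {"t": int, "agent": str, "task": str, "action": str}."""
    task_order = []                 # tasks in order of first appearance
    task_agents = defaultdict(set)  # which agents contributed to each task
    for s in sorted(steps, key=lambda s: s["t"]):
        if s["task"] not in task_order:
            task_order.append(s["task"])
        task_agents[s["task"]].add(s["agent"])
    return task_order, task_agents

def answer_when_query(steps, agent, action):
    """Answer 'When does <agent> do <action>?' from the recorded trace."""
    times = [s["t"] for s in steps if s["agent"] == agent and s["action"] == action]
    if not times:
        return f"{agent} never performs {action} in this rollout."
    return f"{agent} performs {action} at steps {times}."

# Toy rollout with hypothetical agents, tasks, and actions.
rollout = [
    {"t": 0, "agent": "robot1", "task": "clear_debris", "action": "lift"},
    {"t": 1, "agent": "robot2", "task": "clear_debris", "action": "push"},
    {"t": 2, "agent": "robot1", "task": "deliver_kit", "action": "carry"},
]
order, agents = summarize_rollout(rollout)
print("Task sequence:", " -> ".join(order))
print("Cooperation:", {k: sorted(v) for k, v in agents.items()})
print(answer_when_query(rollout, "robot1", "carry"))
```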
Related papers
- From Novice to Expert: LLM Agent Policy Optimization via Step-wise Reinforcement Learning [62.54484062185869]
We introduce StepAgent, which utilizes step-wise reward to optimize the agent's reinforcement learning process.
We propose implicit-reward and inverse reinforcement learning techniques to facilitate agent reflection and policy adjustment.
arXiv Detail & Related papers (2024-11-06T10:35:11Z)
- Textualized Agent-Style Reasoning for Complex Tasks by Multiple Round LLM Generation [49.27250832754313]
We present AgentCOT, an LLM-based autonomous agent framework.
At each step, AgentCOT selects an action and executes it to yield an intermediate result with supporting evidence.
We introduce two new strategies to enhance the performance of AgentCOT.
arXiv Detail & Related papers (2024-09-19T02:20:06Z)
- Learning to Use Tools via Cooperative and Interactive Agents [58.77710337157665]
Tool learning empowers large language models (LLMs) as agents to use external tools and extend their utility.
We propose ConAgents, a Cooperative and interactive Agents framework, which coordinates three specialized agents for tool selection, tool execution, and action calibration separately.
Our experiments on three datasets show that LLMs equipped with ConAgents outperform baselines by a substantial margin.
arXiv Detail & Related papers (2024-03-05T15:08:16Z)
- On Diagnostics for Understanding Agent Training Behaviour in Cooperative MARL [5.124364759305485]
We argue that relying solely on the empirical returns may obscure crucial insights into agent behaviour.
In this paper, we explore the application of explainable AI (XAI) tools to gain profound insights into agent behaviour.
arXiv Detail & Related papers (2023-12-13T19:10:10Z)
- DCIR: Dynamic Consistency Intrinsic Reward for Multi-Agent Reinforcement Learning [84.22561239481901]
We propose a new approach that enables agents to learn whether their behaviors should be consistent with those of other agents.
We evaluate DCIR in multiple environments including Multi-agent Particle, Google Research Football and StarCraft II Micromanagement.
arXiv Detail & Related papers (2023-12-10T06:03:57Z)
- Beyond Rewards: a Hierarchical Perspective on Offline Multiagent Behavioral Analysis [14.656957226255628]
We introduce a model-agnostic method for discovery of behavior clusters in multiagent domains.
Our framework makes no assumption about agents' underlying learning algorithms, does not require access to their latent states or models, and can be trained using entirely offline observational data.
arXiv Detail & Related papers (2022-06-17T23:07:33Z)
- Policy Diagnosis via Measuring Role Diversity in Cooperative Multi-agent RL [107.58821842920393]
We quantify the agents' behavior differences and relate them to policy performance via Role Diversity.
We find that the error bound in MARL can be decomposed into three parts that are strongly related to role diversity.
The decomposed factors can significantly impact policy optimization in three popular directions.
arXiv Detail & Related papers (2022-06-01T04:58:52Z)
- Explaining Reinforcement Learning Policies through Counterfactual Trajectories [147.7246109100945]
A human developer must validate that an RL agent will perform well at test-time.
Our method conveys how the agent performs under distribution shifts by showing the agent's behavior across a wider trajectory distribution.
In a user study, we demonstrate that our method enables users to score better than baseline methods on one of two agent validation tasks.
arXiv Detail & Related papers (2022-01-29T00:52:37Z)
- "I Don't Think So": Disagreement-Based Policy Summaries for Comparing Agents [2.6270468656705765]
We propose a novel method for generating contrastive summaries that highlight the differences between agents' policies.
Our results show that the novel disagreement-based summaries lead to improved user performance compared to summaries generated using HIGHLIGHTS.
arXiv Detail & Related papers (2021-02-05T09:09:00Z)
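As a rough, hypothetical illustration of the disagreement-based idea in the entry above (not the HIGHLIGHTS algorithm or that paper's exact method), the sketch below collects states on which two policies choose different actions and ranks them by how far apart their action distributions are. The policies, the L1 distance score, and the toy state space are all assumptions made for illustration.
```python
# Hypothetical sketch: contrast two policies by the states on which they disagree.
# policy_a / policy_b map a state to an action; probs_a / probs_b return the
# corresponding action-probability vectors used to score the disagreement.
import numpy as np

def disagreement_summary(states, policy_a, policy_b, probs_a, probs_b, k=5):
    """Return up to k states where the policies pick different actions,
    ranked by the L1 distance between their action distributions."""
    scored = []
    for s in states:
        if policy_a(s) != policy_b(s):
            gap = float(np.abs(probs_a(s) - probs_b(s)).sum())
            scored.append((gap, s))
    scored.sort(reverse=True, key=lambda x: x[0])
    return [s for _, s in scored[:k]]

# Toy usage with two hand-written deterministic policies over integer states.
states = list(range(10))
policy_a = lambda s: 0 if s < 5 else 1
policy_b = lambda s: 0 if s < 7 else 1
probs_a = lambda s: np.array([1.0, 0.0]) if s < 5 else np.array([0.0, 1.0])
probs_b = lambda s: np.array([1.0, 0.0]) if s < 7 else np.array([0.0, 1.0])
print(disagreement_summary(states, policy_a, policy_b, probs_a, probs_b))  # [5, 6]
```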
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.