Multiagent Inverse Reinforcement Learning via Theory of Mind Reasoning
- URL: http://arxiv.org/abs/2302.10238v1
- Date: Mon, 20 Feb 2023 19:07:42 GMT
- Title: Multiagent Inverse Reinforcement Learning via Theory of Mind Reasoning
- Authors: Haochen Wu, Pedro Sequeira, David V. Pynadath
- Abstract summary: We propose a novel approach to Multiagent Inverse Reinforcement Learning (MIRL).
MIRL aims to infer the reward functions guiding the behavior of each individual given trajectories of a team's behavior during task performance.
We evaluate our approach in a simulated 2-player search-and-rescue operation.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To understand how people interact with each other in collaborative settings,
especially in situations where individuals know little about their teammates,
Multiagent Inverse Reinforcement Learning (MIRL) aims to infer the reward
functions guiding the behavior of each individual given trajectories of a
team's behavior during task performance. Unlike current MIRL approaches, team
members \emph{are not} assumed to know each other's goals a priori; rather, they
collaborate by adapting to the goals of others perceived by observing their
behavior, all while jointly performing a task. To address this problem, we
propose a novel approach to MIRL via Theory of Mind (MIRL-ToM). For each agent,
we first use ToM reasoning to estimate a posterior distribution over baseline
reward profiles given their demonstrated behavior. We then perform MIRL via
decentralized equilibrium by employing single-agent Maximum Entropy IRL to
infer a reward function for each agent, where we simulate the behavior of other
teammates according to the time-varying distribution over profiles. We evaluate
our approach in a simulated 2-player search-and-rescue operation where the goal
of the agents, playing different roles, is to search for and evacuate victims
in the environment. Results show that the choice of baseline profiles is
paramount to the recovery of ground-truth rewards, and MIRL-ToM is able to
recover the rewards used by agents interacting with both known and unknown
teammates.
Related papers
- DCIR: Dynamic Consistency Intrinsic Reward for Multi-Agent Reinforcement Learning [84.22561239481901]
We propose a new approach that enables agents to learn whether their behaviors should be consistent with those of other agents.
We evaluate DCIR in multiple environments including Multi-agent Particle, Google Research Football and StarCraft II Micromanagement.
arXiv Detail & Related papers (2023-12-10T06:03:57Z)
- Emergent Behaviors in Multi-Agent Target Acquisition [0.0]
We simulate a Multi-Agent System (MAS) using Reinforcement Learning (RL) in a pursuit-evasion game.
We create different adversarial scenarios by replacing RL-trained pursuers' policies with two distinct (non-RL) analytical strategies.
The novelty of our approach lies in creating an influential feature set that reveals underlying data regularities.
arXiv Detail & Related papers (2022-12-15T15:20:58Z)
- ELIGN: Expectation Alignment as a Multi-Agent Intrinsic Reward [29.737986509769808]
We propose ELIGN (expectation alignment), a self-supervised intrinsic reward.
Similar to how animals collaborate in a decentralized manner with those in their vicinity, agents trained with expectation alignment learn behaviors that match their neighbors' expectations.
We show that agent coordination improves through expectation alignment because agents learn to divide tasks amongst themselves, break coordination symmetries, and confuse adversaries.
arXiv Detail & Related papers (2022-10-09T22:24:44Z)
- Cooperative Online Learning in Stochastic and Adversarial MDPs [50.62439652257712]
We study cooperative online learning in stochastic and adversarial Markov decision processes (MDPs).
In each episode, $m$ agents interact with an MDP simultaneously and share information in order to minimize their individual regret.
We are the first to consider cooperative reinforcement learning (RL) either with non-fresh randomness or in adversarial MDPs.
arXiv Detail & Related papers (2022-01-31T12:32:11Z)
- ToM2C: Target-oriented Multi-agent Communication and Cooperation with Theory of Mind [18.85252946546942]
Theory of Mind (ToM) helps build socially intelligent agents that are able to communicate and cooperate effectively.
We demonstrate the idea in two typical target-oriented multi-agent tasks: cooperative navigation and multi-sensor target coverage.
arXiv Detail & Related papers (2021-10-15T18:29:55Z)
- Explore and Control with Adversarial Surprise [78.41972292110967]
Reinforcement learning (RL) provides a framework for learning goal-directed policies given user-specified rewards.
We propose a new unsupervised RL technique based on an adversarial game which pits two policies against each other to compete over the amount of surprise an RL agent experiences.
We show that our method leads to the emergence of complex skills by exhibiting clear phase transitions.
arXiv Detail & Related papers (2021-07-12T17:58:40Z)
- AoI-Aware Resource Allocation for Platoon-Based C-V2X Networks via Multi-Agent Multi-Task Reinforcement Learning [22.890835786710316]
This paper investigates the problem of age of information (AoI) aware radio resource management for a platooning system.
Multiple autonomous platoons exploit the cellular wireless vehicle-to-everything (C-V2X) communication technology to disseminate the cooperative awareness messages (CAMs) to their followers.
We exploit a distributed resource allocation framework based on multi-agent reinforcement learning (MARL), where each platoon leader (PL) acts as an agent and interacts with the environment to learn its optimal policy.
arXiv Detail & Related papers (2021-05-10T08:39:56Z)
- PsiPhi-Learning: Reinforcement Learning with Demonstrations using Successor Features and Inverse Temporal Difference Learning [102.36450942613091]
We propose an inverse reinforcement learning algorithm called \emph{inverse temporal difference learning} (ITD).
We show how to seamlessly integrate ITD with learning from online environment interactions, arriving at a novel algorithm for reinforcement learning with demonstrations, called $\Psi\Phi$-learning.
arXiv Detail & Related papers (2021-02-24T21:12:09Z)
- Learning to Incentivize Other Learning Agents [73.03133692589532]
We show how to equip RL agents with the ability to give rewards directly to other agents, using a learned incentive function.
Such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games.
Our work points toward more opportunities and challenges along the path to ensure the common good in a multi-agent future.
arXiv Detail & Related papers (2020-06-10T20:12:38Z)
- Maximizing Information Gain in Partially Observable Environments via Prediction Reward [64.24528565312463]
This paper tackles the challenge of using belief-based rewards for a deep RL agent.
We derive the exact error between negative entropy and the expected prediction reward.
This insight provides theoretical motivation for several fields using prediction rewards.
arXiv Detail & Related papers (2020-05-11T08:13:49Z)