EMOTE: An Explainable architecture for Modelling the Other Through Empathy
- URL: http://arxiv.org/abs/2306.00295v1
- Date: Thu, 1 Jun 2023 02:27:08 GMT
- Title: EMOTE: An Explainable architecture for Modelling the Other Through Empathy
- Authors: Manisha Senadeera, Thommen Karimpanal George, Sunil Gupta, Stephan Jacobs, Santu Rana
- Abstract summary: We design a simple architecture to model another agent's action-value function.
We learn an "Imagination Network" to transform the other agent's observed state.
This produces a human-interpretable "empathetic state" which, when presented to the learning agent, produces behaviours that mimic the other agent.
- Score: 26.85666453984719
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We can usually assume others have goals analogous to our own. This assumption
can also, at times, be applied to multi-agent games - e.g. Agent 1's attraction
to green pellets is analogous to Agent 2's attraction to red pellets. This
"analogy" assumption is tied closely to the cognitive process known as empathy.
Inspired by empathy, we design a simple and explainable architecture to model
another agent's action-value function. This involves learning an "Imagination
Network" to transform the other agent's observed state in order to produce a
human-interpretable "empathetic state" which, when presented to the learning
agent, produces behaviours that mimic the other agent. Our approach is
applicable to multi-agent scenarios consisting of a single learning agent and
other (independent) agents acting according to fixed policies. This
architecture is particularly beneficial for (but not limited to) algorithms
using a composite value or reward function. We show our method produces better
performance in multi-agent games, where it robustly estimates the other's model
in different environment configurations. Additionally, we show that the
empathetic states are human interpretable, and thus verifiable.
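To make the mechanism concrete, below is a minimal sketch of how such an architecture could look, assuming a discrete-action setting and PyTorch. The class and function names, the behavioural-cloning loss, and the choice to freeze the learning agent's Q-network are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of an "Imagination Network": it maps the other agent's
# observed state to an "empathetic state" in the SAME state space, so that
# the learning agent's own Q-network, applied to that state, reproduces the
# other agent's observed actions. All names are assumptions for illustration.
import torch
import torch.nn as nn

class ImaginationNetwork(nn.Module):
    """Transforms the other agent's observed state into an empathetic state."""
    def __init__(self, state_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden_dim),
            nn.ReLU(),
            # Output keeps the shape of the input state, so the empathetic
            # state can be inspected (and verified) by a human.
            nn.Linear(hidden_dim, state_dim),
        )

    def forward(self, other_state: torch.Tensor) -> torch.Tensor:
        return self.net(other_state)

def empathy_loss(imagination: nn.Module,
                 q_network: nn.Module,
                 other_states: torch.Tensor,
                 other_actions: torch.Tensor) -> torch.Tensor:
    """Behavioural-cloning-style loss (an assumed training signal): the
    learning agent's own, frozen Q-network, applied to the empathetic state,
    should rank highest the action the other agent was observed to take."""
    empathetic_states = imagination(other_states)
    q_values = q_network(empathetic_states)  # shape: (batch, n_actions)
    return nn.functional.cross_entropy(q_values, other_actions)
```

Keeping the empathetic state in the same space as the observed state is what would make it human-interpretable: one can read it off directly and check whether the imagined goals are plausible.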
Related papers
- AgentGym: Evolving Large Language Model-based Agents across Diverse Environments [116.97648507802926]
Large language models (LLMs) are considered a promising foundation to build such agents.
We take the first step towards building generally-capable LLM-based agents with the ability to self-evolve.
We propose AgentGym, a new framework featuring a variety of environments and tasks for broad, real-time, uni-format, and concurrent agent exploration.
arXiv Detail & Related papers (2024-06-06T15:15:41Z)
- Attention Graph for Multi-Robot Social Navigation with Deep Reinforcement Learning [0.0]
We present MultiSoc, a new method for learning multi-agent socially aware navigation strategies using deep reinforcement learning (RL).
Inspired by recent works on multi-agent deep RL, our method leverages a graph-based representation of agent interactions, combining the positions and fields of view of entities (pedestrians and agents).
Our method learns faster than social navigation deep RL mono-agent techniques, and enables efficient multi-agent implicit coordination in challenging crowd navigation with multiple heterogeneous humans.
arXiv Detail & Related papers (2024-01-31T15:24:13Z)
- Generative Agents: Interactive Simulacra of Human Behavior [86.1026716646289]
We introduce generative agents: computational software agents that simulate believable human behavior.
We describe an architecture that extends a large language model to store a complete record of the agent's experiences.
We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims.
arXiv Detail & Related papers (2023-04-07T01:55:19Z)
- AgentFormer: Agent-Aware Transformers for Socio-Temporal Multi-Agent Forecasting [25.151713845738335]
We propose a new Transformer, AgentFormer, that jointly models the time and social dimensions.
Based on AgentFormer, we propose a multi-agent trajectory prediction model that can attend to features of any agent at any previous timestep.
Our method significantly improves the state of the art on well-established pedestrian and autonomous driving datasets.
arXiv Detail & Related papers (2021-03-25T17:59:01Z)
- Deep Interactive Bayesian Reinforcement Learning via Meta-Learning [63.96201773395921]
The optimal adaptive behaviour under uncertainty over the other agents' strategies can be computed using the Interactive Bayesian Reinforcement Learning framework.
We propose to meta-learn approximate belief inference and Bayes-optimal behaviour for a given prior.
We show empirically that our approach outperforms existing methods that use a model-free approach, sample from the approximate posterior, maintain memory-free models of others, or do not fully utilise the known structure of the environment.
arXiv Detail & Related papers (2021-01-11T13:25:13Z)
- Learning Latent Representations to Influence Multi-Agent Interaction [65.44092264843538]
We propose a reinforcement learning-based framework for learning latent representations of an agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
arXiv Detail & Related papers (2020-11-12T19:04:26Z)
- Learning intuitive physics and one-shot imitation using state-action-prediction self-organizing maps [0.0]
Humans learn by exploration and imitation, build causal models of the world, and use both to flexibly solve new tasks.
We suggest a simple but effective unsupervised model which develops such characteristics.
We demonstrate its performance on a set of several related, but different one-shot imitation tasks, which the agent flexibly solves in an active inference style.
arXiv Detail & Related papers (2020-07-03T12:29:11Z)
- What can I do here? A Theory of Affordances in Reinforcement Learning [65.70524105802156]
We develop a theory of affordances for agents who learn and plan in Markov Decision Processes.
Affordances play a dual role in this setting: they allow faster planning by reducing the number of actions available in any given situation, and they support learning simpler transition models.
We propose an approach to learn affordances and use it to estimate transition models that are simpler and generalize better.
arXiv Detail & Related papers (2020-06-26T16:34:53Z)
- Intrinsic Motivation for Encouraging Synergistic Behavior [55.10275467562764]
We study the role of intrinsic motivation as an exploration bias for reinforcement learning in sparse-reward synergistic tasks.
Our key idea is that a good guiding principle for intrinsic motivation in synergistic tasks is to reward actions that affect the world in ways that would not be achieved if the agents were acting on their own (a sketch of this idea appears after this list).
arXiv Detail & Related papers (2020-02-12T19:34:51Z)
- Variational Autoencoders for Opponent Modeling in Multi-Agent Systems [9.405879323049659]
Multi-agent systems exhibit complex behaviors that emanate from the interactions of multiple agents in a shared environment.
In this work, we are interested in controlling one agent in a multi-agent system and learning to interact successfully with the other agents, which have fixed policies.
Modeling the behavior of other agents (opponents) is essential in understanding the interactions of the agents in the system.
arXiv Detail & Related papers (2020-01-29T13:38:59Z)
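As flagged above, the synergy principle from Intrinsic Motivation for Encouraging Synergistic Behavior admits a compact illustration. The sketch below is one hypothetical reading of that principle, not the paper's implementation: joint_model, solo_model, and no_op are assumed names for a learned joint-dynamics model, a single-agent dynamics model, and an idle action.

```python
# Hedged sketch: score a joint action by how much its predicted effect
# differs from the effect of composing single-agent actions, i.e. by how
# "synergistic" it is. All names here are assumptions for illustration.
import torch

def synergy_bonus(joint_model, solo_model, state, action_a, action_b, no_op):
    """Intrinsic reward: distance between the predicted effect of acting
    jointly and the effect of each agent acting while the other idles."""
    joint_next = joint_model(state, action_a, action_b)
    # Compose single-agent effects: A acts while B idles, then B acts.
    after_a = solo_model(state, action_a, no_op)
    composed_next = solo_model(after_a, no_op, action_b)
    return torch.norm(joint_next - composed_next, dim=-1)
```

A large bonus means the joint action achieved something neither agent could have achieved alone, which is exactly the behaviour the intrinsic reward is meant to encourage.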