Diversifying Agent's Behaviors in Interactive Decision Models
- URL: http://arxiv.org/abs/2203.03068v1
- Date: Sun, 6 Mar 2022 23:05:00 GMT
- Title: Diversifying Agent's Behaviors in Interactive Decision Models
- Authors: Yinghui Pan, Hanyi Zhang, Yifeng Zeng, Biyang Ma, Jing Tang and Zhong Ming
- Abstract summary: Modelling other agents' behaviors plays an important role in decision models for interactions among multiple agents.
In this article, we investigate diversifying behaviors of other agents in the subject agent's decision model prior to their interactions.
- Score: 11.125175635860169
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modelling other agents' behaviors plays an important role in decision models
for interactions among multiple agents. To optimise its own decisions, a
subject agent needs to model how other agents act simultaneously in an
uncertain environment. However, modelling insufficiency occurs when the agents
are competitive and the subject agent cannot obtain full knowledge about other
agents. Even when the agents are collaborative, they may not share their true
behaviors due to privacy concerns. In this article, we investigate
diversifying behaviors of other agents in the subject agent's decision model
prior to their interactions. Starting with prior knowledge about other agents'
behaviors, we use a linear reduction technique to extract representative
behavioral features from the known behaviors. We subsequently generate their
new behaviors by expanding the features and propose two diversity measurements
to select top-K behaviors. We demonstrate the performance of the new techniques
in two well-studied problem domains. This research will contribute to
intelligent systems dealing with unknown unknowns in an open artificial
intelligence world.
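The abstract's pipeline (linear reduction to extract behavioral features, expansion of those features to generate new behaviors, and a diversity measurement to pick top-K) can be sketched as below. This is an illustrative assumption-laden sketch, not the paper's exact method: PCA via SVD stands in for the linear reduction, a uniform expansion beyond the observed feature range stands in for feature expansion, and greedy max-min pairwise distance stands in for one of the two diversity measurements.

```python
# Sketch only: behaviors are assumed to be flattened policy vectors.
# PCA, the expansion range, and the max-min criterion are illustrative choices.
import numpy as np

def diversify_behaviors(known_behaviors, n_new=20, n_components=2, k=5, seed=0):
    """Linear reduction -> feature expansion -> top-K diversity selection."""
    rng = np.random.default_rng(seed)
    X = np.asarray(known_behaviors, dtype=float)       # (n_known, dim)
    mu = X.mean(axis=0)
    # Linear reduction (PCA via SVD) extracts representative behavioral features.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    feats = Vt[:n_components]                          # principal directions
    scores = (X - mu) @ feats.T                        # known behaviors in feature space
    # Expand the features: sample coordinates beyond the observed range.
    lo, hi = scores.min(axis=0), scores.max(axis=0)
    span = hi - lo
    new_scores = rng.uniform(lo - 0.5 * span, hi + 0.5 * span,
                             size=(n_new, n_components))
    candidates = mu + new_scores @ feats               # map back to behavior space
    # One possible diversity measurement: greedy max-min pairwise distance.
    chosen = [int(np.argmax(np.linalg.norm(candidates - mu, axis=1)))]
    while len(chosen) < k:
        d = np.min(np.linalg.norm(
            candidates[:, None] - candidates[chosen][None], axis=2), axis=1)
        d[chosen] = -np.inf                            # never re-pick a selected one
        chosen.append(int(np.argmax(d)))
    return candidates[chosen]

known = np.random.default_rng(1).random((10, 6))       # toy known behaviors
top_k = diversify_behaviors(known, k=5)
print(top_k.shape)  # (5, 6)
```

The greedy max-min step is a common surrogate for diversity; the paper proposes two specific measurements, which could replace it without changing the surrounding pipeline.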
Related papers
- Inverse Attention Agent for Multi-Agent System [6.196239958087161]
A major challenge for Multi-Agent Systems is enabling agents to adapt dynamically to diverse environments in which opponents and teammates may continually change.
We introduce Inverse Attention Agents that adopt concepts from the Theory of Mind, implemented algorithmically using an attention mechanism and trained in an end-to-end manner.
We demonstrate that the inverse attention network successfully infers the attention of other agents, and that this information improves agent performance.
arXiv Detail & Related papers (2024-10-29T06:59:11Z) - Episodic Future Thinking Mechanism for Multi-agent Reinforcement Learning [2.992602379681373]
We introduce an episodic future thinking (EFT) mechanism for a reinforcement learning (RL) agent.
We first develop a multi-character policy that captures diverse characters with an ensemble of heterogeneous policies.
Once the character is inferred, the agent predicts the upcoming actions of target agents and simulates the potential future scenario.
arXiv Detail & Related papers (2024-10-22T19:12:42Z) - DCIR: Dynamic Consistency Intrinsic Reward for Multi-Agent Reinforcement Learning [84.22561239481901]
We propose a new approach that enables agents to learn whether their behaviors should be consistent with that of other agents.
We evaluate DCIR in multiple environments including Multi-agent Particle, Google Research Football and StarCraft II Micromanagement.
arXiv Detail & Related papers (2023-12-10T06:03:57Z) - AgentCF: Collaborative Learning with Autonomous Language Agents for Recommender Systems [112.76941157194544]
We propose AgentCF for simulating user-item interactions in recommender systems through agent-based collaborative filtering.
We creatively consider not only users but also items as agents, and develop a collaborative learning approach that optimizes both kinds of agents together.
Overall, the optimized agents exhibit diverse interaction behaviors within our framework, including user-item, user-user, item-item, and collective interactions.
arXiv Detail & Related papers (2023-10-13T16:37:14Z) - Multi-Agent Imitation Learning with Copulas [102.27052968901894]
Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions.
In this paper, we propose to use copula, a powerful statistical tool for capturing dependence among random variables, to explicitly model the correlation and coordination in multi-agent systems.
Our proposed model is able to separately learn marginals that capture the local behavioral patterns of each individual agent, as well as a copula function that fully captures the dependence structure among agents.
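The copula idea above (per-agent marginals plus a separate dependence structure) can be illustrated with a small sketch. This assumes continuous joint actions and uses a Gaussian copula with empirical marginals for concreteness; the cited paper's model and estimator may differ.

```python
# Sketch: fit per-agent empirical marginals and a Gaussian copula over the
# dependence between agents, then sample coordinated joint actions.
import numpy as np
from scipy import stats

def fit_gaussian_copula(actions):
    """actions: (n_samples, n_agents) joint actions from demonstrations."""
    n = actions.shape[0]
    # Marginals: rank transform maps each agent's actions to uniforms in (0, 1).
    u = stats.rankdata(actions, axis=0) / (n + 1)
    # Gaussianize and estimate the copula's correlation matrix.
    z = stats.norm.ppf(u)
    return np.corrcoef(z, rowvar=False)

def sample_copula(corr, marginal_samples, n_draws, seed=0):
    """Copula supplies the coordination; empirical marginals supply each agent."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(corr.shape[0]), corr, size=n_draws)
    u = stats.norm.cdf(z)                      # uniforms carrying the dependence
    # Invert each agent's empirical marginal via quantiles.
    return np.column_stack([np.quantile(marginal_samples[:, j], u[:, j])
                            for j in range(corr.shape[0])])

demo = np.random.default_rng(2).multivariate_normal(
    [0, 0], [[1.0, 0.8], [0.8, 1.0]], 500)    # toy two-agent demonstrations
corr = fit_gaussian_copula(demo)
joint = sample_copula(corr, demo, n_draws=100)
print(joint.shape)  # (100, 2)
```

Separating marginals from the copula is what lets the model learn each agent's local behavior independently of how the agents coordinate.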
arXiv Detail & Related papers (2021-07-10T03:49:41Z) - Deep Interactive Bayesian Reinforcement Learning via Meta-Learning [63.96201773395921]
The optimal adaptive behaviour under uncertainty over the other agents' strategies can be computed using the Interactive Bayesian Reinforcement Learning framework.
We propose to meta-learn approximate belief inference and Bayes-optimal behaviour for a given prior.
We show empirically that our approach outperforms existing methods that use a model-free approach, sample from the approximate posterior, maintain memory-free models of others, or do not fully utilise the known structure of the environment.
arXiv Detail & Related papers (2021-01-11T13:25:13Z) - Learning Latent Representations to Influence Multi-Agent Interaction [65.44092264843538]
We propose a reinforcement learning-based framework for learning latent representations of an agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
arXiv Detail & Related papers (2020-11-12T19:04:26Z) - Domain-independent generation and classification of behavior traces [18.086782548507855]
CABBOT is a learning technique that allows the agent to perform on-line classification of the type of planning agent whose behavior it is observing.
We present experiments in several (both financial and non-financial) domains with promising results.
arXiv Detail & Related papers (2020-11-03T16:58:54Z) - Learning to Incentivize Other Learning Agents [73.03133692589532]
We show how to equip RL agents with the ability to give rewards directly to other agents, using a learned incentive function.
Such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games.
Our work points toward more opportunities and challenges along the path to ensure the common good in a multi-agent future.
arXiv Detail & Related papers (2020-06-10T20:12:38Z) - Variational Autoencoders for Opponent Modeling in Multi-Agent Systems [9.405879323049659]
Multi-agent systems exhibit complex behaviors that emanate from the interactions of multiple agents in a shared environment.
In this work, we are interested in controlling one agent in a multi-agent system and successfully learning to interact with the other agents, which have fixed policies.
Modeling the behavior of other agents (opponents) is essential in understanding the interactions of the agents in the system.
arXiv Detail & Related papers (2020-01-29T13:38:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.