Safe adaptation in multiagent competition
- URL: http://arxiv.org/abs/2203.07562v1
- Date: Mon, 14 Mar 2022 23:53:59 GMT
- Title: Safe adaptation in multiagent competition
- Authors: Macheng Shen and Jonathan P. How
- Abstract summary: In multiagent competitive scenarios, ego-agents may have to adapt to new opponents with previously unseen behaviors.
As the ego-agent updates its behavior to exploit the opponent, that behavior can itself become more exploitable.
We develop a safe adaptation approach in which the ego-agent is trained against a regularized opponent model.
- Score: 48.02377041620857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Achieving the capability of adapting to ever-changing environments is a
critical step towards building fully autonomous robots that operate safely in
complicated scenarios. In multiagent competitive scenarios, agents may have to
adapt to new opponents with previously unseen behaviors by learning from the
interaction experiences between the ego-agent and the opponent. However, this
adaptation is susceptible to opponent exploitation. As the ego-agent updates
its own behavior to exploit the opponent, its own behavior could become more
exploitable as a result of overfitting to this specific opponent's behavior. To
overcome this difficulty, we developed a safe adaptation approach in which the
ego-agent is trained against a regularized opponent model, which effectively
avoids overfitting and consequently improves the robustness of the ego-agent's
policy. We evaluated our approach in the MuJoCo domain with two competing
agents. The experimental results suggest that our approach achieves both
adaptation to the specific opponent that the ego-agent is interacting with
and low exploitability to other possible opponents.
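The abstract does not specify which regularizer is used. As one illustration only, the sketch below fits an opponent model by maximum likelihood while adding a KL penalty toward a fixed prior policy; all names (`OpponentModel`, `regularized_loss`, `beta`) and the exact loss form are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch: fit an opponent model by maximum likelihood while penalizing
# its KL divergence from a fixed prior policy, so the ego-agent trains against
# a model that cannot overfit one specific opponent. Names and the loss form
# are assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OpponentModel(nn.Module):
    """Categorical policy over the opponent's discrete actions."""
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return F.log_softmax(self.net(obs), dim=-1)  # log pi_opp(a | obs)

def regularized_loss(model: OpponentModel, prior: OpponentModel,
                     obs: torch.Tensor, opp_actions: torch.Tensor,
                     beta: float = 0.1) -> torch.Tensor:
    """NLL of observed opponent actions plus beta * KL(model || prior).
    The KL term keeps the fitted model close to a prior (e.g. a population
    average) instead of overfitting the current opponent."""
    log_p = model(obs)                                  # trainable model
    nll = F.nll_loss(log_p, opp_actions)
    with torch.no_grad():
        log_q = prior(obs)                              # frozen prior
    kl = F.kl_div(log_q, log_p, log_target=True, reduction="batchmean")
    return nll + beta * kl
```

The ego-agent would then be trained with any standard RL algorithm against actions sampled from the regularized model, rather than against the raw empirical opponent.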
Related papers
- CompetEvo: Towards Morphological Evolution from Competition [60.69068909395984]
We propose competitive evolution (CompetEvo), which co-evolves agents' designs and tactics in confrontation.
The results reveal that our method enables agents to evolve a more suitable design and strategy for fighting.
arXiv Detail & Related papers (2024-05-28T15:53:02Z)
- ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- Goal-Conditioned Reinforcement Learning in the Presence of an Adversary [0.0]
Reinforcement learning has seen increasing applications in real-world contexts over the past few years, but learned policies are often brittle in such settings.
A common approach to combat this brittleness is to train agents in the presence of an adversary.
The adversary acts to destabilise the agent, which in turn learns a more robust policy and can better handle realistic conditions.
We present DigitFlip and CLEVR-Play, two novel goal-conditioned environments that support acting against an adversary (a minimal sketch of this adversarial training loop follows this entry).
arXiv Detail & Related papers (2022-11-13T15:40:01Z)
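As summarized in the entry above, adversarial training alternates between a protagonist that maximizes return and an adversary that perturbs the episode to minimize it. Below is a minimal, hedged sketch of that alternation; `env`, `protagonist`, and `adversary` are hypothetical placeholders, not the DigitFlip/CLEVR-Play interfaces.

```python
# Hedged sketch of the protagonist/adversary alternation used in adversarial
# RL. The environment and agent interfaces are hypothetical placeholders.

def train_adversarial(env, protagonist, adversary,
                      iters: int = 1000, max_steps: int = 128) -> None:
    for _ in range(iters):
        obs = env.reset()
        trajectory = []
        for _ in range(max_steps):
            a_pro = protagonist.act(obs)        # pursues the goal
            a_adv = adversary.act(obs)          # perturbs dynamics/goal
            obs, reward, done = env.step(a_pro, a_adv)
            trajectory.append((obs, a_pro, a_adv, reward))
            if done:
                break
        # Zero-sum objective: the protagonist ascends on the episode reward
        # while the adversary descends on the same reward, so the protagonist
        # must learn a policy that is robust to worst-case perturbations.
        protagonist.update(trajectory, reward_sign=+1.0)
        adversary.update(trajectory, reward_sign=-1.0)
```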
- Game-theoretic Objective Space Planning [4.989480853499916]
Understanding intent of other agents is crucial to deploying autonomous systems in adversarial multi-agent environments.
Current approaches either oversimplify the discretization of the action space of agents or fail to recognize the long-term effect of actions and become myopic.
We propose a novel dimension reduction method that encapsulates diverse agent behaviors while conserving the continuity of agent actions.
arXiv Detail & Related papers (2022-09-16T07:35:20Z)
- Exploring the Impact of Tunable Agents in Sequential Social Dilemmas [0.0]
We leverage multi-objective reinforcement learning to create tunable agents and apply this technique to sequential social dilemmas.
We demonstrate that the tunable agents framework allows easy adaptation between cooperative and competitive behaviours (a hedged sketch of such tuning follows this entry).
arXiv Detail & Related papers (2021-01-28T12:44:31Z)
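A common way to realize such tunable agents is linear scalarization of a multi-objective reward: a weight vector trades off selfish against prosocial payoff, and sweeping it moves the same agent between competitive and cooperative behaviour. A minimal sketch under that assumption follows; the reward decomposition and names are illustrative, not taken from the paper.

```python
import numpy as np

def scalarize(reward_vec: np.ndarray, w: np.ndarray) -> float:
    """Linear scalarization of a multi-objective reward vector."""
    return float(w @ reward_vec)

# reward_vec = [own payoff, group payoff]; alpha tunes the trade-off.
for alpha in (0.0, 0.5, 1.0):
    w = np.array([1.0 - alpha, alpha])  # alpha=0: competitive, alpha=1: cooperative
    r = scalarize(np.array([2.0, -1.0]), w)
    print(f"alpha={alpha:.1f} -> scalar reward {r:+.1f}")
```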
- Learning Latent Representations to Influence Multi-Agent Interaction [65.44092264843538]
We propose a reinforcement learning-based framework for learning latent representations of an agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
arXiv Detail & Related papers (2020-11-12T19:04:26Z)
- Incorporating Rivalry in Reinforcement Learning for a Competitive Game [65.2200847818153]
This study focuses on providing a novel learning mechanism based on rivalry as a social impact.
Based on the concept of competitive rivalry, our analysis investigates whether this mechanism changes how the agents are assessed from a human perspective.
arXiv Detail & Related papers (2020-11-02T21:54:18Z)
- Moody Learners -- Explaining Competitive Behaviour of Reinforcement Learning Agents [65.2200847818153]
In a competitive scenario, the agent not only faces a dynamic environment but is also directly affected by the opponents' actions.
Observing the agent's Q-values is a common way of explaining its behavior; however, Q-values do not show the temporal relation between the selected actions.
arXiv Detail & Related papers (2020-07-30T11:30:42Z)
- Learning to Model Opponent Learning [11.61673411387596]
Multi-Agent Reinforcement Learning (MARL) considers settings in which a set of coexisting agents interact with one another and their environment.
Because every agent is learning simultaneously, the environment appears non-stationary, which poses a great challenge for value-function-based algorithms whose convergence usually relies on the assumption of a stationary environment.
We develop a novel approach to modelling an opponent's learning dynamics, which we term Learning to Model Opponent Learning (LeMOL); a hedged sketch of this idea follows this entry.
arXiv Detail & Related papers (2020-06-06T17:19:04Z)
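LeMOL's core idea, as summarized in the last entry above, is to model how the opponent's policy changes between interactions rather than treating it as fixed. One hedged way to sketch this is a recurrent predictor that maps the history of play to the opponent's next action distribution; the architecture and names below are assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class OpponentLearningModel(nn.Module):
    """Predicts the opponent's next action distribution from the history of
    play, letting the ego-agent anticipate a *learning* (non-stationary)
    opponent instead of assuming a fixed policy."""
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(obs_dim + n_actions, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, obs_hist: torch.Tensor, act_hist: torch.Tensor):
        # obs_hist: (B, T, obs_dim); act_hist: (B, T, n_actions), one-hot.
        x = torch.cat([obs_hist, act_hist], dim=-1)
        h, _ = self.rnn(x)
        return torch.log_softmax(self.head(h[:, -1]), dim=-1)
```

Trained by maximum likelihood on the opponent's observed actions, such a model tracks the drift in the opponent's behaviour over the course of its learning.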
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.