CompetEvo: Towards Morphological Evolution from Competition
- URL: http://arxiv.org/abs/2405.18300v1
- Date: Tue, 28 May 2024 15:53:02 GMT
- Authors: Kangyao Huang, Di Guo, Xinyu Zhang, Xiangyang Ji, Huaping Liu
- Abstract summary: We propose competitive evolution (CompetEvo), which co-evolves agents' designs and tactics in confrontation.
The results reveal that our method enables agents to evolve a more suitable design and strategy for fighting.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training an agent to adapt to specific tasks through co-optimization of morphology and control has attracted wide attention. However, whether an optimal configuration and set of tactics exists for agents in a multi-agent competition scenario remains difficult to determine. In this context, we propose competitive evolution (CompetEvo), which co-evolves agents' designs and tactics in confrontation. We build arenas consisting of three animals and their evolved derivatives, placing agents with different morphologies in direct competition with each other. The results reveal that our method enables agents to evolve a more suitable design and strategy for fighting than fixed-morph agents, allowing them to gain advantages in combat scenarios. Moreover, we demonstrate the striking behaviors that emerge when confrontations are conducted between asymmetrical morphs.
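As a rough illustration of the competitive co-evolution idea, the sketch below alternates joint morphology-plus-tactic mutations between two agents, keeping a mutation only if it improves a combat score against the opponent's current form. The `combat_score` surrogate, the parameter vectors, and the (1+1)-style acceptance rule are illustrative assumptions, not the paper's actual algorithm, which evaluates agents in physics-simulated confrontations.

```python
import random

def combat_score(agent, opponent):
    # Toy surrogate for a physics-simulated duel: higher is better for `agent`.
    # A tactic scores well when aligned with the agent's own morphology and
    # penalized when it aligns with the opponent's morphology.
    own = sum(m * t for m, t in zip(agent["morph"], agent["tactic"]))
    counter = sum(m * t for m, t in zip(opponent["morph"], agent["tactic"]))
    return own - 0.5 * counter

def mutate(agent, rng, sigma=0.1):
    # Jointly perturb design (morphology) and strategy (tactic) parameters.
    return {
        "morph": [m + rng.gauss(0, sigma) for m in agent["morph"]],
        "tactic": [t + rng.gauss(0, sigma) for t in agent["tactic"]],
    }

def coevolve(agent_a, agent_b, generations=300, seed=0):
    # Alternating (1+1)-style evolution: each side keeps a joint mutation
    # only if it improves its score against the opponent's current form.
    rng = random.Random(seed)
    for _ in range(generations):
        cand = mutate(agent_a, rng)
        if combat_score(cand, agent_b) > combat_score(agent_a, agent_b):
            agent_a = cand
        cand = mutate(agent_b, rng)
        if combat_score(cand, agent_a) > combat_score(agent_b, agent_a):
            agent_b = cand
    return agent_a, agent_b
```

Because both sides adapt simultaneously, an "improvement" is only relative to the opponent's current form; this moving-target property is what distinguishes competitive co-evolution from single-agent morphology optimization.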
Related papers
- ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state and infer teammates' intentions from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- Mimicking To Dominate: Imitation Learning Strategies for Success in Multiagent Competitive Games [13.060023718506917]
We develop a new multi-agent imitation learning model for predicting next moves of the opponents.
We also present a new multi-agent reinforcement learning algorithm that combines our imitation learning model and policy training into one single training process.
Experimental results show that our approach achieves superior performance compared to existing state-of-the-art multi-agent RL algorithms.
arXiv Detail & Related papers (2023-08-20T07:30:13Z)
- Cooperation or Competition: Avoiding Player Domination for Multi-Target Robustness via Adaptive Budgets [76.20705291443208]
We view adversarial attacks as a bargaining game in which different players negotiate to reach an agreement on a joint direction of parameter updating.
We design a novel framework that adjusts the budgets of different adversaries to avoid any player dominance.
Experiments on standard benchmarks show that applying the proposed framework to existing approaches significantly advances multi-target robustness.
arXiv Detail & Related papers (2023-06-27T14:02:10Z)
- Cooperation and Competition: Flocking with Evolutionary Multi-Agent Reinforcement Learning [0.0]
We propose Evolutionary Multi-Agent Reinforcement Learning (EMARL) in flocking tasks.
EMARL combines cooperation and competition with little prior knowledge.
We show that EMARL significantly outperforms the full competition or cooperation methods.
arXiv Detail & Related papers (2022-09-10T15:35:20Z)
- Safe adaptation in multiagent competition [48.02377041620857]
In multiagent competitive scenarios, ego-agents may have to adapt to new opponents with previously unseen behaviors.
As the ego-agent updates its own behavior to exploit the opponent, its own behavior could become more exploitable.
We develop a safe adaptation approach in which the ego-agent is trained against a regularized opponent model.
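One common way to regularize an opponent model, sketched here for a one-shot game, is to penalize its deviation from a reference policy with a KL term, which admits a closed-form solution. The function name, the payoff vector, and the KL-to-reference formulation are illustrative assumptions, not necessarily the exact regularization used in that paper.

```python
import math

def regularized_opponent(payoffs, reference, beta):
    # KL-regularized best response in a one-shot game:
    #   argmax_p  <p, payoffs> - beta * KL(p || reference)
    # has the closed form p_i ∝ reference_i * exp(payoffs_i / beta).
    weights = [r * math.exp(u / beta) for r, u in zip(reference, payoffs)]
    z = sum(weights)
    return [w / z for w in weights]
```

Large `beta` keeps the modeled opponent close to the reference (safe, but conservative); small `beta` lets it approach an unregularized best response, which is where an ego-agent trained against it risks becoming exploitable.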
arXiv Detail & Related papers (2022-03-14T23:53:59Z)
- Moody Learners -- Explaining Competitive Behaviour of Reinforcement Learning Agents [65.2200847818153]
In a competitive scenario, the agent not only faces a dynamic environment but is also directly affected by the opponents' actions.
Observing the agent's Q-values is a common way of explaining its behavior; however, Q-values alone do not show the temporal relation between the selected actions.
arXiv Detail & Related papers (2020-07-30T11:30:42Z)
- Natural Emergence of Heterogeneous Strategies in Artificially Intelligent Competitive Teams [0.0]
We develop a competitive multi-agent environment called FortAttack in which two teams compete against each other.
We observe a natural emergence of heterogeneous behavior amongst homogeneous agents when such behavior can lead to the team's success.
We propose ensemble training, in which we utilize the evolved opponent strategies to train a single policy for friendly agents.
arXiv Detail & Related papers (2020-07-06T22:35:56Z)
- On Emergent Communication in Competitive Multi-Agent Teams [116.95067289206919]
We investigate whether competition for performance from an external, similar agent team could act as a social influence.
Our results show that an external competitive influence leads to improved accuracy and generalization, as well as faster emergence of communicative languages.
arXiv Detail & Related papers (2020-03-04T01:14:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.