Adversarial Coevolutionary Illumination with Generational Adversarial MAP-Elites
- URL: http://arxiv.org/abs/2505.06617v3
- Date: Mon, 06 Oct 2025 23:01:55 GMT
- Title: Adversarial Coevolutionary Illumination with Generational Adversarial MAP-Elites
- Authors: Timothée Anne, Noah Syrkis, Meriem Elhosni, Florian Turati, Franck Legendre, Alain Jaquier, Sebastian Risi
- Abstract summary: Quality-Diversity (QD) algorithms seek to discover diverse, high-performing solutions across a behavior space. We present Generational Adversarial MAP-Elites (GAME), a coevolutionary QD algorithm that evolves both sides by alternating which side is evolved at each generation.
- Score: 3.4524024382493774
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quality-Diversity (QD) algorithms seek to discover diverse, high-performing solutions across a behavior space, contrasting with conventional optimization methods that target a single optimum. Adversarial problems present unique challenges for QD approaches, as the competing nature of opposing sides creates interdependencies that complicate the evolution process. Existing QD methods applied to such scenarios typically fix one side, constraining behavioral diversity. We present Generational Adversarial MAP-Elites (GAME), a coevolutionary QD algorithm that evolves both sides by alternating which side is evolved at each generation. By integrating a vision embedding model, our approach eliminates the need for domain-specific behavior descriptors and instead operates on video. We validate GAME across three distinct adversarial domains: a multi-agent battle game, a soft-robot wrestling environment, and a deck building game. Our experiments reveal several evolutionary phenomena, including arms-race-like dynamics, enhanced novelty through generational extinction, and the preservation of neutral mutations as crucial stepping stones toward the highest performance. While GAME successfully illuminates all adversarial problems, its capacity for truly open-ended discovery remains constrained by the finite nature of the underlying search spaces. These findings establish GAME's broad applicability while highlighting opportunities for future research into open-ended adversarial coevolution.
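The alternating-generation scheme in the abstract can be illustrated with a toy sketch. This is not the authors' implementation (which operates on video via a vision embedding model); the genome, descriptor, fitness, and grid size below are all invented for illustration:

```python
import random

GRID = 10  # toy behavior space: one descriptor, discretized into GRID cells


def behavior(sol):
    # toy descriptor: bucket the genome mean into a grid cell
    return min(GRID - 1, int(sum(sol) / len(sol) * GRID))


def fitness(sol, opponent):
    # toy adversarial game: each side maximizes its own total minus the opponent's
    return sum(sol) - sum(opponent)


def evolve_side(archive, opponents, evals=200):
    # one generation: mutate elites of this side against frozen opponent elites,
    # keeping the best solution found per behavior cell (MAP-Elites update)
    for _ in range(evals):
        parent = (random.choice(list(archive.values()))[0] if archive
                  else [random.random() for _ in range(4)])
        child = [min(1.0, max(0.0, g + random.gauss(0, 0.1))) for g in parent]
        opp = (random.choice(list(opponents.values()))[0] if opponents
               else [0.5] * 4)
        f = fitness(child, opp)
        cell = behavior(child)
        if cell not in archive or f > archive[cell][1]:
            archive[cell] = (child, f)


random.seed(0)
side_a, side_b = {}, {}
for gen in range(10):  # alternate which side is evolved each generation
    if gen % 2 == 0:
        evolve_side(side_a, side_b)
    else:
        evolve_side(side_b, side_a)
print(len(side_a), len(side_b))
```

Each archive illuminates one side's behavior space while the opposing side's elites serve as the (frozen) evaluation opponents for that generation.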
Related papers
- EvoX: Meta-Evolution for Automated Discovery [115.89434419482797]
EvoX is an adaptive evolution method that optimizes its own evolution process. It continuously updates how prior solutions are selected and varied based on progress. It outperforms existing AI-driven evolutionary methods, including AlphaEvolve, OpenEvolve, GEPA, and ShinkaEvolve, on the majority of tasks.
arXiv Detail & Related papers (2026-02-26T18:54:41Z)
- Discovering Multiagent Learning Algorithms with Large Language Models [8.649235365712004]
We propose the use of AlphaEvolve, an evolutionary coding agent powered by large language models, to automatically discover new multiagent learning algorithms. We demonstrate the generality of this framework by evolving novel variants for two distinct paradigms of game-theoretic learning.
arXiv Detail & Related papers (2026-02-18T22:41:00Z)
- Discrete Gene Crossover Accelerates Solution Discovery in Quality-Diversity Algorithms [0.0]
Quality-Diversity algorithms aim to discover diverse, high-performing solutions across behavioral niches. Existing mutation operators rely on gradual variation of solutions. We propose a mutation operator that augments variation-based operators with discrete, gene-level crossover.
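A discrete, gene-level crossover can be sketched as a uniform crossover that copies each gene wholesale from one parent, combined with the gradual Gaussian variation it augments. The operators and parameters below are illustrative assumptions, not the paper's exact design:

```python
import random


def uniform_crossover(a, b):
    # discrete, gene-level crossover: each gene is copied wholesale
    # from one parent, with no arithmetic blending between parents
    return [ai if random.random() < 0.5 else bi for ai, bi in zip(a, b)]


def gaussian_mutation(genome, sigma=0.1):
    # the gradual, variation-based operator that crossover augments
    return [g + random.gauss(0, sigma) for g in genome]


random.seed(1)
parent_1, parent_2 = [0.0] * 6, [1.0] * 6
child = gaussian_mutation(uniform_crossover(parent_1, parent_2))
print(child)
```

Because crossover swaps whole genes rather than averaging them, gene values already discovered elsewhere in the archive can jump intact into new offspring, which is the intuition behind faster solution discovery.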
arXiv Detail & Related papers (2026-02-14T11:44:21Z)
- Tournament Informed Adversarial Quality Diversity [35.69501554993759]
We propose two tournament-informed task selection methods to promote higher quality and diversity at each generation. We evaluate the variants across three adversarial problems: Pong, a Cat-and-mouse game, and a Pursuers-and-evaders game.
arXiv Detail & Related papers (2026-01-27T12:55:23Z)
- PACEvolve: Enabling Long-Horizon Progress-Aware Consistent Evolution [64.15555230987222]
PACEvolve is a framework designed to robustly govern the agent's context and search dynamics. We demonstrate that PACEvolve provides a systematic path to consistent, long-horizon self-improvement.
arXiv Detail & Related papers (2026-01-15T18:25:23Z)
- Experience-Guided Reflective Co-Evolution of Prompts and Heuristics for Automatic Algorithm Design [124.54166764570972]
Combinatorial optimization problems are traditionally tackled with handcrafted algorithms. Recent progress has highlighted the potential of automatic design powered by large language models. We propose Experience-Guided Reflective Co-Evolution of Prompts and Heuristics (EvoPH) for automatic algorithm design.
arXiv Detail & Related papers (2025-09-29T09:24:09Z)
- Quality Diversity Genetic Programming for Learning Scheduling Heuristics [36.015695494167495]
Quality-Diversity (QD) optimization is a multifaceted approach in evolutionary algorithms that aims to generate a set of solutions that are both high-performing and diverse. This paper introduces a novel QD framework for dynamic scheduling problems. We propose a map-building strategy that visualizes solutions by linking genotypes to their behaviors, enabling their representation on a QD map.
arXiv Detail & Related papers (2025-07-03T02:01:30Z)
- Faster Optimal Coalition Structure Generation via Offline Coalition Selection and Graph-Based Search [61.08720171136229]
We present a novel algorithm, SMART, for the problem based on a hybridization of three innovative techniques.
Two of these techniques are based on dynamic programming, where we show a powerful connection between the coalitions selected for evaluation and the performance of the algorithms.
Our techniques bring a new way of approaching the problem and a new level of precision to the field.
arXiv Detail & Related papers (2024-07-22T23:24:03Z)
- DARLEI: Deep Accelerated Reinforcement Learning with Evolutionary Intelligence [77.78795329701367]
We present DARLEI, a framework that combines evolutionary algorithms with parallelized reinforcement learning.
We characterize DARLEI's performance under various conditions, revealing factors impacting diversity of evolved morphologies.
We hope to extend DARLEI in future work to include interactions between diverse morphologies in richer environments.
arXiv Detail & Related papers (2023-12-08T16:51:10Z)
- Efficient Quality-Diversity Optimization through Diverse Quality Species [3.428706362109921]
We show that a diverse population of solutions can be found without the limitation of needing an archive or defining the range of behaviors in advance.
We propose Diverse Quality Species (DQS) as an alternative to archive-based Quality-Diversity (QD) algorithms.
arXiv Detail & Related papers (2023-04-14T23:15:51Z)
- Don't Bet on Luck Alone: Enhancing Behavioral Reproducibility of Quality-Diversity Solutions in Uncertain Domains [2.639902239625779]
We introduce the Archive Reproducibility Improvement Algorithm (ARIA).
ARIA is a plug-and-play approach that improves the quality of solutions present in an archive.
We show that our algorithm enhances the quality and descriptor space coverage of any given archive by at least 50%.
arXiv Detail & Related papers (2023-04-07T14:45:14Z)
- Differentiable Bilevel Programming for Stackelberg Congestion Games [47.60156422249365]
In a Stackelberg congestion game (SCG), a leader aims to maximize their own gain by anticipating and manipulating the equilibrium state at which the followers settle by playing a congestion game.
Here, we attempt to tackle this computational challenge by marrying traditional methodologies with the latest differentiable programming techniques in machine learning.
We propose two new local search algorithms for SCGs. The first is a gradient descent algorithm that obtains the derivatives by unrolling ILD via differentiable programming.
The second algorithm adds a twist by cutting short the followers' evolution trajectory.
arXiv Detail & Related papers (2022-09-15T21:32:23Z)
- Runtime Analysis of Competitive co-Evolutionary Algorithms for Maximin Optimisation of a Bilinear Function [1.3053649021965603]
Co-evolutionary algorithms have a wide range of applications, such as in hardware design, evolution of strategies for board games, and patching software bugs.
It is an open challenge to develop a theory that can predict when co-evolutionary algorithms find solutions efficiently and reliably.
This paper provides a first step in developing runtime analysis for population-based competitive co-evolutionary algorithms.
arXiv Detail & Related papers (2022-06-30T12:35:36Z)
- An Effective and Efficient Evolutionary Algorithm for Many-Objective Optimization [2.5594423685710814]
We develop an effective evolutionary algorithm (E3A) that can handle various many-objective problems.
In E3A, inspired by SDE, a novel population maintenance method is proposed.
We conduct extensive experiments and show that E3A performs better than 11 state-of-the-art many-objective evolutionary algorithms.
arXiv Detail & Related papers (2022-05-31T15:35:46Z)
- Result Diversification by Multi-objective Evolutionary Algorithms with Theoretical Guarantees [94.72461292387146]
We propose to reformulate the result diversification problem as a bi-objective search problem and solve it with a multi-objective evolutionary algorithm (EA). We theoretically prove that the GSEMO can achieve the optimal polynomial-time approximation ratio, $1/2$. When the objective function changes dynamically, the GSEMO can maintain this approximation ratio in polynomial running time, addressing the open question proposed by Borodin et al.
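The bi-objective reformulation can be illustrated with a minimal GSEMO sketch on a toy diversification problem. The item layout, the two objectives, and the evaluation budget below are illustrative assumptions, not the paper's benchmark:

```python
import random

POSITIONS = list(range(8))  # toy items on a line; distance stands in for dissimilarity


def objectives(bits):
    # objective 1: negated subset size (prefer small result sets)
    # objective 2: sum of pairwise distances among selected items (diversity)
    chosen = [i for i, b in enumerate(bits) if b]
    size_cost = -len(chosen)
    diversity = sum(abs(POSITIONS[i] - POSITIONS[j])
                    for i in chosen for j in chosen if i < j)
    return (size_cost, diversity)


def dominates(f, g):
    # f Pareto-dominates g: no worse in every objective, better in one
    return all(x >= y for x, y in zip(f, g)) and f != g


def gsemo(n=8, evals=2000):
    # GSEMO: keep an archive of mutually non-dominated bit strings,
    # pick a uniform-random parent, apply standard bit-flip mutation
    empty = tuple([0] * n)
    pop = {empty: objectives(list(empty))}
    for _ in range(evals):
        parent = random.choice(list(pop))
        child = list(parent)
        for i in range(n):  # flip each bit independently with rate 1/n
            if random.random() < 1.0 / n:
                child[i] = 1 - child[i]
        child = tuple(child)
        f = objectives(list(child))
        if not any(dominates(g, f) for g in pop.values()):
            pop = {s: g for s, g in pop.items() if not dominates(f, g)}
            pop[child] = f
    return pop


random.seed(2)
front = gsemo()
print(len(front))
```

The returned archive approximates the Pareto front between result-set size and diversity; picking any single trade-off point from it recovers a solution to the original diversification problem.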
arXiv Detail & Related papers (2021-10-18T14:00:22Z)
- Portfolio Search and Optimization for General Strategy Game-Playing [58.896302717975445]
We propose a new algorithm for optimization and action-selection based on the Rolling Horizon Evolutionary Algorithm.
For the optimization of the agents' parameters and portfolio sets we study the use of the N-tuple Bandit Evolutionary Algorithm.
An analysis of the agents' performance shows that the proposed algorithm generalizes well to all game-modes and is able to outperform other portfolio methods.
arXiv Detail & Related papers (2021-04-21T09:28:28Z)
- Selection-Expansion: A Unifying Framework for Motion-Planning and Diversity Search Algorithms [69.87173070473717]
We investigate the properties of two diversity search algorithms, the Novelty Search and the Goal Exploration Process algorithms.
The relation to MP algorithms reveals that the smoothness, or lack of smoothness of the mapping between the policy parameter space and the outcome space plays a key role in the search efficiency.
arXiv Detail & Related papers (2021-04-10T13:52:27Z)
- End-to-End Learning and Intervention in Games [60.41921763076017]
We provide a unified framework for learning and intervention in games.
We propose two approaches, respectively based on explicit and implicit differentiation.
The analytical results are validated using several real-world problems.
arXiv Detail & Related papers (2020-10-26T18:39:32Z)
- On Population-Based Algorithms for Distributed Constraint Optimization Problems [12.21350091202884]
We study an emerging class of incomplete algorithms that are broadly termed as population-based algorithms.
Our first approach, Anytime Evolutionary DCOP or AED, exploits evolutionary optimization meta-heuristics to solve DCOPs.
In our second contribution, we show that population-based approaches can be combined with local search approaches.
arXiv Detail & Related papers (2020-09-02T06:39:30Z)
- CDE-GAN: Cooperative Dual Evolution Based Generative Adversarial Network [32.91091802159556]
Generative adversarial networks (GANs) have been a popular deep generative model for real-world applications.
We propose a Cooperative Dual Evolution based Generative Adversarial Network (CDE-GAN) to circumvent these drawbacks.
In essence, CDE-GAN incorporates dual evolution with respect to the generator(s) and discriminators into a unified adversarial framework.
arXiv Detail & Related papers (2020-08-21T09:39:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.