Evolving Strategies for Competitive Multi-Agent Search
- URL: http://arxiv.org/abs/2306.10640v2
- Date: Sat, 1 Jul 2023 21:43:59 GMT
- Title: Evolving Strategies for Competitive Multi-Agent Search
- Authors: Erkin Bahceci, Riitta Katila, and Risto Miikkulainen
- Abstract summary: This article first formalizes human creative problem solving as competitive multi-agent search (CMAS).
The main hypothesis is that evolutionary computation can be used to discover effective strategies for CMAS.
Different specialized strategies are evolved for each different competitive environment, and also general strategies that perform well across environments.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While evolutionary computation is well suited for automatic discovery in
engineering, it can also be used to gain insight into how humans and
organizations could perform more effectively. Using a real-world problem of
innovation search in organizations as the motivating example, this article
first formalizes human creative problem solving as competitive multi-agent
search (CMAS). CMAS is different from existing single-agent and team search
problems in that the agents interact through knowledge of other agents'
searches and through the dynamic changes in the search landscape that result
from these searches. The main hypothesis is that evolutionary computation can
be used to discover effective strategies for CMAS; this hypothesis is verified
in a series of experiments on the NK model, i.e., partially correlated and
tunably rugged fitness landscapes. Different specialized strategies are evolved
for each different competitive environment, and also general strategies that
perform well across environments. These strategies are more effective and more
complex than hand-designed strategies and a strategy based on traditional tree
search. Using a novel spherical visualization of such landscapes, insight is
gained about how successful strategies work, e.g., by tracking positive changes
in the landscape. The article thus provides a possible framework for studying
various human creative activities as competitive multi-agent search in the
future.
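The NK model used in the experiments above can be made concrete with a short sketch. The following is a minimal, illustrative construction of an NK landscape (N loci, each locus's fitness contribution depending on itself and K neighbors) together with a simple hill-climbing search on it; the cyclic neighborhood layout and the random contribution tables are one common convention, not necessarily the exact setup of the paper:

```python
import itertools
import random

def make_nk_landscape(n, k, seed=0):
    """Build an NK fitness function: n binary loci, each locus's
    contribution determined by itself and its k cyclically adjacent
    neighbors via a random lookup table (higher k = more rugged)."""
    rng = random.Random(seed)
    # One contribution table per locus, indexed by the (k+1)-bit neighborhood.
    tables = [
        {bits: rng.random() for bits in itertools.product((0, 1), repeat=k + 1)}
        for _ in range(n)
    ]

    def fitness(genome):
        total = 0.0
        for i in range(n):
            neighborhood = tuple(genome[(i + j) % n] for j in range(k + 1))
            total += tables[i][neighborhood]
        return total / n  # mean contribution, always in [0, 1]

    return fitness

# A single-agent baseline searcher: greedy one-bit hill climbing.
f = make_nk_landscape(n=12, k=3)
genome = tuple(random.Random(1).randrange(2) for _ in range(12))
for _ in range(50):
    best = genome
    for i in range(len(genome)):
        flipped = genome[:i] + (1 - genome[i],) + genome[i + 1:]
        if f(flipped) > f(best):
            best = flipped
    if best == genome:
        break  # local optimum reached
    genome = best
print(f(genome))
```

In the CMAS setting the landscape would additionally change dynamically as competing agents search it, which is what makes the evolved strategies depart from simple hill climbing like this.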
Related papers
- Active Legibility in Multiagent Reinforcement Learning
The legibility-oriented framework allows agents to conduct legible actions so as to help others optimise their behaviors.
The experimental results demonstrate that the new framework is more efficient and costs less training time compared to several multiagent reinforcement learning algorithms.
arXiv (2024-10-28)
- Efficient Adaptation in Mixed-Motive Environments via Hierarchical Opponent Modeling and Planning
We propose Hierarchical Opponent modeling and Planning (HOP), a novel multi-agent decision-making algorithm.
HOP is hierarchically composed of two modules: an opponent modeling module that infers others' goals and learns corresponding goal-conditioned policies, and a planning module that plans against those inferred goals.
HOP exhibits superior few-shot adaptation capabilities when interacting with various unseen agents, and excels in self-play scenarios.
arXiv (2024-06-12)
- CompetEvo: Towards Morphological Evolution from Competition
We propose competitive evolution (CompetEvo), which co-evolves agents' designs and tactics in confrontation.
The results reveal that our method enables agents to evolve a more suitable design and strategy for fighting.
arXiv (2024-05-28)
- Mathematics of multi-agent learning systems at the interface of game theory and artificial intelligence
Evolutionary Game Theory and Artificial Intelligence are two fields that, at first glance, might seem distinct, but they have notable connections and intersections.
The former focuses on the evolution of behaviors (or strategies) in a population, where individuals interact with others and update their strategies based on imitation (or social learning).
The latter, meanwhile, is centered on machine learning algorithms and (deep) neural networks.
arXiv (2024-03-09)
- Fast Peer Adaptation with Context-aware Exploration
We propose a peer identification reward for learning agents in multi-agent games.
This reward motivates the agent to learn a context-aware policy for effective exploration and fast adaptation.
We evaluate our method on diverse testbeds that involve competitive (Kuhn Poker), cooperative (PO-Overcooked), or mixed (Predator-Prey-W) games with peer agents.
arXiv (2024-02-04)
- ALYMPICS: LLM Agents Meet Game Theory -- Exploring Strategic Decision-Making with AI Agents
Alympics is a systematic simulation framework utilizing Large Language Model (LLM) agents for game theory research.
Alympics creates a versatile platform for studying complex game theory problems.
arXiv (2023-11-06)
- Optimal foraging strategies can be learned
We explore optimal foraging strategies through a reinforcement learning framework.
We first prove theoretically that maximizing rewards in our reinforcement learning model is equivalent to optimizing foraging efficiency.
We then show with numerical experiments that, in the paradigmatic model of non-destructive search, our agents learn foraging strategies that outperform some of the best-known strategies, such as Lévy walks.
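The Lévy walks mentioned as a baseline are random walks whose step lengths follow a heavy-tailed power law. A minimal illustrative sampler, assuming a Pareto step-length distribution p(l) ~ l^(-mu) with 1 < mu <= 3 (the helper name and parameters are illustrative, not from the paper):

```python
import math
import random

def levy_step(mu, l_min=1.0, rng=random):
    """Sample a step length from a Pareto power law p(l) ~ l^(-mu),
    l >= l_min, via inverse-transform sampling."""
    u = rng.random()  # uniform in [0, 1)
    return l_min * (1.0 - u) ** (1.0 / (1.0 - mu))

# A short 2-D Levy walk: heavy-tailed step lengths, uniform random headings.
rng = random.Random(42)
x = y = 0.0
for _ in range(100):
    theta = rng.uniform(0.0, 2.0 * math.pi)
    step = levy_step(mu=2.0, rng=rng)
    x += step * math.cos(theta)
    y += step * math.sin(theta)
print(x, y)
```

The occasional very long steps produced by the heavy tail are what make such walks efficient in sparse, non-destructive search settings.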
arXiv (2023-03-10)
- Pick Your Battles: Interaction Graphs as Population-Level Objectives for Strategic Diversity
We study how to construct diverse populations of agents by carefully structuring how individuals within a population interact.
Our approach is based on interaction graphs, which control the flow of information between agents during training.
We provide evidence for the importance of diversity in multi-agent training and analyse the effect of applying different interaction graphs on the training trajectories, diversity and performance of populations in a range of games.
arXiv (2021-10-08)
- Portfolio Search and Optimization for General Strategy Game-Playing
We propose a new algorithm for optimization and action-selection based on the Rolling Horizon Evolutionary Algorithm.
For the optimization of the agents' parameters and portfolio sets, we study the use of the N-tuple Bandit Evolutionary Algorithm.
An analysis of the agents' performance shows that the proposed algorithm generalizes well to all game-modes and is able to outperform other portfolio methods.
arXiv (2021-04-21)
- Natural Emergence of Heterogeneous Strategies in Artificially Intelligent Competitive Teams
We develop a competitive multi-agent environment called FortAttack in which two teams compete against each other.
We observe a natural emergence of heterogeneous behavior amongst homogeneous agents when such behavior can lead to the team's success.
We propose ensemble training, in which we utilize the evolved opponent strategies to train a single policy for friendly agents.
arXiv (2020-07-06)
- Adaptive strategy in differential evolution via explicit exploitation and exploration controls
This paper proposes a new strategy adaptation method, named the explicit adaptation scheme (Ea scheme).
Ea scheme separates multiple strategies and employs them on-demand.
Experimental studies on benchmark functions demonstrate the effectiveness of Ea scheme.
arXiv (2020-02-03)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.