Double Deep Q-Learning in Opponent Modeling
- URL: http://arxiv.org/abs/2211.15384v1
- Date: Thu, 24 Nov 2022 06:07:47 GMT
- Title: Double Deep Q-Learning in Opponent Modeling
- Authors: Yangtianze Tao and John Doe
- Abstract summary: Opponent modeling is needed in multi-agent systems where secondary agents with conflicting agendas also alter their methods.
In this study, we simulate the main agent's and secondary agents' tactics using Double Deep Q-Networks (DDQN) with a prioritized experience replay mechanism.
Under the opponent modeling setup, a Mixture-of-Experts architecture is used to identify various opponent strategy patterns.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Opponent modeling is needed in multi-agent systems where secondary
agents with conflicting agendas also alter their methods. In this study, we simulate the main
agent's and secondary agents' tactics using Double Deep Q-Networks (DDQN) with
a prioritized experience replay mechanism. Then, under the opponent modeling
setup, a Mixture-of-Experts architecture is used to identify various opponent
strategy patterns. Finally, we analyze our models in two environments with
several agents. The findings indicate that the Mixture-of-Experts model, which
is based on opponent modeling, performs better than DDQN.
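The abstract describes DDQN with a prioritized experience replay mechanism. The following minimal NumPy sketch is our own illustration of those two standard components, not the paper's code: `q_online`, `q_target`, and all hyperparameter values are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def ddqn_targets(q_online, q_target, rewards, next_states, dones, gamma=0.99):
    """Double-DQN target: the online network selects the next action,
    the target network evaluates it, reducing overestimation bias."""
    next_actions = np.argmax(q_online(next_states), axis=1)  # action selection
    next_values = q_target(next_states)[np.arange(len(next_actions)), next_actions]  # evaluation
    return rewards + gamma * (1.0 - dones) * next_values

def per_sample(priorities, batch_size, alpha=0.6, beta=0.4):
    """Proportional prioritized replay: draw index i with probability
    p_i^alpha / sum_j p_j^alpha and return importance-sampling weights."""
    probs = priorities ** alpha
    probs /= probs.sum()
    idx = rng.choice(len(priorities), size=batch_size, p=probs)
    weights = (len(priorities) * probs[idx]) ** (-beta)
    weights /= weights.max()  # normalize so the largest weight is 1
    return idx, weights
```

With uniform priorities every importance weight normalizes to 1, recovering ordinary uniform replay; skewed priorities bias sampling toward high-TD-error transitions while the weights correct the induced bias.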
Related papers
- Generalizable Agent Modeling for Agent Collaboration-Competition Adaptation with Multi-Retrieval and Dynamic Generation [19.74776726500979]
Adapting a single agent to a new multi-agent system brings challenges, necessitating adjustments across various tasks, environments, and interactions with unknown teammates and opponents. We propose a more comprehensive setting, Agent Collaborative-Competitive Adaptation (ACCA), which evaluates an agent's ability to generalize across diverse scenarios. In ACCA, agents adjust to task and environmental changes, collaborate with unseen teammates, and compete against unknown opponents.
arXiv Detail & Related papers (2025-06-20T03:28:18Z) - Keep on Swimming: Real Attackers Only Need Partial Knowledge of a Multi-Model System [0.0]
We introduce a method to craft an adversarial attack against the overall multi-model system.
To our knowledge, this is the first attack specifically designed for this threat model.
arXiv Detail & Related papers (2024-10-30T22:23:16Z) - xLAM: A Family of Large Action Models to Empower AI Agent Systems [111.5719694445345]
We release xLAM, a series of large action models designed for AI agent tasks.
xLAM consistently delivers exceptional performance across multiple agent ability benchmarks.
arXiv Detail & Related papers (2024-09-05T03:22:22Z) - EMR-Merging: Tuning-Free High-Performance Model Merging [55.03509900949149]
We show that Elect, Mask & Rescale-Merging (EMR-Merging) achieves outstanding performance compared to existing merging methods.
EMR-Merging is tuning-free, thus requiring no data availability or any additional training while showing impressive performance.
arXiv Detail & Related papers (2024-05-23T05:25:45Z) - Jointly Training and Pruning CNNs via Learnable Agent Guidance and Alignment [69.33930972652594]
We propose a novel structural pruning approach to jointly learn the weights and structurally prune architectures of CNN models.
The core element of our method is a Reinforcement Learning (RL) agent whose actions determine the pruning ratios of the CNN model's layers.
We conduct the joint training and pruning by iteratively training the model's weights and the agent's policy.
arXiv Detail & Related papers (2024-03-28T15:22:29Z) - MADiff: Offline Multi-agent Learning with Diffusion Models [79.18130544233794]
Diffusion models (DMs) have recently achieved huge success in various scenarios, including offline reinforcement learning.
We propose MADiff, a novel generative multi-agent learning framework to tackle this problem.
Our experiments show the superior performance of MADiff compared to baseline algorithms in a wide range of multi-agent learning tasks.
arXiv Detail & Related papers (2023-05-27T02:14:09Z) - Combining Deep Reinforcement Learning and Search with Generative Models for Game-Theoretic Opponent Modeling [30.465929764202155]
We introduce a scalable and generic multiagent training regime for opponent modeling using deep game-theoretic reinforcement learning. We first propose Generative Best Response (GenBR), a best-response algorithm based on Monte-Carlo Tree Search (MCTS). We use this new method under the framework of Policy Space Response Oracles (PSRO) to automate the generation of an offline opponent model.
arXiv Detail & Related papers (2023-02-01T23:06:23Z) - Decision-making with Speculative Opponent Models [10.594910251058087]
We introduce Distributional Opponent-aided Multi-agent Actor-Critic (DOMAC).
DOMAC is the first speculative opponent-modelling algorithm that relies solely on local information (i.e., the controlled agent's observations, actions, and rewards).
arXiv Detail & Related papers (2022-11-22T01:29:47Z) - MetaQA: Combining Expert Agents for Multi-Skill Question Answering [49.35261724460689]
We argue that despite the promising results of multi-dataset models, some domains or QA formats might require specific architectures.
We propose to combine expert agents with a novel, flexible, and training-efficient architecture that considers questions, answer predictions, and answer-prediction confidence scores.
arXiv Detail & Related papers (2021-12-03T14:05:52Z) - Multi-Agent Collaboration via Reward Attribution Decomposition [75.36911959491228]
We propose Collaborative Q-learning (CollaQ) that achieves state-of-the-art performance in the StarCraft multi-agent challenge.
CollaQ is evaluated on various StarCraft maps and shown to outperform existing state-of-the-art techniques.
arXiv Detail & Related papers (2020-10-16T17:42:11Z) - Pareto-Optimal Bit Allocation for Collaborative Intelligence [39.11380888887304]
Collaborative intelligence (CI) has emerged as a promising framework for deployment of Artificial Intelligence (AI)-based services on mobile/edge devices.
In this paper, we study bit allocation for feature coding in multi-stream CI systems.
arXiv Detail & Related papers (2020-09-25T20:48:33Z) - Learning to Model Opponent Learning [11.61673411387596]
Multi-Agent Reinforcement Learning (MARL) considers settings in which a set of coexisting agents interact with one another and their environment.
This poses a great challenge for value function-based algorithms whose convergence usually relies on the assumption of a stationary environment.
We develop a novel approach to modelling an opponent's learning dynamics which we term Learning to Model Opponent Learning (LeMOL)
arXiv Detail & Related papers (2020-06-06T17:19:04Z) - Variational Autoencoders for Opponent Modeling in Multi-Agent Systems [9.405879323049659]
Multi-agent systems exhibit complex behaviors that emanate from the interactions of multiple agents in a shared environment.
In this work, we are interested in controlling one agent in a multi-agent system and successfully learn to interact with the other agents that have fixed policies.
Modeling the behavior of other agents (opponents) is essential in understanding the interactions of the agents in the system.
arXiv Detail & Related papers (2020-01-29T13:38:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.