HyperMARL: Adaptive Hypernetworks for Multi-Agent RL
- URL: http://arxiv.org/abs/2412.04233v3
- Date: Thu, 22 May 2025 15:28:13 GMT
- Title: HyperMARL: Adaptive Hypernetworks for Multi-Agent RL
- Authors: Kale-ab Abebe Tessera, Arrasy Rahman, Amos Storkey, Stefano V. Albrecht
- Abstract summary: HyperMARL is a PS approach using hypernetworks for dynamic agent-specific parameters. It reduces policy gradient variance, facilitates shared-policy adaptation, and helps mitigate cross-agent interference. These findings establish HyperMARL as a versatile approach for adaptive MARL.
- Score: 9.154125291830058
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adaptability to specialised or homogeneous behaviours is critical in cooperative multi-agent reinforcement learning (MARL). Parameter sharing (PS) techniques, common for efficient adaptation, often limit behavioural diversity due to cross-agent gradient interference, which we show can be exacerbated by the coupling of observations and agent IDs. Current remedies typically add complexity through altered objectives, manual preset diversity levels, or sequential updates. We ask: can shared policies adapt without these complexities? We propose HyperMARL, a PS approach using hypernetworks for dynamic agent-specific parameters, without altering the RL objective or requiring preset diversity levels. HyperMARL's explicit decoupling of observation- and agent-conditioned gradients empirically reduces policy gradient variance, facilitates shared-policy adaptation (including specialisation), and helps mitigate cross-agent interference. Across diverse MARL benchmarks (up to 20 agents), requiring homogeneous, heterogeneous, or mixed behaviours, HyperMARL achieves competitive performance against key baselines -- fully shared, non-parameter sharing, and three diversity-promoting methods -- while preserving behavioural diversity comparable to non-parameter sharing. These findings establish HyperMARL as a versatile approach for adaptive MARL. The code is publicly available at https://github.com/KaleabTessera/HyperMARL.
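The abstract describes the core mechanism: a hypernetwork conditioned only on the agent ID generates agent-specific policy parameters, while observations flow through the generated policy, keeping observation- and agent-conditioned gradients decoupled. Below is a minimal PyTorch sketch of that idea; the layer sizes, single generated linear head, and discrete-action setting are illustrative assumptions of ours, not the paper's architecture (see the linked repository for the reference implementation).

```python
# Minimal sketch (not the authors' code) of the HyperMARL idea: a hypernetwork,
# conditioned only on the agent ID, generates the weights of a per-agent policy
# head, while observations are processed by that generated head.
import torch
import torch.nn as nn


class HyperPolicy(nn.Module):
    def __init__(self, n_agents: int, obs_dim: int, act_dim: int, embed_dim: int = 64):
        super().__init__()
        self.obs_dim, self.act_dim = obs_dim, act_dim
        # Agent-conditioned pathway: ID -> embedding -> hypernetwork.
        self.agent_embed = nn.Embedding(n_agents, embed_dim)
        self.hyper_w = nn.Linear(embed_dim, obs_dim * act_dim)  # generates weights
        self.hyper_b = nn.Linear(embed_dim, act_dim)            # generates biases

    def forward(self, obs: torch.Tensor, agent_id: torch.Tensor) -> torch.distributions.Categorical:
        # Observation- and agent-conditioned computations stay decoupled:
        # the hypernetwork never sees obs, the generated head never sees the ID.
        e = self.agent_embed(agent_id)                               # (B, embed_dim)
        w = self.hyper_w(e).view(-1, self.act_dim, self.obs_dim)     # (B, act_dim, obs_dim)
        b = self.hyper_b(e)                                          # (B, act_dim)
        logits = torch.bmm(w, obs.unsqueeze(-1)).squeeze(-1) + b
        return torch.distributions.Categorical(logits=logits)


# Usage: all agents share one HyperPolicy; behaviour differs only through the ID.
policy = HyperPolicy(n_agents=4, obs_dim=10, act_dim=5)
actions = policy(torch.randn(4, 10), torch.arange(4)).sample()
```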
Related papers
- Adaptability in Multi-Agent Reinforcement Learning: A Framework and Unified Review [9.246912481179464]
Multi-Agent Reinforcement Learning (MARL) has shown clear effectiveness in coordinating multiple agents across simulated benchmarks and constrained scenarios. This survey contributes to the development of algorithms that are better suited for deployment in dynamic, real-world multi-agent systems.
arXiv Detail & Related papers (2025-07-14T10:39:17Z) - Offline Multi-agent Reinforcement Learning via Score Decomposition [51.23590397383217]
Offline cooperative multi-agent reinforcement learning (MARL) faces unique challenges due to distributional shifts. This work is the first to explicitly address the distributional gap between offline and online MARL.
arXiv Detail & Related papers (2025-05-09T11:42:31Z) - Low-Rank Agent-Specific Adaptation (LoRASA) for Multi-Agent Policy Learning [3.333453555166201]
Multi-agent reinforcement learning (MARL) often relies on parameter sharing (PS) to scale efficiently. We propose Low-Rank Agent-Specific Adaptation (LoRASA), a novel approach that treats each agent's policy as a specialized "task" fine-tuned from a shared backbone. We evaluate LoRASA on challenging benchmarks including the StarCraft Multi-Agent Challenge (SMAC) and Multi-Agent MuJoCo (MAMuJoCo).
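As a rough sketch of the mechanism summarised above (a shared backbone plus low-rank, agent-specific corrections, in the spirit of LoRA), the snippet below adds a per-agent low-rank delta to one shared linear layer. The class name, rank, and initialisation are illustrative assumptions, not LoRASA's actual design.

```python
# Minimal sketch (an assumed reading of the summary, not the LoRASA code):
# each agent adds a low-rank correction A_i @ B_i on top of a shared layer.
import torch
import torch.nn as nn


class SharedLinearWithAgentLoRA(nn.Module):
    def __init__(self, n_agents: int, in_dim: int, out_dim: int, rank: int = 4):
        super().__init__()
        self.shared = nn.Linear(in_dim, out_dim)                      # shared backbone layer
        self.A = nn.Parameter(torch.zeros(n_agents, out_dim, rank))   # per-agent low-rank factors
        self.B = nn.Parameter(torch.randn(n_agents, rank, in_dim) * 0.01)

    def forward(self, x: torch.Tensor, agent_id: torch.Tensor) -> torch.Tensor:
        # agent_id: (B,) long tensor selecting each sample's low-rank adapter.
        delta_w = torch.bmm(self.A[agent_id], self.B[agent_id])       # (B, out_dim, in_dim)
        agent_out = torch.bmm(delta_w, x.unsqueeze(-1)).squeeze(-1)   # agent-specific correction
        return self.shared(x) + agent_out
```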
arXiv Detail & Related papers (2025-02-08T13:57:53Z) - Heterogeneous Multi-Agent Reinforcement Learning for Distributed Channel Access in WLANs [47.600901884970845]
This paper investigates the use of multi-agent reinforcement learning (MARL) to address distributed channel access in wireless local area networks.
In particular, we consider the challenging yet more practical case where the agents heterogeneously adopt value-based or policy-based reinforcement learning algorithms to train the model.
We propose a heterogeneous MARL training framework, named QPMIX, which adopts a centralized training with distributed execution paradigm to enable heterogeneous agents to collaborate.
arXiv Detail & Related papers (2024-12-18T13:50:31Z) - SMoA: Improving Multi-agent Large Language Models with Sparse Mixture-of-Agents [14.08299391695986]
We propose a sparse mixture-of-agents (SMoA) framework to improve the efficiency and diversity of multi-agent LLMs.
SMoA introduces novel Response Selection and Early Stopping mechanisms to sparsify information flows among individual LLM agents.
Experiments on reasoning, alignment, and fairness benchmarks demonstrate that SMoA achieves performance comparable to traditional mixture-of-agents approaches.
arXiv Detail & Related papers (2024-11-05T17:33:39Z) - Task-Aware Harmony Multi-Task Decision Transformer for Offline Reinforcement Learning [70.96345405979179]
The purpose of offline multi-task reinforcement learning (MTRL) is to develop a unified policy applicable to diverse tasks without the need for online environmental interaction.
However, variations in task content and complexity pose significant challenges in policy formulation.
We introduce the Harmony Multi-Task Decision Transformer (HarmoDT), a novel solution designed to identify an optimal harmony subspace of parameters for each task.
arXiv Detail & Related papers (2024-11-02T05:49:14Z) - Kaleidoscope: Learnable Masks for Heterogeneous Multi-agent Reinforcement Learning [14.01772209044574]
We introduce Kaleidoscope, a novel adaptive partial parameter sharing scheme.
It promotes diversity among policy networks by encouraging discrepancy among per-agent learnable masks, without sacrificing the efficiency of parameter sharing.
We extend Kaleidoscope to critic ensembles in the context of actor-critic algorithms, which could help improve value estimations.
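Read literally, the summary above suggests per-agent learnable masks applied to shared parameters, plus a regulariser that pushes the masks apart. The following is a minimal sketch under that reading; the soft sigmoid masks and dot-product discrepancy penalty are our illustrative assumptions, not Kaleidoscope's exact formulation.

```python
# Minimal sketch (an assumed reading of the summary, not the Kaleidoscope code):
# every agent applies its own learnable soft mask to shared weights, and a
# regulariser rewards discrepancy between the agents' masks.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedSharedLinear(nn.Module):
    def __init__(self, n_agents: int, in_dim: int, out_dim: int):
        super().__init__()
        self.shared = nn.Linear(in_dim, out_dim)
        # One learnable mask-logit tensor per agent over the shared weight matrix.
        self.mask_logits = nn.Parameter(torch.zeros(n_agents, out_dim, in_dim))

    def forward(self, x: torch.Tensor, agent_id: int) -> torch.Tensor:
        mask = torch.sigmoid(self.mask_logits[agent_id])        # soft mask in (0, 1)
        return F.linear(x, self.shared.weight * mask, self.shared.bias)

    def mask_diversity_penalty(self) -> torch.Tensor:
        # Encourage masks of different agents to disagree (illustrative regulariser).
        masks = torch.sigmoid(self.mask_logits).flatten(1)       # (n_agents, out*in)
        sims = masks @ masks.t()                                  # pairwise similarities
        off_diag = sims - torch.diag(torch.diag(sims))
        return off_diag.sum()                                     # minimise to push masks apart
```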
arXiv Detail & Related papers (2024-10-11T05:22:54Z) - MoME: Mixture of Multimodal Experts for Cancer Survival Prediction [46.520971457396726]
Survival analysis, as a challenging task, requires integrating Whole Slide Images (WSIs) and genomic data for comprehensive decision-making.
Previous approaches utilize co-attention methods, which fuse features from both modalities only once after separate encoding.
We propose a Biased Progressive Encoding (BPE) paradigm, performing encoding and fusion simultaneously.
arXiv Detail & Related papers (2024-06-14T03:44:33Z) - HarmoDT: Harmony Multi-Task Decision Transformer for Offline Reinforcement Learning [72.25707314772254]
We introduce the Harmony Multi-Task Decision Transformer (HarmoDT), a novel solution designed to identify an optimal harmony subspace of parameters for each task.
The upper level of this framework is dedicated to learning a task-specific mask that delineates the harmony subspace, while the inner level focuses on updating parameters to enhance the overall performance of the unified policy.
arXiv Detail & Related papers (2024-05-28T11:41:41Z) - Controlling Behavioral Diversity in Multi-Agent Reinforcement Learning [8.905920197601173]
We introduce Diversity Control (DiCo), a method able to control diversity to an exact value of a given metric.
We show how DiCo can be employed as a novel paradigm to increase performance and sample efficiency in Multi-Agent Reinforcement Learning.
arXiv Detail & Related papers (2024-05-23T21:03:33Z) - Heterogeneous Multi-Agent Reinforcement Learning for Zero-Shot Scalable Collaboration [5.326588461041464]
Multi-agent reinforcement learning (MARL) is transforming fields like autonomous vehicle networks.
Flexibly updating MARL strategies for different roles as the number of agents scales remains a challenge for current MARL frameworks.
We propose a novel MARL framework named Scalable and Heterogeneous Proximal Policy Optimization (SHPPO).
We show that SHPPO exhibits superior performance in classic MARL environments such as the StarCraft Multi-Agent Challenge (SMAC) and Google Research Football (GRF).
arXiv Detail & Related papers (2024-04-05T03:02:57Z) - Adaptive parameter sharing for multi-agent reinforcement learning [15.716649118116514]
We propose a novel parameter sharing method inspired by research pertaining to the brain in biology.
It maps each type of agent to different regions within a shared network based on their identity, resulting in distinct subnetworks.
Our method can increase the diversity of strategies among different agents without additional training parameters.
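One way to read the summary above is that each agent identity deterministically selects a region (a subset of units) of a shared network, yielding distinct subnetworks with no extra trainable parameters. The sketch below illustrates that reading; the fixed random unit masks are an assumption, not the authors' mapping scheme.

```python
# Minimal sketch (an assumed reading of the summary, not the authors' method):
# each agent identity selects a fixed subset of units of one shared layer,
# giving distinct subnetworks without adding trainable parameters.
import torch
import torch.nn as nn


class RegionSharedLayer(nn.Module):
    def __init__(self, n_agents: int, in_dim: int, hidden_dim: int, keep_frac: float = 0.5):
        super().__init__()
        self.shared = nn.Linear(in_dim, hidden_dim)
        # Fixed, identity-derived unit masks (no gradients, no extra trained params).
        g = torch.Generator().manual_seed(0)
        masks = (torch.rand(n_agents, hidden_dim, generator=g) < keep_frac).float()
        self.register_buffer("masks", masks)

    def forward(self, x: torch.Tensor, agent_id: int) -> torch.Tensor:
        # Only the units assigned to this agent's region stay active.
        return torch.relu(self.shared(x)) * self.masks[agent_id]
```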
arXiv Detail & Related papers (2023-12-14T15:00:32Z) - AlberDICE: Addressing Out-Of-Distribution Joint Actions in Offline Multi-Agent RL via Alternating Stationary Distribution Correction Estimation [65.4532392602682]
One of the main challenges in offline Reinforcement Learning (RL) is the distribution shift that arises from the learned policy deviating from the data collection policy.
This is often addressed by avoiding out-of-distribution (OOD) actions during policy improvement as their presence can lead to substantial performance degradation.
We introduce AlberDICE, an offline MARL algorithm that performs centralized training of individual agents based on stationary distribution optimization.
arXiv Detail & Related papers (2023-11-03T18:56:48Z) - Source-free Domain Adaptation Requires Penalized Diversity [60.04618512479438]
Source-free domain adaptation (SFDA) was introduced to address knowledge transfer between different domains in the absence of source data.
In unsupervised SFDA, the diversity is limited to learning a single hypothesis on the source or learning multiple hypotheses with a shared feature extractor.
We propose a novel unsupervised SFDA algorithm that promotes representational diversity through the use of separate feature extractors.
arXiv Detail & Related papers (2023-04-06T00:20:19Z) - Parameter Sharing with Network Pruning for Scalable Multi-Agent Deep Reinforcement Learning [20.35644044703191]
We propose a simple method that adopts structured pruning for a deep neural network to increase the representational capacity of the joint policy without introducing additional parameters.
We evaluate the proposed method on several benchmark tasks, and numerical results show that the proposed method significantly outperforms other parameter-sharing methods.
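For illustration, structured pruning of a shared policy layer can be done with PyTorch's built-in pruning utility, as sketched below. Whether the method above prunes per agent, per layer, or with this criterion is not stated in the summary, so treat the snippet only as a demonstration of structured pruning itself.

```python
# Minimal sketch: structured pruning of a shared layer with PyTorch's built-in
# utility. The pruning granularity and criterion here are assumptions; only the
# general idea (structured pruning of a shared network) comes from the summary.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

shared_layer = nn.Linear(64, 64)
# Remove half of the output neurons (rows of the weight matrix) by L2 norm,
# yielding a sparser sub-structure of the shared network.
prune.ln_structured(shared_layer, name="weight", amount=0.5, n=2, dim=0)
print(shared_layer.weight.shape, (shared_layer.weight == 0).float().mean())
```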
arXiv Detail & Related papers (2023-03-02T02:17:14Z) - Policy Diagnosis via Measuring Role Diversity in Cooperative Multi-agent RL [107.58821842920393]
We quantify agents' behavior differences and relate them to policy performance via Role Diversity.
We find that the error bound in MARL can be decomposed into three parts that have a strong relation to the role diversity.
The decomposed factors can significantly impact policy optimization in three popular directions.
arXiv Detail & Related papers (2022-06-01T04:58:52Z) - Consistency and Diversity induced Human Motion Segmentation [231.36289425663702]
We propose a novel Consistency and Diversity induced human Motion Segmentation (CDMS) algorithm.
Our model factorizes the source and target data into distinct multi-layer feature spaces.
A multi-mutual learning strategy is carried out to reduce the domain gap between the source and target data.
arXiv Detail & Related papers (2022-02-10T06:23:56Z) - Permutation Invariant Policy Optimization for Mean-Field Multi-Agent Reinforcement Learning: A Principled Approach [128.62787284435007]
We propose the mean-field proximal policy optimization (MF-PPO) algorithm, at the core of which is a permutation-invariant actor-critic neural architecture.
We prove that MF-PPO attains the globally optimal policy at a sublinear rate of convergence.
In particular, we show that the inductive bias introduced by the permutation-invariant neural architecture enables MF-PPO to outperform existing competitors.
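A common way to obtain the permutation-invariant inductive bias mentioned above is to pool per-agent embeddings with a symmetric operation such as the mean. The sketch below shows a permutation-invariant critic in that spirit; it is an illustrative assumption, not the actual MF-PPO architecture.

```python
# Minimal sketch of a permutation-invariant critic: mean-pooling over per-agent
# embeddings makes the value estimate independent of agent ordering.
import torch
import torch.nn as nn


class PermutationInvariantCritic(nn.Module):
    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())   # per-agent encoder
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, agent_obs: torch.Tensor) -> torch.Tensor:
        # agent_obs: (batch, n_agents, obs_dim)
        pooled = self.phi(agent_obs).mean(dim=1)    # symmetric pooling over agents
        return self.rho(pooled)


critic = PermutationInvariantCritic(obs_dim=8)
obs = torch.randn(2, 5, 8)
perm = obs[:, torch.randperm(5)]
assert torch.allclose(critic(obs), critic(perm), atol=1e-6)  # same value under permutation
```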
arXiv Detail & Related papers (2021-05-18T04:35:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.