Q-Learning-Driven Adaptive Rewiring for Cooperative Control in Heterogeneous Networks
- URL: http://arxiv.org/abs/2509.01057v2
- Date: Wed, 03 Sep 2025 03:24:53 GMT
- Title: Q-Learning-Driven Adaptive Rewiring for Cooperative Control in Heterogeneous Networks
- Authors: Yi-Ning Weng, Hsuan-Wei Lee
- Abstract summary: We propose a Q-learning-based variant of adaptive rewiring that builds on mechanisms studied in the literature. We show that fully adaptive rewiring enhances cooperation levels through systematic exploration of favorable network configurations. Our results establish a new paradigm for understanding intelligence-driven cooperation pattern formation in complex adaptive systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cooperation emergence in multi-agent systems represents a fundamental statistical physics problem where microscopic learning rules drive macroscopic collective behavior transitions. We propose a Q-learning-based variant of adaptive rewiring that builds on mechanisms studied in the literature. This method combines temporal difference learning with network restructuring so that agents can optimize strategies and social connections based on interaction histories. Through neighbor-specific Q-learning, agents develop sophisticated partnership management strategies that enable cooperator cluster formation, creating spatial separation between cooperative and defective regions. Using power-law networks that reflect real-world heterogeneous connectivity patterns, we evaluate emergent behaviors under varying rewiring constraint levels, revealing distinct cooperation patterns across parameter space rather than sharp thermodynamic transitions. Our systematic analysis identifies three behavioral regimes: a permissive regime (low constraints) enabling rapid cooperative cluster formation, an intermediate regime with sensitive dependence on dilemma strength, and a patient regime (high constraints) where strategic accumulation gradually optimizes network structure. Simulation results show that while moderate constraints create transition-like zones that suppress cooperation, fully adaptive rewiring enhances cooperation levels through systematic exploration of favorable network configurations. Quantitative analysis reveals that increased rewiring frequency drives large-scale cluster formation with power-law size distributions. Our results establish a new paradigm for understanding intelligence-driven cooperation pattern formation in complex adaptive systems, revealing how machine learning serves as an alternative driving force for spontaneous organization in multi-agent networks.
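The mechanism described in the abstract can be sketched in a few lines. The following is a minimal illustration only: it assumes a two-action prisoner's dilemma payoff, ε-greedy neighbor-specific Q-values, and a threshold-triggered random rewiring rule on a ring lattice (the paper uses power-law networks); all names, payoffs, and parameters are assumptions, not the authors' implementation.

```python
import random

C, D = 0, 1                     # actions: cooperate / defect
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
PAYOFF = {(C, C): 1.0, (C, D): -0.5, (D, C): 1.5, (D, D): 0.0}  # weak dilemma

def make_ring(n, k=2):
    """Ring lattice as a simple stand-in for a power-law network."""
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            nbrs[i].add((i + d) % n)
            nbrs[(i + d) % n].add(i)
    return nbrs

def step(nbrs, q, rng, rewire_prob=0.2):
    """One round: every agent plays each neighbor, updates Q, maybe rewires."""
    n = len(nbrs)
    for i in list(nbrs):
        for j in list(nbrs[i]):
            qi = q.setdefault((i, j), [0.0, 0.0])  # neighbor-specific Q-values
            qj = q.setdefault((j, i), [0.0, 0.0])
            ai = rng.randrange(2) if rng.random() < EPS else qi.index(max(qi))
            aj = rng.randrange(2) if rng.random() < EPS else qj.index(max(qj))
            r = PAYOFF[(ai, aj)]
            qi[ai] += ALPHA * (r + GAMMA * max(qi) - qi[ai])  # TD(0) update
            # Illustrative rewiring rule: drop persistently bad partnerships
            # and reconnect to a random non-neighbor.
            if (max(qi) < 0 and rng.random() < rewire_prob
                    and len(nbrs[i]) > 1 and len(nbrs[j]) > 1):
                candidates = [c for c in range(n) if c != i and c not in nbrs[i]]
                if candidates:
                    nbrs[i].discard(j); nbrs[j].discard(i)
                    k2 = rng.choice(candidates)
                    nbrs[i].add(k2); nbrs[k2].add(i)

def cooperation_level(nbrs, q):
    """Fraction of directed (agent, neighbor) pairs whose greedy action is C."""
    pairs = [(i, j) for i in nbrs for j in nbrs[i]]
    coop = sum(1 for p in pairs if p in q and q[p][C] >= q[p][D])
    return coop / max(len(pairs), 1)

rng = random.Random(0)
nbrs, q = make_ring(30), {}
for _ in range(200):
    step(nbrs, q, rng)
print(round(cooperation_level(nbrs, q), 2))
```

The per-neighbor Q-table is the key design choice: it lets an agent keep a cooperative policy toward one partner while abandoning another, which is what allows cooperator clusters to spatially separate from defectors.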
Related papers
- Social World Model-Augmented Mechanism Design Policy Learning [58.739456918502704]
We introduce SWM-AP (Social World Model-Augmented Mechanism Design Policy Learning), which learns a social world model hierarchically to enhance mechanism design. We show that SWM-AP outperforms established model-based and model-free RL baselines in cumulative rewards and sample efficiency.
arXiv Detail & Related papers (2025-10-22T06:01:21Z)
- Emergence of hybrid computational dynamics through reinforcement learning [0.0]
We show that reinforcement learning and supervised learning drive neural networks toward fundamentally different computational solutions. We also show that RL sculpts functionally balanced neural populations through a powerful form of implicit regularization. Our results establish the learning algorithm as a primary determinant of emergent computation.
arXiv Detail & Related papers (2025-10-13T08:53:59Z)
- Deep Reinforcement Learning for Multi-Agent Coordination [8.250169938213558]
We propose a Stigmergic Multi-Agent Deep Reinforcement Learning (S-MADRL) framework that leverages virtual pheromones to model local and social interactions. We show that our framework achieves the most effective coordination of up to eight agents, where robots self-organize into asymmetric workload distributions. This emergent behavior, analogous to strategies observed in nature, demonstrates a scalable solution for decentralized multi-agent coordination in crowded environments.
arXiv Detail & Related papers (2025-10-04T00:47:20Z)
- Power Grid Control with Graph-Based Distributed Reinforcement Learning [60.49805771047161]
This work advances a graph-based distributed reinforcement learning framework for real-time, scalable grid management. A Graph Neural Network (GNN) is employed to encode the network's topological information within the single low-level agent's observation. Experiments on the Grid2Op simulation environment show the effectiveness of the approach.
arXiv Detail & Related papers (2025-09-02T22:17:25Z)
- Synchronization Dynamics of Heterogeneous, Collaborative Multi-Agent AI Systems [0.0]
We present a novel interdisciplinary framework that bridges synchronization theory and multi-agent AI systems. We adapt the Kuramoto model to describe the collective dynamics of heterogeneous AI agents engaged in complex task execution.
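The Kuramoto model this summary refers to is the standard phase-oscillator model, dθᵢ/dt = ωᵢ + (K/N) Σⱼ sin(θⱼ − θᵢ), with synchronization measured by the order parameter r = |N⁻¹ Σ e^{iθⱼ}|. A minimal sketch, assuming all-to-all coupling and identical natural frequencies (the paper's mapping from heterogeneous AI agents to oscillators is more elaborate):

```python
import cmath
import math
import random

def order_parameter(theta):
    """r = |mean of e^{i*theta}|: 0 = incoherent, 1 = fully synchronized."""
    return abs(sum(cmath.exp(1j * t) for t in theta) / len(theta))

def simulate(n=10, coupling=1.0, dt=0.05, steps=400, seed=1):
    """Euler-integrate the Kuramoto model and return the final order parameter."""
    rng = random.Random(seed)
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    omega = [0.0] * n  # identical natural frequencies: any K > 0 synchronizes
    for _ in range(steps):
        dtheta = [
            omega[i] + coupling / n * sum(math.sin(theta[j] - theta[i])
                                          for j in range(n))
            for i in range(n)
        ]
        theta = [(t + dt * d) % (2 * math.pi) for t, d in zip(theta, dtheta)]
    return order_parameter(theta)

print(simulate(coupling=0.0) < simulate(coupling=1.0))  # coupling drives sync
```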
arXiv Detail & Related papers (2025-08-17T10:16:41Z)
- Connecting the geometry and dynamics of many-body complex systems with message passing neural operators [1.8434042562191815]
We introduce a scalable AI framework, ROMA, for learning multiscale evolution operators of many-body complex systems. An attention mechanism is used to model multiscale interactions by connecting geometric representations of local subgraphs and dynamical operators. We demonstrate that the ROMA framework improves scalability and positive transfer between forecasting and effective dynamics tasks.
arXiv Detail & Related papers (2025-02-21T20:04:09Z)
- Evolving Neural Networks Reveal Emergent Collective Behavior from Minimal Agent Interactions [0.0]
We investigate how neural networks evolve to control agents' behavior in a dynamic environment.
Simpler behaviors, such as lane formation and laminar flow, are characterized by more linear network operations.
Specific environmental parameters, such as moderate noise, broader field of view, and lower agent density, promote the evolution of non-linear networks.
arXiv Detail & Related papers (2024-10-25T17:43:00Z)
- Navigating the swarm: Deep neural networks command emergent behaviours [2.7059353835118602]
We show that it is possible to generate coordinated structures in collective behavior with intended global patterns by fine-tuning an inter-agent interaction rule.
Our strategy employs deep neural networks, obeying the laws of dynamics, to find interaction rules that command desired structures.
Our findings pave the way for new applications in robotic swarm operations, active matter organisation, and for the uncovering of obscure interaction rules in biological systems.
arXiv Detail & Related papers (2024-07-16T02:46:11Z)
- Distributed Autonomous Swarm Formation for Dynamic Network Bridging [40.27919181139919]
We formulate the problem of dynamic network bridging as a novel Decentralized Partially Observable Markov Decision Process (Dec-POMDP).
We propose a Multi-Agent Reinforcement Learning (MARL) approach for the problem based on Graph Convolutional Reinforcement Learning (DGN).
The proposed method is evaluated in a simulated environment and compared to a centralized baseline showing promising results.
arXiv Detail & Related papers (2024-04-02T01:45:03Z)
- Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust Closed-Loop Control [63.310780486820796]
We show how a parameterization of recurrent connectivity influences robustness in closed-loop settings.
We find that closed-form continuous-time neural networks (CfCs) with fewer parameters can outperform their full-rank, fully-connected counterparts.
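The parameter savings behind the "fewer parameters" claim can be illustrated with a back-of-the-envelope count: a rank-r factorization W = UVᵀ replaces an n×n recurrent matrix with two n×r factors. This is the generic low-rank idea, not the CfC architecture itself; the sizes below are illustrative.

```python
def full_rank_params(n):
    """Dense recurrent matrix: n*n weights."""
    return n * n

def low_rank_params(n, r):
    """Factorized W = U @ V.T: two (n, r) factors, 2*n*r weights."""
    return 2 * n * r

n, r = 128, 4
print(full_rank_params(n), low_rank_params(n, r))  # prints: 16384 1024
```

For n = 128 and rank 4 the factorized recurrence uses 16x fewer recurrent weights, which is the kind of budget reduction that lets compact networks compete with fully connected ones.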
arXiv Detail & Related papers (2023-10-05T21:44:18Z)
- Safe Multi-agent Learning via Trapping Regions [89.24858306636816]
We apply the concept of trapping regions, known from qualitative theory of dynamical systems, to create safety sets in the joint strategy space for decentralized learning.
We propose a binary partitioning algorithm for verification that candidate sets form trapping regions in systems with known learning dynamics, and a sampling algorithm for scenarios where learning dynamics are not known.
arXiv Detail & Related papers (2023-02-27T14:47:52Z)
- A game-theoretic analysis of networked system control for common-pool resource management using multi-agent reinforcement learning [54.55119659523629]
Multi-agent reinforcement learning has recently shown great promise as an approach to networked system control.
Common-pool resources include arable land, fresh water, wetlands, wildlife, fish stock, forests and the atmosphere.
arXiv Detail & Related papers (2020-10-15T14:12:26Z)
- An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the group O(d).
This nested system of two flows provides stability and effectiveness of training and provably solves the gradient vanishing-explosion problem.
arXiv Detail & Related papers (2020-06-19T22:05:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.