Learning Decentralized Swarms Using Rotation Equivariant Graph Neural Networks
- URL: http://arxiv.org/abs/2502.17612v2
- Date: Wed, 26 Feb 2025 16:51:46 GMT
- Title: Learning Decentralized Swarms Using Rotation Equivariant Graph Neural Networks
- Authors: Taos Transue, Bao Wang
- Abstract summary: Decentralized controllers struggle to maintain flock cohesion. The graph neural network (GNN) architecture has emerged as an indispensable machine learning tool for developing decentralized controllers. We show that our symmetry-aware controller generalizes better than existing GNN controllers.
- Score: 11.194306044434502
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The orchestration of agents to optimize a collective objective without centralized control is challenging yet crucial for applications such as controlling autonomous fleets, and surveillance and reconnaissance using sensor networks. Decentralized controller design has been inspired by self-organization found in nature, with a prominent source of inspiration being flocking; however, decentralized controllers struggle to maintain flock cohesion. The graph neural network (GNN) architecture has emerged as an indispensable machine learning tool for developing decentralized controllers capable of maintaining flock cohesion, but they fail to exploit the symmetries present in flocking dynamics, hindering their generalizability. We enforce rotation equivariance and translation invariance symmetries in decentralized flocking GNN controllers and achieve comparable flocking control with 70% less training data and 75% fewer trainable weights than existing GNN controllers without these symmetries enforced. We also show that our symmetry-aware controller generalizes better than existing GNN controllers. Code and animations are available at http://github.com/Utah-Math-Data-Science/Equivariant-Decentralized-Controllers.
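The core idea of the abstract — act on relative positions so the controller is translation invariant, and compute outputs from rotation-invariant scalars times relative vectors so it is rotation equivariant — can be illustrated with a minimal numerical sketch. This is not the authors' implementation; the exponential weighting and the toy control rule below are illustrative assumptions.

```python
import numpy as np

def equivariant_control(pos, radius=1.0):
    """Toy decentralized rule: each agent accelerates toward its neighbors,
    weighted by a rotation-invariant function of the distance."""
    n = len(pos)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = pos[j] - pos[i]        # relative position: translation invariant
            d = np.linalg.norm(r)      # distance: rotation-invariant scalar
            w = np.exp(-d / radius)    # invariant weight (hypothetical choice)
            acc[i] += w * r            # output rotates with the inputs
    return acc

# Numerical check: rotating and translating every agent rotates the
# resulting accelerations by the same matrix (equivariance).
rng = np.random.default_rng(0)
pos = rng.normal(size=(5, 2))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([3.0, -1.0])

a1 = equivariant_control(pos)
a2 = equivariant_control(pos @ R.T + t)
assert np.allclose(a2, a1 @ R.T, atol=1e-8)
```

Because the symmetry is built into the architecture rather than learned from data, a network of this form never has to spend samples or weights rediscovering it — consistent with the paper's reported 70% / 75% savings.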
Related papers
- Communication-Control Codesign for Large-Scale Wireless Networked Control Systems [80.30532872347668]
Wireless Networked Control Systems (WNCSs) are essential to Industry 4.0, enabling flexible control in applications, such as drone swarms and autonomous robots.
We propose a practical WNCS model that captures correlated dynamics among multiple control loops with spatially distributed sensors and actuators sharing limited wireless resources over multi-state Markov block-fading channels.
We develop a Deep Reinforcement Learning (DRL) algorithm that efficiently handles the hybrid action space, captures communication-control correlations, and ensures robust training despite sparse cross-domain variables and floating control inputs.
arXiv Detail & Related papers (2024-10-15T06:28:21Z)
- A comparison of RL-based and PID controllers for 6-DOF swimming robots: hybrid underwater object tracking [8.362739554991073]
We present an exploration and assessment of employing a centralized deep Q-network (DQN) controller as a substitute for PID controllers.
Our primary focus centers on illustrating this transition with the specific case of underwater object tracking.
Our experiments, conducted within a Unity-based simulator, validate the effectiveness of a centralized RL agent over separated PID controllers.
arXiv Detail & Related papers (2024-01-29T23:14:15Z)
- Centralized and Decentralized Control in Modular Robots and Their Effect on Morphology [1.4502611532302039]
We study the effects of centralized and decentralized controllers on modular robot performance and morphologies.
A decentralized approach that was more independent of morphology size performed significantly better than the other approaches.
arXiv Detail & Related papers (2022-06-27T15:22:46Z)
- Deep Reinforcement Learning for Wireless Scheduling in Distributed Networked Control [37.10638636086814]
We consider a joint uplink and downlink scheduling problem of a fully distributed wireless control system (WNCS) with a limited number of frequency channels.
We develop a deep reinforcement learning (DRL) based framework for solving it.
To tackle the challenges of a large action space in DRL, we propose novel action space reduction and action embedding methods.
arXiv Detail & Related papers (2021-09-26T11:27:12Z)
- Scalable Perception-Action-Communication Loops with Convolutional and Graph Neural Networks [208.15591625749272]
We present a perception-action-communication loop design using Vision-based Graph Aggregation and Inference (VGAI).
Our framework is implemented by a cascade of a convolutional and a graph neural network (CNN / GNN), addressing agent-level visual perception and feature learning.
We demonstrate that VGAI yields performance comparable to or better than other decentralized controllers.
arXiv Detail & Related papers (2021-06-24T23:57:21Z)
- Communication Topology Co-Design in Graph Recurrent Neural Network Based Distributed Control [4.492630871726495]
We introduce a compact but expressive graph recurrent neural network (GRNN) parameterization of distributed controllers.
Our proposed parameterization enjoys a local and distributed architecture, similar to previous Graph Neural Network (GNN)-based parameterizations.
We show that our method allows for performance/communication density tradeoff curves to be efficiently approximated.
arXiv Detail & Related papers (2021-04-28T16:30:02Z)
- Consensus Control for Decentralized Deep Learning [72.50487751271069]
Decentralized training of deep learning models enables on-device learning over networks, as well as efficient scaling to large compute clusters.
We show in theory that when the training consensus distance is lower than a critical quantity, decentralized training converges as fast as the centralized counterpart.
Our empirical insights allow the principled design of better decentralized training schemes that mitigate the performance drop.
arXiv Detail & Related papers (2021-02-09T13:58:33Z)
- Decentralized Control with Graph Neural Networks [147.84766857793247]
We propose a novel framework using graph neural networks (GNNs) to learn decentralized controllers.
GNNs are well-suited for the task since they are naturally distributed architectures and exhibit good scalability and transferability properties.
The problems of flocking and multi-agent path planning are explored to illustrate the potential of GNNs in learning decentralized controllers.
arXiv Detail & Related papers (2020-12-29T18:59:14Z)
- Learning a Contact-Adaptive Controller for Robust, Efficient Legged Locomotion [95.1825179206694]
We present a framework that synthesizes robust controllers for a quadruped robot.
A high-level controller learns to choose from a set of primitives in response to changes in the environment.
A low-level controller uses an established control method to robustly execute the primitives.
arXiv Detail & Related papers (2020-09-21T16:49:26Z)
- Graph Neural Networks for Decentralized Controllers [171.6642679604005]
Dynamical systems comprised of autonomous agents arise in many relevant problems such as robotics, smart grids, or smart cities.
Optimal centralized controllers are readily available but face limitations in terms of scalability and practical implementation.
We propose a framework using graph neural networks (GNNs) to learn decentralized controllers from data.
arXiv Detail & Related papers (2020-03-23T13:51:18Z)
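The last two entries describe the common pattern behind GNN-based decentralized controllers: each agent aggregates information over K hops of a communication graph and applies shared weights, so only local exchanges are needed. The sketch below is a hypothetical illustration of that aggregation, not code from either paper; the feature sizes, radius, and random weights are assumptions.

```python
import numpy as np

def comm_adjacency(pos, radius):
    """Communication graph: agents within `radius` exchange messages.
    Row-normalized so repeated aggregation stays numerically stable."""
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    A = ((d < radius) & (d > 0)).astype(float)
    deg = A.sum(axis=1, keepdims=True)
    return A / np.maximum(deg, 1.0)

def gnn_controller(X, A, weights):
    """Graph convolution u = sum_k (A^k X) W_k: the k-th term uses only
    information an agent can receive from its k-hop neighborhood."""
    Z, u = X, 0.0
    for W in weights:
        u = u + Z @ W
        Z = A @ Z                      # one more hop of local communication
    return u

rng = np.random.default_rng(1)
pos = rng.uniform(0, 4, size=(8, 2))                         # 8 agents in 2-D
X = rng.normal(size=(8, 4))                                  # per-agent features
weights = [rng.normal(size=(4, 2)) * 0.1 for _ in range(3)]  # K = 3 hops
u = gnn_controller(X, comm_adjacency(pos, radius=2.0), weights)
print(u.shape)  # (8, 2): one control action per agent
```

Because the weights `W_k` are shared across agents and the aggregation depends only on graph structure, the same trained controller transfers to swarms of different sizes — the scalability and transferability properties the entries above highlight.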
This list is automatically generated from the titles and abstracts of the papers on this site.