PEnGUiN: Partially Equivariant Graph NeUral Networks for Sample Efficient MARL
- URL: http://arxiv.org/abs/2503.15615v1
- Date: Wed, 19 Mar 2025 18:01:14 GMT
- Title: PEnGUiN: Partially Equivariant Graph NeUral Networks for Sample Efficient MARL
- Authors: Joshua McClellan, Greyson Brothers, Furong Huang, Pratap Tokekar
- Abstract summary: This paper introduces Partially Equivariant Graph NeUral Networks (PEnGUiN), a novel architecture specifically designed to address these challenges. We theoretically demonstrate that PEnGUiN is capable of learning both fully equivariant (EGNN) and non-equivariant (GNN) representations within a unified framework. Our results consistently demonstrate that PEnGUiN outperforms both EGNNs and standard GNNs in asymmetric environments.
- Score: 35.649751253539534
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Equivariant Graph Neural Networks (EGNNs) have emerged as a promising approach in Multi-Agent Reinforcement Learning (MARL), leveraging symmetry guarantees to greatly improve sample efficiency and generalization. However, real-world environments often exhibit inherent asymmetries arising from factors such as external forces, measurement inaccuracies, or intrinsic system biases. This paper introduces \textit{Partially Equivariant Graph NeUral Networks (PEnGUiN)}, a novel architecture specifically designed to address these challenges. We formally identify and categorize various types of partial equivariance relevant to MARL, including subgroup equivariance, feature-wise equivariance, regional equivariance, and approximate equivariance. We theoretically demonstrate that PEnGUiN is capable of learning both fully equivariant (EGNN) and non-equivariant (GNN) representations within a unified framework. Through extensive experiments on a range of MARL problems incorporating various asymmetries, we empirically validate the efficacy of PEnGUiN. Our results consistently demonstrate that PEnGUiN outperforms both EGNNs and standard GNNs in asymmetric environments, highlighting its potential to improve the robustness and applicability of graph-based MARL algorithms in real-world scenarios.
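The abstract positions PEnGUiN as a single architecture that can learn both fully equivariant (EGNN) and non-equivariant (GNN) representations. As a minimal sketch of how such a unified layer could work, the PyTorch snippet below blends an EGNN-style message branch (which sees only E(n)-invariant inputs such as squared distances) with a branch that also sees raw coordinates, via a learned scalar gate. The gate, layer widths, and the exact non-equivariant inputs are illustrative assumptions, not the authors' published implementation.

```python
# Hedged sketch: a message-passing layer that interpolates between an
# EGNN-style equivariant update and a plain (non-equivariant) GNN update.
# The blending gate and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class PartiallyEquivariantLayer(nn.Module):
    def __init__(self, h_dim: int, hidden: int = 64):
        super().__init__()
        # Equivariant branch: messages see only E(n)-invariant inputs.
        self.phi_e_inv = nn.Sequential(
            nn.Linear(2 * h_dim + 1, hidden), nn.SiLU(), nn.Linear(hidden, hidden))
        # Non-equivariant branch: messages also see raw 3D coordinates.
        self.phi_e_raw = nn.Sequential(
            nn.Linear(2 * h_dim + 6, hidden), nn.SiLU(), nn.Linear(hidden, hidden))
        self.phi_x = nn.Sequential(
            nn.Linear(hidden, hidden), nn.SiLU(), nn.Linear(hidden, 1))
        self.phi_h = nn.Sequential(
            nn.Linear(h_dim + hidden, hidden), nn.SiLU(), nn.Linear(hidden, h_dim))
        # Learned gate in [0, 1]: 1 -> fully equivariant, 0 -> plain GNN.
        self.gate_logit = nn.Parameter(torch.zeros(1))

    def forward(self, h: torch.Tensor, x: torch.Tensor):
        # h: (N, h_dim) node features; x: (N, 3) coordinates; complete graph.
        N = h.size(0)
        hi = h.unsqueeze(1).expand(N, N, -1)
        hj = h.unsqueeze(0).expand(N, N, -1)
        diff = x.unsqueeze(1) - x.unsqueeze(0)        # (N, N, 3), equivariant
        d2 = (diff ** 2).sum(-1, keepdim=True)        # (N, N, 1), invariant
        m_inv = self.phi_e_inv(torch.cat([hi, hj, d2], dim=-1))
        xi = x.unsqueeze(1).expand(N, N, -1)
        xj = x.unsqueeze(0).expand(N, N, -1)
        m_raw = self.phi_e_raw(torch.cat([hi, hj, xi, xj], dim=-1))
        lam = torch.sigmoid(self.gate_logit)          # gate between the regimes
        m = lam * m_inv + (1 - lam) * m_raw           # blended edge messages
        x_new = x + (diff * self.phi_x(m)).mean(dim=1)   # coordinate update
        h_new = self.phi_h(torch.cat([h, m.sum(dim=1)], dim=-1))
        return h_new, x_new
```

With the gate at 1 the layer reduces to an EGNN-style equivariant update; at 0 it becomes a plain GNN on absolute coordinates. Per the paper's taxonomy, the gate could also be restricted to a subgroup, to particular features, or to a region of space rather than being a single global scalar.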
Related papers
- Boosting Sample Efficiency and Generalization in Multi-agent Reinforcement Learning via Equivariance [34.322201578399394]
Multi-Agent Reinforcement Learning (MARL) struggles with sample inefficiency and poor generalization.
We present Exploration-enhanced Equivariant Graph Neural Networks (E2GN2).
E2GN2 demonstrates a significant improvement in sample efficiency, greater final reward convergence, and a 2x-5x gain over standard GNNs in our tests.
arXiv Detail & Related papers (2024-10-03T15:25:37Z)
- Variational Partial Group Convolutions for Input-Aware Partial Equivariance of Rotations and Color-Shifts [21.397064770689795]
Group Equivariant CNNs (G-CNNs) have shown promising efficacy in various tasks, owing to their ability to capture hierarchical features in an equivariant manner.
We propose a novel approach, Variational Partial G-CNN (VP G-CNN), to capture varying levels of partial equivariance specific to each data instance.
We demonstrate the effectiveness of VP G-CNN on both toy and real-world datasets, including MNIST67-180, CIFAR10, ColorMNIST, and Flowers102.
arXiv Detail & Related papers (2024-07-05T05:52:51Z)
- Relaxing Continuous Constraints of Equivariant Graph Neural Networks for Physical Dynamics Learning [39.25135680793105]
We propose a general Discrete Equivariant Graph Neural Network (DEGNN) that guarantees equivariance to a given discrete point group.
Specifically, we show that such discrete equivariant message passing could be constructed by transforming geometric features into permutation-invariant embeddings.
We show that DEGNN is data efficient, learning with less data, and can generalize across scenarios such as unobserved orientation.
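The construction hinted at above rests on a standard orbit argument: acting on the input with any element of a discrete group only permutes the orbit {g·x : g in G}, so permutation-invariant pooling over the orbit is G-invariant. A minimal sketch, assuming the planar rotation group C4 as an illustrative discrete point group (not necessarily the groups used in the paper):

```python
# Hedged sketch of discrete-group orbit pooling. Rotating the input by any
# element of C4 merely reorders the orbit below, so symmetric pooling over
# the group axis yields a C4-invariant embedding.
import math
import torch

def c4_group() -> torch.Tensor:
    """The four planar rotations of the cyclic group C4, stacked as (4, 2, 2)."""
    mats = [torch.tensor([[math.cos(k * math.pi / 2), -math.sin(k * math.pi / 2)],
                          [math.sin(k * math.pi / 2),  math.cos(k * math.pi / 2)]])
            for k in range(4)]
    return torch.stack(mats)

def orbit_invariant_embedding(x: torch.Tensor, encoder) -> torch.Tensor:
    """x: (N, 2) coordinates -> embedding invariant to C4 rotations of x."""
    G = c4_group()
    orbit = torch.einsum('gij,nj->gni', G, x)   # (4, N, 2): all rotated copies
    return encoder(orbit).mean(dim=0)           # symmetric pool over the group
```

For example, `orbit_invariant_embedding(x, torch.nn.Linear(2, 16))` returns per-node features that are unchanged under 90-degree rotations of `x`.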
arXiv Detail & Related papers (2024-06-24T03:37:51Z)
- A Probabilistic Approach to Learning the Degree of Equivariance in Steerable CNNs [5.141137421503899]
Steerable convolutional neural networks (SCNNs) enhance task performance by modelling geometric symmetries.
Yet, unknown or varying symmetries can lead to overconstrained weights and decreased performance.
This paper introduces a probabilistic method to learn the degree of equivariance in SCNNs.
arXiv Detail & Related papers (2024-06-06T10:45:19Z)
- Graph Invariant Learning with Subgraph Co-mixup for Out-Of-Distribution Generalization [51.913685334368104]
We propose a novel graph invariant learning method based on a co-mixup strategy over invariant and variant patterns.
Our method significantly outperforms the state of the art under various distribution shifts.
arXiv Detail & Related papers (2023-12-18T07:26:56Z)
- Distribution-Informed Adaptation for kNN Graph Construction [11.63039933401604]
We propose the Distribution-Informed adaptive kNN Graph (DaNNG), which combines adaptive kNN with distribution-aware graph construction.
DaNNG significantly improves performance on ambiguous samples, and hence enhances overall accuracy and generalization capability.
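The summary names the ingredients (adaptive kNN plus distribution awareness) but not the algorithm, so the sketch below is only an illustrative heuristic in that spirit: each node's k is scaled by how sparse its neighborhood is relative to the dataset's typical density. The scaling rule and the `k_base`/`k_max` parameters are assumptions, not DaNNG itself.

```python
# Hedged sketch of a distribution-informed adaptive kNN graph (illustrative
# heuristic only): sparser nodes get more neighbors, denser nodes fewer.
import numpy as np

def adaptive_knn_edges(X: np.ndarray, k_base: int = 5, k_max: int = 15):
    """X: (N, d) samples -> list of directed edges (i, j)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise dists
    np.fill_diagonal(D, np.inf)
    # Local density proxy: mean distance to the k_base nearest neighbors.
    local = np.sort(D, axis=1)[:, :k_base].mean(axis=1)
    global_scale = np.median(local)
    edges = []
    for i in range(len(X)):
        # Scale k by node i's sparsity relative to the typical density.
        k_i = int(np.clip(round(k_base * local[i] / global_scale), 1, k_max))
        edges.extend((i, int(j)) for j in np.argsort(D[i])[:k_i])
    return edges
```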
arXiv Detail & Related papers (2023-08-04T16:14:43Z)
- SO(2) and O(2) Equivariance in Image Recognition with Bessel-Convolutional Neural Networks [63.24965775030674]
This work presents the development of Bessel-convolutional neural networks (B-CNNs).
B-CNNs exploit a particular decomposition based on Bessel functions to modify the key operation between images and filters.
A study is carried out to assess the performance of B-CNNs compared to other methods.
arXiv Detail & Related papers (2023-04-18T18:06:35Z)
- Deep Neural Networks with Efficient Guaranteed Invariances [77.99182201815763]
We address the problem of improving the performance and in particular the sample complexity of deep neural networks.
Group-equivariant convolutions are a popular approach to obtain equivariant representations.
We propose a multi-stream architecture, where each stream is invariant to a different transformation.
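As a toy illustration of that multi-stream idea (not the paper's architecture), the sketch below gives each stream an input representation invariant to a different transformation, rotation for one and scale for the other, and concatenates the results; the choice of transformations and stream widths is assumed for illustration.

```python
# Hedged sketch: two streams, each invariant to a different transformation.
import torch
import torch.nn as nn

class TwoStreamInvariantNet(nn.Module):
    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.rot_stream = nn.Linear(1, d_out)       # sees only rotation-invariant norms
        self.scale_stream = nn.Linear(d_in, d_out)  # sees only scale-normalized input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, d_in) coordinates.
        norms = x.norm(dim=-1, keepdim=True)        # invariant to rotations of x
        unit = x / norms.clamp_min(1e-8)            # invariant to positive rescaling
        return torch.cat([self.rot_stream(norms), self.scale_stream(unit)], dim=-1)
```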
arXiv Detail & Related papers (2023-03-02T20:44:45Z)
- Frame Averaging for Invariant and Equivariant Network Design [50.87023773850824]
We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types.
We show that FA-based models have maximal expressive power in a broad setting.
We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs.
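FA symmetrizes a backbone phi by averaging it over a frame F(x) of group elements: for invariance, <phi>(x) = |F(x)|^{-1} sum_{g in F(x)} phi(g^{-1} x), with an extra g acting on each output for equivariance. A minimal sketch of the invariant case, assuming the frame is handed in as a stack of orthogonal matrices; for a finite group the whole group can serve as the frame, whereas the paper's E(3) frames (built from a PCA of the input) keep the sum small for continuous groups.

```python
# Hedged sketch of Frame Averaging for invariance: average the backbone phi
# over a supplied frame of orthogonal matrices.
import torch

def frame_average_invariant(phi, x: torch.Tensor, frame: torch.Tensor) -> torch.Tensor:
    """x: (N, d) points as rows; frame: (F, d, d) orthogonal matrices."""
    # x @ g applies g^{-1} to each row, since g is orthogonal (g^{-1} = g^T).
    outs = [phi(x @ g) for g in frame]
    return torch.stack(outs).mean(dim=0)   # averaging symmetrizes phi
```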
arXiv Detail & Related papers (2021-10-07T11:05:23Z)
- Equivariant Neural Network for Factor Graphs [83.26543234955855]
We propose two inference models: Factor-Equivariant Neural Belief Propagation (FE-NBP) and Factor-Equivariant Graph Neural Networks (FE-GNN).
FE-NBP achieves state-of-the-art performance on small datasets while FE-GNN achieves state-of-the-art performance on large datasets.
arXiv Detail & Related papers (2021-09-29T06:54:04Z)
- Post-mortem on a deep learning contest: a Simpson's paradox and the complementary roles of scale metrics versus shape metrics [61.49826776409194]
We analyze a corpus of models made publicly available for a contest to predict the generalization accuracy of neural network (NN) models.
We identify what amounts to a Simpson's paradox, where "scale" metrics perform well overall but poorly on subpartitions of the data.
We present two novel shape metrics, one data-independent, and the other data-dependent, which can predict trends in the test accuracy of a series of NNs.
arXiv Detail & Related papers (2021-06-01T19:19:49Z)