Anti-Symmetric DGN: a stable architecture for Deep Graph Networks
- URL: http://arxiv.org/abs/2210.09789v1
- Date: Tue, 18 Oct 2022 12:04:55 GMT
- Title: Anti-Symmetric DGN: a stable architecture for Deep Graph Networks
- Authors: Alessio Gravina, Davide Bacciu, Claudio Gallicchio
- Abstract summary: We present Anti-Symmetric Deep Graph Networks (A-DGNs), a framework for stable and non-dissipative DGN design.
A-DGN yields improved performance and enables effective learning even when dozens of layers are used.
- Score: 12.71306369339218
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Graph Networks (DGNs) currently dominate the research landscape of
learning from graphs, due to their efficiency and ability to implement an
adaptive message-passing scheme between the nodes. However, DGNs are typically
limited in their ability to propagate and preserve long-term dependencies
between nodes, i.e., they suffer from the over-squashing phenomenon. This reduces
their effectiveness, since predictive problems may require capturing
interactions at different, and possibly large, radii in order to be effectively
solved. In this work, we present Anti-Symmetric Deep Graph Networks (A-DGNs), a
framework for stable and non-dissipative DGN design, conceived through the lens
of ordinary differential equations. We give theoretical proof that our method
is stable and non-dissipative, leading to two key results: long-range
information between nodes is preserved, and no gradient vanishing or explosion
occurs in training. We empirically validate the proposed approach on several
graph benchmarks, showing that A-DGN yields improved performance and enables
effective learning even when dozens of layers are used.
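To make the construction concrete, below is a minimal sketch of a single A-DGN-style update in PyTorch. It follows the anti-symmetric weight constraint (W - W^T) applied inside a forward Euler step of the node-state ODE described in the abstract; the sum aggregation, layer names, and hyperparameter values are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class ADGNLayer(nn.Module):
    """Sketch of one anti-symmetric DGN update (a forward Euler step).

    Constraining the recurrent weight to W - W^T keeps the Jacobian's
    eigenvalues on the imaginary axis, which is what makes the underlying
    node-state ODE stable and non-dissipative; gamma * I is a small
    diagonal shift that stabilizes the discretization.
    """

    def __init__(self, dim: int, epsilon: float = 0.1, gamma: float = 0.1):
        super().__init__()
        self.W = nn.Parameter(torch.empty(dim, dim))
        nn.init.xavier_uniform_(self.W)
        self.aggregate = nn.Linear(dim, dim)  # weights for the neighbour term
        self.epsilon = epsilon  # Euler step size
        self.gamma = gamma      # strength of the stabilizing diagonal term

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, dim) node states; adj: (num_nodes, num_nodes) adjacency
        antisym = self.W - self.W.t() - self.gamma * torch.eye(x.size(1), device=x.device)
        neigh = self.aggregate(adj @ x)  # simple sum aggregation over neighbours
        return x + self.epsilon * torch.tanh(x @ antisym.t() + neigh)
```

Stacking L such layers amounts to integrating the ODE for time L * epsilon, which is why, per the abstract, depth can grow to dozens of layers without gradients vanishing or exploding.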
Related papers
- Robust Node Representation Learning via Graph Variational Diffusion Networks [7.335425547621226]
In recent years, compelling evidence has revealed that GNN-based node representation learning can be substantially degraded by perturbations in the graph structure.
To learn robust node representation in the presence of perturbations, various works have been proposed to safeguard GNNs.
We propose the Graph Variational Diffusion Network (GVDN), a new node encoder that effectively manipulates Gaussian noise to safeguard robustness on perturbed graphs.
arXiv Detail & Related papers (2023-12-18T03:18:53Z)
- T-GAE: Transferable Graph Autoencoder for Network Alignment [79.89704126746204]
T-GAE is a graph autoencoder framework that leverages the transferability and stability of GNNs to achieve efficient network alignment without retraining.
Our experiments demonstrate that T-GAE outperforms the state-of-the-art optimization method and the best GNN approach by up to 38.7% and 50.8%, respectively.
arXiv Detail & Related papers (2023-10-05T02:58:29Z)
- BOURNE: Bootstrapped Self-supervised Learning Framework for Unified Graph Anomaly Detection [50.26074811655596]
We propose a novel unified graph anomaly detection framework based on bootstrapped self-supervised learning, named BOURNE.
By swapping the context embeddings between nodes and edges, we enable the mutual detection of node and edge anomalies.
BOURNE can eliminate the need for negative sampling, thereby enhancing its efficiency in handling large graphs.
arXiv Detail & Related papers (2023-07-28T00:44:57Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method on various tasks, including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
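For intuition on the NodeFormer entry above, here is a plain (non-kernelized) Gumbel-Softmax relaxation of the kind it builds on: it turns discrete neighbour sampling into a differentiable operation. The kernelization that gives NodeFormer its linear all-pair complexity is omitted, so this sketch is illustrative only.

```python
import torch

def gumbel_softmax_edges(scores: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Differentiable relaxation of discrete neighbour sampling.

    scores: (num_nodes, num_nodes) unnormalized pairwise logits.
    Returns a soft adjacency whose rows approach one-hot samples as
    tau -> 0 while staying differentiable w.r.t. the scores. NodeFormer
    additionally kernelizes this operator so the full all-pair score
    matrix never has to be materialized; that step is omitted here.
    """
    gumbel = -torch.log(-torch.log(torch.rand_like(scores)))  # Gumbel(0, 1) noise
    return torch.softmax((scores + gumbel) / tau, dim=-1)
```

An equivalent built-in, torch.nn.functional.gumbel_softmax, can replace the hand-rolled noise if preferred.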
- Understanding Oversquashing in GNNs through the Lens of Effective Resistance [9.640594614636047]
We develop an algorithm to identify edges to be added to an input graph to minimize the total effective resistance, thereby alleviating oversquashing.
We provide empirical evidence of the effectiveness of our effective-resistance-based rewiring strategies for improving the performance of GNNs.
arXiv Detail & Related papers (2023-02-14T05:16:12Z)
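The quantity minimized by the rewiring paper above has a standard closed form via the Laplacian pseudoinverse. A small sketch follows (dense adjacency assumed; the paper's greedy edge-selection algorithm itself is not reproduced here):

```python
import torch

def effective_resistances(adj: torch.Tensor) -> torch.Tensor:
    """Pairwise effective resistances from the Laplacian pseudoinverse.

    For a connected graph with Laplacian L = D - A and pseudoinverse L+,
    R(u, v) = L+[u, u] + L+[v, v] - 2 * L+[u, v].
    High-resistance node pairs are the ones between which messages get
    over-squashed, making them natural candidates for added edges.
    """
    deg = torch.diag(adj.sum(dim=1))                # degree matrix D
    lap_pinv = torch.linalg.pinv(deg - adj)         # Moore-Penrose pseudoinverse of L
    d = torch.diagonal(lap_pinv)
    return d.unsqueeze(0) + d.unsqueeze(1) - 2 * lap_pinv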
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA).
We propose CHAGNN, a general defense framework against GIA based on cooperative homophilous augmentation of the graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- MGNNI: Multiscale Graph Neural Networks with Implicit Layers [53.75421430520501]
Implicit graph neural networks (GNNs) have been proposed to capture long-range dependencies in underlying graphs.
We introduce and justify two weaknesses of implicit GNNs: the constrained expressiveness due to their limited effective range for capturing long-range dependencies, and their lack of ability to capture multiscale information on graphs at multiple resolutions.
We propose a multiscale graph neural network with implicit layers (MGNNI) which is able to model multiscale structures on graphs and has an expanded effective range for capturing long-range dependencies.
arXiv Detail & Related papers (2022-10-15T18:18:55Z)
- EIGNN: Efficient Infinite-Depth Graph Neural Networks [51.97361378423152]
Graph neural networks (GNNs) are widely used for modelling graph-structured data in numerous applications.
Motivated by the limited ability of finite-depth GNNs to capture long-range dependencies, we propose a GNN model with infinite depth, which we call Efficient Infinite-Depth Graph Neural Networks (EIGNN).
We show that EIGNN has a better ability to capture long-range dependencies than recent baselines, and consistently achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-02-22T08:16:58Z)
- Implicit Graph Neural Networks [46.0589136729616]
We propose a graph learning framework called Implicit Graph Neural Networks (IGNN).
IGNNs consistently capture long-range dependencies and outperform state-of-the-art GNN models.
arXiv Detail & Related papers (2020-09-14T06:04:55Z)
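Since the last three entries (MGNNI, EIGNN, IGNN) all revolve around equilibrium-style implicit GNNs, a minimal fixed-point sketch may help. The tanh nonlinearity, shapes, and iteration budget here are assumptions; the actual models differ in how they guarantee and accelerate convergence.

```python
import torch

def implicit_gnn_forward(x, adj, W, U, max_iter: int = 50, tol: float = 1e-5):
    """Fixed-point sketch of an implicit GNN layer.

    Iterates Z <- tanh(A Z W^T + X U^T) until convergence, so the output
    is the equilibrium of an "infinitely deep" weight-tied GNN rather
    than a fixed stack of layers; this is what lets implicit models
    capture long-range dependencies. Convergence assumes the map is a
    contraction (e.g. the norm of W is suitably bounded), as IGNN requires.
    """
    z = torch.zeros_like(x)                       # x: (num_nodes, dim)
    for _ in range(max_iter):
        z_next = torch.tanh(adj @ z @ W.t() + x @ U.t())
        if (z_next - z).norm() < tol:             # reached the equilibrium
            break
        z = z_next
    return z
```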
This list is automatically generated from the titles and abstracts of the papers on this site.