Gradient Gating for Deep Multi-Rate Learning on Graphs
- URL: http://arxiv.org/abs/2210.00513v1
- Date: Sun, 2 Oct 2022 13:19:48 GMT
- Title: Gradient Gating for Deep Multi-Rate Learning on Graphs
- Authors: T. Konstantin Rusch, Benjamin P. Chamberlain, Michael W. Mahoney,
Michael M. Bronstein, Siddhartha Mishra
- Abstract summary: We present Gradient Gating (G$^2$), a novel framework for improving the performance of Graph Neural Networks (GNNs).
Our framework is based on gating the output of GNN layers with a mechanism for multi-rate flow of message passing information across nodes of the underlying graph.
- Score: 62.25886489571097
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Gradient Gating (G$^2$), a novel framework for improving the
performance of Graph Neural Networks (GNNs). Our framework is based on gating
the output of GNN layers with a mechanism for multi-rate flow of message
passing information across nodes of the underlying graph. Local gradients are
harnessed to further modulate message passing updates. Our framework flexibly
allows one to use any basic GNN layer as a wrapper around which the multi-rate
gradient gating mechanism is built. We rigorously prove that G$^2$ alleviates
the oversmoothing problem and allows the design of deep GNNs. Empirical results
are presented to demonstrate that the proposed framework achieves
state-of-the-art performance on a variety of graph learning tasks, including on
large-scale heterophilic graphs.
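The gating mechanism the abstract describes can be sketched compactly. Below is a minimal, self-contained PyTorch rendering of one G$^2$ update as we read it: an auxiliary GNN layer produces gate features, their graph gradient (channel-wise differences across edges, raised to a power `p` and summed over neighbors) sets a per-node, per-channel update rate, and the wrapped GNN layer's output is blended with the previous features at that rate. The dense adjacency, the `DenseGCNLayer` stand-in, and the sigmoid/tanh/ReLU choices are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DenseGCNLayer(nn.Module):
    """Mean-aggregation GCN layer over a dense adjacency: an illustrative
    stand-in for the 'any basic GNN layer' that G^2 wraps."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, x, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)  # avoid divide-by-zero
        return self.lin(adj @ x / deg)                     # mean over neighbors

def g2_update(x, adj, gnn, gnn_gate, p=2.0):
    """One Gradient Gating step (our reading of the abstract):
    X^n = (1 - tau) * X^{n-1} + tau * sigma(GNN(X^{n-1})),
    with the multi-rate gate tau derived from the graph gradient of an
    auxiliary GNN layer's output, per node and per channel."""
    tau_hat = torch.sigmoid(gnn_gate(x, adj))                        # (N, d) gate features
    diff = (tau_hat.unsqueeze(1) - tau_hat.unsqueeze(0)).abs() ** p  # (N, N, d) edge-wise differences
    tau = torch.tanh((adj.unsqueeze(-1) * diff).sum(dim=1))          # (N, d) learned update rates
    return (1.0 - tau) * x + tau * torch.relu(gnn(x, adj))           # gated residual update

# Toy usage: a 4-node path graph, 8-dimensional features, 16 (weight-tied) updates.
adj = torch.tensor([[0, 1, 0, 0], [1, 0, 1, 0],
                    [0, 1, 0, 1], [0, 0, 1, 0]], dtype=torch.float)
x = torch.randn(4, 8)
gnn, gnn_gate = DenseGCNLayer(8), DenseGCNLayer(8)
for _ in range(16):
    x = g2_update(x, adj, gnn, gnn_gate)
```

Because `tau` shrinks where neighboring gate features agree, channels that have already equilibrated across the graph stop updating, which is the intuition behind the oversmoothing guarantee.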
Related papers
- Graph Structure Prompt Learning: A Novel Methodology to Improve Performance of Graph Neural Networks [13.655670509818144]
We propose a novel Graph structure Prompt Learning method (GPL) to enhance the training of Graph Neural Networks (GNNs).
GPL employs task-independent graph structure losses to encourage GNNs to learn intrinsic graph characteristics while simultaneously solving downstream tasks.
In experiments on eleven real-world datasets, GNNs trained with GPL significantly improve over their original performance on node classification, graph classification, and edge prediction tasks.
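The task-independent structure loss described above amounts to a multi-task objective; a hypothetical sketch follows, where the adjacency-reconstruction form of `structure_loss` and the weight `lam` are our illustrative assumptions, not GPL's actual losses.

```python
import torch
import torch.nn.functional as F

def gpl_style_objective(task_loss, z, adj, lam=0.1):
    """Hypothetical combined objective: downstream task loss plus a
    task-independent structure loss that asks node embeddings z to
    reconstruct the graph's adjacency."""
    edge_prob = torch.sigmoid(z @ z.t())                    # (N, N) predicted edge probabilities
    structure_loss = F.binary_cross_entropy(edge_prob, adj) # adjacency as float {0,1} targets
    return task_loss + lam * structure_loss                 # solve both objectives jointly
```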
arXiv Detail & Related papers (2024-07-16T03:59:18Z) - DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z) - Simple yet Effective Gradient-Free Graph Convolutional Networks [20.448409424929604]
Linearized Graph Neural Networks (GNNs) have attracted great attention in recent years for graph representation learning.
In this paper, we relate over-smoothing with the vanishing gradient phenomenon and craft a gradient-free training framework.
Our methods achieve better and more stable performance on node classification tasks at varying depths, while requiring much less training time.
arXiv Detail & Related papers (2023-02-01T11:00:24Z) - EEGNN: Edge Enhanced Graph Neural Networks [1.0246596695310175]
We propose a new explanation for this performance deterioration: mis-simplification.
We show that such simplification can reduce the potential of message-passing layers to capture the structural information of graphs.
EEGNN uses the structural information extracted from the proposed Dirichlet mixture Poisson graph model to improve the performance of various deep message-passing GNNs.
arXiv Detail & Related papers (2022-08-12T15:24:55Z) - Deep Manifold Learning with Graph Mining [80.84145791017968]
We propose a novel graph deep model with a non-gradient decision layer for graph mining.
The proposed model has achieved state-of-the-art performance compared to the current models.
arXiv Detail & Related papers (2022-07-18T04:34:08Z) - Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
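A minimal sketch of such an anchor-graph agreement objective is given below; the NT-Xent form (cross-entropy over cosine similarities) and the `temperature` value are illustrative assumptions rather than the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def anchor_contrastive_loss(z_learned, z_anchor, temperature=0.5):
    """Hypothetical NT-Xent-style agreement loss: each node's embeddings under
    the learned graph and the anchor graph form a positive pair; all other
    nodes act as negatives."""
    z1 = F.normalize(z_learned, dim=1)                   # unit-norm embeddings (learned view)
    z2 = F.normalize(z_anchor, dim=1)                    # unit-norm embeddings (anchor view)
    logits = z1 @ z2.t() / temperature                   # (N, N) cross-view cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)              # positive pairs sit on the diagonal
```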
arXiv Detail & Related papers (2022-01-17T11:57:29Z) - ACE-HGNN: Adaptive Curvature Exploration Hyperbolic Graph Neural Network [72.16255675586089]
We propose an Adaptive Curvature Exploration Hyperbolic Graph Neural Network named ACE-HGNN to adaptively learn the optimal curvature according to the input graph and downstream tasks.
Experiments on multiple real-world graph datasets demonstrate a significant and consistent improvement in model quality, together with competitive performance and good generalization ability.
arXiv Detail & Related papers (2021-10-15T07:18:57Z) - Graph Neural Networks for Graph Drawing [17.983238300054527]
We propose a novel framework for the development of Graph Neural Drawers (GND).
GNDs rely on neural computation for constructing efficient and complex maps.
We prove that this mechanism can be guided by loss functions computed by means of Feedforward Neural Networks.
arXiv Detail & Related papers (2021-09-21T09:58:02Z) - Implicit Graph Neural Networks [46.0589136729616]
We propose a graph learning framework called Implicit Graph Neural Networks (IGNN).
IGNNs consistently capture long-range dependencies and outperform state-of-the-art GNN models.
arXiv Detail & Related papers (2020-09-14T06:04:55Z) - XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.