DropMessage: Unifying Random Dropping for Graph Neural Networks
- URL: http://arxiv.org/abs/2204.10037v1
- Date: Thu, 21 Apr 2022 11:42:49 GMT
- Title: DropMessage: Unifying Random Dropping for Graph Neural Networks
- Authors: Taoran Fang, Zhiqing Xiao, Chunping Wang, Jiarong Xu, Xuan Yang, Yang
Yang
- Abstract summary: Graph Neural Networks (GNNs) are powerful tools for graph representation learning.
Previous works indicate that these problems can be alleviated by random dropping methods, which integrate noise into models by randomly masking parts of the input.
We propose a novel random dropping method called DropMessage, which performs dropping operations directly on the message matrix and can be applied to any message-passing GNNs.
- Score: 9.134120615545866
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) are powerful tools for graph representation
learning. Despite their rapid development, GNNs also face some challenges,
such as over-fitting, over-smoothing, and non-robustness. Previous works
indicate that these problems can be alleviated by random dropping methods,
which integrate noises into models by randomly masking parts of the input.
However, some open problems of random dropping on GNNs remain to be solved.
First, it is challenging to find a universal method that is suitable for all
cases, given the divergence of different datasets and models. Second, the
random noise introduced into GNNs causes incomplete coverage of parameters
and an unstable training process. In this paper, we propose a novel random
dropping method called DropMessage, which performs dropping operations directly
on the message matrix and can be applied to any message-passing GNNs.
Furthermore, we elaborate on the superiority of DropMessage: it stabilizes the
training process by reducing sample variance, and it preserves information
diversity from the perspective of information theory, which makes it a
theoretical upper bound of other methods. Also, we unify existing random dropping methods into
our framework and analyze their effects on GNNs. To evaluate our proposed
method, we conduct experiments on multiple tasks over five public datasets and
two industrial datasets with various backbone models. The experimental results
show that DropMessage offers both effectiveness and generalization.
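To make the core idea concrete, below is a minimal sketch of message-level dropping in a PyTorch Geometric message-passing layer. It is an illustration under assumptions, not the authors' released code: the class name DropMessageConv, the drop_rate parameter, and the mean aggregation are placeholder choices for this example.

```python
# Minimal sketch of DropMessage-style dropping (illustrative, not the authors' code).
# Assumes PyTorch and PyTorch Geometric are installed; DropMessageConv and
# drop_rate are hypothetical names chosen for this example.
import torch
import torch.nn.functional as F
from torch_geometric.nn import MessagePassing


class DropMessageConv(MessagePassing):
    def __init__(self, in_channels, out_channels, drop_rate=0.5):
        super().__init__(aggr="mean")
        self.lin = torch.nn.Linear(in_channels, out_channels)
        self.drop_rate = drop_rate

    def forward(self, x, edge_index):
        x = self.lin(x)
        return self.propagate(edge_index, x=x)

    def message(self, x_j):
        # x_j is the message matrix with one row per directed edge.
        # Dropping is applied element-wise on these messages during training,
        # rather than on node features (Dropout), edges (DropEdge), or whole
        # nodes (DropNode).
        return F.dropout(x_j, p=self.drop_rate, training=self.training)
```

Viewed this way, feature Dropout, DropEdge, and DropNode can be read as masking particular row or column patterns of the same message matrix, which is how the paper unifies existing random dropping methods under one framework.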
Related papers
- FlexiDrop: Theoretical Insights and Practical Advances in Random Dropout Method on GNNs [4.52430575477004]
We propose a novel random dropout method for Graph Neural Networks (GNNs) called FlexiDrop.
We show that our method enables adaptive adjustment of the dropout rate and theoretically balances the trade-off between model complexity and generalization ability.
arXiv Detail & Related papers (2024-05-30T12:48:44Z)
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
The generalization ability of existing GNNs degrades when there are distribution shifts between testing and training graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- Neural Graph Revealers [2.2721854258621064]
We propose Neural Graph Revealers (NGRs) to efficiently merge sparse graph recovery methods with Probabilistic Graphical Models.
NGRs view the neural networks as a 'glass box', or more specifically as a multitask learning framework.
We show experimental results for sparse graph recovery and probabilistic inference on data from Gaussian graphical models and on a multimodal infant mortality dataset from the Centers for Disease Control and Prevention.
arXiv Detail & Related papers (2023-02-27T08:40:45Z)
- Invertible Neural Networks for Graph Prediction [22.140275054568985]
In this work, we address conditional generation using deep invertible neural networks.
We adopt an end-to-end training approach since our objective is to address prediction and generation in the forward and backward processes at once.
arXiv Detail & Related papers (2022-06-02T17:28:33Z)
- Walk for Learning: A Random Walk Approach for Federated Learning from Heterogeneous Data [17.978941229970886]
We focus on Federated Learning (FL) as a canonical application.
One of the main challenges of FL is the communication bottleneck between the nodes and the parameter server.
We present an adaptive random walk learning algorithm.
arXiv Detail & Related papers (2022-06-01T19:53:24Z)
- Discovering Invariant Rationales for Graph Neural Networks [104.61908788639052]
Intrinsic interpretability of graph neural networks (GNNs) is to find a small subset of the input graph's features, the rationale, which guides the model prediction.
We propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs.
arXiv Detail & Related papers (2022-01-30T16:43:40Z)
- Distributionally Robust Semi-Supervised Learning Over Graphs [68.29280230284712]
Semi-supervised learning (SSL) over graph-structured data emerges in many network science applications.
To efficiently manage learning over graphs, variants of graph neural networks (GNNs) have been developed recently.
Despite their success in practice, most existing methods are unable to handle graphs with uncertain nodal attributes.
Challenges also arise due to distributional uncertainties associated with data acquired by noisy measurements.
A distributionally robust learning framework is developed, where the objective is to train models that exhibit quantifiable robustness against perturbations.
arXiv Detail & Related papers (2021-10-20T14:23:54Z)
- Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training data [52.771780951404565]
Shift-Robust GNN (SR-GNN) is designed to account for distributional differences between biased training data and the graph's true inference distribution.
We show that SR-GNN outperforms other GNN baselines in accuracy, eliminating at least 40% of the negative effects introduced by biased training data.
arXiv Detail & Related papers (2021-08-02T18:00:38Z)
- Very Deep Graph Neural Networks Via Noise Regularisation [57.450532911995516]
Graph Neural Networks (GNNs) perform learned message passing over an input graph.
We train a deep GNN with up to 100 message passing steps and achieve several state-of-the-art results.
arXiv Detail & Related papers (2021-06-15T08:50:10Z)
- Advanced Dropout: A Model-free Methodology for Bayesian Dropout Optimization [62.8384110757689]
Overfitting is ubiquitous in real-world applications of deep neural networks (DNNs).
The advanced dropout technique applies a model-free and easily implemented distribution with a parametric prior, and adaptively adjusts the dropout rate.
We evaluate the effectiveness of the advanced dropout against nine dropout techniques on seven computer vision datasets.
arXiv Detail & Related papers (2020-10-11T13:19:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.