SSFG: Stochastically Scaling Features and Gradients for Regularizing
Graph Convolution Networks
- URL: http://arxiv.org/abs/2102.10338v1
- Date: Sat, 20 Feb 2021 12:59:48 GMT
- Title: SSFG: Stochastically Scaling Features and Gradients for Regularizing
Graph Convolution Networks
- Authors: Haimin Zhang, Min Xu
- Abstract summary: Repeatedly applying graph convolutions can cause the oversmoothing issue.
We present a regularization method to address this issue.
Our method effectively improves the overall performance of the baseline graph networks.
- Score: 7.075802972628797
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph convolutional networks have been successfully applied in various
graph-based tasks. In a typical graph convolutional layer, node features are
computed by aggregating neighborhood information. Repeatedly applying graph
convolutions can cause the oversmoothing issue, i.e., node features converge to
similar values. This is one of the major causes of overfitting in graph
learning: the model fits the training data well but does not generalize
well to test data. In this paper, we present a stochastic
regularization method to address this issue. In our method, we stochastically
scale features and gradients (SSFG) by a factor sampled from a probability
distribution in the training procedure. We show that applying stochastic
scaling at the feature level is complementary to that at the gradient level in
improving the overall performance. When used together with ReLU, our method can
be seen as a stochastic ReLU. We experimentally validate our SSFG
regularization method on seven benchmark datasets for different graph-based
tasks. Extensive experimental results demonstrate that our method effectively
improves the overall performance of the baseline graph networks.
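
Since the abstract describes multiplying features on the forward pass and gradients on the backward pass by factors drawn from a probability distribution, the idea can be sketched as a custom autograd function. This is a minimal sketch, not the paper's implementation: the mean-one Gaussian sampling, the `sigma` hyperparameter, and the `ssfg` helper name below are all illustrative assumptions, as the abstract does not specify the distribution.

```python
import torch

class SSFGFunction(torch.autograd.Function):
    """Sketch of SSFG: scale features by a random factor on the forward
    pass and gradients by an independent random factor on the backward
    pass. The mean-one Gaussian below is an assumed choice of
    distribution, not necessarily the paper's."""

    @staticmethod
    def forward(ctx, x, sigma):
        ctx.sigma = sigma
        # One scaling factor per call, centered at 1 so the expected
        # output equals the unscaled features; clamped to stay non-negative.
        s = torch.empty(1, device=x.device).normal_(1.0, sigma).clamp_min(0.0)
        return x * s

    @staticmethod
    def backward(ctx, grad_out):
        # An independent factor rescales the gradient flowing back,
        # rather than reusing the forward factor.
        t = torch.empty(1, device=grad_out.device).normal_(1.0, ctx.sigma).clamp_min(0.0)
        return grad_out * t, None  # no gradient w.r.t. sigma


def ssfg(x, sigma=0.1, training=True):
    """Apply SSFG during training only; identity at evaluation time."""
    return SSFGFunction.apply(x, sigma) if training else x
```

Placed directly after a ReLU in a graph convolutional layer, e.g. `h = ssfg(torch.relu(conv(h, edge_index)))`, the combined operation randomly rescales the non-negative activations, which matches the abstract's "stochastic ReLU" reading.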
Related papers
- Graph Out-of-Distribution Generalization with Controllable Data Augmentation [51.17476258673232]
Graph Neural Networks (GNNs) have demonstrated extraordinary performance in classifying graph properties.
Because of selection bias between training and testing data, distribution deviation is widespread.
We propose OOD calibration to measure the distribution deviation of virtual samples.
arXiv Detail & Related papers (2023-08-16T13:10:27Z)
- Addressing Heterophily in Node Classification with Graph Echo State Networks [11.52174067809364]
We address the challenges of heterophilic graphs with the Graph Echo State Network (GESN) for node classification.
GESN is a reservoir computing model for graphs, in which node embeddings are computed by an untrained message-passing function.
Our experiments show that reservoir models achieve better or comparable accuracy with respect to most fully trained deep models.
arXiv Detail & Related papers (2023-05-14T19:42:31Z)
- From Spectral Graph Convolutions to Large Scale Graph Convolutional Networks [0.0]
Graph Convolutional Networks (GCNs) are a powerful concept that has been successfully applied to a wide variety of tasks.
We study the theory that paved the way to the definition of GCNs, including related parts of classical graph theory.
arXiv Detail & Related papers (2022-07-12T16:57:08Z)
- Similarity-aware Positive Instance Sampling for Graph Contrastive Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pairwise similarity measurements.
In addition, we develop an adaptive node-level pre-training method that dynamically masks nodes to distribute them evenly in the graph.
arXiv Detail & Related papers (2022-06-23T20:12:51Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Effective Eigendecomposition based Graph Adaptation for Heterophilic Networks [0.5309004257911242]
We present an eigendecomposition-based approach and propose EigenNetwork models that improve the performance of GNNs on heterophilic graphs.
Our approach achieves up to an 11% improvement over state-of-the-art methods on heterophilic graphs.
arXiv Detail & Related papers (2021-07-28T12:14:07Z)
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training; a minimal sketch of such a step appears after this list.
FLAG is a general-purpose approach for graph data that works across node classification, link prediction, and graph classification tasks.
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
- Graph Ordering: Towards the Optimal by Learning [69.72656588714155]
Graph representation learning has achieved remarkable success in many graph-based applications, such as node classification, prediction, and community detection.
However, some kinds of graph applications, such as graph compression and edge partition, are very hard to reduce to graph representation learning tasks.
In this paper, we propose to attack the graph ordering problem behind such applications with a novel learning approach.
arXiv Detail & Related papers (2020-01-18T09:14:16Z)
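
The FLAG entry above describes augmenting node features with gradient-based adversarial perturbations grown over several ascent steps. The following is a minimal sketch of one such training step under stated assumptions, not the official FLAG implementation: the name `flag_style_step`, the simplified `model(x)` call (real GNNs also take an adjacency or `edge_index` argument), and the default hyperparameters are all illustrative.

```python
import torch

def flag_style_step(model, optimizer, x, y, loss_fn, step_size=1e-3, m=3):
    """One training step that perturbs input features x adversarially
    over m gradient-ascent steps while accumulating parameter gradients."""
    model.train()
    optimizer.zero_grad()
    # Start from a small random perturbation of the node features.
    delta = torch.zeros_like(x).uniform_(-step_size, step_size)
    delta.requires_grad_(True)
    for _ in range(m):
        loss = loss_fn(model(x + delta), y) / m
        loss.backward()  # also accumulates parameter grads ("free" ascent)
        with torch.no_grad():
            # Ascend on the perturbation with a signed gradient step.
            delta.add_(step_size * delta.grad.sign())
            delta.grad.zero_()
    optimizer.step()  # descend on the accumulated parameter gradients
    return float(loss)
```

Dividing each loss by `m` makes the accumulated parameter gradient an average over the perturbed copies, so the model update costs roughly one ordinary backward pass per ascent step and no extra clean pass.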