Partial Graph Reasoning for Neural Network Regularization
- URL: http://arxiv.org/abs/2106.01805v1
- Date: Thu, 3 Jun 2021 12:57:01 GMT
- Title: Partial Graph Reasoning for Neural Network Regularization
- Authors: Tiange Xiang, Chaoyi Zhang, Yang Song, Siqi Liu, Hongliang Yuan,
Weidong Cai
- Abstract summary: Dropout, as a commonly used regularization technique, disables neuron activations during network optimization.
We present DropGraph, which learns a regularization function by constructing a stand-alone graph from the backbone features.
This add-on graph regularizes the network during training and can be completely skipped during inference.
- Score: 26.793648333908905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Regularizers help deep neural networks prevent feature co-adaptation.
Dropout, as a commonly used regularization technique, stochastically disables
neuron activations during network optimization. However, such complete feature
disposal can affect the feature representation and network understanding.
Toward better descriptions of latent representations, we present DropGraph,
which learns a regularization function by constructing a stand-alone graph from
the backbone features. DropGraph first samples stochastic spatial feature
vectors and then incorporates graph reasoning methods to generate feature map
distortions. This add-on graph regularizes the network during training and can
be completely skipped during inference. We provide intuitions on the linkage
between graph reasoning and Dropout, with further discussion of how the partial
graph reasoning method reduces feature correlations. To this end, we
extensively study the modeling of graph vertex dependencies and the utilization
of the graph for distorting backbone feature maps. DropGraph was validated on
four tasks with a total of 7 different datasets. The experimental results show
that our method outperforms other state-of-the-art regularizers while leaving
the base model structure unmodified during inference.
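To make the training-only distortion concrete, below is a minimal PyTorch sketch of a DropGraph-style module. The vertex sampling, affinity, and update choices, and names such as DropGraphSketch, are our own illustrative assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn as nn

class DropGraphSketch(nn.Module):
    """Illustrative DropGraph-style regularizer: sample spatial feature
    vectors as graph vertices, reason over them, and add the result back
    as a feature-map distortion. Active in training only; identity at
    inference, leaving the base model unmodified."""

    def __init__(self, channels: int, num_vertices: int = 8):
        super().__init__()
        self.num_vertices = num_vertices
        self.adj_proj = nn.Linear(channels, channels, bias=False)  # affinity projection
        self.update = nn.Linear(channels, channels)                # vertex update

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:        # the graph branch is skipped at inference
            return x
        b, c, h, w = x.shape
        flat = x.view(b, c, h * w)
        # Stochastically sample spatial feature vectors to act as vertices.
        idx = torch.randperm(h * w, device=x.device)[: min(self.num_vertices, h * w)]
        verts = flat[:, :, idx].transpose(1, 2)                    # (B, V, C)
        # Dense pairwise affinities among the sampled vertices.
        aff = torch.softmax(verts @ self.adj_proj(verts).transpose(1, 2), dim=-1)
        distort = self.update(aff @ verts)                         # graph-reasoned distortion
        # Scatter the distortion back onto the sampled spatial positions.
        out = flat.clone()
        out[:, :, idx] = out[:, :, idx] + distort.transpose(1, 2)
        return out.view(b, c, h, w)
```

Wrapping a backbone stage with such a module adds the graph branch only while `module.training` is true, which mirrors the "completely skipped during inference" property claimed in the abstract.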
Related papers
- Saliency-Aware Regularized Graph Neural Network [39.82009838086267]
We propose the Saliency-Aware Regularized Graph Neural Network (SAR-GNN) for graph classification.
We first estimate the global node saliency by measuring the semantic similarity between the compact graph representation and node features.
Then the learned saliency distribution is leveraged to regularize the neighborhood aggregation of the backbone.
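A minimal sketch of the saliency idea as described in this summary, assuming mean pooling as the compact graph representation and cosine similarity as the semantic measure; the function name and all details are illustrative, not the paper's implementation.

```python
import torch

def saliency_weighted_aggregation(h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
    """Sketch of a SAR-GNN-style step: estimate global node saliency from
    the similarity between each node feature and a compact (here mean-pooled)
    graph representation, then reweight neighborhood aggregation with it."""
    g = h.mean(dim=0, keepdim=True)                            # compact graph representation (1, C)
    sim = torch.cosine_similarity(h, g.expand_as(h), dim=-1)   # per-node semantic similarity (N,)
    saliency = torch.softmax(sim, dim=0)                       # normalized saliency distribution
    return adj @ (saliency.unsqueeze(-1) * h)                  # saliency-weighted aggregation (N, C)
```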
arXiv Detail & Related papers (2024-01-01T13:44:16Z)
- Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
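The paper's operations are spectrally motivated; as a hedged illustration of what a bank of candidate transformations looks like in general, here are two common graph augmentations (edge dropping and feature masking) in networkx. Both operations and all names are generic stand-ins, not the paper's proposed set.

```python
import random
import networkx as nx

def drop_edges(g: nx.Graph, p: float = 0.1) -> nx.Graph:
    """Randomly remove a fraction of edges to create an augmented view."""
    view = g.copy()
    view.remove_edges_from([e for e in g.edges if random.random() < p])
    return view

def mask_features(g: nx.Graph, p: float = 0.1, key: str = "x") -> nx.Graph:
    """Randomly zero out node feature vectors stored under `key`."""
    view = g.copy()
    for n in view.nodes:
        if key in view.nodes[n] and random.random() < p:
            view.nodes[n][key] = [0.0] * len(view.nodes[n][key])
    return view

# A bank of candidate transformations to sample from when building
# the two views of a graph contrastive objective.
AUGMENTATION_BANK = [drop_edges, mask_features]
```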
arXiv Detail & Related papers (2023-02-06T16:26:29Z)
- Rethinking Explaining Graph Neural Networks via Non-parametric Subgraph Matching [68.35685422301613]
We propose a novel non-parametric subgraph matching framework, dubbed MatchExplainer, to explore explanatory subgraphs.
It couples the target graph with other counterpart instances and identifies the most crucial joint substructure by minimizing a node correspondence-based distance.
Experiments on synthetic and real-world datasets show the effectiveness of MatchExplainer, which outperforms all state-of-the-art parametric baselines by significant margins.
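A rough sketch of the non-parametric matching idea, assuming node feature distance as the correspondence cost; the greedy strategy and the name greedy_node_matching are our simplifications of the paper's subgraph matching.

```python
import torch

def greedy_node_matching(x_tgt: torch.Tensor, x_ref: torch.Tensor):
    """Pair each target-graph node with its nearest counterpart node by
    feature distance; the lowest-cost pairs hint at the most crucial
    joint substructure shared by the two graphs."""
    dist = torch.cdist(x_tgt, x_ref)         # (N_tgt, N_ref) pairwise distances
    pairs = [(i, int(dist[i].argmin()), float(dist[i].min()))
             for i in range(x_tgt.size(0))]
    return sorted(pairs, key=lambda t: t[2])  # closest correspondences first
```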
arXiv Detail & Related papers (2023-01-07T05:14:45Z)
- Node Copying: A Random Graph Model for Effective Graph Sampling [35.957719744856696]
We introduce the node copying model for constructing a distribution over graphs.
We show the usefulness of the copying model in three tasks.
We employ our proposed model to mitigate the effect of adversarial attacks on the graph topology.
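A minimal sketch of one draw from a node-copying distribution, assuming each node copies the connections of a uniformly chosen node with probability zeta; the actual model's choice of which node to copy is richer than this.

```python
import random
import networkx as nx

def node_copying_sample(g: nx.Graph, zeta: float = 0.5) -> nx.Graph:
    """Draw one graph from a node-copying distribution centered at g:
    with probability zeta, a node replaces its edges with a copy of
    another node's connections."""
    sampled = g.copy()
    nodes = list(g.nodes)
    for v in nodes:
        if random.random() < zeta:
            u = random.choice(nodes)  # node whose connections are copied
            sampled.remove_edges_from(list(sampled.edges(v)))
            sampled.add_edges_from((v, w) for w in g.neighbors(u) if w != v)
    return sampled
```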
arXiv Detail & Related papers (2022-08-04T04:04:49Z)
- Similarity-aware Positive Instance Sampling for Graph Contrastive Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pair-wise similarity measurements.
In addition, we develop an adaptive node-level pre-training method that dynamically masks nodes so that they are distributed evenly across the graph.
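A hedged sketch of the positive-selection step, assuming graph-level embeddings and cosine similarity in place of the paper's domain-specific measurements; names and details are illustrative.

```python
import torch

def select_positive_instances(emb: torch.Tensor, anchor: int, k: int = 5):
    """Rank existing training graphs by pairwise similarity to the anchor
    graph embedding and return the indices of the top-k as positives."""
    sim = torch.cosine_similarity(emb, emb[anchor].unsqueeze(0), dim=-1)
    sim[anchor] = float("-inf")                # never select the anchor itself
    return torch.topk(sim, k).indices.tolist()
```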
arXiv Detail & Related papers (2022-06-23T20:12:51Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
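As a loose illustration of unrolling truncated proximal gradient iterations into a parameterized network, here is a sketch with learned step sizes and soft-thresholding; the simple quadratic fidelity term stands in for the paper's convolutional-mixture model, and all names are assumptions.

```python
import torch
import torch.nn as nn

class UnrolledProxGrad(nn.Module):
    """Each layer is one truncated proximal-gradient iteration: a gradient
    step on a simple data-fidelity term, then a soft-threshold (proximal)
    step that promotes sparse latent edge estimates."""

    def __init__(self, num_layers: int = 5):
        super().__init__()
        self.step = nn.Parameter(torch.full((num_layers,), 0.1))     # learned step sizes
        self.thresh = nn.Parameter(torch.full((num_layers,), 0.01))  # learned thresholds
        self.num_layers = num_layers

    def forward(self, observed: torch.Tensor) -> torch.Tensor:
        latent = torch.zeros_like(observed)
        for k in range(self.num_layers):
            latent = latent - self.step[k] * (latent - observed)  # gradient step
            latent = torch.sign(latent) * torch.clamp(
                latent.abs() - self.thresh[k], min=0)             # proximal shrinkage
        return latent
```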
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Intrusion-Free Graph Mixup [33.07540841212878]
We present a simple yet effective regularization technique to improve the generalization of Graph Neural Networks (GNNs).
We leverage recent advances in the Mixup regularizer for vision and text, where random sample pairs and their labels are interpolated to create synthetic samples for training.
Our method can effectively regularize the graph classification learning, resulting in superior predictive accuracy over popular graph augmentation baselines.
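A minimal sketch of Mixup interpolation at the graph-representation level, which sidesteps mixing raw topologies; the representation-level choice and names are our assumptions, not necessarily the paper's construction.

```python
import torch

def mixup_graph_pair(h1, h2, y1, y2, alpha: float = 0.2):
    """Interpolate a random pair of graph-level representations and their
    (one-hot) labels to synthesize one training sample."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * h1 + (1 - lam) * h2, lam * y1 + (1 - lam) * y2
```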
arXiv Detail & Related papers (2021-10-18T14:16:00Z)
- Line Graph Neural Networks for Link Prediction [71.00689542259052]
We consider the graph link prediction task, which is a classic graph analytical problem with many real-world applications.
In the prevailing formalism, a link prediction problem is converted to a graph classification task.
We propose to seek a radically different path by making use of line graphs from graph theory.
In particular, each node in a line graph corresponds to a unique edge in the original graph. Therefore, link prediction problems in the original graph can be equivalently solved as a node classification problem in its corresponding line graph, instead of a graph classification task.
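The edge-to-node correspondence is directly available in networkx; the minimal example below shows it (the paper's GNN on top of the line graph is not reproduced). To score a candidate link, one would first add it to the graph so that it appears as a line-graph node.

```python
import networkx as nx

g = nx.karate_club_graph()
lg = nx.line_graph(g)  # each node of lg is an edge (u, v) of g

# Link prediction on g becomes node classification on lg: a line-graph
# node's neighbors are exactly the edges sharing an endpoint with it.
print(g.number_of_edges(), lg.number_of_nodes())  # equal by construction
```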
arXiv Detail & Related papers (2020-10-20T05:54:31Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.