Training Robust Graph Neural Networks with Topology Adaptive Edge Dropping
- URL: http://arxiv.org/abs/2106.02892v1
- Date: Sat, 5 Jun 2021 13:20:36 GMT
- Title: Training Robust Graph Neural Networks with Topology Adaptive Edge Dropping
- Authors: Zhan Gao, Subhrajit Bhattacharya, Leiming Zhang, Rick S. Blum,
Alejandro Ribeiro, Brian M. Sadler
- Abstract summary: Graph neural networks (GNNs) are processing architectures that exploit graph structural information to model representations from network data.
Despite their success, GNNs suffer from sub-optimal generalization performance given limited training data.
This paper proposes Topology Adaptive Edge Dropping to improve generalization performance and learn robust GNN models.
- Score: 116.26579152942162
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) are processing architectures that exploit graph
structural information to model representations from network data. Despite
their success, GNNs suffer from sub-optimal generalization performance given
limited training data, referred to as over-fitting. This paper proposes the
Topology Adaptive Edge Dropping (TADropEdge) method, an adaptive data
augmentation technique that improves generalization performance and learns
robust GNN models. We start by analyzing how random edge dropping increases
data diversity during training, and show that i.i.d. edge dropping does not
account for graph structural information and can produce noisy augmented data
that degrades performance. To overcome this issue, we treat graph
connectivity as the key property that captures graph topology. TADropEdge
incorporates this factor into random edge dropping so that the edge-dropped
subgraphs maintain a topology similar to that of the underlying graph,
yielding more effective data augmentation. In particular, TADropEdge first
leverages the graph spectrum to assign each edge a weight that represents its
criticality for maintaining graph connectivity. It then normalizes the edge
weights and drops edges adaptively based on these normalized weights. Besides
improving generalization performance, TADropEdge reduces variance for
efficient training and can be applied as a generic module on top of different
GNN models. Extensive experiments on real-life and synthetic datasets
corroborate the theory and verify the effectiveness of the proposed method.
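The abstract describes a two-step recipe: weight each edge by its criticality for graph connectivity using the graph spectrum, then drop edges adaptively based on the normalized weights. Below is a minimal Python sketch of that recipe. The Fiedler-vector weighting, the function name, and the drop_rate parameter are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def topology_adaptive_edge_drop(adj, drop_rate=0.2, rng=None):
    """Drop edges with probability decreasing in their criticality for
    connectivity (illustrative sketch, not the paper's code).

    `adj` is a symmetric 0/1 adjacency matrix. Criticality is
    approximated by the squared difference of the Laplacian's Fiedler
    vector across each edge: edges that bridge the two sides of the
    sparsest cut get large weights and are kept.
    """
    if rng is None:
        rng = np.random.default_rng()
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj        # combinatorial Laplacian
    _, vecs = np.linalg.eigh(lap)               # ascending eigenvalues
    fiedler = vecs[:, 1]                        # algebraic-connectivity eigenvector
    rows, cols = np.nonzero(np.triu(adj, k=1))  # undirected edge list
    crit = (fiedler[rows] - fiedler[cols]) ** 2
    crit = crit / (crit.max() + 1e-12)          # normalized weights in [0, 1]
    # Critical edges (crit near 1) are almost never dropped; redundant
    # edges (crit near 0) are dropped at roughly drop_rate.
    keep = rng.random(rows.size) >= drop_rate * (1.0 - crit)
    out = np.zeros_like(adj)
    out[rows[keep], cols[keep]] = 1.0
    out[cols[keep], rows[keep]] = 1.0
    return out
```

Because highly critical edges are almost never dropped, the sampled subgraphs tend to stay connected and topologically close to the input graph, which is the stated goal of the method.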
Related papers
- ADEdgeDrop: Adversarial Edge Dropping for Robust Graph Neural Networks [53.41164429486268]
Graph Neural Networks (GNNs) have exhibited a powerful ability to gather graph-structured information from neighboring nodes.
The performance of GNNs is limited by poor generalization and fragile robustness caused by noisy and redundant graph data.
We propose a novel adversarial edge-dropping method (ADEdgeDrop) that leverages an adversarial edge predictor to guide the removal of edges.
arXiv Detail & Related papers (2024-03-14T08:31:39Z)
- Implicit Graph Neural Diffusion Networks: Convergence, Generalization, and Over-Smoothing [7.984586585987328]
Implicit Graph Neural Networks (GNNs) have achieved significant success in addressing graph learning problems.
We introduce a geometric framework for designing implicit graph diffusion layers based on a parameterized graph Laplacian operator.
We show how implicit GNN layers can be viewed as the fixed-point equation of a Dirichlet energy minimization problem.
arXiv Detail & Related papers (2023-08-07T05:22:33Z)
- SoftEdge: Regularizing Graph Classification with Random Soft Edges [18.165965620873745]
Graph data augmentation plays a vital role in regularizing Graph Neural Networks (GNNs).
Simple edge and node manipulations can create graphs that are identical or indistinguishable to message-passing GNNs yet carry conflicting labels.
We propose SoftEdge, which assigns random weights to a portion of the edges of a given graph to construct dynamic neighborhoods over the graph (sketched below).
arXiv Detail & Related papers (2022-04-21T20:12:36Z)
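As a rough illustration of the SoftEdge idea summarized above, the following sketch assigns random soft weights to a randomly chosen portion of the edges of a dense adjacency matrix. The function name and the portion parameter are assumptions for illustration, not the authors' code.

```python
import numpy as np

def soft_edge(adj, portion=0.2, rng=None):
    """Assign random weights in [0, 1) to a random portion of the edges
    of an undirected graph (illustrative sketch of the SoftEdge idea).
    """
    if rng is None:
        rng = np.random.default_rng()
    adj = np.asarray(adj, dtype=float).copy()
    rows, cols = np.nonzero(np.triu(adj, k=1))   # undirected edge list
    n_soft = int(portion * rows.size)
    picked = rng.choice(rows.size, size=n_soft, replace=False)
    weights = rng.random(n_soft)                 # soft weights in [0, 1)
    adj[rows[picked], cols[picked]] = weights    # keep the graph symmetric
    adj[cols[picked], rows[picked]] = weights
    return adj
```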
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity at modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract private graph data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
- Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks (sketched below).
We show that PTDNet can improve the performance of GNNs significantly, and the performance gain becomes larger for more noisy datasets.
arXiv Detail & Related papers (2020-11-13T18:53:21Z)
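The PTDNet summary above implies a simple structure: a small parameterized network scores each edge, and the size of the resulting soft mask is penalized to encourage sparsity. A hypothetical PyTorch sketch follows; the module, its layers, and all names are invented for illustration.

```python
import torch
import torch.nn as nn

class EdgeDenoiser(nn.Module):
    """Illustrative sketch of the PTDNet idea: a parameterized network
    scores each edge, and the sum of the soft mask is penalized as a
    proxy for the number of edges in the sparsified graph.
    """
    def __init__(self, feat_dim, hidden=16):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, edge_index):
        # Score each edge from its endpoint features; sigmoid -> (0, 1).
        src, dst = edge_index
        mask = torch.sigmoid(
            self.scorer(torch.cat([x[src], x[dst]], dim=-1))
        ).squeeze(-1)
        sparsity_penalty = mask.sum()   # proxy for the edge count
        return mask, sparsity_penalty
```

In training, the mask would reweight messages in a downstream GNN, and sparsity_penalty would be added to the task loss with a small coefficient.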
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training (sketched below).
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
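FLAG's stated mechanism, iterative gradient-based perturbation of node features during training, can be sketched as follows. Here model, x, and y are assumed placeholders (a node classifier returning logits, node features, and labels), and this is an illustrative reading of the summary rather than the official implementation.

```python
import torch
import torch.nn.functional as F

def flag_loss(model, x, y, step_size=1e-3, n_steps=3):
    """Illustrative sketch of FLAG-style training: augment node
    features with gradient-based adversarial perturbations while model
    gradients accumulate "for free" across the ascent steps.
    """
    perturb = torch.empty_like(x).uniform_(-step_size, step_size)
    perturb.requires_grad_()
    loss = F.cross_entropy(model(x + perturb), y) / n_steps
    for _ in range(n_steps - 1):
        loss.backward()
        # Gradient ascent on the perturbation; model grads accumulate.
        perturb.data = perturb.data + step_size * perturb.grad.sign()
        perturb.grad.zero_()
        loss = F.cross_entropy(model(x + perturb), y) / n_steps
    loss.backward()
    return loss  # call optimizer.step() afterwards as usual
```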