Towards an Efficient and General Framework of Robust Training for Graph
Neural Networks
- URL: http://arxiv.org/abs/2002.10947v1
- Date: Tue, 25 Feb 2020 15:17:58 GMT
- Title: Towards an Efficient and General Framework of Robust Training for Graph
Neural Networks
- Authors: Kaidi Xu, Sijia Liu, Pin-Yu Chen, Mengshu Sun, Caiwen Ding, Bhavya
Kailkhura, Xue Lin
- Abstract summary: Graph Neural Networks (GNNs) have made significant advances on several fundamental inference tasks.
Despite GNNs' impressive performance, it has been observed that carefully crafted perturbations on graph structures lead them to make wrong predictions.
We propose a general framework which leverages greedy search algorithms and zeroth-order methods to obtain robust GNNs.
- Score: 96.93500886136532
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have made significant advances on several
fundamental inference tasks. As a result, there is a surge of interest in using
these models for making potentially important decisions in high-regret
applications. However, despite GNNs' impressive performance, it has been
observed that carefully crafted perturbations on graph structures (or node
attributes) lead them to make wrong predictions. The presence of these
adversarial examples raises serious security concerns. Most existing robust
GNN design/training methods are only applicable to white-box settings, where
model parameters are known and gradient-based methods can be used after a
convex relaxation of the discrete graph domain. More importantly, these
methods are neither efficient nor scalable, which makes them infeasible for
time-sensitive tasks and massive graph datasets. To overcome these
limitations, we propose a general framework which leverages greedy search
algorithms and zeroth-order methods to obtain robust GNNs in a generic and
efficient manner. On several applications, we show that the proposed
techniques are significantly less computationally expensive and, in some
cases, more robust than state-of-the-art methods, making them suitable for
large-scale problems that were out of reach for traditional robust training
methods.
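
To make the two ingredients named in the abstract concrete, below is a minimal, self-contained sketch of (i) a two-point zeroth-order gradient estimator that needs only loss-value queries, so it fits the black-box setting, and (ii) a greedy selection of discrete edge flips guided by that estimate. This is an illustration under stated assumptions, not the authors' published algorithm; `loss_fn`, the query budget `q`, and the flip `budget` are hypothetical placeholders.

```python
# Sketch only: NOT the paper's algorithm. Combines a zeroth-order (ZO)
# gradient estimate (loss queries only) with greedy discrete edge flips.
import numpy as np

def zo_gradient(loss_fn, x, q=20, mu=1e-3, rng=None):
    """Two-point random-direction zeroth-order estimator.

    Approximates grad loss_fn(x) from function values alone:
    g ~ (d / (q * mu)) * sum_i [loss(x + mu * u_i) - loss(x)] * u_i.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    d, base = x.size, loss_fn(x)
    g = np.zeros_like(x)
    for _ in range(q):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)                 # random unit direction
        g += (loss_fn(x + mu * u) - base) * u  # finite difference along u
    return (d / (q * mu)) * g

def unflatten(v, n, iu):
    """Rebuild a symmetric adjacency matrix from its upper triangle."""
    a = np.zeros((n, n))
    a[iu] = v
    return a + a.T

def greedy_edge_flips(loss_fn, adj, budget=3):
    """Greedily apply `budget` single-edge flips, each chosen because the
    ZO gradient estimate predicts the largest loss increase."""
    adj = adj.copy()
    n = adj.shape[0]
    iu = np.triu_indices(n, k=1)               # work on the upper triangle
    for _ in range(budget):
        x = adj[iu].astype(float)
        g = zo_gradient(lambda v: loss_fn(unflatten(v, n, iu)), x)
        # Flipping entry j moves it 0 -> 1 (helpful if g[j] > 0)
        # or 1 -> 0 (helpful if g[j] < 0).
        score = np.where(x == 0, g, -g)
        j = int(np.argmax(score))
        r, c = iu[0][j], iu[1][j]
        adj[r, c] ^= 1                          # apply the best single flip
        adj[c, r] ^= 1                          # keep the matrix symmetric
    return adj

if __name__ == "__main__":
    # Toy loss that just counts edges; greedy should add `budget` edges.
    toy_loss = lambda a: a.sum()
    print(greedy_edge_flips(toy_loss, np.zeros((4, 4), dtype=int), budget=2))
```

The same estimator could, in principle, drive robust min-max training by querying a training loss instead of an attack loss; the sketch only shows the query-based mechanics.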
Related papers
- Haste Makes Waste: A Simple Approach for Scaling Graph Neural Networks [37.41604955004456]
Graph neural networks (GNNs) have demonstrated remarkable success in graph representation learning.
Various sampling approaches have been proposed to scale GNNs to applications with large-scale graphs.
arXiv Detail & Related papers (2024-10-07T18:29:02Z)
- DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment [57.62885438406724]
Graph neural networks are recognized for their strong performance across various applications.
Backpropagation (BP) has limitations that challenge its biological plausibility and affect the efficiency, scalability, and parallelism of training neural networks for graph-based tasks.
We propose DFA-GNN, a novel forward learning framework tailored for GNNs with a case study of semi-supervised learning.
arXiv Detail & Related papers (2024-06-04T07:24:51Z)
- Robust Graph Neural Network based on Graph Denoising [10.564653734218755]
Graph Neural Networks (GNNs) have emerged as a notable alternative for addressing learning problems on non-Euclidean datasets.
This work proposes a robust implementation of GNNs that explicitly accounts for the presence of perturbations in the observed topology.
arXiv Detail & Related papers (2023-12-11T17:43:57Z)
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike Lottery Ticket Hypothesis (LTH)-based methods, the proposed CGP approach requires no re-training, which significantly reduces computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z)
- Distributionally Robust Semi-Supervised Learning Over Graphs [68.29280230284712]
Semi-supervised learning (SSL) over graph-structured data emerges in many network science applications.
To efficiently manage learning over graphs, variants of graph neural networks (GNNs) have been developed recently.
Despite their success in practice, most existing methods are unable to handle graphs with uncertain nodal attributes.
Challenges also arise due to distributional uncertainties associated with data acquired by noisy measurements.
A distributionally robust learning framework is developed, where the objective is to train models that exhibit quantifiable robustness against perturbations.
arXiv Detail & Related papers (2021-10-20T14:23:54Z)
- Scalable Adversarial Attack on Graph Neural Networks with Alternating Direction Method of Multipliers [17.09807200410981]
We propose SAG, the first scalable adversarial attack method based on the Alternating Direction Method of Multipliers (ADMM).
We show that SAG can significantly reduce the computation and memory overhead compared with the state-of-the-art approach.
arXiv Detail & Related papers (2020-09-22T00:33:36Z)
- Efficient Robustness Certificates for Discrete Data: Sparsity-Aware Randomized Smoothing for Graphs, Images and More [85.52940587312256]
We propose a model-agnostic certificate based on the randomized smoothing framework which subsumes earlier work and is tight, efficient, and sparsity-aware.
We show the effectiveness of our approach on a wide variety of models, datasets, and tasks -- specifically highlighting its use for Graph Neural Networks (a toy sketch of the smoothing step appears after this list).
arXiv Detail & Related papers (2020-08-29T10:09:02Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
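
As a companion to the "Efficient Robustness Certificates" entry above, here is a toy sketch of the sparsity-aware smoothing step for binary data: 0-bits are flipped on with a small probability and 1-bits flipped off with a larger one, so random samples stay as sparse as real graph data, and the smoothed classifier is a majority vote. The certificate computation itself is omitted, and `classifier`, `p_plus`, and `p_minus` are illustrative placeholders rather than that paper's API.

```python
# Sketch only: sampling + majority vote of a sparsity-aware smoothed
# classifier over binary inputs; no certificate is computed here.
import numpy as np

def smoothed_predict(classifier, x, p_plus=0.01, p_minus=0.6,
                     n_samples=1000, rng=None):
    """Majority vote of `classifier` under sparsity-aware random flips.

    Each 0-bit of the binary input x flips to 1 with small probability
    p_plus; each 1-bit flips to 0 with larger probability p_minus, so the
    random samples remain sparse like real graph data.
    """
    rng = np.random.default_rng() if rng is None else rng
    votes = {}
    for _ in range(n_samples):
        flip_prob = np.where(x == 1, p_minus, p_plus)
        flips = rng.random(x.shape) < flip_prob
        y = int(classifier(np.where(flips, 1 - x, x)))
        votes[y] = votes.get(y, 0) + 1
    return max(votes, key=votes.get)

if __name__ == "__main__":
    # Toy classifier: label = parity of the number of 1-bits.
    clf = lambda v: v.sum() % 2
    x = np.zeros(50, dtype=int); x[:5] = 1
    print("smoothed prediction:",
          smoothed_predict(clf, x, rng=np.random.default_rng(1)))
```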
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.