Defending Graph Convolutional Networks against Dynamic Graph Perturbations via Bayesian Self-supervision
- URL: http://arxiv.org/abs/2203.03762v1
- Date: Mon, 7 Mar 2022 22:57:43 GMT
- Title: Defending Graph Convolutional Networks against Dynamic Graph Perturbations via Bayesian Self-supervision
- Authors: Jun Zhuang, Mohammad Al Hasan
- Abstract summary: Graph Convolutional Networks (GCNs) achieve remarkable performance on the node classification task.
GCNs may be vulnerable to adversarial attacks on label-scarce dynamic graphs.
We propose a novel Bayesian self-supervision model, namely GraphSS, to address the issue.
- Score: 5.037076816350975
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, ample evidence has shown that Graph Convolutional Networks (GCNs) achieve remarkable performance on the node classification task. However, GCNs may be vulnerable to adversarial attacks on label-scarce dynamic graphs. Many existing works aim to strengthen the robustness of GCNs; for instance, adversarial training is used to shield GCNs against malicious perturbations. However, these works fail on dynamic graphs, for which label scarcity is a pressing issue. To overcome label scarcity, self-training iteratively assigns pseudo-labels to highly confident unlabeled nodes, but such attempts may suffer serious degradation under dynamic graph perturbations. In this paper, we generalize noisy supervision as a form of self-supervised learning and then propose a novel Bayesian self-supervision model, namely GraphSS, to address the issue. Extensive experiments demonstrate that GraphSS can not only reliably flag perturbations on dynamic graphs but also effectively recover the predictions of a node classifier when the graph is under such perturbations. Both advantages generalize over three classic GCNs across five public graph datasets.
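The baseline the abstract contrasts against, plain self-training, is easy to make concrete. The sketch below is a minimal illustration of that baseline, not the authors' GraphSS code: the dense two-layer GCN, the normalized adjacency A_hat, and the fixed confidence threshold are all illustrative assumptions.

```python
# Minimal sketch of the self-training baseline described in the abstract,
# NOT the authors' GraphSS implementation. The dense two-layer GCN and all
# names (A_hat, threshold, etc.) are illustrative assumptions.
import torch
import torch.nn.functional as F

class DenseGCN(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hid_dim)
        self.lin2 = torch.nn.Linear(hid_dim, n_classes)

    def forward(self, A_hat, X):
        # A_hat: (n, n) symmetrically normalized adjacency with self-loops
        h = F.relu(A_hat @ self.lin1(X))
        return A_hat @ self.lin2(h)  # per-node logits

@torch.no_grad()
def self_training_round(model, A_hat, X, y, labeled_mask, threshold=0.9):
    """Pseudo-label unlabeled nodes whose predicted class is highly confident.

    Under dynamic graph perturbations these confident predictions can be
    wrong, which is exactly the failure mode the paper targets.
    """
    model.eval()
    probs = F.softmax(model(A_hat, X), dim=1)
    conf, pseudo = probs.max(dim=1)
    newly = (~labeled_mask) & (conf >= threshold)   # confident & unlabeled
    y = y.clone()
    y[newly] = pseudo[newly]                        # may be noisy under attack
    return y, labeled_mask | newly
```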
Related papers
- RobGC: Towards Robust Graph Condensation [61.259453496191696]
Graph neural networks (GNNs) have attracted widespread attention for their impressive capability of graph representation learning.
However, the increasing prevalence of large-scale graphs presents a significant challenge for GNN training due to their computational demands.
We propose graph condensation (GC) to generate an informative compact graph that enables efficient training of GNNs while retaining performance.
arXiv Detail & Related papers (2024-06-19T04:14:57Z)
- Learning on Graphs under Label Noise [5.909452203428086]
We develop a novel approach dubbed Consistent Graph Neural Network (CGNN) to solve the problem of learning on graphs with label noise.
Specifically, we employ graph contrastive learning as a regularization term, which promotes two views of augmented nodes to have consistent representations.
To detect noisy labels on the graph, we present a sample selection technique based on the homophily assumption.
arXiv Detail & Related papers (2023-06-14T01:38:01Z)
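The homophily-based sample selection in the CGNN summary above can be illustrated with a small sketch: under homophily, adjacent nodes tend to share classes, so a training label that disagrees with most labeled neighbors is suspect. Everything below (the majority-vote rule, the agree_ratio threshold) is an illustrative assumption, not the paper's implementation.

```python
# Illustrative sketch of homophily-based noisy-label detection, as in the
# CGNN summary above; not the paper's actual implementation.
import torch

def suspect_labels(A, y, labeled_mask, agree_ratio=0.5):
    """Flag labeled nodes whose label disagrees with most labeled neighbors.

    A: (n, n) float binary adjacency, y: (n,) integer class labels,
    labeled_mask: (n,) bool. All names are illustrative assumptions.
    """
    n = y.numel()
    n_classes = int(y.max().item()) + 1
    # One-hot labels, zeroed out for unlabeled nodes.
    onehot = torch.zeros(n, n_classes)
    onehot[labeled_mask] = torch.eye(n_classes)[y[labeled_mask]]
    votes = A @ onehot                       # neighbor label counts per class
    n_voters = votes.sum(dim=1)              # labeled neighbors per node
    agree = votes.gather(1, y.view(-1, 1)).squeeze(1) / n_voters.clamp(min=1)
    # Suspect: labeled, has labeled neighbors, low agreement with them.
    return labeled_mask & (n_voters > 0) & (agree < agree_ratio)
```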
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Reliable Representations Make A Stronger Defender: Unsupervised Structure Refinement for Robust GNN [36.045702771828736]
Graph Neural Networks (GNNs) have been successful on a wide range of tasks over graph data.
Recent studies have shown that attackers can catastrophically degrade the performance of GNNs by maliciously modifying the graph structure.
We propose an unsupervised pipeline, named STABLE, to optimize the graph structure.
arXiv Detail & Related papers (2022-06-30T10:02:32Z)
- Similarity-aware Positive Instance Sampling for Graph Contrastive Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pair-wise similarity measurements.
In addition, we develop an adaptive node-level pre-training method that dynamically masks nodes to distribute them evenly in the graph.
arXiv Detail & Related papers (2022-06-23T20:12:51Z)
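As a rough illustration of the positive-instance selection described above, the sketch below picks each training graph's top-k most similar peers as contrastive positives; cosine similarity over assumed graph embeddings stands in for the paper's domain-specific pairwise measurements.

```python
# Illustrative sketch of similarity-aware positive selection: pick each
# graph's most similar training graphs as contrastive positives. Cosine
# similarity and the top-k rule are stand-in assumptions.
import torch
import torch.nn.functional as F

def select_positives(graph_emb, k=3):
    """graph_emb: (num_graphs, d) embeddings of the training graphs."""
    z = F.normalize(graph_emb, dim=1)        # unit vectors -> cosine similarity
    sim = z @ z.t()
    sim.fill_diagonal_(-float("inf"))        # a graph is not its own positive
    return sim.topk(k, dim=1).indices        # (num_graphs, k) positive indices
```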
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by the data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
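The agreement-maximization step in the summary above can be written as an InfoNCE-style contrastive loss between node embeddings computed under the learned graph and under the anchor graph. Only the loss is sketched here; the encoder is assumed to exist elsewhere, and this is an illustration rather than the paper's exact objective.

```python
# Illustrative sketch of the anchor-graph agreement idea: an InfoNCE-style
# loss that pulls each node's embedding under the learned graph toward its
# embedding under the anchor graph. Encoder details are assumptions.
import torch
import torch.nn.functional as F

def agreement_loss(z_learned, z_anchor, tau=0.5):
    """z_learned, z_anchor: (n, d) node embeddings from the two graph views."""
    z1 = F.normalize(z_learned, dim=1)
    z2 = F.normalize(z_anchor, dim=1)
    logits = (z1 @ z2.t()) / tau             # (n, n) cross-view similarities
    targets = torch.arange(z1.size(0))       # node i matches node i
    return F.cross_entropy(logits, targets)
```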
- Attention-Driven Dynamic Graph Convolutional Network for Multi-Label Image Recognition [53.17837649440601]
We propose an Attention-Driven Dynamic Graph Convolutional Network (ADD-GCN) to dynamically generate a specific graph for each image.
Experiments on public multi-label benchmarks demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2020-12-05T10:10:12Z)
- Deperturbation of Online Social Networks via Bayesian Label Transition [5.037076816350975]
Online social networks (OSNs) classify users into different categories based on their online activities and interests.
A small number of users, so-called perturbators, may perform random activities on an OSN, which significantly degrade the performance of GCN-based node classification.
We develop a GCN defense model, namely GraphLT, which uses the concept of label transition.
arXiv Detail & Related papers (2020-10-27T08:15:12Z)
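Since GraphLT is the label-transition predecessor of the GraphSS model in the main paper, its core idea is worth a sketch: treat the classifier's outputs as noisy observations of latent node labels and recalibrate them through a class-to-class transition matrix. The count-based estimator and uniform prior below are simplifications, not the authors' Bayesian sampler.

```python
# Rough sketch of the label-transition idea behind GraphLT/GraphSS: the
# matrix phi[i, j] estimates the probability that latent class i is
# predicted as class j. Count-based estimation is a simplification.
import torch

def estimate_transition(latent, predicted, n_classes):
    """Count-based estimate of phi from (latent, predicted) label pairs."""
    phi = torch.full((n_classes, n_classes), 1e-3)  # additive smoothing
    for i, j in zip(latent.tolist(), predicted.tolist()):
        phi[i, j] += 1.0
    return phi / phi.sum(dim=1, keepdim=True)       # rows are distributions

def recalibrate(pred_probs, phi):
    """Posterior over latent labels given predicted class probabilities."""
    scores = pred_probs @ phi.t()                   # Bayes rule, uniform prior
    return scores / scores.sum(dim=1, keepdim=True)
```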
- Adversarial Attack on Hierarchical Graph Pooling Neural Networks [14.72310134429243]
We study the robustness of graph neural networks (GNNs) for graph classification tasks.
In this paper, we propose an adversarial attack framework for the graph classification task.
To the best of our knowledge, this is the first work on the adversarial attack against hierarchical GNN-based graph classification models.
arXiv Detail & Related papers (2020-05-23T16:19:47Z)
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
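Joint learning of a graph and a model, as in the Pro-GNN summary above, typically optimizes a task loss plus structure regularizers. The sketch below reduces the regularization to fidelity to the observed adjacency plus an L1 sparsity term and reuses the dense-GCN interface from the first sketch; it is illustrative only, not Pro-GNN's exact objective.

```python
# Simplified sketch of jointly learning a graph S and GNN weights, in the
# spirit of the Pro-GNN summary above; the regularizers and weights here
# (alpha, beta) are illustrative assumptions.
import torch
import torch.nn.functional as F

def joint_loss(model, S, A_obs, X, y, train_mask, alpha=1.0, beta=5e-4):
    """S: (n, n) learnable logits of the graph; A_obs: observed adjacency."""
    S_adj = torch.sigmoid((S + S.t()) / 2)     # symmetric, entries in (0, 1)
    logits = model(S_adj, X)                   # GNN run on the learned graph
    task = F.cross_entropy(logits[train_mask], y[train_mask])
    fidelity = torch.norm(S_adj - A_obs, p="fro") ** 2  # stay near observed graph
    sparsity = S_adj.abs().sum()                        # L1 keeps the graph sparse
    return task + alpha * fidelity + beta * sparsity
```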
- Adversarial Attacks on Graph Neural Networks via Meta Learning [4.139895092509202]
We investigate training-time attacks on graph neural networks for node classification that perturb the discrete graph structure.
Our core principle is to use meta-gradients to solve the bilevel problem underlying training-time attacks.
arXiv Detail & Related papers (2019-02-22T09:20:05Z)
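The meta-gradient idea above differentiates the attacker's loss through the victim's training procedure to decide which edges to flip. Differentiating through full training is involved, so the sketch below shows only the common one-step approximation: scoring edge flips by the gradient of the loss on target nodes with respect to a dense adjacency. The fixed trained model (taking (A, X) like the dense GCN in the first sketch) and the masks are assumptions.

```python
# Heavily simplified sketch of gradient-based structure-attack scoring. The
# actual meta-attack differentiates through the victim's training; here a
# fixed trained model stands in, giving the one-step approximation.
import torch
import torch.nn.functional as F

def edge_flip_scores(model, A, X, y, target_mask):
    """Score how much flipping each entry of A would raise the victim's loss."""
    A = A.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(A, X)[target_mask], y[target_mask])
    grad = torch.autograd.grad(loss, A)[0]
    # Flipping 0 -> 1 helps the attacker where grad > 0; 1 -> 0 where grad < 0.
    return grad * (1 - 2 * A.detach())
```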
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.