Toward Enhanced Robustness in Unsupervised Graph Representation
Learning: A Graph Information Bottleneck Perspective
- URL: http://arxiv.org/abs/2201.08557v2
- Date: Thu, 8 Jun 2023 08:35:07 GMT
- Title: Toward Enhanced Robustness in Unsupervised Graph Representation
Learning: A Graph Information Bottleneck Perspective
- Authors: Jihong Wang, Minnan Luo, Jundong Li, Ziqi Liu, Jun Zhou, Qinghua Zheng
- Abstract summary: We propose a novel unbiased robust UGRL method called Robust Graph Information Bottleneck (RGIB)
Our RGIB attempts to learn robust node representations against adversarial perturbations by preserving the original information in the benign graph while eliminating the adversarial information in the adversarial graph.
- Score: 48.01303380298564
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies have revealed that Graph Neural Networks (GNNs) are vulnerable to adversarial attacks.
Most existing robust graph learning methods measure model robustness based on
label information, rendering them infeasible when label information is not
available. A straightforward direction is to employ the widely used Infomax
technique from typical Unsupervised Graph Representation Learning (UGRL) to
learn robust unsupervised representations. Nonetheless, directly transplanting
the Infomax technique from typical UGRL to robust UGRL may involve a biased
assumption. In light of the limitation of Infomax, we propose a novel unbiased
robust UGRL method called Robust Graph Information Bottleneck (RGIB), which is
grounded in the Information Bottleneck (IB) principle. Our RGIB attempts to
learn robust node representations against adversarial perturbations by
preserving the original information in the benign graph while eliminating the
adversarial information in the adversarial graph. Optimizing RGIB poses two
main challenges: 1) the high complexity of adversarial attacks that jointly
perturb node features and graph structure during training; 2) estimating
mutual information on adversarially attacked graphs. To tackle
these problems, we further propose an efficient adversarial training strategy
with only feature perturbations and an effective mutual information estimator
with a subgraph-level summary. Moreover, we theoretically establish a connection
between our proposed RGIB and the robustness of downstream classifiers,
revealing that RGIB can provide a lower bound on the adversarial risk of
downstream classifiers. Extensive experiments over several benchmarks and
downstream tasks demonstrate the effectiveness and superiority of our proposed
method.
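To make the objective concrete, here is a minimal, hedged reading of the description above (not necessarily the paper's exact formulation): an Information Bottleneck trade-off in which the encoder reads the attacked graph but is trained to retain only what the benign graph determines. Writing $\mathcal{G}$ for the benign graph, $\hat{\mathcal{G}}$ for its adversarially perturbed counterpart, $Z = f_\theta(\hat{\mathcal{G}})$ for the node representations, and $\beta > 0$ for a trade-off coefficient:

$$
\min_{\theta} \; -\, I\bigl(Z;\, \mathcal{G}\bigr) \;+\; \beta\, I\bigl(Z;\, \hat{\mathcal{G}}\bigr),
\qquad Z = f_\theta(\hat{\mathcal{G}}).
$$

The first term preserves what the benign graph tells us about the representation; the second compresses away whatever the representation picks up only from the perturbed input.

The two practical components, feature-only adversarial training and a mutual information estimator built on subgraph-level summaries, can likewise be sketched. The PyTorch code below is illustrative only: the dense one-layer encoder, the single FGSM-style attack step, the bilinear discriminator, and every name and hyperparameter are assumptions, not the authors' implementation.

```python
# Hedged sketch (not the authors' code) of the two components in the abstract:
# (1) adversarial training that perturbs node *features* only, and
# (2) a Jensen-Shannon-style mutual-information estimator that scores node
#     embeddings against a subgraph-level summary.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseGCNEncoder(nn.Module):
    """One-layer GCN over a dense, normalized adjacency matrix."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, adj_norm, x):
        return F.relu(adj_norm @ self.lin(x))        # (N, hid_dim)


class BilinearDiscriminator(nn.Module):
    """Scores (node embedding, subgraph summary) pairs as MI logits."""
    def __init__(self, dim):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(dim, dim))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, z, summary):
        return (z @ self.weight * summary).sum(dim=-1)  # (N,) logits


def subgraph_summary(adj_norm, z):
    # Subgraph-level summary: pool each node's neighborhood embeddings,
    # instead of the single graph-level summary used by DGI-style Infomax.
    return torch.sigmoid(adj_norm @ z)


def mi_lower_bound(disc, z, summary):
    # JSD-style bound: aligned pairs are positives; row-shuffled
    # summaries serve as negatives.
    neg = summary[torch.randperm(summary.size(0))]
    return (F.logsigmoid(disc(z, summary)).mean()
            + F.logsigmoid(-disc(z, neg)).mean())


def feature_attack(encoder, disc, adj_norm, x, eps=0.01):
    # One FGSM-style ascent step on the features only; the structure is
    # left untouched, which is what keeps the inner loop cheap.
    x_adv = x.clone().detach().requires_grad_(True)
    z = encoder(adj_norm, x_adv)
    (-mi_lower_bound(disc, z, subgraph_summary(adj_norm, z))).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()


def train_step(encoder, disc, opt, adj_norm, x, beta=0.1):
    x_adv = feature_attack(encoder, disc, adj_norm, x)
    z_adv = encoder(adj_norm, x_adv)             # encode the attacked graph
    z_ben = encoder(adj_norm, x).detach()        # benign reference
    preserve = mi_lower_bound(disc, z_adv, subgraph_summary(adj_norm, z_ben))
    compress = mi_lower_bound(disc, z_adv, subgraph_summary(adj_norm, z_adv))
    loss = -(preserve - beta * compress)         # IB trade-off sketched above
    opt.zero_grad()                              # also clears attack-step grads
    loss.backward()
    opt.step()
    return loss.item()
```

A usage note: adj_norm here is the symmetrically normalized adjacency $\tilde{D}^{-1/2}(A + I)\tilde{D}^{-1/2}$, and the encoder and discriminator parameters are assumed to share one optimizer. Restricting the inner attack to features sidesteps the discrete, combinatorial search over edge flips that makes joint structure-and-feature attacks expensive during training, which is the efficiency argument the abstract makes.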
Related papers
- Combating Bilateral Edge Noise for Robust Link Prediction [56.43882298843564]
We propose an information-theory-guided principle, Robust Graph Information Bottleneck (RGIB), to extract reliable supervision signals and avoid representation collapse.
Two instantiations, RGIB-SSL and RGIB-REP, are explored to leverage the merits of different methodologies.
Experiments on six datasets and three GNNs under diverse noise scenarios verify the effectiveness of our RGIB instantiations.
arXiv Detail & Related papers (2023-11-02T12:47:49Z)
- Learning Robust Representation through Graph Adversarial Contrastive Learning [6.332560610460623]
Existing studies show that node representations generated by graph neural networks (GNNs) are vulnerable to adversarial attacks.
We propose a novel Graph Adversarial Contrastive Learning framework (GraphACL) by introducing adversarial augmentations into graph self-supervised learning.
arXiv Detail & Related papers (2022-01-31T07:07:51Z)
- Graph Information Bottleneck [77.21967740646784]
Graph Neural Networks (GNNs) provide an expressive way to fuse information from network structure and node features.
Inheriting from the general Information Bottleneck (IB) principle, Graph Information Bottleneck (GIB) aims to learn the minimal sufficient representation for a given task.
We show that our proposed models are more robust than state-of-the-art graph defense models.
arXiv Detail & Related papers (2020-10-24T07:13:00Z)
- Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph-structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior work in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder (an Infomax-style criterion; a sketch follows this list).
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
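For contrast with the IB objective sketched after the abstract, the Infomax criterion used by GMI-style methods (and the one the main abstract argues cannot be transplanted directly into robust UGRL) maximizes retained information only, with no compression term. A hedged sketch, with $X$ the node features, $A$ the adjacency matrix, and $f_\theta$ a graph encoder:

$$
\max_{\theta} \; I\bigl((X, A);\; f_\theta(X, A)\bigr).
$$

Applied verbatim to a graph that may itself be adversarially perturbed, this criterion also rewards encoding the perturbation, which is one way to read the "biased assumption" the main abstract attributes to naive Infomax transplantation.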