Understanding Structural Vulnerability in Graph Convolutional Networks
- URL: http://arxiv.org/abs/2108.06280v1
- Date: Fri, 13 Aug 2021 15:07:44 GMT
- Title: Understanding Structural Vulnerability in Graph Convolutional Networks
- Authors: Liang Chen, Jintang Li, Qibiao Peng, Yang Liu, Zibin Zheng and Carl
Yang
- Abstract summary: Graph Convolutional Networks (GCNs) are vulnerable to adversarial attacks on the graph structure.
We show that structural adversarial examples can be attributed to the non-robust aggregation scheme of GCNs.
We show that adopting an aggregation scheme with a high breakdown point could significantly enhance the robustness of GCNs against structural attacks.
- Score: 27.602802961213236
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies have shown that Graph Convolutional Networks (GCNs) are
vulnerable to adversarial attacks on the graph structure. Although multiple
works have been proposed to improve their robustness against such structural
adversarial attacks, the reasons for the success of the attacks remain unclear.
In this work, we theoretically and empirically demonstrate that structural
adversarial examples can be attributed to the non-robust aggregation scheme
(i.e., the weighted mean) of GCNs. Specifically, our analysis takes advantage
of the breakdown point, which quantitatively measures the robustness of
aggregation schemes. The key insight is that the weighted mean, as the basic
design of GCNs, has a low breakdown point, so its output can be dramatically
changed by injecting a single edge. We show that adopting an aggregation scheme
with a high breakdown point (e.g., the median or trimmed mean) could significantly enhance
the robustness of GCNs against structural attacks. Extensive experiments on
four real-world datasets demonstrate that such a simple but effective method
achieves the best robustness performance compared to state-of-the-art models.
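To make the abstract's key claim concrete, the following is a minimal sketch in plain PyTorch (not the authors' released implementation; the function names and the toy attacker features are illustrative assumptions). It contrasts the dimension-wise mean with the median and trimmed mean on a single neighborhood: injecting one edge to an attacker-controlled node shifts the mean aggregate substantially, while the high-breakdown-point aggregators barely move.

```python
# Minimal sketch (illustrative only): why a weighted/plain mean aggregator is
# fragile to a single injected edge, while median and trimmed-mean aggregators
# (high breakdown point) are not. Function names and the toy attacker features
# below are assumptions for illustration, not the authors' code.
import torch


def mean_aggregate(neighbor_feats: torch.Tensor) -> torch.Tensor:
    # Plain mean over the neighborhood, standing in for the GCN weighted mean.
    return neighbor_feats.mean(dim=0)


def median_aggregate(neighbor_feats: torch.Tensor) -> torch.Tensor:
    # Dimension-wise median: breakdown point ~0.5, tolerates many corrupted rows.
    return neighbor_feats.median(dim=0).values


def trimmed_mean_aggregate(neighbor_feats: torch.Tensor, trim: float = 0.45) -> torch.Tensor:
    # Dimension-wise trimmed mean: sort each feature column and drop the `trim`
    # fraction of smallest and largest values before averaging the rest.
    n = neighbor_feats.shape[0]
    k = int(n * trim)
    sorted_feats, _ = torch.sort(neighbor_feats, dim=0)
    kept = sorted_feats[k:n - k] if n - 2 * k > 0 else sorted_feats
    return kept.mean(dim=0)


if __name__ == "__main__":
    torch.manual_seed(0)
    clean_neighbors = torch.randn(8, 4)     # 8 legitimate neighbors, 4 features
    attacker = torch.full((1, 4), 100.0)    # one injected edge to an outlier node
    attacked = torch.cat([clean_neighbors, attacker], dim=0)

    for name, agg in [("mean", mean_aggregate),
                      ("median", median_aggregate),
                      ("trimmed mean", trimmed_mean_aggregate)]:
        shift = (agg(attacked) - agg(clean_neighbors)).abs().max().item()
        print(f"{name:12s} max per-dimension shift after one injected edge: {shift:.3f}")
```

In a defense along these lines, such a robust aggregator would replace the degree-normalized weighted sum inside each GCN layer; per the abstract, this simple substitution already achieves the best robustness on four real-world datasets.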
Related papers
- Oversmoothing as Loss of Sign: Towards Structural Balance in Graph Neural Networks [54.62268052283014]
Oversmoothing is a common issue in graph neural networks (GNNs).
Three major classes of anti-oversmoothing techniques can be mathematically interpreted as message passing over signed graphs.
Negative edges can repel nodes to a certain extent, providing deeper insights into how these methods mitigate oversmoothing.
arXiv Detail & Related papers (2025-02-17T03:25:36Z)
- Graph Structure Refinement with Energy-based Contrastive Learning [56.957793274727514]
We introduce an unsupervised method based on a joint of generative training and discriminative training to learn graph structure and representation.
We propose an Energy-based Contrastive Learning (ECL) guided Graph Structure Refinement (GSR) framework, denoted as ECL-GSR.
ECL-GSR achieves faster training with fewer samples and less memory than the leading baseline, highlighting its simplicity and efficiency in downstream tasks.
arXiv Detail & Related papers (2024-12-20T04:05:09Z)
- Learning to Model Graph Structural Information on MLPs via Graph Structure Self-Contrasting [50.181824673039436]
We propose a Graph Structure Self-Contrasting (GSSC) framework that learns graph structural information without message passing.
The proposed framework is based purely on Multi-Layer Perceptrons (MLPs), where the structural information is only implicitly incorporated as prior knowledge.
It first applies structural sparsification to remove potentially uninformative or noisy edges in the neighborhood, and then performs structural self-contrasting in the sparsified neighborhood to learn robust node representations.
arXiv Detail & Related papers (2024-09-09T12:56:02Z)
- On provable privacy vulnerabilities of graph representations [34.45433384694758]
Graph representation learning (GRL) is critical for extracting insights from complex network structures.
It also raises security concerns due to potential privacy vulnerabilities in these representations.
This paper investigates the structural vulnerabilities in graph neural models where sensitive topological information can be inferred through edge reconstruction attacks.
arXiv Detail & Related papers (2024-02-06T14:26:22Z)
- Sparse but Strong: Crafting Adversarially Robust Graph Lottery Tickets [3.325501850627077]
Graph Lottery Tickets (GLTs) can significantly reduce the inference latency and compute footprint compared to their dense counterparts.
Despite these benefits, their robustness against adversarial structure perturbations has yet to be fully explored.
We present an adversarially robust graph sparsification framework that prunes the adjacency matrix and the GNN weights.
arXiv Detail & Related papers (2023-12-11T17:52:46Z)
- HC-Ref: Hierarchical Constrained Refinement for Robust Adversarial Training of GNNs [7.635985143883581]
Adversarial training, which has been shown to be one of the most effective defense mechanisms against adversarial attacks in computer vision, holds great promise for enhancing the robustness of GNNs.
We propose a hierarchical constraint refinement framework (HC-Ref) that enhances the anti-perturbation capabilities of GNNs and downstream classifiers separately.
arXiv Detail & Related papers (2023-12-08T07:32:56Z)
- Doubly Robust Instance-Reweighted Adversarial Training [107.40683655362285]
We propose a novel doubly-robust instance reweighted adversarial framework.
Our importance weights are obtained by optimizing the KL-divergence regularized loss function.
Our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance.
arXiv Detail & Related papers (2023-08-01T06:16:18Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Reliable Representations Make A Stronger Defender: Unsupervised Structure Refinement for Robust GNN [36.045702771828736]
Graph Neural Networks (GNNs) have been successful on a wide range of tasks over graph data.
Recent studies have shown that attackers can catastrophically degrade the performance of GNNs by maliciously modifying the graph structure.
We propose an unsupervised pipeline, named STABLE, to optimize the graph structure.
arXiv Detail & Related papers (2022-06-30T10:02:32Z)
- Improving Robustness of Graph Neural Networks with Heterophily-Inspired Designs [18.524164548051417]
Many graph neural networks (GNNs) are sensitive to adversarial attacks, and can suffer from performance loss if the graph structure is intentionally perturbed.
We show that in the standard scenario in which node features exhibit homophily, impactful structural attacks always lead to increased levels of heterophily.
We present two designs -- (i) separate aggregators for ego- and neighbor-embeddings, and (ii) a reduced scope of aggregation -- that can significantly improve the robustness of GNNs.
arXiv Detail & Related papers (2021-06-14T21:39:36Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.