Self-Guided Robust Graph Structure Refinement
- URL: http://arxiv.org/abs/2402.11837v2
- Date: Sat, 2 Mar 2024 05:43:05 GMT
- Title: Self-Guided Robust Graph Structure Refinement
- Authors: Yeonjun In, Kanghoon Yoon, Kibum Kim, Kijung Shin, and Chanyoung Park
- Abstract summary: We propose a self-guided graph structure refinement (GSR) framework to defend GNNs against adversarial attacks.
In this paper, we demonstrate the effectiveness of SG-GSR under various scenarios including non-targeted attacks, targeted attacks, feature attacks, e-commerce fraud, and noisy node labels.
- Score: 37.235898707554284
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies have revealed that GNNs are vulnerable to adversarial attacks.
To defend against such attacks, robust graph structure refinement (GSR) methods
aim at minimizing the effect of adversarial edges based on node features, graph
structure, or external information. However, we have discovered that existing
GSR methods are limited by narrow assumptions, such as clean node
features, moderate structural attacks, and the availability of external clean
graphs, resulting in restricted applicability in real-world scenarios. In
this paper, we propose a self-guided GSR framework (SG-GSR), which utilizes a
clean sub-graph found within the given attacked graph itself. Furthermore, we
propose a novel graph augmentation and a group-training strategy to handle the
two technical challenges in the clean sub-graph extraction: 1) loss of
structural information, and 2) imbalanced node degree distribution. Extensive
experiments demonstrate the effectiveness of SG-GSR under various scenarios
including non-targeted attacks, targeted attacks, feature attacks, e-commerce
fraud, and noisy node labels. Our code is available at
https://github.com/yeonjun-in/torch-SG-GSR.
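To make the clean sub-graph idea concrete, the following is a minimal sketch (Python/NumPy) of one plausible extraction heuristic: keep only the edges whose endpoint features are most similar, on the assumption that adversarial edges tend to connect dissimilar nodes. The function name, the cosine-similarity criterion, and the `keep_ratio` parameter are illustrative assumptions, not taken from the SG-GSR implementation; see the linked repository for the authors' actual method.
```python
import numpy as np

def extract_clean_subgraph(adj: np.ndarray, feats: np.ndarray, keep_ratio: float = 0.8) -> np.ndarray:
    """Keep the top `keep_ratio` fraction of edges ranked by cosine similarity
    of their endpoint feature vectors; drop the rest as likely adversarial.
    (Hypothetical heuristic, not the SG-GSR algorithm itself.)"""
    # Row-normalize features so dot products equal cosine similarities.
    normed = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)

    # Enumerate undirected edges from the upper triangle of the adjacency matrix.
    rows, cols = np.triu(adj, k=1).nonzero()
    sims = np.einsum("ij,ij->i", normed[rows], normed[cols])

    # Rank edges by similarity and keep only the most homophilous ones.
    n_keep = int(keep_ratio * len(sims))
    kept = np.argsort(-sims)[:n_keep]

    # Rebuild a symmetric adjacency matrix containing only the kept edges.
    clean = np.zeros_like(adj)
    clean[rows[kept], cols[kept]] = 1
    clean[cols[kept], rows[kept]] = 1
    return clean

# Example usage on a toy attacked graph with 2-dimensional node features.
if __name__ == "__main__":
    adj = np.array([[0, 1, 1, 0],
                    [1, 0, 1, 1],
                    [1, 1, 0, 0],
                    [0, 1, 0, 0]])
    feats = np.array([[1.0, 0.0],
                      [0.9, 0.1],
                      [0.0, 1.0],
                      [0.8, 0.2]])
    print(extract_clean_subgraph(adj, feats, keep_ratio=0.5))
```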
Related papers
- Relaxing Graph Transformers for Adversarial Attacks [49.450581960551276]
Although Graph Transformers (GTs) have surpassed Message-Passing GNNs on several benchmarks, their adversarial robustness properties are unexplored.
We overcome these challenges by targeting three representative architectures based on (1) random-walk PEs, (2) pair-wise-short-paths, and (3) spectral perturbations.
Our evaluation reveals that they can be catastrophically fragile and underlines our work's importance and the necessity for adaptive attacks.
arXiv Detail & Related papers (2024-07-16T14:24:58Z) - Sparse but Strong: Crafting Adversarially Robust Graph Lottery Tickets [3.325501850627077]
Graph Lottery Tickets (GLTs) can significantly reduce the inference latency and compute footprint compared to their dense counterparts.
Despite these benefits, their performance against adversarial structure perturbations remains to be fully explored.
We present an adversarially robust graph sparsification framework that prunes the adjacency matrix and the GNN weights.
arXiv Detail & Related papers (2023-12-11T17:52:46Z) - Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z) - EDoG: Adversarial Edge Detection For Graph Neural Networks [17.969573886307906]
Graph Neural Networks (GNNs) have been widely applied to different tasks such as bioinformatics, drug design, and social networks.
Recent studies have shown that GNNs are vulnerable to adversarial attacks which aim to mislead the node or subgraph classification prediction by adding subtle perturbations.
We propose EDoG, a general adversarial edge detection pipeline based on graph generation that does not require knowledge of the attack strategies.
arXiv Detail & Related papers (2022-12-27T20:42:36Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous
Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA).
We propose CHAGNN, a general defense framework against GIA based on cooperative homophilous augmentation of the graph data and the model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - Reliable Representations Make A Stronger Defender: Unsupervised
Structure Refinement for Robust GNN [36.045702771828736]
Graph Neural Networks (GNNs) have been successful on a variety of tasks over graph data.
Recent studies have shown that attackers can catastrophically degrade the performance of GNNs by maliciously modifying the graph structure.
We propose an unsupervised pipeline, named STABLE, to optimize the graph structure.
arXiv Detail & Related papers (2022-06-30T10:02:32Z) - BinarizedAttack: Structural Poisoning Attacks to Graph-based Anomaly
Detection [20.666171188140503]
Graph-based Anomaly Detection (GAD) is becoming prevalent due to the powerful representation abilities of graphs.
These GAD tools expose a new attack surface, ironically due to their unique advantage of being able to exploit the relations among data.
In this paper, we exploit this vulnerability by designing a new type of targeted structural poisoning attack against a representative regression-based GAD system, OddBall.
arXiv Detail & Related papers (2021-06-18T08:20:23Z) - GraphAttacker: A General Multi-Task GraphAttack Framework [4.218118583619758]
Graph Neural Networks (GNNs) have been successfully exploited in graph analysis tasks in many real-world applications.
However, GNNs are vulnerable to adversarial samples generated by attackers, which achieve strong attack performance with almost imperceptible perturbations.
We propose GraphAttacker, a novel generic graph attack framework that can flexibly adjust the structures and the attack strategies according to the graph analysis tasks.
arXiv Detail & Related papers (2021-01-18T03:06:41Z) - Adversarial Attack on Large Scale Graph [58.741365277995044]
Recent studies have shown that graph neural networks (GNNs) are vulnerable to perturbations due to a lack of robustness.
Currently, most works on attacking GNNs mainly use gradient information to guide the attack and achieve outstanding performance.
We argue that the main reason such methods do not scale to large graphs is that they have to use the whole graph for attacks, resulting in increasing time and space complexity as the data scale grows.
We present a practical metric named Degree Assortativity Change (DAC) to measure the impacts of adversarial attacks on graph data.
arXiv Detail & Related papers (2020-09-08T02:17:55Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z) - Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)