AN-GCN: An Anonymous Graph Convolutional Network Defense Against
Edge-Perturbing Attack
- URL: http://arxiv.org/abs/2005.03482v6
- Date: Thu, 17 Jun 2021 01:41:29 GMT
- Title: AN-GCN: An Anonymous Graph Convolutional Network Defense Against
Edge-Perturbing Attack
- Authors: Ao Liu, Beibei Li, Tao Li, Pan Zhou, Rui Wang
- Abstract summary: Recent studies have revealed the vulnerability of graph convolutional networks (GCNs) to edge-perturbing attacks.
We first generalize the formulation of edge-perturbing attacks and strictly prove the vulnerability of GCNs to such attacks in node classification tasks.
Following this, an anonymous graph convolutional network, named AN-GCN, is proposed to counter edge-perturbing attacks.
- Score: 53.06334363586119
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies have revealed the vulnerability of graph convolutional
networks (GCNs) to edge-perturbing attacks, such as maliciously inserting or
deleting graph edges. However, a theoretical proof of such vulnerability
remains a big challenge, and effective defense schemes are still open issues.
In this paper, we first generalize the formulation of edge-perturbing attacks
and strictly prove the vulnerability of GCNs to such attacks in node
classification tasks. Following this, an anonymous graph convolutional network,
named AN-GCN, is proposed to counter edge-perturbing attacks.
Specifically, we present a node localization theorem to demonstrate how the GCN
locates nodes during its training phase. In addition, we design a
staggered-Gaussian-noise-based node position generator, and devise a
spectral-graph-convolution-based discriminator to detect the generated node
positions. Further, we derive the optimization objectives for this generator
and discriminator.
AN-GCN can classify nodes without taking their positions as input. We
demonstrate that AN-GCN is secure against edge-perturbing attacks in node
classification tasks: since it classifies nodes without any edge information,
attackers have no way to influence predictions by perturbing edges. Extensive
evaluations demonstrate the effectiveness of the general edge-perturbing
attack model in manipulating the classification results of target nodes. More
importantly, the proposed AN-GCN achieves 82.7% node classification accuracy
without edge-reading permission, outperforming the state-of-the-art GCN.
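To make the scheme described above concrete, below is a minimal, hypothetical
PyTorch sketch of a staggered-Gaussian-noise position generator paired with a
spectral-graph-convolution discriminator using the standard symmetric
normalization D^(-1/2)(A+I)D^(-1/2). The module names, the two-scale noise
schedule, and the training signal are illustrative assumptions, not the
authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's released code): a GAN-style
# pairing of a staggered-Gaussian-noise position generator with a
# spectral-convolution discriminator, loosely following the abstract.
import torch
import torch.nn as nn

class StaggeredNoiseGenerator(nn.Module):
    """Proposes node positions by adding Gaussian noise whose scale is
    staggered (alternated) across node indices -- one plausible reading
    of 'staggered Gaussian noise'."""
    def __init__(self, num_nodes: int, dim: int, sigmas=(0.1, 1.0)):
        super().__init__()
        self.base = nn.Parameter(torch.randn(num_nodes, dim))
        scales = torch.tensor(sigmas).repeat(num_nodes // 2 + 1)[:num_nodes]
        self.register_buffer("scales", scales.unsqueeze(1))

    def forward(self) -> torch.Tensor:
        return self.base + self.scales * torch.randn_like(self.base)

class SpectralDiscriminator(nn.Module):
    """Scores positions with one spectral graph convolution,
    H' = D^(-1/2)(A+I)D^(-1/2) H W, followed by a per-node real/fake head."""
    def __init__(self, adj: torch.Tensor, dim: int, hidden: int = 16):
        super().__init__()
        a_hat = adj + torch.eye(adj.size(0))            # add self-loops
        d_inv_sqrt = a_hat.sum(1).pow(-0.5)
        self.register_buffer(
            "norm_adj", d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :])
        self.conv = nn.Linear(dim, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, pos: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.conv(self.norm_adj @ pos))
        return self.head(self.norm_adj @ h)             # one logit per node

# Toy usage: 6 nodes on a ring graph with 8-dimensional positions.
n, d = 6, 8
adj = torch.zeros(n, n)
idx = torch.arange(n)
adj[idx, (idx + 1) % n] = adj[(idx + 1) % n, idx] = 1.0

gen, disc = StaggeredNoiseGenerator(n, d), SpectralDiscriminator(adj, d)
bce = nn.BCEWithLogitsLoss()
g_loss = bce(disc(gen()), torch.ones(n, 1))  # generator wants 'real' verdicts
print(g_loss.item())
```

In a full adversarial setup the discriminator would additionally be trained on
ground-truth positions with 'real' labels, alternating with generator updates.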
Related papers
- Cost Aware Untargeted Poisoning Attack against Graph Neural Networks [5.660584039688214]
We propose a novel attack loss framework called the Cost Aware Poisoning Attack (CA-attack) to improve the allocation of the attack budget.
Our experiments demonstrate that the proposed CA-attack significantly enhances existing attack strategies.
arXiv Detail & Related papers (2023-12-12T10:54:02Z)
- Graph Agent Network: Empowering Nodes with Inference Capabilities for Adversarial Resilience [50.460555688927826]
We propose the Graph Agent Network (GAgN) to address the vulnerabilities of graph neural networks (GNNs)
GAgN is a graph-structured agent network in which each node is designed as a 1-hop-view agent.
Agents' limited view prevents malicious messages from propagating globally in GAgN, thereby resisting global-optimization-based secondary attacks.
arXiv Detail & Related papers (2023-06-12T07:27:31Z)
- Robust Mid-Pass Filtering Graph Convolutional Networks [47.50194731200042]
Graph convolutional networks (GCNs) are currently the most promising paradigm for dealing with graph-structured data.
Recent studies have also shown that GCNs are vulnerable to adversarial attacks.
We propose a simple yet effective Mid-pass filter GCN (Mid-GCN) to overcome these challenges; a sketch of the mid-pass filtering idea follows this list.
arXiv Detail & Related papers (2023-02-16T03:07:09Z)
- GANI: Global Attacks on Graph Neural Networks via Imperceptible Node Injections [20.18085461668842]
Graph neural networks (GNNs) have found successful applications in various graph-related tasks.
Recent studies have shown that many GNNs are vulnerable to adversarial attacks.
In this paper, we focus on a realistic attack operation via injecting fake nodes.
arXiv Detail & Related papers (2022-10-23T02:12:26Z)
- Adversarial Camouflage for Node Injection Attack on Graphs [64.5888846198005]
Node injection attacks on Graph Neural Networks (GNNs) have received increasing attention recently, due to their ability to degrade GNN performance with high attack success rates.
Our study indicates that these attacks often fail in practical scenarios, since defense/detection methods can easily identify and remove the injected nodes.
To address this, we focus on camouflaging node injection attacks, making injected nodes appear normal and imperceptible to defense/detection methods.
arXiv Detail & Related papers (2022-08-03T02:48:23Z)
- GUARD: Graph Universal Adversarial Defense [54.81496179947696]
We present a simple yet effective method, named Graph Universal Adversarial Defense (GUARD).
GUARD protects each individual node from attacks with a universal defensive patch, which is generated once and can be applied to any node in a graph.
GUARD significantly improves robustness for several established GCNs against multiple adversarial attacks and outperforms state-of-the-art defense methods by large margins.
arXiv Detail & Related papers (2022-04-20T22:18:12Z)
- Deperturbation of Online Social Networks via Bayesian Label Transition [5.037076816350975]
Online social networks (OSNs) classify users into different categories based on their online activities and interests.
A small number of users, so-called perturbators, may perform random activities on an OSN, significantly deteriorating the performance of GCN-based node classification.
We develop a GCN defense model, namely GraphLT, which uses the concept of label transition.
arXiv Detail & Related papers (2020-10-27T08:15:12Z)
- Investigating and Mitigating Degree-Related Biases in Graph Convolutional Networks [62.8504260693664]
Graph Convolutional Networks (GCNs) show promising results for semi-supervised learning tasks on graphs.
In this paper, we analyze GCNs in regard to the node degree distribution.
We develop a novel Self-Supervised Degree-Specific GCN (SL-DSGC) that mitigates the degree biases of GCNs.
arXiv Detail & Related papers (2020-06-28T16:26:47Z)
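As noted in the Mid-GCN entry above, mid-pass filtering keeps mid-band graph
frequencies while attenuating both the lowest and highest ones. Below is a
minimal, hypothetical sketch assuming the polynomial filter h(lam) = lam * (2 - lam)
over the normalized-Laplacian spectrum [0, 2], which vanishes at both
endpoints; this filter choice is illustrative, not necessarily the paper's
exact formulation.

```python
# Hypothetical sketch of mid-pass spectral filtering (not Mid-GCN's exact
# filter): apply h(L) = 2L - L^2, i.e. h(lam) = lam * (2 - lam), which peaks
# at lam = 1 and vanishes at lam = 0 and lam = 2.
import torch

def mid_pass_filter(adj: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    """Filter node features x through 2L - L^2, where L is the
    symmetrically normalized graph Laplacian."""
    deg = adj.sum(1)
    d_inv_sqrt = torch.where(deg > 0, deg.pow(-0.5), torch.zeros_like(deg))
    lap = torch.eye(adj.size(0)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    return (2 * lap - lap @ lap) @ x

# Toy usage on a 4-node path graph with random 3-dimensional features.
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [0., 0., 1., 0.]])
x = torch.randn(4, 3)
print(mid_pass_filter(adj, x))
```

Intuitively, suppressing the spectral extremes damps both the over-smoothed
low-frequency component and the high-frequency component that adversarial edge
perturbations tend to inject.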
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.