Robust Mid-Pass Filtering Graph Convolutional Networks
- URL: http://arxiv.org/abs/2302.08048v1
- Date: Thu, 16 Feb 2023 03:07:09 GMT
- Title: Robust Mid-Pass Filtering Graph Convolutional Networks
- Authors: Jincheng Huang and Lun Du and Xu Chen and Qiang Fu and Shi Han and
Dongmei Zhang
- Abstract summary: Graph convolutional networks (GCNs) are currently the most promising paradigm for dealing with graph-structured data.
Recent studies have also shown that GCNs are vulnerable to adversarial attacks.
We propose a simple yet effective Mid-pass filter GCN (Mid-GCN) to overcome these challenges.
- Score: 47.50194731200042
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph convolutional networks (GCNs) are currently the most promising paradigm
for dealing with graph-structured data, yet recent studies have also shown
that GCNs are vulnerable to adversarial attacks. Developing GCN models that
are robust to such attacks has therefore become a hot research topic. However,
existing defense methods based on structural purification learning or
robustness constraints are usually designed for specific data or attacks,
introduce additional objectives that are unrelated to classification, and
require extra training overhead. To address these challenges, we conduct
in-depth explorations of mid-frequency signals on graphs and propose a simple
yet effective Mid-pass filter GCN (Mid-GCN). Theoretical analyses guarantee
the robustness of signals passed through the mid-pass filter, and we also shed
light on the properties of different frequency signals under adversarial
attacks. Extensive experiments on six benchmark graph datasets further verify
the effectiveness of the designed Mid-GCN in node classification accuracy
compared to state-of-the-art GCNs under various adversarial attack strategies.
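The abstract's core idea is a graph filter that emphasizes mid-frequency spectral components rather than the low-frequency ones a standard GCN propagates. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's actual Mid-GCN propagation rule: it uses the spectral response g(λ) = 1 − λ², which vanishes at both extremes of the normalized adjacency spectrum (λ = ±1) and peaks at mid-frequency (λ = 0). The function names and the toy graph are assumptions for illustration only.

```python
# Hypothetical mid-pass filter sketch (NOT the paper's exact Mid-GCN rule):
# the layer applies (I - A_hat @ A_hat) to node features, whose spectral
# response 1 - lambda^2 suppresses both the low-frequency (lambda near 1)
# and high-frequency (lambda near -1) ends of the spectrum.
import numpy as np


def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops:
    A_hat = D^{-1/2} (A + I) D^{-1/2}, eigenvalues in (-1, 1]."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]


def mid_pass_layer(A, X):
    """One illustrative mid-pass propagation step: (I - A_hat^2) X."""
    A_hat = normalized_adjacency(A)
    n = A.shape[0]
    return (np.eye(n) - A_hat @ A_hat) @ X


# Toy 4-node path graph with 2-dimensional node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 2))
H = mid_pass_layer(A, X)
print(H.shape)  # (4, 2)
```

Because the response is zero at λ = 1, the filter discards the smoothest (low-frequency) component that attacks on homophilous graphs typically target, which is the intuition the paper's analysis formalizes.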
Related papers
- Certifying Robustness of Graph Convolutional Networks for Node Perturbation with Polyhedra Abstract Interpretation [3.0560105799516046]
Graph convolutional neural networks (GCNs) are powerful tools for learning graph-based knowledge representations from training data.
GCNs are vulnerable to small perturbations in the input graph, which makes them susceptible to input faults or adversarial attacks.
We propose an improved GCN robustness certification technique for node classification in the presence of node feature perturbations.
arXiv Detail & Related papers (2024-05-14T14:21:55Z) - HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z) - Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous
Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - Understanding Structural Vulnerability in Graph Convolutional Networks [27.602802961213236]
Graph Convolutional Networks (GCNs) are vulnerable to adversarial attacks on the graph structure.
We show that structural adversarial examples can be attributed to the non-robust aggregation scheme of GCNs.
We show that adopting the aggregation scheme with a high breakdown point could significantly enhance the robustness of GCNs against structural attacks.
arXiv Detail & Related papers (2021-08-13T15:07:44Z) - Spatio-Temporal Sparsification for General Robust Graph Convolution
Networks [16.579675313683627]
Graph Neural Networks (GNNs) have attracted increasing attention due to their successful applications to various graph-structured data.
Recent studies have shown that adversarial attacks are threatening the functionality of GNNs.
We propose to defend adversarial attacks on GNN through applying the Spatio-Temporal sparsification (called ST-Sparse) on the GNN hidden node representation.
arXiv Detail & Related papers (2021-03-23T02:03:11Z) - Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning
Attacks [43.60973654460398]
Graph Neural Networks (GNNs) are generalizations of neural networks to graph-structured data.
GNNs are vulnerable to adversarial attacks, i.e., a small perturbation to the structure can lead to a non-trivial performance degradation.
We propose Uncertainty Matching GNN (UM-GNN), that is aimed at improving the robustness of GNN models.
arXiv Detail & Related papers (2020-09-30T05:29:42Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z) - AN-GCN: An Anonymous Graph Convolutional Network Defense Against
Edge-Perturbing Attack [53.06334363586119]
Recent studies have revealed the vulnerability of graph convolutional networks (GCNs) to edge-perturbing attacks.
We first generalize the formulation of edge-perturbing attacks and strictly prove the vulnerability of GCNs to such attacks in node classification tasks.
Following this, an anonymous graph convolutional network, named AN-GCN, is proposed to counter edge-perturbing attacks.
arXiv Detail & Related papers (2020-05-06T08:15:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.