Are Gradients on Graph Structure Reliable in Gray-box Attacks?
- URL: http://arxiv.org/abs/2208.05514v1
- Date: Sun, 7 Aug 2022 06:43:32 GMT
- Title: Are Gradients on Graph Structure Reliable in Gray-box Attacks?
- Authors: Zihan Liu, Yun Luo, Lirong Wu, Siyuan Li, Zicheng Liu, Stan Z. Li
- Abstract summary: Previous gray-box attackers employ gradients from the surrogate model to locate the vulnerable edges to perturb the graph structure.
In this paper, we discuss and analyze the errors caused by the unreliability of the structural gradients.
We propose a novel attack model with methods to reduce the errors inside the structural gradients.
- Score: 56.346504691615934
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph edge perturbations are dedicated to damaging the prediction of graph
neural networks by modifying the graph structure. Previous gray-box attackers
employ gradients from the surrogate model to locate the vulnerable edges to
perturb the graph structure. However, gradients on graph structures can be
unreliable, which has rarely been studied by previous works. In this paper, we
discuss and analyze the errors caused by the unreliability of the structural
gradients. These errors arise from rough gradient usage due to the discreteness
of the graph structure and from the unreliability in the meta-gradient on the
graph structure. In order to address these problems, we propose a novel attack
model with methods to reduce the errors inside the structural gradients. We
propose edge discrete sampling to select the edge perturbations associated with
hierarchical candidate selection to ensure computational efficiency. In
addition, semantic invariance and momentum gradient ensemble are proposed to
address the gradient fluctuation on semantic-augmented graphs and the
instability of the surrogate model. Experiments conducted in untargeted
gray-box poisoning scenarios demonstrate the improved performance of our
approach.
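The pipeline the abstract describes, scoring candidate edge flips with gradients from a surrogate model and then choosing perturbations by discrete sampling rather than greedy top-k selection, can be sketched in toy form. The code below is an illustrative assumption, not the authors' implementation: the function name `sample_edge_flips`, the synthetic gradient matrix, and the first-order flip score `grad * (1 - 2A)` are standard approximations chosen here for illustration.

```python
import numpy as np

def sample_edge_flips(adj, grad, k, rng=None):
    """Toy gradient-guided edge perturbation (illustrative, not the paper's code).

    `grad` is d(loss)/d(A) from a surrogate model. Flipping edge (i, j)
    changes A[i, j] by (1 - 2*A[i, j]), so a first-order score for the
    flip is grad * (1 - 2*A). Instead of greedily taking the top-k scores
    (the "rough" gradient usage the abstract criticises), flips are drawn
    in proportion to their positive scores, a stand-in for the paper's
    edge discrete sampling.
    """
    rng = np.random.default_rng(rng)
    n = adj.shape[0]
    score = grad * (1.0 - 2.0 * adj)     # first-order gain of flipping each entry
    iu = np.triu_indices(n, k=1)         # undirected graph: upper triangle only
    s = np.clip(score[iu], 0.0, None)    # keep only flips predicted to help
    if s.sum() == 0:
        return []
    p = s / s.sum()
    k = min(k, int((p > 0).sum()))
    idx = rng.choice(len(p), size=k, replace=False, p=p)
    return [(iu[0][t], iu[1][t]) for t in idx]

# Tiny example: a 4-node path graph and a synthetic (made-up) gradient.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
G = np.array([[ 0.0, -0.9, 0.8,  0.1],
              [-0.9,  0.0, 0.2, -0.3],
              [ 0.8,  0.2, 0.0,  0.5],
              [ 0.1, -0.3, 0.5,  0.0]])
flips = sample_edge_flips(A, G, k=2, rng=0)
```

Here only the flips (0, 1), (0, 2), and (0, 3) receive positive scores, so the two sampled perturbations are drawn from those three candidates; averaging gradients over semantic-augmented graphs and across training epochs (the abstract's semantic invariance and momentum gradient ensemble) would slot in before the scoring step.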
Related papers
- Gradformer: Graph Transformer with Exponential Decay [69.50738015412189]
Self-attention mechanism in Graph Transformers (GTs) overlooks the graph's inductive biases, particularly biases related to structure.
This paper presents Gradformer, a method innovatively integrating GT with the intrinsic inductive bias.
Gradformer consistently outperforms the Graph Neural Network and GT baseline models in various graph classification and regression tasks.
arXiv Detail & Related papers (2024-04-24T08:37:13Z)
- ADEdgeDrop: Adversarial Edge Dropping for Robust Graph Neural Networks [53.41164429486268]
Graph Neural Networks (GNNs) have exhibited the powerful ability to gather graph-structured information from neighborhood nodes.
The performance of GNNs is limited by poor generalization and fragile robustness caused by noisy and redundant graph data.
We propose a novel adversarial edge-dropping method (ADEdgeDrop) that leverages an adversarial edge predictor guiding the removal of edges.
arXiv Detail & Related papers (2024-03-14T08:31:39Z)
- How to guess a gradient [68.98681202222664]
We show that gradients are more structured than previously thought.
Exploiting this structure can significantly improve gradient-free optimization schemes.
We highlight new challenges in overcoming the large gap between optimizing with exact gradients and guessing the gradients.
arXiv Detail & Related papers (2023-12-07T21:40:44Z)
- Addressing Heterophily in Node Classification with Graph Echo State Networks [11.52174067809364]
We address the challenges of heterophilic graphs with Graph Echo State Network (GESN) for node classification.
GESN is a reservoir computing model for graphs, where node embeddings are computed by an untrained message-passing function.
Our experiments show that reservoir models are able to achieve better or comparable accuracy with respect to most fully trained deep models.
arXiv Detail & Related papers (2023-05-14T19:42:31Z)
- Gradient scarcity with Bilevel Optimization for Graph Learning [0.0]
Gradient scarcity occurs when, in learning a graph by minimizing a loss on a subset of nodes, edges between unlabelled nodes far from the labelled ones receive zero gradients.
We give a precise mathematical characterization of this phenomenon, and prove it also emerges in bilevel optimization.
To alleviate this issue, we study several solutions: we propose to resort to latent graph learning using a Graph-to-Graph model (G2G), graph regularization to impose a prior structure on the graph, or optimizing on a larger graph than the original one with a reduced diameter.
arXiv Detail & Related papers (2023-03-24T12:37:43Z)
- Deep Manifold Learning with Graph Mining [80.84145791017968]
We propose a novel graph deep model with a non-gradient decision layer for graph mining.
The proposed model has achieved state-of-the-art performance compared to the current models.
arXiv Detail & Related papers (2022-07-18T04:34:08Z)
- SoftEdge: Regularizing Graph Classification with Random Soft Edges [18.165965620873745]
Graph data augmentation plays a vital role in regularizing Graph Neural Networks (GNNs).
Simple edge and node manipulations can create graphs whose structures are identical or indistinguishable to message-passing GNNs but which carry conflicting labels.
We propose SoftEdge, which assigns random weights to a portion of the edges of a given graph to construct dynamic neighborhoods over the graph.
arXiv Detail & Related papers (2022-04-21T20:12:36Z)
- Surrogate Representation Learning with Isometric Mapping for Gray-box Graph Adversarial Attacks [27.317964031440546]
Gray-box graph attacks aim at disrupting the performance of the victim model by using attacks with limited knowledge of the victim model.
To obtain the gradient on the node attributes or graph structure, the attacker constructs an imaginary surrogate model trained under supervision.
This paper investigates the effect of representation learning of surrogate models on the transferability of gray-box graph adversarial attacks.
arXiv Detail & Related papers (2021-10-20T10:47:34Z)
- Multilayer Graph Clustering with Optimized Node Embedding [70.1053472751897]
Multilayer graph clustering aims at dividing the graph nodes into categories or communities.
We propose a clustering-friendly embedding of the layers of a given multilayer graph.
Experiments show that our method leads to a significant improvement.
arXiv Detail & Related papers (2021-03-30T17:36:40Z)
- Certified Robustness of Graph Classification against Topology Attack with Randomized Smoothing [22.16111584447466]
Graph-based machine learning models are vulnerable to adversarial perturbations due to the non-i.i.d. nature of graph data.
We build a smoothed graph classification model with certified robustness guarantee.
We also evaluate the effectiveness of our approach under graph convolutional network (GCN) based multi-class graph classification model.
arXiv Detail & Related papers (2020-09-12T22:18:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.