Surrogate Representation Learning with Isometric Mapping for Gray-box
Graph Adversarial Attacks
- URL: http://arxiv.org/abs/2110.10482v1
- Date: Wed, 20 Oct 2021 10:47:34 GMT
- Title: Surrogate Representation Learning with Isometric Mapping for Gray-box
Graph Adversarial Attacks
- Authors: Zihan Liu, Yun Luo, Zelin Zang, Stan Z. Li
- Abstract summary: Gray-box graph attacks aim at disrupting the performance of the victim model by using attacks with limited knowledge of the victim model.
To obtain the gradient on the node attributes or graph structure, the attacker constructs an imaginary surrogate model trained under supervision.
This paper investigates the effect of representation learning of surrogate models on the transferability of gray-box graph adversarial attacks.
- Score: 27.317964031440546
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gray-box graph attacks aim at disrupting the performance of the victim model
by using inconspicuous attacks with limited knowledge of the victim model. The
parameters of the victim model and the labels of the test nodes are invisible
to the attacker. To obtain the gradient on the node attributes or graph
structure, the attacker constructs an imaginary surrogate model trained under
supervision. However, the training of surrogate models and the robustness of the
gradient information they provide have received little discussion. A general node
classification model loses the topology of the nodes on the graph, which is, in
fact, an exploitable prior for the attacker. This paper investigates the effect of
the surrogate model's representation learning on the transferability of gray-box
graph adversarial attacks. To preserve the topology in the surrogate embedding, we
propose Surrogate Representation Learning with Isometric Mapping (SRLIM). By using
an isometric mapping method, SRLIM constrains the topological structure of nodes
from the input layer to the embedding space, that is, it maintains the similarity
of nodes throughout propagation. Experiments demonstrate the effectiveness of our
approach through the improved performance of adversarial attacks generated by
gradient-based attackers in untargeted poisoning gray-box setups.
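The mechanics implied by the abstract can be illustrated with a small sketch. The
snippet below is a minimal, illustrative PyTorch version, not the authors'
implementation: a dense two-layer GCN surrogate is trained on the visible labels
with an extra distance-preserving ("isometric") term that pushes pairwise distances
in the hidden embedding toward the pairwise distances of the input attributes. The
names (DenseGCN, isometric_loss, train_surrogate) and the weight alpha are
assumptions, and plain Euclidean distances stand in where SRLIM presumably relies on
geodesic (isomap-style) distances. A second sketch after the related-papers list
shows how the surrogate's structure gradients can then be turned into edge
perturbations.

```python
# Illustrative sketch only: a surrogate GCN trained with a distance-preserving
# regularizer in the spirit of SRLIM. Dense adjacency, Euclidean distances.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseGCN(nn.Module):
    """Two-layer GCN on a dense adjacency matrix (the surrogate model)."""

    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, n_classes)

    @staticmethod
    def normalize(adj):
        # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
        a = adj + torch.eye(adj.size(0), device=adj.device)
        d = a.sum(dim=1).clamp(min=1e-12).pow(-0.5)
        return d.unsqueeze(1) * a * d.unsqueeze(0)

    def forward(self, x, adj):
        a = self.normalize(adj)
        h = F.relu(self.lin1(a @ x))   # node embedding used by the isometric term
        return self.lin2(a @ h), h


def isometric_loss(x, h):
    # Encourage pairwise distances in the embedding to match pairwise distances
    # of the input attributes (both rescaled to be scale-free). Euclidean
    # distances are an assumed stand-in for isomap-style geodesic distances.
    dx = torch.cdist(x, x)
    dh = torch.cdist(h, h)
    dx = dx / (dx.mean() + 1e-12)
    dh = dh / (dh.mean() + 1e-12)
    return F.mse_loss(dh, dx)


def train_surrogate(x, adj, labels, train_mask, alpha=0.1, epochs=200):
    """Fit the surrogate with cross-entropy plus the isometric regularizer."""
    model = DenseGCN(x.size(1), 16, int(labels.max()) + 1)
    opt = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
    for _ in range(epochs):
        opt.zero_grad()
        logits, h = model(x, adj)
        loss = (F.cross_entropy(logits[train_mask], labels[train_mask])
                + alpha * isometric_loss(x, h))
        loss.backward()
        opt.step()
    return model
```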
Related papers
- Towards Reasonable Budget Allocation in Untargeted Graph Structure
Attacks via Gradient Debias [50.628150015907565]
The cross-entropy loss function is used to evaluate perturbation schemes in classification tasks.
Previous methods use negative cross-entropy loss as the attack objective in attacking node-level classification models.
This paper argues about the previous unreasonable attack objective from the perspective of budget allocation.
arXiv Detail & Related papers (2023-03-29T13:02:02Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, Graph Injection Attack.
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- What Does the Gradient Tell When Attacking the Graph Structure [44.44204591087092]
We present a theoretical demonstration revealing that attackers tend to increase inter-class edges due to the message passing mechanism of GNNs.
By connecting dissimilar nodes, attackers can more effectively corrupt node features, making such attacks more advantageous.
We propose an innovative attack loss that balances attack effectiveness and imperceptibility, sacrificing some attack effectiveness to attain greater imperceptibility.
arXiv Detail & Related papers (2022-08-26T15:45:20Z)
- Are Gradients on Graph Structure Reliable in Gray-box Attacks? [56.346504691615934]
Previous gray-box attackers employ gradients from the surrogate model to locate the vulnerable edges to perturb the graph structure (see the sketch after this list).
In this paper, we discuss and analyze the errors caused by the unreliability of the structural gradients.
We propose a novel attack model with methods to reduce the errors inside the structural gradients.
arXiv Detail & Related papers (2022-08-07T06:43:32Z)
- Adversarial Attacks on Graph Classification via Bayesian Optimisation [25.781404695921122]
We present a novel optimisation-based attack method for graph classification models.
Our method is black-box, query-efficient and parsimonious with respect to the perturbation applied.
We empirically validate the effectiveness and flexibility of the proposed method on a wide range of graph classification tasks.
arXiv Detail & Related papers (2021-11-04T13:01:20Z)
- Query-based Adversarial Attacks on Graph with Fake Nodes [32.67989796394633]
We propose a novel adversarial attack by introducing a set of fake nodes to the original graph.
Specifically, we query the victim model for each victim node to acquire their most adversarial feature.
Our attack is performed in a practical and unnoticeable manner.
arXiv Detail & Related papers (2021-09-27T14:19:17Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- BinarizedAttack: Structural Poisoning Attacks to Graph-based Anomaly Detection [20.666171188140503]
Graph-based Anomaly Detection (GAD) is becoming prevalent due to the powerful representation abilities of graphs.
These GAD tools expose a new attack surface, ironically due to their unique advantage of being able to exploit the relations among data.
In this paper, we exploit this vulnerability by designing a new type of targeted structural poisoning attack against a representative regression-based GAD system, OddBall.
arXiv Detail & Related papers (2021-06-18T08:20:23Z)
- Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability [100.91186458516941]
We consider the blackbox transfer-based targeted adversarial attack threat model in the realm of deep neural network (DNN) image classifiers.
We design a flexible attack framework that allows for multi-layer perturbations and demonstrates state-of-the-art targeted transfer performance.
We analyze why the proposed methods outperform existing attack strategies and show an extension of the method in the case when limited queries to the blackbox model are allowed.
arXiv Detail & Related papers (2020-04-29T16:00:13Z)
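Several of the entries above ("What Does the Gradient Tell When Attacking the Graph
Structure", "Are Gradients on Graph Structure Reliable in Gray-box Attacks?") revolve
around the same step: reading the surrogate's gradient with respect to the adjacency
matrix and flipping the edges it flags. A minimal sketch of that greedy edge-selection
loop follows; it assumes the DenseGCN surrogate from the earlier sketch, uses a plain
cross-entropy attack objective, and illustrates the common pattern rather than any one
paper's exact method.

```python
# Illustrative sketch only: greedy edge flips guided by the surrogate's
# structure gradient (untargeted poisoning). `model` is assumed to be the
# DenseGCN surrogate from the earlier sketch, returning (logits, embedding).
import torch
import torch.nn.functional as F


def attack_structure(model, x, adj, labels, train_mask, budget=10):
    adj = adj.clone()
    for _ in range(budget):
        a = adj.detach().requires_grad_(True)
        logits, _ = model(x, a)
        # Untargeted objective: increase the surrogate's classification loss.
        loss = F.cross_entropy(logits[train_mask], labels[train_mask])
        grad = torch.autograd.grad(loss, a)[0]
        grad = grad + grad.t()                  # keep the graph undirected
        # Adding an absent edge helps when the gradient is positive; removing
        # an existing edge helps when the gradient is negative.
        score = grad * (1 - 2 * adj)
        score.fill_diagonal_(float("-inf"))     # never flip self-loops
        i, j = divmod(int(score.argmax()), adj.size(0))
        adj[i, j] = adj[j, i] = 1 - adj[i, j]   # apply the best single flip
    return adj
```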