Query-free Black-box Adversarial Attacks on Graphs
- URL: http://arxiv.org/abs/2012.06757v1
- Date: Sat, 12 Dec 2020 08:52:56 GMT
- Title: Query-free Black-box Adversarial Attacks on Graphs
- Authors: Jiarong Xu, Yizhou Sun, Xin Jiang, Yanhao Wang, Yang Yang, Chunping
Wang, Jiangang Lu
- Abstract summary: We propose a query-free black-box adversarial attack on graphs, in which the attacker has no knowledge of the target model and no query access to the model.
We prove that the impact of the flipped links on the target model can be quantified by spectral changes, and thus be approximated using eigenvalue perturbation theory.
Due to its simplicity and scalability, the proposed model is not only generic across various graph-based models, but can also be easily extended when different knowledge levels are accessible.
- Score: 37.88689315688314
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Many graph-based machine learning models are known to be vulnerable to
adversarial attacks, where even limited perturbations on input data can result
in dramatic performance deterioration. Most existing works focus on moderate
settings in which the attacker is either aware of the model structure and
parameters (white-box), or able to send queries to fetch model information. In
this paper, we propose a query-free black-box adversarial attack on graphs, in
which the attacker has no knowledge of the target model and no query access to
the model. With the mere observation of the graph topology, the proposed attack
strategy flips a limited number of links to mislead the graph models. We prove
that the impact of the flipped links on the target model can be quantified by
spectral changes, and thus be approximated using eigenvalue perturbation theory.
Accordingly, we model the proposed attack strategy as an optimization problem and
adopt a greedy algorithm to select the links to be flipped. Due to its simplicity
and scalability, the proposed model is not only generic across various graph-based
models, but can also be easily extended when different knowledge levels are
accessible. Extensive experiments demonstrate the
effectiveness and efficiency of the proposed model on various downstream tasks,
as well as several different graph-based learning models.
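To make the described strategy concrete, here is a minimal, hypothetical sketch of the greedy link-flipping attack driven by first-order eigenvalue perturbation. Under a first-order approximation, flipping an undirected link (u, v) changes each eigenvalue roughly by Δλ_k ≈ x_k^T ΔA x_k = 2 δ x_k[u] x_k[v], where x_k is the k-th eigenvector and δ = ±1 depending on whether the link is added or removed. The scoring rule below (sum of absolute changes over the top-k eigenvalues of the adjacency matrix) and the budgeted greedy loop are illustrative assumptions, not the paper's exact objective.

```python
# Minimal sketch of a query-free, greedy link-flip attack based on
# first-order eigenvalue perturbation. The attacker observes only the
# (undirected, unweighted) adjacency matrix A. The scoring function is
# an illustrative choice, not the paper's exact formulation.
import numpy as np


def spectral_impact(A, eigvecs, u, v):
    """Approximate change of each retained eigenvalue if link (u, v) is flipped.

    First-order perturbation: delta_lambda_k ~= x_k^T (dA) x_k,
    where dA flips entries A[u, v] and A[v, u].
    """
    delta = 1.0 - 2.0 * A[u, v]          # +1 if the link is added, -1 if removed
    return 2.0 * delta * eigvecs[u, :] * eigvecs[v, :]


def greedy_flip_attack(A, budget, k=10):
    """Greedily flip `budget` links that maximise the aggregate spectral change."""
    A = A.astype(float).copy()
    n = A.shape[0]
    k = min(k, n)
    flips = []
    for _ in range(budget):
        # Recompute the top-k spectrum of the current (perturbed) graph.
        _, eigvecs = np.linalg.eigh(A)
        top_vecs = eigvecs[:, -k:]        # eigenvectors of the k largest eigenvalues
        best, best_score = None, -np.inf
        for u in range(n):
            for v in range(u + 1, n):
                score = np.abs(spectral_impact(A, top_vecs, u, v)).sum()
                if score > best_score:
                    best, best_score = (u, v), score
        u, v = best
        A[u, v] = A[v, u] = 1.0 - A[u, v]  # flip the selected link
        flips.append((u, v))
    return flips, A
```

Note that this sketch scans all O(n^2) candidate links at every step purely for clarity; a scalable variant would restrict the candidate set or update the spectrum incrementally, and the quantity being perturbed would be chosen to match the spectrum the downstream graph model actually depends on.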
Related papers
- Data-Agnostic Model Poisoning against Federated Learning: A Graph
Autoencoder Approach [65.2993866461477]
This paper proposes a data-agnostic model poisoning attack on Federated Learning (FL).
The attack requires no knowledge of FL training data and achieves both effectiveness and undetectability.
Experiments show that the FL accuracy drops gradually under the proposed attack and existing defense mechanisms fail to detect it.
arXiv Detail & Related papers (2023-11-30T12:19:10Z) - Minimizing Maximum Model Discrepancy for Transferable Black-box Targeted
Attacks [30.863450425927613]
We study the black-box targeted attack problem from the model discrepancy perspective.
We present a generalization error bound for black-box targeted attacks, which gives a rigorous theoretical analysis for guaranteeing the success of the attack.
We derive a new algorithm for black-box targeted attacks based on our theoretical analysis.
arXiv Detail & Related papers (2022-12-18T08:19:08Z) - Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z) - Adversarial Attacks on Graph Classification via Bayesian Optimisation [25.781404695921122]
We present a novel optimisation-based attack method for graph classification models.
Our method is black-box, query-efficient and parsimonious with respect to the perturbation applied.
We empirically validate the effectiveness and flexibility of the proposed method on a wide range of graph classification tasks.
arXiv Detail & Related papers (2021-11-04T13:01:20Z) - Adversarial Attack Framework on Graph Embedding Models with Limited
Knowledge [126.32842151537217]
Existing works usually perform the attack in a white-box fashion.
We aim to attack various kinds of graph embedding models in a black-box setting.
We prove that GF-Attack can perform an effective attack without knowing the number of layers of graph embedding models.
arXiv Detail & Related papers (2021-05-26T09:18:58Z) - Explain2Attack: Text Adversarial Attacks via Cross-Domain
Interpretability [18.92690624514601]
Research has shown that downstream models can be easily fooled by adversarial inputs that resemble the training data but are slightly perturbed in ways imperceptible to humans.
In this paper, we propose Explain2Attack, a black-box adversarial attack on text classification task.
We show that our framework matches or exceeds the attack success rates of state-of-the-art models, yet with lower query cost and higher efficiency.
arXiv Detail & Related papers (2020-10-14T04:56:41Z) - Reinforcement Learning-based Black-Box Evasion Attacks to Link
Prediction in Dynamic Graphs [87.5882042724041]
Link prediction in dynamic graphs (LPDG) is an important research problem that has diverse applications.
We study the vulnerability of LPDG methods and propose the first practical black-box evasion attack.
arXiv Detail & Related papers (2020-09-01T01:04:49Z) - Orthogonal Deep Models As Defense Against Black-Box Attacks [71.23669614195195]
We study the inherent weakness of deep models in black-box settings where the attacker may develop the attack using a model similar to the targeted model.
We introduce a novel gradient regularization scheme that encourages the internal representation of a deep model to be orthogonal to another.
We verify the effectiveness of our technique on a variety of large-scale models.
arXiv Detail & Related papers (2020-06-26T08:29:05Z)