Topological Effects on Attacks Against Vertex Classification
- URL: http://arxiv.org/abs/2003.05822v1
- Date: Thu, 12 Mar 2020 14:37:57 GMT
- Title: Topological Effects on Attacks Against Vertex Classification
- Authors: Benjamin A. Miller and Mustafa Çamurcu and Alexander J. Gomez and
Kevin Chan and Tina Eliassi-Rad
- Abstract summary: This paper considers two topological characteristics of graphs and explores the way these features affect the amount the adversary must perturb the graph in order to be successful.
We show that, if certain vertices are included in the training set, it is possible to substantially increase an adversary's required perturbation budget.
Even for especially easy targets (those misclassified after just one or two perturbations), performance degrades much more slowly, with the classifier assigning much lower probabilities to the incorrect classes.
- Score: 61.62383779296796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vertex classification is vulnerable to perturbations of both graph topology
and vertex attributes, as shown in recent research. As in other machine
learning domains, concerns about robustness to adversarial manipulation can
prevent potential users from adopting proposed methods when the consequence of
action is very high. This paper considers two topological characteristics of
graphs and explores the way these features affect the amount the adversary must
perturb the graph in order to be successful. We show that, if certain vertices
are included in the training set, it is possible to substantially increase an
adversary's required perturbation budget. On four citation datasets, we
demonstrate that when the training set includes high-degree vertices, or
vertices chosen so that every unlabeled node has a neighbor in the training
set, the adversary's budget against the Nettack poisoning attack often
increases substantially, frequently by a factor of 2 or more, compared to a
randomly selected training set. Even for especially easy targets (those
misclassified after just one or two perturbations), performance degrades much
more slowly, with the classifier assigning much lower probabilities to the
incorrect classes. In addition, we demonstrate that this robustness either
persists when recently proposed defenses are applied, or is competitive with
the resulting performance improvement for the defender.
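As a concrete illustration of the two training-set selection strategies described in the abstract above, the following is a minimal sketch (not the authors' code) using networkx: one split takes the highest-degree vertices, the other takes a dominating set so that every unlabeled vertex has at least one labeled neighbor. The example graph, split size, and function names are illustrative assumptions, not from the paper.

```python
# Minimal sketch (assumed, not the authors' code) of the two training-set
# selection strategies from the abstract: highest-degree vertices, and a
# dominating set so that every unlabeled vertex has a labeled neighbor.
import networkx as nx


def high_degree_training_set(G, n_train):
    """Return the n_train highest-degree vertices as the labeled set."""
    return set(sorted(G.nodes, key=G.degree, reverse=True)[:n_train])


def dominating_training_set(G):
    """Return a dominating set: every vertex outside it has a neighbor inside it."""
    return set(nx.dominating_set(G))


if __name__ == "__main__":
    # Placeholder graph; the paper evaluates on four citation datasets.
    G = nx.karate_club_graph()
    print("High-degree split:", high_degree_training_set(G, 5))
    print("Dominating-set split:", dominating_training_set(G))
```

Per the paper's findings, training on either of these splits, rather than on a random split, tends to force an attack such as Nettack to spend substantially more perturbations to misclassify a target vertex.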
Related papers
- Graph Transductive Defense: a Two-Stage Defense for Graph Membership Inference Attacks [50.19590901147213]
Graph neural networks (GNNs) have become instrumental in diverse real-world applications, offering powerful graph learning capabilities.
GNNs are vulnerable to adversarial attacks, including membership inference attacks (MIA).
This paper proposes an effective two-stage defense, Graph Transductive Defense (GTD), tailored to graph transductive learning characteristics.
arXiv Detail & Related papers (2024-06-12T06:36:37Z) - Multi-granular Adversarial Attacks against Black-box Neural Ranking Models [111.58315434849047]
We create high-quality adversarial examples by incorporating multi-granular perturbations.
We transform the multi-granular attack into a sequential decision-making process.
Our attack method surpasses prevailing baselines in both attack effectiveness and imperceptibility.
arXiv Detail & Related papers (2024-04-02T02:08:29Z) - HC-Ref: Hierarchical Constrained Refinement for Robust Adversarial
Training of GNNs [7.635985143883581]
Adversarial training, which has been shown to be one of the most effective defense mechanisms against adversarial attacks in computer vision, holds great promise for enhancing the robustness of GNNs.
We propose a hierarchical constraint refinement framework (HC-Ref) that enhances the anti-perturbation capabilities of GNNs and downstream classifiers separately.
arXiv Detail & Related papers (2023-12-08T07:32:56Z) - Enhancing Robust Representation in Adversarial Training: Alignment and
Exclusion Criteria [61.048842737581865]
We show that Adversarial Training (AT) fails to learn robust features, resulting in poor adversarial robustness.
We propose a generic AT framework that gains robust representations through asymmetric negative contrast and reverse attention.
Empirical evaluations on three benchmark datasets show our methods greatly advance the robustness of AT and achieve state-of-the-art performance.
arXiv Detail & Related papers (2023-10-05T07:29:29Z) - Towards Reasonable Budget Allocation in Untargeted Graph Structure
Attacks via Gradient Debias [50.628150015907565]
The cross-entropy loss function is used to evaluate perturbation schemes in classification tasks.
Previous methods use negative cross-entropy loss as the attack objective in attacking node-level classification models.
This paper argues that the previous attack objective is unreasonable from the perspective of budget allocation.
arXiv Detail & Related papers (2023-03-29T13:02:02Z) - Let Graph be the Go Board: Gradient-free Node Injection Attack for Graph
Neural Networks via Reinforcement Learning [37.4570186471298]
We study the problem of black-box node injection attack, without training a potentially misleading surrogate model.
By directly querying the victim model, G2A2C learns to inject highly malicious nodes with extremely limited attacking budgets.
We demonstrate the superior performance of our proposed G2A2C over the existing state-of-the-art attackers.
arXiv Detail & Related papers (2022-11-19T19:37:22Z) - What Does the Gradient Tell When Attacking the Graph Structure [44.44204591087092]
We present a theoretical demonstration revealing that attackers tend to increase inter-class edges due to the message passing mechanism of GNNs.
By connecting dissimilar nodes, attackers can more effectively corrupt node features, making such attacks more advantageous.
We propose an innovative attack loss that balances attack effectiveness and imperceptibility, sacrificing some attack effectiveness to attain greater imperceptibility.
arXiv Detail & Related papers (2022-08-26T15:45:20Z) - Detection and Mitigation of Byzantine Attacks in Distributed Training [24.951227624475443]
An abnormal Byzantine behavior of the worker nodes can derail the training and compromise the quality of the inference.
Recent work considers a wide range of attack models and has explored robust aggregation and/or computational redundancy to correct the distorted gradients.
In this work, we consider attack models ranging from strong ones ($q$ omniscient adversaries with full knowledge of the defense protocol, which can change from iteration to iteration) to weak ones ($q$ randomly chosen adversaries with limited collusion abilities).
arXiv Detail & Related papers (2022-08-17T05:49:52Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)