Robust Collective Classification against Structural Attacks
- URL: http://arxiv.org/abs/2007.13073v1
- Date: Sun, 26 Jul 2020 07:42:45 GMT
- Title: Robust Collective Classification against Structural Attacks
- Authors: Kai Zhou and Yevgeniy Vorobeychik
- Abstract summary: Collective learning methods exploit relations among data points to enhance classification performance.
We study adversarial robustness of an important class of such graphical models, Associative Markov Networks (AMN), to structural attacks.
We show that robust AMN is much more robust than state-of-the-art deep learning methods, while sacrificing little in accuracy on non-adversarial data.
- Score: 37.630164983830184
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Collective learning methods exploit relations among data points to enhance
classification performance. However, such relations, represented as edges in
the underlying graphical model, expose an extra attack surface to the
adversaries. We study adversarial robustness of an important class of such
graphical models, Associative Markov Networks (AMN), to structural attacks,
where an attacker can modify the graph structure at test time. We formulate the
task of learning a robust AMN classifier as a bi-level program, where the inner
problem is a challenging non-linear integer program that computes optimal
structural changes to the AMN. To address this technical challenge, we first
relax the attacker problem, and then use duality to obtain a convex quadratic
upper bound for the robust AMN problem. We then prove a bound on the quality of
the resulting approximately optimal solutions, and experimentally demonstrate
the efficacy of our approach. Finally, we apply our approach in a transductive
learning setting, and show that robust AMN is much more robust than
state-of-the-art deep learning methods, while sacrificing little in accuracy on
non-adversarial data.
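To make the relax-and-dualize step in the abstract concrete, below is a minimal sketch in Python (numpy + cvxpy) of the same idea applied to a much simpler model than an AMN: a linear hinge-loss classifier on binary features, where an attacker may flip at most B feature bits per example. The inner attacker problem is an integer program; relaxing it to an LP and taking its dual folds the min-max into a single convex minimization over the weights and the dual variables. All names, the toy data, the budget B, and the regularizer are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
import cvxpy as cp

# Toy setting (not the paper's AMN): linear hinge-loss classifier on binary
# features; the attacker may flip at most B feature bits of each example.
rng = np.random.default_rng(0)
n, d, B = 40, 10, 2
X = rng.integers(0, 2, size=(n, d)).astype(float)    # binary feature matrix
y = np.sign(X @ rng.standard_normal(d) + 0.1)
y[y == 0] = 1.0                                       # labels in {-1, +1}

w = cp.Variable(d)                     # classifier weights
b = cp.Variable()                      # bias
lam = cp.Variable(n, nonneg=True)      # dual variable of each example's flip budget

# Flipping bit j of example i lowers the margin y_i * (w^T x_i + b) by
# c_ij = y_i * w_j * (2*x_ij - 1); the attacker picks at most B flips with the
# largest total gain. That inner problem, relaxed from {0,1} to [0,1], is an LP
# whose dual gives, for any lam_i >= 0, the convex upper bound
#   max-gain_i <= B * lam_i + sum_j max(c_ij - lam_i, 0).
S = np.diag(y) @ (2.0 * X - 1.0)                      # constant part of c_ij
C = S @ cp.diag(w)                                    # affine in w: C[i, j] = c_ij
Lam = cp.reshape(lam, (n, 1), order="F") @ np.ones((1, d))   # tile lam_i across columns
attack_bound = B * lam + cp.sum(cp.pos(C - Lam), axis=1)

# Robust hinge loss: worst-case margin loss bounded via the dualized attacker.
margins = cp.multiply(y, X @ w + b)
robust_hinge = cp.pos(1 - margins + attack_bound)

problem = cp.Problem(cp.Minimize(cp.sum(robust_hinge) / n + 0.1 * cp.sum_squares(w)))
problem.solve()
print("robust surrogate loss:", problem.value)
```

By weak duality the bound holds for every choice of lam >= 0, so jointly minimizing over (w, b, lam) yields a convex (here quadratic, because of the squared-norm regularizer) upper bound on the worst-case loss, mirroring the structure of the robust AMN objective described above.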
Related papers
- Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks [50.87615167799367]
We certify Graph Neural Networks (GNNs) against poisoning and backdoor attacks targeting the node features of a given graph.
Our framework provides fundamental insights into the role of graph structure and its connectivity on the worst-case behavior of convolution-based and PageRank-based GNNs.
arXiv Detail & Related papers (2024-07-15T16:12:51Z) - Efficient Adversarial Training in LLMs with Continuous Attacks [99.5882845458567]
Large language models (LLMs) are vulnerable to adversarial attacks that can bypass their safety guardrails.
We propose a fast adversarial training algorithm (C-AdvUL) composed of two losses.
C-AdvIPO is an adversarial variant of IPO that does not require utility data for adversarially robust alignment.
arXiv Detail & Related papers (2024-05-24T14:20:09Z) - Robust optimization for adversarial learning with finite sample complexity guarantees [1.8434042562191815]
In this paper we focus on linear and nonlinear classification problems and propose a novel adversarial training method for robust classifiers.
We view robustness under a data driven lens, and derive finite sample complexity bounds for both linear and non-linear classifiers in binary and multi-class scenarios.
Our algorithm minimizes a worst-case surrogate loss using Linear Programming (LP) and Second Order Cone Programming (SOCP) for linear and non-linear models.
arXiv Detail & Related papers (2024-03-22T13:49:53Z) - Robustness Analysis on Foundational Segmentation Models [28.01242494123917]
In this work, we perform a robustness analysis of Visual Foundation Models (VFMs) for segmentation tasks.
We benchmark seven state-of-the-art segmentation architectures on two different datasets.
Our findings reveal several key insights: VFMs are vulnerable to compression-induced corruptions; multimodal models, while not surpassing all unimodal models in robustness, show competitive resilience in zero-shot scenarios; and VFMs demonstrate enhanced robustness for certain object categories.
arXiv Detail & Related papers (2023-06-15T16:59:42Z) - On the Convergence and Robustness of Adversarial Training [134.25999006326916]
Adversarial training with Projected Gradient Descent (PGD) is among the most effective defenses.
We propose a dynamic training strategy to increase the convergence quality of the generated adversarial examples.
Our theoretical and empirical results show the effectiveness of the proposed method.
arXiv Detail & Related papers (2021-12-15T17:54:08Z) - Unveiling the potential of Graph Neural Networks for robust Intrusion Detection [2.21481607673149]
We propose a novel Graph Neural Network (GNN) model to learn flow patterns of attacks structured as graphs.
Our model maintains the same level of accuracy as in previous experiments, whereas state-of-the-art ML techniques lose up to 50% of their accuracy (F1-score) under adversarial attacks.
arXiv Detail & Related papers (2021-07-30T16:56:39Z) - Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attributes-space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z) - Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z)