Defending SVMs against Poisoning Attacks: the Hardness and DBSCAN
Approach
- URL: http://arxiv.org/abs/2006.07757v5
- Date: Sat, 20 Feb 2021 12:36:59 GMT
- Title: Defending SVMs against Poisoning Attacks: the Hardness and DBSCAN
Approach
- Authors: Hu Ding, Fan Yang, Jiawei Huang
- Abstract summary: Adversarial machine learning has attracted a great amount of attention in recent years.
In this paper, we consider defending SVM against poisoning attacks.
We study two commonly used defense strategies: designing robust SVM algorithms and data sanitization.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial machine learning has attracted a great amount of attention in
recent years. In a poisoning attack, the adversary can inject a small number of
specially crafted samples into the training data which make the decision
boundary severely deviate and cause unexpected misclassification. Due to the
great importance and popular use of support vector machines (SVM), we consider
defending SVM against poisoning attacks in this paper. We study two commonly
used strategies for defending: designing robust SVM algorithms and data
sanitization. Though several robust SVM algorithms have been proposed before,
most of them either lack adversarial resilience or rely on strong assumptions
about the data distribution or the attacker's behavior. Moreover, research on
their computational complexity is still quite limited. To the best of our
knowledge, we are the first to prove that even the simplest hard-margin
one-class SVM with outliers problem is NP-complete and admits no fully
polynomial-time approximation scheme (FPTAS) unless P$=$NP (that is, the
problem is hard even to approximate).
For the data sanitization defense, we link it to the intrinsic dimensionality
of data; in particular, we provide a sampling theorem in doubling metrics for
explaining the effectiveness of DBSCAN (as a density-based outlier removal
method) for defending against poisoning attacks. In our empirical experiments,
we compare several defenses, including the DBSCAN and robust SVM methods, and
investigate how the intrinsic dimensionality and data density influence their
performance.
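To make the data-sanitization defense concrete, here is a minimal sketch (using scikit-learn, with assumed parameter values; it is not the authors' implementation) of DBSCAN used as a density-based outlier filter before SVM training: points falling in low-density regions, where crafted poisoning samples tend to lie, are dropped from the training set.

```python
# Minimal sketch (assumption, not the paper's code): DBSCAN-based data
# sanitization before SVM training. The eps/min_samples values, the linear
# kernel, and the helper name are illustrative choices only.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.svm import SVC

def sanitize_and_fit(X, y, eps=0.5, min_samples=5):
    """Drop points that DBSCAN labels as noise, then fit an SVM on the rest."""
    X, y = np.asarray(X), np.asarray(y)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    keep = labels != -1  # DBSCAN marks low-density (outlier) points with -1
    clf = SVC(kernel="linear", C=1.0)
    clf.fit(X[keep], y[keep])
    return clf, keep
```

In the paper, the effectiveness of such a density-based filter is tied to the intrinsic (doubling) dimension of the data via a sampling theorem; the sketch above only illustrates the pipeline, not that analysis.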
Related papers
- DALA: A Distribution-Aware LoRA-Based Adversarial Attack against
Language Models [64.79319733514266]
Adversarial attacks can introduce subtle perturbations to input data.
Recent attack methods can achieve a relatively high attack success rate (ASR)
We propose a Distribution-Aware LoRA-based Adversarial Attack (DALA) method.
arXiv Detail & Related papers (2023-11-14T23:43:47Z) - On Practical Aspects of Aggregation Defenses against Data Poisoning
Attacks [58.718697580177356]
Attacks on deep learning models with malicious training samples are known as data poisoning.
Recent advances in defense strategies against data poisoning have highlighted the effectiveness of aggregation schemes in achieving certified poisoning robustness.
Here we focus on Deep Partition Aggregation, a representative aggregation defense, and assess its practical aspects, including efficiency, performance, and robustness.
arXiv Detail & Related papers (2023-06-28T17:59:35Z) - Wasserstein distributional robustness of neural networks [9.79503506460041]
Deep neural networks are known to be vulnerable to adversarial attacks (AA)
For an image recognition task, this means that a small perturbation of the original can result in the image being misclassified.
We re-cast the problem using techniques of Wasserstein distributionally robust optimization (DRO) and obtain novel contributions.
arXiv Detail & Related papers (2023-06-16T13:41:24Z) - Evaluating robustness of support vector machines with the Lagrangian
dual approach [6.868150350359336]
We propose a method to improve the verification performance for support vector machines (SVMs) with nonlinear kernels.
We evaluate the adversarial robustness of SVMs with linear and nonlinear kernels on the MNIST and Fashion-MNIST datasets.
The experimental results show that the percentage of provable robustness obtained by our method on the test set is better than that of the state-of-the-art.
arXiv Detail & Related papers (2023-06-05T07:15:54Z) - Can Adversarial Examples Be Parsed to Reveal Victim Model Information? [62.814751479749695]
In this work, we ask whether it is possible to infer data-agnostic victim model (VM) information from data-specific adversarial instances.
We collect a dataset of adversarial attacks across 7 attack types generated from 135 victim models.
We show that a simple, supervised model parsing network (MPN) is able to infer VM attributes from unseen adversarial attacks.
arXiv Detail & Related papers (2023-03-13T21:21:49Z) - Lethal Dose Conjecture on Data Poisoning [122.83280749890078]
Data poisoning considers an adversary that distorts the training set of machine learning algorithms for malicious purposes.
In this work, we bring to light one conjecture regarding the fundamentals of data poisoning, which we call the Lethal Dose Conjecture.
arXiv Detail & Related papers (2022-08-05T17:53:59Z) - Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., single sample attack (SSA) and triggered samples attack (TSA)
arXiv Detail & Related papers (2022-07-25T03:24:58Z) - Fast and Scalable Adversarial Training of Kernel SVM via Doubly
Stochastic Gradients [34.98827928892501]
Adversarial attacks, which generate examples almost indistinguishable from natural ones, pose a serious threat to learning models.
Support vector machine (SVM) is a classical yet still important learning algorithm even in the current deep learning era.
We propose adv-SVM to improve its adversarial robustness via adversarial training, which has been demonstrated to be one of the most promising defense techniques.
arXiv Detail & Related papers (2021-07-21T08:15:32Z) - Defending against Adversarial Denial-of-Service Attacks [0.0]
Data poisoning is one of the most relevant security threats against machine learning and data-driven technologies.
We propose a new approach of detecting DoS poisoned instances.
We evaluate our defence against two DoS poisoning attacks across seven datasets and find that it reliably identifies poisoned instances.
arXiv Detail & Related papers (2021-04-14T09:52:36Z) - Defending Distributed Classifiers Against Data Poisoning Attacks [26.89258745198076]
Support Vector Machines (SVMs) are vulnerable to targeted training data manipulations.
We develop a novel defense algorithm that improves resistance against such attacks.
arXiv Detail & Related papers (2020-08-21T03:11:23Z) - On Adversarial Examples and Stealth Attacks in Artificial Intelligence
Systems [62.997667081978825]
We present a formal framework for assessing and analyzing two classes of malevolent action towards generic Artificial Intelligence (AI) systems.
The first class involves adversarial examples and concerns the introduction of small perturbations of the input data that cause misclassification.
The second class, introduced here for the first time and named stealth attacks, involves small perturbations to the AI system itself.
arXiv Detail & Related papers (2020-04-09T10:56:53Z)