Defending Distributed Classifiers Against Data Poisoning Attacks
- URL: http://arxiv.org/abs/2008.09284v1
- Date: Fri, 21 Aug 2020 03:11:23 GMT
- Title: Defending Distributed Classifiers Against Data Poisoning Attacks
- Authors: Sandamal Weerasinghe, Tansu Alpcan, Sarah M. Erfani, Christopher
Leckie
- Abstract summary: Support Vector Machines (SVMs) are vulnerable to targeted training data manipulations.
We develop a novel defense algorithm that improves resistance against such attacks.
- Score: 26.89258745198076
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Support Vector Machines (SVMs) are vulnerable to targeted training data
manipulations such as poisoning attacks and label flips. By carefully
manipulating a subset of training samples, the attacker forces the learner to
compute an incorrect decision boundary, thereby causing misclassifications.
Considering the increased importance of SVMs in engineering and life-critical
applications, we develop a novel defense algorithm that improves resistance
against such attacks. Local Intrinsic Dimensionality (LID) is a promising
metric that characterizes the outlierness of data samples. In this work, we
introduce a new approximation of LID called K-LID that uses kernel distance in
the LID calculation, which allows LID to be calculated in high dimensional
transformed spaces. We introduce a weighted SVM against such attacks using
K-LID as a distinguishing characteristic that de-emphasizes the effect of
suspicious data samples on the SVM decision boundary. Each sample is weighted
according to how likely its K-LID value is to come from the benign K-LID
distribution rather than
the attacked K-LID distribution. We then demonstrate how the proposed defense
can be applied to a distributed SVM framework through a case study on an
SDR-based surveillance system. Experiments with benchmark data sets show that
the proposed defense reduces classification error rates substantially (10% on
average).
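Illustrative sketch (not the authors' code): the snippet below shows one way the K-LID weighting described above could be realised in Python with scikit-learn, assuming an RBF kernel, the standard maximum-likelihood LID estimator applied to kernel-induced distances, kernel density estimates for the benign and attacked K-LID distributions, and SVC's sample_weight support for the weighted SVM. The function names, parameter values, and the particular weight normalisation are illustrative assumptions rather than the paper's exact procedure.

import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.neighbors import KernelDensity
from sklearn.svm import SVC

def k_lid(X, k=20, gamma=0.1):
    # K-LID sketch: maximum-likelihood LID estimate computed from
    # kernel-induced distances rather than Euclidean distances.
    K = rbf_kernel(X, gamma=gamma)                  # k(x_i, x_j)
    # Kernel distance: d(x, y)^2 = k(x, x) + k(y, y) - 2*k(x, y);
    # the RBF diagonal is 1, so d^2 = 2 - 2*k(x, y).
    D = np.sqrt(np.maximum(2.0 - 2.0 * K, 0.0))
    np.fill_diagonal(D, np.inf)                     # exclude self-distances
    lids = np.empty(len(X))
    for i, row in enumerate(D):
        r = np.maximum(np.sort(row)[:k], 1e-12)     # k nearest kernel distances
        # Standard MLE estimator: LID = -((1/k) * sum_i log(r_i / r_k))^(-1)
        lids[i] = -1.0 / min(np.mean(np.log(r / r[-1])), -1e-12)
    return lids

def likelihood_ratio_weights(lids, benign_lids, attacked_lids, bandwidth=0.5):
    # Weight each sample by how likely its K-LID value is under a benign
    # K-LID distribution relative to an attacked one; both densities are
    # modelled here with kernel density estimates (an assumption of this sketch).
    s = lids.reshape(-1, 1)
    p_benign = np.exp(KernelDensity(bandwidth=bandwidth)
                      .fit(benign_lids.reshape(-1, 1)).score_samples(s))
    p_attack = np.exp(KernelDensity(bandwidth=bandwidth)
                      .fit(attacked_lids.reshape(-1, 1)).score_samples(s))
    return p_benign / (p_benign + p_attack + 1e-12)  # suspicious samples get weights near 0

# Usage sketch: X_train, y_train is the possibly poisoned training set;
# benign_ref and attacked_ref are K-LID values computed from data assumed
# clean and from simulated attacks, respectively (both hypothetical inputs).
# lids = k_lid(X_train)
# w = likelihood_ratio_weights(lids, benign_ref, attacked_ref)
# clf = SVC(kernel="rbf", gamma=0.1).fit(X_train, y_train, sample_weight=w)

In this sketch, samples whose K-LID values are better explained by the attacked distribution receive weights near zero and therefore have little influence on the fitted decision boundary.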
Related papers
- Leveraging MTD to Mitigate Poisoning Attacks in Decentralized FL with Non-IID Data [9.715501137911552]
This paper proposes a framework that employs the Moving Target Defense (MTD) approach to bolster the robustness of DFL models.
By continuously modifying the attack surface of the DFL system, this framework aims to mitigate poisoning attacks effectively.
arXiv Detail & Related papers (2024-09-28T10:09:37Z)
- DALA: A Distribution-Aware LoRA-Based Adversarial Attack against Language Models [64.79319733514266]
Adversarial attacks can introduce subtle perturbations to input data.
Recent attack methods can achieve a relatively high attack success rate (ASR).
We propose a Distribution-Aware LoRA-based Adversarial Attack (DALA) method.
arXiv Detail & Related papers (2023-11-14T23:43:47Z)
- Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks [72.03945355787776]
We advocate MDP, a lightweight, pluggable, and effective defense for PLMs as few-shot learners.
We show analytically that MDP creates an interesting dilemma for the attacker to choose between attack effectiveness and detection evasiveness.
arXiv Detail & Related papers (2023-09-23T04:41:55Z)
- Exploring Model Dynamics for Accumulative Poisoning Discovery [62.08553134316483]
We propose a novel information measure, namely Memorization Discrepancy, to explore defenses via model-level information.
By implicitly transferring the changes in the data manipulation to that in the model outputs, Memorization Discrepancy can discover the imperceptible poison samples.
We thoroughly explore its properties and propose Discrepancy-aware Sample Correction (DSC) to defend against accumulative poisoning attacks.
arXiv Detail & Related papers (2023-06-06T14:45:24Z)
- Improving Adversarial Robustness to Sensitivity and Invariance Attacks with Deep Metric Learning [80.21709045433096]
A standard method in adversarial robustness assumes a framework for defending against samples crafted by minimally perturbing a clean sample.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariant perturbations in our framework improves both invariant and sensitivity defense.
arXiv Detail & Related papers (2022-11-04T13:54:02Z)
- Local Intrinsic Dimensionality Signals Adversarial Perturbations [28.328973408891834]
Local intrinsic dimensionality (LID) is a local metric that describes the minimum number of latent variables required to describe each data point.
In this paper, we derive a lower bound and an upper bound for the LID value of a perturbed data point and demonstrate that the bounds, in particular the lower bound, have a positive correlation with the magnitude of the perturbation.
arXiv Detail & Related papers (2021-09-24T08:29:50Z)
- Learning and Certification under Instance-targeted Poisoning [49.55596073963654]
We study PAC learnability and certification under instance-targeted poisoning attacks.
We show that when the budget of the adversary scales sublinearly with the sample complexity, PAC learnability and certification are achievable.
We empirically study the robustness of K nearest neighbour, logistic regression, multi-layer perceptron, and convolutional neural network on real data sets.
arXiv Detail & Related papers (2021-05-18T17:48:15Z)
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
- Defending Regression Learners Against Poisoning Attacks [25.06658793731661]
We introduce a novel Local Intrinsic Dimensionality (LID) based measure called N-LID that measures the local deviation of a given data point's LID with respect to its neighbors.
N-LID can distinguish poisoned samples from normal samples, and we propose an N-LID based defense approach that makes no assumptions about the attacker.
We show that the proposed defense mechanism outperforms state-of-the-art defenses in terms of prediction accuracy (up to 76% lower MSE compared to an undefended ridge model) and running time.
arXiv Detail & Related papers (2020-08-21T03:02:58Z)
- Defending SVMs against Poisoning Attacks: the Hardness and DBSCAN Approach [27.503734504441365]
Adversarial machine learning has attracted a great amount of attention in recent years.
In this paper, we consider defending SVM against poisoning attacks.
We study two commonly used strategies for defending: designing robust SVM algorithms and data sanitization.
arXiv Detail & Related papers (2020-06-14T01:19:38Z)