Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation
- URL: http://arxiv.org/abs/2202.02628v1
- Date: Sat, 5 Feb 2022 20:08:58 GMT
- Title: Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation
- Authors: Wenxiao Wang, Alexander Levine, Soheil Feizi
- Abstract summary: We propose an improved certified defense against general poisoning attacks, namely Finite Aggregation.
In contrast to DPA, which directly splits the training set into disjoint subsets, our method first splits the training set into smaller disjoint subsets.
We offer an alternative view of our method, bridging the designs of deterministic and stochastic aggregation-based certified defenses.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data poisoning attacks aim at manipulating model behaviors through distorting
training data. Previously, an aggregation-based certified defense, Deep
Partition Aggregation (DPA), was proposed to mitigate this threat. DPA predicts
through an aggregation of base classifiers trained on disjoint subsets of data,
thus restricting its sensitivity to dataset distortions. In this work, we
propose an improved certified defense against general poisoning attacks, namely
Finite Aggregation. In contrast to DPA, which directly splits the training set
into disjoint subsets, our method first splits the training set into smaller
disjoint subsets and then combines duplicates of them to build larger (but not
disjoint) subsets for training base classifiers. This reduces the worst-case
impacts of poison samples and thus improves certified robustness bounds. In
addition, we offer an alternative view of our method, bridging the designs of
deterministic and stochastic aggregation-based certified defenses. Empirically,
our proposed Finite Aggregation consistently improves certificates on MNIST,
CIFAR-10, and GTSRB, boosting certified fractions by up to 3.05%, 3.87% and
4.77%, respectively, while keeping the same clean accuracies as DPA's,
effectively establishing a new state of the art in (pointwise) certified
robustness against data poisoning.
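To make the construction contrasted in the abstract concrete, here is a minimal Python sketch of the two splitting schemes. The hash-based bucket assignment, the round-robin spread, and all names (fa_partition, fa_predict, k, d) are illustrative assumptions, not the paper's implementation; Finite Aggregation derives its certificates for a specific deterministic spreading of buckets to base models.

```python
from collections import Counter

def fa_partition(samples, k, d, key=hash):
    """Finite Aggregation-style subset construction (illustrative).

    Splits the training set into k*d disjoint buckets with a
    deterministic hash (replace `hash` with a stable hash in
    practice), then trains one base model per bucket index on the
    union of d buckets. With d = 1 this reduces to DPA's disjoint
    k-way partition.
    """
    n_buckets = k * d
    buckets = [[] for _ in range(n_buckets)]
    for s in samples:
        buckets[key(s) % n_buckets].append(s)  # each sample lands in one bucket
    # Illustrative round-robin spread: model i trains on buckets
    # i, i+1, ..., i+d-1 (mod k*d), so every bucket influences
    # exactly d of the k*d base models.
    return [
        [s for j in range(d) for s in buckets[(i + j) % n_buckets]]
        for i in range(n_buckets)
    ]

def fa_predict(models, x):
    """Aggregate the base models' labels for x by plurality vote."""
    votes = Counter(model(x) for model in models)
    return votes.most_common(1)[0][0]
```

With d = 1 this is exactly DPA's disjoint k-way split; with d > 1 each sample still lands in a single bucket, so any one poison can only touch a known set of d base models, which is what permits the tighter worst-case accounting behind the improved certified bounds.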
Related papers
- On Practical Aspects of Aggregation Defenses against Data Poisoning Attacks
Attacks on deep learning models with malicious training samples are known as data poisoning.
Recent advances in defense strategies against data poisoning have highlighted the effectiveness of aggregation schemes in achieving certified poisoning robustness.
Here we focus on Deep Partition Aggregation, a representative aggregation defense, and assess its practical aspects, including efficiency, performance, and robustness.
arXiv Detail & Related papers (2023-06-28T17:59:35Z)
- Lethal Dose Conjecture on Data Poisoning
Data poisoning considers an adversary that distorts the training set of machine learning algorithms for malicious purposes.
In this work, we bring to light one conjecture regarding the fundamentals of data poisoning, which we call the Lethal Dose Conjecture.
arXiv Detail & Related papers (2022-08-05T17:53:59Z)
- COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks
We focus on certifying the robustness of offline reinforcement learning (RL) in the presence of poisoning attacks.
We propose the first certification framework, COPA, to certify the number of poisoning trajectories that can be tolerated.
We prove that some of the proposed certification methods are theoretically tight, while computing others is NP-complete.
arXiv Detail & Related papers (2022-03-16T05:02:47Z)
- A BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers
Data Poisoning (DP) is an effective attack that causes trained classifiers to misclassify their inputs.
We propose a novel mixture model defense against DP attacks.
arXiv Detail & Related papers (2021-05-28T01:06:09Z)
- Mitigating the Impact of Adversarial Attacks in Very Deep Networks
Deep Neural Network (DNN) models have vulnerabilities related to security concerns.
Data-poisoning-enabled perturbation attacks are complex adversarial attacks that inject false data into models.
We propose an attack-agnostic defense method for mitigating their influence.
arXiv Detail & Related papers (2020-12-08T21:25:44Z)
- How Robust are Randomized Smoothing based Defenses to Data Poisoning?
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
- Deep Partition Aggregation: Provable Defense against General Poisoning Attacks
Adversarial poisoning attacks distort training data in order to corrupt the test-time behavior of a classifier.
We propose two novel provable defenses against poisoning attacks.
DPA is a certified defense against a general poisoning threat model.
SS-DPA is a certified defense against label-flipping attacks.
arXiv Detail & Related papers (2020-06-26T03:16:31Z)
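Since DPA (like Finite Aggregation) predicts by plurality vote over base classifiers, and one poisoned sample can change at most one disjoint partition and hence at most one vote, the vote margin at a test point directly yields the pointwise certificate. Below is a minimal sketch of that computation under DPA's tie-breaking rule (smaller class index wins ties); the function name dpa_certificate and the integer-label assumption are illustrative, not taken from either paper.

```python
from collections import Counter

def dpa_certificate(base_labels):
    """Return (prediction, certified_radius) for one test point.

    base_labels holds each base classifier's predicted class index.
    Each poison alters at most one disjoint partition, hence at most
    one vote, and in the worst case moves that vote from the winner
    to a rival class. The prediction survives m poisons against
    rival c iff 2*m <= n_top - n_c - [c < top], where [c < top]
    accounts for ties breaking toward the smaller class index.
    """
    votes = Counter(base_labels)
    # Plurality winner; ties go to the smaller class index.
    top, n_top = min(votes.items(), key=lambda kv: (-kv[1], kv[0]))
    gaps = [n_top - n_c - (1 if c < top else 0)
            for c, n_c in votes.items() if c != top]
    # If no rival received votes, the strongest rival is an unvoted
    # class (which wins ties whenever top > 0).
    gap = min(gaps, default=n_top - (1 if top > 0 else 0))
    return top, gap // 2
```

For example, with 50 base models voting [0]*30 + [1]*20, the prediction 0 carries a certified radius of 5 poisons. For Finite Aggregation the aggregation step is the same; only the accounting of which base models a poisoned bucket can touch becomes finer-grained.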
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.