Robustly-reliable learners under poisoning attacks
- URL: http://arxiv.org/abs/2203.04160v1
- Date: Tue, 8 Mar 2022 15:43:33 GMT
- Title: Robustly-reliable learners under poisoning attacks
- Authors: Maria-Florina Balcan, Avrim Blum, Steve Hanneke, Dravyansh Sharma
- Abstract summary: We show how to achieve strong robustness guarantees in the face of such attacks across multiple axes.
We provide robustly-reliable predictions, in which the predicted label is guaranteed to be correct so long as the adversary has not exceeded a given corruption budget.
Remarkably, we provide a complete characterization of learnability in this setting, in particular, nearly-tight matching upper and lower bounds on the region that can be certified.
- Score: 38.55373038919402
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data poisoning attacks, in which an adversary corrupts a training set with
the goal of inducing specific desired mistakes, have raised substantial
concern: even just the possibility of such an attack can make a user no longer
trust the results of a learning system. In this work, we show how to achieve
strong robustness guarantees in the face of such attacks across multiple axes.
We provide robustly-reliable predictions, in which the predicted label is
guaranteed to be correct so long as the adversary has not exceeded a given
corruption budget, even in the presence of instance-targeted attacks, where the
adversary knows the test example in advance and aims to cause a specific
failure on that example. Our guarantees are substantially stronger than those of
prior approaches, which could only certify that the prediction of the learning
algorithm does not change, rather than certifying, as we do here, that the
prediction is correct.
Remarkably, we provide a complete characterization of learnability in this
setting, in particular, nearly-tight matching upper and lower bounds on the
region that can be certified, as well as efficient algorithms for computing
this region given an ERM oracle. Moreover, for the case of linear separators
over log-concave distributions, we provide efficient, truly polynomial-time
algorithms (i.e., non-oracle algorithms) for such robustly-reliable
predictions.
We also extend these results to the active setting, where the algorithm
adaptively asks for labels of specific informative examples and the adversary
might even adapt to this interaction, and to the agnostic setting, where there
is no perfect classifier even over the uncorrupted data.
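To make the guarantee concrete, below is a minimal, brute-force sketch of a robustly-reliable prediction rule over a finite hypothesis class. This is an illustrative construction under simplifying assumptions (finite class, label-flipping corruptions, realizable data), not the ERM-oracle-based algorithm from the paper; the function and parameter names are hypothetical.

```python
# Illustrative sketch: return a label for x only if every hypothesis that
# disagrees with at most `budget` training labels predicts that same label.
# If the true target is in the class and the adversary flipped at most
# `budget` labels, any returned label is guaranteed correct.

def robustly_reliable_predict(hypotheses, S, x, budget):
    """hypotheses: iterable of callables h(point) -> label
    S: list of (point, label) pairs, possibly corrupted
    x: test point; budget: assumed bound on the number of corrupted labels."""
    # Keep every hypothesis that could still be the true target under the budget.
    candidates = [h for h in hypotheses
                  if sum(h(xi) != yi for xi, yi in S) <= budget]
    labels = {h(x) for h in candidates}
    if len(labels) == 1:
        return labels.pop()   # certified: correct whenever the budget assumption holds
    return None               # abstain: x lies outside the certifiable region
```

A returned label carries the implicit certificate that it is correct as long as the adversary changed at most `budget` labels; abstentions mark the part of the instance space that cannot be certified at that budget.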
Related papers
- Adversarial Attacks Against Uncertainty Quantification [10.655660123083607]
This work focuses on a different adversarial scenario, in which the attacker aims to manipulate the model's uncertainty estimate.
In particular, the goal is to undermine the use of machine-learning models when their outputs are consumed by a downstream module or by a human operator.
arXiv Detail & Related papers (2023-09-19T12:54:09Z)
- The Adversarial Implications of Variable-Time Inference [47.44631666803983]
We present an approach that exploits a novel side channel in which the adversary simply measures the execution time of the algorithm used to post-process the predictions of the ML model under attack.
We investigate leakage from the non-maximum suppression (NMS) algorithm, which plays a crucial role in the operation of object detectors.
We demonstrate attacks against the YOLOv3 detector, leveraging the timing leakage to successfully evade object detection using adversarial examples, and perform dataset inference.
arXiv Detail & Related papers (2023-09-05T11:53:17Z)
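As a rough illustration of the timing leak described in the entry above: greedy NMS runtime depends on how many candidate boxes the model emits and how they overlap, so an adversary who can only time the post-processing step still learns something about the raw detections. The toy NMS below is a minimal sketch with hypothetical names, not the YOLOv3 pipeline or the paper's attack; it only shows that the measured time scales with the number of candidates.

```python
# Toy demonstration that NMS execution time leaks information about the
# number of candidate boxes, i.e., a timing side channel on post-processing.
import random
import time

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def greedy_nms(boxes, scores, iou_threshold=0.5):
    """Standard greedy non-maximum suppression; runtime grows with len(boxes)."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_threshold]
    return keep

def random_box(rng):
    x, y = rng.random(), rng.random()
    return (x, y, x + 0.05, y + 0.05)

rng = random.Random(0)
for n in (10, 100, 1000):
    boxes = [random_box(rng) for _ in range(n)]
    scores = [rng.random() for _ in range(n)]
    t0 = time.perf_counter()
    greedy_nms(boxes, scores)
    print(f"{n:5d} candidate boxes -> NMS time {time.perf_counter() - t0:.6f}s")
```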
- Learning-based Hybrid Local Search for the Hard-label Textual Attack [53.92227690452377]
We consider a rarely investigated but more rigorous setting, the hard-label attack, in which the attacker can only access the predicted label.
We propose a novel hard-label attack, the Learning-based Hybrid Local Search (LHLS) algorithm.
Our LHLS significantly outperforms existing hard-label attacks in terms of both attack performance and adversary quality.
arXiv Detail & Related papers (2022-01-20T14:16:07Z)
- Fair Classification with Adversarial Perturbations [35.030329189029246]
We study fair classification in the presence of an omniscient adversary that, given an $\eta$, is allowed to choose an arbitrary $\eta$-fraction of the training samples and arbitrarily perturb their protected attributes.
Our main contribution is an optimization framework to learn fair classifiers in this adversarial setting that comes with provable guarantees on accuracy and fairness.
We prove near-tightness of our framework's guarantees for natural hypothesis classes: no algorithm can have significantly better accuracy and any algorithm with better fairness must have lower accuracy.
arXiv Detail & Related papers (2021-06-10T17:56:59Z)
- Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety-critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
arXiv Detail & Related papers (2021-05-28T09:23:05Z)
- Learning and Certification under Instance-targeted Poisoning [49.55596073963654]
We study PAC learnability and certification under instance-targeted poisoning attacks.
We show that when the budget of the adversary scales sublinearly with the sample complexity, PAC learnability and certification are achievable.
We empirically study the robustness of k-nearest neighbors, logistic regression, multi-layer perceptrons, and convolutional neural networks on real data sets.
arXiv Detail & Related papers (2021-05-18T17:48:15Z)
- Domain Knowledge Alleviates Adversarial Attacks in Multi-Label Classifiers [34.526394646264734]
Adversarial attacks on machine learning-based classifiers, along with defense mechanisms, have been widely studied.
In this paper, we shift the attention to multi-label classification, where the availability of domain knowledge may offer a natural way to spot incoherent predictions.
We explore this intuition in a framework in which first-order logic knowledge is converted into constraints and injected into a semi-supervised learning problem.
arXiv Detail & Related papers (2020-06-06T10:24:54Z)
- Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
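As a rough sketch of the randomized-smoothing idea in the last entry: retrain the base learner on many copies of the training set whose labels are independently flipped with some probability, and predict by majority vote. The Monte-Carlo version below is a simplification with hypothetical helper names and binary labels assumed; the vote margin is only a heuristic indicator of tolerance to label flips, not the pointwise certificate derived in that paper.

```python
# Monte-Carlo sketch of randomized smoothing against label-flipping poisoning:
# majority vote over base classifiers trained on label-noised copies of the
# training set. Assumes binary labels in {0, 1} and a user-supplied
# train_fn(X, y) returning an object with a sklearn-style .predict() method.
import random
from collections import Counter

def smoothed_predict(train_fn, X, y, x_test, flip_prob=0.2, n_rounds=200, seed=0):
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_rounds):
        # Flip each training label independently with probability flip_prob.
        noisy_y = [yi if rng.random() > flip_prob else 1 - yi for yi in y]
        clf = train_fn(X, noisy_y)
        votes[clf.predict([x_test])[0]] += 1
    (label, count), = votes.most_common(1)
    margin = (2 * count - n_rounds) / n_rounds   # heuristic robustness indicator
    return label, margin
```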