Reducing Certified Regression to Certified Classification
- URL: http://arxiv.org/abs/2208.13904v1
- Date: Mon, 29 Aug 2022 21:52:41 GMT
- Title: Reducing Certified Regression to Certified Classification
- Authors: Zayd Hammoudeh, Daniel Lowd
- Abstract summary: This work investigates certified regression defenses.
They provide guaranteed limits on how much a regressor's prediction may change under a training-set attack.
We propose six new provably-robust regressors.
- Score: 11.663072799764542
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial training instances can severely distort a model's behavior. This
work investigates certified regression defenses, which provide guaranteed
limits on how much a regressor's prediction may change under a training-set
attack. Our key insight is that certified regression reduces to certified classification when using the median as a model's primary decision function. Coupling our reduction with existing certified classifiers, we propose six new provably-robust regressors. To the best of our knowledge, this is the first
work that certifies the robustness of individual regression predictions without
any assumptions about the data distribution and model architecture. We also
show that existing state-of-the-art certified classifiers often make
overly-pessimistic assumptions that can degrade their provable guarantees. We
introduce a tighter analysis of model robustness, which in many cases results
in significantly improved certified guarantees. Lastly, we empirically
demonstrate our approaches' effectiveness on both regression and classification
data, where the accuracy of up to 50% of test predictions can be guaranteed
under 1% training-set corruption and up to 30% of predictions under 4%
corruption. Our source code is available at
https://github.com/ZaydH/certified-regression.
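The reduction in the abstract is easy to sketch. Below is a minimal illustration of one standard way to instantiate it, not the authors' implementation (see the linked repository for that): each submodel trains on a disjoint slice of the data, so an attacker who corrupts r training instances can move at most r submodel outputs, and the median prediction is pinned between the order statistics r positions to either side. Asking whether the median clears a threshold is then a majority vote over binary classifiers, which is the claimed reduction. The Ridge base model and partition count are arbitrary demo choices.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_partition_ensemble(X, y, n_part=51, seed=0):
    # Disjoint partitions: corrupting r training points touches <= r submodels.
    idx = np.random.default_rng(seed).permutation(len(X))
    return [Ridge().fit(X[p], y[p]) for p in np.array_split(idx, n_part)]

def certified_median(models, x, r):
    # Median prediction plus guaranteed bounds under <= r corruptions:
    # flipping r submodel outputs shifts the median by at most r order
    # statistics, so it stays within [lo, hi].
    preds = np.sort([m.predict(x[None, :])[0] for m in models])
    k = len(preds)
    return preds[k // 2], preds[max(k // 2 - r, 0)], preds[min(k // 2 + r, k - 1)]

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=2000)
med, lo, hi = certified_median(fit_partition_ensemble(X, y), X[0], r=3)
print(f"median={med:.3f}, certified range under 3 corruptions: [{lo:.3f}, {hi:.3f}]")
```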
Related papers
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) that requires no prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- Outlier detection by ensembling uncertainty with negative objectness [0.0]
Outlier detection is an essential capability in safety-critical applications of supervised visual recognition.
We reconsider direct prediction of K+1 logits that correspond to K ground-truth classes and one outlier class.
We embed our method into a dense prediction architecture with mask-level recognition over K+2 classes.
arXiv Detail & Related papers (2024-02-23T15:19:37Z)
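A hedged sketch of the K+1-logit idea follows; it is a generic illustration only. The mask-level, K+2-class dense-prediction architecture and the negative-objectness ensembling are not reproduced, and the feature dimension is a placeholder.

```python
import torch
import torch.nn as nn

K = 10  # number of ground-truth classes (assumed)
head = nn.Linear(512, K + 1)  # feature dimension 512 is a placeholder

def outlier_score(features: torch.Tensor) -> torch.Tensor:
    # Posterior of the extra (K+1)-th class serves as the outlier score.
    probs = head(features).softmax(dim=-1)
    return probs[..., K]

feats = torch.randn(4, 512)
print(outlier_score(feats))
```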
- Domain-adaptive and Subgroup-specific Cascaded Temperature Regression for Out-of-distribution Calibration [16.930766717110053]
We propose a novel meta-set-based cascaded temperature regression method for post-hoc calibration.
We partition each meta-set into subgroups based on predicted category and confidence level, capturing diverse uncertainties.
A regression network is then trained to derive category-specific and confidence-level-specific scaling, achieving calibration across meta-sets.
arXiv Detail & Related papers (2024-02-14T14:35:57Z)
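As a rough stand-in for the cascaded regression network, the sketch below fits one temperature per (predicted class, confidence bin) subgroup by grid search on validation NLL. The subgrouping mirrors the description above; the regression network itself is elided.

```python
import numpy as np

def nll(logits, labels, T):
    # Average negative log-likelihood after scaling logits by 1/T.
    z = logits / T
    z -= z.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    return grid[np.argmin([nll(logits, labels, T) for T in grid])]

def subgroup_temperatures(logits, labels, n_bins=3):
    # One temperature per (predicted class, confidence bin) subgroup.
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    pred, conf = probs.argmax(1), probs.max(1)
    bins = np.minimum((conf * n_bins).astype(int), n_bins - 1)
    temps = {}
    for c in np.unique(pred):
        for b in range(n_bins):
            mask = (pred == c) & (bins == b)
            if mask.sum() >= 20:  # need enough samples for a stable fit
                temps[(c, b)] = fit_temperature(logits[mask], labels[mask])
    return temps

rng = np.random.default_rng(0)
logits = rng.normal(size=(2000, 4)) * 3.0
labels = (logits + rng.normal(size=logits.shape) * 2.0).argmax(axis=1)  # toy labels
print(subgroup_temperatures(logits, labels))
```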
- It's Simplex! Disaggregating Measures to Improve Certified Robustness [32.63920797751968]
This work presents two approaches to improve the analysis of certification mechanisms.
New certification approaches have the potential to more than double the achievable radius of certification.
Empirical evaluation verifies that our new approach can certify $9\%$ more samples at noise scale $\sigma = 1$.
arXiv Detail & Related papers (2023-09-20T02:16:19Z)
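The gap being closed is visible in the standard Gaussian-smoothing certificate: assuming all non-top probability mass sits on a single runner-up class is precisely the pessimistic aggregation, while keeping the mass disaggregated over the simplex yields a larger radius. A worked toy comparison with assumed probabilities (Cohen-style radii, not the paper's exact mechanism):

```python
from statistics import NormalDist

Phi_inv = NormalDist().inv_cdf
sigma = 1.0
p_top = 0.80        # lower bound on the top-class probability (assumed)
p_runner_up = 0.05  # measured runner-up mass in the disaggregated view (assumed)

# Pessimistic: the entire remaining mass 1 - p_top sits on one runner-up class.
r_pessimistic = (sigma / 2) * (Phi_inv(p_top) - Phi_inv(1 - p_top))
# Tighter: use the actual runner-up probability.
r_tight = (sigma / 2) * (Phi_inv(p_top) - Phi_inv(p_runner_up))
print(f"pessimistic radius {r_pessimistic:.3f} vs disaggregated {r_tight:.3f}")
```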
- Calibrated Selective Classification [34.08454890436067]
We develop a new approach to selective classification in which we propose a method for rejecting examples with "uncertain" uncertainties.
We present a framework for learning selectively calibrated models, where a separate selector network is trained to improve the selective calibration error of a given base model.
We demonstrate the empirical effectiveness of our approach on multiple image classification and lung cancer risk assessment tasks.
arXiv Detail & Related papers (2022-08-25T13:31:09Z)
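A minimal sketch of the quantity at stake, selective calibration error: binned ECE computed only over the examples a selector accepts. The selector network and its training objective are elided; `accept` stands for whatever boolean mask a trained selector produces.

```python
import numpy as np

def selective_ece(conf, correct, accept, n_bins=10):
    # Expected calibration error restricted to the accepted subset.
    conf, correct = conf[accept], correct[accept]
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (conf > lo) & (conf <= hi)
        if m.any():
            ece += m.mean() * abs(conf[m].mean() - correct[m].mean())
    return ece

rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, 10000)
correct = rng.random(10000) < conf * 0.9  # toy miscalibrated model
accept = conf < 0.95                      # toy selector mask
print(selective_ece(conf, correct, np.ones_like(accept, bool)),
      selective_ece(conf, correct, accept))
```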
- Getting a-Round Guarantees: Floating-Point Attacks on Certified Robustness [19.380453459873298]
Adversarial examples pose a security risk as they can alter decisions of a machine learning classifier through slight input perturbations.
We show that these guarantees can be invalidated due to limitations of floating-point representation that cause rounding errors.
We show that the attack can be carried out against linear classifiers that have exact certifiable guarantees and against neural networks that have conservative certifications.
arXiv Detail & Related papers (2022-05-20T13:07:36Z)
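The failure mode is easy to exhibit in miniature. A linear classifier's exact L2 certificate guarantees the margin keeps its sign strictly inside the certified radius, yet the float32-evaluated margin near the boundary can disagree with higher-precision arithmetic; whether the sign actually flips depends on the instance, and the paper's attack searches for such flips systematically. A toy probe:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=256).astype(np.float32)
x = rng.normal(size=256).astype(np.float32)
b = np.float32(0.1)

margin = float(w @ x + b)
radius = abs(margin) / float(np.linalg.norm(w))  # exact certified L2 radius
step = -np.sign(margin) * (radius * 0.9999999)   # step to just inside the boundary
delta = (w / np.linalg.norm(w) * step).astype(np.float32)

m32 = float(w @ (x + delta) + b)  # margin as float32 hardware would see it
m64 = float(w.astype(np.float64) @ (x + delta).astype(np.float64) + float(b))
# In exact arithmetic the sign must match the original margin; the float32
# value can round past zero, voiding the certificate.
print(f"sign must match {np.sign(margin)}: float32 {m32:.3e}, float64 {m64:.3e}")
```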
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence, predicting target accuracy as the fraction of unlabeled examples whose confidence exceeds the threshold.
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
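A minimal sketch of the ATC rule, using maximum softmax confidence as the score (the paper also considers other score functions):

```python
import numpy as np

def atc_threshold(src_conf, src_correct):
    # Pick t so that the fraction of source points with conf > t
    # matches the measured source accuracy.
    acc = src_correct.mean()
    return np.quantile(src_conf, 1.0 - acc)

def atc_predict(threshold, tgt_conf):
    # Predicted target accuracy: fraction of unlabeled target
    # examples whose confidence clears the learned threshold.
    return (tgt_conf > threshold).mean()

rng = np.random.default_rng(0)
src_conf = rng.uniform(size=5000)
src_correct = rng.random(5000) < src_conf  # toy calibrated source model
t = atc_threshold(src_conf, src_correct)
print(t, atc_predict(t, rng.uniform(0.0, 0.9, 5000)))
```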
- Regression Bugs Are In Your Model! Measuring, Reducing and Analyzing Regressions In NLP Model Updates [68.09049111171862]
This work focuses on quantifying, reducing and analyzing regression errors in NLP model updates.
We formulate the regression-free model updates into a constrained optimization problem.
We empirically analyze how model ensembling reduces regressions.
arXiv Detail & Related papers (2021-05-07T03:33:00Z)
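A minimal sketch of the core metric, the negative-flip (regression) rate, together with the probability-averaging ensemble mentioned above; the constrained-optimization formulation is not reproduced:

```python
import numpy as np

def negative_flip_rate(y, old_pred, new_pred):
    # Fraction of examples the old model got right but the update breaks.
    return np.mean((old_pred == y) & (new_pred != y))

def ensemble_pred(old_probs, new_probs):
    # Averaging old and new probabilities tends to lower the flip rate.
    return (0.5 * old_probs + 0.5 * new_probs).argmax(axis=1)

y = np.array([0, 1, 1, 0])
old = np.array([0, 1, 0, 0])
new = np.array([1, 1, 1, 0])
print(negative_flip_rate(y, old, new))  # 0.25: the first example regressed
```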
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
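The entropy-raising step can be sketched as a KL penalty that pulls predictions toward the label prior on samples flagged as unjustifiably overconfident; how those regions of feature space are found is the paper's contribution and is elided here:

```python
import torch
import torch.nn.functional as F

def prior_kl_penalty(logits: torch.Tensor, prior: torch.Tensor) -> torch.Tensor:
    # KL(prior || p_model), averaged over the batch; minimizing it raises
    # the predictions' entropy toward that of the label prior.
    logp = F.log_softmax(logits, dim=-1)
    cross_ent = -(prior * logp).sum(dim=-1)   # H(prior, p_model) per sample
    prior_ent = -(prior * prior.log()).sum()  # H(prior), a constant
    return (cross_ent - prior_ent).mean()

# Toy use: uniform prior over 10 classes on a batch of flagged samples.
print(prior_kl_penalty(torch.randn(8, 10), torch.full((10,), 0.1)))
```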
- Certifying Confidence via Randomized Smoothing [151.67113334248464]
Randomized smoothing has been shown to provide good certified-robustness guarantees for high-dimensional classification problems.
Most smoothing methods do not give us any information about the confidence with which the underlying classifier makes a prediction.
We propose a method to generate certified radii for the prediction confidence of the smoothed classifier.
arXiv Detail & Related papers (2020-09-17T04:37:26Z)
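The machinery underneath such certificates, sketched with a Hoeffding rather than an exact binomial lower bound: estimate the smoothed top-class probability by Monte Carlo, lower-bound it, and convert the bound to a radius. Certifying the confidence value itself, the paper's contribution, is beyond this sketch:

```python
import numpy as np
from statistics import NormalDist

def certify(f, x, sigma=0.5, n=1000, alpha=0.001, num_classes=10, seed=0):
    rng = np.random.default_rng(seed)
    votes = np.bincount(
        [f(x + rng.normal(scale=sigma, size=x.shape)) for _ in range(n)],
        minlength=num_classes)
    top = int(votes.argmax())
    # Hoeffding lower confidence bound on the top-class probability.
    p_lo = votes[top] / n - np.sqrt(np.log(1 / alpha) / (2 * n))
    if p_lo <= 0.5:
        return top, 0.0  # abstain: no nontrivial certificate
    return top, sigma * NormalDist().inv_cdf(p_lo)

f = lambda z: int(z.sum() > 0)  # toy binary classifier
print(certify(f, np.full(8, 2.0), num_classes=2))
```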
- Detection as Regression: Certified Object Detection by Median Smoothing [50.89591634725045]
This work is motivated by recent progress on certified classification by randomized smoothing.
We obtain the first model-agnostic, training-free, and certified defense for object detection against $\ell_2$-bounded attacks.
arXiv Detail & Related papers (2020-07-07T18:40:19Z)
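Median smoothing reduces to percentile bounds, which connects back to the main paper's median-based reduction above. A Monte-Carlo sketch for a scalar regressor, omitting the finite-sample order-statistic correction a real certificate needs:

```python
import numpy as np
from statistics import NormalDist

def median_smoothing_bounds(f, x, sigma=0.25, eps=0.25, n=2000, seed=0):
    # Smoothed median of f under Gaussian noise; for any L2 perturbation of
    # size eps, the median is bounded by the Phi(-eps/sigma) and
    # Phi(eps/sigma) quantiles of the noisy outputs.
    rng = np.random.default_rng(seed)
    samples = np.sort([f(x + rng.normal(scale=sigma, size=x.shape))
                       for _ in range(n)])
    q_lo = NormalDist().cdf(-eps / sigma)
    q_hi = NormalDist().cdf(+eps / sigma)
    return (np.quantile(samples, 0.5),
            np.quantile(samples, q_lo),
            np.quantile(samples, q_hi))

f = lambda z: float(z.sum())  # toy scalar regressor
print(median_smoothing_bounds(f, np.zeros(4)))
```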
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.