PAC Confidence Predictions for Deep Neural Network Classifiers
- URL: http://arxiv.org/abs/2011.00716v5
- Date: Wed, 17 Mar 2021 19:51:37 GMT
- Title: PAC Confidence Predictions for Deep Neural Network Classifiers
- Authors: Sangdon Park, Shuo Li, Insup Lee, Osbert Bastani
- Abstract summary: A key challenge for deploying deep neural networks (DNNs) in safety-critical settings is the need to provide rigorous ways to quantify their uncertainty.
We propose an algorithm for constructing predicted classification confidences for DNNs that comes with provable correctness guarantees.
- Score: 28.61937254015157
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A key challenge for deploying deep neural networks (DNNs) in safety critical
settings is the need to provide rigorous ways to quantify their uncertainty. In
this paper, we propose a novel algorithm for constructing predicted
classification confidences for DNNs that comes with provable correctness
guarantees. Our approach uses Clopper-Pearson confidence intervals for the
Binomial distribution in conjunction with the histogram binning approach to
calibrated prediction. In addition, we demonstrate how our predicted
confidences can be used to enable downstream guarantees in two settings: (i)
fast DNN inference, where we demonstrate how to compose a fast but inaccurate
DNN with an accurate but slow DNN in a rigorous way to improve performance
without sacrificing accuracy, and (ii) safe planning, where we guarantee safety
when using a DNN to predict whether a given action is safe based on visual
observations. In our experiments, we demonstrate that our approach can be used
to provide guarantees for state-of-the-art DNNs.
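
A minimal sketch of the general recipe the abstract describes: histogram binning whose per-bin accuracy is bounded with Clopper-Pearson intervals, then a fast/slow cascade driven by the resulting confidences. The bin layout, the per-bin split of the failure probability delta, the use of a one-sided lower bound, and the `fast_model`/`slow_model` callables are illustrative assumptions, not the authors' exact construction.

```python
import numpy as np
from scipy.stats import beta


def clopper_pearson_lower(k, n, delta):
    """One-sided (1 - delta) Clopper-Pearson lower bound on a binomial rate."""
    if n == 0 or k == 0:
        return 0.0
    return float(beta.ppf(delta, k, n - k + 1))


def fit_binned_confidences(scores, correct, n_bins=10, delta=0.01):
    """Per-bin lower confidence bounds on accuracy from a held-out calibration set.

    scores  : top-1 softmax scores in [0, 1] from the DNN
    correct : 1 where the DNN's top-1 prediction matched the label, else 0
    """
    scores, correct = np.asarray(scores), np.asarray(correct)
    bin_idx = np.minimum((scores * n_bins).astype(int), n_bins - 1)
    delta_bin = delta / n_bins  # union bound: split delta across bins
    bounds = np.zeros(n_bins)
    for b in range(n_bins):
        mask = bin_idx == b
        bounds[b] = clopper_pearson_lower(int(correct[mask].sum()),
                                          int(mask.sum()), delta_bin)
    return bounds


def predict_confidence(score, bounds):
    """Map a new top-1 score to its bin's lower bound on accuracy."""
    b = min(int(score * len(bounds)), len(bounds) - 1)
    return bounds[b]


def cascade_predict(x, fast_model, slow_model, bounds, threshold=0.99):
    """Fast/slow composition: trust the fast DNN only when its calibrated
    confidence clears the threshold (fast_model / slow_model are hypothetical
    callables returning softmax vectors)."""
    probs = fast_model(x)
    label, score = int(np.argmax(probs)), float(np.max(probs))
    if predict_confidence(score, bounds) >= threshold:
        return label
    return int(np.argmax(slow_model(x)))
```

The idea is that, with high probability over the calibration set, each bin's bound does not overestimate that bin's true accuracy, so a cascade that routes only high-confidence inputs to the fast model can speed up inference without giving up accuracy on the rest.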
Related papers
- Data-Driven Lipschitz Continuity: A Cost-Effective Approach to Improve Adversarial Robustness [47.9744734181236]
We explore the concept of Lipschitz continuity to certify the robustness of deep neural networks (DNNs) against adversarial attacks.
We propose a novel algorithm that remaps the input domain into a constrained range, reducing the Lipschitz constant and potentially enhancing robustness.
Our method achieves the best robust accuracy for CIFAR10, CIFAR100, and ImageNet datasets on the RobustBench leaderboard.
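
For context, a sketch of the generic composition bound this line of work builds on (not the authors' specific remapping algorithm): a network's Lipschitz constant is at most the product of its layers' spectral norms, and an affine remap of the inputs with slope c scales that end-to-end bound by c.

```python
import numpy as np


def lipschitz_upper_bound(weight_matrices):
    """Product of spectral norms: an upper bound on the Lipschitz constant of an
    MLP with 1-Lipschitz activations (e.g. ReLU) between the linear layers."""
    return float(np.prod([np.linalg.norm(w, 2) for w in weight_matrices]))


def remapped_bound(weight_matrices, in_min, in_max, target_width=1.0):
    """Bound after linearly remapping inputs from [in_min, in_max] into a range
    of width target_width; the remap itself is c-Lipschitz."""
    c = target_width / (in_max - in_min)
    return c * lipschitz_upper_bound(weight_matrices)
```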
arXiv Detail & Related papers (2024-06-28T03:10:36Z)
- Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees [86.1362094580439]
We introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate all regions of the property's input domain that are safe.
Due to the #P-hardness of the problem, we propose an efficient approximation method called epsilon-ProVe.
Our approach exploits a controllable underestimation of the output reachable sets obtained via statistical prediction of tolerance limits.
arXiv Detail & Related papers (2023-08-18T22:30:35Z)
- Uncertainty Quantification over Graph with Conformalized Graph Neural Networks [52.20904874696597]
Graph Neural Networks (GNNs) are powerful machine learning prediction models on graph-structured data.
GNNs lack rigorous uncertainty estimates, limiting their reliable deployment in settings where the cost of errors is significant.
We propose conformalized GNN (CF-GNN), extending conformal prediction (CP) to graph-based models for guaranteed uncertainty estimates.
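
For context, a minimal split conformal prediction sketch for a generic classifier (the graph-specific machinery of CF-GNN is not reproduced here); `cal_probs` are assumed to be softmax outputs on a held-out calibration set.

```python
import numpy as np


def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split CP: quantile of nonconformity scores s_i = 1 - p(y_i | x_i)."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)  # finite-sample correction
    return np.quantile(scores, q_level, method="higher")


def prediction_set(test_probs, qhat):
    """Labels kept in the set; it covers the true label with prob. >= 1 - alpha (marginally)."""
    return np.where(1.0 - test_probs <= qhat)[0]
```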
arXiv Detail & Related papers (2023-05-23T21:38:23Z)
- Online Black-Box Confidence Estimation of Deep Neural Networks [0.0]
We introduce the neighborhood confidence (NHC), which estimates the confidence of an arbitrary classification DNN.
The metric can be used for black-box systems, since only the top-1 class output is required and no access to gradients is needed.
Evaluation on different data distributions, including small in-domain distribution shifts, out-of-domain data, and adversarial attacks, shows that the NHC performs better than or on par with a comparable method for online white-box confidence estimation.
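
One plausible reading of such a label-only, black-box score (the exact NHC definition may differ) is the agreement rate of top-1 predictions over small random perturbations of the input:

```python
import numpy as np


def neighborhood_confidence(x, top1_fn, n_samples=50, sigma=0.05, seed=None):
    """Fraction of perturbed inputs whose top-1 class matches that of x.

    top1_fn is a black-box callable returning only the top-1 class index
    (hypothetical interface; no softmax scores or gradients are needed).
    """
    rng = np.random.default_rng(seed)
    base = top1_fn(x)
    agree = sum(top1_fn(x + sigma * rng.standard_normal(x.shape)) == base
                for _ in range(n_samples))
    return agree / n_samples
```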
arXiv Detail & Related papers (2023-02-27T08:30:46Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Uncertainty-Aware Deep Calibrated Salient Object Detection [74.58153220370527]
Existing deep neural network-based salient object detection (SOD) methods mainly focus on pursuing high network accuracy.
These methods overlook the gap between network accuracy and prediction confidence, known as the confidence uncalibration problem.
We introduce an uncertainty-aware deep SOD network and propose two strategies to prevent deep SOD networks from being overconfident.
arXiv Detail & Related papers (2020-12-10T23:28:36Z)
- Increasing Trustworthiness of Deep Neural Networks via Accuracy Monitoring [20.456742449675904]
Inference accuracy of deep neural networks (DNNs) is a crucial performance metric, but in practice it can vary greatly depending on the actual test dataset.
This has raised significant concerns with trustworthiness of DNNs, especially in safety-critical applications.
We propose a neural network-based accuracy monitor model, which only takes the deployed DNN's softmax probability output as its input.
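
A toy version of such a monitor, assuming scikit-learn and using only the deployed DNN's (sorted) softmax vector as features; the sorting and the small MLP architecture are illustrative choices, not taken from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier


def fit_accuracy_monitor(softmax_outputs, was_correct):
    """Train a small MLP to predict whether the deployed DNN's prediction was correct.

    softmax_outputs : (n, num_classes) softmax vectors from the deployed DNN
    was_correct     : (n,) binary labels, 1 if the DNN's top-1 prediction was right
    """
    feats = -np.sort(-np.asarray(softmax_outputs), axis=1)  # class-order-invariant features
    monitor = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)
    monitor.fit(feats, np.asarray(was_correct))
    return monitor


def estimate_batch_accuracy(monitor, softmax_outputs):
    """Monitor's estimate of the DNN's accuracy on a new, unlabeled batch."""
    feats = -np.sort(-np.asarray(softmax_outputs), axis=1)
    return float(monitor.predict_proba(feats)[:, 1].mean())
```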
arXiv Detail & Related papers (2020-07-03T03:09:36Z)
- Interval Neural Networks: Uncertainty Scores [11.74565957328407]
We propose a fast, non-Bayesian method for producing uncertainty scores in the output of pre-trained deep neural networks (DNNs).
This interval neural network (INN) has interval valued parameters and propagates its input using interval arithmetic.
In numerical experiments on an image reconstruction task, we demonstrate the practical utility of INNs as a proxy for the prediction error.
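
A minimal sketch of the interval-arithmetic forward pass behind this idea, for one fully connected layer with interval-valued weights and biases; the width-of-interval uncertainty readout is an assumption in the spirit of the summary.

```python
import numpy as np


def interval_linear(x, w_lo, w_hi, b_lo, b_hi):
    """Propagate a point input x through a layer whose weights/biases are intervals.

    Each output y_i = sum_j W_ij * x_j + b_i; its extremes over W_ij in
    [w_lo_ij, w_hi_ij] depend on the sign of x_j.
    """
    x_pos, x_neg = np.maximum(x, 0.0), np.minimum(x, 0.0)
    y_lo = w_lo @ x_pos + w_hi @ x_neg + b_lo
    y_hi = w_hi @ x_pos + w_lo @ x_neg + b_hi
    return y_lo, y_hi


def interval_relu(y_lo, y_hi):
    """ReLU is monotone, so it acts on the interval endpoints directly."""
    return np.maximum(y_lo, 0.0), np.maximum(y_hi, 0.0)


# The per-output interval width (y_hi - y_lo) can serve as the uncertainty score.
```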
arXiv Detail & Related papers (2020-03-25T18:03:51Z)