Increasing Trustworthiness of Deep Neural Networks via Accuracy Monitoring
- URL: http://arxiv.org/abs/2007.01472v1
- Date: Fri, 3 Jul 2020 03:09:36 GMT
- Title: Increasing Trustworthiness of Deep Neural Networks via Accuracy Monitoring
- Authors: Zhihui Shao, Jianyi Yang, and Shaolei Ren
- Abstract summary: Inference accuracy of deep neural networks (DNNs) is a crucial performance metric, but it can vary greatly in practice depending on the actual test dataset.
This has raised significant concerns about the trustworthiness of DNNs, especially in safety-critical applications.
We propose a neural network-based accuracy monitor model, which takes only the deployed DNN's softmax probability output as its input.
- Score: 20.456742449675904
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inference accuracy of deep neural networks (DNNs) is a crucial performance
metric, but it can vary greatly in practice depending on the actual test dataset and is
typically unknown due to the lack of ground-truth labels. This has raised
significant concerns about the trustworthiness of DNNs, especially in
safety-critical applications. In this paper, we address the trustworthiness of DNNs
by using post-hoc processing to monitor the true inference accuracy on a user's
dataset. Concretely, we propose a neural network-based accuracy monitor model,
which takes only the deployed DNN's softmax probability output as its input and
directly predicts whether the DNN's prediction result is correct or not, thus
leading to an estimate of the true inference accuracy. The accuracy monitor
model can be pre-trained on a dataset relevant to the target application of
interest, and only needs to actively label a small portion (1% in our
experiments) of the user's dataset for model transfer. For estimation
robustness, we further employ an ensemble of monitor models based on the
Monte-Carlo dropout method. We evaluate our approach on different deployed DNN
models for image classification and traffic sign detection over multiple
datasets (including adversarial samples). The results show that our accuracy
monitor model provides a close-to-true accuracy estimate and outperforms
existing baseline methods.
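As a rough illustration of the monitoring mechanism described in the abstract, the sketch below wires a small dropout MLP to a deployed DNN's softmax output and averages its correctness predictions over Monte-Carlo dropout passes; the layer sizes, dropout rate, and number of passes are placeholder assumptions, not the authors' reported configuration.

```python
import torch
import torch.nn as nn

class AccuracyMonitor(nn.Module):
    """Small MLP mapping a deployed DNN's softmax vector to P(prediction is correct)."""
    def __init__(self, num_classes, hidden=64, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 1),
        )

    def forward(self, softmax_probs):
        return torch.sigmoid(self.net(softmax_probs)).squeeze(-1)

def mc_dropout_accuracy_estimate(monitor, softmax_probs, n_samples=20):
    """Estimate dataset accuracy by averaging correctness predictions over
    MC-dropout passes (dropout is kept active at inference time)."""
    monitor.train()  # keep dropout layers stochastic
    with torch.no_grad():
        per_pass = torch.stack(
            [monitor(softmax_probs) for _ in range(n_samples)]
        )                                   # (n_samples, n_examples)
    per_example = per_pass.mean(dim=0)      # P(correct) per example
    return per_example.mean().item()        # estimated accuracy on the user's dataset
```

In this sketch, the monitor would be trained with binary cross-entropy on (softmax vector, correct/incorrect) pairs from a labeled dataset relevant to the target application, then fine-tuned on the small (about 1%) actively labeled portion of the user's data, as described above.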
Related papers
- An Investigation on Machine Learning Predictive Accuracy Improvement and Uncertainty Reduction using VAE-based Data Augmentation [2.517043342442487]
Deep generative learning uses certain ML models to learn the underlying distribution of existing data and generate synthetic samples that resemble the real data.
In this study, our objective is to evaluate the effectiveness of data augmentation using variational autoencoder (VAE)-based deep generative models.
We investigated whether the data augmentation leads to improved accuracy in the predictions of a deep neural network (DNN) model trained using the augmented data.
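As a hedged sketch of the VAE-based augmentation pipeline this entry refers to (the study's actual architecture and training details are not given in the summary), the code below fits a minimal fully-connected VAE and draws synthetic samples to enlarge a training set; the dimensions and the plain MSE reconstruction loss are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal fully-connected VAE for tabular or flattened inputs."""
    def __init__(self, x_dim, z_dim=8, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(x_hat, x, mu, logvar):
    """Reconstruction term plus KL divergence to the standard-normal prior."""
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

def augment(vae, n_samples, z_dim=8):
    """Draw synthetic samples from the learned distribution to enlarge the training set."""
    with torch.no_grad():
        return vae.dec(torch.randn(n_samples, z_dim))
```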
arXiv Detail & Related papers (2024-10-24T18:15:48Z) - Estimating Uncertainty with Implicit Quantile Network [0.0]
Uncertainty quantification is an important part of many performance critical applications.
This paper provides a simple alternative to existing approaches such as ensemble learning and Bayesian neural networks.
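The summary does not detail the network; as general background, implicit quantile networks are typically trained with the pinball (quantile) loss while conditioning on a sampled quantile level. The sketch below shows that loss and a quantile-conditioned forward pass only; the architecture and the way the quantile level is injected are simplifying assumptions.

```python
import torch
import torch.nn as nn

def pinball_loss(pred, target, tau):
    """Quantile (pinball) loss: penalizes under- and over-estimation asymmetrically."""
    err = target - pred
    return torch.mean(torch.maximum(tau * err, (tau - 1.0) * err))

class QuantileNet(nn.Module):
    """Predicts the tau-quantile of the target given features x and quantile level tau."""
    def __init__(self, x_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim + 1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x, tau):
        # x: (batch, x_dim), tau: (batch,) quantile levels in (0, 1)
        return self.net(torch.cat([x, tau.unsqueeze(-1)], dim=-1)).squeeze(-1)

# Training step sketch: sample tau ~ U(0, 1) per example, then minimize pinball_loss.
# The spread between predicted high and low quantiles serves as an uncertainty estimate.
```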
arXiv Detail & Related papers (2024-08-26T13:33:14Z) - Uncertainty Quantification over Graph with Conformalized Graph Neural Networks [52.20904874696597]
Graph Neural Networks (GNNs) are powerful machine learning prediction models on graph-structured data.
GNNs lack rigorous uncertainty estimates, limiting their reliable deployment in settings where the cost of errors is significant.
We propose conformalized GNN (CF-GNN), extending conformal prediction (CP) to graph-based models for guaranteed uncertainty estimates.
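CF-GNN's graph-specific calibration step is not spelled out in the summary; the sketch below is plain split conformal prediction for classification, the base procedure that conformal methods such as CF-GNN extend, with an illustrative choice of nonconformity score.

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Plain split conformal prediction for classification.
    Returns one prediction set per test point with >= 1 - alpha marginal coverage
    (under exchangeability of calibration and test data)."""
    n = len(cal_labels)
    # Nonconformity score: 1 - softmax probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected empirical quantile of the calibration scores.
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)
    qhat = np.sort(scores)[k - 1]
    # Keep every class whose nonconformity falls at or below the threshold.
    return [np.where(1.0 - p <= qhat)[0] for p in test_probs]
```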
arXiv Detail & Related papers (2023-05-23T21:38:23Z) - UPNet: Uncertainty-based Picking Deep Learning Network for Robust First Break Picking [6.380128763476294]
First break (FB) picking is a crucial step in determining subsurface velocity models.
Deep neural networks (DNNs) have been proposed to accelerate this process.
We introduce uncertainty quantification into the FB picking task and propose a novel uncertainty-based deep learning network called UPNet.
arXiv Detail & Related papers (2023-05-23T08:13:09Z) - Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in and out-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z) - Online Black-Box Confidence Estimation of Deep Neural Networks [0.0]
We introduce the neighborhood confidence (NHC), which estimates the confidence of an arbitrary DNN for classification.
The metric can be used for black-box systems, since only the top-1 class output is required and no access to gradients is needed.
Evaluation on different data distributions, including small in-domain distribution shifts, out-of-domain data, and adversarial attacks, shows that the NHC performs better than or on par with a comparable method for online white-box confidence estimation.
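The summary does not define how the NHC is computed. Purely to illustrate what a black-box confidence score that relies only on top-1 outputs can look like (and explicitly not the paper's metric), one option is to measure how stable the top-1 prediction is under small input perturbations:

```python
import numpy as np

def top1_stability(predict_top1, x, n_neighbors=20, noise_std=0.05, rng=None):
    """Generic black-box confidence proxy (NOT the paper's NHC): the fraction of
    small random perturbations of x whose top-1 prediction matches the original.
    `predict_top1` is any callable that returns only the predicted class index."""
    rng = np.random.default_rng() if rng is None else rng
    base = predict_top1(x)
    neighbors = x[None, ...] + noise_std * rng.standard_normal((n_neighbors, *x.shape))
    agree = sum(predict_top1(nb) == base for nb in neighbors)
    return agree / n_neighbors
```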
arXiv Detail & Related papers (2023-02-27T08:30:46Z) - Window-Based Distribution Shift Detection for Deep Neural Networks [21.73028341299301]
We study the case of monitoring the healthy operation of a deep neural network (DNN) receiving a stream of data.
Using selective prediction principles, we propose a distribution deviation detection method for DNNs.
Our novel detection method performs on par with or better than the state-of-the-art, while consuming substantially less time.
arXiv Detail & Related papers (2022-10-19T21:27:25Z) - Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
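One minimal way to express the objective described here (raising the entropy of unjustifiably overconfident predictions toward the label prior) is a cross-entropy-to-prior penalty on flagged inputs; how inputs are flagged and how the term is weighted are assumptions not given in the summary.

```python
import torch
import torch.nn.functional as F

def prior_entropy_penalty(logits, label_prior):
    """Cross-entropy between the label prior and the predicted distribution,
    averaged over the batch; equal to KL(prior || prediction) up to a constant.
    Minimizing it pulls overconfident predictions toward the prior (raising entropy).
    label_prior: tensor of class frequencies, shape (num_classes,)."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -(label_prior * log_probs).sum(dim=-1).mean()

# Sketch of use: total_loss = task_loss + lam * prior_entropy_penalty(flagged_logits, prior),
# where flagged_logits are predictions on inputs identified as unjustifiably overconfident.
```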
arXiv Detail & Related papers (2021-02-22T07:02:37Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z) - Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z) - Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.
We scale training with a novel loss function and centroid updating scheme, and match the accuracy of softmax models.
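The summary describes rejecting out-of-distribution points in a single forward pass using class centroids; the exact loss and centroid-update scheme are not reproduced here. The sketch below shows only an inference-time scoring step with an RBF kernel to per-class centroids, with the feature extractor, length scale, and rejection threshold left as placeholder assumptions.

```python
import torch

def centroid_confidence(features, centroids, length_scale=0.5):
    """Distance-based confidence: RBF kernel similarity between a feature vector and
    per-class centroids. A low maximum similarity flags a candidate out-of-distribution point.
    features: (batch, d), centroids: (num_classes, d)."""
    dists = torch.cdist(features, centroids)                  # (batch, num_classes)
    sims = torch.exp(-(dists ** 2) / (2 * length_scale ** 2))
    conf, pred = sims.max(dim=-1)                             # per-example confidence and class
    return conf, pred

# Sketch of use: reject examples whose conf falls below a validation-chosen threshold.
```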
arXiv Detail & Related papers (2020-03-04T12:27:36Z)