Explorations of the Softmax Space: Knowing When the Neural Network Doesn't Know
- URL: http://arxiv.org/abs/2502.00456v2
- Date: Wed, 30 Apr 2025 17:19:55 GMT
- Title: Explorations of the Softmax Space: Knowing When the Neural Network Doesn't Know
- Authors: Daniel Sikar, Artur d'Avila Garcez, Tillman Weyde
- Abstract summary: This paper proposes a new approach for measuring confidence in the predictions of any neural network. We identify that a high-accuracy trained network may have certain outputs for which there should be low confidence. We show that a cluster with centroid calculated simply as the mean softmax output for all correct predictions can serve as a suitable proxy in the evaluation of confidence.
- Score: 2.6626950367610394
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ensuring the reliability of automated decision-making based on neural networks will be crucial as Artificial Intelligence systems are deployed more widely in critical situations. This paper proposes a new approach for measuring confidence in the predictions of any neural network that relies on the predictions of a softmax layer. We identify that a high-accuracy trained network may have certain outputs for which there should be low confidence. In such cases, decisions should be deferred and it is more appropriate for the network to provide a "not known" answer to a corresponding classification task. Our approach clusters the vectors in the softmax layer to measure distances between cluster centroids and network outputs. We show that a cluster with centroid calculated simply as the mean softmax output for all correct predictions can serve as a suitable proxy in the evaluation of confidence. Defining a distance threshold for a class as the smallest distance from an incorrect prediction to the given class centroid offers a simple approach to adding "not known" answers to any network classification falling outside of the threshold. We evaluate the approach on the MNIST and CIFAR-10 datasets using a Convolutional Neural Network and a Vision Transformer, respectively. The results show that our approach is consistent across datasets and network models, and indicate that the proposed distance metric can offer an efficient way of determining when automated predictions are acceptable and when they should be deferred to human operators.
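The centroid-and-threshold rule in the abstract is simple enough to state in a few lines. Below is a minimal NumPy sketch under stated assumptions: per-class centroids over softmax outputs, Euclidean distance, and illustrative function names not taken from the paper's code.

```python
import numpy as np

def fit_centroids_and_thresholds(softmax, labels, preds):
    """Per-class centroids (mean softmax output over correct predictions)
    and thresholds (smallest distance from an incorrect prediction of a
    class to that class's centroid)."""
    n_classes = softmax.shape[1]
    correct = preds == labels
    centroids = np.stack([
        softmax[correct & (preds == c)].mean(axis=0)
        for c in range(n_classes)
    ])
    thresholds = np.full(n_classes, np.inf)
    for c in range(n_classes):
        wrong = softmax[~correct & (preds == c)]
        if len(wrong) > 0:
            dists = np.linalg.norm(wrong - centroids[c], axis=1)
            thresholds[c] = dists.min()
    return centroids, thresholds

def predict_or_defer(softmax_row, centroids, thresholds):
    """Return the predicted class, or None for a 'not known' answer."""
    c = int(softmax_row.argmax())
    dist = np.linalg.norm(softmax_row - centroids[c])
    return c if dist < thresholds[c] else None
```

Any prediction whose softmax vector falls farther from its class centroid than the closest known mistake is deferred rather than trusted.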
Related papers
- When to Accept Automated Predictions and When to Defer to Human Judgment? [1.9922905420195367]
We analyze how the outputs of a trained neural network change using clustering to measure distances between outputs and class centroids.
We propose this distance as a metric to evaluate the confidence of predictions under distribution shifts.
arXiv Detail & Related papers (2024-07-10T16:45:52Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom suggests that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- UPNet: Uncertainty-based Picking Deep Learning Network for Robust First Break Picking [6.380128763476294]
First break (FB) picking is a crucial aspect in the determination of subsurface velocity models.
Deep neural networks (DNNs) have been proposed to accelerate this processing.
We introduce uncertainty quantification into the FB picking task and propose a novel uncertainty-based deep learning network called UPNet.
arXiv Detail & Related papers (2023-05-23T08:13:09Z)
- Semantic Strengthening of Neuro-Symbolic Learning [85.6195120593625]
Neuro-symbolic approaches typically resort to fuzzy approximations of a probabilistic objective.
We show how to compute this efficiently for tractable circuits.
We test our approach on three tasks: predicting a minimum-cost path in Warcraft, predicting a minimum-cost perfect matching, and solving Sudoku puzzles.
arXiv Detail & Related papers (2023-02-28T00:04:22Z)
- Confidence estimation of classification based on the distribution of the neural network output layer [4.529188601556233]
One of the most common problems preventing the application of prediction models in the real world is lack of generalization.
We propose novel methods that estimate uncertainty of particular predictions generated by a neural network classification model.
The proposed methods infer the confidence of a particular prediction based on the distribution of the logit values corresponding to this prediction.
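One plausible (illustrative, not necessarily the authors') realization of a logit-distribution confidence score is to place a new prediction's winning logit on the empirical distribution of winning logits collected on validation data:

```python
import numpy as np

def fit_reference(val_logits):
    """Sorted winning-class logits from a validation set."""
    return np.sort(val_logits.max(axis=1))

def logit_confidence(logit_row, reference):
    """Empirical-CDF position of the new winning logit, in [0, 1]."""
    return np.searchsorted(reference, logit_row.max()) / len(reference)
```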
arXiv Detail & Related papers (2022-10-14T12:32:50Z)
- BayesNetCNN: incorporating uncertainty in neural networks for image-based classification tasks [0.29005223064604074]
We propose a method to convert a standard neural network into a Bayesian neural network.
We estimate the variability of predictions by sampling different networks similar to the original one at each forward pass.
We test our model in a large cohort of brain images from Alzheimer's Disease patients.
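A common way to realize "sampling different networks similar to the original one" is Monte-Carlo averaging over stochastic forward passes (e.g. dropout kept active at test time). The sketch below assumes a placeholder `model_fn` and is not the paper's implementation:

```python
import numpy as np

def mc_predict(model_fn, x, n_samples=20, seed=0):
    """Average predictions over sampled networks; read uncertainty off
    the spread. `model_fn(x, rng)` stands in for one stochastic forward
    pass (e.g. dropout left on, or weights jittered by rng)."""
    rng = np.random.default_rng(seed)
    probs = np.stack([model_fn(x, rng) for _ in range(n_samples)])
    return probs.mean(axis=0), probs.std(axis=0)
```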
arXiv Detail & Related papers (2022-09-27T01:07:19Z)
- Robust-by-Design Classification via Unitary-Gradient Neural Networks [66.17379946402859]
The use of neural networks in safety-critical systems requires safe and robust models, due to the existence of adversarial attacks.
Knowing the minimal adversarial perturbation of any input x, or, equivalently, the distance of x from the classification boundary, allows evaluating the classification robustness, providing certifiable predictions.
A novel network architecture named Unitary-Gradient Neural Network is presented.
Experimental results show that the proposed architecture approximates a signed distance, hence allowing an online certifiable classification of x at the cost of a single inference.
arXiv Detail & Related papers (2022-09-09T13:34:51Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
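The Nadaraya-Watson estimate the summary refers to has a standard form: a kernel-weighted vote of training labels, with weights decaying in distance to the query point. A sketch, with the Gaussian kernel, bandwidth, and feature space as assumptions:

```python
import numpy as np

def nw_label_distribution(x, train_x, train_y, n_classes, bandwidth=1.0):
    """Nadaraya-Watson estimate of p(y | x): kernel-weighted vote of
    training labels over (typically) an embedding space."""
    d2 = ((train_x - x) ** 2).sum(axis=1)      # squared distances
    w = np.exp(-d2 / (2 * bandwidth ** 2))     # Gaussian kernel weights
    p = np.array([w[train_y == c].sum() for c in range(n_classes)])
    return p / max(p.sum(), 1e-12)             # estimated p(y | x)
```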
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence, predicting target accuracy as the fraction of unlabeled examples whose confidence exceeds the threshold.
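The thresholding step admits a compact sketch: calibrate t so that the share of source examples above t matches the source accuracy, then report the share of target examples above t. Max-softmax is assumed as the confidence score here; other scores can be substituted.

```python
import numpy as np

def atc_threshold(src_conf, src_correct):
    """Pick t so the share of source examples with confidence above t
    matches the source accuracy."""
    return np.quantile(src_conf, 1.0 - src_correct.mean())

def atc_predict_accuracy(tgt_conf, t):
    """Predicted target accuracy: share of unlabeled target examples
    whose confidence exceeds t."""
    return (tgt_conf > t).mean()
```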
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- CertainNet: Sampling-free Uncertainty Estimation for Object Detection [65.28989536741658]
Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings.
In this work, we propose a novel sampling-free uncertainty estimation method for object detection.
We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location and size.
arXiv Detail & Related papers (2021-10-04T17:59:31Z)
- Uncertainty Estimation and Sample Selection for Crowd Counting [87.29137075538213]
We present a method for image-based crowd counting that can predict a crowd density map together with the uncertainty values pertaining to the predicted density map.
A key advantage of our method over existing crowd counting methods is its ability to quantify the uncertainty of its predictions.
We show that our sample selection strategy drastically reduces the amount of labeled data needed to adapt a counting network trained on a source domain to the target domain.
arXiv Detail & Related papers (2020-09-30T03:40:07Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out of distribution data points at test time with a single forward pass.
We scale training of such models with a novel loss function and centroid updating scheme, and match the accuracy of softmax models.
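The single-pass rejection this summary describes can be illustrated with an RBF-style score: kernel similarity between a feature vector and per-class centroids, rejecting when even the best match is weak. Centroids, length scale, and threshold are assumptions here, not values from the paper:

```python
import numpy as np

def rbf_predict_or_reject(feat, centroids, length_scale=0.1, reject_below=0.5):
    """Score each class by an RBF kernel on the distance between the
    feature vector and that class's centroid; reject weak best matches."""
    d2 = ((centroids - feat) ** 2).sum(axis=1)
    k = np.exp(-d2 / (2 * length_scale ** 2))   # per-class kernel scores
    c = int(k.argmax())
    return c if k[c] >= reject_below else None  # None = out of distribution
```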
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
- Fine-grained Uncertainty Modeling in Neural Networks [0.0]
We present a novel method to detect out-of-distribution points in a Neural Network.
Our method corrects overconfident NN decisions, detects outlier points and learns to say "I don't know" when uncertain about a critical point between the top two predictions.
As a positive side effect, our method helps to prevent adversarial attacks without requiring any additional training.
arXiv Detail & Related papers (2020-02-11T05:06:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.