A Framework for Uncertainty Quantification Based on Nearest Neighbors Across Layers
- URL: http://arxiv.org/abs/2506.19895v1
- Date: Tue, 24 Jun 2025 11:10:41 GMT
- Title: A Framework for Uncertainty Quantification Based on Nearest Neighbors Across Layers
- Authors: Miguel N. Font, José L. Jorro-Aragoneses, Carlos M. Alaíz
- Abstract summary: Neural Networks have high accuracy in solving problems where it is difficult to detect patterns or create a logical model, but they sometimes return wrong solutions. One strategy to detect and mitigate these errors is to measure the uncertainty of neural network decisions. We present a novel post-hoc framework for measuring the uncertainty of a decision based on retrieved training cases.
- Score: 0.24578723416255746
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Neural Networks achieve high accuracy on problems where it is difficult to detect patterns or build a logical model. However, they sometimes return wrong solutions, which is problematic in high-risk domains such as medical diagnosis or autonomous driving. One strategy to detect and mitigate these errors is to measure the uncertainty of neural network decisions. In this paper, we present a novel post-hoc framework that measures the uncertainty of a decision based on training cases retrieved, for each layer, by the similarity of their activation vectors to the query's. Based on these retrieved cases, we propose two new metrics, Decision Change and Layer Uncertainty, which capture changes in the nearest-neighbor class distributions across layers. We evaluated our approach with a classification model on two datasets, CIFAR-10 and MNIST. The results show that these metrics improve uncertainty estimation, especially on challenging classification tasks, outperforming softmax-based confidence.
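The framework is described only at a high level here, so the following is a minimal sketch of the idea under stated assumptions: per-layer activation vectors are available for both the training set and the query, and the definitions below (`decision_change` as the rate of majority-class flips between consecutive layers, `layer_uncertainty` as the mean entropy of the neighbor class distributions) are illustrative readings rather than the authors' exact formulations.

```python
import numpy as np

def neighbor_class_distributions(train_acts, train_labels, query_acts, n_classes, k=10):
    """For each layer, return the class distribution of the k training cases whose
    activation vectors are closest (Euclidean) to the query's activation at that layer.

    train_acts: list of (n_train, d_layer) arrays, one per layer
    train_labels: (n_train,) integer array
    query_acts: list of (d_layer,) arrays, one per layer
    """
    per_layer = []
    for acts, q in zip(train_acts, query_acts):
        dists = np.linalg.norm(acts - q, axis=1)          # distance to every training case
        nearest = np.argsort(dists)[:k]                    # indices of the k nearest neighbors
        counts = np.bincount(train_labels[nearest], minlength=n_classes)
        per_layer.append(counts / k)                       # neighbor class distribution
    return np.stack(per_layer)                             # shape: (n_layers, n_classes)

def decision_change(dists):
    # Fraction of consecutive layers at which the majority neighbor class flips.
    top = dists.argmax(axis=1)
    return float(np.mean(top[1:] != top[:-1]))

def layer_uncertainty(dists, eps=1e-12):
    # Mean entropy of the neighbor class distributions across layers.
    entropy = -np.sum(dists * np.log(dists + eps), axis=1)
    return float(entropy.mean())
```

A query whose retrieved neighborhoods agree on one class at every layer scores low on both metrics; one whose neighborhoods keep switching class scores high, flagging the decision as uncertain.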
Related papers
- Learning Deterministic Surrogates for Robust Convex QCQPs [0.0]
We propose a double implicit layer model for training prediction models with respect to a robust decision loss.
The first layer solves a deterministic version of the problem; the second layer evaluates the worst-case realisation over an uncertainty set.
This enables us to learn model parameterisations that lead to robust decisions while only solving a simpler deterministic problem at test time.
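As a sketch of that two-layer structure (not the authors' formulation): if the uncertain data enters the objective linearly and ranges over a Euclidean ball, the inner worst-case layer has a closed form, which illustrates why only the simpler deterministic problem needs to be solved at test time.

```python
import numpy as np

def worst_case_linear_cost(x, c0, rho):
    """Worst case of (c0 + rho * u) @ x over the ball ||u||_2 <= 1.
    The maximizing perturbation is u* = x / ||x||_2, giving c0 @ x + rho * ||x||_2.
    This 'inner layer' is differentiable in x away from x = 0, so a robust loss
    built from it can be back-propagated through a prediction model."""
    return float(c0 @ x + rho * np.linalg.norm(x))

# Toy usage: nominal vs. robust cost of a candidate decision x.
x = np.array([1.0, 2.0])
c0 = np.array([0.5, -1.0])
nominal = float(c0 @ x)                        # -1.5
robust = worst_case_linear_cost(x, c0, 0.3)    # -1.5 + 0.3 * sqrt(5) ~ -0.83
```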
arXiv Detail & Related papers (2023-12-19T16:56:13Z)
- Large-Scale Sequential Learning for Recommender and Engineering Systems [91.3755431537592]
In this thesis, we focus on the design of automatic algorithms that provide personalized ranking by adapting to the current conditions.
For the former, we propose a novel algorithm called SAROS that takes into account both kinds of feedback for learning over the sequence of interactions.
The proposed idea of taking neighbouring lines into account shows statistically significant results compared with the initial approach for fault detection in power grids.
arXiv Detail & Related papers (2022-05-13T21:09:41Z)
- R(Det)^2: Randomized Decision Routing for Object Detection [64.48369663018376]
We propose a novel approach to combine decision trees and deep neural networks in an end-to-end learning manner for object detection.
To facilitate effective learning, we propose randomized decision routing with node selective and associative losses.
We name this approach randomized decision routing for object detection, abbreviated as R(Det)$^2$.
arXiv Detail & Related papers (2022-04-02T07:54:58Z)
- A Study on Mitigating Hard Boundaries of Decision-Tree-based Uncertainty Estimates for AI Models [0.0]
Uncertainty wrappers use a decision-tree approach to cluster input-quality-related uncertainties, assigning each input strictly to a single uncertainty cluster.
Our objective is to replace this with an approach that mitigates hard decision boundaries while preserving interpretability, runtime complexity, and prediction performance.
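The mitigation itself is not spelled out in this summary; purely as an illustration of softening a hard boundary, one generic option is to replace the 0/1 indicator at each decision-tree threshold with a sigmoid, so inputs near a split receive graded membership in the adjacent uncertainty clusters (the function name and temperature below are hypothetical, not the paper's).

```python
import numpy as np

def soft_split(feature_value, threshold, temperature=0.1):
    """Soft version of the hard rule `feature_value <= threshold`.
    Returns a membership in (0, 1) instead of a 0/1 assignment;
    a smaller temperature approaches the original hard boundary."""
    return 1.0 / (1.0 + np.exp((feature_value - threshold) / temperature))

# An input just above the threshold is no longer forced into a single cluster.
left = soft_split(0.51, threshold=0.50)   # ~0.48 membership in the "left" cluster
right = 1.0 - left                        # remaining membership in the "right" cluster
```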
arXiv Detail & Related papers (2022-01-10T10:29:12Z)
- CertainNet: Sampling-free Uncertainty Estimation for Object Detection [65.28989536741658]
Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings.
In this work, we propose a novel sampling-free uncertainty estimation method for object detection.
We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location and size.
arXiv Detail & Related papers (2021-10-04T17:59:31Z)
- Uncertainty-Aware Deep Calibrated Salient Object Detection [74.58153220370527]
Existing deep neural network based salient object detection (SOD) methods mainly focus on pursuing high network accuracy.
These methods overlook the gap between network accuracy and prediction confidence, known as the confidence uncalibration problem.
We introduce an uncertainty-aware deep SOD network and propose two strategies to prevent deep SOD networks from being overconfident.
arXiv Detail & Related papers (2020-12-10T23:28:36Z)
- Revisiting One-vs-All Classifiers for Predictive Uncertainty and Out-of-Distribution Detection in Neural Networks [22.34227625637843]
We investigate how the parametrization of the probabilities in discriminative classifiers affects the uncertainty estimates.
We show that one-vs-all formulations can improve calibration on image classification tasks.
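A minimal illustration of that parametrization difference (not the paper's training setup or loss): the same logits read through a shared softmax versus through independent per-class sigmoids; only the latter can assign low probability to every class at once, e.g. for out-of-distribution inputs.

```python
import numpy as np

def softmax_confidence(logits):
    # Softmax couples the classes: probabilities sum to 1, so at least one class
    # receives probability >= 1/K even for inputs the model has never seen.
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    return p.max()

def one_vs_all_confidence(logits):
    # One-vs-all reads each logit through its own sigmoid, so every per-class
    # "is it class k?" score can be low simultaneously.
    p = 1.0 / (1.0 + np.exp(-logits))
    return p.max()

logits = np.array([-4.0, -3.5, -5.0])       # e.g. an input far from the training data
print(softmax_confidence(logits))            # ~0.55: softmax still looks fairly confident
print(one_vs_all_confidence(logits))         # ~0.03: all one-vs-all scores are low
```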
arXiv Detail & Related papers (2020-07-10T01:55:02Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out of distribution data points at test time with a single forward pass.
We scale training in these models with a novel loss function and centroid updating scheme, and match the accuracy of softmax models.
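A bare-bones sketch of the scoring step this describes, assuming the network maps an input to a feature vector and each class has a learned centroid; the loss function, the centroid-update scheme, and the per-class weight matrices of the actual method are omitted.

```python
import numpy as np

def rbf_class_scores(features, centroids, length_scale=0.1):
    """Kernel similarity exp(-||f(x) - e_c||^2 / (2 * sigma^2)) to each class centroid.
    features: (d,) feature vector, centroids: (n_classes, d).
    The prediction is the closest centroid; a low maximum score means the input is
    far from every centroid, i.e. uncertain or out of distribution."""
    sq_dists = ((features[None, :] - centroids) ** 2).sum(axis=1)
    scores = np.exp(-sq_dists / (2.0 * length_scale ** 2))
    return int(scores.argmax()), float(scores.max())
```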
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
- Hidden Cost of Randomized Smoothing [72.93630656906599]
In this paper, we point out the side effects of current randomized smoothing.
Specifically, we articulate and prove two major points: 1) the decision boundaries of smoothed classifiers will shrink, resulting in disparity in class-wise accuracy; 2) applying noise augmentation in the training process does not necessarily resolve the shrinking issue due to the inconsistent learning objectives.
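For context, a Monte-Carlo sketch of the smoothed classifier whose shrinking decision boundaries are being analysed: the smoothed prediction is the class the base classifier returns most often under Gaussian input noise (the certification and abstention machinery of randomized smoothing is omitted).

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=100, rng=None):
    """Monte-Carlo estimate of g(x) = argmax_c P(base_classifier(x + noise) = c),
    with noise ~ N(0, sigma^2 I). `base_classifier` maps an input array to a class index."""
    rng = np.random.default_rng() if rng is None else rng
    votes = {}
    for _ in range(n_samples):
        noisy = x + sigma * rng.standard_normal(x.shape)
        label = base_classifier(noisy)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)       # majority vote over noisy copies of x
```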
arXiv Detail & Related papers (2020-03-02T23:37:42Z)
- Fine-grained Uncertainty Modeling in Neural Networks [0.0]
We present a novel method to detect out-of-distribution points in a Neural Network.
Our method corrects overconfident NN decisions, detects outlier points and learns to say "I don't know" when uncertain about a critical point between the top two predictions.
As a positive side effect, our method helps to prevent adversarial attacks without requiring any additional training.
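The exact mechanism is not detailed in this summary; a common, simpler stand-in for saying "I don't know" near the boundary between the top two predictions is to abstain when the top-two probability margin is small (the function name and threshold below are hypothetical).

```python
import numpy as np

def predict_or_abstain(probs, margin_threshold=0.2):
    """Return the predicted class, or None ("I don't know") when the gap between
    the two highest class probabilities falls below `margin_threshold`."""
    top_two = np.sort(probs)[-2:]
    if top_two[1] - top_two[0] < margin_threshold:
        return None
    return int(np.argmax(probs))

print(predict_or_abstain(np.array([0.45, 0.40, 0.15])))   # None: margin 0.05 is too small
print(predict_or_abstain(np.array([0.80, 0.15, 0.05])))   # 0: margin 0.65 is comfortable
```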
arXiv Detail & Related papers (2020-02-11T05:06:25Z)
- On Last-Layer Algorithms for Classification: Decoupling Representation from Uncertainty Estimation [27.077741143188867]
We propose a family of algorithms which split the classification task into two stages: representation learning and uncertainty estimation.
We evaluate their performance in terms of selective classification (risk-coverage) and their ability to detect out-of-distribution samples.
arXiv Detail & Related papers (2020-01-22T15:08:30Z)
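A minimal sketch of that two-stage split, assuming stage one (representation learning) has already produced frozen penultimate-layer features; the stage-two estimator below, a class-conditional Gaussian with a shared diagonal covariance, is just one simple choice and not necessarily one of the algorithms evaluated in the paper.

```python
import numpy as np

def fit_last_layer_density(features, labels, n_classes, eps=1e-6):
    """Stage 2: fit a class-conditional Gaussian (shared diagonal covariance)
    on frozen features. features: (n, d), labels: (n,) integer array."""
    means = np.stack([features[labels == c].mean(axis=0) for c in range(n_classes)])
    var = features.var(axis=0) + eps                 # shared diagonal covariance
    return means, var

def last_layer_uncertainty(f, means, var):
    """Negative log density of the best-matching class for a single feature vector f.
    High values flag inputs whose representation is far from every class,
    which is useful for selective classification and OOD detection."""
    log_dens = -0.5 * (((f - means) ** 2) / var).sum(axis=1) - 0.5 * np.log(var).sum()
    return float(-log_dens.max())
```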