Second-Moment Loss: A Novel Regression Objective for Improved
Uncertainties
- URL: http://arxiv.org/abs/2012.12687v1
- Date: Wed, 23 Dec 2020 14:17:33 GMT
- Title: Second-Moment Loss: A Novel Regression Objective for Improved
Uncertainties
- Authors: Joachim Sicking, Maram Akila, Maximilian Pintz, Tim Wirtz, Asja
Fischer, Stefan Wrobel
- Abstract summary: Quantification of uncertainty is one of the most promising approaches to establish safe machine learning.
One of the most commonly used approaches so far is Monte Carlo dropout, which is computationally cheap and easy to apply in practice.
We propose a new objective, referred to as second-moment loss (SML), to address its tendency to underestimate uncertainty.
- Score: 7.766663822644739
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantification of uncertainty is one of the most promising approaches to
establish safe machine learning. Despite its importance, it is far from being
generally solved, especially for neural networks. One of the most commonly used
approaches so far is Monte Carlo dropout, which is computationally cheap and
easy to apply in practice. However, it can underestimate the uncertainty. We
propose a new objective, referred to as second-moment loss (SML), to address
this issue. While the full network is encouraged to model the mean, the dropout
networks are explicitly used to optimize the model variance. We analyze the
performance of the new objective on various toy and UCI regression datasets.
Compared to state-of-the-art deep ensembles, SML leads to comparable
prediction accuracies and uncertainty estimates while only requiring a single
model. Under distribution shift, we observe moderate improvements. From a
safety perspective, the study of worst-case uncertainties is also crucial; in
this regard, we improve considerably. Finally, we show that SML can be
successfully applied to SqueezeDet, a modern object detection network. We
improve on its uncertainty-related scores while not deteriorating regression
quality. As a side result, we introduce an intuitive Wasserstein distance-based
uncertainty measure that is non-saturating and can thus resolve quality
differences between any two uncertainty estimates.
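To make the mechanism concrete, here is a minimal PyTorch sketch of the kind of objective the abstract describes. The exact form of SML is defined in the paper; this version is our hedged reading, in which the full network (dropout disabled) fits the mean while the spread of dropout forward passes around that mean is trained to match the observed residual. The architecture, the number of dropout samples, and the equal weighting of the two terms are placeholder choices, not the paper's.

```python
import torch
import torch.nn as nn

class DropoutRegressor(nn.Module):
    """Small MLP with dropout; a hypothetical architecture for illustration."""
    def __init__(self, d_in, d_hidden=64, p=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(d_hidden, d_hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(d_hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def second_moment_loss(model, x, y, n_samples=5):
    model.eval()                        # dropout off: the full network ...
    mu = model(x)                       # ... is encouraged to model the mean
    mean_term = (mu - y).pow(2).mean()

    model.train()                       # dropout on: stochastic sub-networks
    drops = torch.stack([model(x) for _ in range(n_samples)])
    # Hedged reading of "the dropout networks are explicitly used to
    # optimize the model variance": push the dropout spread around the
    # (detached) mean towards the size of the observed residual.
    resid = (y - mu.detach()).abs()
    spread = (drops - mu.detach()).abs()
    var_term = (spread - resid).pow(2).mean()
    return mean_term + var_term
```

The Wasserstein distance-based measure mentioned at the end can likewise be sketched. We assume here (our reading, not the paper's definition) that it compares the empirical distribution of standardized residuals to a standard normal via the 1-Wasserstein distance; such a distance keeps growing with miscalibration rather than saturating.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def wasserstein_calibration_score(y, mu, sigma, n_ref=100_000, seed=0):
    """1-Wasserstein distance between standardized residuals and N(0, 1).

    For a perfectly calibrated Gaussian predictor the residuals
    r = (y - mu) / sigma are standard normal, so the score is ~0; it
    increases smoothly (non-saturating) as calibration degrades.
    """
    r = (np.asarray(y) - np.asarray(mu)) / np.asarray(sigma)
    reference = np.random.default_rng(seed).standard_normal(n_ref)
    return wasserstein_distance(r, reference)
```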
Related papers
- Estimating Uncertainty with Implicit Quantile Network [0.0]
Uncertainty quantification is an important part of many performance critical applications.
This paper provides a simple alternative to existing approaches such as ensemble learning and Bayesian neural networks.
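For context, the standard objective behind quantile networks is the pinball (quantile) loss; a minimal sketch follows. The paper's exact setup for its implicit quantile network may differ; this only illustrates the generic training signal.

```python
import torch

def pinball_loss(pred, target, tau):
    """Quantile (pinball) loss at quantile level tau in (0, 1).

    Minimizing it drives `pred` towards the tau-quantile of the target
    distribution; training over many tau values yields a full predictive
    distribution whose spread serves as an uncertainty estimate.
    """
    err = target - pred
    return torch.maximum(tau * err, (tau - 1.0) * err).mean()
```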
arXiv Detail & Related papers (2024-08-26T13:33:14Z)
- Toward Reliable Human Pose Forecasting with Uncertainty [51.628234388046195]
We develop an open-source library for human pose forecasting that includes multiple models and supports several datasets.
We devise two types of uncertainty in the problem to increase performance and convey better trust.
arXiv Detail & Related papers (2023-04-13T17:56:08Z)
- Optimal Training of Mean Variance Estimation Neural Networks [1.4610038284393165]
This paper focuses on the optimal implementation of a Mean Variance Estimation network (MVE network; Nix and Weigend, 1994).
An MVE network assumes that the data is produced from a normal distribution with a mean function and variance function.
We introduce a novel improvement of the MVE network: separate regularization of the mean and the variance estimate.
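A minimal sketch of an MVE network under these assumptions: a shared trunk with separate mean and variance heads, trained with the Gaussian negative log-likelihood, and with the separate regularization realized here as different weight-decay strengths per head (the concrete values and architecture are placeholders, not the paper's).

```python
import torch
import torch.nn as nn

class MVENetwork(nn.Module):
    """Two-headed network: predicts a mean and a (positive) variance."""
    def __init__(self, d_in, d_hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.mean_head = nn.Linear(d_hidden, 1)
        self.var_head = nn.Linear(d_hidden, 1)

    def forward(self, x):
        h = self.trunk(x)
        mu = self.mean_head(h)
        var = nn.functional.softplus(self.var_head(h)) + 1e-6  # keep positive
        return mu, var

def gaussian_nll(mu, var, y):
    # Negative log-likelihood of y under N(mu, var), up to a constant.
    return 0.5 * (torch.log(var) + (y - mu).pow(2) / var).mean()

model = MVENetwork(d_in=8)
# Separate regularization of the mean and the variance estimate, realized
# as different weight-decay strengths (illustrative values).
optimizer = torch.optim.Adam(
    [
        {"params": [*model.trunk.parameters(), *model.mean_head.parameters()],
         "weight_decay": 1e-4},
        {"params": model.var_head.parameters(), "weight_decay": 1e-3},
    ],
    lr=1e-3,
)
```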
arXiv Detail & Related papers (2023-02-17T13:44:47Z)
- Reliability-Aware Prediction via Uncertainty Learning for Person Image Retrieval [51.83967175585896]
UAL aims at providing reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in the sample's prediction.
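The data/model split described here is the usual aleatoric/epistemic decomposition; below is a hedged sketch (not UAL's exact estimator) for a stochastic network that outputs a mean and a variance, e.g. under MC dropout.

```python
import torch

def decompose_uncertainty(model, x, n_samples=20):
    """Split predictive uncertainty into a data and a model part.

    Assumes `model(x)` returns (mean, variance) and is stochastic when
    called in train mode (e.g. MC dropout). Illustrative only.
    """
    model.train()  # keep dropout active for sampling
    mus, variances = [], []
    with torch.no_grad():
        for _ in range(n_samples):
            mu, var = model(x)
            mus.append(mu)
            variances.append(var)
    mus = torch.stack(mus)
    variances = torch.stack(variances)
    data_uncertainty = variances.mean(0)  # noise inherent in the sample
    model_uncertainty = mus.var(0)        # disagreement across sub-networks
    return data_uncertainty, model_uncertainty
```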
arXiv Detail & Related papers (2022-10-24T17:53:20Z)
- Rethinking Missing Data: Aleatoric Uncertainty-Aware Recommendation [59.500347564280204]
We propose a new Aleatoric Uncertainty-aware Recommendation (AUR) framework.
AUR consists of a new uncertainty estimator along with a normal recommender model.
Since the chance of mislabeling reflects the potential of a user-item pair, AUR makes recommendations according to the estimated uncertainty.
arXiv Detail & Related papers (2022-09-22T04:32:51Z)
- A Simple Approach to Improve Single-Model Deep Uncertainty via Distance-Awareness [33.09831377640498]
We study approaches to improve the uncertainty properties of a single network based on a single, deterministic representation.
We propose Spectral-normalized Neural Gaussian Process (SNGP), a simple method that improves the distance-awareness ability of modern DNNs.
On a suite of vision and language understanding benchmarks, SNGP outperforms other single-model approaches in prediction, calibration and out-of-domain detection.
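The recipe, as summarized here, has two ingredients: spectral normalization on the hidden layers (so hidden representations preserve input distances) and a Gaussian-process output layer. Below is a hedged sketch with a random-feature GP mean; the sizes, the RBF feature map, and the omitted posterior-covariance bookkeeping are simplifications of the paper's method.

```python
import math
import torch
import torch.nn as nn

class SNGPSketch(nn.Module):
    """Distance-aware head: spectrally normalized body + random-feature GP."""
    def __init__(self, d_in, d_hidden=128, n_rff=256, n_classes=10):
        super().__init__()
        # Spectral normalization bounds each layer's Lipschitz constant,
        # which keeps distances in input space reflected in hidden space.
        self.body = nn.Sequential(
            nn.utils.spectral_norm(nn.Linear(d_in, d_hidden)), nn.ReLU(),
            nn.utils.spectral_norm(nn.Linear(d_hidden, d_hidden)), nn.ReLU(),
        )
        # Fixed random Fourier features approximate an RBF-kernel GP.
        self.register_buffer("W", torch.randn(n_rff, d_hidden))
        self.register_buffer("b", 2.0 * math.pi * torch.rand(n_rff))
        self.out = nn.Linear(n_rff, n_classes)  # trainable GP mean weights

    def forward(self, x):
        h = self.body(x)
        phi = math.sqrt(2.0 / self.W.shape[0]) * torch.cos(h @ self.W.t() + self.b)
        return self.out(phi)
```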
arXiv Detail & Related papers (2022-05-01T05:46:13Z)
- Domain-Adjusted Regression or: ERM May Already Learn Features Sufficient for Out-of-Distribution Generalization [52.7137956951533]
We argue that devising simpler methods for learning predictors on existing features is a promising direction for future research.
We introduce Domain-Adjusted Regression (DARE), a convex objective for learning a linear predictor that is provably robust under a new model of distribution shift.
Under a natural model, we prove that the DARE solution is the minimax-optimal predictor for a constrained set of test distributions.
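The two-stage setup the entry argues for can be illustrated simply. This sketch is not the DARE objective itself, which additionally adjusts features per domain; `extract_features` is a hypothetical helper returning frozen penultimate activations of an ERM-trained network.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_probe(extract_features, X_train, y_train):
    """Fit a simple predictor on top of frozen ERM features."""
    Z = np.asarray(extract_features(X_train))  # features stay fixed
    clf = LogisticRegression(max_iter=1000)    # simple linear predictor
    clf.fit(Z, y_train)
    return clf
```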
arXiv Detail & Related papers (2022-02-14T16:42:16Z)
- A Novel Regression Loss for Non-Parametric Uncertainty Optimization [7.766663822644739]
Quantification of uncertainty is one of the most promising approaches to establish safe machine learning.
One of the most commonly used approaches so far is Monte Carlo dropout, which is computationally cheap and easy to apply in practice.
We propose a new objective, referred to as second-moment loss (SML), to address its tendency to underestimate uncertainty.
arXiv Detail & Related papers (2021-01-07T19:12:06Z)
- Uncertainty-Aware Deep Calibrated Salient Object Detection [74.58153220370527]
Existing deep neural network based salient object detection (SOD) methods mainly focus on pursuing high network accuracy.
These methods overlook the gap between network accuracy and prediction confidence, known as the confidence uncalibration problem.
We introduce an uncertainty-aware deep SOD network and propose two strategies to prevent deep SOD networks from being overconfident.
arXiv Detail & Related papers (2020-12-10T23:28:36Z)
- MetaDetect: Uncertainty Quantification and Prediction Quality Estimates for Object Detection [6.230751621285322]
In object detection with deep neural networks, the box-wise objectness score tends to be overconfident.
We present a post-processing method that, for any given neural network, provides predictive uncertainty estimates and quality estimates.
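A hedged sketch of the post-processing idea (the feature set here is hypothetical; the paper defines its own box-wise metrics): fit a lightweight meta-model on per-box features to predict whether a detection is a true positive, and use its probability in place of the overconfident objectness score.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_meta_model(box_features: np.ndarray, is_true_positive: np.ndarray):
    """Train a meta-classifier on detection outputs.

    `box_features` rows are illustrative hand-crafted features per box,
    e.g. [objectness score, x, y, w, h, number of overlapping boxes].
    """
    meta = LogisticRegression(max_iter=1000)
    meta.fit(box_features, is_true_positive)
    return meta

# At test time, meta.predict_proba(new_box_features)[:, 1] serves as the
# uncertainty/quality estimate for each predicted box.
```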
arXiv Detail & Related papers (2020-10-04T21:49:23Z)
- Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks [65.24701908364383]
We show that a sufficient condition for calibrated uncertainty on a ReLU network is "to be a bit Bayesian".
We further validate these findings empirically via various standard experiments using common deep ReLU networks and Laplace approximations.
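"A bit Bayesian" can be made concrete with a last-layer Laplace approximation; the following is a minimal sketch for binary classification. The diagonal precision `H_diag` would come from the Hessian of the training loss, whose computation is omitted; all names and shapes are illustrative.

```python
import torch

def last_layer_laplace_predict(feats, w_map, H_diag, n_samples=50):
    """Average sigmoid outputs over a Gaussian posterior on last-layer weights.

    feats:  (n, d) penultimate-layer features
    w_map:  (d,)   MAP estimate of the last-layer weights
    H_diag: (d,)   diagonal precision of the Laplace posterior
    """
    std = H_diag.rsqrt()
    probs = []
    for _ in range(n_samples):
        w = w_map + std * torch.randn_like(w_map)  # sample from N(w_map, H^-1)
        probs.append(torch.sigmoid(feats @ w))
    # Posterior averaging shrinks confidence far from the training data,
    # where a ReLU point estimate would saturate at 0 or 1.
    return torch.stack(probs).mean(0)
```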
arXiv Detail & Related papers (2020-02-24T08:52:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.