SLURP: Side Learning Uncertainty for Regression Problems
- URL: http://arxiv.org/abs/2110.11182v1
- Date: Thu, 21 Oct 2021 14:50:42 GMT
- Title: SLURP: Side Learning Uncertainty for Regression Problems
- Authors: Xuanlong Yu, Gianni Franchi, Emanuel Aldea
- Abstract summary: We propose SLURP, a generic approach for regression uncertainty estimation via a side learner.
We test SLURP on two critical regression tasks in computer vision: monocular depth and optical flow estimation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It has become critical for deep learning algorithms to quantify their output
uncertainties to satisfy reliability constraints and provide accurate results.
Uncertainty estimation for regression has received less attention than
classification, owing to the more straightforward, standardized outputs of the
latter class of tasks and their high importance. However, regression problems
are encountered in a wide range of applications in computer vision. We propose
SLURP, a generic approach for regression uncertainty estimation via a side
learner that exploits the output and the intermediate representations generated
by the main task model. We test SLURP on two critical regression tasks in
computer vision: monocular depth and optical flow estimation. In addition, we
conduct exhaustive benchmarks comprising transfer to different datasets and the
addition of aleatoric noise. The results show that our proposal is generic and
readily applicable to various regression problems and has a low computational
cost with respect to existing solutions.
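The abstract describes a side learner that consumes the main model's output together with its intermediate representations and regresses an uncertainty. A minimal numpy sketch of that input composition follows; the two-layer MLP, its sizes, and all names here are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def side_learner_forward(features, prediction, W1, b1, W2, b2):
    """Toy side learner: maps (features, prediction) to a positive uncertainty.

    features:   (n, d_feat) intermediate representations of the main model
    prediction: (n, 1)      main-task output (e.g. predicted depth)
    """
    x = np.concatenate([features, prediction], axis=1)  # (n, d_feat + 1)
    h = np.maximum(0.0, x @ W1 + b1)                    # ReLU hidden layer
    log_var = h @ W2 + b2                               # unconstrained output
    return np.exp(log_var)                              # positive uncertainty

n, d_feat, d_hid = 4, 8, 16
features = rng.standard_normal((n, d_feat))
prediction = rng.standard_normal((n, 1))
W1 = rng.standard_normal((d_feat + 1, d_hid)) * 0.1
b1 = np.zeros(d_hid)
W2 = rng.standard_normal((d_hid, 1)) * 0.1
b2 = np.zeros(1)

sigma2 = side_learner_forward(features, prediction, W1, b1, W2, b2)
```

The key point is only the data flow: the side learner is trained separately and never alters the frozen main-task model.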
Related papers
- Embedding generalization within the learning dynamics: An approach based-on sample path large deviation theory [0.0]
We consider an empirical risk perturbation based learning problem that exploits methods from a continuous-time perspective.
We provide an estimate in the small noise limit based on the Freidlin-Wentzell theory of large deviations.
We also present a computational algorithm that solves the corresponding variational problem, leading to optimal point estimates.
arXiv Detail & Related papers (2024-08-04T23:31:35Z) - Robust Capped lp-Norm Support Vector Ordinal Regression [85.84718111830752]
Ordinal regression is a specialized supervised problem where the labels show an inherent order.
Support Vector Ordinal Regression, as an outstanding ordinal regression model, is widely used in many ordinal regression tasks.
We introduce a new model, Capped $\ell_p$-Norm Support Vector Ordinal Regression (CSVOR), that is robust to outliers.
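The capped norm in CSVOR bounds how much any single residual can contribute to the loss. A hedged one-function sketch of that capping idea (not the full SVOR model):

```python
import numpy as np

def capped_lp_loss(residuals, p=1.0, cap=1.0):
    """min(|r|^p, cap): an outlier contributes at most `cap` to the loss."""
    return np.minimum(np.abs(residuals) ** p, cap)

r = np.array([0.1, 0.5, 10.0])   # the last residual is an outlier
loss = capped_lp_loss(r, p=1.0, cap=1.0)
```

Because the contribution saturates at the cap, gross outliers cannot dominate the objective the way they do under a plain squared or absolute loss.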
arXiv Detail & Related papers (2024-04-25T13:56:05Z) - Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization [59.758009422067]
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
We introduce a general-purpose policy optimization algorithm, Q-Uncertainty Soft Actor-Critic (QU-SAC) that can be applied for either risk-seeking or risk-averse policy optimization.
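An uncertainty Bellman equation propagates local (per-state) uncertainty through the transition dynamics into uncertainty over values. A toy tabular illustration, assuming a fixed policy and a recursion of the form W = u + gamma^2 * P @ W (this is a sketch of the general UBE idea, not the paper's algorithm):

```python
import numpy as np

gamma = 0.9
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])      # policy-induced state-transition matrix
u = np.array([0.5, 1.0])        # local per-state uncertainty

# Closed-form fixed point of W = u + gamma^2 * P @ W for a small MDP
W = np.linalg.solve(np.eye(2) - gamma**2 * P, u)
```

States feeding into uncertain successors inherit their uncertainty, discounted by gamma squared, which is exactly the compounding effect a risk-aware optimizer needs to see.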
arXiv Detail & Related papers (2023-12-07T15:55:58Z) - Uncertainty Estimation by Fisher Information-based Evidential Deep
Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce the Fisher Information Matrix (FIM) to measure the informativeness of the evidence carried by each sample; the objective loss terms are then dynamically reweighted so that the network focuses on representation learning for uncertain classes.
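The reweighting idea can be sketched with the per-sample empirical Fisher information (squared score norm) of a simple binary logistic model; this is purely illustrative, since $\mathcal{I}$-EDL itself operates on Dirichlet evidence rather than logistic scores:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fisher_weights(X, y, theta):
    """Normalized per-sample weights from empirical Fisher information."""
    p = sigmoid(X @ theta)
    scores = (y - p)[:, None] * X      # per-sample gradient of log-likelihood
    info = np.sum(scores**2, axis=1)   # squared score norm (empirical Fisher)
    return info / info.sum()           # normalized loss weights

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 3))
y = rng.integers(0, 2, size=6).astype(float)
theta = np.zeros(3)
w = fisher_weights(X, y, theta)
```

Samples whose evidence carries more information receive larger weights, steering training toward the hard, uncertain cases.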
arXiv Detail & Related papers (2023-03-03T16:12:59Z) - A Bayesian Robust Regression Method for Corrupted Data Reconstruction [5.298637115178182]
We develop an effective robust regression method that can resist adaptive adversarial attacks.
First, we propose the novel TRIP (hard Thresholding approach to Robust regression with sImple Prior) algorithm.
We then use the idea of Bayesian reweighting to construct the more robust BRHT (robust Bayesian Reweighting regression via Hard Thresholding) algorithm.
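The hard-thresholding idea behind TRIP can be sketched as an alternation: fit least squares on the points currently deemed clean, then re-flag the k largest residuals as corruptions. This is a hedged toy version, not the paper's exact algorithm or its Bayesian reweighting:

```python
import numpy as np

def hard_threshold_regression(X, y, k, iters=20):
    """Alternate OLS on 'clean' points with flagging the k worst residuals."""
    n = len(y)
    clean = np.ones(n, dtype=bool)
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w, *_ = np.linalg.lstsq(X[clean], y[clean], rcond=None)
        resid = np.abs(y - X @ w)
        clean = np.ones(n, dtype=bool)
        clean[np.argsort(resid)[-k:]] = False   # drop the k worst residuals
    return w

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
y[:5] += 50.0                                   # adversarial corruptions
w_hat = hard_threshold_regression(X, y, k=5)
```

Once the corrupted points are correctly flagged, the clean-subset fit recovers the true coefficients exactly in this noise-free toy setting.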
arXiv Detail & Related papers (2022-12-24T17:25:53Z) - Mutual Information Learned Regressor: an Information-theoretic Viewpoint
of Training Regression Systems [10.314518385506007]
An existing common practice for solving regression problems is the mean square error (MSE) minimization approach.
Recently, Yi et al. proposed a mutual information based supervised learning framework in which they introduced a label entropy regularization.
In this paper, we investigate the regression under the mutual information based supervised learning framework.
arXiv Detail & Related papers (2022-11-23T03:43:22Z) - GEC: A Unified Framework for Interactive Decision Making in MDP, POMDP,
and Beyond [101.5329678997916]
We study sample efficient reinforcement learning (RL) under the general framework of interactive decision making.
We propose a novel complexity measure, generalized eluder coefficient (GEC), which characterizes the fundamental tradeoff between exploration and exploitation.
We show that RL problems with low GEC form a remarkably rich class, which subsumes low Bellman eluder dimension problems, bilinear class, low witness rank problems, PO-bilinear class, and generalized regular PSR.
arXiv Detail & Related papers (2022-11-03T16:42:40Z) - Risk Minimization from Adaptively Collected Data: Guarantees for
Supervised and Policy Learning [57.88785630755165]
Empirical risk minimization (ERM) is the workhorse of machine learning, but its model-agnostic guarantees can fail when we use adaptively collected data.
We study a generic importance sampling weighted ERM algorithm for using adaptively collected data to minimize the average of a loss function over a hypothesis class.
For policy learning, we provide rate-optimal regret guarantees that close an open gap in the existing literature whenever exploration decays to zero.
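The importance-sampling-weighted ERM idea: reweight each sample by the inverse of the (known) probability with which the adaptive policy collected it, so the weighted empirical risk is unbiased for the target risk. A minimal sketch with a one-parameter least-squares hypothesis class and illustrative propensities:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
propensity = rng.uniform(0.2, 1.0, size=n)   # collection probabilities
x = rng.standard_normal(n)
y = 2.0 * x + 0.1 * rng.standard_normal(n)   # true slope is 2.0

w = 1.0 / propensity                          # inverse-propensity weights
# Weighted ERM for squared loss: argmin_a sum_i w_i (y_i - a x_i)^2
a_hat = np.sum(w * x * y) / np.sum(w * x * x)
```

The closed-form minimizer above is just weighted least squares; the substance is in the weights, which undo the sampling bias of the adaptive collection policy.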
arXiv Detail & Related papers (2021-06-03T09:50:13Z) - Residual Gaussian Process: A Tractable Nonparametric Bayesian Emulator
for Multi-fidelity Simulations [6.6903363553912305]
A novel additive structure is introduced in which the highest fidelity solution is written as a sum of the lowest fidelity solution and residuals.
The resulting model is equipped with a closed-form solution for the predictive posterior.
It is shown how active learning can be used to enhance the model, especially with a limited computational budget.
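The additive structure reads: high-fidelity output = low-fidelity output + residual, with a Gaussian process fit on the residuals. A minimal closed-form sketch under assumed toy fidelities and untuned RBF hyperparameters (illustrative, not the paper's emulator):

```python
import numpy as np

def rbf(a, b, ell=0.5):
    """RBF kernel matrix between 1-D input arrays a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def f_low(x):                # cheap low-fidelity model (assumed known)
    return np.sin(x)

def f_high(x):               # expensive high-fidelity truth (for the demo)
    return np.sin(x) + 0.3 * x

X = np.linspace(0, 3, 8)                 # few high-fidelity samples
resid = f_high(X) - f_low(X)             # residuals modeled by the GP

noise = 1e-8
K = rbf(X, X) + noise * np.eye(len(X))
alpha = np.linalg.solve(K, resid)        # GP posterior-mean coefficients

def predict_high(x_new):
    return f_low(x_new) + rbf(x_new, X) @ alpha

y_pred = predict_high(np.array([1.5]))
```

The GP only has to learn the (typically smooth, small) residual, which is why a handful of high-fidelity samples suffices.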
arXiv Detail & Related papers (2021-04-08T12:57:46Z) - Globally-convergent Iteratively Reweighted Least Squares for Robust
Regression Problems [15.823258699608994]
We provide the first global model recovery results for the IRLS (iteratively reweighted least squares) for robust regression problems.
We propose augmentations to the basic IRLS routine that not only offer guaranteed global recovery, but in practice also outperform state-of-the-art algorithms for robust regression.
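The basic IRLS routine the paper builds on is short enough to sketch: repeatedly solve weighted least squares with weights w_i = 1 / max(|r_i|, eps), which down-weights large residuals (an approximation to least absolute deviations). This is the vanilla routine, not the paper's augmented, globally convergent variant:

```python
import numpy as np

def irls(X, y, iters=50, eps=1e-6):
    """Basic IRLS for robust regression via reweighted least squares."""
    w_hat = np.zeros(X.shape[1])
    for _ in range(iters):
        r = y - X @ w_hat
        W = 1.0 / np.maximum(np.abs(r), eps)   # down-weight big residuals
        A = X.T @ (W[:, None] * X)             # weighted normal equations
        b = X.T @ (W * y)
        w_hat = np.linalg.solve(A, b)
    return w_hat

rng = np.random.default_rng(4)
X = rng.standard_normal((100, 2))
w_true = np.array([3.0, -1.0])
y = X @ w_true
y[:10] += 20.0                                  # gross outliers
w_hat = irls(X, y)
```

With 10% gross outliers and otherwise exact data, the reweighting drives the outliers' influence toward zero and the fit back to the true coefficients.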
arXiv Detail & Related papers (2020-06-25T07:16:13Z) - GenDICE: Generalized Offline Estimation of Stationary Values [108.17309783125398]
We show that effective estimation can still be achieved in important applications.
Our approach is based on estimating a ratio that corrects for the discrepancy between the stationary and empirical distributions.
The resulting algorithm, GenDICE, is straightforward and effective.
arXiv Detail & Related papers (2020-02-21T00:27:52Z)
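The correction-ratio idea can be illustrated on a known tabular Markov chain: compute its stationary distribution d and the ratio d / d_emp against an off-policy empirical state distribution; reweighting empirical samples by this ratio recovers stationary expectations. This is a toy of the ratio concept only, not the GenDICE estimator itself:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])            # known transition matrix

# Stationary distribution: left eigenvector of P for eigenvalue 1
evals, evecs = np.linalg.eig(P.T)
d = np.real(evecs[:, np.argmax(np.real(evals))])
d = d / d.sum()

d_emp = np.array([0.5, 0.5])          # empirical (sampling) distribution
ratio = d / d_emp                     # correction weights

r = np.array([1.0, 10.0])             # per-state reward
stationary_value = d @ r
corrected_estimate = d_emp @ (ratio * r)
```

GenDICE's contribution is estimating this ratio from data without knowing P or d; here both are given so the correction is exact by construction.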
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.