Improving evidential deep learning via multi-task learning
- URL: http://arxiv.org/abs/2112.09368v1
- Date: Fri, 17 Dec 2021 07:56:20 GMT
- Title: Improving evidential deep learning via multi-task learning
- Authors: Dongpin Oh and Bonggun Shin
- Abstract summary: The objective is to improve the prediction accuracy of the ENet while maintaining its efficient uncertainty estimation.
A multi-task learning framework, referred to as MT-ENet, is proposed to accomplish this aim.
The MT-ENet enhances the predictive accuracy of the ENet without losing uncertainty estimation capability on a synthetic dataset and real-world benchmarks.
- Score: 1.8275108630751844
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Evidential regression network (ENet) estimates a continuous target and
its predictive uncertainty without costly Bayesian model averaging. However,
the target can be predicted inaccurately because of the gradient shrinkage
problem of the ENet's original loss function, the negative log
marginal likelihood (NLL) loss. In this paper, the objective is to improve the
prediction accuracy of the ENet while maintaining its efficient uncertainty
estimation by resolving the gradient shrinkage problem. A multi-task learning
(MTL) framework, referred to as MT-ENet, is proposed to accomplish this aim.
Within this framework, we define the Lipschitz modified mean squared error (MSE)
loss as an additional loss and add it to the existing NLL loss. The Lipschitz
modified MSE loss is designed to mitigate the gradient conflict with the NLL
loss by dynamically adjusting its Lipschitz constant. By doing so, the
Lipschitz MSE loss does not disturb the uncertainty estimation of the NLL loss.
The MT-ENet enhances the predictive accuracy of the ENet without losing its
uncertainty estimation capability on a synthetic dataset and real-world
benchmarks, including drug-target affinity (DTA) regression. Furthermore, the
MT-ENet shows remarkable calibration and out-of-distribution detection
capability on the DTA benchmarks.
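To make the objective concrete, here is a minimal PyTorch sketch (an illustration under stated assumptions, not the authors' code): the Normal-Inverse-Gamma negative log marginal likelihood used by deep evidential regression, plus an MSE term that switches to a linear tail once the squared residual exceeds an evidence-dependent threshold, which caps its Lipschitz constant. The specific threshold `beta * (1 + nu) / (alpha * nu)` is an assumed stand-in for the paper's derived bound.

```python
import math
import torch

def nig_nll(y, gamma, nu, alpha, beta):
    """Negative log marginal likelihood of a Normal-Inverse-Gamma output head
    (the ENet loss from deep evidential regression)."""
    omega = 2.0 * beta * (1.0 + nu)
    return (0.5 * torch.log(math.pi / nu)
            - alpha * torch.log(omega)
            + (alpha + 0.5) * torch.log(nu * (y - gamma) ** 2 + omega)
            + torch.lgamma(alpha) - torch.lgamma(alpha + 0.5))

def lipschitz_mse(y, gamma, nu, alpha, beta):
    """MSE that becomes linear beyond an evidence-dependent threshold, bounding
    its gradient so it cannot overwhelm the NLL gradient. The threshold formula
    is an illustrative assumption."""
    err2 = (y - gamma) ** 2
    thresh = (beta * (1.0 + nu) / (alpha * nu)).detach()  # detached so the MSE
    # term does not push on the uncertainty parameters (nu, alpha, beta)
    linear_tail = 2.0 * torch.sqrt(thresh) * err2.sqrt() - thresh
    return torch.where(err2 <= thresh, err2, linear_tail)

def mt_enet_loss(y, gamma, nu, alpha, beta):
    """Multi-task objective: NLL plus the Lipschitz-modified MSE."""
    return (nig_nll(y, gamma, nu, alpha, beta)
            + lipschitz_mse(y, gamma, nu, alpha, beta)).mean()
```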
Related papers
- LEARN: An Invex Loss for Outlier Oblivious Robust Online Optimization [56.67706781191521]
An adversary can introduce outliers by corrupting the loss functions in an arbitrary number k of rounds, with k unknown to the learner.
We present a robust online optimization framework for this setting.
arXiv Detail & Related papers (2024-08-12T17:08:31Z)
- Adaptive Learning for Multi-view Stereo Reconstruction [6.635583283522551]
We first analyze existing loss functions' properties for deep depth-based MVS approaches.
We then propose a novel loss function, named adaptive Wasserstein loss, which is able to narrow down the difference between the true and predicted probability distributions of depth.
Experiments on different benchmarks, including DTU, Tanks and Temples, and BlendedMVS, show that the proposed method with the adaptive Wasserstein loss and the offset module achieves state-of-the-art performance.
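Since a 1-D Wasserstein distance between distributions over ordered depth hypotheses reduces to a difference of CDFs, a minimal sketch looks as follows (an illustration only; the paper's adaptive weighting and offset module are omitted, and a uniform hypothesis spacing `bin_width` is assumed):

```python
import torch
import torch.nn.functional as F

def wasserstein_depth_loss(prob, gt_index, bin_width=1.0):
    """1-Wasserstein distance between a predicted distribution over D ordered
    depth bins and a one-hot ground truth.
    prob: (B, D) softmax output; gt_index: (B,) index of the true depth bin."""
    gt = F.one_hot(gt_index, num_classes=prob.shape[-1]).to(prob.dtype)
    cdf_diff = torch.cumsum(prob - gt, dim=-1)  # F_pred - F_gt at every bin
    return (cdf_diff.abs().sum(dim=-1) * bin_width).mean()
```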
arXiv Detail & Related papers (2024-04-08T04:13:35Z)
- Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce Fisher Information Matrix (FIM) to measure the informativeness of evidence carried by each sample, according to which we can dynamically reweight the objective loss terms to make the network more focused on the representation learning of uncertain classes.
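As a rough, assumption-laden illustration of the idea (not the paper's exact objective): weight the per-class terms of the classical evidential MSE loss by the diagonal of the Dirichlet Fisher information, which involves the trigamma function, so classes backed by little evidence receive larger weight:

```python
import torch

def fisher_weighted_edl_mse(alpha, y_onehot):
    """Evidential MSE with per-class weights from the diagonal of the Dirichlet
    Fisher information (trigamma of alpha). Illustrative sketch only.
    alpha: (B, C) Dirichlet concentrations; y_onehot: (B, C) one-hot labels."""
    S = alpha.sum(dim=-1, keepdim=True)     # total evidence per sample
    p = alpha / S                           # expected class probabilities
    err = (y_onehot - p) ** 2
    var = p * (1.0 - p) / (S + 1.0)         # expected predictive variance
    w = torch.polygamma(1, alpha)           # trigamma: large when alpha is small
    return (w * (err + var)).sum(dim=-1).mean()
```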
arXiv Detail & Related papers (2023-03-03T16:12:59Z)
- Leveraging Heteroscedastic Uncertainty in Learning Complex Spectral Mapping for Single-channel Speech Enhancement [20.823177372464414]
Most speech enhancement (SE) models learn a point estimate, and do not make use of uncertainty estimation in the learning process.
We show that modeling heteroscedastic uncertainty by minimizing a multivariate Gaussian negative log-likelihood (NLL) improves SE performance at no extra cost.
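PyTorch exposes the diagonal-covariance case of this objective directly as `torch.nn.GaussianNLLLoss`; a toy usage with spectrogram-shaped tensors (the shapes and the exponentiated log-variance head are assumptions for illustration):

```python
import torch
import torch.nn as nn

criterion = nn.GaussianNLLLoss()  # heteroscedastic Gaussian NLL, diagonal covariance

# Toy stand-ins for a model's mean and log-variance heads over a spectrogram.
pred_mean = torch.randn(8, 257, 100, requires_grad=True)  # (batch, freq, frames)
log_var = torch.randn(8, 257, 100, requires_grad=True)
clean = torch.randn(8, 257, 100)                          # clean-speech target

loss = criterion(pred_mean, clean, log_var.exp())  # variance must be positive
loss.backward()
```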
arXiv Detail & Related papers (2022-11-16T02:29:05Z)
- Robust Depth Completion with Uncertainty-Driven Loss Functions [60.9237639890582]
We introduce uncertainty-driven loss functions to improve the robustness of depth completion and to handle its inherent uncertainty.
Our method has been tested on the KITTI Depth Completion Benchmark and achieves state-of-the-art robustness in terms of the MAE, iMAE, and iRMSE metrics.
arXiv Detail & Related papers (2021-12-15T05:22:34Z)
- Improving MC-Dropout Uncertainty Estimates with Calibration Error-based Optimization [18.22429945073576]
We propose two new loss functions by combining cross-entropy with Expected Calibration Error (ECE) and Predictive Entropy (PE).
Our results confirm that the new hybrid loss functions markedly reduce the overlap between the distributions of uncertainty estimates for correct and incorrect predictions without sacrificing the model's overall performance.
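A sketch of the cross-entropy-plus-ECE half of such a hybrid objective (the predictive-entropy variant is analogous and omitted; `n_bins` and `lam` are assumed hyperparameters). Hard bin assignment carries no gradient, but each bin's mean confidence does:

```python
import torch
import torch.nn.functional as F

def hybrid_ce_ece(logits, labels, n_bins=10, lam=1.0):
    """Cross entropy plus an ECE penalty (illustrative sketch)."""
    ce = F.cross_entropy(logits, labels)
    probs = F.softmax(logits, dim=-1)
    conf, pred = probs.max(dim=-1)
    correct = pred.eq(labels).float()
    ece = logits.new_zeros(())
    edges = torch.linspace(0, 1, n_bins + 1, device=logits.device)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            frac = in_bin.float().mean()              # share of samples in the bin
            gap = (conf[in_bin].mean() - correct[in_bin].mean()).abs()
            ece = ece + frac * gap
    return ce + lam * ece
```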
arXiv Detail & Related papers (2021-10-07T08:31:23Z)
- Differentiable Annealed Importance Sampling and the Perils of Gradient Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
arXiv Detail & Related papers (2021-07-21T17:10:14Z)
- Bayesian Uncertainty Estimation of Learned Variational MRI Reconstruction [63.202627467245584]
We introduce a Bayesian variational framework to quantify the model-immanent (epistemic) uncertainty.
We demonstrate that our approach yields competitive results for undersampled MRI reconstruction.
arXiv Detail & Related papers (2021-02-12T18:08:14Z)
- A Novel Regression Loss for Non-Parametric Uncertainty Optimization [7.766663822644739]
Quantification of uncertainty is one of the most promising approaches to establish safe machine learning.
One of the most commonly used approaches so far is Monte Carlo dropout, which is computationally cheap and easy to apply in practice, although the quality of its uncertainty estimates remains a concern.
We propose a new objective, referred to as the second-moment loss, to address this issue.
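One plausible reading of a second-moment objective under MC dropout (an assumption, not the paper's exact definition): directly penalize the second moment of the stochastic predictions about the target, so the dropout spread tracks the residual scale:

```python
import torch

def second_moment_loss(model, x, y, n_samples=5):
    """Mean squared deviation of dropout-active predictions from the target,
    i.e. the second moment of the predictive distribution about y.
    Assumes `model` is in train mode so dropout stays stochastic."""
    preds = torch.stack([model(x) for _ in range(n_samples)])
    return ((preds - y) ** 2).mean()
```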
arXiv Detail & Related papers (2021-01-07T19:12:06Z)
- Second-Moment Loss: A Novel Regression Objective for Improved Uncertainties [7.766663822644739]
Quantification of uncertainty is one of the most promising approaches to establish safe machine learning.
One of the most commonly used approaches so far is Monte Carlo dropout, which is computationally cheap and easy to apply in practice, although the quality of its uncertainty estimates remains a concern.
We propose a new objective, referred to as the second-moment loss, to address this issue.
arXiv Detail & Related papers (2020-12-23T14:17:33Z)
- Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that predictive uncertainty estimated by the current methods does not highly correlate with prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
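A toy sketch of the two-step idea (the tiny conv nets are placeholders, not the paper's architecture): first regress the reconstruction, then regress the magnitude of its error from a detached residual:

```python
import torch
import torch.nn as nn

recon_net = nn.Conv2d(1, 1, 3, padding=1)   # placeholder reconstruction network
err_net = nn.Conv2d(1, 1, 3, padding=1)     # placeholder error-prediction network

x = torch.randn(4, 1, 64, 64)               # toy undersampled input
target = torch.randn(4, 1, 64, 64)          # toy fully-sampled target

recon = recon_net(x)
recon_loss = (recon - target).pow(2).mean()         # step 1: predict the target
err_target = (recon - target).abs().detach()        # do not backprop through step 1
err_loss = (err_net(x) - err_target).pow(2).mean()  # step 2: predict the error map
(recon_loss + err_loss).backward()
```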
arXiv Detail & Related papers (2020-02-13T15:55:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.