A Review of Uncertainty Quantification in Deep Learning: Techniques,
Applications and Challenges
- URL: http://arxiv.org/abs/2011.06225v4
- Date: Wed, 6 Jan 2021 01:58:12 GMT
- Title: A Review of Uncertainty Quantification in Deep Learning: Techniques,
Applications and Challenges
- Authors: Moloud Abdar, Farhad Pourpanah, Sadiq Hussain, Dana Rezazadegan, Li
Liu, Mohammad Ghavamzadeh, Paul Fieguth, Xiaochun Cao, Abbas Khosravi, U
Rajendra Acharya, Vladimir Makarenkov, Saeid Nahavandi
- Abstract summary: Uncertainty quantification (UQ) plays a pivotal role in reducing uncertainties during both optimization and decision-making processes.
Bayesian approximation and ensemble learning techniques are the two most widely used UQ methods in the literature.
This study reviews recent advances in UQ methods used in deep learning and investigates the application of these methods in reinforcement learning.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Uncertainty quantification (UQ) plays a pivotal role in the reduction of
uncertainties during both optimization and decision-making processes. It can be
applied to a variety of real-world problems in science and
engineering. Bayesian approximation and ensemble learning techniques are the
two most widely used UQ methods in the literature. In this regard, researchers have
proposed different UQ methods and examined their performance in a variety of
applications such as computer vision (e.g., self-driving cars and object
detection), image processing (e.g., image restoration), medical image analysis
(e.g., medical image classification and segmentation), natural language
processing (e.g., text classification, social media texts and recidivism
risk-scoring), bioinformatics, etc. This study reviews recent advances in UQ
methods used in deep learning. Moreover, we also investigate the application of
these methods in reinforcement learning (RL). Then, we outline a few important
applications of UQ methods. Finally, we briefly highlight the fundamental
research challenges faced by UQ methods and discuss the future research
directions in this field.
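The ensemble learning techniques surveyed above estimate uncertainty from the disagreement among independently trained models. A minimal sketch of this idea, using hypothetical perturbed linear "models" as stand-ins for neural networks trained from different random initializations:

```python
import random
import statistics

# Toy ensemble: each "model" is a linear predictor whose weights differ
# slightly, standing in for independently trained neural networks.
random.seed(0)
ensemble = [(1.0 + random.gauss(0, 0.05), random.gauss(0, 0.05))
            for _ in range(10)]  # (slope, intercept) pairs

def predict_with_uncertainty(x):
    """Return the ensemble mean and standard deviation at input x."""
    preds = [w * x + b for (w, b) in ensemble]
    return statistics.mean(preds), statistics.stdev(preds)

mean_in, std_in = predict_with_uncertainty(1.0)      # near the data regime
mean_far, std_far = predict_with_uncertainty(100.0)  # far from it

# The members' disagreement (std) grows with distance from the regime
# where they agree, giving an epistemic uncertainty signal.
```

In practice each ensemble member would be a deep network trained on the same data with a different seed, and the spread of their predictions serves as the epistemic uncertainty estimate.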
Related papers
- Benchmarking Uncertainty Quantification Methods for Large Language Models with LM-Polygraph [85.51252685938564]
Uncertainty quantification (UQ) is becoming increasingly recognized as a critical component of applications that rely on machine learning (ML).
As with other ML models, large language models (LLMs) are prone to make incorrect predictions, "hallucinate" by fabricating claims, or simply generate low-quality output for a given input.
We introduce a novel benchmark that implements a collection of state-of-the-art UQ baselines, and provides an environment for controllable and consistent evaluation of novel techniques.
arXiv Detail & Related papers (2024-06-21T20:06:31Z) - A Comprehensive Survey on Underwater Image Enhancement Based on Deep Learning [51.7818820745221]
Underwater image enhancement (UIE) presents a significant challenge within computer vision research.
Despite the development of numerous UIE algorithms, a thorough and systematic review is still absent.
arXiv Detail & Related papers (2024-05-30T04:46:40Z) - Uncertainty Quantification in Machine Learning for Engineering Design
and Health Prognostics: A Tutorial [12.570694576213244]
Uncertainty quantification (UQ) functions as an essential layer of safety assurance that could lead to more principled decision making.
This tutorial provides a holistic lens on emerging UQ methods for ML models with a particular focus on neural networks.
We discuss the increasingly important role of UQ of ML models in solving challenging problems in engineering design and health prognostics.
arXiv Detail & Related papers (2023-05-07T03:12:03Z) - A Survey on Uncertainty Quantification Methods for Deep Learning [7.102893202197349]
Uncertainty quantification (UQ) aims to estimate the confidence of DNN predictions beyond prediction accuracy.
This paper presents a systematic taxonomy of UQ methods for DNNs based on the types of uncertainty sources.
We show how our taxonomy of UQ methodologies can potentially help guide the choice of UQ method in different machine learning problems.
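A taxonomy by uncertainty source typically separates aleatoric (data) from epistemic (model) uncertainty. One common decomposition for a classifier ensemble, sketched here with hypothetical softmax outputs, splits the predictive entropy into an expected-entropy (aleatoric) term and a mutual-information (epistemic) term:

```python
import math

def entropy(p):
    """Shannon entropy of a discrete distribution, in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# Hypothetical softmax outputs from three ensemble members on one input.
member_probs = [[0.9, 0.1], [0.6, 0.4], [0.7, 0.3]]

# Mean predictive distribution across the ensemble.
mean_p = [sum(ps[k] for ps in member_probs) / len(member_probs)
          for k in range(2)]

total = entropy(mean_p)                                          # predictive entropy
aleatoric = sum(entropy(p) for p in member_probs) / len(member_probs)
epistemic = total - aleatoric  # mutual information; >= 0 by Jensen's inequality
```

Members that disagree (as above) yield a positive epistemic term, while a noisy but agreed-upon input would show up in the aleatoric term instead.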
arXiv Detail & Related papers (2023-02-26T22:30:08Z) - NeuralUQ: A comprehensive library for uncertainty quantification in
neural differential equations and operators [0.0]
Uncertainty quantification (UQ) in machine learning is currently drawing increasing research interest.
We present an open-source Python library, termed NeuralUQ, for employing UQ methods for SciML in a convenient and structured manner.
arXiv Detail & Related papers (2022-08-25T04:28:18Z) - Interpretable Uncertainty Quantification in AI for HEP [2.922388615593672]
Estimating uncertainty is at the core of performing scientific measurements in HEP.
The goal of uncertainty quantification (UQ) is inextricably linked to the question, "how do we physically and statistically interpret these uncertainties?"
For artificial intelligence (AI) applications in HEP, there are several areas where interpretable methods for UQ are essential.
arXiv Detail & Related papers (2022-08-05T17:20:27Z) - Learning Physical Concepts in Cyber-Physical Systems: A Case Study [72.74318982275052]
We provide an overview of the current state of research regarding methods for learning physical concepts in time series data.
We also analyze the most important methods from the current state of the art using the example of a three-tank system.
arXiv Detail & Related papers (2021-11-28T14:24:52Z) - MURAL: Meta-Learning Uncertainty-Aware Rewards for Outcome-Driven
Reinforcement Learning [65.52675802289775]
We show that an uncertainty-aware classifier can solve challenging reinforcement learning problems.
We propose a novel method for computing the normalized maximum likelihood (NML) distribution.
We show that the resulting algorithm has a number of intriguing connections to both count-based exploration methods and prior algorithms for learning reward functions.
arXiv Detail & Related papers (2021-07-15T08:19:57Z) - IQ-Learn: Inverse soft-Q Learning for Imitation [95.06031307730245]
Imitation learning from a small amount of expert data can be challenging in high-dimensional environments with complex dynamics.
Behavioral cloning is widely used due to its simplicity of implementation and stable convergence.
We introduce a method for dynamics-aware IL which avoids adversarial training by learning a single Q-function.
arXiv Detail & Related papers (2021-06-23T03:43:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.