A Survey on Epistemic (Model) Uncertainty in Supervised Learning: Recent
Advances and Applications
- URL: http://arxiv.org/abs/2111.01968v2
- Date: Thu, 4 Nov 2021 01:46:47 GMT
- Title: A Survey on Epistemic (Model) Uncertainty in Supervised Learning: Recent
Advances and Applications
- Authors: Xinlei Zhou and Han Liu and Farhad Pourpanah and Tieyong Zeng and
Xizhao Wang
- Abstract summary: Quantifying the uncertainty of supervised learning models plays an important role in making more reliable predictions.
Epistemic uncertainty, which is usually due to insufficient knowledge about the model, can be reduced by collecting more data.
- Score: 18.731827159755014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantifying the uncertainty of supervised learning models plays an important
role in making more reliable predictions. Epistemic uncertainty, which is
usually due to insufficient knowledge about the model, can be reduced by collecting
more data or refining the learning models. Over the last few years, scholars
have proposed many epistemic uncertainty handling techniques which can be
roughly grouped into two categories, i.e., Bayesian and ensemble. This paper
provides a comprehensive review of epistemic uncertainty learning techniques in
supervised learning over the last five years. To this end, we first decompose the
epistemic uncertainty into bias and variance terms. Then, a hierarchical
categorization of epistemic uncertainty learning techniques along with their
representative models is introduced. In addition, several applications such as
computer vision (CV) and natural language processing (NLP) are presented,
followed by a discussion on research gaps and possible future research
directions.
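The ensemble category mentioned in the abstract treats disagreement between independently trained models as a proxy for epistemic uncertainty: where the models agree, the data has pinned the prediction down; where they diverge, more data or a better model would help. A minimal sketch of this idea (not code from the survey; the toy data, bootstrap ensemble, and polynomial models are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) plus irreducible (aleatoric) noise.
x = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(x).ravel() + 0.1 * rng.normal(size=200)

def fit_member(x, y, degree, seed):
    """Fit one ensemble member on a bootstrap resample of the training data."""
    rs = np.random.default_rng(seed)
    idx = rs.integers(0, len(y), size=len(y))
    return np.polyfit(x[idx].ravel(), y[idx], degree)

# An ensemble of models trained on different resamples of the same data.
ensemble = [fit_member(x, y, degree=5, seed=s) for s in range(20)]

# Epistemic uncertainty at a query point = variance of member predictions.
x_query = np.array([0.5, 2.9])  # 2.9 lies near the edge of the training range
preds = np.stack([np.polyval(c, x_query) for c in ensemble])  # shape (20, 2)
mean_pred = preds.mean(axis=0)      # ensemble prediction
epistemic_var = preds.var(axis=0)   # member disagreement
print(mean_pred, epistemic_var)
```

The variance term here is exactly the quantity the bias-variance decomposition in the abstract isolates: it shrinks as more data constrains each member, which is the defining property of epistemic (as opposed to aleatoric) uncertainty.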
Related papers
- Seeing Unseen: Discover Novel Biomedical Concepts via
Geometry-Constrained Probabilistic Modeling [53.7117640028211]
We present a geometry-constrained probabilistic modeling treatment to resolve the identified issues.
We incorporate a suite of critical geometric properties to impose proper constraints on the layout of constructed embedding space.
A spectral graph-theoretic method is devised to estimate the number of potential novel classes.
arXiv Detail & Related papers (2024-03-02T00:56:05Z)
- Looking deeper into interpretable deep learning in neuroimaging: a comprehensive survey [20.373311465258393]
This paper comprehensively reviews interpretable deep learning models in the neuroimaging domain.
We discuss how multiple recent neuroimaging studies leveraged model interpretability to capture anatomical and functional brain alterations most relevant to model predictions.
arXiv Detail & Related papers (2023-07-14T04:50:04Z)
- Introduction and Exemplars of Uncertainty Decomposition [3.0349501539299686]
Uncertainty plays a crucial role in the machine learning field.
This report aims to demystify the notion of uncertainty decomposition through an introduction to two types of uncertainty and several decomposition exemplars.
arXiv Detail & Related papers (2022-11-17T17:14:34Z)
- Continual Learning with Bayesian Model based on a Fixed Pre-trained Feature Extractor [55.9023096444383]
Current deep learning models are characterised by catastrophic forgetting of old knowledge when learning new classes.
Inspired by the process of learning new knowledge in human brains, we propose a Bayesian generative model for continual learning.
arXiv Detail & Related papers (2022-04-28T08:41:51Z)
- LifeLonger: A Benchmark for Continual Disease Classification [59.13735398630546]
We introduce LifeLonger, a benchmark for continual disease classification on the MedMNIST collection.
Task and class incremental learning of diseases address the issue of classifying new samples without re-training the models from scratch.
Cross-domain incremental learning addresses the issue of dealing with datasets originating from different institutions while retaining the previously obtained knowledge.
arXiv Detail & Related papers (2022-04-12T12:25:05Z)
- The worst of both worlds: A comparative analysis of errors in learning from data in psychology and machine learning [17.336655978572583]
Recent concerns that machine learning (ML) may be facing a misdiagnosis and replication crisis suggest that some published claims in ML research cannot be taken at face value.
A deeper understanding of what the concerns about supervised ML research have in common with the replication crisis in experimental science can put the new concerns in perspective.
arXiv Detail & Related papers (2022-03-12T18:26:24Z)
- On robustness of generative representations against catastrophic forgetting [17.467589890017123]
Catastrophic forgetting of previously learned knowledge while learning new tasks is a widely observed limitation of contemporary neural networks.
In this work, we aim at answering this question by posing and validating a set of research hypotheses related to the specificity of representations built internally by neural models.
We observe that representations learned by discriminative models are more prone to catastrophic forgetting than their generative counterparts, which sheds new light on the advantages of developing generative models for continual learning.
arXiv Detail & Related papers (2021-09-04T11:33:24Z)
- Exploring Bayesian Deep Learning for Urgent Instructor Intervention Need in MOOC Forums [58.221459787471254]
Massive Open Online Courses (MOOCs) have become a popular choice for e-learning thanks to their great flexibility.
Due to large numbers of learners and their diverse backgrounds, it is taxing to offer real-time support.
With the large volume of posts and high workloads for MOOC instructors, it is unlikely that the instructors can identify all learners requiring intervention.
This paper explores for the first time Bayesian deep learning on learner-based text posts with two methods: Monte Carlo Dropout and Variational Inference.
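Monte Carlo Dropout, one of the two methods named above, approximates Bayesian inference by keeping dropout active at prediction time and averaging over several stochastic forward passes; the spread across passes serves as an uncertainty signal. A minimal numpy sketch under stated assumptions (the tiny randomly initialized network stands in for a trained classifier, and the input stands in for a post embedding; none of this is from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 2-layer network with random weights stands in for a trained model.
W1 = rng.normal(size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, 3)); b2 = np.zeros(3)

def forward(x, drop_rng, p_drop=0.5):
    """One stochastic forward pass with dropout kept ON at inference time."""
    h = np.maximum(x @ W1 + b1, 0.0)              # ReLU hidden layer
    mask = drop_rng.random(h.shape) > p_drop      # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)                 # inverted-dropout rescaling
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)      # softmax class probabilities

x = rng.normal(size=(1, 8))                       # one (hypothetical) input
T = 100                                           # number of MC samples
samples = np.stack([forward(x, rng) for _ in range(T)])  # shape (T, 1, 3)

mean_prob = samples.mean(axis=0)  # predictive distribution
epistemic = samples.var(axis=0)   # disagreement across stochastic passes
print(mean_prob.round(3), epistemic.round(4))
```

High per-class variance across the T passes flags posts the model is unsure about, which is how such scores can be used to rank posts for instructor attention.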
arXiv Detail & Related papers (2021-04-26T15:12:13Z)
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is the part of out-of-sample prediction error due to the learner's lack of knowledge.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
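The subtraction described above can be illustrated with hypothetical numbers (the error values below are invented for illustration, not results from the paper): a learned error predictor supplies total out-of-sample error per input, an aleatoric estimate is subtracted, and the remainder is attributed to epistemic uncertainty.

```python
import numpy as np

# Hypothetical per-input estimates, purely for illustration.
predicted_total_error = np.array([0.30, 0.12, 0.45])  # from a learned error predictor
aleatoric_estimate = np.array([0.10, 0.10, 0.10])     # irreducible-noise estimate

# Epistemic uncertainty = total predicted error minus aleatoric part,
# clipped at zero since uncertainty cannot be negative.
epistemic = np.maximum(predicted_total_error - aleatoric_estimate, 0.0)
print(epistemic)
```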
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
- An Optimal Control Approach to Learning in SIDARTHE Epidemic model [67.22168759751541]
We propose a general approach for learning time-variant parameters of dynamic compartmental models from epidemic data.
We forecast the epidemic evolution in Italy and France.
arXiv Detail & Related papers (2020-10-28T10:58:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.