A Survey on Epistemic (Model) Uncertainty in Supervised Learning: Recent
Advances and Applications
- URL: http://arxiv.org/abs/2111.01968v2
- Date: Thu, 4 Nov 2021 01:46:47 GMT
- Title: A Survey on Epistemic (Model) Uncertainty in Supervised Learning: Recent
Advances and Applications
- Authors: Xinlei Zhou and Han Liu and Farhad Pourpanah and Tieyong Zeng and
Xizhao Wang
- Abstract summary: Quantifying the uncertainty of supervised learning models plays an important role in making more reliable predictions.
Epistemic uncertainty, which usually stems from insufficient knowledge about the model, can be reduced by collecting more data or refining the model.
- Score: 18.731827159755014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantifying the uncertainty of supervised learning models plays an important
role in making more reliable predictions. Epistemic uncertainty, which usually
is due to insufficient knowledge about the model, can be reduced by collecting
more data or refining the learning models. Over the last few years, scholars
have proposed many epistemic uncertainty handling techniques which can be
roughly grouped into two categories, i.e., Bayesian and ensemble. This paper
provides a comprehensive review of epistemic uncertainty learning techniques in
supervised learning over the last five years. As such, we, first, decompose the
epistemic uncertainty into bias and variance terms. Then, a hierarchical
categorization of epistemic uncertainty learning techniques along with their
representative models is introduced. In addition, several applications such as
computer vision (CV) and natural language processing (NLP) are presented,
followed by a discussion on research gaps and possible future research
directions.
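The ensemble category the abstract mentions can be illustrated with a minimal sketch: fit several models on bootstrap resamples and read their disagreement (prediction variance) as epistemic uncertainty. The polynomial members, sample sizes, and toy data below are illustrative assumptions, not models from the survey.

```python
# Sketch: ensemble-based epistemic uncertainty for regression.
# Disagreement across members trained on bootstrap resamples is taken
# as the (reducible) model uncertainty; it should grow off-distribution.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = sin(x) + noise, observed only on [0, 3].
x_train = rng.uniform(0.0, 3.0, size=200)
y_train = np.sin(x_train) + 0.1 * rng.normal(size=200)

def fit_member(x, y, degree=5):
    """Fit one ensemble member (a polynomial) on a bootstrap resample."""
    idx = rng.integers(0, len(x), size=len(x))
    return np.polyfit(x[idx], y[idx], degree)

members = [fit_member(x_train, y_train) for _ in range(20)]

x_test = np.linspace(0.0, 5.0, 6)  # extends past the training range
preds = np.stack([np.polyval(c, x_test) for c in members])  # (M, N)

mean_pred = preds.mean(axis=0)  # ensemble prediction
epistemic = preds.var(axis=0)   # member disagreement = epistemic term

# Variance stays small inside [0, 3] and blows up under extrapolation,
# matching the intuition that epistemic uncertainty is high where data
# (knowledge about the model) is lacking.
print(epistemic)
```

Collecting data beyond x = 3 and refitting would shrink the variance there, which is exactly the "reducible by more data" property the abstract attributes to epistemic uncertainty.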
Related papers
- From Uncertainty to Clarity: Uncertainty-Guided Class-Incremental Learning for Limited Biomedical Samples via Semantic Expansion [0.0]
We propose a class-incremental learning method under limited samples in the biomedical field.
Our method achieves optimal performance, surpassing state-of-the-art methods by as much as 53.54% in accuracy.
arXiv Detail & Related papers (2024-09-12T05:22:45Z)
- A Comprehensive Survey on Evidential Deep Learning and Its Applications [64.83473301188138]
Evidential Deep Learning (EDL) provides reliable uncertainty estimation with minimal additional computation in a single forward pass.
We first delve into the theoretical foundation of EDL, the subjective logic theory, and discuss its distinctions from other uncertainty estimation frameworks.
We elaborate on its extensive applications across various machine learning paradigms and downstream tasks.
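The single-forward-pass uncertainty readout this entry describes can be sketched in the subjective-logic convention: the network emits non-negative "evidence" per class, interpreted as a Dirichlet distribution, and the leftover mass is uncertainty. The network itself is omitted here; the logits and function name are illustrative placeholders, not the paper's code.

```python
# Sketch of an EDL-style uncertainty readout (subjective logic convention):
# evidence -> Dirichlet parameters -> belief masses + uncertainty mass.
import numpy as np

def edl_uncertainty(logits):
    """Map raw outputs to per-class belief masses and an uncertainty
    mass u = K / S, where S is the Dirichlet strength."""
    evidence = np.maximum(logits, 0.0)  # ReLU evidence (non-negative)
    alpha = evidence + 1.0              # Dirichlet parameters
    S = alpha.sum()                     # Dirichlet strength
    K = len(alpha)                      # number of classes
    belief = evidence / S               # per-class belief masses
    u = K / S                           # uncertainty (vacuity) mass
    return belief, u

# Strong evidence for class 0 -> small uncertainty mass.
_, u_confident = edl_uncertainty(np.array([50.0, 0.0, 0.0]))
# No evidence at all -> uniform Dirichlet, u = 1 (maximal uncertainty).
_, u_vacuous = edl_uncertainty(np.array([0.0, 0.0, 0.0]))
print(u_confident, u_vacuous)
```

Because everything is computed from one forward pass, this avoids the repeated sampling that Bayesian approximations or ensembles require, which is the "minimal additional computation" claim in the summary.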
arXiv Detail & Related papers (2024-09-07T05:55:06Z)
- Seeing Unseen: Discover Novel Biomedical Concepts via Geometry-Constrained Probabilistic Modeling [53.7117640028211]
We present a geometry-constrained probabilistic modeling treatment to resolve the identified issues.
We incorporate a suite of critical geometric properties to impose proper constraints on the layout of constructed embedding space.
A spectral graph-theoretic method is devised to estimate the number of potential novel classes.
arXiv Detail & Related papers (2024-03-02T00:56:05Z)
- Looking deeper into interpretable deep learning in neuroimaging: a comprehensive survey [20.373311465258393]
This paper comprehensively reviews interpretable deep learning models in the neuroimaging domain.
We discuss how multiple recent neuroimaging studies leveraged model interpretability to capture anatomical and functional brain alterations most relevant to model predictions.
arXiv Detail & Related papers (2023-07-14T04:50:04Z)
- Continual Learning with Bayesian Model based on a Fixed Pre-trained Feature Extractor [55.9023096444383]
Current deep learning models are characterised by catastrophic forgetting of old knowledge when learning new classes.
Inspired by the process of learning new knowledge in human brains, we propose a Bayesian generative model for continual learning.
arXiv Detail & Related papers (2022-04-28T08:41:51Z)
- LifeLonger: A Benchmark for Continual Disease Classification [59.13735398630546]
We introduce LifeLonger, a benchmark for continual disease classification on the MedMNIST collection.
Task and class incremental learning of diseases address the issue of classifying new samples without re-training the models from scratch.
Cross-domain incremental learning addresses the issue of dealing with datasets originating from different institutions while retaining the previously obtained knowledge.
arXiv Detail & Related papers (2022-04-12T12:25:05Z)
- The worst of both worlds: A comparative analysis of errors in learning from data in psychology and machine learning [17.336655978572583]
Recent concerns that machine learning (ML) may be facing a misdiagnosis and replication crisis suggest that some published claims in ML research cannot be taken at face value.
A deeper understanding of what concerns in research in supervised ML have in common with the replication crisis in experimental science can put the new concerns in perspective.
arXiv Detail & Related papers (2022-03-12T18:26:24Z)
- On robustness of generative representations against catastrophic forgetting [17.467589890017123]
Catastrophic forgetting of previously learned knowledge while learning new tasks is a widely observed limitation of contemporary neural networks.
In this work, we aim at answering this question by posing and validating a set of research hypotheses related to the specificity of representations built internally by neural models.
We observe that representations learned by discriminative models are more prone to catastrophic forgetting than their generative counterparts, which sheds new light on the advantages of developing generative models for continual learning.
arXiv Detail & Related papers (2021-09-04T11:33:24Z)
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is the part of the out-of-sample prediction error that is due to the learner's lack of knowledge.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
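The subtraction scheme this summary describes can be sketched as follows: fit a secondary "error predictor" on the main model's observed squared errors, then subtract an aleatoric estimate to leave the epistemic part. The toy models, the polynomial error predictor, and the known-constant noise variance are all illustrative assumptions, not the paper's exact recipe.

```python
# Sketch of the DEUP idea: epistemic ~ predicted generalization error
# minus an aleatoric estimate, both learned/assumed separately.
import numpy as np

rng = np.random.default_rng(1)

# Main model: a deliberately misspecified linear fit to a quadratic target,
# so it carries large model (epistemic) error away from the data's mean.
x = rng.uniform(-2, 2, size=300)
y = x**2 + 0.2 * rng.normal(size=300)
w = np.polyfit(x, y, 1)                    # underfit main predictor
residual_sq = (np.polyval(w, x) - y) ** 2  # observed total squared error

# Secondary model: regress squared error on the input.
e = np.polyfit(x, residual_sq, 4)

def deup_epistemic(x_new, aleatoric_var=0.2**2):
    """Predicted total error minus the (assumed known) noise variance."""
    total = np.maximum(np.polyval(e, x_new), 0.0)
    return np.maximum(total - aleatoric_var, 0.0)

# The epistemic estimate should peak where the linear fit is worst
# (large |x|) and dip near the inputs the line happens to fit well.
print(deup_epistemic(np.array([0.0, 2.0])))
```

The key design choice is that the error predictor targets total error directly, so subtracting the aleatoric part isolates what could in principle be reduced by better data or a better model class.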
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
- An Optimal Control Approach to Learning in SIDARTHE Epidemic model [67.22168759751541]
We propose a general approach for learning time-variant parameters of dynamic compartmental models from epidemic data.
We forecast the epidemic evolution in Italy and France.
arXiv Detail & Related papers (2020-10-28T10:58:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.