Epistemic Deep Learning
- URL: http://arxiv.org/abs/2206.07609v1
- Date: Wed, 15 Jun 2022 15:39:52 GMT
- Title: Epistemic Deep Learning
- Authors: Shireen Kudukkil Manchingal and Fabio Cuzzolin
- Abstract summary: We introduce a concept called epistemic deep learning based on the random-set interpretation of belief functions.
We propose a novel random-set convolutional neural network for classification that produces scores for sets of classes by learning set-valued ground truth representations.
- Score: 6.144873990390373
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The belief function approach to uncertainty quantification as proposed in the
Dempster-Shafer theory of evidence is established upon the general mathematical
models for set-valued observations, called random sets. Set-valued predictions
are the most natural representations of uncertainty in machine learning. In
this paper, we introduce a concept called epistemic deep learning based on the
random-set interpretation of belief functions to model epistemic learning in
deep neural networks. We propose a novel random-set convolutional neural
network for classification that produces scores for sets of classes by learning
set-valued ground truth representations. We evaluate different formulations of
entropy and distance measures for belief functions as viable loss functions for
these random-set networks. We also discuss methods for evaluating the quality
of epistemic predictions and the performance of epistemic random-set neural
networks. We demonstrate through experiments that the epistemic approach
produces better performance results when compared to traditional approaches of
estimating uncertainty.
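The core objects in the random-set interpretation are Dempster-Shafer mass functions over sets of classes, from which belief and plausibility bounds are derived. A minimal sketch of those two quantities, assuming a toy three-class frame with a hypothetical mass assignment (the class names and mass values are illustrative, not from the paper):

```python
def belief(mass, A):
    """Bel(A): total mass committed to subsets B entirely contained in A."""
    return sum(m for B, m in mass.items() if B <= A)

def plausibility(mass, A):
    """Pl(A): total mass on subsets B that intersect A (i.e. not ruling A out)."""
    return sum(m for B, m in mass.items() if B & A)

classes = frozenset({"cat", "dog", "bird"})
# Hypothetical output of a random-set classifier: mass on sets of classes.
# Mass on the whole frame represents ignorance ("could be anything").
mass = {
    frozenset({"cat"}): 0.5,
    frozenset({"cat", "dog"}): 0.3,
    classes: 0.2,
}

A = frozenset({"cat"})
print(belief(mass, A))        # 0.5  (only the singleton {cat} is inside A)
print(plausibility(mass, A))  # 1.0  (every focal set intersects {cat})
```

The gap between Bel(A) and Pl(A) is one natural readout of epistemic uncertainty: it shrinks to zero only when the mass function collapses to an ordinary probability distribution over singletons.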
Related papers
- Tractable Function-Space Variational Inference in Bayesian Neural Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
arXiv Detail & Related papers (2023-12-28T18:33:26Z)
- Evidential Deep Learning: Enhancing Predictive Uncertainty Estimation for Earth System Science Applications [0.32302664881848275]
Evidential deep learning is a technique that extends parametric deep learning to higher-order distributions.
This study compares the uncertainty derived from evidential neural networks to those obtained from ensembles.
We show evidential deep learning models attaining predictive accuracy rivaling standard methods, while robustly quantifying both sources of uncertainty.
arXiv Detail & Related papers (2023-09-22T23:04:51Z)
- Toward Robust Uncertainty Estimation with Random Activation Functions [3.0586855806896045]
We propose a novel approach for uncertainty quantification via ensembles, called Random Activation Functions (RAFs) Ensemble.
RAFs Ensemble outperforms state-of-the-art ensemble uncertainty quantification methods on both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-02-28T13:17:56Z)
- Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
We study the evaluation of counterfactual statements through neural models.
First, we show that neural causal models (NCMs) are expressive enough to encode the structural constraints needed for counterfactual reasoning.
Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions.
arXiv Detail & Related papers (2022-09-30T18:29:09Z)
- The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world datasets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We present a principled way to measure the uncertainty of a classifier's predictions, based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Convolutional Motif Kernel Networks [1.104960878651584]
We show that our model is able to robustly learn on small datasets and reaches state-of-the-art performance on relevant healthcare prediction tasks.
Our proposed method can be utilized on DNA and protein sequences.
arXiv Detail & Related papers (2021-11-03T15:06:09Z)
- FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework called Feed-Forward Neural-Symbolic Learner (FF-NSL).
FF-NSL integrates state-of-the-art ILP systems based on Answer Set semantics with neural networks to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z)
- BDNNSurv: Bayesian deep neural networks for survival analysis using pseudo values [7.707091943385522]
We propose a Bayesian hierarchical deep neural network model for the modeling and prediction of survival data.
Compared with previously studied methods, the new proposal can provide a point estimate of the survival probability.
Python code implementing the proposed approach is provided.
arXiv Detail & Related papers (2021-01-07T20:18:43Z)
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions match reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
- Ramifications of Approximate Posterior Inference for Bayesian Deep Learning in Adversarial and Out-of-Distribution Settings [7.476901945542385]
We show that Bayesian deep learning models on certain occasions marginally outperform conventional neural networks.
Preliminary investigations indicate the potential inherent role of bias due to choices of initialisation, architecture or activation functions.
arXiv Detail & Related papers (2020-09-03T16:58:15Z)
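Several of the related papers above (ensembles, Bayesian neural networks, evidential models) share a sampling-based recipe for uncertainty: draw multiple predictions for the same input, then split total predictive entropy into an aleatoric part (average per-member entropy) and an epistemic part (member disagreement, i.e. the mutual information). A minimal sketch with randomly simulated logits standing in for ensemble members or MC-dropout passes (all numbers hypothetical):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax along the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p):
    # Shannon entropy along the last axis.
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

rng = np.random.default_rng(0)
# Simulated logits from 10 "ensemble members" for one 3-class input.
logits = rng.normal(loc=[2.0, 0.0, 0.0], scale=0.5, size=(10, 3))
probs = softmax(logits)                  # (10, 3) per-member predictions

mean_p = probs.mean(axis=0)              # averaged prediction
total = entropy(mean_p)                  # total predictive uncertainty
aleatoric = probs_entropy = entropy(probs).mean()  # expected data uncertainty
epistemic = total - aleatoric            # mutual information: disagreement
print(mean_p, total, aleatoric, epistemic)
```

Because entropy is concave, the epistemic term is always non-negative; it vanishes exactly when all members agree, which is the behavior the ensemble-based papers above probe on out-of-distribution inputs.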
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.