CreINNs: Credal-Set Interval Neural Networks for Uncertainty Estimation in Classification Tasks
- URL: http://arxiv.org/abs/2401.05043v3
- Date: Sat, 25 Jan 2025 09:18:33 GMT
- Title: CreINNs: Credal-Set Interval Neural Networks for Uncertainty Estimation in Classification Tasks
- Authors: Kaizheng Wang, Keivan Shariatmadar, Shireen Kudukkil Manchingal, Fabio Cuzzolin, David Moens, Hans Hallez
- Abstract summary: This work presents a novel approach, termed Credal-Set Interval Neural Networks (CreINNs) for classification.
CreINNs are designed to predict an upper and a lower probability bound for each class, rather than a single probability value.
Experiments on standard multiclass and binary classification tasks demonstrate that the proposed CreINNs can achieve superior or comparable quality of uncertainty estimation.
- Score: 4.904199965391026
- License:
- Abstract: Effective uncertainty estimation is becoming increasingly attractive for enhancing the reliability of neural networks. This work presents a novel approach, termed Credal-Set Interval Neural Networks (CreINNs), for classification. CreINNs retain the fundamental structure of traditional Interval Neural Networks, capturing weight uncertainty through deterministic intervals. CreINNs are designed to predict an upper and a lower probability bound for each class, rather than a single probability value. The probability intervals can define a credal set, facilitating estimating different types of uncertainties associated with predictions. Experiments on standard multiclass and binary classification tasks demonstrate that the proposed CreINNs can achieve superior or comparable quality of uncertainty estimation compared to variational Bayesian Neural Networks (BNNs) and Deep Ensembles. Furthermore, CreINNs significantly reduce the computational complexity of variational BNNs during inference. Moreover, the effective uncertainty quantification of CreINNs is also verified when the input data are intervals.
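The abstract's core idea, deterministic weight intervals that yield class-wise probability bounds defining a credal set, can be illustrated with a small numerical sketch. This is a hypothetical toy construction (layer sizes, interval widths, and the interval-arithmetic forward pass are illustrative assumptions, not the authors' exact formulation):

```python
import numpy as np

def interval_linear(x, W_low, W_high):
    """Propagate a point input through a layer whose weights are intervals.
    Each elementwise product is bounded by its two extreme values."""
    prods = np.stack([W_low * x, W_high * x])
    return prods.min(axis=0).sum(axis=1), prods.max(axis=0).sum(axis=1)

def probability_bounds(z_low, z_high):
    """Map logit intervals to class-wise probability intervals.
    Lower bound for class k: its own logit at the minimum, rivals at the maximum."""
    e_low, e_high = np.exp(z_low), np.exp(z_high)
    p_low = e_low / (e_low + (e_high.sum() - e_high))
    p_high = e_high / (e_high + (e_low.sum() - e_low))
    return p_low, p_high

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))            # weight midpoints (3 classes, 4 features)
W_low, W_high = W - 0.1, W + 0.1       # deterministic weight intervals
x = rng.normal(size=4)

z_low, z_high = interval_linear(x, W_low, W_high)
p_low, p_high = probability_bounds(z_low, z_high)
# Each class k gets an interval [p_low[k], p_high[k]]; jointly these intervals
# define a credal set: every distribution consistent with the bounds.
```

The width of each interval reflects how much the prediction could move within the weight intervals, which is what makes the credal set usable for separating uncertainty types.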
Related papers
- Enhancing Trustworthiness of Graph Neural Networks with Rank-Based Conformal Training [17.120502204791407]
Conformal Prediction can produce statistically guaranteed uncertainty estimates.
We propose a rank-based CP-during-training framework for GNNs (RCP-GNN) to obtain reliable uncertainty estimates.
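The conformal prediction machinery this entry builds on can be sketched in a few lines of split-conformal classification. The softmax outputs, labels, and score function below are toy assumptions; RCP-GNN's rank-based, in-training variant is more involved:

```python
import numpy as np

rng = np.random.default_rng(1)
n_cal, n_classes = 500, 3
# Hypothetical softmax outputs and true labels for a held-out calibration set.
probs_cal = rng.dirichlet(np.ones(n_classes), size=n_cal)
y_cal = rng.integers(0, n_classes, size=n_cal)

# Nonconformity score: 1 minus the model probability of the true class.
scores = 1.0 - probs_cal[np.arange(n_cal), y_cal]

# Calibrated threshold for miscoverage level alpha (finite-sample correction).
alpha = 0.1
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
q_hat = np.quantile(scores, q_level, method="higher")

def prediction_set(probs_test):
    """All classes whose nonconformity score falls below the threshold."""
    return np.where(1.0 - probs_test <= q_hat)[0]

# Under exchangeability, the set contains the true label with prob >= 1 - alpha.
```

The statistical guarantee comes from the calibration quantile alone, which is why conformal sets are valid regardless of how well the underlying network is trained.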
arXiv Detail & Related papers (2025-01-06T05:19:24Z)
- Fast and reliable uncertainty quantification with neural network ensembles for industrial image classification [1.104960878651584]
Image classification with neural networks (NNs) is widely used in industrial processes.
NNs tend to make confident yet incorrect predictions when confronted with out-of-distribution (OOD) data.
Deep ensembles, composed of multiple independent NNs, have been shown to perform strongly but are computationally expensive.
This study investigates the predictive and uncertainty performance of efficient NN ensembles in the context of image classification for industrial processes.
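The ensemble-based uncertainty estimation these entries rely on is usually the standard entropy decomposition: total predictive entropy splits into expected per-member entropy (aleatoric) plus member disagreement (epistemic). A minimal sketch with hypothetical member outputs:

```python
import numpy as np

def entropy(p, axis=-1):
    """Shannon entropy in nats, guarding against log(0)."""
    return -np.sum(p * np.log(np.clip(p, 1e-12, None)), axis=axis)

# Hypothetical softmax outputs of 4 ensemble members for one input (3 classes).
member_probs = np.array([
    [0.70, 0.20, 0.10],
    [0.60, 0.30, 0.10],
    [0.10, 0.80, 0.10],   # one member disagrees: epistemic signal
    [0.65, 0.25, 0.10],
])

mean_probs = member_probs.mean(axis=0)     # ensemble predictive distribution
total = entropy(mean_probs)                # total predictive uncertainty
aleatoric = entropy(member_probs).mean()   # expected per-member entropy
epistemic = total - aleatoric              # mutual information (disagreement)
```

By Jensen's inequality the disagreement term is never negative, and it is exactly this term that grows on out-of-distribution inputs while staying small on in-distribution ones.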
arXiv Detail & Related papers (2024-03-15T10:38:48Z)
- Uncertainty Quantification in Multivariable Regression for Material Property Prediction with Bayesian Neural Networks [37.69303106863453]
We introduce an approach for uncertainty quantification (UQ) within physics-informed BNNs.
We present case studies for predicting the creep rupture life of steel alloys.
The most promising framework for creep life prediction is BNNs based on Markov Chain Monte Carlo approximation of the posterior distribution of network parameters.
arXiv Detail & Related papers (2023-11-04T19:40:16Z)
- The Boundaries of Verifiable Accuracy, Robustness, and Generalisation in Deep Learning [71.14237199051276]
We consider classical distribution-agnostic framework and algorithms minimising empirical risks.
We show that there is a large family of tasks for which computing and verifying ideal stable and accurate neural networks is extremely challenging.
arXiv Detail & Related papers (2023-09-13T16:33:27Z)
- Improving Uncertainty Quantification of Variance Networks by Tree-Structured Learning [10.566352737844369]
We propose a novel tree-structured local neural network model that partitions the feature space into multiple regions based on uncertainty heterogeneity.
The proposed Uncertainty-Splitting Neural Regression Tree (USNRT) employs novel splitting criteria.
USNRT or its ensemble shows superior performance compared to some recent popular methods for quantifying uncertainty with variances.
arXiv Detail & Related papers (2022-12-24T05:25:09Z)
- Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted by up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
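Interval bound propagation, the baseline this entry compares against, pushes an input box through each layer in center-radius form. A minimal sketch for one linear layer followed by ReLU, with hypothetical weights and input bounds:

```python
import numpy as np

def ibp_linear(x_low, x_high, W, b):
    """Propagate an input box through y = Wx + b in center-radius form."""
    center = (x_low + x_high) / 2.0
    radius = (x_high - x_low) / 2.0
    y_center = W @ center + b
    y_radius = np.abs(W) @ radius      # worst-case growth of the box
    return y_center - y_radius, y_center + y_radius

def ibp_relu(x_low, x_high):
    """ReLU is monotone, so it maps bounds to bounds directly."""
    return np.maximum(x_low, 0.0), np.maximum(x_high, 0.0)

rng = np.random.default_rng(2)
W, b = rng.normal(size=(2, 3)), rng.normal(size=2)
x_low = np.array([-0.1, 0.0, 0.2])
x_high = np.array([0.1, 0.3, 0.4])

h_low, h_high = ibp_relu(*ibp_linear(x_low, x_high, W, b))
# Every input inside the original box is guaranteed to land in [h_low, h_high].
```

Bounds computed this way are sound but can be loose, since the box over-approximates the true reachable set at every layer; this looseness is what tighter reachability analyses aim to reduce.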
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of a classifier's predictions based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
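The Nadaraya-Watson estimate in question weighs training labels by a kernel on input similarity. A minimal sketch with a Gaussian kernel (the bandwidth, toy data, and direct input-space distances are illustrative assumptions; NUQ operates on learned representations):

```python
import numpy as np

def nw_label_distribution(x_query, X_train, y_train, n_classes, bandwidth=0.5):
    """Kernel-weighted estimate of p(y | x): nearby training labels vote more."""
    sq_dists = np.sum((X_train - x_query) ** 2, axis=1)
    weights = np.exp(-sq_dists / (2.0 * bandwidth ** 2))
    probs = np.array([weights[y_train == c].sum() for c in range(n_classes)])
    return probs / probs.sum()

# Toy 2-D data: class 0 near the origin, class 1 near (2, 2).
X_train = np.array([[0.0, 0.0], [0.1, 0.1], [2.0, 2.0], [2.1, 1.9]])
y_train = np.array([0, 0, 1, 1])

p = nw_label_distribution(np.array([0.05, 0.05]), X_train, y_train, n_classes=2)
# Near the class-0 cluster the estimate is sharply peaked; far from all
# training points the kernel weights shrink and the estimate flattens.
```

The entropy of this estimated distribution then serves as the uncertainty score, with no change to the underlying deterministic network.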
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Multidimensional Uncertainty-Aware Evidential Neural Networks [21.716045815385268]
We propose a novel uncertainty-aware evidential NN called WGAN-ENN (WENN) for solving an out-of-distribution (OOD) detection problem.
We took a hybrid approach that combines a Wasserstein Generative Adversarial Network (WGAN) with ENNs to jointly train a model with prior knowledge of a certain class.
We demonstrated that the estimation of uncertainty by WENN can significantly help distinguish OOD samples from boundary samples.
arXiv Detail & Related papers (2020-12-26T04:28:56Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Frequentist Uncertainty in Recurrent Neural Networks via Blockwise Influence Functions [121.10450359856242]
Recurrent neural networks (RNNs) are instrumental in modelling sequential and time-series data.
Existing approaches for uncertainty quantification in RNNs are based predominantly on Bayesian methods.
We develop a frequentist alternative that: (a) does not interfere with model training or compromise its accuracy, (b) applies to any RNN architecture, and (c) provides theoretical coverage guarantees on the estimated uncertainty intervals.
arXiv Detail & Related papers (2020-06-20T22:45:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.