Improving Uncertainty Quantification of Variance Networks by
Tree-Structured Learning
- URL: http://arxiv.org/abs/2212.12658v2
- Date: Thu, 20 Jul 2023 03:00:05 GMT
- Authors: Wenxuan Ma, Xing Yan, and Kun Zhang
- Abstract summary: We propose a novel tree-structured local neural network model that partitions the feature space into multiple regions based on uncertainty heterogeneity.
The proposed Uncertainty-Splitting Neural Regression Tree (USNRT) employs novel splitting criteria.
USNRT or its ensemble shows superior performance compared to some recent popular methods for quantifying uncertainty with variances.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: To improve the uncertainty quantification of variance networks, we propose a
novel tree-structured local neural network model that partitions the feature
space into multiple regions based on uncertainty heterogeneity. A tree is built
on the given training data; its leaf nodes represent different regions where
region-specific neural networks are trained to predict both the mean and the
variance for quantifying uncertainty. The proposed Uncertainty-Splitting
Neural Regression Tree (USNRT) employs novel splitting criteria. At each node,
a neural network is trained on the full data first, and a statistical test for
the residuals is conducted to find the best split, corresponding to the two
sub-regions with the most significant uncertainty heterogeneity between them.
USNRT is computationally friendly because very few leaf nodes are sufficient
and pruning is unnecessary. Furthermore, an ensemble version can be easily
constructed to estimate the total uncertainty, including both its aleatoric and
epistemic components. On extensive UCI datasets, USNRT or its ensemble shows superior
performance compared to some recent popular methods for quantifying uncertainty
with variances. Through comprehensive visualization and analysis, we uncover
how USNRT works and show its merits, revealing that uncertainty heterogeneity
does exist in many datasets and can be learned by USNRT.
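The node-splitting step described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: a linear least-squares fit stands in for the node's neural network, Levene's test on residuals stands in for the paper's statistical test (whose exact form the abstract does not specify), and `best_uncertainty_split`, the median thresholds, and the toy data are all illustrative choices.

```python
import numpy as np
from scipy import stats

def best_uncertainty_split(X, residuals, alpha=0.05):
    """Find the (feature, threshold) split whose two sides show the most
    significant difference in residual variance, i.e. the strongest
    uncertainty heterogeneity between the two candidate sub-regions."""
    best = None  # (p_value, feature_index, threshold)
    for j in range(X.shape[1]):
        thresh = np.median(X[:, j])
        left = residuals[X[:, j] <= thresh]
        right = residuals[X[:, j] > thresh]
        if len(left) < 5 or len(right) < 5:
            continue
        # Levene's test: H0 = the two sub-regions have equal residual variance
        _, p = stats.levene(left, right)
        if best is None or p < best[0]:
            best = (p, j, thresh)
    if best is not None and best[0] < alpha:
        return best  # significant heterogeneity -> split this node
    return None      # no significant split -> stop growing the tree here

# Toy data: the noise level of y depends on the sign of the first feature
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
noise_scale = np.where(X[:, 0] <= 0.0, 0.1, 1.0)
y = X @ np.array([1.0, -0.5]) + rng.normal(scale=noise_scale)

# Stand-in for the node's neural network: a linear least-squares fit
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ coef

split = best_uncertainty_split(X, residuals)
print(split)  # should select feature 0, whose sides differ most in variance
```

In the full method, each accepted split recurses on the two sub-regions, and each resulting leaf trains its own mean-and-variance network; the ensemble version then combines leaves' predicted variances (aleatoric) with the spread of predictions across ensemble members (epistemic).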
Related papers
- Reliable uncertainty with cheaper neural network ensembles: a case study in industrial parts classification [1.104960878651584]
In operations research (OR), predictive models often encounter out-of-distribution (OOD) scenarios.
Deep ensembles, composed of multiple independent NNs, have emerged as a promising approach.
This study is the first to provide a comprehensive comparison of a single NN, a deep ensemble, and three efficient NN ensemble methods.
arXiv Detail & Related papers (2024-03-15T10:38:48Z) - CreINNs: Credal-Set Interval Neural Networks for Uncertainty Estimation
in Classification Tasks [5.19656787424626]
Uncertainty estimation is increasingly attractive for improving the reliability of neural networks.
We present novel credal-set interval neural networks (CreINNs) designed for classification tasks.
arXiv Detail & Related papers (2024-01-10T10:04:49Z) - Be Bayesian by Attachments to Catch More Uncertainty [27.047781689062944]
We propose a new Bayesian Neural Network with an Attached structure (ABNN) to catch more uncertainty from out-of-distribution (OOD) data.
ABNN is composed of an expectation module and several distribution modules.
arXiv Detail & Related papers (2023-10-19T07:28:39Z) - Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom holds that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z) - Variational Neural Networks [88.24021148516319]
We propose a method for uncertainty estimation in neural networks called the Variational Neural Network (VNN).
VNN generates parameters for the output distribution of a layer by transforming its inputs with learnable sub-layers.
In uncertainty quality estimation experiments, we show that VNNs achieve better uncertainty quality than Monte Carlo Dropout or Bayes By Backpropagation methods.
arXiv Detail & Related papers (2022-07-04T15:41:02Z) - Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted by up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z) - Comparative Analysis of Interval Reachability for Robust Implicit and
Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z) - A Biased Graph Neural Network Sampler with Near-Optimal Regret [57.70126763759996]
Graph neural networks (GNN) have emerged as a vehicle for applying deep network architectures to graph and relational data.
In this paper, we build upon existing work and treat GNN neighbor sampling as a multi-armed bandit problem.
We introduce a newly-designed reward function that introduces some degree of bias designed to reduce variance and avoid unstable, possibly-unbounded payouts.
arXiv Detail & Related papers (2021-03-01T15:55:58Z) - Encoding the latent posterior of Bayesian Neural Networks for
uncertainty quantification [10.727102755903616]
We aim for efficient deep BNNs amenable to complex computer vision architectures.
We achieve this by leveraging variational autoencoders (VAEs) to learn the interaction and the latent distribution of the parameters at each network layer.
Our approach, Latent-Posterior BNN (LP-BNN), is compatible with the recent BatchEnsemble method, leading to highly efficient (in terms of computation and memory during both training and testing) ensembles.
arXiv Detail & Related papers (2020-12-04T19:50:09Z) - Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy, under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.