AutoDEUQ: Automated Deep Ensemble with Uncertainty Quantification
- URL: http://arxiv.org/abs/2110.13511v1
- Date: Tue, 26 Oct 2021 09:12:23 GMT
- Title: AutoDEUQ: Automated Deep Ensemble with Uncertainty Quantification
- Authors: Romain Egele, Romit Maulik, Krishnan Raghavan, Prasanna Balaprakash,
Bethany Lusch
- Abstract summary: We propose AutoDEUQ, an automated approach for generating an ensemble of deep neural networks.
We show that AutoDEUQ outperforms probabilistic backpropagation, Monte Carlo dropout, deep ensemble, distribution-free ensembles, and hyper ensemble methods on a number of regression benchmarks.
- Score: 0.9449650062296824
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks are powerful predictors for a variety of tasks. However,
they do not capture uncertainty directly. Using neural network ensembles to
quantify uncertainty is competitive with approaches based on Bayesian neural
networks while benefiting from better computational scalability. However,
building ensembles of neural networks is a challenging task because, in
addition to choosing the right neural architecture or hyperparameters for each
member of the ensemble, there is an added cost of training each model. We
propose AutoDEUQ, an automated approach for generating an ensemble of deep
neural networks. Our approach leverages joint neural architecture and
hyperparameter search to generate ensembles. We use the law of total variance
to decompose the predictive variance of deep ensembles into aleatoric (data)
and epistemic (model) uncertainties. We show that AutoDEUQ outperforms
probabilistic backpropagation, Monte Carlo dropout, deep ensemble,
distribution-free ensembles, and hyper ensemble methods on a number of
regression benchmarks.
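The decomposition mentioned in the abstract follows the law of total variance: if each ensemble member (indexed by its architecture/hyperparameters theta) predicts a Gaussian mean mu_theta(x) and variance sigma^2_theta(x), then Var[y|x] = E_theta[sigma^2_theta(x)] (aleatoric) + Var_theta[mu_theta(x)] (epistemic). Below is a minimal numpy sketch of that split, assuming such mean/variance outputs are available; the function and variable names are illustrative, not taken from the paper's code.

```python
import numpy as np

def decompose_uncertainty(means, variances):
    """Law-of-total-variance split for a deep ensemble.

    means, variances: arrays of shape (n_members, n_points) holding each
    member's predicted Gaussian mean and variance at every test point.
    """
    aleatoric = variances.mean(axis=0)   # E_theta[sigma_theta^2(x)]
    epistemic = means.var(axis=0)        # Var_theta[mu_theta(x)]
    total = aleatoric + epistemic        # Var[y | x]
    return aleatoric, epistemic, total

# Toy usage: a 5-member ensemble evaluated at 3 test points (numbers are illustrative).
mu = np.array([[1.0, 0.2, -0.5],
               [1.1, 0.1, -0.4],
               [0.9, 0.3, -0.6],
               [1.0, 0.2, -0.5],
               [1.2, 0.0, -0.3]])
sigma2 = np.full_like(mu, 0.05)
print(decompose_uncertainty(mu, sigma2))
```

Points where the members agree but each reports high variance are dominated by aleatoric (data) noise; points where the members disagree contribute epistemic (model) uncertainty.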
Related papers
- LoRA-Ensemble: Efficient Uncertainty Modelling for Self-attention Networks [52.46420522934253]
We introduce LoRA-Ensemble, a parameter-efficient deep ensemble method for self-attention networks.
By employing a single pre-trained self-attention network with weights shared across all members, we train member-specific low-rank matrices for the attention projections.
Our method exhibits superior calibration compared to explicit ensembles and achieves similar or better accuracy across various prediction tasks and datasets.
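A minimal numpy sketch of the parameter-sharing idea summarized above: every ensemble member reuses a single frozen projection matrix and adds its own trainable low-rank update. The names, shapes, and rank are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, rank, n_members = 64, 4, 8

# One shared, frozen attention projection (e.g., a query projection).
W_shared = rng.normal(size=(d, d))

# Member-specific low-rank factors; only these would be trained.
A = rng.normal(scale=0.01, size=(n_members, d, rank))
B = rng.normal(scale=0.01, size=(n_members, rank, d))

def member_projection(x, i):
    """Apply the i-th ensemble member's projection: shared weights + low-rank update."""
    W_i = W_shared + A[i] @ B[i]
    return x @ W_i.T

x = rng.normal(size=(2, d))
outputs = np.stack([member_projection(x, i) for i in range(n_members)])
print(outputs.shape)  # (n_members, 2, d)
```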
arXiv Detail & Related papers (2024-05-23T11:10:32Z) - Message Passing Variational Autoregressive Network for Solving Intractable Ising Models [6.261096199903392]
Many deep neural networks have been used to solve Ising models, including autoregressive neural networks, convolutional neural networks, recurrent neural networks, and graph neural networks.
Here we propose a variational autoregressive architecture with a message passing mechanism, which can effectively utilize the interactions between spin variables.
The new network trained under an annealing framework outperforms existing methods in solving several prototypical Ising spin Hamiltonians, especially for larger spin systems at low temperatures.
arXiv Detail & Related papers (2024-04-09T11:27:07Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
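The permutation symmetry referred to above can be checked directly: permuting the hidden neurons of a feedforward network, together with the matching rows and columns of the adjacent weight matrices, leaves the network function unchanged. A small numpy check of this fact (illustrative only, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(3)
W1, b1 = rng.normal(size=(10, 4)), rng.normal(size=10)
W2, b2 = rng.normal(size=(3, 10)), rng.normal(size=3)

def mlp(x, W1, b1, W2, b2):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Permute the hidden neurons: rows of W1 and b1, columns of W2.
perm = rng.permutation(10)
x = rng.normal(size=4)
out_a = mlp(x, W1, b1, W2, b2)
out_b = mlp(x, W1[perm], b1[perm], W2[:, perm], b2)
print(np.allclose(out_a, out_b))  # True: the function is unchanged
```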
arXiv Detail & Related papers (2023-02-27T18:52:38Z) - Quantifying uncertainty for deep learning based forecasting and
flow-reconstruction using neural architecture search ensembles [0.8258451067861933]
We present an automated approach to deep neural network (DNN) discovery and demonstrate how this may also be utilized for ensemble-based uncertainty quantification.
We highlight how the proposed method not only discovers high-performing neural network ensembles for our tasks, but also quantifies uncertainty seamlessly.
We demonstrate the feasibility of this framework on two tasks, both for sea-surface temperature: forecasting from historical data and flow reconstruction from sparse sensors.

arXiv Detail & Related papers (2023-02-20T03:57:06Z) - Robust Training and Verification of Implicit Neural Networks: A
Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
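As a rough illustration of what an $\ell_\infty$-norm box over-approximation of a reachable set means, the sketch below propagates an input box through a single explicit ReLU layer using plain interval arithmetic; it is not the paper's non-Euclidean contractive construction for implicit networks.

```python
import numpy as np

def relu_box(lower, upper, W, b):
    """Propagate an l_inf box [lower, upper] through x -> relu(W x + b)."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lower = W_pos @ lower + W_neg @ upper + b
    new_upper = W_pos @ upper + W_neg @ lower + b
    # ReLU is monotone, so applying it to the bounds keeps them sound.
    return np.maximum(new_lower, 0.0), np.maximum(new_upper, 0.0)

rng = np.random.default_rng(1)
W, b = rng.normal(size=(5, 3)), rng.normal(size=5)
x, eps = rng.normal(size=3), 0.1
lo, hi = relu_box(x - eps, x + eps, W, b)
print(lo, hi)  # element-wise bounds on relu(W x + b) over the input box
```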
arXiv Detail & Related papers (2022-08-08T03:13:24Z) - The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU network with standard Gaussian weights and uniformly distributed biases can solve this separation problem with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z) - Epistemic Neural Networks [25.762699400341972]
In principle, ensemble-based approaches produce effective joint predictions.
But the computational costs of training large ensembles can become prohibitive.
We introduce the epinet: an architecture that can supplement any conventional neural network.
arXiv Detail & Related papers (2021-07-19T14:37:57Z) - Encoding the latent posterior of Bayesian Neural Networks for
uncertainty quantification [10.727102755903616]
We aim for efficient deep BNNs amenable to complex computer vision architectures.
We achieve this by leveraging variational autoencoders (VAEs) to learn the interaction and the latent distribution of the parameters at each network layer.
Our approach, Latent-Posterior BNN (LP-BNN), is compatible with the recent BatchEnsemble method, leading to highly efficient (in terms of computation and memory during both training and testing) ensembles.
arXiv Detail & Related papers (2020-12-04T19:50:09Z) - Identifying Critical Neurons in ANN Architectures using Mixed Integer
Programming [11.712073757744452]
We introduce a mixed integer program (MIP) for assigning importance scores to each neuron in deep neural network architectures.
We drive the solver to minimize the number of critical neurons (i.e., those with high importance scores) that need to be kept to maintain the overall accuracy of the trained neural network.
arXiv Detail & Related papers (2020-02-17T21:32:47Z) - Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
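A toy sketch of layer-wise fusion by neuron matching: a hard assignment (a special case of optimal transport with uniform marginals) aligns the neurons of one layer of two models before averaging. The full method also propagates alignments through consecutive layers, which is omitted here; names and shapes are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def fuse_layer(W_a, W_b):
    """Align the neurons (rows) of W_b to those of W_a, then average."""
    # Cost of matching neuron i of model A to neuron j of model B.
    cost = np.linalg.norm(W_a[:, None, :] - W_b[None, :, :], axis=-1)
    row, col = linear_sum_assignment(cost)  # hard optimal-transport plan
    return 0.5 * (W_a[row] + W_b[col])

rng = np.random.default_rng(2)
W_a, W_b = rng.normal(size=(16, 8)), rng.normal(size=(16, 8))
print(fuse_layer(W_a, W_b).shape)  # (16, 8)
```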
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.