High Accuracy Uncertainty-Aware Interatomic Force Modeling with
Equivariant Bayesian Neural Networks
- URL: http://arxiv.org/abs/2304.03694v1
- Date: Wed, 5 Apr 2023 10:39:38 GMT
- Title: High Accuracy Uncertainty-Aware Interatomic Force Modeling with
Equivariant Bayesian Neural Networks
- Authors: Tim Rensmeyer, Benjamin Craig, Denis Kramer, Oliver Niggemann
- Abstract summary: We introduce a new Markov chain Monte Carlo (MCMC) sampling algorithm for learning interatomic forces.
In addition, we introduce a new stochastic neural network model based on the NequIP architecture and demonstrate that, when combined with our novel sampling algorithm, we obtain predictions with state-of-the-art accuracy as well as a good measure of uncertainty.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Even though Bayesian neural networks offer a promising framework for modeling
uncertainty, performing active learning, and incorporating prior physical knowledge, few
applications of them can be found in the context of interatomic force modeling.
One of the main challenges in applying them to learning interatomic forces
is the lack of suitable Markov chain Monte Carlo (MCMC) sampling algorithms for the
posterior density, as the commonly used algorithms do not converge in a
practical amount of time for many of the state-of-the-art architectures. In
response to this challenge, we introduce in this paper a new MCMC
sampling algorithm that circumvents the problems of the
existing sampling methods. In addition, we introduce a new stochastic neural
network model based on the NequIP architecture and demonstrate that, when
combined with our novel sampling algorithm, it yields predictions with
state-of-the-art accuracy as well as a good measure of uncertainty.
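As a rough illustration of the kind of posterior sampling the abstract calls for, the sketch below draws approximate samples of a force model's weights with stochastic-gradient Langevin dynamics (SGLD). This is a generic stand-in, not the paper's new algorithm: the NequIP-based architecture is not reproduced, and `force_model`, `train_loader`, and all hyperparameters are hypothetical.

```python
# Minimal SGLD sketch for sampling an uncertainty-aware force model's
# posterior. Hypothetical stand-ins: force_model (maps coordinates to
# per-atom forces), train_loader (yields (coords, forces) mini-batches).
import copy
import torch

def sgld_sample(force_model, train_loader, n_steps=10_000, lr=1e-5,
                prior_std=1.0, keep_every=500):
    """Collect approximate posterior samples of the model weights."""
    samples = []
    params = [p for p in force_model.parameters() if p.requires_grad]
    for step in range(n_steps):
        coords, true_forces = next(iter(train_loader))  # fresh mini-batch (brevity over speed)
        pred_forces = force_model(coords)
        # Negative log-posterior: Gaussian force likelihood + Gaussian weight
        # prior (a full treatment would rescale the mini-batch likelihood).
        loss = ((pred_forces - true_forces) ** 2).sum()
        loss = loss + sum((p ** 2).sum() for p in params) / (2 * prior_std ** 2)
        force_model.zero_grad()
        loss.backward()
        with torch.no_grad():
            for p in params:
                noise = torch.randn_like(p) * (2.0 * lr) ** 0.5
                p.add_(-lr * p.grad + noise)  # Langevin step: gradient drift + injected noise
        if (step + 1) % keep_every == 0:
            samples.append(copy.deepcopy(force_model.state_dict()))
    return samples
```

Predictive force uncertainty then comes from the spread of predictions across the stored weight samples; the abstract's point is that plain samplers of this kind converge too slowly for many state-of-the-art architectures, which is what its new algorithm addresses.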
Related papers
- Uncertainty Quantification of Graph Convolution Neural Network Models of
Evolving Processes [0.8749675983608172]
We show that Stein variational inference is a viable alternative to Monte Carlo methods for complex neural network models.
For our exemplars, Stein variational inference gave uncertainty profiles through time similar to those from Hamiltonian Monte Carlo (a minimal SVGD update sketch appears after this list).
arXiv Detail & Related papers (2024-02-17T03:19:23Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Single-model uncertainty quantification in neural network potentials does not consistently outperform model ensembles [0.7499722271664145]
Neural networks (NNs) often assign high confidence to their predictions, even for points far out-of-distribution.
Uncertainty quantification (UQ) is therefore a challenge when NNs are employed to model interatomic potentials in materials systems.
Differentiable UQ techniques can find new informative data and drive active learning loops for robust potentials (an ensemble-spread sketch appears after this list).
arXiv Detail & Related papers (2023-05-02T19:41:17Z)
- Quantifying uncertainty for deep learning based forecasting and flow-reconstruction using neural architecture search ensembles [0.8258451067861933]
We present an automated approach to deep neural network (DNN) discovery and demonstrate how this may also be utilized for ensemble-based uncertainty quantification.
We highlight how the proposed method not only discovers high-performing neural network ensembles for our tasks, but also quantifies uncertainty seamlessly.
We demonstrate the feasibility of this framework on two tasks: forecasting from historical data and flow reconstruction from sparse sensors, both for sea-surface temperature data.
arXiv Detail & Related papers (2023-02-20T03:57:06Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs (a plain-IBP sketch appears after this list).
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Scalable computation of prediction intervals for neural networks via matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z)
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
- Revisit Geophysical Imaging in A New View of Physics-informed Generative Adversarial Learning [2.12121796606941]
Full waveform inversion (FWI) produces high-resolution subsurface models.
FWI with a least-squares loss function suffers from many drawbacks, such as the local-minima problem.
Recent works relying on partial differential equations and neural networks show promising performance for two-dimensional FWI.
We propose an unsupervised learning paradigm that integrates the wave equation with a discriminative network to accurately estimate physically consistent models.
arXiv Detail & Related papers (2021-09-23T15:54:40Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Stochastic Markov Gradient Descent and Training Low-Bit Neural Networks [77.34726150561087]
We introduce Stochastic Markov Gradient Descent (SMGD), a discrete optimization method applicable to training quantized neural networks.
We provide theoretical guarantees of algorithm performance as well as encouraging numerical results.
arXiv Detail & Related papers (2020-08-25T15:48:15Z)
- Amortized Bayesian Inference for Models of Cognition [0.1529342790344802]
Recent advances in simulation-based inference using specialized neural network architectures circumvent many previous problems of approximate Bayesian computation.
We provide a general introduction to amortized Bayesian parameter estimation and model comparison.
arXiv Detail & Related papers (2020-05-08T08:12:15Z)
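The Stein variational entry above ("Uncertainty Quantification of Graph Convolution Neural Network Models of Evolving Processes") contrasts particle-based variational updates with Hamiltonian Monte Carlo. As a hedged, generic illustration rather than that paper's implementation, here is a single Stein variational gradient descent (SVGD) update with an RBF kernel and the median bandwidth heuristic; `grad_log_post` is a hypothetical callable returning per-particle gradients of the log-posterior.

```python
# One SVGD update: the attractive term pulls particles toward high posterior
# density; the repulsive kernel-gradient term keeps them spread apart.
import numpy as np

def svgd_step(particles, grad_log_post, step_size=1e-2):
    """particles: (n, d) array of parameter vectors; returns an updated copy."""
    n, _ = particles.shape
    diffs = particles[:, None, :] - particles[None, :, :]  # (n, n, d)
    sq_dists = (diffs ** 2).sum(-1)                        # (n, n)
    h = np.median(sq_dists) / max(np.log(n + 1), 1e-8)     # median heuristic
    k = np.exp(-sq_dists / (h + 1e-12))                    # RBF kernel matrix
    grads = grad_log_post(particles)                       # (n, d)
    attract = k @ grads                                    # kernel-weighted log-posterior gradients
    repulse = (k[:, :, None] * 2 * diffs / (h + 1e-12)).sum(axis=1)
    return particles + step_size * (attract + repulse) / n
```

Run to convergence, the particles approximate posterior samples, which is why the method is a candidate alternative to running a full MCMC chain.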
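For the entry on single-model UQ versus ensembles ("Single-model uncertainty quantification in neural network potentials does not consistently outperform model ensembles"), the ensemble baseline can be summarized in a few lines: train several force models independently and read epistemic uncertainty off the spread of their predictions. A minimal sketch, assuming each element of the hypothetical `models` list maps atomic coordinates to an (n_atoms, 3) NumPy force array:

```python
import numpy as np

def ensemble_force_uncertainty(models, coords):
    """Return the mean force prediction and a per-atom uncertainty estimate."""
    preds = np.stack([model(coords) for model in models])  # (n_models, n_atoms, 3)
    mean = preds.mean(axis=0)
    std = preds.std(axis=0)  # member disagreement as an epistemic-uncertainty proxy
    return mean, std
```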
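Finally, the QA-IBP entry ("Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks") builds on plain interval bound propagation. As a hedged sketch of that building block only (the quantization-aware part is not reproduced, and `W`, `b` are hypothetical layer parameters), bounds are pushed through one affine layer like so:

```python
import numpy as np

def ibp_affine(lower, upper, W, b):
    """Propagate elementwise bounds [lower, upper] through x -> W @ x + b."""
    center = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    out_center = W @ center + b
    out_radius = np.abs(W) @ radius  # worst-case interval growth
    return out_center - out_radius, out_center + out_radius
```

Composing such bounds layer by layer, with monotone activations applied to each endpoint, yields the certified output ranges used to train and verify robust networks.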
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.