Improved Uncertainty Estimation of Graph Neural Network Potentials Using Engineered Latent Space Distances
- URL: http://arxiv.org/abs/2407.10844v2
- Date: Mon, 26 Aug 2024 17:31:16 GMT
- Title: Improved Uncertainty Estimation of Graph Neural Network Potentials Using Engineered Latent Space Distances
- Authors: Joseph Musielewicz, Janice Lan, Matt Uyttendaele, John R. Kitchin
- Abstract summary: We show that uncertainty quantification for relaxed energy calculations is more complex than uncertainty quantification for other kinds of molecular property prediction.
We propose that distribution-free techniques are more useful tools for assessing calibration, recalibrating, and developing uncertainty prediction methods for GNNs performing relaxed energy calculations.
- Score: 1.9286144773392733
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) have been shown to be astonishingly capable models for molecular property prediction, particularly as surrogates for expensive density functional theory calculations of relaxed energy for novel material discovery. However, one limitation of GNNs in this context is the lack of useful uncertainty prediction methods, as this is critical to the material discovery pipeline. In this work, we show that uncertainty quantification for relaxed energy calculations is more complex than uncertainty quantification for other kinds of molecular property prediction, due to the effect that structure optimizations have on the error distribution. We propose that distribution-free techniques are more useful tools for assessing calibration, recalibrating, and developing uncertainty prediction methods for GNNs performing relaxed energy calculations. We also develop a relaxed energy task for evaluating uncertainty methods for equivariant GNNs, based on distribution-free recalibration and using the Open Catalyst Project dataset. We benchmark a set of popular uncertainty prediction methods on this task, and show that latent distance methods, with our novel improvements, are the most well-calibrated and economical approach for relaxed energy calculations. Finally, we demonstrate that our latent space distance method produces results which align with our expectations on a clustering example, and on specific equation of state and adsorbate coverage examples from outside the training dataset.
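The latent-distance idea paired with distribution-free recalibration can be sketched roughly as follows. This is an illustrative stand-in, not the paper's method: the k-nearest-neighbor Euclidean distance below replaces the paper's engineered latent-space distance, and the scaling step is a simplified split-conformal calibration; all function names are ours.

```python
import numpy as np

def latent_distance_uncertainty(train_latents, query_latents, k=5):
    """Heuristic uncertainty: mean Euclidean distance from each query's
    latent vector to its k nearest training-set latent vectors."""
    d = np.linalg.norm(query_latents[:, None, :] - train_latents[None, :, :], axis=-1)
    knn = np.sort(d, axis=1)[:, :k]          # k smallest distances per query
    return knn.mean(axis=1)

def conformal_scale(cal_uncertainty, cal_abs_error, alpha=0.1):
    """Distribution-free recalibration: choose a scalar s so that
    s * uncertainty bounds at least (1 - alpha) of calibration-set errors."""
    ratios = cal_abs_error / np.maximum(cal_uncertainty, 1e-12)
    return np.quantile(ratios, 1 - alpha, method="higher")
```

A full split-conformal procedure would also apply the finite-sample correction (the (1 - alpha)(1 + 1/n) quantile); it is omitted here for brevity.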
Related papers
- Bayesian neural networks for predicting uncertainty in full-field material response [0.0]
This work proposes an ML surrogate framework for stress field prediction and uncertainty quantification.
A modified Bayesian U-net architecture is employed to provide a data-driven image-to-image mapping from initial microstructure to stress field.
It is shown that the proposed methods yield predictions of high accuracy compared to the FEA solution.
arXiv Detail & Related papers (2024-06-21T02:43:25Z)
- Error estimation for physics-informed neural networks with implicit Runge-Kutta methods [0.0]
In this work, we propose to use the NN's predictions in a high-order implicit Runge-Kutta (IRK) method.
The residuals in the implicit system of equations can be related to the NN's prediction error, hence, we can provide an error estimate at several points along a trajectory.
We find that this error estimate highly correlates with the NN's prediction error and that increasing the order of the IRK method improves this estimate.
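The residual-based estimate can be illustrated with a one-stage scheme. This sketch uses the implicit midpoint rule rather than the higher-order IRK methods the paper employs; the function name and setup are ours.

```python
import numpy as np

def irk_midpoint_residual(f, y0, y1, h):
    """Residual of the implicit-midpoint step y1 = y0 + h * f((y0 + y1) / 2).
    If y1 solved the implicit system exactly the residual would vanish;
    for an approximate y1 (e.g. a neural network's prediction), the
    residual's magnitude tracks the prediction error."""
    return y1 - (y0 + h * f(0.5 * (y0 + y1)))
```

For the linear test problem y' = -y, the exact implicit-midpoint step is y1 = y0 * (1 - h/2) / (1 + h/2), which gives a zero residual; perturbing y1 inflates the residual proportionally.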
arXiv Detail & Related papers (2024-01-10T15:18:56Z)
- Uncertainty Quantification for Molecular Property Predictions with Graph Neural Architecture Search [2.711812013460678]
We introduce AutoGNNUQ, an automated uncertainty quantification (UQ) approach for molecular property prediction.
Our approach employs variance decomposition to separate data (aleatoric) and model (epistemic) uncertainties, providing valuable insights for reducing them.
AutoGNNUQ has broad applicability in domains such as drug discovery and materials science, where accurate uncertainty quantification is crucial for decision-making.
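The variance decomposition mentioned above follows a standard ensemble recipe, sketched here generically (this is the textbook decomposition, not AutoGNNUQ's specific implementation):

```python
import numpy as np

def decompose_uncertainty(means, variances):
    """Ensemble variance decomposition for mean-variance predictors:
      aleatoric = E_models[sigma_i^2]  (average predicted data noise)
      epistemic = Var_models[mu_i]     (disagreement between model means)
    means, variances: arrays of shape (n_models, n_samples)."""
    aleatoric = variances.mean(axis=0)
    epistemic = means.var(axis=0)
    return aleatoric, epistemic, aleatoric + epistemic
```

The total predictive variance is the sum of the two terms, so points dominated by epistemic uncertainty can be targeted for additional training data.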
arXiv Detail & Related papers (2023-07-19T20:03:42Z)
- Clarifying Trust of Materials Property Predictions using Neural Networks with Distribution-Specific Uncertainty Quantification [16.36620228609086]
Uncertainty (UQ) methods allow estimation of the trustworthiness of machine learning (ML) model predictions.
Here, we investigate different UQ methods applied to predict energies of molecules on alloys from the Open Catalyst 2020 dataset.
Evidential regression is demonstrated to be a powerful approach for rapidly obtaining competitively trustworthy UQ estimates.
arXiv Detail & Related papers (2023-02-06T07:03:02Z)
- Fast Exploration of the Impact of Precision Reduction on Spiking Neural Networks [63.614519238823206]
Spiking Neural Networks (SNNs) are a practical choice when the target hardware reaches the edge of computing.
We employ an Interval Arithmetic (IA) model to develop an exploration methodology that takes advantage of the capability of such a model to propagate the approximation error.
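The interval-arithmetic error propagation described above can be sketched with a minimal interval type (an illustration of the general IA idea, not the authors' tool):

```python
class Interval:
    """Minimal interval type: carries a [lo, hi] bound so that
    quantization/approximation error propagates through a computation."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Sum of intervals: endpoints add directly.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Product interval: take min/max over all endpoint products.
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def width(self):
        return self.hi - self.lo
```

For example, a weight quantized to 0.5 +/- 0.01 multiplied by an input known to within 2.0 +/- 0.1 yields the bound [0.931, 1.071], making the accumulated error visible without simulating every quantization choice.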
arXiv Detail & Related papers (2022-11-22T15:08:05Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of a classifier's predictions based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
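The Nadaraya-Watson estimator underlying this approach can be sketched in a few lines. This is a bare-bones version of the conditional label distribution estimate; the kernel choice, bandwidth, and function name are illustrative assumptions, and NUQ's actual uncertainty measures are built on top of such an estimate.

```python
import numpy as np

def nadaraya_watson_probs(x_train, y_train, x_query, n_classes, bandwidth=1.0):
    """Nadaraya-Watson estimate of p(y | x): a Gaussian-kernel-weighted
    average of one-hot training labels.
    x_train: (n_train, d), y_train: (n_train,) int labels, x_query: (n_query, d)."""
    d2 = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))     # kernel weights, (n_query, n_train)
    onehot = np.eye(n_classes)[y_train]        # (n_train, n_classes)
    return w @ onehot / w.sum(axis=1, keepdims=True)
```

Low-entropy rows signal confident predictions; near-uniform rows, or regions where all kernel weights are tiny, signal high uncertainty.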
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Quantifying Model Predictive Uncertainty with Perturbation Theory [21.591460685054546]
We propose a framework for predictive uncertainty quantification of a neural network.
We use perturbation theory from quantum physics to formulate a moment decomposition problem.
Our approach provides fast model predictive uncertainty estimates with much greater precision and calibration.
arXiv Detail & Related papers (2021-09-22T17:55:09Z)
- Learnable Uncertainty under Laplace Approximations [65.24701908364383]
We develop a formalism to explicitly "train" the uncertainty in a way that is decoupled from the prediction itself.
We show that such units can be trained via an uncertainty-aware objective, improving standard Laplace approximations' performance.
arXiv Detail & Related papers (2020-10-06T13:43:33Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Targeted free energy estimation via learned mappings [66.20146549150475]
Free energy perturbation (FEP) was proposed by Zwanzig more than six decades ago as a method to estimate free energy differences.
FEP suffers from a severe limitation: the requirement of sufficient overlap between distributions.
One strategy to mitigate this problem, called Targeted Free Energy Perturbation, uses a high-dimensional mapping in configuration space to increase overlap.
arXiv Detail & Related papers (2020-02-12T11:10:00Z)
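Zwanzig's FEP estimator mentioned above has a compact form, sketched here (the function name is ours; this is the classic identity, not the targeted-mapping variant):

```python
import numpy as np

def zwanzig_fep(dU, kT=1.0):
    """Zwanzig free-energy-perturbation estimator:
        dF = -kT * ln < exp(-dU / kT) >_0
    where dU = U1(x) - U0(x) for samples x drawn from state 0.
    The average is dominated by rare low-dU samples when the two states'
    distributions barely overlap, which is the limitation that
    Targeted Free Energy Perturbation's learned mappings address."""
    return -kT * np.log(np.mean(np.exp(-dU / kT)))
```

As a sanity check, when U1 differs from U0 by a constant shift c, every sample has dU = c and the estimator returns exactly dF = c.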
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.