LTAU-FF: Loss Trajectory Analysis for Uncertainty in Atomistic Force Fields
- URL: http://arxiv.org/abs/2402.00853v2
- Date: Wed, 22 May 2024 15:23:36 GMT
- Title: LTAU-FF: Loss Trajectory Analysis for Uncertainty in Atomistic Force Fields
- Authors: Joshua A. Vita, Amit Samanta, Fei Zhou, Vincenzo Lordi
- Abstract summary: Model ensembles are effective tools for estimating prediction uncertainty in deep learning atomistic force fields.
However, their widespread adoption is hindered by high computational costs and overconfident error estimates.
We address these challenges by leveraging distributions of per-sample errors obtained during training and employing a distance-based similarity search in the model latent space.
Our method, which we call LTAU, efficiently estimates the full probability distribution function (PDF) of errors for any test point using the logged training errors.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Model ensembles are effective tools for estimating prediction uncertainty in deep learning atomistic force fields. However, their widespread adoption is hindered by high computational costs and overconfident error estimates. In this work, we address these challenges by leveraging distributions of per-sample errors obtained during training and employing a distance-based similarity search in the model latent space. Our method, which we call LTAU, efficiently estimates the full probability distribution function (PDF) of errors for any test point using the logged training errors, achieving speeds that are 2-3 orders of magnitude faster than typical ensemble methods and allowing it to be used for tasks where training or evaluating multiple models would be infeasible. We apply LTAU to estimating parametric uncertainty in atomistic force fields (LTAU-FF), demonstrating that its improved ensemble diversity produces well-calibrated confidence intervals and predicts errors that correlate strongly with the true errors for data near the training domain. Furthermore, we show that the errors predicted by LTAU-FF can be used in practical applications for detecting out-of-domain data, tuning model performance, and predicting failure during simulations. We believe that LTAU will be a valuable tool for uncertainty quantification (UQ) in atomistic force fields and is a promising method that should be further explored in other domains of machine learning.
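The core idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the latent embeddings, logged error trajectories, and all sizes are synthetic stand-ins. Per-sample errors logged across training epochs are binned into per-sample error histograms, and the error PDF for a test point is estimated by averaging the histograms of its nearest neighbors in the model's latent space.

```python
# Hedged sketch of the LTAU idea: per-sample training-error trajectories
# become per-sample error PDFs; a test point inherits the averaged PDF of
# its latent-space nearest neighbors. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_epochs, latent_dim, n_bins, k = 200, 50, 8, 20, 10

# Stand-ins for logged per-sample errors over epochs and for the
# latent-space embeddings of the training samples.
train_errors = rng.lognormal(mean=-2.0, sigma=0.5, size=(n_train, n_epochs))
train_latent = rng.normal(size=(n_train, latent_dim))

# Build a per-sample error PDF from each sample's loss trajectory.
bins = np.linspace(0.0, train_errors.max(), n_bins + 1)
pdfs = np.stack([np.histogram(e, bins=bins, density=True)[0]
                 for e in train_errors])

def predict_error_pdf(z):
    """Average the error PDFs of the k nearest training points in latent space."""
    dists = np.linalg.norm(train_latent - z, axis=1)
    neighbors = np.argsort(dists)[:k]
    return pdfs[neighbors].mean(axis=0)

test_z = rng.normal(size=latent_dim)
pdf = predict_error_pdf(test_z)
print(pdf.shape)  # one averaged histogram of shape (n_bins,)
```

Because the training errors are logged anyway, the only added cost at test time is the nearest-neighbor search, which is consistent with the claimed speedup over training and evaluating a full ensemble.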
Related papers
- Error-Driven Uncertainty Aware Training [7.702016079410588]
Error-Driven Uncertainty Aware Training aims to enhance the ability of neural classifiers to estimate their uncertainty correctly.
The EUAT approach operates during the model's training phase by selectively employing two loss functions depending on whether the training examples are correctly or incorrectly predicted.
We evaluate EUAT using diverse neural models and datasets in the image recognition domains considering both non-adversarial and adversarial settings.
arXiv Detail & Related papers (2024-05-02T11:48:14Z) - Ensemble models outperform single model uncertainties and predictions for operator-learning of hypersonic flows [43.148818844265236]
Training scientific machine learning (SciML) models on limited high-fidelity data offers one approach to rapidly predict behaviors for situations that have not been seen before.
However, high-fidelity data is itself too limited in quantity to validate all outputs of the SciML model across the unexplored input space.
We extend a DeepONet using three different uncertainty mechanisms: mean-variance estimation, evidential uncertainty, and ensembling.
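Of the three mechanisms listed above, ensembling is the simplest to illustrate. The sketch below is a toy stand-in, not the paper's DeepONet: several small regressors are fit on bootstrap resamples, and the spread of their predictions serves as the uncertainty estimate.

```python
# Minimal sketch of ensemble-based uncertainty: fit an ensemble on
# bootstrap resamples and use the prediction spread as uncertainty.
# The polynomial model and data are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, size=x.size)

# "Ensemble" of degree-5 polynomial fits, each on a bootstrap resample.
preds = []
for _ in range(20):
    idx = rng.integers(0, x.size, x.size)
    coeffs = np.polyfit(x[idx], y[idx], deg=5)
    preds.append(np.polyval(coeffs, x))
preds = np.array(preds)

mean = preds.mean(axis=0)  # ensemble prediction
std = preds.std(axis=0)    # per-point uncertainty estimate
print(mean.shape, std.shape)
```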
arXiv Detail & Related papers (2023-10-31T18:07:29Z) - B-Learner: Quasi-Oracle Bounds on Heterogeneous Causal Effects Under Hidden Confounding [51.74479522965712]
We propose a meta-learner called the B-Learner, which can efficiently learn sharp bounds on the CATE function under limits on hidden confounding.
We prove its estimates are valid, sharp, efficient, and have a quasi-oracle property with respect to the constituent estimators under more general conditions than existing methods.
arXiv Detail & Related papers (2023-04-20T18:07:19Z) - Active learning for structural reliability analysis with multiple limit state functions through variance-enhanced PC-Kriging surrogate models [0.0]
Existing active strategies for training surrogate models yield accurate structural reliability estimates.
We investigate the capability of active learning approaches for efficiently selecting training samples under a limited computational budget.
arXiv Detail & Related papers (2023-02-23T15:01:06Z) - Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence, predicting target accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
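A hedged sketch of the thresholding step: the threshold is chosen on labeled source data so that the fraction of source examples above it matches the source accuracy, and target accuracy is then predicted as the fraction of unlabeled target examples above that threshold. The confidence score here is a toy stand-in (the paper also considers scores such as negative entropy), and the data is synthetic.

```python
# Sketch of ATC-style accuracy prediction. The confidence values and
# correctness labels are synthetic stand-ins for a real model's outputs.
import numpy as np

def atc_threshold(src_conf, src_correct):
    """Pick t so that mean(src_conf > t) equals the source accuracy."""
    acc = src_correct.mean()
    # The (1 - acc) quantile leaves a fraction acc of confidences above it.
    return np.quantile(src_conf, 1.0 - acc)

def atc_predict_accuracy(tgt_conf, t):
    """Predicted target accuracy: fraction of unlabeled points above t."""
    return (tgt_conf > t).mean()

rng = np.random.default_rng(1)
src_conf = rng.uniform(0.5, 1.0, size=1000)
src_correct = rng.random(1000) < src_conf   # toy: well-calibrated source
tgt_conf = rng.uniform(0.4, 1.0, size=1000) # shifted target confidences

t = atc_threshold(src_conf, src_correct)
print(round(atc_predict_accuracy(tgt_conf, t), 3))
```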
arXiv Detail & Related papers (2022-01-11T23:01:12Z) - Tight Mutual Information Estimation With Contrastive Fenchel-Legendre Optimization [69.07420650261649]
We introduce a novel, simple, and powerful contrastive MI estimator named FLO.
Empirically, our FLO estimator overcomes the limitations of its predecessors and learns more efficiently.
The utility of FLO is verified using an extensive set of benchmarks, which also reveals the trade-offs in practical MI estimation.
arXiv Detail & Related papers (2021-07-02T15:20:41Z) - A Theoretical-Empirical Approach to Estimating Sample Complexity of DNNs [11.152761263415046]
This paper focuses on understanding how the generalization error scales with the amount of training data for deep neural networks (DNNs).
We derive estimates of the generalization error that hold for deep networks and do not rely on unattainable capacity measures.
arXiv Detail & Related papers (2021-05-05T05:14:08Z) - Machine learning for causal inference: on the use of cross-fit estimators [77.34726150561087]
Doubly-robust cross-fit estimators have been proposed to yield better statistical properties.
We conducted a simulation study to assess the performance of several estimators of the average causal effect (ACE).
When used with machine learning, the doubly-robust cross-fit estimators substantially outperformed all of the other estimators in terms of bias, variance, and confidence interval coverage.
arXiv Detail & Related papers (2020-04-21T23:09:55Z) - Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out of distribution data points at test time with a single forward pass.
We scale training of such models with a novel loss function and centroid updating scheme, matching the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z) - Localized Debiased Machine Learning: Efficient Inference on Quantile Treatment Effects and Beyond [69.83813153444115]
We consider an efficient estimating equation for the (local) quantile treatment effect ((L)QTE) in causal inference.
Debiased machine learning (DML) is a data-splitting approach to estimating high-dimensional nuisances.
We propose localized debiased machine learning (LDML), which avoids this burdensome step.
arXiv Detail & Related papers (2019-12-30T14:42:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.