Physics-Informed Weakly Supervised Learning for Interatomic Potentials
- URL: http://arxiv.org/abs/2408.05215v1
- Date: Tue, 23 Jul 2024 12:49:04 GMT
- Title: Physics-Informed Weakly Supervised Learning for Interatomic Potentials
- Authors: Makoto Takamoto, Viktor Zaverkin, Mathias Niepert
- Abstract summary: We introduce a physics-informed, weakly supervised approach for training machine-learned interatomic potentials.
We demonstrate reduced energy and force errors -- often lower by a factor of two -- for various baseline models and benchmark data sets.
- Score: 17.165117198519248
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning plays an increasingly important role in computational chemistry and materials science, complementing computationally intensive ab initio and first-principles methods. Despite their utility, machine-learning models often lack generalization capability and robustness during atomistic simulations, yielding unphysical energy and force predictions that hinder their real-world applications. We address this challenge by introducing a physics-informed, weakly supervised approach for training machine-learned interatomic potentials (MLIPs). We introduce two novel loss functions: one extrapolates the potential energy via a Taylor expansion, and the other exploits the concept of conservative forces. Our approach improves the accuracy of MLIPs applied to training tasks with sparse training data sets and reduces the need for pre-training computationally demanding models with large data sets. In particular, we perform extensive experiments demonstrating reduced energy and force errors -- often lower by a factor of two -- for various baseline models and benchmark data sets. Finally, we show that our approach facilitates the training of MLIPs in settings where the computation of forces is infeasible at the reference level, such as those employing complete-basis-set extrapolation.
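The paper provides no pseudocode, so the snippet below is only a minimal sketch of how its two weakly supervised terms might be realized: (i) a first-order Taylor-expansion consistency loss evaluated on randomly perturbed geometries and (ii) a conservative-force penalty added to an ordinary energy/force loss. The names `model` and `predict_forces`, the perturbation scale, and all weights are illustrative assumptions, not the authors' implementation.

```python
import torch

def weakly_supervised_loss(model, pos, e_ref, f_ref, n_virtual=4, sigma=0.01,
                           w_e=1.0, w_f=1.0, w_taylor=0.1, w_cons=0.1):
    """Sketch of a physics-informed, weakly supervised MLIP loss."""
    pos = pos.clone().requires_grad_(True)
    e_pred = model(pos)                        # scalar potential energy
    # Forces from autograd are conservative by construction.
    f_grad = -torch.autograd.grad(e_pred, pos, create_graph=True)[0]
    # Ordinary supervised terms against the reference labels.
    loss = w_e * (e_pred - e_ref) ** 2 + w_f * ((f_grad - f_ref) ** 2).mean()
    # (i) Taylor term: E(r + d) should match E(r) - F(r) . d for small d,
    # which requires no extra labels at the perturbed geometries.
    for _ in range(n_virtual):
        d = sigma * torch.randn_like(pos)
        e_taylor = e_pred - (f_grad * d).sum()
        loss = loss + w_taylor * (model(pos + d) - e_taylor) ** 2
    # (ii) Conservative-force term: only meaningful for models with a
    # separate direct force head (hypothetical `predict_forces`).
    if hasattr(model, "predict_forces"):
        f_direct = model.predict_forces(pos)
        loss = loss + w_cons * ((f_direct - f_grad) ** 2).mean()
    return loss
```

Note that forces obtained as the negative gradient of a scalar energy are conservative automatically; the second penalty matters for architectures that predict forces directly.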
Related papers
- Optimal design of experiments in the context of machine-learning inter-atomic potentials: improving the efficiency and transferability of kernel based methods [0.7234862895932991]
Data-driven, machine learning (ML) models of atomistic interactions can translate nuanced aspects of atomic arrangements into predictions of energies and forces.
The main challenge stems from the fact that descriptors of chemical environments are often sparse high-dimensional objects without a well-defined continuous metric.
We will demonstrate that classical concepts of statistical planning of experiments and optimal design can help to mitigate such problems at a relatively low computational cost.
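As a concrete, generic illustration of the optimal-design idea (not the paper's kernel-specific method), a greedy D-optimal heuristic selects the candidate environments that most increase the log-determinant of the information matrix; the descriptor matrix `X` below is a stand-in for whatever features the potential uses.

```python
import numpy as np

def greedy_d_optimal(X, k, ridge=1e-6):
    """Greedily pick k rows of X maximizing det of the information matrix,
    a classic D-optimal heuristic for choosing informative training points."""
    n, d = X.shape
    selected, pool = [], list(range(n))
    A = ridge * np.eye(d)                      # regularized information matrix
    for _ in range(k):
        # Matrix determinant lemma: det(A + x x^T) = det(A) (1 + x^T A^-1 x),
        # so the best candidate maximizes the quadratic form x^T A^-1 x.
        Ainv = np.linalg.inv(A)
        gains = [X[i] @ Ainv @ X[i] for i in pool]
        best = pool[int(np.argmax(gains))]
        selected.append(best)
        pool.remove(best)
        A += np.outer(X[best], X[best])
    return selected

# Toy usage: pick 5 of 100 random environments described by 8 features.
rng = np.random.default_rng(0)
print(greedy_d_optimal(rng.normal(size=(100, 8)), k=5))
```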
arXiv Detail & Related papers (2024-05-14T14:14:23Z) - A Comparative Study of Machine Learning Models Predicting Energetics of Interacting Defects [5.574191640970887]
We present a comparative study of three different methods to predict the free energy change of systems with interacting defects.
Our findings indicate that the cluster expansion model can achieve precise energetics predictions even with this limited dataset.
This research provides a preliminary evaluation of applying machine learning techniques to imperfect surface systems.
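At its core, the cluster expansion referenced above is a linear model, so a toy sketch makes the fitting step concrete; all correlation functions and energies below are synthetic, invented purely for illustration.

```python
import numpy as np

# Toy cluster-expansion fit: configuration energies are modeled as a linear
# combination of cluster correlation functions, E = Phi @ eci, so the
# effective cluster interactions (ECIs) follow from a least-squares solve.
rng = np.random.default_rng(0)
n_configs, n_clusters = 40, 6
Phi = rng.choice([-1.0, 1.0], size=(n_configs, n_clusters))  # correlations
eci_true = rng.normal(size=n_clusters)                       # hidden ECIs
E = Phi @ eci_true + 0.01 * rng.normal(size=n_configs)       # noisy energies

eci_fit, *_ = np.linalg.lstsq(Phi, E, rcond=None)
print("recovered ECIs:", np.round(eci_fit, 3))
```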
arXiv Detail & Related papers (2024-03-20T02:15:48Z) - Self-Consistency Training for Density-Functional-Theory Hamiltonian Prediction [74.84850523400873]
We show that Hamiltonian prediction possesses a self-consistency principle, based on which we propose self-consistency training.
It enables the model to be trained on a large amount of unlabeled data, and hence addresses the data-scarcity challenge.
It is more efficient than running DFT to generate labels for supervised training, since it amortizes DFT calculation over a set of queries.
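A schematic of the self-consistency idea, assuming a closed-shell system and using `rebuild_hamiltonian` as a placeholder for the actual (differentiable) Kohn-Sham/Fock construction; none of this is the paper's implementation.

```python
import torch

def self_consistency_loss(model, structure, rebuild_hamiltonian):
    """Label-free loss: the predicted Hamiltonian is diagonalized, the
    occupied orbitals form a density matrix, and the Hamiltonian rebuilt
    from that density should agree with the original prediction."""
    H_pred = model(structure)                   # (n, n) symmetric matrix
    eps, C = torch.linalg.eigh(H_pred)          # orbitals from the prediction
    n_occ = structure["n_electrons"] // 2       # closed-shell occupation
    D = 2.0 * C[:, :n_occ] @ C[:, :n_occ].T     # density matrix
    H_rebuilt = rebuild_hamiltonian(D, structure)
    return ((H_pred - H_rebuilt) ** 2).mean()
```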
arXiv Detail & Related papers (2024-03-14T16:52:57Z) - Electronic excited states from physically-constrained machine learning [0.0]
We present an integrated modeling approach, in which a symmetry-adapted ML model of an effective Hamiltonian is trained to reproduce electronic excitations from a quantum-mechanical calculation.
The resulting model can make predictions for molecules that are much larger and more complex than those that it is trained on.
arXiv Detail & Related papers (2023-11-01T20:49:59Z) - Discovering Interpretable Physical Models using Symbolic Regression and Discrete Exterior Calculus [55.2480439325792]
We propose a framework that combines Symbolic Regression (SR) and Discrete Exterior Calculus (DEC) for the automated discovery of physical models.
DEC provides building blocks for the discrete analogue of field theories, which are beyond the state-of-the-art applications of SR to physical problems.
We demonstrate the effectiveness of our methodology by re-discovering three models of Continuum Physics from synthetic experimental data.
arXiv Detail & Related papers (2023-10-10T13:23:05Z) - On Efficient Training of Large-Scale Deep Learning Models: A Literature Review [90.87691246153612]
The field of deep learning has witnessed significant progress, particularly in computer vision (CV), natural language processing (NLP), and speech.
The use of large-scale models trained on vast amounts of data holds immense promise for practical applications.
With the increasing demands on computational capacity, a comprehensive summary of techniques for accelerating the training of deep learning models is still much needed.
arXiv Detail & Related papers (2023-04-07T11:13:23Z) - Advancing Reacting Flow Simulations with Data-Driven Models [50.9598607067535]
Key to the effective use of machine-learning tools in multi-physics problems is coupling them to physical and computational models.
The present chapter reviews some of the open opportunities for the application of data-driven reduced-order modeling of combustion systems.
arXiv Detail & Related papers (2022-09-05T16:48:34Z) - Quantum-tailored machine-learning characterization of a superconducting qubit [50.591267188664666]
We develop an approach to characterize the dynamics of a quantum device and learn device parameters.
This approach outperforms physics-agnostic recurrent neural networks trained on numerically generated and experimental data.
This demonstration shows how leveraging domain knowledge improves the accuracy and efficiency of this characterization task.
arXiv Detail & Related papers (2021-06-24T15:58:57Z) - Applying physics-based loss functions to neural networks for improved generalizability in mechanics problems [3.655021726150368]
Physics-Informed Machine Learning (PIML) has gained momentum over the last five years as scientists and researchers utilize the benefits afforded by advances in machine learning.
In this work, a new approach to utilizing PIML is discussed that centers on the use of physics-based loss functions; an illustrative sketch follows below.
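As one possible instance of such a physics-based loss (a generic 1D bar under a uniform axial load, assumed here for illustration and not necessarily the paper's benchmark), the static-equilibrium residual EA u''(x) + f(x) = 0 can be penalized at unlabeled collocation points alongside the usual data loss.

```python
import torch

def bar_losses(net, x_data, u_data, x_phys, EA=1.0, load=1.0):
    """Data loss plus a physics-based loss for a 1D bar: static
    equilibrium requires EA * u''(x) + f(x) = 0, which is enforced at
    label-free collocation points x_phys via automatic differentiation."""
    data_loss = ((net(x_data) - u_data) ** 2).mean()
    x = x_phys.clone().requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    physics_loss = ((EA * d2u + load) ** 2).mean()
    return data_loss, physics_loss

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
x_d = torch.rand(8, 1); u_d = 0.5 * x_d * (2 - x_d)  # synthetic labels
data_l, phys_l = bar_losses(net, x_d, u_d, torch.rand(64, 1))
loss = data_l + 0.1 * phys_l                          # weighted total loss
```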
arXiv Detail & Related papers (2021-04-30T20:31:09Z) - ForceNet: A Graph Neural Network for Large-Scale Quantum Calculations [86.41674945012369]
We develop a scalable and expressive graph neural network model, ForceNet, to approximate atomic forces.
Our proposed ForceNet is able to predict atomic forces more accurately than state-of-the-art physics-based GNNs.
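A toy model in the spirit of a GNN force predictor, though deliberately much simpler than ForceNet's actual architecture: the force on each atom is assembled from learned, distance-dependent weights on interatomic unit vectors, which makes the prediction rotation-equivariant by construction.

```python
import torch

class TinyForceNet(torch.nn.Module):
    """Minimal pairwise message-passing force model (illustrative only)."""
    def __init__(self, hidden=32):
        super().__init__()
        # Maps each pair distance to a scalar weight on the unit vector.
        self.pair_mlp = torch.nn.Sequential(
            torch.nn.Linear(1, hidden), torch.nn.SiLU(),
            torch.nn.Linear(hidden, 1))

    def forward(self, pos):                         # pos: (n_atoms, 3)
        rij = pos[None, :, :] - pos[:, None, :]     # pair vectors r_j - r_i
        dist = rij.norm(dim=-1, keepdim=True).clamp(min=1e-6)
        w = self.pair_mlp(dist)                     # learned pair weights
        mask = 1.0 - torch.eye(pos.shape[0]).unsqueeze(-1)  # drop i == j
        # Force on atom i: sum over j of w(|r_ij|) * unit(r_ij).
        return (w * rij / dist * mask).sum(dim=1)

model = TinyForceNet()
print(model(torch.randn(5, 3)).shape)               # torch.Size([5, 3])
```

Because the pair weight depends only on the symmetric distance while the pair vector is antisymmetric, the predicted forces sum to zero over the system, consistent with Newton's third law.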
arXiv Detail & Related papers (2021-03-02T03:09:06Z) - Simple and efficient algorithms for training machine learning potentials to force data [2.924868086534434]
Machine learning models, trained on data from ab initio quantum simulations, are yielding molecular dynamics potentials with unprecedented accuracy.
One limiting factor is the quantity of available training data, which can be expensive to obtain.
We present a new algorithm for efficient force training, and benchmark its accuracy by training to forces from real-world datasets for organic chemistry and bulk aluminum.
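The paper's specific algorithms are not reproduced here; as a generic baseline for force training, a potential that is linear in its descriptors admits a closed-form least-squares fit directly to reference forces. All descriptor Jacobians and forces below are synthetic placeholders.

```python
import numpy as np

# Generic force-matching illustration (not the paper's algorithm): if the
# energy is linear in descriptors, E = g(r) @ w, then the forces are linear
# in the descriptor Jacobian, F = -(dg/dr) @ w, and the weights w follow
# from a single least-squares solve against all reference force components.
rng = np.random.default_rng(0)
n_frames, n_dof, n_feat = 50, 9, 4              # e.g. 3 atoms in 3D
J = rng.normal(size=(n_frames, n_dof, n_feat))  # descriptor Jacobians dg/dr
w_true = rng.normal(size=n_feat)
F = -(J @ w_true) + 0.01 * rng.normal(size=(n_frames, n_dof))

A = -J.reshape(-1, n_feat)                      # stack all force components
w_fit, *_ = np.linalg.lstsq(A, F.reshape(-1), rcond=None)
print("max weight error:", np.abs(w_fit - w_true).max())
```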
arXiv Detail & Related papers (2020-06-09T19:36:40Z)