Learning Interatomic Potentials at Multiple Scales
- URL: http://arxiv.org/abs/2310.13756v1
- Date: Fri, 20 Oct 2023 18:34:32 GMT
- Title: Learning Interatomic Potentials at Multiple Scales
- Authors: Xiang Fu, Albert Musaelian, Anders Johansson, Tommi Jaakkola, Boris Kozinsky
- Abstract summary: The need to use a short time step is a key limit on the speed of molecular dynamics (MD) simulations.
This work introduces a method to learn a scale separation in complex interatomic interactions by co-training two MLIPs.
- Score: 1.2162698943818964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The need to use a short time step is a key limit on the speed of molecular
dynamics (MD) simulations. Simulations governed by classical potentials are
often accelerated by using a multiple-time-step (MTS) integrator that evaluates
certain potential energy terms that vary more slowly than others less
frequently. This approach is enabled by the simple but limiting analytic forms
of classical potentials. Machine learning interatomic potentials (MLIPs), in
particular recent equivariant neural networks, are much more broadly applicable
than classical potentials and can faithfully reproduce the expensive but
accurate reference electronic structure calculations used to train them. They
still, however, require the use of a single short time step, as they lack the
inherent term-by-term scale separation of classical potentials. This work
introduces a method to learn a scale separation in complex interatomic
interactions by co-training two MLIPs. Initially, a small and efficient model
is trained to reproduce short-time-scale interactions. Subsequently, a large
and expressive model is trained jointly to capture the remaining interactions
not captured by the small model. When running MD, the MTS integrator then
evaluates the smaller model for every time step and the larger model less
frequently, accelerating simulation. Compared to a conventionally trained MLIP,
our approach can achieve a significant speedup (~3x in our experiments) without
a loss of accuracy on the potential energy or simulation-derived quantities.
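As a concrete illustration of the scheme the abstract outlines, the sketch below shows a reversible r-RESPA-style multiple-time-step loop. Here `forces_small` stands in for the small, efficient MLIP that captures the short-time-scale interactions and is called at every inner step, while `forces_large` stands in for the large model trained on the residual interactions and is called once per outer step. The names, the splitting details, and the toy force functions are all assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of an r-RESPA-style MTS loop: the cheap "small" MLIP
# drives every inner step; the expensive "large" MLIP (trained on the
# residual interactions) is evaluated only once per outer step.
import numpy as np

def mts_outer_step(x, v, m, dt, k, forces_small, forces_large):
    """Advance the system by one outer step of length k*dt.

    x, v         : (N, 3) positions and velocities
    m            : (N, 1) masses
    dt           : inner (short) time step
    k            : inner steps per evaluation of the large model
    forces_small : cheap model capturing fast, short-time-scale forces
    forces_large : expensive model capturing the slow residual forces
    """
    # Opening half kick with the slowly varying residual forces.
    v = v + forces_large(x) * (k * dt) / (2.0 * m)

    # Inner velocity-Verlet loop driven by the cheap model only.
    f_fast = forces_small(x)
    for _ in range(k):
        v = v + f_fast * dt / (2.0 * m)
        x = x + v * dt
        f_fast = forces_small(x)
        v = v + f_fast * dt / (2.0 * m)

    # Closing half kick with the residual forces at the new positions.
    v = v + forces_large(x) * (k * dt) / (2.0 * m)
    return x, v

# Toy usage with harmonic stand-in forces (purely illustrative).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(8, 3))
    v = np.zeros((8, 3))
    m = np.ones((8, 1))
    fast = lambda r: -4.0 * r   # stiff, cheap "small model"
    slow = lambda r: -0.1 * r   # soft, expensive "large model"
    for _ in range(100):
        x, v = mts_outer_step(x, v, m, dt=0.01, k=3,
                              forces_small=fast, forces_large=slow)
```

With k = 3 inner steps per outer step, the expensive model runs a third as often as the cheap one, which is the kind of cost structure behind the ~3x speedup the abstract reports when the large model dominates runtime.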
Related papers
- Generalized quantum master equations can improve the accuracy of semiclassical predictions of multitime correlation functions [0.0]
Multitime quantum correlation functions are central objects in physical science.
Experiments such as 2D spectroscopy and quantum control can now measure such quantities.
We show for the first time that one can exploit a multitime semiclassical GQME to dramatically improve the accuracy of coarse mean-field Ehrenfest dynamics.
arXiv Detail & Related papers (2024-05-14T22:34:59Z)
- Machine learning of hidden variables in multiscale fluid simulation [77.34726150561087]
Solving fluid dynamics equations often requires the use of closure relations that account for missing microphysics.
In our study, a partial differential equation simulator that is end-to-end differentiable is used to train judiciously placed neural networks.
We show that this method enables an equation-based approach to reproduce non-linear, large Knudsen number plasma physics.
arXiv Detail & Related papers (2023-06-19T06:02:53Z)
- Implicit Transfer Operator Learning: Multiple Time-Resolution Surrogates for Molecular Dynamics [8.35780131268962]
We present Implicit Transfer Operator (ITO) Learning, a framework to learn surrogates of the simulation process at multiple time resolutions.
We also present a coarse-grained CG-SE3-ITO model which can quantitatively model all-atom molecular dynamics.
arXiv Detail & Related papers (2023-05-29T12:19:41Z)
- Deep learning applied to computational mechanics: A comprehensive review, state of the art, and the classics [77.34726150561087]
Recent developments in artificial neural networks, particularly deep learning (DL), are reviewed in detail.
Both hybrid and pure machine learning (ML) methods are discussed.
History and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics.
arXiv Detail & Related papers (2022-12-18T02:03:00Z)
- On Fast Simulation of Dynamical System with Neural Vector Enhanced Numerical Solver [59.13397937903832]
We introduce a deep learning-based corrector called Neural Vector (NeurVec).
NeurVec can compensate for integration errors and enable larger time step sizes in simulations.
Our experiments on a variety of complex dynamical system benchmarks demonstrate that NeurVec exhibits remarkable generalization capability.
arXiv Detail & Related papers (2022-08-07T09:02:18Z)
- Accurate Machine Learned Quantum-Mechanical Force Fields for Biomolecular Simulations [51.68332623405432]
Molecular dynamics (MD) simulations allow atomistic insights into chemical and biological processes.
Recently, machine learned force fields (MLFFs) emerged as an alternative means to execute MD simulations.
This work proposes a general approach to constructing accurate MLFFs for large-scale molecular simulations.
arXiv Detail & Related papers (2022-05-17T13:08:28Z)
- Simulate Time-integrated Coarse-grained Molecular Dynamics with Multi-Scale Graph Networks [4.444748822792469]
Learning-based force fields have made significant progress in accelerating ab-initio MD simulation but are not fast enough for many real-world applications.
We aim to address these challenges by learning a multi-scale graph neural network that directly simulates coarse-grained MD with a very large time step.
arXiv Detail & Related papers (2022-04-21T18:07:08Z)
- Fast and Sample-Efficient Interatomic Neural Network Potentials for Molecules and Materials Based on Gaussian Moments [3.1829446824051195]
We present an improved NN architecture based on the previous GM-NN model.
The improved methodology is a prerequisite for training-heavy workflows such as active learning or learning-on-the-fly.
arXiv Detail & Related papers (2021-09-20T14:23:34Z)
- Deep Bayesian Active Learning for Accelerating Stochastic Simulation [74.58219903138301]
Interactive Neural Process (INP) is a deep active learning framework for simulations.
For active learning, we propose a novel acquisition function, Latent Information Gain (LIG), calculated in the latent space of NP-based models.
The results demonstrate that STNP outperforms the baselines in the learning setting and that LIG achieves the state of the art for active learning.
arXiv Detail & Related papers (2021-06-05T01:31:51Z)
- SE(3)-Equivariant Graph Neural Networks for Data-Efficient and Accurate Interatomic Potentials [0.17590081165362778]
NequIP is an SE(3)-equivariant neural network approach for learning interatomic potentials from ab-initio calculations for molecular dynamics simulations.
The method achieves state-of-the-art accuracy on a challenging set of diverse molecules and materials while exhibiting remarkable data efficiency.
arXiv Detail & Related papers (2021-01-08T18:49:10Z)
- Fast and differentiable simulation of driven quantum systems [58.720142291102135]
We introduce a semi-analytic method based on the Dyson expansion that allows us to time-evolve driven quantum systems much faster than standard numerical methods.
We show results of the optimization of a two-qubit gate using transmon qubits in the circuit QED architecture.
arXiv Detail & Related papers (2020-12-16T21:43:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.