NeuralNEB -- Neural Networks can find Reaction Paths Fast
- URL: http://arxiv.org/abs/2207.09971v2
- Date: Thu, 21 Jul 2022 08:47:54 GMT
- Title: NeuralNEB -- Neural Networks can find Reaction Paths Fast
- Authors: Mathias Schreiner, Arghya Bhowmik, Tejs Vegge and Ole Winther
- Abstract summary: Quantum mechanical methods like Density Functional Theory (DFT) are used with great success alongside efficient search algorithms for studying kinetics of reactive systems.
Machine Learning (ML) models have turned out to be excellent emulators of small molecule DFT calculations and could possibly replace DFT in such tasks.
In this paper we train state-of-the-art equivariant Graph Neural Network (GNN)-based models on around 10,000 elementary reactions from the Transition1x dataset.
- Score: 7.7365628406567675
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantum mechanical methods like Density Functional Theory (DFT) are used with
great success alongside efficient search algorithms for studying kinetics of
reactive systems. However, DFT is prohibitively expensive for large scale
exploration. Machine Learning (ML) models have turned out to be excellent
emulators of small molecule DFT calculations and could possibly replace DFT in
such tasks. For kinetics, success relies primarily on the models' capability to
accurately predict the Potential Energy Surface (PES) around transition states
and Minimum Energy Paths (MEPs). Previously this has not been possible due to
the scarcity of relevant data in the literature. In this paper we train
state-of-the-art equivariant Graph Neural Network (GNN)-based models on around
10,000 elementary reactions from the Transition1x dataset. We apply the models
as potentials for the Nudged Elastic Band (NEB) algorithm and achieve a Mean
Absolute Error (MAE) of 0.13+/-0.03 eV on barrier energies for unseen reactions.
We compare the results against equivalent models trained on QM9 and ANI1x. We
also compare with, and outperform, Density Functional based Tight Binding
(DFTB) in both accuracy and computational cost. The implication is that ML models,
given relevant data, are now at a level where they can be applied for
downstream tasks in quantum chemistry transcending prediction of simple
molecular features.
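The NEB algorithm relaxes a chain of images between two minima so that the interior images settle onto the minimum energy path; the ML model simply supplies the energies and forces that DFT would otherwise provide. Below is a minimal self-contained sketch on a toy 2D double-well potential. The potential, image count, spring constant, and step size are illustrative choices, not values from the paper:

```python
import numpy as np

# Toy 2D double-well PES: minima at (-1, 0) and (1, 0), saddle at (0, 0)
# with a barrier of exactly 1.0 in these arbitrary units.
def potential(p):
    x, y = p
    return (1 - x**2)**2 + y**2

def gradient(p):
    x, y = p
    return np.array([-4 * x * (1 - x**2), 2 * y])

def neb(start, end, n_images=11, k=1.0, step=0.01, n_steps=2000):
    """Plain NEB with steepest-descent updates (no climbing image)."""
    images = np.linspace(start, end, n_images)  # linear initial path
    for _ in range(n_steps):
        new = images.copy()
        for i in range(1, n_images - 1):
            tau = images[i + 1] - images[i - 1]
            tau /= np.linalg.norm(tau)            # tangent estimate at image i
            g = gradient(images[i])
            # perpendicular component of the true force keeps images on the MEP
            f_perp = -(g - np.dot(g, tau) * tau)
            # spring force along the tangent keeps images evenly spaced
            f_spring = k * (np.linalg.norm(images[i + 1] - images[i])
                            - np.linalg.norm(images[i] - images[i - 1])) * tau
            new[i] = images[i] + step * (f_perp + f_spring)
        images = new
    return images

path = neb(np.array([-1.0, 0.0]), np.array([1.0, 0.0]))
barrier = max(potential(p) for p in path) - potential(path[0])  # ~1.0
```

Swapping `gradient` for the forces of a trained GNN potential is, conceptually, the substitution NeuralNEB makes; a production run would additionally use a proper optimizer and a climbing image to locate the saddle point precisely.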
Related papers
- Physically Guided Deep Unsupervised Inversion for 1D Magnetotelluric Models [16.91835461818938]
We present a new deep inversion algorithm guided by physics to estimate 1D Magnetotelluric (MT) models.
Our method employs a differentiable modeling operator that physically guides the cost function minimization.
We test the proposed method with field and synthetic data at different frequencies, demonstrating that the estimated models are more accurate than those obtained with other approaches.
arXiv Detail & Related papers (2024-10-20T04:17:59Z)
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to 48% performance gain within the PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
- chemtrain: Learning Deep Potential Models via Automatic Differentiation and Statistical Physics [0.0]
Neural Networks (NNs) are promising models for refining the accuracy of molecular dynamics.
Chemtrain is a framework to learn sophisticated NN potential models through customizable training routines and advanced training algorithms.
arXiv Detail & Related papers (2024-08-28T15:14:58Z)
- Accurate machine learning force fields via experimental and simulation data fusion [0.0]
Machine Learning (ML)-based force fields are attracting ever-increasing interest due to their capacity to reach the scales of classical interatomic potentials while retaining quantum-level accuracy.
Here we leverage both Density Functional Theory (DFT) calculations and experimentally measured mechanical properties and lattice parameters to train an ML potential of titanium.
We demonstrate that the fused data learning strategy can concurrently satisfy all target objectives, resulting in a molecular model of higher accuracy than models trained on data from a single source.
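The fused-data strategy described above amounts to minimizing a weighted sum of per-source losses. A hedged sketch of such an objective, where the function name, property terms, and weights are hypothetical illustrations rather than details from the paper:

```python
import numpy as np

# Hypothetical fused objective: weighted sum of a simulation (DFT) loss and an
# experimental-property loss. The weights balance the two data sources.
def fused_loss(pred_energy, dft_energy, pred_prop, exp_prop,
               w_dft=1.0, w_exp=0.1):
    loss_dft = np.mean((pred_energy - dft_energy) ** 2)  # fit DFT energies
    loss_exp = np.mean((pred_prop - exp_prop) ** 2)      # fit measured properties
    return w_dft * loss_dft + w_exp * loss_exp

total = fused_loss(np.array([1.0, 2.0]), np.array([1.1, 1.9]),
                   np.array([100.0]), np.array([102.0]))
```

In practice the weights are tuned so that neither data source dominates the gradient during training.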
arXiv Detail & Related papers (2023-08-17T18:22:19Z)
- Transition1x -- a Dataset for Building Generalizable Reactive Machine Learning Potentials [7.171984408392421]
We present the dataset Transition1x containing 9.6 million Density Functional Theory (DFT) calculations.
We show that ML models cannot learn the features of transition-state regions by training solely on previously popular benchmark datasets.
arXiv Detail & Related papers (2022-07-25T07:30:14Z)
- Human Trajectory Prediction via Neural Social Physics [63.62824628085961]
Trajectory prediction has been widely pursued in many fields, and many model-based and model-free methods have been explored.
We propose a new method combining both methodologies based on a new Neural Differential Equation model.
Our new model (Neural Social Physics or NSP) is a deep neural network within which we use an explicit physics model with learnable parameters.
arXiv Detail & Related papers (2022-07-21T12:11:18Z)
- MoEfication: Conditional Computation of Transformer Models for Efficient Inference [66.56994436947441]
Transformer-based pre-trained language models achieve superior performance on most NLP tasks thanks to their large parameter capacity, but that capacity also incurs a huge computation cost.
We explore accelerating large-model inference through conditional computation based on the sparse-activation phenomenon.
We propose to transform a large model into its mixture-of-experts (MoE) version with equal model size, namely MoEfication.
arXiv Detail & Related papers (2021-10-05T02:14:38Z)
- Simultaneous boundary shape estimation and velocity field de-noising in Magnetic Resonance Velocimetry using Physics-informed Neural Networks [70.7321040534471]
Magnetic resonance velocimetry (MRV) is a non-invasive technique widely used in medicine and engineering to measure the velocity field of a fluid.
Previous studies have required the shape of the boundary (for example, a blood vessel) to be known a priori.
We present a physics-informed neural network that instead uses the noisy MRV data alone to infer the most likely boundary shape and de-noised velocity field.
arXiv Detail & Related papers (2021-07-16T12:56:09Z)
- Rotation Invariant Graph Neural Networks using Spin Convolutions [28.4962005849904]
Machine learning approaches have the potential to approximate Density Functional Theory (DFT) in a computationally efficient manner.
We introduce a novel approach to modeling angular information between sets of neighboring atoms in a graph neural network.
Results are demonstrated on the large-scale Open Catalyst 2020 dataset.
arXiv Detail & Related papers (2021-06-17T14:59:34Z)
- ForceNet: A Graph Neural Network for Large-Scale Quantum Calculations [86.41674945012369]
We develop ForceNet, a scalable and expressive Graph Neural Network model, to approximate atomic forces.
Our proposed ForceNet is able to predict atomic forces more accurately than state-of-the-art physics-based GNNs.
arXiv Detail & Related papers (2021-03-02T03:09:06Z)
- Flexible Transmitter Network [84.90891046882213]
Current neural networks are mostly built upon the MP model, which usually formulates the neuron as executing an activation function on the real-valued weighted aggregation of signals received from other neurons.
We propose the Flexible Transmitter (FT) model, a novel bio-plausible neuron model with flexible synaptic plasticity.
We present the Flexible Transmitter Network (FTNet), which is built on the most common fully-connected feed-forward architecture.
arXiv Detail & Related papers (2020-04-08T06:55:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.