Learned Force Fields Are Ready For Ground State Catalyst Discovery
- URL: http://arxiv.org/abs/2209.12466v1
- Date: Mon, 26 Sep 2022 07:16:43 GMT
- Title: Learned Force Fields Are Ready For Ground State Catalyst Discovery
- Authors: Michael Schaarschmidt, Morgane Riviere, Alex M. Ganose, James S.
Spencer, Alexander L. Gaunt, James Kirkpatrick, Simon Axelrod, Peter W.
Battaglia, Jonathan Godwin
- Abstract summary: We present evidence that learned density functional theory ("DFT") force fields are ready for ground state catalyst discovery.
The key finding is that relaxation using forces from a learned potential yields structures with energies similar to or lower than those of structures relaxed using the RPBE functional in over 50% of evaluated systems.
We show that a force field trained on a locally harmonic energy surface with the same minima as a target DFT energy is also able to find lower or similar energy structures in over 50% of cases.
- Score: 60.41853574951094
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present evidence that learned density functional theory ("DFT") force
fields are ready for ground state catalyst discovery. Our key finding is that
relaxation using forces from a learned potential yields structures with energies
similar to or lower than those of structures relaxed using the RPBE functional in over 50% of
evaluated systems, despite the fact that the predicted forces differ
significantly from the ground truth. This has the surprising implication that
learned potentials may be ready for replacing DFT in challenging catalytic
systems such as those found in the Open Catalyst 2020 dataset. Furthermore, we
show that a force field trained on a locally harmonic energy surface with the
same minima as a target DFT energy is also able to find lower or similar energy
structures in over 50% of cases. This "Easy Potential" converges in fewer
steps than a standard model trained on true energies and forces, which further
accelerates calculations. Its success illustrates a key point: learned
potentials can locate energy minima even when the model has high force errors.
The main requirement for structure optimisation is simply that the learned
potential has the correct minima. Since learned potentials are fast and scale
linearly with system size, our results open the possibility of quickly finding
ground states for large systems.
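
The practical core of this result is easy to state in code: for relaxation, only the location of the learned potential's minima matters, not the pointwise accuracy of its forces. Below is a minimal sketch of that idea under stated assumptions: a toy harmonic "Easy Potential"-style surrogate whose only property is that it shares its minimum with the (unknown) target surface, relaxed by plain gradient descent. The function names, the spring constant, and the toy geometry are illustrative; production OC20-style relaxations use trained GNN force fields and standard structure optimizers rather than this toy.

```python
# Minimal sketch (not the authors' code): relaxing a structure with forces from a
# surrogate potential instead of DFT. `easy_potential_forces` stands in for any
# learned force field; its only important property is that it shares its minimum
# with the target energy surface.
import numpy as np

def easy_potential_forces(positions, minimum, k=5.0):
    """Forces of a locally harmonic surrogate E = 0.5 * k * ||x - x*||^2.

    Far from the minimum its forces can differ badly from DFT; for
    relaxation only the location of the minimum matters.
    """
    return -k * (positions - minimum)

def relax(positions, force_fn, step=1e-2, fmax=1e-3, max_steps=1000):
    """Plain gradient-descent relaxation: follow predicted forces until
    the largest force component drops below a threshold."""
    for _ in range(max_steps):
        forces = force_fn(positions)
        if np.abs(forces).max() < fmax:
            break
        positions = positions + step * forces
    return positions

# Toy usage: three atoms and an assumed shared minimum x_star.
x_star = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [0.0, 1.1, 0.0]])
x0 = x_star + 0.3 * np.random.default_rng(0).standard_normal(x_star.shape)
x_relaxed = relax(x0, lambda x: easy_potential_forces(x, x_star))
print(np.abs(x_relaxed - x_star).max())  # small: relaxation recovers the shared minimum
```

Even though the surrogate's forces can be far from the true forces away from the minimum, the relaxation still lands on the shared minimum, which mirrors the paper's observation that high force errors need not prevent finding correct ground-state structures.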
Related papers
- Lightweight Geometric Deep Learning for Molecular Modelling in Catalyst Discovery [0.0]
The Open Catalyst Project aims to apply advances in graph neural networks (GNNs) to accelerate progress in catalyst discovery.
By implementing robust design patterns like geometric and symmetric message passing, we were able to train a GNN model that reached an MAE of 0.0748 in predicting the per-atom forces of adsorbate-surface interactions (a minimal sketch of this kind of construction follows this entry).
arXiv Detail & Related papers (2024-04-05T17:13:51Z)
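
As a rough, hedged illustration of the "geometric and symmetric message passing" mentioned in the entry above (assumptions throughout; this is not the paper's architecture), the sketch below predicts per-atom forces as sums of pairwise unit vectors weighted by an invariant function of interatomic distance, which makes the output rotate with the input and sum to zero across atoms by construction.

```python
# Minimal sketch (random stand-in weights, not a trained OC20 model): per-atom
# force prediction from one round of symmetric, distance-based message passing.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((1, 16))   # toy edge network: distance -> hidden
W2 = rng.standard_normal((16, 1))   # hidden -> scalar pair weight

def pair_weight(distances):
    """Invariant edge function mapping each interatomic distance to a scalar."""
    h = np.tanh(distances[:, None] @ W1)
    return (h @ W2)[:, 0]

def predict_forces(positions, cutoff=6.0):
    """Force on atom i = sum over neighbours j of w(d_ij) * unit(r_j - r_i).
    Weights depend only on distances, so predictions rotate with the input
    and pairwise contributions cancel (Newton's third law) by construction."""
    forces = np.zeros_like(positions)
    for i in range(len(positions)):
        diff = positions - positions[i]
        dist = np.linalg.norm(diff, axis=1)
        mask = (dist > 1e-9) & (dist < cutoff)
        unit = diff[mask] / dist[mask][:, None]
        w = pair_weight(dist[mask])
        forces[i] = (w[:, None] * unit).sum(axis=0)
    return forces

positions = rng.standard_normal((8, 3)) * 2.0   # toy 8-atom cluster
print(predict_forces(positions).shape)          # (8, 3) per-atom force vectors
```

A trained model would replace the random two-layer weight function with learned parameters and richer invariant features, but the symmetry structure is the same.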
- Investigating the Behavior of Diffusion Models for Accelerating Electronic Structure Calculations [24.116064925926914]
The investigation is driven by the potential of diffusion models to significantly accelerate electronic structure calculations using machine learning.
We show that the model learns about the first-order structure of the potential energy surface, and then later learns about higher-order structure.
For structure relaxations, the model finds geometries with 10x lower energy than those produced by a classical force field for small organic molecules.
arXiv Detail & Related papers (2023-11-02T17:58:37Z) - Inferring Relational Potentials in Interacting Systems [56.498417950856904]
We propose Neural Interaction Inference with Potentials (NIIP) as an alternative approach to discover such interactions.
NIIP assigns low energy to the subset of trajectories which respect the relational constraints observed.
It allows trajectory manipulation, such as interchanging interaction types across separately trained models, as well as trajectory forecasting.
arXiv Detail & Related papers (2023-10-23T00:44:17Z)
- On the importance of catalyst-adsorbate 3D interactions for relaxed energy predictions [98.70797778496366]
We investigate whether it is possible to predict a system's relaxed energy in the OC20 dataset while ignoring the relative position of the adsorbate.
We find that while removing binding site information impairs accuracy as expected, modified models are able to predict relaxed energies with remarkably decent MAE.
arXiv Detail & Related papers (2023-10-10T14:57:04Z)
- KineticNet: Deep learning a transferable kinetic energy functional for orbital-free density functional theory [13.437597619451568]
KineticNet is an equivariant deep neural network architecture based on point convolutions adapted to the prediction of quantities on molecular quadrature grids.
For the first time, chemical accuracy of the learned functionals is achieved across input densities and geometries of tiny molecules.
arXiv Detail & Related papers (2023-05-08T17:43:31Z)
- Energy Transformer [64.22957136952725]
Our work combines aspects of three promising paradigms in machine learning, namely, attention mechanism, energy-based models, and associative memory.
We propose a novel architecture, called the Energy Transformer (or ET for short), that uses a sequence of attention layers that are purposely designed to minimize a specifically engineered energy function.
arXiv Detail & Related papers (2023-02-14T18:51:22Z)
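
To make "attention layers designed to minimize an engineered energy function" concrete, here is a toy in the same spirit rather than the Energy Transformer itself: token embeddings are treated as a state that descends a hand-made energy with an attention-like log-sum-exp coupling plus a norm penalty. The energy, step size, and numerical gradient are assumptions for illustration; the actual ET derives closed-form, attention-shaped updates from its own engineered energy.

```python
# Toy energy-based token update (illustrative assumptions, not the ET layer).
import numpy as np

def energy(tokens, beta=1.0):
    """Attention-like attraction between tokens plus a quadratic norm penalty."""
    sim = beta * tokens @ tokens.T
    np.fill_diagonal(sim, -np.inf)                   # exclude self-interaction
    attract = -np.log(np.exp(sim).sum(axis=1)).sum() / beta
    return attract + 0.5 * (tokens ** 2).sum()

def num_grad(f, x, eps=1e-5):
    """Central-difference gradient; the real ET uses analytic updates."""
    g = np.zeros_like(x)
    for idx in np.ndindex(*x.shape):
        e = np.zeros_like(x)
        e[idx] = eps
        g[idx] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
tokens = rng.standard_normal((6, 8))                 # 6 tokens, 8-dim embeddings
for _ in range(50):                                  # "forward pass" = energy descent
    tokens -= 0.05 * num_grad(energy, tokens)
print(round(float(energy(tokens)), 3))               # final energy after descent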
- PhAST: Physics-Aware, Scalable, and Task-specific GNNs for Accelerated Catalyst Design [102.9593507372373]
Catalyst materials play a crucial role in the electrochemical reactions involved in industrial processes.
Machine learning holds the potential to efficiently model materials properties from large amounts of data.
We propose task-specific innovations applicable to most architectures, enhancing both computational efficiency and accuracy.
arXiv Detail & Related papers (2022-11-22T05:24:30Z)
- NeuralNEB -- Neural Networks can find Reaction Paths Fast [7.7365628406567675]
Quantum mechanical methods like Density Functional Theory (DFT) are used with great success alongside efficient search algorithms for studying kinetics of reactive systems.
Machine Learning (ML) models have turned out to be excellent emulators of small molecule DFT calculations and could possibly replace DFT in such tasks.
In this paper we train state-of-the-art equivariant Graph Neural Network (GNN)-based models on around 10,000 elementary reactions from the Transition1x dataset (a minimal NEB sketch follows this entry).
arXiv Detail & Related papers (2022-07-20T15:29:45Z)
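
The entry above concerns reaction paths; the sketch below shows the nudged elastic band (NEB) update that such learned potentials plug into, on an assumed 2D double-well surface standing in for a GNN surrogate. The potential, spring constant, and step size are illustrative choices; in NeuralNEB the force evaluations would come from the trained equivariant GNN rather than an analytic toy.

```python
# Minimal NEB sketch on a toy 2D double well (not NeuralNEB itself).
import numpy as np

def potential(p):
    """Two minima at (+-1, 0), saddle near (0, 0.5) with energy 1.0."""
    x, y = p
    return (x ** 2 - 1.0) ** 2 + 2.0 * (y - 0.5 * (1.0 - x ** 2)) ** 2

def grad(p, eps=1e-6):
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2)
        e[i] = eps
        g[i] = (potential(p + e) - potential(p - e)) / (2 * eps)
    return g

def neb_step(images, k=5.0, step=1e-2):
    """One NEB update: true force perpendicular to the band plus a spring
    force along the local tangent; endpoints stay fixed."""
    new = images.copy()
    for i in range(1, len(images) - 1):
        tau = images[i + 1] - images[i - 1]
        tau = tau / (np.linalg.norm(tau) + 1e-12)
        f_true = -grad(images[i])
        f_perp = f_true - np.dot(f_true, tau) * tau
        f_spring = k * np.dot(images[i + 1] + images[i - 1] - 2 * images[i], tau) * tau
        new[i] = images[i] + step * (f_perp + f_spring)
    return new

# Straight initial band between the two minima.
images = np.linspace([-1.0, 0.0], [1.0, 0.0], 9)
for _ in range(2000):
    images = neb_step(images)
barrier = max(potential(p) for p in images)
print(round(barrier, 3))   # ~1.0, the saddle energy of this toy surface
```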
- Towards Scaling Difference Target Propagation by Learning Backprop Targets [64.90165892557776]
Difference Target Propagation (DTP) is a biologically-plausible learning algorithm with a close relation to Gauss-Newton (GN) optimization.
We propose a novel feedback weight training scheme that ensures both that DTP approximates BP and that layer-wise feedback weight training can be restored.
We report the best performance ever achieved by DTP on CIFAR-10 and ImageNet.
arXiv Detail & Related papers (2022-01-31T18:20:43Z)
- Graph Neural Network for Metal Organic Framework Potential Energy Approximation [0.4588028371034407]
Metal-organic frameworks (MOFs) are nanoporous compounds composed of metal ions and organic linkers.
We propose a machine learning approach for estimating potential energy of candidate MOFs using a graph neural network.
We generate a database of 50,000 spatial configurations and high-quality potential energy values using DFT.
arXiv Detail & Related papers (2020-10-29T19:47:44Z)
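
As a hedged sketch of the general approach described in the entry above (not the paper's model), the snippet below estimates a framework's potential energy with one round of distance-based message passing and a sum-pooled per-atom readout, which makes the prediction rotation-invariant and size-extensive. The random weights stand in for parameters that would be fit to the DFT-labelled configurations.

```python
# Toy GNN-style energy estimate with random stand-in weights.
import numpy as np

rng = np.random.default_rng(1)
W_msg = rng.standard_normal((1, 8))   # edge feature (distance) -> message
W_out = rng.standard_normal((8, 1))   # node embedding -> per-atom energy

def framework_energy(positions, cutoff=5.0):
    """Aggregate distance-based messages per atom, then sum per-atom energies."""
    n = len(positions)
    h = np.zeros((n, 8))
    for i in range(n):
        d = np.linalg.norm(positions - positions[i], axis=1)
        nbr = d[(d > 1e-9) & (d < cutoff)]
        if len(nbr):
            h[i] = np.tanh(nbr[:, None] @ W_msg).sum(axis=0)
    return float((h @ W_out).sum())   # invariant, size-extensive total energy

positions = rng.standard_normal((20, 3)) * 4.0   # toy 20-atom configuration
print(framework_energy(positions))
```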
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.