Machine learning models predict calculation outcomes with the
transferability necessary for computational catalysis
- URL: http://arxiv.org/abs/2203.01276v1
- Date: Wed, 2 Mar 2022 18:02:12 GMT
- Authors: Chenru Duan, Aditya Nandy, Husain Adamji, Yuriy Roman-Leshkov, and
Heather J. Kulik
- Abstract summary: Virtual high throughput screening (VHTS) and machine learning (ML) have greatly accelerated the design of single-site transition-metal catalysts.
We show that a convolutional neural network that monitors geometry optimization on the fly offers the performance and transferability needed for catalyst design.
We attribute this superior model transferability to the on-the-fly electronic structure and geometric information generated from density functional theory calculations.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Virtual high throughput screening (VHTS) and machine learning (ML) have
greatly accelerated the design of single-site transition-metal catalysts. VHTS
of catalysts, however, is often accompanied by high calculation failure rates
and wasted computational resources due to the difficulty of simultaneously
converging all mechanistically relevant reactive intermediates to expected
geometries and electronic states. We demonstrate a dynamic classifier approach,
i.e., a convolutional neural network that monitors geometry optimization on the
fly, and exploit its good performance and transferability for catalyst design.
We show that the dynamic classifier performs well on all reactive intermediates
in the representative catalytic cycle of the radical rebound mechanism for
methane-to-methanol despite being trained on only one reactive intermediate.
The dynamic classifier also generalizes to chemically distinct intermediates
and metal centers absent from the training data without loss of accuracy or
model confidence. We attribute this superior model transferability to the use
of on-the-fly electronic structure and geometric information generated from
density functional theory calculations and the convolutional layer in the
dynamic classifier. Combined with model uncertainty quantification, the dynamic
classifier saves more than half of the computational resources that would have
been wasted on unsuccessful calculations for all reactive intermediates being
considered.
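The monitoring-and-abort idea in the abstract can be sketched as a loop that scores a partial optimization trajectory and stops calculations it confidently predicts will fail, while deferring when uncertain. The paper's model is a convolutional network over on-the-fly electronic-structure and geometric features; the simple logistic score over recent gradient norms below, and all thresholds, are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def failure_probability(gradient_norms):
    """Score a partial trajectory: persistently large gradient norms are
    taken as a sign the geometry will not converge."""
    trend = np.mean(gradient_norms[-5:])          # recent convergence behavior
    return 1.0 / (1.0 + np.exp(-10.0 * (trend - 0.5)))

def monitor_optimization(gradient_norms, p_abort=0.9, p_uncertain=(0.4, 0.6)):
    """Step through the trajectory; abort on a confident failure prediction,
    but let uncertain cases run to completion (the uncertainty gate)."""
    for step in range(5, len(gradient_norms) + 1):
        p = failure_probability(gradient_norms[:step])
        if p_uncertain[0] < p < p_uncertain[1]:
            continue                               # low confidence: keep running
        if p >= p_abort:
            return "abort", step                   # confident failure: stop early
    return "converged", len(gradient_norms)

good = np.exp(-0.3 * np.arange(30))               # gradients decay: healthy
bad = 0.9 + 0.05 * np.sin(np.arange(30))          # gradients stall high: failing

print(monitor_optimization(good))  # → ('converged', 30)
print(monitor_optimization(bad))   # → ('abort', 5)
```

The uncertainty band is what converts raw scores into compute savings: only confident failure predictions trigger an abort, which is how more than half of the otherwise wasted resources can be recovered without sacrificing successful runs.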
Related papers
- chemtrain: Learning Deep Potential Models via Automatic Differentiation and Statistical Physics [0.0]
Neural networks (NNs) are promising models for improving the accuracy of molecular dynamics simulations.
Chemtrain is a framework to learn sophisticated NN potential models through customizable training routines and advanced training algorithms.
arXiv Detail & Related papers (2024-08-28T15:14:58Z)
- A Machine Learning and Explainable AI Framework Tailored for Unbalanced Experimental Catalyst Discovery [10.92613600218535]
We introduce a robust machine learning and explainable AI (XAI) framework to accurately classify the catalytic yield of various compositions.
This framework combines a series of ML practices designed to handle the scarcity and imbalance of catalyst data.
We believe that such insights can assist chemists in the development and identification of novel catalysts with superior performance.
arXiv Detail & Related papers (2024-07-10T13:09:53Z)
- Transfer learning for atomistic simulations using GNNs and kernel mean embeddings [24.560340485988128]
We propose a transfer learning algorithm that leverages the ability of graph neural networks (GNNs) to represent chemical environments together with kernel mean embeddings.
We test our approach on a series of realistic datasets of increasing complexity, showing excellent generalization and transferability performance.
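The kernel mean embedding behind this transfer approach can be illustrated numerically: each set of environment vectors (random here, standing in for GNN-derived chemical-environment features) is summarized by its mean in a kernel feature space, and sets are compared via the maximum mean discrepancy (MMD). The RBF kernel, its width, and the dimensions are assumptions of this sketch.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.1):
    """Gaussian RBF kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=0.1):
    """Squared MMD: distance between the kernel mean embeddings of X and Y."""
    return (rbf_kernel(X, X, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean())

rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0, size=(200, 8))   # environments from "system A"
B = rng.normal(0.0, 1.0, size=(200, 8))   # same distribution as A
C = rng.normal(2.0, 1.0, size=(200, 8))   # shifted distribution

print(mmd2(A, B))  # near zero: embeddings of matching distributions agree
print(mmd2(A, C))  # clearly positive: the embeddings separate the sets
```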
arXiv Detail & Related papers (2023-06-02T14:58:16Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
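The build-once, apply-in-real-time idea can be sketched with a toy differentiable model: an unknown physical parameter is recovered from observed data by gradient descent on a fitting loss. The damped-cosine model, the analytic gradient standing in for automatic differentiation, and all values below are illustrative assumptions.

```python
import numpy as np

def model(t, amp):
    """Toy differentiable surrogate: a damped cosine with unknown amplitude."""
    return amp * np.exp(-0.1 * t) * np.cos(t)

def grad_loss(t, y_obs, amp):
    """Analytic d(MSE)/d(amp), standing in for automatic differentiation."""
    basis = np.exp(-0.1 * t) * np.cos(t)
    return 2.0 * np.mean((model(t, amp) - y_obs) * basis)

t = np.linspace(0.0, 10.0, 400)
y_obs = model(t, amp=2.5)             # synthetic "experimental" data

amp = 0.0                             # initial guess
for _ in range(100):
    amp -= 0.5 * grad_loss(t, y_obs, amp)

print(round(amp, 3))  # → 2.5
```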
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Adaptive physics-informed neural operator for coarse-grained non-equilibrium flows [0.0]
The framework combines dimensionality reduction and neural operators through a hierarchical and adaptive deep learning strategy.
The proposed surrogate's architecture is structured as a tree, with leaf nodes representing separate neural operator blocks.
In 0-D scenarios, the proposed ML framework can adaptively predict the dynamics of almost thirty species with a maximum relative error of 4.5%.
arXiv Detail & Related papers (2022-10-27T23:26:57Z)
- Toward Development of Machine Learned Techniques for Production of Compact Kinetic Models [0.0]
Chemical kinetic models are an essential component in the development and optimisation of combustion devices.
We present a novel automated compute intensification methodology to produce overly-reduced and optimised chemical kinetic models.
arXiv Detail & Related papers (2022-02-16T12:31:24Z)
- Model based Multi-agent Reinforcement Learning with Tensor Decompositions [52.575433758866936]
This paper investigates generalisation in state-action space over unexplored state-action pairs by modelling the transition and reward functions as tensors of low CP-rank.
Experiments on synthetic MDPs show that using tensor decompositions in a model-based reinforcement learning algorithm can lead to much faster convergence if the true transition and reward functions are indeed of low rank.
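The low-rank premise can be made concrete: a CP-rank-R tensor is a sum of R rank-one terms, so every matricization has rank at most R, which is what lets a few observed transitions constrain unexplored state-action pairs. The shapes and rank below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
S, A, R = 6, 4, 2                     # states, actions, CP rank

# Random CP factors; the full tensor is T[s, a, t] = sum_r U[s,r] V[a,r] W[t,r].
U, V, W = (rng.random((n, R)) for n in (S, A, S))
T = np.einsum("sr,ar,tr->sat", U, V, W)

# The rank of any matricization of a CP-rank-R tensor is at most R:
unfolded = T.reshape(S, A * S)        # mode-1 unfolding
print(np.linalg.matrix_rank(unfolded))  # → 2
```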
arXiv Detail & Related papers (2021-10-27T15:36:25Z)
- Equivariant vector field network for many-body system modeling [65.22203086172019]
Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newton mechanics systems with both full and partially observed data.
arXiv Detail & Related papers (2021-10-26T14:26:25Z)
- Energy-Efficient and Federated Meta-Learning via Projected Stochastic Gradient Ascent [79.58680275615752]
We propose an energy-efficient federated meta-learning framework.
We assume each task is owned by a separate agent, so a limited number of tasks is used to train a meta-model.
arXiv Detail & Related papers (2021-05-31T08:15:44Z)
- Hyperbolic Neural Networks++ [66.16106727715061]
We generalize the fundamental components of neural networks in a single hyperbolic geometry model, namely, the Poincaré ball model.
Experiments show the superior parameter efficiency of our methods compared to conventional hyperbolic components, as well as greater stability and performance than their Euclidean counterparts.
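In the Poincaré ball model, Euclidean vector addition is replaced by Möbius addition; the closed form below is the standard curvature-1 expression, shown here as a numerical sketch rather than code from the paper.

```python
import numpy as np

def mobius_add(x, y):
    """Möbius addition x ⊕ y in the unit Poincaré ball (curvature c = 1)."""
    xy = np.dot(x, y)
    x2, y2 = np.dot(x, x), np.dot(y, y)
    num = (1 + 2 * xy + y2) * x + (1 - x2) * y
    return num / (1 + 2 * xy + x2 * y2)

x = np.array([0.3, 0.4])
y = np.array([-0.5, 0.1])

z = mobius_add(x, y)
print(np.linalg.norm(z) < 1.0)                     # stays inside the ball: True
print(np.allclose(mobius_add(np.zeros(2), y), y))  # 0 is the identity: True
```

Unlike Euclidean addition, this operation is neither commutative nor associative, which is why hyperbolic counterparts of standard layers require careful generalization.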
arXiv Detail & Related papers (2020-06-15T08:23:20Z)
- Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of the stochasticity in its success is still unclear.
We show that heavy tails commonly arise in parameter distributions due to multiplicative noise.
A detailed analysis examines the key factors involved, including step size and data, with similar behavior observed across state-of-the-art neural network models.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.