Multi-Task Mixture Density Graph Neural Networks for Predicting Cu-based
Single-Atom Alloy Catalysts for CO2 Reduction Reaction
- URL: http://arxiv.org/abs/2209.07300v1
- Date: Thu, 15 Sep 2022 13:52:15 GMT
- Title: Multi-Task Mixture Density Graph Neural Networks for Predicting Cu-based
Single-Atom Alloy Catalysts for CO2 Reduction Reaction
- Authors: Chen Liang, Bowen Wang, Shaogang Hao, Guangyong Chen, Pheng-Ann Heng
and Xiaolong Zou
- Abstract summary: Graph neural networks (GNNs) have drawn increasing attention from materials scientists.
We develop a multi-task (MT) architecture based on DimeNet++ and mixture density networks to improve the prediction of relaxed-configuration properties from unrelaxed input structures.
- Score: 61.9212585617803
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have drawn increasing attention from materials
scientists and have demonstrated a strong capacity to establish connections between
structure and properties. However, with only unrelaxed structures provided
as input, few GNN models can predict the thermodynamic properties of relaxed
configurations with an acceptable level of error. In this work, we develop a
multi-task (MT) architecture based on DimeNet++ and mixture density networks to
improve performance on this task. Taking CO adsorption on Cu-based
single-atom alloy catalysts as an illustration, we show that our method can
reliably estimate CO adsorption energy with a mean absolute error of 0.087 eV
from the initial CO adsorption structures without costly first-principles
calculations. Further, compared to other state-of-the-art GNN methods, our
model exhibits improved generalization ability when predicting catalytic
performance of out-of-domain configurations, built with either unseen substrate
surfaces or doping species. We show that the proposed MT GNN strategy can
facilitate catalyst discovery.
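As a concrete illustration of the mixture-density component described above, the following is a minimal sketch, not the authors' code, of an MDN head that could sit on top of a graph-level embedding produced by a GNN such as DimeNet++; the embedding dimension, the number of mixture components, and the way auxiliary tasks are combined are all illustrative assumptions.

```python
# Hypothetical MDN head for adsorption-energy prediction from a graph-level
# embedding (e.g., one produced by DimeNet++). Layer sizes and the number of
# Gaussian components are illustrative, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MDNHead(nn.Module):
    def __init__(self, emb_dim: int = 128, n_components: int = 5):
        super().__init__()
        self.pi = nn.Linear(emb_dim, n_components)        # mixture-weight logits
        self.mu = nn.Linear(emb_dim, n_components)        # component means (eV)
        self.log_sigma = nn.Linear(emb_dim, n_components) # log standard deviations

    def forward(self, h):
        # h: (batch, emb_dim) graph-level embedding
        log_pi = F.log_softmax(self.pi(h), dim=-1)
        mu = self.mu(h)
        sigma = torch.exp(self.log_sigma(h)).clamp(min=1e-4)
        return log_pi, mu, sigma

def mdn_nll(log_pi, mu, sigma, y):
    """Negative log-likelihood of targets y (batch,) under the Gaussian mixture."""
    log_prob = torch.distributions.Normal(mu, sigma).log_prob(y.unsqueeze(-1))
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

# In a multi-task setup this loss would be summed with losses from additional
# task heads; a point estimate of the adsorption energy can be read off as the
# mixture mean, (log_pi.exp() * mu).sum(dim=-1).
```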
Related papers
- Pushing the Limits of All-Atom Geometric Graph Neural Networks: Pre-Training, Scaling and Zero-Shot Transfer [15.302727191576784]
Geometric graph neural networks (Geom-GNNs) with all-atom information have transformed atomistic simulations.
We study the scaling behaviors of Geom-GNNs under self-supervised pre-training, supervised and unsupervised learning setups.
We show how all-atom graph embedding can be organically combined with other neural architectures to enhance the expressive power.
arXiv Detail & Related papers (2024-10-29T03:07:33Z) - Dumpling GNN: Hybrid GNN Enables Better ADC Payload Activity Prediction Based on Chemical Structure [53.76752789814785]
DumplingGNN is a hybrid Graph Neural Network architecture specifically designed for predicting ADC payload activity based on chemical structure.
We evaluate it on a comprehensive ADC payload dataset focusing on DNA Topoisomerase I inhibitors.
It demonstrates exceptional accuracy (91.48%), sensitivity (95.08%), and specificity (97.54%) on our specialized ADC payload dataset.
arXiv Detail & Related papers (2024-09-23T17:11:04Z) - Adaptive Catalyst Discovery Using Multicriteria Bayesian Optimization with Representation Learning [17.00084254889438]
High-performance catalysts are crucial for sustainable energy conversion and human health.
The discovery of catalysts faces challenges due to the absence of efficient approaches to navigating vast and high-dimensional structure and composition spaces.
arXiv Detail & Related papers (2024-04-18T18:11:06Z) - Adapting OC20-trained EquiformerV2 Models for High-Entropy Materials [0.5812062802134551]
We show the results of adjusting and fine-tuning the pretrained EquiformerV2 model from the Open Catalyst Project.
By applying an energy filter based on the local environment of the binding site, zero-shot inference is markedly improved (a minimal filtering sketch appears after this list).
It is also found that EquiformerV2, in the role of a general machine-learning potential, can inform a smaller, more focused direct inference model.
arXiv Detail & Related papers (2024-03-14T18:59:54Z) - Boosting Heterogeneous Catalyst Discovery by Structurally Constrained
Deep Learning Models [0.0]
Deep learning approaches such as graph neural networks (GNNs) open new opportunities to significantly extend the scope of modelling novel high-performance catalysts.
Here we present an embedding improvement for GNNs based on a Voronoi tessellation of the structure (a minimal connectivity sketch appears after this list).
We show that a sensible choice of data can decrease the error to values above the physically based threshold of 20 meV per atom.
arXiv Detail & Related papers (2022-07-11T17:01:28Z) - Graph Neural Networks for Temperature-Dependent Activity Coefficient
Prediction of Solutes in Ionic Liquids [58.720142291102135]
We present a GNN to predict temperature-dependent infinite dilution ACs of solutes in ILs.
We train the GNN on a database including more than 40,000 AC values and compare it to a state-of-the-art MCM.
The GNN and MCM achieve similar high prediction performance, with the GNN additionally enabling high-quality predictions for ACs of solutions that contain ILs and solutes not considered during training.
arXiv Detail & Related papers (2022-06-23T15:27:29Z) - On the Intrinsic Structures of Spiking Neural Networks [66.57589494713515]
Recent years have seen a surge of interest in SNNs owing to their remarkable potential to handle time-dependent and event-driven data.
There has been a dearth of comprehensive studies examining the impact of intrinsic structures within spiking computations.
This work delves deep into the intrinsic structures of SNNs, by elucidating their influence on the expressivity of SNNs.
arXiv Detail & Related papers (2022-06-21T09:42:30Z) - Efficient Micro-Structured Weight Unification and Pruning for Neural
Network Compression [56.83861738731913]
Deep Neural Network (DNN) models are essential for practical applications, especially on resource-limited devices.
Previous unstructured or structured weight pruning methods can hardly achieve true inference acceleration.
We propose a generalized weight unification framework at a hardware-compatible micro-structured level to achieve a high degree of compression and acceleration.
arXiv Detail & Related papers (2021-06-15T17:22:59Z) - Multi-View Graph Neural Networks for Molecular Property Prediction [67.54644592806876]
We present Multi-View Graph Neural Network (MV-GNN), a multi-view message passing architecture.
In MV-GNN, we introduce a shared self-attentive readout component and disagreement loss to stabilize the training process.
We further boost the expressive power of MV-GNN by proposing a cross-dependent message passing scheme.
arXiv Detail & Related papers (2020-05-17T04:46:07Z) - Global Attention based Graph Convolutional Neural Networks for Improved
Materials Property Prediction [8.371766047183739]
We develop a novel model, GATGNN, for predicting inorganic material properties based on graph neural networks.
We show that our method is able to both outperform the previous models' predictions and provide insight into the crystallization of the material.
arXiv Detail & Related papers (2020-03-11T07:43:14Z)
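The EquiformerV2 entry above mentions an energy filter based on the local environment of the binding site. The sketch below is purely an assumption about what such a filter could look like: predictions are kept only if they fall inside a plausible energy window for the labelled binding-site environment, with both the environment labels and the windows being hypothetical inputs rather than anything specified in that paper.

```python
# Hypothetical energy filter: keep a zero-shot adsorption-energy prediction only
# if it lies inside a plausible window for the binding site's local environment.
from typing import Dict, List, Tuple

def filter_by_local_environment(
    predictions: List[Tuple[str, float]],              # (structure_id, energy in eV)
    environment_of: Dict[str, str],                    # structure_id -> environment label
    plausible_window: Dict[str, Tuple[float, float]],  # label -> (min eV, max eV)
) -> List[Tuple[str, float]]:
    kept = []
    for sid, energy in predictions:
        lo, hi = plausible_window.get(environment_of.get(sid), (float("-inf"), float("inf")))
        if lo <= energy <= hi:
            kept.append((sid, energy))
    return kept
```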
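The structurally constrained deep-learning entry above modifies the GNN embedding with a Voronoi tessellation. As a minimal sketch of the underlying idea, assuming edges are drawn between atoms whose Voronoi cells share a facet and ignoring the periodic boundary handling a real crystal graph needs:

```python
# Sketch: derive GNN edges from a Voronoi tessellation of atomic positions,
# connecting atoms whose Voronoi cells share a facet (no periodic boundaries).
import numpy as np
from scipy.spatial import Voronoi

def voronoi_edges(positions: np.ndarray) -> np.ndarray:
    """positions: (n_atoms, 3) Cartesian coordinates -> (n_edges, 2) index pairs."""
    vor = Voronoi(positions)
    # ridge_points lists pairs of input points whose Voronoi cells share a facet.
    return np.unique(np.sort(vor.ridge_points, axis=1), axis=0)

# Toy usage on a small random cluster.
atoms = np.random.default_rng(0).random((8, 3)) * 5.0
print(voronoi_edges(atoms))
```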