Permutation invariant graph-to-sequence model for template-free
retrosynthesis and reaction prediction
- URL: http://arxiv.org/abs/2110.09681v1
- Date: Tue, 19 Oct 2021 01:23:15 GMT
- Title: Permutation invariant graph-to-sequence model for template-free
retrosynthesis and reaction prediction
- Authors: Zhengkai Tu, Connor W. Coley
- Abstract summary: We describe a novel Graph2SMILES model that combines the power of Transformer models for text generation with the permutation invariance of molecular graph encoders.
As an end-to-end architecture, Graph2SMILES can be used as a drop-in replacement for the Transformer in any task involving molecule(s)-to-molecule(s) transformations.
- Score: 2.5655440962401617
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Synthesis planning and reaction outcome prediction are two fundamental
problems in computer-aided organic chemistry for which a variety of data-driven
approaches have emerged. Natural language approaches that model each problem as
a SMILES-to-SMILES translation lead to a simple end-to-end formulation, reduce
the need for data preprocessing, and enable the use of well-optimized machine
translation model architectures. However, SMILES strings are not an efficient
representation for capturing information about molecular structure, as
evidenced by the success of SMILES augmentation in boosting empirical
performance. Here, we describe a novel Graph2SMILES model that combines the
power of Transformer models for text generation with the permutation invariance
of molecular graph encoders, which mitigates the need for input data
augmentation. As an end-to-end architecture, Graph2SMILES can be used as a
drop-in replacement for the Transformer in any task involving
molecule(s)-to-molecule(s) transformations. In our encoder, an
attention-augmented directed message passing neural network (D-MPNN) captures
local chemical environments, and a global attention encoder allows for
long-range and intermolecular interactions, enhanced by graph-aware positional
embeddings. Graph2SMILES improves the top-1 accuracy of the Transformer
baselines by $1.7\%$ and $1.9\%$ for reaction outcome prediction on USPTO_480k
and USPTO_STEREO datasets respectively, and by $9.8\%$ for one-step
retrosynthesis on the USPTO_50k dataset.
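The pipeline described in the abstract can be sketched in a few dozen lines. The following is a minimal, illustrative PyTorch sketch, not the authors' implementation: the attention-augmented D-MPNN is approximated by a plain D-MPNN, the graph-aware positional embedding is only noted in a comment, and all module names, argument names (e.g. `DMPNNEncoder`, `rev_index`), and dimensions are assumptions made for illustration.
```python
import torch
import torch.nn as nn


class DMPNNEncoder(nn.Module):
    """Directed message passing: hidden states live on directed bonds and are
    updated from incoming bond states, excluding the reverse bond."""

    def __init__(self, atom_dim, bond_dim, hidden, steps=3):
        super().__init__()
        self.init_edge = nn.Linear(atom_dim + bond_dim, hidden)
        self.update = nn.Linear(hidden, hidden)
        self.readout = nn.Linear(atom_dim + hidden, hidden)
        self.steps = steps

    def forward(self, atom_feats, bond_feats, edge_index, rev_index):
        # edge_index: (2, E) source/target atom per directed bond;
        # rev_index[e] is the index of the bond pointing the opposite way.
        src, dst = edge_index
        h0 = torch.relu(self.init_edge(torch.cat([atom_feats[src], bond_feats], -1)))
        h = h0
        n_atoms = atom_feats.size(0)
        for _ in range(self.steps):
            # sum of bond states arriving at each atom, then subtract the reverse bond
            incoming = h.new_zeros(n_atoms, h.size(-1)).index_add_(0, dst, h)
            msg = incoming[src] - h[rev_index]
            h = torch.relu(h0 + self.update(msg))
        node_in = h.new_zeros(n_atoms, h.size(-1)).index_add_(0, dst, h)
        return torch.relu(self.readout(torch.cat([atom_feats, node_in], -1)))


class Graph2SeqModel(nn.Module):
    """Graph encoder -> global self-attention over atoms -> SMILES token decoder."""

    def __init__(self, atom_dim, bond_dim, hidden, vocab_size, n_layers=4, n_heads=8):
        super().__init__()
        self.graph_enc = DMPNNEncoder(atom_dim, bond_dim, hidden)
        enc_layer = nn.TransformerEncoderLayer(hidden, n_heads, batch_first=True)
        self.global_enc = nn.TransformerEncoder(enc_layer, n_layers)
        dec_layer = nn.TransformerDecoderLayer(hidden, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, n_layers)
        self.tok_emb = nn.Embedding(vocab_size, hidden)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, atom_feats, bond_feats, edge_index, rev_index, tgt_tokens):
        atoms = self.graph_enc(atom_feats, bond_feats, edge_index, rev_index)
        # Atoms of the (possibly multi-molecule) input form one unordered set;
        # the full model would add a graph-aware positional bias (e.g. based on
        # shortest-path distances) to this global attention step.
        memory = self.global_enc(atoms.unsqueeze(0))
        tgt = self.tok_emb(tgt_tokens).unsqueeze(0)
        T = tgt_tokens.size(0)
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        dec = self.decoder(tgt, memory, tgt_mask=causal)
        return self.out(dec)  # logits over SMILES tokens, shape (1, T, vocab)
```
Because the encoder operates on the graph rather than on a SMILES string, its output is invariant to atom reordering, which is what removes the need for input-side SMILES augmentation; at inference the decoder would generate the product or reactant SMILES autoregressively (e.g. with beam search).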
Related papers
- Pre-trained Molecular Language Models with Random Functional Group Masking [54.900360309677794]
We propose a SMILES-based Molecular Language Model that randomly masks SMILES subsequences corresponding to specific molecular atoms.
This technique aims to compel the model to better infer molecular structures and properties, thus enhancing its predictive capabilities.
arXiv Detail & Related papers (2024-11-03T01:56:15Z) - Interpretable Lightweight Transformer via Unrolling of Learned Graph Smoothness Priors [16.04850782310842]
We build interpretable and lightweight transformer-like neural networks by unrolling iterative optimization algorithms.
A normalized signal-dependent graph learning module amounts to a variant of the basic self-attention mechanism in conventional transformers.
arXiv Detail & Related papers (2024-06-06T14:01:28Z) - Synergistic Fusion of Graph and Transformer Features for Enhanced
Molecular Property Prediction [0.0]
We propose a novel approach that combines pre-trained features from GNNs and Transformers.
This approach provides a comprehensive molecular representation, capturing both the global molecular structure and individual atom characteristics.
Experimental results on MoleculeNet benchmarks demonstrate superior performance, surpassing previous models in 5 out of 7 classification datasets and 4 out of 6 regression datasets.
arXiv Detail & Related papers (2023-08-25T14:47:46Z) - Dynamic Molecular Graph-based Implementation for Biophysical Properties
Prediction [9.112532782451233]
We propose a novel approach based on the transformer model utilizing GNNs for characterizing dynamic features of protein-ligand interactions.
Our message passing transformer is pre-trained on molecular dynamics data derived from physics-based simulations to learn coordinate construction and to predict binding probability and affinity.
arXiv Detail & Related papers (2022-12-20T04:21:19Z) - G2GT: Retrosynthesis Prediction with Graph to Graph Attention Neural
Network and Self-Training [0.0]
Retrosynthesis prediction is one of the fundamental challenges in organic chemistry and related fields.
We propose a new graph-to-graph transformation model, G2GT, in which the graph encoder and graph decoder are built upon the standard transformer structure.
We show that self-training, a powerful data augmentation method, can significantly improve the model's performance.
arXiv Detail & Related papers (2022-04-19T01:55:52Z) - CSformer: Bridging Convolution and Transformer for Compressive Sensing [65.22377493627687]
This paper proposes a hybrid framework that integrates the detailed spatial information captured by CNNs with the global context provided by Transformers for enhanced representation learning.
The proposed approach is an end-to-end compressive image sensing method, composed of adaptive sampling and recovery.
The experimental results demonstrate the effectiveness of the dedicated transformer-based architecture for compressive sensing.
arXiv Detail & Related papers (2021-12-31T04:37:11Z) - Learning Attributed Graph Representations with Communicative Message
Passing Transformer [3.812358821429274]
We propose a Communicative Message Passing Transformer (CoMPT) neural network to improve the molecular graph representation.
Unlike the previous transformer-style GNNs that treat molecules as fully connected graphs, we introduce a message diffusion mechanism to leverage the graph connectivity inductive bias.
arXiv Detail & Related papers (2021-07-19T11:58:32Z) - TCL: Transformer-based Dynamic Graph Modelling via Contrastive Learning [87.38675639186405]
We propose a novel graph neural network approach, called TCL, which deals with the dynamically-evolving graph in a continuous-time fashion.
To the best of our knowledge, this is the first attempt to apply contrastive learning to representation learning on dynamic graphs.
arXiv Detail & Related papers (2021-05-17T15:33:25Z) - Lightweight, Dynamic Graph Convolutional Networks for AMR-to-Text
Generation [56.73834525802723]
We propose Lightweight Dynamic Graph Convolutional Networks (LDGCNs).
LDGCNs capture richer non-local interactions by synthesizing higher order information from the input graphs.
We develop two novel parameter-saving strategies, based on group graph convolutions and weight-tied convolutions, to reduce memory usage and model complexity.
arXiv Detail & Related papers (2020-10-09T06:03:46Z) - Self-Supervised Graph Transformer on Large-Scale Molecular Data [73.3448373618865]
We propose a novel framework, GROVER, for molecular representation learning.
GROVER can learn rich structural and semantic information of molecules from enormous unlabelled molecular data.
We pre-train GROVER with 100 million parameters on 10 million unlabelled molecules -- the biggest GNN and the largest training dataset in molecular representation learning.
arXiv Detail & Related papers (2020-06-18T08:37:04Z) - Retrosynthesis Prediction with Conditional Graph Logic Network [118.70437805407728]
Computer-aided retrosynthesis is finding renewed interest from both chemistry and computer science communities.
We propose a new approach to this task using the Conditional Graph Logic Network, a conditional graphical model built upon graph neural networks.
arXiv Detail & Related papers (2020-01-06T05:36:57Z)