Retrosynthesis Prediction with Local Template Retrieval
- URL: http://arxiv.org/abs/2306.04123v1
- Date: Wed, 7 Jun 2023 03:38:03 GMT
- Title: Retrosynthesis Prediction with Local Template Retrieval
- Authors: Shufang Xie, Rui Yan, Junliang Guo, Yingce Xia, Lijun Wu, Tao Qin
- Abstract summary: Retrosynthesis, which predicts the reactants of a given target molecule, is an essential task for drug discovery.
In this work, we introduce RetroKNN, a local reaction template retrieval method.
We conduct comprehensive experiments on two widely used benchmarks, the USPTO-50K and USPTO-MIT.
- Score: 112.23386062396622
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Retrosynthesis, which predicts the reactants of a given target molecule, is
an essential task for drug discovery. In recent years, machine-learning-based
retrosynthesis methods have achieved promising results. In this work, we
introduce RetroKNN, a local reaction template retrieval method to further boost
the performance of template-based systems with non-parametric retrieval. We
first build an atom-template store and a bond-template store that contain the
local templates in the training data, then retrieve from these templates with a
k-nearest-neighbor (KNN) search during inference. The retrieved templates are
combined with neural network predictions as the final output. Furthermore, we
propose a lightweight adapter that adjusts the combination weights of the
neural network and KNN predictions, conditioned on the hidden representation and the retrieved
templates. We conduct comprehensive experiments on two widely used benchmarks,
the USPTO-50K and USPTO-MIT. Notably, we improve the top-1 accuracy by 7.1% on
the USPTO-50K dataset and by 12.0% on the USPTO-MIT dataset. These results
demonstrate the effectiveness of our method.
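The abstract describes a kNN-LM-style procedure: store hidden states with their local template labels, retrieve the k nearest entries at inference, and interpolate the retrieved distribution with the neural network's prediction. The sketch below illustrates that idea; all function names, the L2 distance metric, the softmax temperature, and the fixed interpolation weight are illustrative assumptions, not details taken from the paper's released code (in RetroKNN the weight is predicted per input by the adapter).

```python
import numpy as np

def knn_template_probs(query, store_keys, store_labels, n_templates, k=8, temperature=10.0):
    """Retrieve the k stored hidden states nearest to `query` and turn their
    template labels into a probability distribution (kNN-LM style).
    Names and hyperparameters here are illustrative, not from the paper."""
    dists = np.linalg.norm(store_keys - query, axis=1)   # L2 distance to every stored key
    nn_idx = np.argsort(dists)[:k]                       # indices of the k nearest neighbors
    weights = np.exp(-dists[nn_idx] / temperature)       # closer neighbors get larger weight
    weights /= weights.sum()                             # normalize to a distribution
    probs = np.zeros(n_templates)
    for w, idx in zip(weights, nn_idx):
        probs[store_labels[idx]] += w                    # accumulate mass per template label
    return probs

def combine(nn_probs, knn_probs, lam):
    """Interpolate neural-network and retrieval distributions.
    RetroKNN predicts `lam` per input with a lightweight adapter;
    a fixed scalar is used here for simplicity."""
    return lam * knn_probs + (1.0 - lam) * nn_probs
```

Because both inputs to `combine` are valid distributions and the interpolation weights sum to one, the combined output is again a valid distribution over templates.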
Related papers
- UAlign: Pushing the Limit of Template-free Retrosynthesis Prediction with Unsupervised SMILES Alignment [51.49238426241974]
This paper introduces UAlign, a template-free graph-to-sequence pipeline for retrosynthesis prediction.
By combining graph neural networks and Transformers, our method can more effectively leverage the inherent graph structure of molecules.
arXiv Detail & Related papers (2024-03-25T03:23:03Z)
- Retrosynthesis prediction enhanced by in-silico reaction data augmentation [66.5643280109899]
We present RetroWISE, a framework that employs a base model inferred from real paired data to perform in-silico reaction generation and augmentation.
On three benchmark datasets, RetroWISE achieves the best overall performance against state-of-the-art models.
arXiv Detail & Related papers (2024-01-31T07:40:37Z)
- G-MATT: Single-step Retrosynthesis Prediction using Molecular Grammar Tree Transformer [0.0]
We propose a chemistry-aware retrosynthesis prediction framework that combines powerful data-driven models with prior domain knowledge.
The proposed framework, grammar-based molecular attention tree transformer (G-MATT), achieves significant performance improvements compared to baseline retrosynthesis models.
arXiv Detail & Related papers (2023-05-04T21:04:19Z)
- SemiRetro: Semi-template framework boosts deep retrosynthesis prediction [38.42917984016527]
Template-based (TB) and template-free (TF) molecule graph learning methods have shown promising results for retrosynthesis.
We propose breaking a full-template into several semi-templates and embedding them into the two-step TF framework.
Experimental results show that SemiRetro significantly outperforms both existing TB and TF methods.
arXiv Detail & Related papers (2022-02-12T00:38:11Z)
- RetroComposer: Discovering Novel Reactions by Composing Templates for Retrosynthesis Prediction [63.14937611038264]
We propose an innovative retrosynthesis prediction framework that can compose novel templates beyond training templates.
Experimental results show that our method can produce novel templates for 328 test reactions in the USPTO-50K dataset.
arXiv Detail & Related papers (2021-12-20T05:57:07Z)
- Permutation invariant graph-to-sequence model for template-free retrosynthesis and reaction prediction [2.5655440962401617]
We describe a novel Graph2SMILES model that combines the power of Transformer models for text generation with the permutation invariance of molecular graph encoders.
As an end-to-end architecture, Graph2SMILES can be used as a drop-in replacement for the Transformer in any task involving molecule(s)-to-molecule(s) transformations.
arXiv Detail & Related papers (2021-10-19T01:23:15Z)
- RetCL: A Selection-based Approach for Retrosynthesis via Contrastive Learning [107.64562550844146]
Retrosynthesis is an emerging application area for deep learning.
We propose a new approach that reformulates retrosynthesis as the problem of selecting reactants from a candidate set of commercially available molecules.
For learning the score functions, we also propose a novel contrastive training scheme with hard negative mining.
arXiv Detail & Related papers (2021-05-03T12:47:57Z)
- State-of-the-Art Augmented NLP Transformer models for direct and single-step retrosynthesis [0.0]
We investigated the effect of different training scenarios on predicting retrosynthesis of chemical compounds.
Data augmentation, which is a powerful method used in image processing, eliminated the effect of data memorization by neural networks.
arXiv Detail & Related papers (2020-03-05T18:11:11Z)
- Assessing Graph-based Deep Learning Models for Predicting Flash Point [52.931492216239995]
Graph-based deep learning (GBDL) models were implemented in predicting flash point for the first time.
Average R^2 and Mean Absolute Error (MAE) scores of MPNN are, respectively, 2.3% lower and 2.0 K higher than in previous comparable studies.
arXiv Detail & Related papers (2020-02-26T06:10:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.