Text-Guided Molecule Generation with Diffusion Language Model
- URL: http://arxiv.org/abs/2402.13040v1
- Date: Tue, 20 Feb 2024 14:29:02 GMT
- Title: Text-Guided Molecule Generation with Diffusion Language Model
- Authors: Haisong Gong, Qiang Liu, Shu Wu, Liang Wang
- Abstract summary: We propose the Text-Guided Molecule Generation with Diffusion Language Model (TGM-DLM).
TGM-DLM updates token embeddings within the SMILES string collectively and iteratively, using a two-phase diffusion generation process.
We demonstrate that TGM-DLM outperforms MolT5-Base, an autoregressive model, without the need for additional data resources.
- Score: 23.170313481324598
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Text-guided molecule generation is a task where molecules are generated to
match specific textual descriptions. Most existing SMILES-based molecule
generation methods rely on an autoregressive architecture. In this
work, we propose the Text-Guided Molecule Generation with Diffusion Language
Model (TGM-DLM), a novel approach that leverages diffusion models to address
the limitations of autoregressive methods. TGM-DLM updates token embeddings
within the SMILES string collectively and iteratively, using a two-phase
diffusion generation process. The first phase optimizes embeddings from random
noise, guided by the text description, while the second phase corrects invalid
SMILES strings to form valid molecular representations. We demonstrate that
TGM-DLM outperforms MolT5-Base, an autoregressive model, without the need for
additional data resources. Our findings underscore the remarkable effectiveness
of TGM-DLM in generating coherent and precise molecules with specific
properties, opening new avenues in drug discovery and related scientific
domains. Code will be released at: https://github.com/Deno-V/tgm-dlm.
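To make the two-phase generation process concrete, here is a minimal sketch of the sampling loop in Python. The update rules, step counts, and functions below are toy placeholders standing in for the paper's learned components, not TGM-DLM's actual implementation.

```python
import numpy as np

# Hypothetical stand-ins for the paper's learned networks.
def denoise_step(x_t, t, text_emb):
    """One text-guided reverse-diffusion step on token embeddings (toy update)."""
    return x_t - 0.01 * (x_t - text_emb.mean())

def correction_step(x_t, t):
    """One unguided step nudging embeddings toward valid SMILES (toy update)."""
    return 0.99 * x_t

def two_phase_sample(seq_len, dim, text_emb, n_phase1=900, n_phase2=100):
    # Phase 1: denoise from Gaussian noise, guided by the text description.
    x = np.random.randn(seq_len, dim)
    for t in reversed(range(n_phase1)):
        x = denoise_step(x, t, text_emb)
    # Phase 2: correction-only steps that repair invalid SMILES structure.
    for t in reversed(range(n_phase2)):
        x = correction_step(x, t)
    return x  # embeddings would then be rounded to the nearest SMILES tokens

emb = two_phase_sample(seq_len=64, dim=32, text_emb=np.random.randn(64, 32))
print(emb.shape)  # (64, 32)
```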
Related papers
- Text-Guided Multi-Property Molecular Optimization with a Diffusion Language Model [77.50732023411811]
We propose a text-guided multi-property molecular optimization method utilizing a transformer-based diffusion language model (TransDLM).
TransDLM leverages standardized chemical nomenclature as semantic representations of molecules and implicitly embeds property requirements into textual descriptions.
Our approach surpasses state-of-the-art methods in optimizing molecular structural similarity and enhancing chemical properties on the benchmark dataset.
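As a rough illustration of implicitly embedding property requirements into the textual description, one might compose the condition from a molecule's standardized name plus the desired property changes. The abstract does not specify the prompt format, so the function and wording below are assumptions:

```python
def build_optimization_prompt(iupac_name: str, targets: dict) -> str:
    """Compose a text condition from a standardized chemical name plus
    desired property changes (hypothetical format, not TransDLM's)."""
    reqs = "; ".join(f"{prop} should {goal}" for prop, goal in targets.items())
    return f"Optimize the molecule {iupac_name} so that {reqs}."

print(build_optimization_prompt(
    "2-acetyloxybenzoic acid",  # aspirin, named via standardized nomenclature
    {"aqueous solubility": "increase", "toxicity": "decrease"},
))
```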
arXiv Detail & Related papers (2024-10-17T14:30:27Z)
- Instruction-Based Molecular Graph Generation with Unified Text-Graph Diffusion Model [22.368332915420606]
Unified Text-Graph Diffusion Model (UTGDiff) is a framework to generate molecular graphs from instructions.
UTGDiff features a unified text-graph transformer as the denoising network, derived from pre-trained language models.
Our experimental results demonstrate that UTGDiff consistently outperforms sequence-based baselines in tasks involving instruction-based molecule generation and editing.
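The unified text-graph transformer can be pictured as one encoder attending over concatenated instruction tokens and noisy graph tokens, so the text conditions the graph through ordinary attention. The toy module below only conveys that shape; UTGDiff's graph-aware attention and pretrained initialization are omitted.

```python
import torch
import torch.nn as nn

class ToyUnifiedDenoiser(nn.Module):
    """Instruction tokens and atom/bond tokens share one transformer (sketch)."""
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, vocab)

    def forward(self, text_ids, noisy_graph_ids):
        x = torch.cat([text_ids, noisy_graph_ids], dim=1)  # one joint sequence
        h = self.encoder(self.embed(x))
        return self.head(h[:, text_ids.size(1):])  # denoise graph positions only

model = ToyUnifiedDenoiser()
logits = model(torch.randint(0, 1000, (2, 16)), torch.randint(0, 1000, (2, 24)))
print(logits.shape)  # torch.Size([2, 24, 1000])
```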
arXiv Detail & Related papers (2024-08-19T11:09:15Z)
- Derivative-Free Guidance in Continuous and Discrete Diffusion Models with Soft Value-Based Decoding [84.3224556294803]
Diffusion models excel at capturing the natural design spaces of images, molecules, DNA, RNA, and protein sequences.
We aim to optimize downstream reward functions while preserving the naturalness of these design spaces.
Our algorithm integrates soft value functions, which look ahead to how intermediate noisy states lead to high rewards in the future.
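The soft-value idea amounts to importance resampling at each denoising step: draw several candidate next states from the pretrained diffusion model and keep one with probability proportional to exp(value / alpha), with no gradients through the reward. A minimal derivative-free sketch with toy stand-ins for the value function and the denoiser's proposal:

```python
import numpy as np

def soft_value(x):
    """Hypothetical look-ahead value: estimated future reward of a noisy state."""
    return -np.sum(x ** 2)  # toy reward: prefer small-norm states

def guided_step(x_t, proposal, n_candidates=8, alpha=1.0):
    """Derivative-free guidance: resample candidates by their soft values."""
    cands = [proposal(x_t) for _ in range(n_candidates)]
    logw = np.array([soft_value(c) / alpha for c in cands])
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return cands[np.random.choice(n_candidates, p=w)]

proposal = lambda x: x + 0.1 * np.random.randn(*x.shape)  # stand-in denoiser
x = np.random.randn(8)
for _ in range(50):
    x = guided_step(x, proposal)
print(soft_value(x))  # drifts toward higher (less negative) values
```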
arXiv Detail & Related papers (2024-08-15T16:47:59Z)
- LDMol: Text-to-Molecule Diffusion Model with Structurally Informative Latent Space [55.5427001668863]
We present a novel latent diffusion model dubbed LDMol for text-conditioned molecule generation.
LDMol comprises a molecule autoencoder that produces a learnable and structurally informative feature space.
We show that LDMol can be applied to downstream tasks such as molecule-to-text retrieval and text-guided molecule editing.
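At a high level the pipeline is: train a molecule autoencoder, run text-conditioned diffusion in its latent space, then decode back to a molecule. The skeleton below shows only that control flow; every function body is a placeholder, not one of LDMol's modules.

```python
import numpy as np

def encode_smiles(smiles: str) -> np.ndarray:
    """Autoencoder encoder: SMILES -> latent (builds the latent space in training)."""
    return np.random.randn(16)  # placeholder

def decode_latent(z: np.ndarray) -> str:
    """Autoencoder decoder: latent -> SMILES."""
    return "CCO"  # placeholder

def latent_diffusion(text: str, steps: int = 100) -> np.ndarray:
    """Text-conditioned denoising in latent space, starting from noise."""
    z = np.random.randn(16)
    for _ in range(steps):
        z = 0.99 * z  # placeholder for a learned, text-conditioned step
    return z

print(decode_latent(latent_diffusion("an alcohol with two carbons")))
```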
arXiv Detail & Related papers (2024-05-28T04:59:13Z)
- Data-Efficient Molecular Generation with Hierarchical Textual Inversion [48.816943690420224]
We introduce Hierarchical textual Inversion for Molecular generation (HI-Mol), a novel data-efficient molecular generation method.
HI-Mol is inspired by the importance of hierarchical information, e.g., both coarse- and fine-grained features, in understanding the molecule distribution.
Compared to the conventional textual inversion method in the image domain using a single-level token embedding, our multi-level token embeddings allow the model to effectively learn the underlying low-shot molecule distribution.
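Multi-level textual inversion can be pictured as learnable embeddings at several granularities, stacked and fed to a frozen generative model during inversion training. A sketch in which the three-level split and sizes are assumptions, not HI-Mol's exact design:

```python
import torch
import torch.nn as nn

dim, n_clusters, n_mols = 64, 4, 32
coarse = nn.Parameter(torch.randn(1, dim))           # dataset-level concept token
middle = nn.Parameter(torch.randn(n_clusters, dim))  # cluster-level feature tokens
detail = nn.Parameter(torch.randn(n_mols, dim))      # molecule-level feature tokens

def inversion_tokens(cluster_id: int, mol_id: int) -> torch.Tensor:
    """Stack the hierarchy into token embeddings for a frozen
    text-to-molecule model; only these embeddings are trained."""
    return torch.stack([coarse[0], middle[cluster_id], detail[mol_id]])

print(inversion_tokens(2, 7).shape)  # torch.Size([3, 64])
```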
arXiv Detail & Related papers (2024-05-05T08:35:23Z)
- MolXPT: Wrapping Molecules with Text for Generative Pre-training [141.0924452870112]
MolXPT is a unified language model of text and molecules pre-trained on SMILES wrapped by text.
MolXPT outperforms strong baselines of molecular property prediction on MoleculeNet.
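The wrapping step can be illustrated as replacing molecule mentions in scientific text with tagged SMILES, so that a single language model is pretrained on the mixed sequence. The tag names and replacement rule below are illustrative assumptions:

```python
def wrap_molecules(text: str, mentions: dict) -> str:
    """Replace detected molecule names with tagged SMILES (hypothetical tags)."""
    for name, smiles in mentions.items():
        text = text.replace(name, f"<mol> {smiles} </mol>")
    return text

print(wrap_molecules(
    "Aspirin inhibits cyclooxygenase.",
    {"Aspirin": "CC(=O)Oc1ccccc1C(=O)O"},
))
# -> "<mol> CC(=O)Oc1ccccc1C(=O)O </mol> inhibits cyclooxygenase."
```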
arXiv Detail & Related papers (2023-05-18T03:58:19Z)
- TESS: Text-to-Text Self-Conditioned Simplex Diffusion [56.881170312435444]
Text-to-Text Self-Conditioned Simplex Diffusion (TESS) employs a new form of self-conditioning and applies the diffusion process in the logit simplex space rather than the learned embedding space.
We demonstrate that TESS outperforms state-of-the-art non-autoregressive models, requires fewer diffusion steps with minimal drop in performance, and is competitive with pretrained autoregressive sequence-to-sequence models.
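One simplex-space denoising step with self-conditioning can be sketched as follows: token states live on the probability simplex (softmax over logits), and the denoiser also receives its own previous estimate of the clean distribution. All components below are placeholders, not TESS's architecture:

```python
import torch
import torch.nn.functional as F

def denoiser(noisy_probs, prev_est):
    """Stand-in network; a real model would be a transformer."""
    return 0.5 * (noisy_probs + prev_est)

def simplex_step(logits_t, prev_est, noise_scale=0.1):
    noisy_probs = F.softmax(logits_t, dim=-1)     # map logits onto the simplex
    est = denoiser(noisy_probs, prev_est)         # self-conditioned estimate
    logits_next = torch.log(est + 1e-9) + noise_scale * torch.randn_like(logits_t)
    return logits_next, est

logits = torch.randn(8, 100)            # 8 positions over a 100-token vocabulary
prev = torch.full((8, 100), 1.0 / 100)  # uniform initial self-condition
for _ in range(10):
    logits, prev = simplex_step(logits, prev)
print(prev.argmax(-1))                  # current best token at each position
```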
arXiv Detail & Related papers (2023-05-15T06:33:45Z)