Learning Latent Space Energy-Based Prior Model for Molecule Generation
- URL: http://arxiv.org/abs/2010.09351v1
- Date: Mon, 19 Oct 2020 09:34:20 GMT
- Title: Learning Latent Space Energy-Based Prior Model for Molecule Generation
- Authors: Bo Pang, Tian Han, Ying Nian Wu
- Abstract summary: We learn a latent space energy-based prior model with SMILES representation for molecule modeling.
Our method is able to generate molecules with validity and uniqueness competitive with state-of-the-art models.
- Score: 59.875533935578375
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep generative models have recently been applied to molecule design. If the
molecules are encoded in linear SMILES strings, modeling becomes convenient.
However, models relying on string representations tend to generate invalid
samples and duplicates. Prior work addressed these issues by building models on
chemically-valid fragments or explicitly enforcing chemical rules in the
generation process. We argue that an expressive model is sufficient to
implicitly and automatically learn the complicated chemical rules from the
data, even if molecules are encoded in simple character-level SMILES strings.
We propose to learn a latent space energy-based prior model with SMILES
representation for molecule modeling. Our experiments show that our method is
able to generate molecules with validity and uniqueness competitive with
state-of-the-art models. Interestingly, generated molecules have structural and
chemical features whose distributions almost perfectly match those of the real
molecules.
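A latent space energy-based prior of the kind the abstract describes takes the form p(z) ∝ exp(-E(z)) N(z; 0, I), and is typically sampled with short-run Langevin dynamics before a decoder maps z to a SMILES string. The toy sketch below illustrates only the Langevin sampling step on a hand-written quadratic-style energy; the paper's actual model uses a learned neural-network energy function and a SMILES decoder, and the energy `E`, the dimension, and the step schedule here are illustrative assumptions, not the authors' settings.

```python
import math
import random

def energy(z):
    # Toy energy tilting the standard Gaussian base; the real model
    # learns this function with a neural network.
    return 0.1 * sum(zi ** 4 for zi in z)

def grad_energy(z, eps=1e-4):
    # Numerical gradient of the energy via central finite differences.
    g = []
    for i in range(len(z)):
        zp, zm = list(z), list(z)
        zp[i] += eps
        zm[i] -= eps
        g.append((energy(zp) - energy(zm)) / (2 * eps))
    return g

def langevin_sample(dim=8, steps=60, step_size=0.1, seed=0):
    # Short-run Langevin dynamics targeting p(z) ∝ exp(-E(z)) N(z; 0, I):
    #   z <- z - (s/2) * (grad E(z) + z) + sqrt(s) * noise
    # The +z term is the gradient of the Gaussian base's negative log-density.
    rng = random.Random(seed)
    z = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    for _ in range(steps):
        g = grad_energy(z)
        z = [zi - 0.5 * step_size * (gi + zi)
             + math.sqrt(step_size) * rng.gauss(0.0, 1.0)
             for zi, gi in zip(z, g)]
    return z

sample = langevin_sample()
print(len(sample), all(math.isfinite(zi) for zi in sample))
```

In the full pipeline, each sampled z would be passed through a trained sequence decoder to emit a character-level SMILES string; the expressiveness of the learned energy is what lets the model capture chemical validity without explicit rules.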
Related papers
- Pre-trained Molecular Language Models with Random Functional Group Masking [54.900360309677794]
We propose a SMILES-based Molecular Language Model that randomly masks SMILES subsequences corresponding to specific molecular atoms.
This technique aims to compel the model to better infer molecular structures and properties, thus enhancing its predictive capabilities.
arXiv Detail & Related papers (2024-11-03T01:56:15Z) - Chemical Language Model Linker: blending text and molecules with modular adapters [2.2667044928324747]
We propose a lightweight adapter-based strategy named Chemical Language Model Linker (ChemLML).
ChemLML blends the two single domain models and obtains conditional molecular generation from text descriptions.
We find that the choice of molecular representation used within ChemLML, SMILES versus SELFIES, has a strong influence on conditional molecular generation performance.
arXiv Detail & Related papers (2024-10-26T13:40:13Z) - LDMol: Text-to-Molecule Diffusion Model with Structurally Informative Latent Space [55.5427001668863]
We present a novel latent diffusion model dubbed LDMol for text-conditioned molecule generation.
LDMol comprises a molecule autoencoder that produces a learnable and structurally informative feature space.
We show that LDMol can be applied to downstream tasks such as molecule-to-text retrieval and text-guided molecule editing.
arXiv Detail & Related papers (2024-05-28T04:59:13Z) - Molecule Design by Latent Space Energy-Based Modeling and Gradual Distribution Shifting [53.44684898432997]
Generation of molecules with desired chemical and biological properties is critical for drug discovery.
We propose a probabilistic generative model to capture the joint distribution of molecules and their properties.
Our method achieves very strong performances on various molecule design tasks.
arXiv Detail & Related papers (2023-06-09T03:04:21Z) - MolCPT: Molecule Continuous Prompt Tuning to Generalize Molecular Representation Learning [77.31492888819935]
We propose a novel paradigm of "pre-train, prompt, fine-tune" for molecular representation learning, named molecule continuous prompt tuning (MolCPT).
MolCPT defines a motif prompting function that uses the pre-trained model to project the standalone input into an expressive prompt.
Experiments on several benchmark datasets show that MolCPT efficiently generalizes pre-trained GNNs for molecular property prediction.
arXiv Detail & Related papers (2022-12-20T19:32:30Z) - A Molecular Multimodal Foundation Model Associating Molecule Graphs with Natural Language [63.60376252491507]
We propose a molecular multimodal foundation model which is pretrained from molecular graphs and their semantically related textual data.
We believe that our model would have a broad impact on AI-empowered fields across disciplines such as biology, chemistry, materials, environment, and medicine.
arXiv Detail & Related papers (2022-09-12T00:56:57Z) - De Novo Molecular Generation with Stacked Adversarial Model [24.83456726428956]
Conditional generative adversarial models have recently been proposed as promising approaches for de novo drug design.
We propose a new generative model which extends an existing adversarial autoencoder based model by stacking two models together.
Our stacked approach generates more valid molecules, as well as molecules that are more similar to known drugs.
arXiv Detail & Related papers (2021-10-24T14:23:16Z) - Barking up the right tree: an approach to search over molecule synthesis DAGs [28.13323960125482]
Current deep generative models for molecules ignore synthesizability.
We propose a deep generative model that better represents the real world process.
We show that our approach is able to model chemical space well, producing a wide range of diverse molecules.
arXiv Detail & Related papers (2020-12-21T17:35:06Z) - Learning a Continuous Representation of 3D Molecular Structures with Deep Generative Models [0.0]
Generative models are an entirely different approach that learns to represent and optimize molecules in a continuous latent space.
We describe deep generative models of three dimensional molecular structures using atomic density grids.
We are also able to sample diverse sets of molecules based on a given input compound to increase the probability of creating valid, drug-like molecules.
arXiv Detail & Related papers (2020-10-17T01:15:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.