3DMolNet: A Generative Network for Molecular Structures
- URL: http://arxiv.org/abs/2010.06477v1
- Date: Thu, 8 Oct 2020 13:04:36 GMT
- Title: 3DMolNet: A Generative Network for Molecular Structures
- Authors: Vitali Nesterov, Mario Wieser, Volker Roth
- Abstract summary: We propose a new approach to efficiently generate molecular structures that are not restricted to a fixed size or composition.
Our model is based on the variational autoencoder which learns a translation-, rotation-, and permutation-invariant low-dimensional representation of molecules.
The compositional and structural validity of newly generated molecules has been confirmed by quantum chemical methods.
- Score: 5.446536331020099
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the recent advances in machine learning for quantum chemistry, it is now
possible to predict the chemical properties of compounds and to generate novel
molecules. Existing generative models mostly use a string- or graph-based
representation, but the precise three-dimensional coordinates of the atoms are
usually not encoded. First attempts in this direction have been proposed, where
autoregressive or GAN-based models generate atom coordinates. Those either lack
a latent space in the autoregressive setting, such that a smooth exploration of
the compound space is not possible, or cannot generalize to varying chemical
compositions. We propose a new approach to efficiently generate molecular
structures that are not restricted to a fixed size or composition. Our model is
based on the variational autoencoder which learns a translation-, rotation-,
and permutation-invariant low-dimensional representation of molecules. Our
experiments yield a mean reconstruction error below 0.05 Angstrom,
outperforming current state-of-the-art methods by a factor of four; this
error is even lower than the spatial quantization error of most chemical
descriptors. The compositional and structural validity of newly generated
molecules has been confirmed by quantum chemical methods in a set of
experiments.
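As a rough illustration of the approach described in the abstract, the sketch below shows a generic variational autoencoder over a fixed-length, padded molecular descriptor. The descriptor construction, layer sizes, latent dimension, and loss weighting are illustrative assumptions, not the paper's actual architecture.

```python
# Hedged sketch: a generic VAE over a fixed-length molecular descriptor.
# All dimensions and the random stand-in inputs are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MolVAE(nn.Module):
    def __init__(self, descriptor_dim=256, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(descriptor_dim, 512), nn.ReLU(),
                                 nn.Linear(512, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 512), nn.ReLU(),
                                 nn.Linear(512, descriptor_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar, beta=1.0):
    # Reconstruction term plus KL divergence to the standard normal prior.
    rec = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kl

# Toy usage with random stand-in descriptors; real inputs would be an
# invariant encoding of each molecule, padded to a fixed size.
x = torch.randn(8, 256)
model = MolVAE()
recon, mu, logvar = model(x)
loss = vae_loss(recon, x, mu, logvar)
loss.backward()
```

In such a setup, sampling z from the standard normal prior and decoding it would yield candidate descriptors for novel molecules, mirroring the generation step described in the abstract.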
Related papers
- GraphXForm: Graph transformer for computer-aided molecular design with application to extraction [73.1842164721868]
We present GraphXForm, a decoder-only graph transformer architecture, which is pretrained on existing compounds and then fine-tuned.
We evaluate it on two solvent design tasks for liquid-liquid extraction, showing that it outperforms four state-of-the-art molecular design techniques.
arXiv Detail & Related papers (2024-11-03T19:45:15Z)
- MUDiff: Unified Diffusion for Complete Molecule Generation [104.7021929437504]
We present a new model for generating a comprehensive representation of molecules, including atom features, 2D discrete molecule structures, and 3D continuous molecule coordinates.
We propose a novel graph transformer architecture to denoise the diffusion process.
Our model is a promising approach for designing stable and diverse molecules and can be applied to a wide range of tasks in molecular modeling.
arXiv Detail & Related papers (2023-04-28T04:25:57Z)
- Molecular Geometry-aware Transformer for accurate 3D Atomic System modeling [51.83761266429285]
We propose a novel Transformer architecture that takes nodes (atoms) and edges (bonds and nonbonding atom pairs) as inputs and models the interactions among them.
Moleformer achieves state-of-the-art on the initial state to relaxed energy prediction of OC20 and is very competitive in QM9 on predicting quantum chemical properties.
arXiv Detail & Related papers (2023-02-02T03:49:57Z)
- Exploring Chemical Space with Score-based Out-of-distribution Generation [57.15855198512551]
We propose a score-based diffusion scheme that incorporates out-of-distribution control in the generative stochastic differential equation (SDE).
Since some novel molecules may not meet the basic requirements of real-world drugs, MOOD performs conditional generation by utilizing the gradients from a property predictor.
We experimentally validate that MOOD is able to explore the chemical space beyond the training distribution, generating molecules that outscore ones found with existing methods, and even the top 0.01% of the original training pool.
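As a rough illustration of the property-guided sampling idea summarized here, the sketch below adds a property-predictor gradient to a single reverse-diffusion update. The score network, property predictor, noise scale, and guidance weight are stand-in placeholders and do not reproduce MOOD's implementation.

```python
# Hedged sketch of property-gradient-guided reverse diffusion.
import torch

def guided_reverse_step(x, t, dt, score_net, prop_net, guidance=1.0, sigma=1.0):
    """One Euler-Maruyama-style reverse step with an added property-gradient
    term steering samples toward higher predicted property scores."""
    x = x.detach().requires_grad_(True)
    prop = prop_net(x, t).sum()                  # scalar property estimate
    prop_grad = torch.autograd.grad(prop, x)[0]  # d(property)/dx
    score = score_net(x, t)                      # approximate data score
    drift = (sigma ** 2) * (score + guidance * prop_grad)
    noise = sigma * torch.sqrt(torch.tensor(dt)) * torch.randn_like(x)
    return (x + drift * dt + noise).detach()

# Toy usage with stand-in networks (Gaussian score, quadratic property).
score_net = lambda x, t: -x
prop_net = lambda x, t: -(x ** 2).sum(dim=-1)
x = torch.randn(4, 16)
for step in range(10):
    x = guided_reverse_step(x, t=1.0 - step * 0.1, dt=0.1,
                            score_net=score_net, prop_net=prop_net)
```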
arXiv Detail & Related papers (2022-06-06T06:17:11Z)
- Scalable Fragment-Based 3D Molecular Design with Reinforcement Learning [68.8204255655161]
We introduce a novel framework for scalable 3D design that uses a hierarchical agent to build molecules.
In a variety of experiments, we show that our agent, guided only by energy considerations, can efficiently learn to produce molecules with over 100 atoms.
arXiv Detail & Related papers (2022-02-01T18:54:24Z)
- Geometric Transformer for End-to-End Molecule Properties Prediction [92.28929858529679]
We introduce a Transformer-based architecture for molecule property prediction, which is able to capture the geometry of the molecule.
We modify the classical positional encoder with an initial encoding of the molecule geometry and add a learned gated self-attention mechanism.
arXiv Detail & Related papers (2021-10-26T14:14:40Z)
- Inverse design of 3d molecular structures with conditional generative neural networks [2.7998963147546148]
We propose a conditional generative neural network for 3d molecular structures with specified structural and chemical properties.
This approach is agnostic to chemical bonding and enables targeted sampling of novel molecules from conditional distributions.
arXiv Detail & Related papers (2021-09-10T12:12:38Z)
- Learning Latent Space Energy-Based Prior Model for Molecule Generation [59.875533935578375]
We learn latent space energy-based prior model with SMILES representation for molecule modeling.
Our method is able to generate molecules with validity and uniqueness competitive with state-of-the-art models.
arXiv Detail & Related papers (2020-10-19T09:34:20Z)
- Learning a Continuous Representation of 3D Molecular Structures with Deep Generative Models [0.0]
Generative models are an entirely different approach that learn to represent and optimize molecules in a continuous latent space.
We describe deep generative models of three dimensional molecular structures using atomic density grids.
We are also able to sample diverse sets of molecules based on a given input compound to increase the probability of creating valid, drug-like molecules.
arXiv Detail & Related papers (2020-10-17T01:15:47Z)
- Generating 3D Molecular Structures Conditional on a Receptor Binding Site with Deep Generative Models [0.0]
We describe for the first time a deep generative model that can generate 3D structures conditioned on a three-dimensional molecular binding pocket.
We show that valid and unique molecules can be readily sampled from the variational latent space defined by a reference 'seed' structure.
arXiv Detail & Related papers (2020-10-16T16:27:47Z)
- End-to-End Differentiable Molecular Mechanics Force Field Construction [0.5269923665485903]
We propose an alternative approach that uses graph neural networks to perceive chemical environments.
The entire process is modular and end-to-end differentiable with respect to model parameters.
We show that this approach is not only sufficient to reproduce legacy atom types, but that it can learn to accurately reproduce and extend existing molecular mechanics force fields.
arXiv Detail & Related papers (2020-10-02T20:59:46Z)
- A deep neural network for molecular wave functions in quasi-atomic minimal basis representation [0.0]
We present an adaptation of the recently proposed SchNet for Orbitals (SchNOrb) deep convolutional neural network model [Nature Commun 10, 5024] for electronic wave functions in an optimised quasi-atomic minimal basis representation.
For five organic molecules ranging from 5 to 13 heavy atoms, the model accurately predicts molecular orbital energies and wavefunctions and provides access to derived properties for chemical bonding analysis.
arXiv Detail & Related papers (2020-05-11T06:55:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.