Generate Novel Molecules With Target Properties Using Conditional
Generative Models
- URL: http://arxiv.org/abs/2009.12368v2
- Date: Wed, 6 Oct 2021 19:33:02 GMT
- Title: Generate Novel Molecules With Target Properties Using Conditional
Generative Models
- Authors: Abhinav Sagar
- Abstract summary: We present a novel neural network for generating small molecules similar to the ones in the training set.
Our network outperforms previous methods using Molecular weight, LogP and Quantitative Estimation of Drug-likeness as the evaluation metrics.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Drug discovery using deep learning has recently attracted
considerable attention, as it offers clear advantages such as higher
efficiency, less manual trial-and-error and faster turnaround. In this paper,
we present a novel neural network for generating small molecules similar to
the ones in the training set. Our network consists of an encoder made up of
bi-GRU layers that converts the input samples to a latent space, a predictor
made up of 1D-CNN layers that enhances the capability of the encoder, and a
decoder comprised of uni-GRU layers that reconstructs the samples from the
latent space representation. A condition vector in the latent space is used
to generate molecules with the desired properties. We present the loss
functions used for training our network, the experimental details and the
property prediction metrics. Our network outperforms previous methods using
molecular weight, LogP and Quantitative Estimation of Drug-likeness as the
evaluation metrics.
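The conditioning mechanism the abstract describes — an encoder mapping molecules to a latent space, a condition vector appended to steer generation, and a decoder reconstructing molecules — can be illustrated with a minimal numpy sketch. Note this is a shape-level toy, not the paper's method: the random projection matrices stand in for the actual bi-GRU encoder and uni-GRU decoder, and the latent size, vocabulary and target values are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 32  # assumed latent size (the paper's value may differ)
COND_DIM = 3     # one slot each for the target MW, LogP and QED
VOCAB = "CNOcnos()=#123456789 "  # toy SMILES character vocabulary

def encode(smiles: str) -> np.ndarray:
    """Stand-in for the bi-GRU encoder: map a SMILES string to a latent vector."""
    onehot = np.zeros((len(smiles), len(VOCAB)))
    for i, ch in enumerate(smiles):
        onehot[i, VOCAB.index(ch)] = 1.0
    W = rng.standard_normal((len(VOCAB), LATENT_DIM)) * 0.1
    return onehot.sum(axis=0) @ W  # pooled projection into the latent space

def condition(z: np.ndarray, targets: np.ndarray) -> np.ndarray:
    """Append the condition vector (desired MW, LogP, QED) to the latent code."""
    return np.concatenate([z, targets])

def decode(z_cond: np.ndarray, length: int) -> str:
    """Stand-in for the uni-GRU decoder: map the conditioned latent to characters."""
    W = rng.standard_normal((LATENT_DIM + COND_DIM, len(VOCAB))) * 0.1
    logits = z_cond @ W
    # greedy pick of a single character keeps this a pure shape demonstration
    return VOCAB[int(np.argmax(logits))] * length

z = encode("CC(=O)O")  # acetic acid as a toy input molecule
z_cond = condition(z, np.array([60.05, -0.17, 0.55]))  # illustrative MW, LogP, QED targets
out = decode(z_cond, 7)
print(z.shape, z_cond.shape, len(out))
```

The key design point survives even in this toy: conditioning is done by concatenating the property targets onto the latent vector before decoding, so the same decoder weights see both the molecular representation and the desired property values.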
Related papers
- DiffMS: Diffusion Generation of Molecules Conditioned on Mass Spectra [60.39311767532607]
DiffMS is a formula-restricted encoder-decoder generative network.
We develop a robust decoder that bridges latent embeddings and molecular structures.
Experiments show DiffMS outperforms existing models on de novo molecule generation.
arXiv Detail & Related papers (2025-02-13T18:29:48Z)
- Sliding down the stairs: how correlated latent variables accelerate learning with neural networks [8.107431208836426]
We show that correlations between latent variables along directions encoded in different input cumulants speed up learning from higher-order correlations.
Our results are confirmed in simulations of two-layer neural networks.
arXiv Detail & Related papers (2024-04-12T17:01:25Z)
- Generative Kaleidoscopic Networks [2.321684718906739]
We utilize this property of neural networks to design a dataset kaleidoscope, termed 'Generative Kaleidoscopic Networks'.
We observed this phenomenon to various degrees for other deep learning architectures such as CNNs, Transformers and U-Nets.
arXiv Detail & Related papers (2024-02-19T02:48:40Z)
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- Gradient-based Wang-Landau Algorithm: A Novel Sampler for Output Distribution of Neural Networks over the Input Space [20.60516313062773]
In this paper, we propose a novel Gradient-based Wang-Landau (GWL) sampler.
We first draw a connection between the output distribution of a neural network and the density of states (DOS) of a physical system.
Then, we renovate the classic sampler for the DOS problem, the Wang-Landau algorithm, by replacing its random proposals with gradient-based Monte Carlo proposals.
arXiv Detail & Related papers (2023-02-19T05:42:30Z)
- NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z)
- Neural network enhanced measurement efficiency for molecular groundstates [63.36515347329037]
We adapt common neural network models to learn complex groundstate wavefunctions for several molecular qubit Hamiltonians.
We find that using a neural network model provides a robust improvement over using single-copy measurement outcomes alone to reconstruct observables.
arXiv Detail & Related papers (2022-06-30T17:45:05Z)
- Domain-informed neural networks for interaction localization within astroparticle experiments [6.157382820537719]
This work proposes a domain-informed neural network architecture for experimental particle physics.
It uses particle interaction localization with the time-projection chamber (TPC) technology for dark matter research as an example application.
arXiv Detail & Related papers (2021-12-15T09:42:04Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU, making it compute-efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- Inductive Graph Neural Networks for Spatiotemporal Kriging [13.666589510218738]
We develop an inductive graph neural network model to recover data for unsampled sensors on a network/graph structure.
Empirical results on several real-world spatiotemporal datasets demonstrate the effectiveness of our model.
arXiv Detail & Related papers (2020-06-13T01:23:44Z)
- Subspace Capsule Network [85.69796543499021]
SubSpace Capsule Network (SCN) exploits the idea of capsule networks to model possible variations in the appearance or implicitly defined properties of an entity.
SCN can be applied to both discriminative and generative models without incurring computational overhead compared to CNN during test time.
arXiv Detail & Related papers (2020-02-07T17:51:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.