Deep Inverse Reinforcement Learning for Structural Evolution of Small
Molecules
- URL: http://arxiv.org/abs/2008.11804v2
- Date: Thu, 1 Oct 2020 11:20:01 GMT
- Title: Deep Inverse Reinforcement Learning for Structural Evolution of Small
Molecules
- Authors: Brighter Agyemang, Wei-Ping Wu, Daniel Addo, Michael Y. Kpiebaareh,
Ebenezer Nanor, Charles Roland Haruna
- Abstract summary: Reinforcement learning has mostly been exploited in the literature for generating novel compounds, but designing a reward function that succinctly represents the learning objective can prove daunting in complex domains.
We propose a framework for training a compound generator and learning a transferable reward function based on the entropy-maximization inverse reinforcement learning paradigm.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The size and quality of the chemical libraries available to the drug
discovery pipeline are crucial for developing new drugs or repurposing existing ones. Existing
techniques such as combinatorial organic synthesis and High-Throughput
Screening usually make the process extraordinarily tough and complicated since
the search space of synthetically feasible drugs is exorbitantly huge. While
reinforcement learning has been mostly exploited in the literature for
generating novel compounds, the requirement of designing a reward function that
succinctly represents the learning objective could prove daunting in certain
complex domains. Generative Adversarial Network-based methods also mostly
discard the discriminator after training and could be hard to train. In this
study, we propose a framework for training a compound generator and learning a
transferable reward function based on the entropy maximization inverse
reinforcement learning paradigm. We show from our experiments that the inverse
reinforcement learning route offers a rational alternative for generating
chemical compounds in domains where reward function engineering may be less
appealing or impossible while data exhibiting the desired objective is readily
available.
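The entropy-maximization IRL idea described in the abstract can be sketched in miniature. The following is a toy illustration with a linear reward over hypothetical molecular feature vectors; the featurization, data, and update schedule are assumptions for demonstration, not the authors' actual model:

```python
# Minimal sketch of a maximum-entropy IRL reward update: a linear reward
# r(x) = w . f(x) over hypothetical molecular feature vectors. The gradient
# is the expert feature expectation minus the feature expectation under the
# current soft-max (Boltzmann) distribution over generator samples.
import numpy as np

rng = np.random.default_rng(0)

def reward(w, feats):
    """Linear reward over feature vectors (rows of `feats`)."""
    return feats @ w

def maxent_irl_step(w, expert_feats, sampled_feats, lr=0.1):
    """One gradient step of max-entropy IRL on the reward parameters."""
    # Boltzmann weights over generator samples under the current reward
    logits = reward(w, sampled_feats)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    grad = expert_feats.mean(axis=0) - probs @ sampled_feats
    return w + lr * grad

# Toy data: "expert" compounds cluster near feature value 1; generator
# samples are spread broadly around 0.
expert = rng.normal(1.0, 0.1, size=(64, 4))
samples = rng.normal(0.0, 1.0, size=(256, 4))

w = np.zeros(4)
for _ in range(50):
    w = maxent_irl_step(w, expert, samples)

# After training, expert-like compounds should score higher than random ones.
print(reward(w, expert).mean() > reward(w, samples).mean())
```

The learned reward is what the abstract calls transferable: once fit, it can score compounds independently of the generator that produced the samples.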
Related papers
- GraphXForm: Graph transformer for computer-aided molecular design with application to extraction [73.1842164721868]
We present GraphXForm, a decoder-only graph transformer architecture, which is pretrained on existing compounds and then fine-tuned.
We evaluate it on two solvent design tasks for liquid-liquid extraction, showing that it outperforms four state-of-the-art molecular design techniques.
arXiv Detail & Related papers (2024-11-03T19:45:15Z)
- Utilizing Reinforcement Learning for de novo Drug Design [2.5740778707024305]
We develop a unified framework for using reinforcement learning for de novo drug design.
We study various on- and off-policy reinforcement learning algorithms and replay buffers to learn an RNN-based policy.
Our findings suggest that it is advantageous to use at least both top-scoring and low-scoring molecules for updating the policy.
arXiv Detail & Related papers (2023-03-30T07:40:50Z)
- ChemVise: Maximizing Out-of-Distribution Chemical Detection with the Novel Application of Zero-Shot Learning [60.02503434201552]
This research proposes learning approximations of complex exposures from training sets of simple ones.
We demonstrate that applying this approach to synthetic sensor responses surprisingly improves the detection of out-of-distribution obscured chemical analytes.
arXiv Detail & Related papers (2023-02-09T20:19:57Z)
- Retrieval-based Controllable Molecule Generation [63.44583084888342]
We propose a new retrieval-based framework for controllable molecule generation.
We use a small set of molecules to steer the pre-trained generative model towards synthesizing molecules that satisfy the given design criteria.
Our approach is agnostic to the choice of generative models and requires no task-specific fine-tuning.
arXiv Detail & Related papers (2022-08-23T17:01:16Z)
- CorpusBrain: Pre-train a Generative Retrieval Model for Knowledge-Intensive Language Tasks [62.22920673080208]
A single-step generative model can dramatically simplify the search process and be optimized in an end-to-end manner.
We name the pre-trained generative retrieval model CorpusBrain, since all information about the corpus is encoded in its parameters without the need to construct an additional index.
arXiv Detail & Related papers (2022-08-16T10:22:49Z)
- Exploring Chemical Space with Score-based Out-of-distribution Generation [57.15855198512551]
We propose MOOD, a score-based diffusion scheme that incorporates out-of-distribution control in the generative stochastic differential equation (SDE).
Since some novel molecules may not meet the basic requirements of real-world drugs, MOOD performs conditional generation by utilizing the gradients from a property predictor.
We experimentally validate that MOOD is able to explore the chemical space beyond the training distribution, generating molecules that outscore ones found with existing methods, and even the top 0.01% of the original training pool.
arXiv Detail & Related papers (2022-06-06T06:17:11Z)
- ChemoVerse: Manifold traversal of latent spaces for novel molecule discovery [0.7742297876120561]
It is essential to identify molecular structures with the desired chemical properties.
Recent advances in generative models using neural networks and machine learning are being widely used to design virtual libraries of drug-like compounds.
arXiv Detail & Related papers (2020-09-29T12:11:40Z)
- Generative chemistry: drug discovery with deep learning generative models [0.0]
This paper reviews the latest advances in generative chemistry which relies on generative modeling to expedite the drug discovery process.
The discussion focuses on utilizing cutting-edge generative architectures, including recurrent neural networks, variational autoencoders, adversarial autoencoders, and generative adversarial networks, for compound generation.
arXiv Detail & Related papers (2020-08-20T14:38:21Z)
- Learning To Navigate The Synthetically Accessible Chemical Space Using Reinforcement Learning [75.95376096628135]
We propose a novel forward synthesis framework powered by reinforcement learning (RL) for de novo drug design.
In this setup, the agent learns to navigate through the immense synthetically accessible chemical space.
We describe how the end-to-end training in this study represents an important paradigm in radically expanding the synthesizable chemical space.
arXiv Detail & Related papers (2020-04-26T21:40:03Z)
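The navigation idea in the entry above can be sketched as a simple episodic loop: an agent repeatedly chooses a reaction step to apply to the current compound. The "reactions" and scoring function below are toy stand-ins, not the paper's actual chemistry or RL algorithm:

```python
# Toy sketch of navigating a synthetically accessible space: each "reaction"
# appends a fragment to a SMILES-like string, and an epsilon-greedy agent
# picks the extension with the best one-step property score.
import random

REACTIONS = ["C", "O", "N"]  # hypothetical one-step extensions

def score(mol: str) -> float:
    """Placeholder property score: favor nitrogen-rich, short molecules."""
    return mol.count("N") - 0.1 * len(mol)

def rollout(steps=5, eps=0.2, seed=0):
    """Epsilon-greedy navigation: usually take the best one-step extension."""
    rng = random.Random(seed)
    mol = "C"
    for _ in range(steps):
        if rng.random() < eps:
            action = rng.choice(REACTIONS)  # explore
        else:
            action = max(REACTIONS, key=lambda a: score(mol + a))  # exploit
        mol += action
    return mol

print(rollout())
```

Because every state is reached through applied reaction steps, each generated string comes with its own toy "synthesis route", which is the appeal of forward-synthesis formulations.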
This list is automatically generated from the titles and abstracts of the papers in this site.