Utilizing Reinforcement Learning for de novo Drug Design
- URL: http://arxiv.org/abs/2303.17615v2
- Date: Tue, 30 Jan 2024 21:09:48 GMT
- Title: Utilizing Reinforcement Learning for de novo Drug Design
- Authors: Hampus Gummesson Svensson, Christian Tyrchan, Ola Engkvist, Morteza Haghir Chehreghani
- Abstract summary: We develop a unified framework for using reinforcement learning for de novo drug design.
We study various on- and off-policy reinforcement learning algorithms and replay buffers to learn an RNN-based policy.
Our findings suggest that it is advantageous to use at least both top-scoring and low-scoring molecules for updating the policy.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning-based approaches for generating novel drug molecules with
specific properties have gained a lot of interest in the last few years. Recent
studies have demonstrated promising performance for string-based generation of
novel molecules utilizing reinforcement learning. In this paper, we develop a
unified framework for using reinforcement learning for de novo drug design,
wherein we systematically study various on- and off-policy reinforcement
learning algorithms and replay buffers to learn an RNN-based policy to generate
novel molecules predicted to be active against the dopamine receptor DRD2. Our
findings suggest that it is advantageous to use at least both top-scoring and
low-scoring molecules for updating the policy when structural diversity is
essential. Using all generated molecules at an iteration seems to enhance
performance stability for on-policy algorithms. In addition, when replaying
high, intermediate, and low-scoring molecules, off-policy algorithms display
the potential of improving the structural diversity and number of active
molecules generated, but possibly at the cost of a longer exploration phase.
Our work provides an open-source framework enabling researchers to investigate
various reinforcement learning methods for de novo drug design.
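The abstract's central finding is that replaying both top-scoring and low-scoring molecules when updating the policy helps when structural diversity matters. The following is a minimal illustrative sketch of that idea, not the paper's actual open-source framework; the class and method names are hypothetical.

```python
import random

class ScoredReplayBuffer:
    """Toy replay buffer that retains both the k best- and k worst-scoring
    molecules seen so far (hypothetical sketch, not the paper's code)."""

    def __init__(self, k: int):
        self.k = k
        self.entries = []  # list of (score, smiles), kept sorted by score

    def add(self, smiles: str, score: float) -> None:
        self.entries.append((score, smiles))
        self.entries.sort(key=lambda e: e[0])
        if len(self.entries) > 2 * self.k:
            # keep the k lowest- and k highest-scoring molecules,
            # dropping the intermediate ones
            self.entries = self.entries[: self.k] + self.entries[-self.k :]

    def sample(self, n: int, rng: random.Random):
        """Draw a mixed batch of high- and low-scoring molecules
        for the next policy update."""
        return rng.sample(self.entries, min(n, len(self.entries)))
```

A policy-gradient loop would then update the RNN on batches drawn from this buffer, so that every update sees both ends of the score distribution.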
Related papers
- Many-Shot In-Context Learning for Molecular Inverse Design [56.65345962071059]
Large Language Models (LLMs) have demonstrated great performance in few-shot In-Context Learning (ICL).
We develop a new semi-supervised learning method that overcomes the lack of experimental data available for many-shot ICL.
As we show, the new method greatly improves upon existing ICL methods for molecular design while being accessible and easy to use for scientists.
arXiv Detail & Related papers (2024-07-26T21:10:50Z)
- Latent Chemical Space Searching for Plug-in Multi-objective Molecule Generation [9.442146563809953]
We develop a versatile 'plug-in' molecular generation model that incorporates objectives related to target affinity, drug-likeness, and synthesizability.
We identify PSO-ENP as the optimal variant for multi-objective molecular generation and optimization.
arXiv Detail & Related papers (2024-04-10T02:37:24Z)
- Mol-AIR: Molecular Reinforcement Learning with Adaptive Intrinsic Rewards for Goal-directed Molecular Generation [0.0]
Mol-AIR is a reinforcement learning-based framework using adaptive intrinsic rewards for goal-directed molecular generation.
In benchmark tests, Mol-AIR demonstrates superior performance over existing approaches in generating molecules with desired properties.
arXiv Detail & Related papers (2024-03-29T10:44:51Z)
- Hybrid quantum cycle generative adversarial network for small molecule generation [0.0]
This work introduces several new generative adversarial network models that integrate parametrized quantum circuits into known molecular generative adversarial networks.
The introduced machine learning models incorporate a new multi-parameter reward function grounded in reinforcement learning principles.
arXiv Detail & Related papers (2023-12-28T14:10:26Z)
- Drug Synergistic Combinations Predictions via Large-Scale Pre-Training and Graph Structure Learning [82.93806087715507]
Drug combination therapy is a well-established strategy for disease treatment with better effectiveness and less safety degradation.
Deep learning models have emerged as an efficient way to discover synergistic combinations.
Our framework achieves state-of-the-art results in comparison with other deep learning-based methods.
arXiv Detail & Related papers (2023-01-14T15:07:43Z)
- Faster and more diverse de novo molecular optimization with double-loop reinforcement learning using augmented SMILES [0.0]
We propose to use double-loop reinforcement learning with simplified molecular-input line-entry system (SMILES) augmentation to use scoring calculations more efficiently.
We find that 5-10 augmentation repeats appear safe for most scoring functions and additionally increase the diversity of the generated compounds.
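The double-loop idea above can be sketched as follows: the outer loop pays for one expensive scoring call per molecule, and the inner loop reuses that score across several augmented SMILES views for policy updates. The augmentation here is a deliberately toy re-rooting that only handles unbranched chains of single-letter atoms; a real pipeline would use something like RDKit's randomized SMILES instead. All names are hypothetical.

```python
import random

def rerooted_chain_smiles(smiles: str, rng: random.Random) -> str:
    """Toy SMILES augmentation: re-root an unbranched chain of
    single-letter atoms at a random position. E.g. "CCO" may become
    "C(C)O". Real work would use RDKit's randomized SMILES."""
    i = rng.randrange(len(smiles))
    left, right = smiles[:i][::-1], smiles[i + 1 :]
    out = smiles[i]
    if left:
        out += "(" + left + ")"
    return out + right

def double_loop_update(molecules, score_fn, update_fn,
                       repeats: int = 5, seed: int = 0) -> int:
    """Outer loop: one (expensive) scoring call per molecule.
    Inner loop: `repeats` policy updates on augmented views that
    all reuse the same score. Returns the number of scoring calls."""
    rng = random.Random(seed)
    n_scores = 0
    for smi in molecules:
        score = score_fn(smi)  # scored once
        n_scores += 1
        for _ in range(repeats):
            update_fn(rerooted_chain_smiles(smi, rng), score)
    return n_scores
```

With `repeats=5`, the policy sees five training samples per scoring call, matching the lower end of the 5-10 repeat range the abstract reports as safe.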
arXiv Detail & Related papers (2022-10-22T14:36:38Z)
- Retrieval-based Controllable Molecule Generation [63.44583084888342]
We propose a new retrieval-based framework for controllable molecule generation.
We use a small set of molecules to steer the pre-trained generative model towards synthesizing molecules that satisfy the given design criteria.
Our approach is agnostic to the choice of generative models and requires no task-specific fine-tuning.
arXiv Detail & Related papers (2022-08-23T17:01:16Z)
- Improving RNA Secondary Structure Design using Deep Reinforcement Learning [69.63971634605797]
We propose a new benchmark of applying reinforcement learning to RNA sequence design, in which the objective function is defined to be the free energy in the sequence's secondary structure.
We present an ablation analysis of these algorithms, as well as graphs of their performance across batches.
arXiv Detail & Related papers (2021-11-05T02:54:06Z)
- MIMOSA: Multi-constraint Molecule Sampling for Molecule Optimization [51.00815310242277]
Generative models and reinforcement learning approaches have had initial success, but still face difficulties in simultaneously optimizing multiple drug properties.
We propose the MultI-constraint MOlecule SAmpling (MIMOSA) approach, a sampling framework that uses an input molecule as an initial guess and samples molecules from the target distribution.
arXiv Detail & Related papers (2020-10-05T20:18:42Z)
- Learning To Navigate The Synthetically Accessible Chemical Space Using Reinforcement Learning [75.95376096628135]
We propose a novel forward synthesis framework powered by reinforcement learning (RL) for de novo drug design.
In this setup, the agent learns to navigate through the immense synthetically accessible chemical space.
We describe how the end-to-end training in this study represents an important paradigm in radically expanding the synthesizable chemical space.
arXiv Detail & Related papers (2020-04-26T21:40:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.