Genetic-guided GFlowNets for Sample Efficient Molecular Optimization
- URL: http://arxiv.org/abs/2402.05961v3
- Date: Tue, 29 Oct 2024 21:19:53 GMT
- Title: Genetic-guided GFlowNets for Sample Efficient Molecular Optimization
- Authors: Hyeonah Kim, Minsu Kim, Sanghyeok Choi, Jinkyoo Park
- Abstract summary: Recent advances in deep learning-based generative methods have shown promise but face challenges in sample efficiency.
This paper proposes a novel algorithm for sample-efficient molecular optimization by distilling a powerful genetic algorithm into a deep generative policy.
- Score: 33.270494123656746
- Abstract: The challenge of discovering new molecules with desired properties is crucial in domains like drug discovery and material design. Recent advances in deep learning-based generative methods have shown promise but face sample-efficiency issues due to the computational expense of evaluating the reward function. This paper proposes a novel algorithm for sample-efficient molecular optimization by distilling a powerful genetic algorithm into a deep generative policy using GFlowNets training, an off-policy method for amortized inference. This approach enables the deep generative policy to learn from domain knowledge that has been explicitly integrated into the genetic algorithm. Our method achieves state-of-the-art performance on the official molecular optimization benchmark, significantly outperforming previous methods. It also demonstrates effectiveness in designing inhibitors against SARS-CoV-2 with substantially fewer reward calls.
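To make the described recipe concrete, here is a minimal toy sketch under simplifying assumptions: "molecules" are fixed-length token sequences, a cheap function stands in for the property oracle, a one-point-crossover/point-mutation GA supplies off-policy samples, and the policy is trained with a trajectory-balance loss. The real method operates on molecular representations with a domain-specific genetic algorithm; every name, architecture, and hyperparameter below is illustrative.

```python
import random

import torch
import torch.nn as nn

VOCAB, LENGTH = 8, 12  # toy alphabet size and sequence length


def reward(seq):
    """Stand-in for an expensive property oracle; must be strictly positive."""
    return 1.0 + float(sum(seq))


class PolicyNet(nn.Module):
    """Forward policy P_F(next token | prefix) plus a trajectory-balance log Z."""

    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB + 1, 32)      # index VOCAB is a start token
        self.rnn = nn.GRU(32, 64, batch_first=True)
        self.out = nn.Linear(64, VOCAB)
        self.log_Z = nn.Parameter(torch.zeros(()))  # learned log-partition estimate

    def logits(self, prefix):
        x = torch.tensor([[VOCAB] + prefix])
        h, _ = self.rnn(self.emb(x))
        return self.out(h[:, -1])


def ga_offspring(population):
    """One-point crossover plus point mutation: the domain-knowledge proposer."""
    a, b = random.sample(population, 2)
    cut = random.randrange(1, LENGTH)
    child = a[:cut] + b[cut:]
    if random.random() < 0.3:
        child[random.randrange(LENGTH)] = random.randrange(VOCAB)
    return child


def tb_loss(policy, seq):
    """Trajectory balance for a left-to-right builder (backward policy is trivial)."""
    log_pf = sum(torch.log_softmax(policy.logits(seq[:t]), dim=-1)[0, seq[t]]
                 for t in range(LENGTH))
    return (policy.log_Z + log_pf - torch.log(torch.tensor(reward(seq)))) ** 2


policy = PolicyNet()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
population = [[random.randrange(VOCAB) for _ in range(LENGTH)] for _ in range(20)]

for step in range(50):
    child = ga_offspring(population)   # the GA generates an off-policy sample
    population = sorted(population + [child], key=reward, reverse=True)[:20]
    loss = tb_loss(policy, child)      # distill the GA sample into the policy
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the actual method the GA proposes many molecules per round and only a limited number are scored against the true reward, but the off-policy distillation step has this general shape.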
Related papers
- Quantum-inspired Reinforcement Learning for Synthesizable Drug Design [20.00111975801053]
We introduce a novel approach that uses reinforcement learning with a quantum-inspired simulated annealing policy neural network to navigate the vast discrete space of chemical structures intelligently.
Specifically, we employ a deterministic REINFORCE algorithm with policy neural networks that output transition probabilities to guide state transitions and local search.
Our methods are evaluated with the Practical Molecular Optimization (PMO) benchmark framework under a 10K query budget.
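A loose toy rendering of that combination (not the paper's implementation): a small policy network produces transition probabilities, a simulated-annealing rule decides whether to accept each proposed move, and REINFORCE pushes the network toward transitions that led to higher reward. The state, action effect, reward function, and annealing schedule are all placeholders.

```python
import math
import random

import torch
import torch.nn as nn
from torch.distributions import Categorical

policy = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)


def reward(state):
    """Stand-in for a docking/property oracle."""
    return float(state.sum())


state, temperature = torch.zeros(16), 1.0
for step in range(100):
    dist = Categorical(logits=policy(state))
    action = int(dist.sample())                      # proposed transition
    proposal = state.clone()
    proposal[action * 4:(action + 1) * 4] += 0.1     # toy local move
    delta = reward(proposal) - reward(state)
    # Simulated-annealing acceptance of the proposed transition.
    if delta > 0 or random.random() < math.exp(delta / temperature):
        state = proposal
    # REINFORCE update: weight the action's log-probability by the reward.
    loss = -dist.log_prob(torch.tensor(action)) * reward(state)
    opt.zero_grad()
    loss.backward()
    opt.step()
    temperature *= 0.99                              # annealing schedule
```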
arXiv Detail & Related papers (2024-09-13T20:43:16Z) - Optimal feature rescaling in machine learning based on neural networks [0.0]
An optimal rescaling of input features (OFR) is carried out by a Genetic Algorithm (GA).
The OFR reshapes the input space improving the conditioning of the gradient-based algorithm used for the training.
The approach has been tested on an FFNN modeling the outcome of a real industrial process.
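A minimal sketch of the OFR idea under stated assumptions (synthetic data, a small network, toy GA settings; none of this comes from the paper): a genetic algorithm searches per-feature scale factors, and an individual's fitness is the validation error of a feedforward network trained on the rescaled inputs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) * np.array([1.0, 100.0, 0.01, 10.0, 1.0])  # badly scaled
y = X[:, 0] + 0.01 * X[:, 1] + 50.0 * X[:, 2]


def fitness(scales):
    """Lower is better: validation MSE of an FFNN trained on rescaled features."""
    Xs = X * scales
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=300, random_state=0)
    model.fit(Xs[:150], y[:150])
    return float(np.mean((model.predict(Xs[150:]) - y[150:]) ** 2))


pop = [rng.uniform(0.01, 10.0, size=5) for _ in range(10)]
for gen in range(5):
    pop.sort(key=fitness)
    parents = pop[:4]                                                  # selection
    children = []
    for _ in range(6):
        i, j = rng.choice(4, 2, replace=False)
        child = np.where(rng.random(5) < 0.5, parents[i], parents[j])  # crossover
        child *= rng.lognormal(0.0, 0.1, size=5)                       # mutation
        children.append(child)
    pop = parents + children

pop.sort(key=fitness)
print("best per-feature scales:", pop[0])
```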
arXiv Detail & Related papers (2024-02-13T21:57:31Z) - Thompson sampling for improved exploration in GFlowNets [75.89693358516944]
Generative flow networks (GFlowNets) are amortized variational inference algorithms that treat sampling from a distribution over compositional objects as a sequential decision-making problem with a learnable action policy.
We show in two domains that TS-GFN yields improved exploration and thus faster convergence to the target distribution than the off-policy exploration strategies used in past work.
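A minimal illustration of the Thompson-sampling mechanism as read from the abstract (not the TS-GFN code): an ensemble of policy heads serves as an approximate posterior, one head is drawn per trajectory, and the collected trajectories would then feed the usual off-policy GFlowNet objective.

```python
import random

import torch
import torch.nn as nn
from torch.distributions import Categorical

N_HEADS, N_ACTIONS, HORIZON = 4, 6, 5
trunk = nn.Linear(HORIZON, 32)                       # shared body
heads = nn.ModuleList([nn.Linear(32, N_ACTIONS) for _ in range(N_HEADS)])


def rollout():
    """Sample ONE head for the whole trajectory (the Thompson-sampling step)."""
    k = random.randrange(N_HEADS)
    state = torch.zeros(HORIZON)                     # partially built object
    trajectory = []
    for t in range(HORIZON):
        logits = heads[k](torch.relu(trunk(state)))
        action = int(Categorical(logits=logits).sample())
        trajectory.append((state.clone(), action, k))
        state[t] = float(action)                     # toy deterministic transition
    return trajectory


# Each collected trajectory would then be scored by a reward and used in an
# off-policy GFlowNet objective (e.g. trajectory balance), updating the heads.
print(len(rollout()))
```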
arXiv Detail & Related papers (2023-06-30T14:19:44Z) - Genetically Modified Wolf Optimization with Stochastic Gradient Descent for Optimising Deep Neural Networks [0.0]
This research aims to analyze an alternative approach to optimizing neural network (NN) weights, with the use of population-based metaheuristic algorithms.
A hybrid between Grey Wolf Optimization (GWO) and Genetic Algorithms (GA) is explored, in conjunction with Stochastic Gradient Descent (SGD).
This algorithm combines exploitation and exploration while also tackling the issue of high dimensionality.
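A hedged sketch of what such a hybrid can look like; the grey-wolf move, the crossover/mutation step, and the SGD refinement below are simplified, and a quadratic function stands in for the network's training loss.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, POP, GENS = 20, 12, 30
target = rng.normal(size=DIM)


def loss(w):
    """Stand-in for a neural network's training loss over flattened weights w."""
    return float(np.sum((w - target) ** 2))


def grad(w):
    """Analytic gradient of the stand-in loss."""
    return 2.0 * (w - target)


pop = rng.normal(size=(POP, DIM))
for it in range(GENS):
    order = np.argsort([loss(w) for w in pop])
    alpha, beta, delta = pop[order[:3]]               # the three leading wolves
    a = 2.0 * (1 - it / GENS)                         # shrinking exploration coefficient
    for i in range(POP):
        # Grey-wolf move toward the leaders (exploration early, exploitation late).
        pull = (alpha + beta + delta) / 3 - pop[i]
        pop[i] += a * rng.uniform(-1, 1, DIM) * pull
        # GA-style crossover with a random leader, plus Gaussian mutation.
        mate = pop[order[rng.integers(3)]]
        mask = rng.random(DIM) < 0.5
        pop[i] = np.where(mask, pop[i], mate) + rng.normal(0, 0.01, DIM)
        # SGD refinement: a few gradient steps on each individual.
        for _ in range(3):
            pop[i] -= 0.05 * grad(pop[i])

print("best loss:", min(loss(w) for w in pop))
```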
arXiv Detail & Related papers (2023-01-21T13:22:09Z) - Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, based on minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z) - Direct Mutation and Crossover in Genetic Algorithms Applied to Reinforcement Learning Tasks [0.9137554315375919]
This paper focuses on applying neuroevolution with a simple genetic algorithm (GA) to find the weights of a neural network that produce optimally behaving agents.
We present two novel modifications that improve the data efficiency and speed of convergence when compared to the initial implementation.
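An illustrative neuroevolution loop in the spirit of the summary above (the environment, policy, and GA settings are placeholders, not the paper's): each genome is the flat weight vector of a tiny linear policy, fitness is the return on a toy control task, and new genomes come from direct crossover and Gaussian mutation of parent weights.

```python
import numpy as np

rng = np.random.default_rng(2)
OBS, ACT, POP = 4, 2, 16


def episode_return(weights):
    """Roll out a greedy linear policy on a toy 'stay near the origin' task."""
    W = weights.reshape(ACT, OBS)
    state, total = rng.normal(size=OBS), 0.0
    for _ in range(20):
        action = int(np.argmax(W @ state))
        state = state + (0.1 if action == 0 else -0.1)
        total -= float(np.abs(state).sum())
    return total


pop = [rng.normal(size=OBS * ACT) for _ in range(POP)]
for gen in range(10):
    scored = sorted(pop, key=episode_return, reverse=True)
    elite = scored[: POP // 4]                                      # truncation selection
    children = []
    while len(children) < POP - len(elite):
        i, j = rng.choice(len(elite), 2, replace=False)
        cut = int(rng.integers(1, OBS * ACT))
        child = np.concatenate([elite[i][:cut], elite[j][cut:]])    # direct crossover
        child += rng.normal(0, 0.05, child.shape)                   # direct mutation
        children.append(child)
    pop = elite + children

print("best return:", episode_return(pop[0]))
```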
arXiv Detail & Related papers (2022-01-13T07:19:28Z) - Improving RNA Secondary Structure Design using Deep Reinforcement Learning [69.63971634605797]
We propose a new benchmark for applying reinforcement learning to RNA sequence design, in which the objective function is defined to be the free energy of the sequence's secondary structure.
We present an ablation analysis of these algorithms, along with graphs of their performance across batches.
arXiv Detail & Related papers (2021-11-05T02:54:06Z) - JANUS: Parallel Tempered Genetic Algorithm Guided by Deep Neural Networks for Inverse Molecular Design [1.6114012813668934]
Inverse molecular design, i.e., designing molecules with specific target properties, can be posed as an optimization problem.
JANUS is a genetic algorithm inspired by parallel tempering that propagates two populations, one for exploration and another for exploitation.
JANUS is augmented by a deep neural network that approximates molecular properties via active learning for enhanced sampling of the chemical space.
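A toy rendering of that two-population structure (illustrative only, not the released JANUS implementation): an exploratory population with aggressive mutation, an exploitative population with mild mutation, a periodic exchange between them, and a cheap surrogate of the property (a least-squares fit here, a DNN trained by active learning in JANUS) used to pre-screen candidates before spending oracle calls.

```python
import numpy as np

rng = np.random.default_rng(3)
DIM = 10


def true_property(x):
    """Stand-in for the expensive property oracle."""
    return -float(np.sum((x - 1.0) ** 2))


def mutate(x, sigma):
    return x + rng.normal(0, sigma, DIM)


explore = [rng.normal(size=DIM) for _ in range(8)]
exploit = [rng.normal(size=DIM) for _ in range(8)]
history_x, history_y = [], []

for gen in range(10):
    # Two populations: aggressive mutation explores, mild mutation exploits.
    explore = sorted((mutate(x, 1.0) for x in explore), key=true_property, reverse=True)[:8]
    exploit = sorted((mutate(x, 0.1) for x in exploit), key=true_property, reverse=True)[:8]
    history_x += explore + exploit
    history_y += [true_property(x) for x in explore + exploit]
    # Parallel-tempering-style exchange between the two populations.
    exploit[-1], explore[-1] = explore[0].copy(), exploit[0].copy()
    # Surrogate fit on evaluated members pre-screens new candidates cheaply.
    X, y = np.stack(history_x), np.array(history_y)
    w = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)[0]
    candidates = [mutate(exploit[0], 0.1) for _ in range(20)]
    scores = [float(np.r_[c, 1.0] @ w) for c in candidates]
    exploit.append(candidates[int(np.argmax(scores))])

print("best property found:", max(history_y))
```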
arXiv Detail & Related papers (2021-06-07T23:41:34Z) - Variance-Reduced Off-Policy Memory-Efficient Policy Search [61.23789485979057]
Off-policy policy optimization is a challenging problem in reinforcement learning.
The proposed algorithms are memory-efficient and capable of learning from off-policy samples.
arXiv Detail & Related papers (2020-09-14T16:22:46Z) - Guiding Deep Molecular Optimization with Genetic Exploration [79.50698140997726]
We propose genetic expert-guided learning (GEGL), a framework for training a deep neural network (DNN) to generate highly-rewarding molecules.
Extensive experiments show that GEGL significantly improves over state-of-the-art methods.
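A rough sketch of the expert-guided loop GEGL describes, with toy components in place of the real ones: a neural apprentice samples sequences, a genetic expert mutates the best of them, the highest-reward results are kept in a bounded priority queue, and the apprentice is trained by maximum likelihood to imitate the queue. The per-position categorical "apprentice" and the reward are deliberately trivial.

```python
import heapq
import random

import torch
from torch.distributions import Categorical

VOCAB, LENGTH, QUEUE = 6, 8, 32
logits = torch.zeros(LENGTH, VOCAB, requires_grad=True)   # toy per-position apprentice
opt = torch.optim.Adam([logits], lr=0.05)


def reward(seq):
    """Stand-in for the molecular property oracle."""
    return sum(seq)


def sample():
    return [int(Categorical(logits=logits[t]).sample()) for t in range(LENGTH)]


def genetic_expert(seq):
    """Point mutation of an apprentice sample; GEGL uses molecular GA operators."""
    seq = list(seq)
    seq[random.randrange(LENGTH)] = random.randrange(VOCAB)
    return seq


queue = []                                                 # min-heap of (reward, seq)
for step in range(200):
    x = sample()
    for cand in (x, genetic_expert(x)):
        heapq.heappush(queue, (reward(cand), cand))
        if len(queue) > QUEUE:
            heapq.heappop(queue)                           # evict the lowest reward
    # Imitation step: maximize likelihood of a queued high-reward sequence.
    _, seq = random.choice(queue)
    nll = -sum(torch.log_softmax(logits[t], dim=-1)[seq[t]] for t in range(LENGTH))
    opt.zero_grad()
    nll.backward()
    opt.step()
```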
arXiv Detail & Related papers (2020-07-04T05:01:26Z) - Implementation Matters in Deep Policy Gradients: A Case Study on PPO and TRPO [90.90009491366273]
We study the roots of algorithmic progress in deep policy gradient algorithms through a case study on two popular algorithms.
Specifically, we investigate the consequences of "code-level optimizations."
Our results show that they (a) are responsible for most of PPO's gain in cumulative reward over TRPO, and (b) fundamentally change how RL methods function.
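Two standalone examples of the kind of "code-level optimization" the study refers to, value-function clipping and running observation/reward normalization; these live in common PPO implementations rather than in the algorithm's statement, and the snippets below are illustrative rather than the paper's code.

```python
import torch


def clipped_value_loss(values, old_values, returns, clip=0.2):
    """Clip the value update around the old prediction, mirroring the policy clip."""
    clipped = old_values + torch.clamp(values - old_values, -clip, clip)
    return torch.max((values - returns) ** 2, (clipped - returns) ** 2).mean()


class RunningNorm:
    """Online mean/variance normalizer, as applied to observations or rewards."""

    def __init__(self, eps=1e-8):
        self.mean, self.var, self.count, self.eps = 0.0, 0.0, 0, eps

    def __call__(self, x):
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        self.var += (delta * (x - self.mean) - self.var) / self.count
        return (x - self.mean) / (self.var + self.eps) ** 0.5


norm = RunningNorm()
print(norm(1.0), norm(3.0), norm(2.0))                 # normalized reward stream
print(clipped_value_loss(torch.tensor([1.5]), torch.tensor([1.0]), torch.tensor([2.0])))
```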
arXiv Detail & Related papers (2020-05-25T16:24:59Z)