Gradient GA: Gradient Genetic Algorithm for Drug Molecular Design
- URL: http://arxiv.org/abs/2502.09860v1
- Date: Fri, 14 Feb 2025 02:03:39 GMT
- Title: Gradient GA: Gradient Genetic Algorithm for Drug Molecular Design
- Authors: Chris Zhuang, Debadyuti Mukherjee, Yingzhou Lu, Tianfan Fu, Ruqi Zhang
- Abstract summary: Experimental results demonstrate that our method significantly improves both convergence speed and solution quality, outperforming cutting-edge techniques.
For example, it achieves up to a 25% improvement in the top-10 score over the vanilla genetic algorithm.
- Score: 17.597915824192953
- Abstract: Molecular discovery has brought great benefits to the chemical industry. Various molecular design techniques have been developed to identify molecules with desirable properties. Traditional optimization methods, such as genetic algorithms, continue to achieve state-of-the-art results across multiple molecular design benchmarks. However, these techniques rely solely on random-walk exploration, which hinders both the quality of the final solution and the convergence speed. To address this limitation, we propose a novel approach called Gradient Genetic Algorithm (Gradient GA), which incorporates gradient information from the objective function into genetic algorithms. Instead of random exploration, each proposed sample iteratively progresses toward an optimal solution by following the gradient direction. We achieve this by designing a differentiable objective function parameterized by a neural network and utilizing the Discrete Langevin Proposal to enable gradient guidance in discrete molecular spaces. Experimental results demonstrate that our method significantly improves both convergence speed and solution quality, outperforming cutting-edge techniques. For example, it achieves up to a 25% improvement in the top-10 score over the vanilla genetic algorithm. The code is publicly available at https://github.com/debadyuti23/GradientGA.
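As a rough sketch of how gradient guidance can enter a genetic algorithm, the toy example below encodes molecules as binary feature vectors, uses a small differentiable stand-in for the learned objective, and mutates offspring with a Discrete Langevin Proposal, where each bit flips with probability sigma(0.5 * g_i * (1 - 2 x_i) - 1/(2 alpha)). The encoding, surrogate, and hyperparameters are illustrative assumptions, not the paper's implementation (see the linked repository for that).

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64                      # toy molecule encoding: D binary features (assumption)
W = rng.normal(size=D)      # weights of a stand-in differentiable "property" surrogate

def score(x):
    # toy differentiable objective; the paper trains a neural surrogate instead
    return np.tanh(x @ W)

def grad(x):
    # analytic gradient of the toy objective
    return (1.0 - np.tanh(x @ W) ** 2) * W

def dlp_mutate(x, alpha=0.5):
    # Discrete Langevin Proposal for binary vectors: each bit flips with
    # probability sigma(0.5 * g_i * (1 - 2 x_i) - 1/(2 alpha))
    a = 0.5 * grad(x) * (1.0 - 2.0 * x) - 1.0 / (2.0 * alpha)
    flips = rng.random(D) < 1.0 / (1.0 + np.exp(-a))
    return np.where(flips, 1.0 - x, x)

def crossover(x, y):
    mask = rng.random(D) < 0.5          # uniform crossover
    return np.where(mask, x, y)

# gradient-guided GA loop (structure only; selection details are assumptions)
pop = rng.integers(0, 2, size=(32, D)).astype(float)
for _ in range(50):
    fitness = np.array([score(x) for x in pop])
    parents = pop[np.argsort(fitness)[-16:]]          # truncation selection
    pop = np.stack([dlp_mutate(crossover(parents[rng.integers(16)],
                                         parents[rng.integers(16)]))
                    for _ in range(32)])

print("best score:", max(score(x) for x in pop))
```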
Related papers
- Derivative-Free Guidance in Continuous and Discrete Diffusion Models with Soft Value-Based Decoding [84.3224556294803]
Diffusion models excel at capturing the natural design spaces of images, molecules, DNA, RNA, and protein sequences.
We aim to optimize downstream reward functions while preserving the naturalness of these design spaces.
Our algorithm integrates soft value functions, which look ahead to how intermediate noisy states lead to high rewards in the future.
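A minimal sketch of the idea under stand-in assumptions: at each reverse-diffusion step, sample several candidate next states, score them with a soft value function estimating the reward reachable from each, and resample proportionally to exp(v / lambda). The candidate sampler, value function, and lambda below are all placeholders, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def denoise_candidates(x_t, k=8):
    # stand-in for sampling k candidate next states from a diffusion model
    return x_t + 0.1 * rng.normal(size=(k, x_t.size))

def soft_value(x):
    # stand-in "look-ahead" value: estimates the final reward reachable from x
    return -np.sum((x - 1.0) ** 2)

def guided_step(x_t, lam=0.1):
    cands = denoise_candidates(x_t)
    v = np.array([soft_value(c) for c in cands])
    w = np.exp((v - v.max()) / lam)      # soft-value weights, numerically stable
    return cands[rng.choice(len(cands), p=w / w.sum())]

x = rng.normal(size=16)
for _ in range(100):                     # reverse-diffusion loop (schematic)
    x = guided_step(x)
print("final value:", soft_value(x))
```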
arXiv Detail & Related papers (2024-08-15T16:47:59Z)
- Massive Dimensions Reduction and Hybridization with Meta-heuristics in Deep Learning [0.24578723416255746]
Histogram-based Differential Evolution (HBDE) hybridizes gradient-based and gradient-free algorithms to optimize parameters.
HBDE outperforms baseline gradient-based and parent gradient-free DE algorithms evaluated on CIFAR-10 and CIFAR-100 datasets.
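The entry gives only the high-level recipe, so the sketch below shows a generic hybridization of this kind: a standard differential evolution (rand/1/bin) population step followed by a gradient refinement of the best member. HBDE's histogram-based dimensionality reduction is not reproduced; the loss, gradient, and constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def loss(w):                 # stand-in training loss
    return np.sum((w - 0.5) ** 2)

def grad(w):
    return 2.0 * (w - 0.5)

pop = rng.normal(size=(20, 10))
F, CR, lr = 0.8, 0.9, 0.05
for _ in range(200):
    for i in range(len(pop)):
        # simplified rand/1/bin DE step (gradient-free exploration)
        a, b, c = pop[rng.choice(len(pop), size=3, replace=False)]
        mutant = a + F * (b - c)
        cross = rng.random(pop.shape[1]) < CR
        trial = np.where(cross, mutant, pop[i])
        if loss(trial) < loss(pop[i]):
            pop[i] = trial
    best = np.argmin([loss(w) for w in pop])
    pop[best] -= lr * grad(pop[best])    # gradient-based exploitation step

print("best loss:", min(loss(w) for w in pop))
```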
arXiv Detail & Related papers (2024-08-13T20:28:20Z)
- Genetic-guided GFlowNets for Sample Efficient Molecular Optimization [33.270494123656746]
Recent advances in deep learning-based generative methods have shown promise but suffer from poor sample efficiency.
This paper proposes a novel algorithm for sample-efficient molecular optimization by distilling a powerful genetic algorithm into a deep generative policy.
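A schematic of the distillation loop under toy assumptions: the generative policy proposes sequences, genetic mutation perturbs them, and the highest-reward survivors become cross-entropy targets for the policy. The token encoding, reward, and learning rate are placeholders rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)
L, V = 8, 4                       # toy sequences: length 8, vocab of 4 tokens

def reward(seq):                  # stand-in molecular reward
    return float(np.sum(seq == 2))

logits = np.zeros((L, V))         # a deliberately tiny "generative policy"

def sample():
    p = np.exp(logits); p /= p.sum(axis=1, keepdims=True)
    return np.array([rng.choice(V, p=p[i]) for i in range(L)])

def mutate(seq):
    s = seq.copy(); s[rng.integers(L)] = rng.integers(V); return s

for _ in range(200):
    pop = [sample() for _ in range(16)]                  # policy proposes
    pop += [mutate(s) for s in pop]                      # GA perturbs (mutation only)
    elites = sorted(pop, key=reward, reverse=True)[:8]   # select high-reward samples
    for seq in elites:                                   # distill elites into the policy
        p = np.exp(logits); p /= p.sum(axis=1, keepdims=True)
        logits += 0.1 * (np.eye(V)[seq] - p)             # cross-entropy gradient step

print("mean elite reward:", np.mean([reward(s) for s in elites]))
```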
arXiv Detail & Related papers (2024-02-05T04:12:40Z)
- ELRA: Exponential learning rate adaption gradient descent optimization method [83.88591755871734]
We present a novel, fast (exponential-rate), ab initio (hyperparameter-free) gradient-based adaption method.
The main idea of the method is to adapt the learning rate $\alpha$ based on situational awareness.
It can be applied to problems of any dimension $n$ and scales only linearly.
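The summary only hints at the update rule, so the snippet below illustrates one generic way to adapt $\alpha$ from "situational awareness": grow the rate exponentially while consecutive gradients align and shrink it when they oppose. This heuristic is an assumption for illustration, not ELRA's actual rule.

```python
import numpy as np

def grad(x):                         # gradient of a toy quadratic objective
    return 2.0 * x

x = np.full(5, 10.0)
alpha, g_prev = 0.1, np.zeros(5)
for _ in range(100):
    g = grad(x)
    # cosine of consecutive gradients as a crude "situational awareness" signal
    cos = g @ g_prev / (np.linalg.norm(g) * np.linalg.norm(g_prev) + 1e-12)
    alpha *= np.exp(0.5 * cos)       # exponential rate adaptation (generic heuristic)
    x -= alpha * g
    g_prev = g

print("final |x|:", np.linalg.norm(x))
```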
arXiv Detail & Related papers (2023-09-12T14:36:13Z)
- Genetically Modified Wolf Optimization with Stochastic Gradient Descent for Optimising Deep Neural Networks [0.0]
This research analyzes an alternative approach to optimizing neural network (NN) weights using population-based metaheuristic algorithms.
A hybrid between Grey Wolf Optimization (GWO) and Genetic Algorithms (GA) is explored, in conjunction with Stochastic Gradient Descent (SGD).
This algorithm combines exploitation and exploration while also tackling the issue of high dimensionality.
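A compact sketch of such a hybrid under toy assumptions: a Grey Wolf position update driven by the three best wolves, an occasional GA-style uniform crossover, and an SGD refinement of the top-ranked member. The loss, gradient, and schedule are stand-ins, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(4)

def loss(w):                        # stand-in for a network's training loss
    return np.sum((w - 1.0) ** 2)

def grad(w):
    return 2.0 * (w - 1.0)

pop = rng.normal(size=(12, 20))
for t in range(100):
    order = np.argsort([loss(w) for w in pop])
    leaders = pop[order[:3]]                         # GWO alpha, beta, delta wolves
    a = 2.0 * (1 - t / 100)                          # GWO exploration coefficient
    for i in range(len(pop)):
        A = a * (2 * rng.random(leaders.shape) - 1)
        C = 2 * rng.random(leaders.shape)
        pop[i] = np.mean(leaders - A * np.abs(C * leaders - pop[i]), axis=0)
    # GA-style uniform crossover between two random wolves (simplified)
    i, j = rng.choice(len(pop), size=2, replace=False)
    mask = rng.random(pop.shape[1]) < 0.5
    pop[i] = np.where(mask, pop[i], pop[j])
    # SGD exploitation on the wolf that ranked best this generation
    pop[order[0]] -= 0.05 * grad(pop[order[0]])

print("best loss:", min(loss(w) for w in pop))
```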
arXiv Detail & Related papers (2023-01-21T13:22:09Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
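The core mechanism is easy to sketch: estimate the gradient of a black-box objective from two-point finite differences along random directions, then step using only the sign of the estimate. The toy objective and constants below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def objective(x):                   # black-box score; no gradients available
    return -np.sum((x - 2.0) ** 2)

def zo_grad(x, mu=0.01, q=20):
    # two-point zeroth-order gradient estimate averaged over q random directions
    g = np.zeros_like(x)
    for _ in range(q):
        u = rng.normal(size=x.shape)
        g += (objective(x + mu * u) - objective(x - mu * u)) / (2 * mu) * u
    return g / q

x = np.zeros(10)
for _ in range(300):
    x += 0.01 * np.sign(zo_grad(x))   # sign-based ascent step (ZO-signGD style)

print("objective:", objective(x))
```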
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- Improving RNA Secondary Structure Design using Deep Reinforcement Learning [69.63971634605797]
We propose a new benchmark for applying reinforcement learning to RNA sequence design, in which the objective function is defined as the free energy of the sequence's secondary structure.
We present an ablation analysis of these algorithms, along with graphs showing each algorithm's performance across batches.
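A minimal REINFORCE-style sketch of the setup: a per-position categorical policy samples nucleotide sequences and is updated with a baseline-subtracted reward. The reward here is a crude base-pairing proxy standing in for the secondary-structure free energy, which a real implementation would obtain from a folding package such as ViennaRNA.

```python
import numpy as np

rng = np.random.default_rng(6)
L, BASES = 12, 4                       # sequence length, bases {A, C, G, U} as 0..3

def reward(seq):
    # crude hairpin proxy: count complementary pairs (complement of b is 3 - b);
    # a real reward would be the negative free energy from a folding package
    return float(np.sum(seq[: L // 2] == (3 - seq[::-1][: L // 2])))

logits = np.zeros((L, BASES))          # per-position categorical policy
for _ in range(500):
    p = np.exp(logits); p /= p.sum(axis=1, keepdims=True)
    batch = [np.array([rng.choice(BASES, p=p[i]) for i in range(L)])
             for _ in range(16)]
    rs = np.array([reward(s) for s in batch])
    adv = rs - rs.mean()               # baseline-subtracted advantage
    for s, a in zip(batch, adv):
        logits += 0.05 * a * (np.eye(BASES)[s] - p)   # policy-gradient update

print("mean reward:", rs.mean())
```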
arXiv Detail & Related papers (2021-11-05T02:54:06Z)
- JANUS: Parallel Tempered Genetic Algorithm Guided by Deep Neural Networks for Inverse Molecular Design [1.6114012813668934]
Inverse molecular design, i.e., designing molecules with specific target properties, can be posed as an optimization problem.
JANUS is a genetic algorithm inspired by parallel tempering that propagates two populations, one for exploration and another for exploitation.
JANUS is augmented by a deep neural network that approximates molecular properties via active learning for enhanced sampling of the chemical space.
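A structural sketch of the two-population scheme under toy assumptions: one population mutates aggressively (exploration), the other conservatively (exploitation), a cheap surrogate fitted on a few oracle labels guides selection, and the populations exchange their best members each generation, echoing parallel tempering. The binary encoding, linear surrogate, and rates are placeholders.

```python
import numpy as np

rng = np.random.default_rng(7)
D = 32

def true_property(x):                  # expensive property oracle (stand-in)
    return float(x @ np.linspace(-1, 1, D))

def surrogate(x, w):                   # linear stand-in for the neural predictor
    return float(x @ w)

def mutate(x, rate):
    flips = rng.random(D) < rate
    return np.where(flips, 1 - x, x)

explore = rng.integers(0, 2, size=(16, D))
exploit = explore.copy()
X_seen, y_seen = [], []
for gen in range(30):
    explore = np.concatenate([explore, [mutate(x, 0.20) for x in explore]])
    exploit = np.concatenate([exploit, [mutate(x, 0.02) for x in exploit]])
    for x in explore[-4:]:             # active learning: label a few new members
        X_seen.append(x); y_seen.append(true_property(x))
    w = np.linalg.lstsq(np.array(X_seen, float), np.array(y_seen), rcond=None)[0]
    # surrogate-guided selection, then exchange best members between populations
    explore = explore[np.argsort([surrogate(x, w) for x in explore])[-16:]]
    exploit = exploit[np.argsort([surrogate(x, w) for x in exploit])[-16:]]
    exploit[0], explore[0] = explore[-1].copy(), exploit[-1].copy()

print("best oracle score:", max(true_property(x) for x in exploit))
```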
arXiv Detail & Related papers (2021-06-07T23:41:34Z)
- Guiding Deep Molecular Optimization with Genetic Exploration [79.50698140997726]
We propose genetic expert-guided learning (GEGL), a framework for training a deep neural network (DNN) to generate highly-rewarding molecules.
Extensive experiments show that GEGL significantly improves over state-of-the-art methods.
arXiv Detail & Related papers (2020-07-04T05:01:26Z)
- Towards Better Understanding of Adaptive Gradient Algorithms in Generative Adversarial Nets [71.05306664267832]
Adaptive algorithms perform gradient updates using the history of gradients and are ubiquitous in training deep neural networks.
In this paper we analyze a variant of the Optimistic Adagrad (OAdagrad) algorithm for nonconvex-nonconcave min-max problems.
Our experiments show that adaptive gradient algorithms empirically outperform their non-adaptive counterparts in GAN training.
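As a hedged illustration of what an optimistic, Adagrad-scaled update looks like (the exact algorithm analyzed in the paper may differ), the snippet below applies an extrapolated gradient step with per-coordinate Adagrad scaling to a toy bilinear min-max game, where plain simultaneous gradient descent-ascent would spiral outward.

```python
import numpy as np

# toy bilinear game min_x max_y x*y; the optimistic (extrapolated) update with
# Adagrad-style scaling spirals toward the equilibrium (0, 0)
x, y = 1.0, 1.0
gx_prev, gy_prev = 0.0, 0.0
vx, vy, lr = 1e-8, 1e-8, 0.1
for _ in range(2000):
    gx, gy = y, -x                      # descent gradients for min player and max player
    vx += gx * gx; vy += gy * gy        # Adagrad accumulators
    x -= lr * (2 * gx - gx_prev) / np.sqrt(vx)   # optimistic gradient step
    y -= lr * (2 * gy - gy_prev) / np.sqrt(vy)
    gx_prev, gy_prev = gx, gy

print("(x, y) ->", (round(x, 4), round(y, 4)))
```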
arXiv Detail & Related papers (2019-12-26T22:10:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.