Self-focusing virtual screening with active design space pruning
- URL: http://arxiv.org/abs/2205.01753v1
- Date: Tue, 3 May 2022 19:47:30 GMT
- Title: Self-focusing virtual screening with active design space pruning
- Authors: David E. Graff, Matteo Aldeghi, Joseph A. Morrone, Kirk E. Jordan,
Edward O. Pyzer-Knapp and Connor W. Coley
- Abstract summary: We propose an extension to the framework of model-guided optimization that mitigates inference costs using a technique we refer to as design space pruning (DSP).
We study the application of DSP to a variety of optimization tasks and observe significant reductions in overhead costs while maintaining performance similar to the baseline optimization.
- Score: 2.853824009425466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High-throughput virtual screening is an indispensable technique utilized in
the discovery of small molecules. In cases where the library of molecules is
exceedingly large, the cost of an exhaustive virtual screen may be prohibitive.
Model-guided optimization has been employed to lower these costs through
dramatic increases in sample efficiency compared to random selection. However,
these techniques introduce new costs to the workflow through the surrogate
model training and inference steps. In this study, we propose an extension to
the framework of model-guided optimization that mitigates inference costs
using a technique we refer to as design space pruning (DSP), which irreversibly
removes poor-performing candidates from consideration. We study the application
of DSP to a variety of optimization tasks and observe significant reductions in
overhead costs while maintaining performance similar to the baseline
optimization. DSP represents an attractive extension of model-guided
optimization that can limit overhead costs in optimization settings where these
costs are non-negligible relative to objective costs, such as docking.
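As an illustration of the idea, the minimal sketch below runs a model-guided screening loop in which candidates whose predicted scores fall below a running threshold are irreversibly dropped from the pool, so they no longer incur surrogate inference in later iterations. This is not the authors' implementation: the surrogate model, the pruning rule, and the mock objective standing in for docking are illustrative assumptions.

```python
# Minimal sketch of design space pruning (DSP) inside a model-guided screen.
# NOT the paper's implementation: the surrogate (a random forest), the pruning
# rule (a quantile of the surrogate's predictions), and the mock objective
# standing in for docking scores are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
pool_X = rng.normal(size=(10_000, 32))      # candidate features (e.g., fingerprints)
true_score = pool_X @ rng.normal(size=32)   # mock objective; higher is better here
active = np.ones(len(pool_X), dtype=bool)   # candidates still in the design space

evaluated_idx, evaluated_y = [], []
batch_size, prune_quantile = 100, 0.5

# Seed the surrogate with a random batch of objective evaluations.
init = rng.choice(len(pool_X), size=batch_size, replace=False)
evaluated_idx.extend(init)
evaluated_y.extend(true_score[init])
active[init] = False

for it in range(5):
    surrogate = RandomForestRegressor(n_estimators=50, random_state=0)
    surrogate.fit(pool_X[evaluated_idx], evaluated_y)

    # Surrogate inference is paid only for candidates that survived pruning.
    cand = np.flatnonzero(active)
    preds = surrogate.predict(pool_X[cand])

    # Greedily acquire the top-predicted batch and evaluate the objective.
    top = cand[np.argsort(preds)[-batch_size:]]
    evaluated_idx.extend(top)
    evaluated_y.extend(true_score[top])
    active[top] = False

    # DSP: irreversibly remove poor-performing candidates from consideration.
    cutoff = np.quantile(preds, prune_quantile)
    active[cand[preds < cutoff]] = False
    print(f"iteration {it}: {active.sum()} candidates remain in the pool")
```

Because pruning is irreversible, each iteration's surrogate inference runs over a shrinking pool, which is where the overhead savings described above come from; the pruning quantile controls how aggressively (and how riskily) candidates are discarded.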
Related papers
- DiSK: Differentially Private Optimizer with Simplified Kalman Filter for Noise Reduction [57.83978915843095]
This paper introduces DiSK, a novel framework designed to significantly enhance the performance of differentially private optimizers.
To ensure practicality for large-scale training, we simplify the Kalman filtering process, minimizing its memory and computational demands.
arXiv Detail & Related papers (2024-10-04T19:30:39Z)
- Cost-Sensitive Multi-Fidelity Bayesian Optimization with Transfer of Learning Curve Extrapolation [55.75188191403343]
We introduce a utility function, predefined by each user, that describes the trade-off between the cost and performance of BO.
We validate our algorithm on various learning-curve (LC) datasets and find that it outperforms all the previous multi-fidelity BO and transfer-BO baselines we consider.
arXiv Detail & Related papers (2024-05-28T07:38:39Z)
- Evolve Cost-aware Acquisition Functions Using Large Language Models [11.209139558885035]
This paper introduces EvolCAF, a novel framework that integrates large language models (LLMs) with evolutionary computation (EC) to automatically design cost-aware AFs.
The designed cost-aware AF maximizes the utilization of available information from historical data, surrogate models and budget details.
In comparison to the well-known EIpu and EI-cool methods designed by human experts, our approach showcases remarkable efficiency and generalization across various tasks.
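For context on the human-designed baselines named above, the hedged sketch below shows expected improvement per unit cost (EIpu), which simply divides the standard expected improvement by a candidate's (predicted) evaluation cost. The posterior mean/std and cost values are placeholder inputs, and nothing here reproduces EvolCAF's LLM-evolved acquisition functions.

```python
# Sketch of a classic cost-aware acquisition function: EI per unit cost (EIpu).
# mu, sigma are an assumed surrogate posterior mean/std at a candidate x;
# cost is that candidate's (predicted) evaluation cost. Not EvolCAF's learned AF.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best, xi=0.01):
    z = (mu - best - xi) / max(sigma, 1e-12)
    return (mu - best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

def ei_per_unit_cost(mu, sigma, best, cost):
    return expected_improvement(mu, sigma, best) / max(cost, 1e-12)

# Example: a cheap, slightly worse candidate can outrank an expensive, better one.
print(ei_per_unit_cost(mu=1.2, sigma=0.3, best=1.0, cost=1.0))
print(ei_per_unit_cost(mu=1.5, sigma=0.3, best=1.0, cost=10.0))
```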
arXiv Detail & Related papers (2024-04-25T12:19:18Z)
- Landscape-Sketch-Step: An AI/ML-Based Metaheuristic for Surrogate Optimization Problems [0.0]
We introduce a new metaheuristic for global optimization in scenarios where extensive evaluations of the cost function are expensive, inaccessible, or even prohibitive.
The method, which we call Landscape-Sketch-and-Step (LSS), combines Machine Learning, Replica Optimization, and Reinforcement Learning techniques.
arXiv Detail & Related papers (2023-09-14T01:53:45Z)
- Robustified Learning for Online Optimization with Memory Costs [28.737193318136725]
We propose a novel expert-robustified learning (ERL) approach, achieving both good average performance and robustness.
For any $\lambda \geq 1$, ERL achieves $\lambda$-competitiveness against the expert algorithm and $\lambda \cdot C$-competitiveness against the optimal offline algorithm.
arXiv Detail & Related papers (2023-05-01T06:12:01Z)
- Smoothed Online Convex Optimization Based on Discounted-Normal-Predictor [68.17855675511602]
We investigate an online prediction strategy called Discounted-Normal-Predictor (Kapralov and Panigrahy, 2010) for smoothed online convex optimization (SOCO).
We show that the proposed algorithm can minimize the adaptive regret with switching cost in every interval.
arXiv Detail & Related papers (2022-05-02T08:48:22Z)
- Online Convolutional Re-parameterization [51.97831675242173]
We present online convolutional re-parameterization (OREPA), a two-stage pipeline that aims to reduce the large training overhead by squeezing the complex training-time block into a single convolution.
Compared with state-of-the-art re-parameterization models, OREPA reduces the training-time memory cost by about 70% and accelerates training by around 2x.
We also conduct experiments on object detection and semantic segmentation and show consistent improvements on the downstream tasks.
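As a generic illustration of the linearity that re-parameterization methods exploit (not OREPA's specific training-time blocks or its online squeezing schedule), the sketch below folds parallel 3x3 and 1x1 convolution branches into a single equivalent 3x3 convolution.

```python
# Generic re-parameterization identity (not OREPA itself): two parallel conv
# branches (3x3 and 1x1) are folded into one equivalent 3x3 convolution.
import torch
import torch.nn.functional as F

c_in, c_out = 8, 16
k3, b3 = torch.randn(c_out, c_in, 3, 3), torch.randn(c_out)  # 3x3 branch
k1, b1 = torch.randn(c_out, c_in, 1, 1), torch.randn(c_out)  # 1x1 branch

# Fold: zero-pad the 1x1 kernel to 3x3 (value lands in the center) and sum.
k_merged = k3 + F.pad(k1, [1, 1, 1, 1])
b_merged = b3 + b1

x = torch.randn(2, c_in, 32, 32)
y_two_branch = (F.conv2d(x, k3, b3, padding=1) +
                F.conv2d(x, k1, b1, padding=0))
y_merged = F.conv2d(x, k_merged, b_merged, padding=1)
print(torch.allclose(y_two_branch, y_merged, atol=1e-5))  # True
```

OREPA applies this kind of folding during training rather than only at deployment, which is where the memory and speed savings cited above come from.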
arXiv Detail & Related papers (2022-04-02T09:50:19Z)
- Multi-Step Budgeted Bayesian Optimization with Unknown Evaluation Costs [28.254408148839644]
We propose a non-myopic acquisition function that generalizes classical expected improvement to the setting of heterogeneous evaluation costs.
Our acquisition function outperforms existing methods in a variety of synthetic and real problems.
arXiv Detail & Related papers (2021-11-12T02:18:26Z)
- Few-shot Quality-Diversity Optimization [50.337225556491774]
Quality-Diversity (QD) optimization has been shown to be an effective tool for dealing with deceptive minima and sparse rewards in Reinforcement Learning.
We show that, given examples from a task distribution, information about the paths taken by optimization in parameter space can be leveraged to build a prior population, which when used to initialize QD methods in unseen environments, allows for few-shot adaptation.
Experiments carried in both sparse and dense reward settings using robotic manipulation and navigation benchmarks show that it considerably reduces the number of generations that are required for QD optimization in these environments.
arXiv Detail & Related papers (2021-09-14T17:12:20Z)
- Cost-Efficient Online Hyperparameter Optimization [94.60924644778558]
We propose an online HPO algorithm that reaches human expert-level performance within a single run of the experiment, while incurring only modest computational overhead compared to regular training.
arXiv Detail & Related papers (2021-01-17T04:55:30Z)
- FlexiBO: A Decoupled Cost-Aware Multi-Objective Optimization Approach for Deep Neural Networks [4.596221278839825]
We develop a novel multi-objective optimization algorithm, called Flexible Multi-Objective Bayesian Optimization (FlexiBO), to address this issue.
We evaluate FlexiBO on seven state-of-the-art DNNs for image recognition, natural language processing (NLP), and speech-to-text translation.
arXiv Detail & Related papers (2020-01-18T03:26:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.