Messenger RNA Design via Expected Partition Function and Continuous
Optimization
- URL: http://arxiv.org/abs/2401.00037v2
- Date: Fri, 1 Mar 2024 18:01:10 GMT
- Title: Messenger RNA Design via Expected Partition Function and Continuous
Optimization
- Authors: Ning Dai, Wei Yu Tang, Tianshuo Zhou, David H. Mathews, Liang Huang
- Abstract summary: We develop a general framework for continuous optimization based on a generalization of the classical partition function.
We consider the important problem of mRNA design with wide applications in vaccines and therapeutics.
- Score: 4.53482492156538
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The tasks of designing RNAs are discrete optimization problems, and several
versions of these problems are NP-hard. As an alternative to commonly used
local search methods, we formulate these problems as continuous optimization
and develop a general framework for this optimization based on a generalization
of the classical partition function, which we call the "expected partition function".
The basic idea is to start with a distribution over all possible candidate
sequences, and extend the objective function from a sequence to a distribution.
We then use gradient descent-based optimization methods to improve the extended
objective function, and the distribution will gradually shrink towards a
one-hot sequence (i.e., a single sequence). As a case study, we consider the
important problem of mRNA design with wide applications in vaccines and
therapeutics. While the recent work of LinearDesign can efficiently optimize
mRNAs for minimum free energy (MFE), optimizing for ensemble free energy is
much harder and likely intractable. Our approach can consistently improve over
the LinearDesign solution in terms of ensemble free energy, with bigger
improvements on longer sequences.
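Below is a minimal, self-contained sketch (not the authors' code) of the continuous-optimization idea described in the abstract: keep a categorical distribution over nucleotides at each position, extend a sequence-level objective to its expectation under that distribution, and run gradient ascent on the logits until the distribution sharpens toward a one-hot sequence. The real method optimizes an expected partition function computed by dynamic programming; here a toy expected base-pairing score stands in for it, and all function and variable names are illustrative.
```python
import numpy as np

NUCS = "ACGU"
# Toy pairing rewards: Watson-Crick pairs score 1.0, the G-U wobble pair 0.5.
W = np.zeros((4, 4))
for a, b, s in [("A", "U", 1.0), ("U", "A", 1.0),
                ("G", "C", 1.0), ("C", "G", 1.0),
                ("G", "U", 0.5), ("U", "G", 0.5)]:
    W[NUCS.index(a), NUCS.index(b)] = s

def softmax(theta):
    z = np.exp(theta - theta.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

def expected_score(p, pairs):
    """Expectation of the toy pairing score under independent per-position distributions."""
    return sum(p[i] @ W @ p[j] for i, j in pairs)

def design(n, pairs, steps=500, lr=0.5, seed=0):
    rng = np.random.default_rng(seed)
    theta = 0.01 * rng.standard_normal((n, 4))        # logits: one row of 4 per position
    for _ in range(steps):
        p = softmax(theta)
        g = np.zeros_like(p)                          # dE/dp
        for i, j in pairs:
            g[i] += W @ p[j]
            g[j] += W.T @ p[i]
        # Chain rule through the softmax: dE/dtheta = p * (g - <g, p>).
        grad = p * (g - (g * p).sum(axis=1, keepdims=True))
        theta += lr * grad                            # gradient ascent on the relaxed objective
    p = softmax(theta)
    seq = "".join(NUCS[k] for k in p.argmax(axis=1))  # round the sharpened distribution to one-hot
    return seq, expected_score(p, pairs)

if __name__ == "__main__":
    # Ask positions (0,7), (1,6), (2,5) of an 8-nt toy sequence to form pairs.
    seq, score = design(8, [(0, 7), (1, 6), (2, 5)])
    print(seq, round(float(score), 3))
```
In this toy setting the objective is multilinear in the per-position probabilities, so gradient ascent pushes the distribution toward a vertex of the simplex, mirroring the paper's observation that the distribution gradually shrinks toward a single sequence.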
Related papers
- A Continuous Relaxation for Discrete Bayesian Optimization [17.312618575552]
We show that inference and optimization can be computationally tractable.
We consider in particular the optimization setting where very few observations are available and strict evaluation budgets apply.
We show that the resulting acquisition function can be optimized with either continuous or discrete optimization algorithms.
arXiv Detail & Related papers (2024-04-26T14:47:40Z) - Functional Graphical Models: Structure Enables Offline Data-Driven Optimization [111.28605744661638]
We show how structure can enable sample-efficient data-driven optimization.
We also present a data-driven optimization algorithm that infers the FGM structure itself.
arXiv Detail & Related papers (2024-01-08T22:33:14Z) - ProGO: Probabilistic Global Optimizer [9.772380490791635]
In this paper we develop an algorithm that converges to the global optima under some mild conditions.
We show that the proposed algorithm outperforms many existing state-of-the-art methods by orders of magnitude.
arXiv Detail & Related papers (2023-10-04T22:23:40Z) - Enhancing Hyper-To-Real Space Projections Through Euclidean Norm
Meta-Heuristic Optimization [0.39146761527401425]
We show that meta-heuristic optimization can provide robust approximate solutions to different kinds of problems with a small computational burden.
Previous works addressed this issue by employing a hypercomplex representation of the search space, like quaternions, where the landscape becomes smoother and supposedly easier to optimize.
We have found that after the optimization procedure has finished, it is usually possible to obtain even better solutions by employing the Minkowski $p$-norm instead.
arXiv Detail & Related papers (2023-01-31T14:40:49Z) - An Empirical Evaluation of Zeroth-Order Optimization Methods on
AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD); a minimal sketch of this update rule appears after the related-papers list below.
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
arXiv Detail & Related papers (2022-10-27T01:58:10Z) - Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z) - Designing Biological Sequences via Meta-Reinforcement Learning and
Bayesian Optimization [68.28697120944116]
We train an autoregressive generative model via Meta-Reinforcement Learning to propose promising sequences for selection.
We pose this problem as that of finding an optimal policy over a distribution of MDPs induced by sampling subsets of the data.
Our in-silico experiments show that meta-learning over such ensembles provides robustness against reward misspecification and achieves competitive results.
arXiv Detail & Related papers (2022-09-13T18:37:27Z) - Bayesian Variational Optimization for Combinatorial Spaces [0.0]
Broad applications include the study of molecules, proteins, DNA, device structures and quantum circuit designs.
A method for optimization over categorical spaces is needed to find optimal or Pareto-optimal solutions.
We introduce a variational Bayesian optimization method that combines variational optimization and continuous relaxations.
arXiv Detail & Related papers (2020-11-03T20:56:13Z) - Obtaining Adjustable Regularization for Free via Iterate Averaging [43.75491612671571]
Regularization for optimization is a crucial technique to avoid overfitting in machine learning.
We establish an averaging scheme that converts the iterates of SGD on an arbitrary strongly convex and smooth objective function to its regularized counterpart.
Our approaches can be used for accelerated and preconditioned optimization methods as well.
arXiv Detail & Related papers (2020-08-15T15:28:05Z) - Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to a significantly smaller and computationally cheaper sub-solver for lower bounding.
In total, the proposed method reduces the time to convergence by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z) - Incorporating Expert Prior in Bayesian Optimisation via Space Warping [54.412024556499254]
In large search spaces, the algorithm goes through several low-function-value regions before reaching the optimum of the function.
One approach to shortening this cold-start phase is to use prior knowledge that can accelerate the optimisation.
In this paper, we represent the prior knowledge about the function optimum through a prior distribution.
The prior distribution is then used to warp the search space so that it expands around the high-probability region of the function optimum and shrinks around the low-probability regions.
arXiv Detail & Related papers (2020-03-27T06:18:49Z)
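For the zeroth-order entry above, here is a minimal illustrative sketch of ZO-signGD, not the benchmark code from that paper: the gradient of a black-box objective is estimated from finite differences along random directions, and only the sign of the estimate drives the update. A toy quadratic stands in for a molecular objective, and all names are illustrative.
```python
import numpy as np

def zo_sign_gd(f, x0, steps=300, lr=0.05, mu=0.01, n_dirs=20, seed=0):
    """Minimize a black-box function f using only function evaluations."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        fx = f(x)
        grad_est = np.zeros_like(x)
        for _ in range(n_dirs):
            u = rng.standard_normal(x.shape)
            # Forward-difference estimate of the directional derivative along u.
            grad_est += (f(x + mu * u) - fx) / mu * u
        grad_est /= n_dirs
        x -= lr * np.sign(grad_est)   # only the sign of the estimated gradient is used
    return x

if __name__ == "__main__":
    # Toy quadratic standing in for a molecular objective.
    target = np.array([1.0, -2.0, 0.5])
    f = lambda x: float(np.sum((x - target) ** 2))
    print(zo_sign_gd(f, np.zeros(3)).round(2))
```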