SCORE: A 1D Reparameterization Technique to Break Bayesian Optimization's Curse of Dimensionality
- URL: http://arxiv.org/abs/2406.12661v1
- Date: Tue, 18 Jun 2024 14:28:29 GMT
- Title: SCORE: A 1D Reparameterization Technique to Break Bayesian Optimization's Curse of Dimensionality
- Authors: Joseph Chakar
- Abstract summary: A 1D reparametrization trick is proposed to break this curse and sustain linear time complexity for BO in high-dimensional landscapes.
This fast and scalable approach named SCORE can successfully find the global minimum of needle-in-a-haystack optimization functions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian optimization (BO) has emerged as a powerful tool for navigating complex search spaces, showcasing practical applications in the fields of science and engineering. However, since it typically relies on a surrogate model to approximate the objective function, BO grapples with heightened computational costs that tend to escalate as the number of parameters and experiments grows. Several methods such as parallelization, surrogate model approximations, and memory pruning have been proposed to cut down computing time, but they all fall short of resolving the core issue behind BO's curse of dimensionality. In this paper, a 1D reparametrization trick is proposed to break this curse and sustain linear time complexity for BO in high-dimensional landscapes. This fast and scalable approach named SCORE can successfully find the global minimum of needle-in-a-haystack optimization functions and fit real-world data without the high-performance computing resources typically required by state-of-the-art techniques.
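The abstract does not disclose how SCORE constructs its 1D reparametrization, so the sketch below is only a generic illustration of the idea rather than the paper's algorithm: a D-dimensional box is folded onto a single scalar coordinate through a Z-order (bit de-interleaving) style mapping, and an ordinary GP-based BO loop with expected improvement then runs over that one scalar. The objective function, dimensionality, and resolution are illustrative assumptions; the point of the sketch is that the surrogate model stays one-dimensional no matter how large D is.

```python
# Generic illustration (not the SCORE algorithm): fold a D-dimensional box
# onto one scalar via bit de-interleaving and run 1D GP-based BO over it.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

D, BITS = 6, 8                                 # assumed dimension and per-axis resolution

def scalar_to_point(t, d=D, bits=BITS):
    """Map t in [0, 1) to a grid point in [0, 1)^d by de-interleaving its bits."""
    code = int(t * 2 ** (d * bits))            # position along the 1D curve
    x = np.zeros(d)
    for b in range(bits):                      # bit b of axis k is stored at
        for k in range(d):                     # position b * d + k of the code
            x[k] += ((code >> (b * d + k)) & 1) * 2.0 ** (b - bits)
    return x

def objective(x):                              # hypothetical test function
    return float(np.sum((x - 0.3) ** 2))       # global minimum at 0.3 * ones

rng = np.random.default_rng(0)
T = rng.uniform(size=(8, 1))                   # initial designs on the scalar axis
Y = np.array([objective(scalar_to_point(t[0])) for t in T])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(40):                            # plain BO loop, always one-dimensional
    gp.fit(T, Y)
    cand = rng.uniform(size=(2048, 1))         # candidate scalars
    mu, sd = gp.predict(cand, return_std=True)
    imp = Y.min() - mu                         # expected improvement (minimization)
    ei = imp * norm.cdf(imp / (sd + 1e-12)) + sd * norm.pdf(imp / (sd + 1e-12))
    t_next = cand[np.argmax(ei)]
    T = np.vstack([T, t_next])
    Y = np.append(Y, objective(scalar_to_point(t_next[0])))

print("best value found:", Y.min())
```

A bit-interleaving curve is only one of many ways to obtain such a scalar encoding; the choice of mapping determines how well locality in the original space is preserved.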
Related papers
- Decreasing the Computing Time of Bayesian Optimization using Generalizable Memory Pruning [56.334116591082896]
Running BO on high-dimensional or massive data sets becomes intractable due to the time complexity of the surrogate model.
We show a wrapper of memory pruning and bounded optimization that can be used with any surrogate model and acquisition function.
All model implementations are run on the MIT Supercloud state-of-the-art computing hardware.
arXiv Detail & Related papers (2023-09-08T14:05:56Z)
- Non-Convex Bilevel Optimization with Time-Varying Objective Functions [57.299128109226025]
We propose an online bilevel optimization framework in which the functions can be time-varying and the agent continuously updates its decisions with online data.
Compared to existing algorithms, SOBOW is computationally efficient and does not need to know previous functions.
We show that SOBOW can achieve a sublinear bilevel local regret under mild conditions.
arXiv Detail & Related papers (2023-08-07T06:27:57Z)
- Neuromorphic Bayesian Optimization in Lava [0.0]
We introduce Lava Bayesian Optimization (LavaBO) as a contribution to the open-source Lava Software Framework.
LavaBO is the first step towards developing a BO system compatible with heterogeneous, fine-grained parallel, in-memory neuromorphic computing architectures.
We evaluate the algorithmic performance of the LavaBO system on multiple problems, such as training a state-of-the-art spiking neural network through back-propagation and evolutionary learning.
arXiv Detail & Related papers (2023-05-18T15:54:23Z)
- Scalable Bayesian optimization with high-dimensional outputs using randomized prior networks [3.0468934705223774]
We propose a deep learning framework for BO and sequential decision making based on bootstrapped ensembles of neural architectures with randomized priors.
We show that the proposed framework can approximate functional relationships between design variables and quantities of interest, even in cases where the latter take values in high-dimensional vector spaces or even infinite-dimensional function spaces.
We test the proposed framework against state-of-the-art methods for BO and demonstrate superior performance across several challenging tasks with high-dimensional outputs.
arXiv Detail & Related papers (2023-02-14T18:55:21Z)
- Fast Bayesian Optimization of Needle-in-a-Haystack Problems using Zooming Memory-Based Initialization [73.96101108943986]
A Needle-in-a-Haystack problem arises when there is an extreme imbalance of optimum conditions relative to the size of the dataset.
We present a Zooming Memory-Based Initialization algorithm that builds on conventional Bayesian optimization principles.
arXiv Detail & Related papers (2022-08-26T23:57:41Z)
- Pre-training helps Bayesian optimization too [49.28382118032923]
We seek an alternative practice for setting functional priors.
In particular, we consider the scenario where we have data from similar functions that allow us to pre-train a tighter distribution a priori.
Our results show that our method is able to locate good hyperparameters at least 3 times more efficiently than the best competing methods.
arXiv Detail & Related papers (2022-07-07T04:42:54Z)
- Fighting the curse of dimensionality: A machine learning approach to finding global optima [77.34726150561087]
This paper shows how to find global optima in structural optimization problems.
By exploiting certain cost functions, we either obtain the global optimum at best or obtain superior results at worst when compared to established optimization procedures.
arXiv Detail & Related papers (2021-10-28T09:50:29Z)
- Computationally Efficient High-Dimensional Bayesian Optimization via Variable Selection [0.5439020425818999]
We develop a new computationally efficient high-dimensional BO method that exploits variable selection.
Our method is able to automatically learn axis-aligned sub-spaces, i.e., subspaces containing the selected variables.
We empirically show the efficacy of our method on several synthetic and real problems.
arXiv Detail & Related papers (2021-09-20T01:55:43Z)
- High Dimensional Bayesian Optimization Assisted by Principal Component Analysis [4.030481609048958]
We introduce a novel PCA-assisted BO (PCA-BO) algorithm for high-dimensional numerical optimization problems.
We show that PCA-BO can effectively reduce the CPU time incurred on high-dimensional problems and maintain the convergence rate on problems with an adequate global structure (a generic sketch of this project-then-optimize recipe appears after this list).
arXiv Detail & Related papers (2020-07-02T07:03:27Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
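As referenced in the PCA-assisted entry above, here is a minimal sketch of the generic project-then-optimize recipe behind such methods. It is an assumption about the general idea (fit PCA on the evaluated designs, model the objective in the reduced space, maximize the acquisition there, and map the proposal back), not a reproduction of the PCA-BO algorithm itself; function names and problem sizes are illustrative.

```python
# Hedged sketch of a generic PCA-assisted BO step (not PCA-BO's exact scheme):
# reduce the evaluated designs with PCA, fit the GP surrogate in the reduced
# space, maximize expected improvement there, and map the proposal back.
import numpy as np
from scipy.stats import norm
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def pca_bo_step(X, y, n_components=2, n_candidates=4096, rng=None):
    """Propose the next point in the original space via a PCA-reduced GP."""
    if rng is None:
        rng = np.random.default_rng(0)
    pca = PCA(n_components=n_components).fit(X)
    Z = pca.transform(X)                              # reduced-space designs
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(Z, y)
    lo, hi = Z.min(axis=0), Z.max(axis=0)             # box spanned by the designs
    cand = rng.uniform(lo, hi, size=(n_candidates, n_components))
    mu, sd = gp.predict(cand, return_std=True)
    imp = y.min() - mu                                # expected improvement (minimization)
    ei = imp * norm.cdf(imp / (sd + 1e-12)) + sd * norm.pdf(imp / (sd + 1e-12))
    best = cand[np.argmax(ei)][None, :]
    return pca.inverse_transform(best)[0]             # proposal in the original space

# Usage on a hypothetical 20-dimensional quadratic:
rng = np.random.default_rng(1)
X = rng.uniform(size=(30, 20))
y = np.sum((X - 0.5) ** 2, axis=1)
print(pca_bo_step(X, y, rng=rng).shape)               # (20,)
```

In practice the back-projected proposal would also be clipped to the feasible box, since the PCA inverse map can leave it.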
This list is automatically generated from the titles and abstracts of the papers on this site.