Parameter Optimization using high-dimensional Bayesian Optimization
- URL: http://arxiv.org/abs/2010.03955v1
- Date: Mon, 5 Oct 2020 13:13:28 GMT
- Title: Parameter Optimization using high-dimensional Bayesian Optimization
- Authors: David Yenicelik
- Abstract summary: We focus on solutions to practical problems, such as tuning the parameters for an electron accelerator, or for even simpler tasks that can be run and optimized just in time with a standard laptop at hand.
Our main contributions are 1.) comparing how the log-likelihood affects the angle difference between the real projection matrix and the found matrix, 2.) an extensive analysis of current popular methods including their strengths and shortcomings, 3.) a short analysis on how dimensionality reduction techniques can be used for feature selection, and 4.) a novel algorithm called "BORING", which allows for a simple fallback mechanism if the matrix identification fails.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this thesis, I explore the possibilities of conducting Bayesian
optimization techniques in high dimensional domains. Although high dimensional
domains can be defined to be between hundreds and thousands of dimensions, we
will primarily focus on problem settings that occur between two and 20
dimensions. As such, we focus on solutions to practical problems, such as
tuning the parameters for an electron accelerator, or for even simpler tasks
that can be run and optimized just in time with a standard laptop at hand. Our
main contributions are 1.) comparing how the log-likelihood affects the
angle difference between the real projection matrix and the found matrix,
2.) an extensive analysis of current popular methods including strengths and
shortcomings, 3.) a short analysis on how dimensionality reduction techniques
can be used for feature selection, and 4.) a novel algorithm called "BORING",
which allows for a simple fallback mechanism if the matrix identification
fails, as well as taking into consideration "passive" subspaces which provide
small perturbations of the function at hand. The main features of BORING are
1.) the possibility to identify the subspace (unlike most other optimization
algorithms), and 2.) to provide a much lower penalty to identify the subspace
if identification fails, as optimization is still the primary goal.
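The thesis's setting, BO through a projection matrix with a fallback to the full space, can be illustrated with a toy sketch. The following is a minimal random-embedding BO loop in the spirit of projection-matrix methods such as REMBO, not the BORING algorithm itself; the toy objective, the embedding dimension, the GP hyperparameters, and the UCB acquisition are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

D, d = 20, 2                      # ambient and embedded dimensionality (assumed)
A = rng.standard_normal((D, d))   # random projection matrix (not learned here)

def f(x):
    # Toy objective with a low-dimensional effective subspace (assumption).
    return -((x[0] - 0.5) ** 2 + (x[3] + 0.2) ** 2)

def embed(y):
    # Map a low-dimensional point into the ambient box [-1, 1]^D.
    return np.clip(A @ y, -1.0, 1.0)

def gp_posterior(Y, fvals, cand, ls=0.5, noise=1e-6):
    # Zero-mean GP regression with an RBF kernel.
    def k(P, Q):
        sq = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * sq / ls ** 2)
    L = np.linalg.cholesky(k(Y, Y) + noise * np.eye(len(Y)))
    Ks = k(cand, Y)
    mu = Ks @ np.linalg.solve(L.T, np.linalg.solve(L, fvals))
    v = np.linalg.solve(L, Ks.T)
    return mu, np.sqrt(np.maximum(1.0 - (v ** 2).sum(0), 1e-12))

# BO loop: GP-UCB over random candidates drawn in the embedded space.
Y = rng.uniform(-1, 1, size=(3, d))
fvals = np.array([f(embed(y)) for y in Y])
for _ in range(20):
    cand = rng.uniform(-1, 1, size=(256, d))
    mu, sd = gp_posterior(Y, fvals, cand)
    y_next = cand[np.argmax(mu + 2.0 * sd)]   # UCB acquisition
    Y = np.vstack([Y, y_next])
    fvals = np.append(fvals, f(embed(y_next)))

print("best value found:", fvals.max())
```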
Related papers
- High dimensional Bayesian Optimization via Condensing-Expansion Projection [1.6355174910200032]
In high-dimensional settings, Bayesian optimization (BO) can be expensive and infeasible.
We introduce a novel random projection-based approach for high-dimensional BO that does not rely on the effective subspace assumption.
Experimental results demonstrate that both algorithms outperform existing random embedding-based algorithms in most cases.
arXiv Detail & Related papers (2024-08-09T04:47:38Z)
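A rough sketch of the condensing and expansion round trip described in this entry: one random projection condenses an ambient candidate into a low-dimensional representation, and a second projection expands a low-dimensional point back. The scaled Gaussian projections and the dimensions are assumptions; the paper's actual algorithm differs in its details.

```python
import numpy as np

rng = np.random.default_rng(1)
D, d = 1000, 10

# Two independent scaled-Gaussian projections (an assumption).
condense = rng.standard_normal((d, D)) / np.sqrt(D)   # R^D -> R^d
expand = rng.standard_normal((D, d)) / np.sqrt(d)     # R^d -> R^D

x = rng.uniform(-1, 1, size=D)      # a high-dimensional candidate
y = condense @ x                    # low-dimensional representation for cheap modelling
x_new = np.clip(expand @ y, -1, 1)  # a new candidate back in the original search box

print(y.shape, x_new.shape)         # (10,) (1000,)
```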
- l1-norm regularized l1-norm best-fit lines [3.0963566281269594]
We present a novel fitting procedure, utilizing simple ratios and sorting techniques.
The proposed algorithm demonstrates a worst-case time complexity of $O(n^2 m \log n)$ and, in certain instances, achieves global optimality for the sparse subspace.
arXiv Detail & Related papers (2024-02-26T16:30:58Z)
- Scalable Bayesian optimization with high-dimensional outputs using randomized prior networks [3.0468934705223774]
We propose a deep learning framework for BO and sequential decision making based on bootstrapped ensembles of neural architectures with randomized priors.
We show that the proposed framework can approximate functional relationships between design variables and quantities of interest, even in cases where the latter take values in high-dimensional vector spaces or even infinite-dimensional function spaces.
We test the proposed framework against state-of-the-art methods for BO and demonstrate superior performance across several challenging tasks with high-dimensional outputs.
arXiv Detail & Related papers (2023-02-14T18:55:21Z)
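A minimal sketch of the bootstrapped randomized-prior construction this entry builds on: each ensemble member predicts trainable(x) + beta * prior(x) with a fixed random prior function, and the spread across members acts as the uncertainty estimate. Here each "network" is ridge regression on random Fourier features, an assumption made to keep the sketch dependency-free; the paper uses neural architectures.

```python
import numpy as np

rng = np.random.default_rng(2)

def features(x, W, b):
    # Random Fourier feature map shared by the prior and trainable parts.
    return np.cos(x[:, None] * W + b)

X = np.linspace(-1, 1, 30)
y_obs = np.sin(3 * X) + 0.1 * rng.standard_normal(30)

n_members, n_feat, beta = 10, 50, 1.0
Xtest = np.linspace(-1.5, 1.5, 200)
preds = []
for _ in range(n_members):
    W = 5 * rng.standard_normal(n_feat)
    b = rng.uniform(0, 2 * np.pi, n_feat)
    w_prior = rng.standard_normal(n_feat) / np.sqrt(n_feat)  # fixed random prior
    idx = rng.integers(0, len(X), len(X))                    # bootstrap resample
    Phi = features(X[idx], W, b)
    # Fit the trainable part to the residual left by the fixed prior.
    resid = y_obs[idx] - beta * (Phi @ w_prior)
    w_fit = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(n_feat), Phi.T @ resid)
    Phi_t = features(Xtest, W, b)
    preds.append(Phi_t @ w_fit + beta * (Phi_t @ w_prior))

preds = np.array(preds)
# The spread across members is the uncertainty estimate driving BO decisions.
print(preds.mean(0).shape, preds.std(0).max())
```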
- Linear Embedding-based High-dimensional Batch Bayesian Optimization without Reconstruction Mappings [21.391136086094225]
We show that our method is applicable to batch optimization problems with thousands of dimensions without any computational difficulty.
We demonstrate the effectiveness of our method on high-dimensional benchmarks and a real-world function.
arXiv Detail & Related papers (2022-11-02T08:11:10Z)
- A Dimensionality Reduction Method for Finding Least Favorable Priors with a Focus on Bregman Divergence [108.28566246421742]
This paper develops a dimensionality reduction method that allows us to move the optimization to a finite-dimensional setting with an explicit bound on the dimension.
In order to make progress on the problem, we restrict ourselves to Bayesian risks induced by a relatively large class of loss functions, namely Bregman divergences.
arXiv Detail & Related papers (2022-02-23T16:22:28Z)
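For reference, the Bregman divergence induced by a strictly convex generator phi is D_phi(x, y) = phi(x) - phi(y) - &lt;grad phi(y), x - y&gt;. A quick numerical check (illustrative, not taken from the paper) that the generator phi(x) = ||x||^2 recovers the squared Euclidean distance:

```python
import numpy as np

def bregman(phi, grad_phi, x, y):
    return phi(x) - phi(y) - grad_phi(y) @ (x - y)

phi = lambda v: v @ v          # generator phi(x) = ||x||^2
grad_phi = lambda v: 2 * v

x, y = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(bregman(phi, grad_phi, x, y))   # 9.25
print(np.sum((x - y) ** 2))           # 9.25, identical
```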
- Sparse Quadratic Optimisation over the Stiefel Manifold with Application to Permutation Synchronisation [71.27989298860481]
We address the non-convex optimisation problem of finding a matrix on the Stiefel manifold that maximises a quadratic objective function.
We propose a simple yet effective sparsity-promoting algorithm for finding the dominant eigenspace matrix.
arXiv Detail & Related papers (2021-09-30T19:17:35Z)
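The baseline problem in the entry above, maximising trace(U^T M U) subject to U^T U = I, is solved by the dominant eigenspace of M. A plain orthogonal-iteration sketch of that baseline (the paper's sparsity-promoting algorithm is more involved):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 50, 3
B = rng.standard_normal((n, n))
M = B @ B.T                     # symmetric PSD test matrix (assumption)

U = np.linalg.qr(rng.standard_normal((n, p)))[0]   # random point on the Stiefel manifold
for _ in range(200):
    U = np.linalg.qr(M @ U)[0]  # power step + QR retraction back to the manifold

# The objective should match the sum of the top-p eigenvalues of M.
print(np.trace(U.T @ M @ U), np.linalg.eigvalsh(M)[-p:].sum())
```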
- Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
arXiv Detail & Related papers (2021-02-16T06:04:27Z)
- Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
arXiv Detail & Related papers (2020-06-10T15:00:09Z)
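A sketch-and-solve illustration of the idea in the entry above: a Gaussian random embedding S compresses the rows of an L2-regularized least-squares problem before solving. The fixed sketch size is an assumption; the paper adapts it to the effective dimension and also analyses the SRHT.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, m, lam = 5000, 50, 400, 1.0   # m is a fixed sketch size (assumption)
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

S = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian embedding
SA, Sb = S @ A, S @ b

# Sketched vs. exact solutions of min ||Ax - b||^2 + lam ||x||^2.
x_sk = np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)
x_exact = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)
print(np.linalg.norm(x_sk - x_exact) / np.linalg.norm(x_exact))
```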
- Projection & Probability-Driven Black-Box Attack [205.9923346080908]
Existing black-box attacks suffer from the need for excessive queries in the high-dimensional space.
We propose Projection & Probability-driven Black-box Attack (PPBA) to tackle this problem.
Our method requires at most 24% fewer queries with a higher attack success rate compared with state-of-the-art approaches.
arXiv Detail & Related papers (2020-05-08T03:37:50Z)
- Incorporating Expert Prior in Bayesian Optimisation via Space Warping [54.412024556499254]
In large search spaces, the algorithm passes through several low-function-value regions before reaching the optimum of the function.
One approach to mitigating this cold-start phase is to use prior knowledge that can accelerate the optimisation.
In this paper, we represent the prior knowledge about the function optimum through a prior distribution.
The prior distribution is then used to warp the search space in such a way that it expands around the high-probability region of the function optimum and shrinks around the low-probability region.
arXiv Detail & Related papers (2020-03-27T06:18:49Z)
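One simple way to realise the warping described in the entry above is to search in u in [0, 1] and map through the inverse CDF of the prior over the optimum's location, so that uniform exploration in u concentrates where the prior puts mass. The 1-d setting and Gaussian prior are assumptions for illustration; the paper's warping construction is more general.

```python
import numpy as np
from scipy.stats import norm

prior = norm(loc=0.7, scale=0.1)    # expert belief: optimum near 0.7 (assumption)

def warp(u):
    # Inverse-CDF map from [0, 1] into the search space; the image is
    # dense near the prior mode and sparse in low-probability regions.
    return prior.ppf(np.clip(u, 1e-6, 1 - 1e-6))

u = np.linspace(0.01, 0.99, 9)      # uniform grid in warped coordinates
print(np.round(warp(u), 3))         # points cluster around 0.7
```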
- Bayesian Optimization for Policy Search in High-Dimensional Systems via Automatic Domain Selection [1.1240669509034296]
We propose to leverage results from optimal control to scale BO to higher dimensional control tasks.
We show how we can make use of a learned dynamics model in combination with a model-based controller to simplify the BO problem.
We present an experimental evaluation on real hardware, as well as simulated tasks including a 48-dimensional policy for a quadcopter.
arXiv Detail & Related papers (2020-01-21T09:04:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information (including all content) and is not responsible for any consequences of its use.