Neighbor Regularized Bayesian Optimization for Hyperparameter
Optimization
- URL: http://arxiv.org/abs/2210.03481v1
- Date: Fri, 7 Oct 2022 12:08:01 GMT
- Title: Neighbor Regularized Bayesian Optimization for Hyperparameter
Optimization
- Authors: Lei Cui, Yangguang Li, Xin Lu, Dong An, Fenggang Liu
- Abstract summary: We propose a novel BO algorithm called Neighbor Regularized Bayesian Optimization (NRBO) to solve the problem.
We first propose a neighbor-based regularization to smooth each sample observation, which could reduce the observation noise efficiently without any extra training cost.
We conduct experiments on the Bayesmark benchmark and on important computer vision benchmarks such as ImageNet and COCO.
- Score: 12.544312247050236
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian Optimization (BO) is a common approach for searching for optimal
hyperparameters based on sample observations of a machine learning model.
Existing BO algorithms can converge slowly, or even collapse, when potential
observation noise misdirects the optimization. In this paper, we propose a
novel BO algorithm called Neighbor Regularized Bayesian Optimization (NRBO) to
solve this problem. We first propose a neighbor-based regularization to smooth
each sample observation, which reduces the observation noise efficiently
without any extra training cost. Since the neighbor regularization depends
heavily on the sample density of the neighborhood, we further design a
density-based acquisition function to adjust the acquisition reward and obtain
more stable statistics. In addition, we design an adjustment mechanism to ensure
the framework maintains a reasonable regularization strength and density reward
conditioned on the remaining computation budget. We conduct experiments on the
Bayesmark benchmark and on important computer vision benchmarks such as ImageNet
and COCO. Extensive experiments demonstrate the effectiveness of NRBO, which
consistently outperforms other state-of-the-art methods.
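The abstract's two core mechanisms (neighbor-based smoothing of noisy observations and a density-adjusted acquisition reward) can be sketched roughly as follows. This is a minimal, illustrative Python sketch based only on the abstract, not the authors' implementation: the neighborhood radius, the uniform averaging weights, the expected-improvement acquisition, and the density bonus `beta` are all placeholder assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def neighbor_smooth(X, y, radius=0.1):
    # Replace each observation by the mean of observations whose hyperparameter
    # vectors lie within `radius`; a stand-in for the paper's neighbor-based
    # regularization (radius and uniform weights are assumptions).
    y_smooth = np.empty_like(y, dtype=float)
    for i, x in enumerate(X):
        mask = np.linalg.norm(X - x, axis=1) <= radius
        y_smooth[i] = y[mask].mean()
    return y_smooth


def density_adjusted_ei(gp, X_obs, X_cand, y_best, radius=0.1, beta=0.5):
    # Expected improvement (for minimization) scaled by local sample density,
    # so the acquisition reward is larger where the smoothed statistics rest on
    # more samples; an illustrative choice, not the authors' exact formulation.
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (y_best - mu) / sigma
    ei = (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    density = np.array([(np.linalg.norm(X_obs - x, axis=1) <= radius).mean()
                        for x in X_cand])
    return ei * (1.0 + beta * density)


# Toy usage: smooth noisy "validation losses", fit a GP, pick the next candidate.
rng = np.random.default_rng(0)
X = rng.uniform(size=(20, 2))                          # sampled hyperparameters
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=20)    # noisy observations
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X, neighbor_smooth(X, y))
X_cand = rng.uniform(size=(500, 2))
x_next = X_cand[np.argmax(density_adjusted_ei(gp, X, X_cand, y.min()))]
print("next hyperparameters to evaluate:", x_next)
```

The abstract's third component, the adjustment mechanism that scales the regularization strength and density reward by the remaining computation budget, is omitted here since the abstract does not specify its form.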
Related papers
- Bayesian Optimization for Hyperparameters Tuning in Neural Networks [0.0]
Bayesian Optimization is a derivative-free global optimization method suitable for black-box functions with continuous inputs and limited evaluation budgets.
This study investigates the application of BO to the hyperparameter tuning of neural networks, specifically targeting the enhancement of Convolutional Neural Networks (CNNs).
Experimental outcomes reveal that BO effectively balances exploration and exploitation, converging rapidly towards optimal settings for CNN architectures.
This approach underlines the potential of BO in automating neural network tuning, contributing to improved accuracy and computational efficiency in machine learning pipelines.
arXiv Detail & Related papers (2024-10-29T09:23:24Z) - Faster WIND: Accelerating Iterative Best-of-$N$ Distillation for LLM Alignment [81.84950252537618]
This paper reveals a unified game-theoretic connection between iterative BOND and self-play alignment.
We establish a novel framework, WIN rate Dominance (WIND), with a series of efficient algorithms for regularized win rate dominance optimization.
arXiv Detail & Related papers (2024-10-28T04:47:39Z) - Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO).
Compared to the classic GP-BO method, our PoPBO has lower costs and better robustness to noise, which is verified by abundant experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z) - Enhancing Gaussian Process Surrogates for Optimization and Posterior Approximation via Random Exploration [2.984929040246293]
We propose novel noise-free Bayesian optimization strategies that rely on a random exploration step to enhance the accuracy of Gaussian process surrogate models.
The new algorithms retain the ease of implementation of the classical GP-UCB, while the additional exploration step facilitates their convergence (a minimal sketch of the GP-UCB baseline appears after this list).
arXiv Detail & Related papers (2024-01-30T14:16:06Z) - LABCAT: Locally adaptive Bayesian optimization using principal-component-aligned trust regions [0.0]
We propose the LABCAT algorithm, which extends trust-region-based BO.
We show that the algorithm outperforms several state-of-the-art BO and other black-box optimization algorithms.
arXiv Detail & Related papers (2023-11-19T13:56:24Z) - Learning Regions of Interest for Bayesian Optimization with Adaptive
Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
arXiv Detail & Related papers (2023-07-25T09:45:47Z) - Model-based Causal Bayesian Optimization [78.120734120667]
We propose model-based causal Bayesian optimization (MCBO).
MCBO learns a full system model instead of only modeling intervention-reward pairs.
Unlike in standard Bayesian optimization, our acquisition function cannot be evaluated in closed form.
arXiv Detail & Related papers (2022-11-18T14:28:21Z) - Exploring the Algorithm-Dependent Generalization of AUPRC Optimization
with List Stability [107.65337427333064]
Optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem for machine learning.
In this work, we present the first trial in the algorithm-dependent generalization of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
arXiv Detail & Related papers (2022-09-27T09:06:37Z) - Tree ensemble kernels for Bayesian optimization with known constraints
over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z) - Sparse Bayesian Optimization [16.867375370457438]
We present several regularization-based approaches that allow us to discover sparse and more interpretable configurations.
We propose a novel differentiable relaxation based on homotopy continuation that makes it possible to target sparsity.
We show that we are able to efficiently optimize for sparsity.
arXiv Detail & Related papers (2022-03-03T18:25:33Z) - BOSH: Bayesian Optimization by Sampling Hierarchically [10.10241176664951]
We propose a novel BO routine pairing a hierarchical Gaussian process with an information-theoretic framework to generate a growing pool of realizations.
We demonstrate that BOSH provides more efficient and higher-precision optimization than standard BO across synthetic benchmarks, simulation optimization, reinforcement learning and hyperparameter tuning tasks.
arXiv Detail & Related papers (2020-07-02T07:35:49Z)
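Several of the entries above (e.g. the random-exploration and PoPBO papers) position themselves against the classical GP-UCB baseline. For reference, a minimal sketch of that baseline follows; the 1-D objective, bounds, kernel, and beta schedule are assumptions chosen only to make the loop self-contained.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF


def objective(x):
    return -np.sin(5.0 * x) - 0.5 * x  # placeholder black-box function


rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(3, 1))              # initial design
y = np.array([objective(x[0]) for x in X])
grid = np.linspace(0.0, 1.0, 200).reshape(-1, 1)    # candidate set

for t in range(1, 21):
    # Fit the GP surrogate, then pick the maximizer of the upper confidence bound.
    gp = GaussianProcessRegressor(kernel=RBF(0.1), normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    beta = 2.0 * np.log(t ** 2 * np.pi ** 2 / 0.3)   # a common UCB schedule
    x_next = grid[np.argmax(mu + np.sqrt(beta) * sigma)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))

print("best x:", float(X[np.argmax(y), 0]), "best value:", float(y.max()))
```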
This list is automatically generated from the titles and abstracts of the papers on this site.