Combined Global and Local Search for Optimization with Gaussian Process Models
- URL: http://arxiv.org/abs/2107.03217v1
- Date: Wed, 7 Jul 2021 13:40:37 GMT
- Authors: Qun Meng, Songhao Wang, Szu Hui Ng
- Abstract summary: We introduce the Additive Global and Local GP (AGLGP) model in the optimization framework.
AGLGP is rooted in the inducing-points-based GP sparse approximations and is combined with independent local models in different regions.
It first divides the whole design space into disjoint local regions and identifies a promising region with the global model.
Next, a local model in the selected region is fit to guide detailed search within this region.
The algorithm then switches back to the global step when a good local solution is found.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Gaussian process (GP) model based optimization is widely applied in
simulation and machine learning. In general, it first estimates a GP model
based on a few observations from the true response and then employs this model
to guide the search, aiming to quickly locate the global optimum. Despite its
successful applications, it has several limitations that may hinder its broader
usage. First, building an accurate GP model can be difficult and
computationally expensive, especially when the response function is multi-modal
or varies significantly over the design space. Second, even with an appropriate
model, the search process can be trapped in suboptimal regions before moving to
the global optimum due to the excessive effort spent around the current best
solution. In this work, we adopt the Additive Global and Local GP (AGLGP) model
in the optimization framework. The model is rooted in the inducing-points-based
GP sparse approximations and is combined with independent local models in
different regions. With these properties, the AGLGP model is suitable for
multi-modal responses with relatively large data sizes. Based on this AGLGP
model, we propose a Combined Global and Local search for Optimization (CGLO)
algorithm. It first divides the whole design space into disjoint local regions
and identifies a promising region with the global model. Next, a local model in
the selected region is fit to guide detailed search within this region. The
algorithm then switches back to the global step when a good local solution is
found. The global and local natures of CGLO enable it to enjoy the benefits of
both global and local search to efficiently locate the global optimum.
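The alternating loop the abstract describes can be illustrated with a minimal one-dimensional sketch. All names here (`fit_gp`, `ei`, the test objective `f`, the region grid) are illustrative assumptions, and a plain full GP stands in for the paper's inducing-points-based AGLGP model; this is a sketch of the global/local switching idea, not the authors' implementation.

```python
# Sketch of a combined global/local GP search loop (assumed names, not the
# paper's code): a global GP scores disjoint regions by expected improvement,
# then a local GP fitted only to points in the chosen region picks the next
# evaluation, after which control returns to the global step.
from math import erf
import numpy as np

def f(x):                                     # multi-modal toy objective (assumption)
    return np.sin(3 * x) + 0.5 * np.cos(9 * x)

def rbf(a, b, ls=0.2):                        # squared-exponential kernel
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def fit_gp(X, y):
    """Fit a zero-mean GP; return a predictive mean/std function."""
    Kinv = np.linalg.inv(rbf(X, X) + 1e-6 * np.eye(len(X)))
    def predict(Xs):
        Ks = rbf(Xs, X)
        mu = Ks @ Kinv @ y
        var = 1.0 - np.sum((Ks @ Kinv) * Ks, axis=1)
        return mu, np.sqrt(np.maximum(var, 1e-12))
    return predict

def ei(mu, sd, best):
    """Expected improvement for minimisation."""
    z = (best - mu) / sd
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))
    phi = np.exp(-0.5 * z * z) / np.sqrt(2.0 * np.pi)
    return (best - mu) * Phi + sd * phi

rng = np.random.default_rng(0)
regions = [(0.0, 0.5), (0.5, 1.0), (1.0, 1.5), (1.5, 2.0)]  # disjoint local regions
X = rng.uniform(0.0, 2.0, 8)
y = f(X)

for _ in range(5):
    # Global step: one GP on all data scores each region by its best EI.
    g, best = fit_gp(X, y), y.min()
    scores = [ei(*g(np.linspace(lo, hi, 50)), best).max() for lo, hi in regions]
    lo, hi = regions[int(np.argmax(scores))]

    # Local step: a GP on points inside the chosen region guides the search there.
    mask = (X >= lo) & (X <= hi)
    loc = fit_gp(X[mask], y[mask]) if mask.sum() >= 2 else g
    grid = np.linspace(lo, hi, 200)
    x_new = grid[int(np.argmax(ei(*loc(grid), best)))]
    X, y = np.append(X, x_new), np.append(y, f(x_new))
    # A good local solution is recorded; the loop returns to the global step.

print("best x:", X[np.argmin(y)], "best f:", y.min())
```

The region list here is a fixed partition chosen for illustration; the paper's algorithm constructs its disjoint regions as part of the AGLGP model rather than from a hand-written grid.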
Related papers
- Gaussian Process Thompson Sampling via Rootfinding [2.94944680995069]
Thompson sampling (TS) is a simple, effective policy in Bayesian decision making.
In continuous optimization, the posterior of the objective function is often a Gaussian process (GP), whose sample paths have numerous local optima.
We introduce an efficient global optimization strategy for GP-TS that carefully selects starting points for gradient-based multi-starts.
arXiv Detail & Related papers (2024-10-10T16:06:45Z)
- Localized Zeroth-Order Prompt Optimization [54.964765668688806]
We propose a novel algorithm, namely localized zeroth-order prompt optimization (ZOPO)
ZOPO incorporates a Neural Tangent Kernel-based derived Gaussian process into standard zeroth-order optimization for an efficient search of well-performing local optima in prompt optimization.
Remarkably, ZOPO outperforms existing baselines in terms of both the optimization performance and the query efficiency.
arXiv Detail & Related papers (2024-03-05T14:18:15Z)
- Learning Regions of Interest for Bayesian Optimization with Adaptive Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
arXiv Detail & Related papers (2023-07-25T09:45:47Z)
- A Global-Local Approximation Framework for Large-Scale Gaussian Process Modeling [0.0]
We propose a novel framework for large-scale Gaussian process (GP) modeling.
We employ a combined global-local approach in building the approximation.
The performance of our framework, which we refer to as TwinGP, is on par with, or better than, state-of-the-art GP modeling methods.
arXiv Detail & Related papers (2023-05-17T12:19:59Z)
- Federated and Generalized Person Re-identification through Domain and Feature Hallucinating [88.77196261300699]
We study the problem of federated domain generalization (FedDG) for person re-identification (re-ID)
We propose a novel method, called "Domain and Feature Hallucinating (DFH)", to produce diverse features for learning generalized local and global models.
Our method achieves the state-of-the-art performance for FedDG on four large-scale re-ID benchmarks.
arXiv Detail & Related papers (2022-03-05T09:15:13Z)
- Global Aggregation then Local Distribution for Scene Parsing [99.1095068574454]
We show that our approach can be modularized as an end-to-end trainable block and easily plugged into existing semantic segmentation networks.
Our approach sets a new state of the art on major semantic segmentation benchmarks, including Cityscapes, ADE20K, Pascal Context, CamVid, and COCO-Stuff.
arXiv Detail & Related papers (2021-07-28T03:46:57Z)
- Learning Space Partitions for Path Planning [54.475949279050596]
PlaLaM outperforms existing path planning methods in 2D navigation tasks, especially in the presence of difficult-to-escape local optima.
These gains transfer to highly multimodal real-world tasks, where we outperform strong baselines in compiler phase ordering by up to 245% and in molecular design by up to 0.4 on properties on a 0-1 scale.
arXiv Detail & Related papers (2021-06-19T18:06:11Z)
- Principles and Algorithms for Forecasting Groups of Time Series: Locality and Globality [0.5076419064097732]
We formalize the setting of forecasting a set of time series with local and global methods.
Global models can succeed in a wider range of problems than previously thought.
Purposely naive algorithms derived from these principles result in superior accuracy.
arXiv Detail & Related papers (2020-08-02T10:22:05Z)
- Localized active learning of Gaussian process state space models [63.97366815968177]
A globally accurate model is not required to achieve good performance in many common control applications.
We propose an active learning strategy for Gaussian process state space models that aims to obtain an accurate model on a bounded subset of the state-action space.
By employing model predictive control, the proposed technique integrates information collected during exploration and adaptively improves its exploration strategy.
arXiv Detail & Related papers (2020-05-04T05:35:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.