PG-LBO: Enhancing High-Dimensional Bayesian Optimization with
Pseudo-Label and Gaussian Process Guidance
- URL: http://arxiv.org/abs/2312.16983v1
- Date: Thu, 28 Dec 2023 11:57:58 GMT
- Title: PG-LBO: Enhancing High-Dimensional Bayesian Optimization with
Pseudo-Label and Gaussian Process Guidance
- Authors: Taicai Chen, Yue Duan, Dong Li, Lei Qi, Yinghuan Shi, Yang Gao
- Abstract summary: Current mainstream methods overlook the potential of utilizing a pool of unlabeled data to construct the latent space.
We propose a novel method to effectively utilize unlabeled data with the guidance of labeled data.
Our proposed method outperforms existing VAE-BO algorithms in various optimization scenarios.
- Score: 31.585328335396607
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Variational Autoencoder based Bayesian Optimization (VAE-BO) has demonstrated
its excellent performance in addressing high-dimensional structured
optimization problems. However, current mainstream methods overlook the
potential of utilizing a pool of unlabeled data to construct the latent space,
while only concentrating on designing sophisticated models to leverage the
labeled data. Despite their effective usage of labeled data, these methods
often require extra network structures and additional procedures, resulting in
computational inefficiency. To address this issue, we propose a novel method to
effectively utilize unlabeled data with the guidance of labeled data.
Specifically, we tailor the pseudo-labeling technique from semi-supervised
learning to explicitly reveal the relative magnitudes of optimization objective
values hidden within the unlabeled data. Based on this technique, we assign
appropriate training weights to unlabeled data to enhance the construction of a
discriminative latent space. Furthermore, we treat the VAE encoder and the
Gaussian Process (GP) in Bayesian optimization as a unified deep kernel
learning process, allowing the direct utilization of labeled data, which we
term Gaussian Process guidance. This directly and effectively integrates the
goal of improving GP accuracy into the VAE training, thereby guiding the
construction of the latent space. Extensive experiments demonstrate that
our proposed method outperforms existing VAE-BO algorithms in various
optimization scenarios. Our code will be published at
https://github.com/TaicaiChen/PG-LBO.
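To make the two mechanisms described in the abstract more concrete, below is a minimal PyTorch sketch; it is not the authors' implementation (see the repository above for the real code), and the Encoder/Decoder modules, the pseudo-label confidence weights `w_pseudo`, and the plain RBF-kernel GP are all illustrative assumptions. The weighted ELBO term stands in for the pseudo-label weighting of unlabeled data, and back-propagating the GP marginal likelihood on the labeled latent codes plays the role of the Gaussian Process guidance (deep kernel learning) term.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, x_dim, z_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, z_dim)
        self.logvar = nn.Linear(128, z_dim)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):
    def __init__(self, z_dim, x_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, x_dim))

    def forward(self, z):
        return self.net(z)

def weighted_elbo(enc, dec, x, weight=None):
    # Standard VAE loss; `weight` down-weights unlabeled points according to
    # how much we trust their pseudo-labels (placeholder weighting scheme).
    mu, logvar = enc(x)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
    recon = ((dec(z) - x) ** 2).sum(dim=-1)
    kl = 0.5 * (mu ** 2 + logvar.exp() - logvar - 1).sum(dim=-1)
    per_point = recon + kl
    if weight is not None:
        per_point = weight * per_point
    return per_point.mean()

def gp_nll(z, y, lengthscale=1.0, noise=1e-2):
    # Negative log marginal likelihood of an RBF-kernel GP on the latent codes
    # of the labeled points; back-propagating through it shapes the encoder
    # (the "Gaussian Process guidance" term in the sketch).
    d2 = torch.cdist(z, z) ** 2
    K = torch.exp(-0.5 * d2 / lengthscale ** 2) + noise * torch.eye(z.shape[0])
    L = torch.linalg.cholesky(K)
    alpha = torch.cholesky_solve(y.unsqueeze(-1), L)
    return 0.5 * (y.unsqueeze(-1) * alpha).sum() + torch.log(torch.diagonal(L)).sum()

# One joint training step on toy data (shapes, weights, and trade-offs are placeholders).
x_lab, y_lab = torch.randn(16, 32), torch.randn(16)   # labeled pool
x_unlab = torch.randn(64, 32)                          # unlabeled pool
w_pseudo = torch.rand(64)                              # pseudo-label confidence weights
enc, dec = Encoder(32, 8), Decoder(8, 32)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

loss = (weighted_elbo(enc, dec, x_lab)
        + weighted_elbo(enc, dec, x_unlab, weight=w_pseudo)
        + gp_nll(enc(x_lab)[0], y_lab))
opt.zero_grad()
loss.backward()
opt.step()
```

In the actual method, the weights would come from the confidence of the pseudo-labeled objective values and the GP term would be trained jointly within the Bayesian optimization loop rather than on fixed toy tensors.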
Related papers
- Approximation-Aware Bayesian Optimization [34.56666383247348]
High-dimensional Bayesian optimization (BO) tasks often require 10,000 function evaluations before obtaining meaningful results.
We modify sparse variational Gaussian processes (SVGPs) to better align with the goals of BO.
Using the framework of utility-calibrated variational inference, we unify GP approximation and data acquisition into a joint optimization problem.
arXiv Detail & Related papers (2024-06-06T17:55:02Z)
- Functional Graphical Models: Structure Enables Offline Data-Driven Optimization [111.28605744661638]
We show how structure can enable sample-efficient data-driven optimization.
We also present a data-driven optimization algorithm that infers the FGM structure itself.
arXiv Detail & Related papers (2024-01-08T22:33:14Z)
- CARE: Confidence-rich Autonomous Robot Exploration using Bayesian Kernel Inference and Optimization [12.32946442160165]
We consider improving the efficiency of information-based autonomous robot exploration in unknown and complex environments.
We propose a novel lightweight information gain inference method based on Bayesian kernel inference and optimization (BKIO).
We show the desired efficiency of our proposed methods without losing exploration performance in different unstructured, cluttered environments.
arXiv Detail & Related papers (2023-09-11T02:30:06Z)
- Improved Distribution Matching for Dataset Condensation [91.55972945798531]
We propose a novel dataset condensation method based on distribution matching.
Our simple yet effective method outperforms most previous optimization-oriented methods with much fewer computational resources.
arXiv Detail & Related papers (2023-07-19T04:07:33Z)
- Prior-mean-assisted Bayesian optimization application on FRIB Front-End tunning [61.78406085010957]
We exploit a neural network model trained over historical data as a prior mean of BO for FRIB Front-End tuning.
arXiv Detail & Related papers (2022-11-11T18:34:15Z)
- Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations [76.82124752950148]
We develop a convenient gradient-based method for selecting the data augmentation.
We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective.
arXiv Detail & Related papers (2022-02-22T02:51:11Z)
- High-Dimensional Bayesian Optimisation with Variational Autoencoders and Deep Metric Learning [119.91679702854499]
We introduce a method based on deep metric learning to perform Bayesian optimisation over high-dimensional, structured input spaces.
We achieve such an inductive bias using just 1% of the available labelled data.
As an empirical contribution, we present state-of-the-art results on real-world high-dimensional black-box optimisation problems.
arXiv Detail & Related papers (2021-06-07T13:35:47Z)
- Incremental Semi-Supervised Learning Through Optimal Transport [0.0]
We propose a novel approach for transductive semi-supervised learning, using a complete bipartite edge-weighted graph.
The proposed approach uses the regularized optimal transport between empirical measures defined on labelled and unlabelled data points in order to obtain an affinity matrix from the optimal transport plan.
arXiv Detail & Related papers (2021-03-22T15:31:53Z)
- Semi-Supervised Learning with Meta-Gradient [123.26748223837802]
We propose a simple yet effective meta-learning algorithm in semi-supervised learning.
We find that the proposed algorithm performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2020-07-08T08:48:56Z)