High-Dimensional Bayesian Optimisation with Variational Autoencoders and
Deep Metric Learning
- URL: http://arxiv.org/abs/2106.03609v1
- Date: Mon, 7 Jun 2021 13:35:47 GMT
- Title: High-Dimensional Bayesian Optimisation with Variational Autoencoders and
Deep Metric Learning
- Authors: Antoine Grosnit, Rasul Tutunov, Alexandre Max Maraval, Ryan-Rhys
Griffiths, Alexander I. Cowen-Rivers, Lin Yang, Lin Zhu, Wenlong Lyu, Zhitang
Chen, Jun Wang, Jan Peters, Haitham Bou-Ammar
- Abstract summary: We introduce a method based on deep metric learning to perform Bayesian optimisation over high-dimensional, structured input spaces.
We achieve such an inductive bias using just 1% of the available labelled data.
As an empirical contribution, we present state-of-the-art results on real-world high-dimensional black-box optimisation problems.
- Score: 119.91679702854499
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a method based on deep metric learning to perform Bayesian
optimisation over high-dimensional, structured input spaces using variational
autoencoders (VAEs). By extending ideas from supervised deep metric learning,
we address a longstanding problem in high-dimensional VAE Bayesian
optimisation, namely how to enforce a discriminative latent space as an
inductive bias. Importantly, we achieve such an inductive bias using just 1% of
the available labelled data relative to previous work, highlighting the sample
efficiency of our approach. As a theoretical contribution, we present a proof
of vanishing regret for our method. As an empirical contribution, we present
state-of-the-art results on real-world high-dimensional black-box optimisation
problems including property-guided molecule generation. It is our hope that the
results presented in this paper can act as a guiding principle for realising
effective high-dimensional Bayesian optimisation.
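The core recipe is to augment VAE training with a metric-learning objective so that latent codes become discriminative with respect to black-box objective values. Below is a minimal PyTorch sketch of one such term, a triplet loss whose positives and negatives are chosen by score similarity; the thresholding rule, margin, and batch-hard mining here are illustrative assumptions, not the authors' exact formulation:

```python
import torch
import torch.nn.functional as F

def score_triplet_loss(z, y, margin=1.0, tau=0.1):
    """Triplet loss over VAE latent codes z, supervised by black-box scores y.

    Pairs whose scores differ by less than `tau` count as 'similar'
    (positives); the rest are negatives. The threshold is an assumption.
    """
    dist = torch.cdist(z, z)                      # pairwise latent distances
    score_gap = (y[:, None] - y[None, :]).abs()   # pairwise score differences
    pos = score_gap < tau                          # similar-score mask
    neg = ~pos
    pos.fill_diagonal_(False)                      # drop self-pairs

    losses = []
    for i in range(z.size(0)):
        if pos[i].any() and neg[i].any():
            d_pos = dist[i][pos[i]].max()          # hardest (farthest) positive
            d_neg = dist[i][neg[i]].min()          # hardest (closest) negative
            losses.append(F.relu(d_pos - d_neg + margin))
    # Zero loss (still attached to the graph) if no valid triplet exists.
    return torch.stack(losses).mean() if losses else z.sum() * 0.0

# Usage: add this term to the VAE objective when fine-tuning the encoder
# on the small labelled subset (e.g. the 1% mentioned above).
z = torch.randn(32, 8, requires_grad=True)  # latent codes from the encoder
y = torch.rand(32)                          # black-box objective values
loss = score_triplet_loss(z, y)
loss.backward()
```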
Related papers
- Enhanced Bayesian Optimization via Preferential Modeling of Abstract Properties [49.351577714596544]
We propose a human-AI collaborative Bayesian framework to incorporate expert preferences about unmeasured abstract properties into surrogate modeling.
We provide an efficient strategy that can also handle any incorrect/misleading expert bias in preferential judgments.
arXiv Detail & Related papers (2024-02-27T09:23:13Z)
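The entry above models expert judgments over abstract properties through pairwise preferences. A minimal numpy sketch of the Bradley-Terry preference likelihood such a surrogate could build on; the utility parameterisation and function names are assumptions, not the paper's construction:

```python
import numpy as np

def preference_log_likelihood(u, prefs):
    """Log-likelihood of expert preferences under latent utilities u.

    prefs is a list of (i, j) pairs meaning 'item i preferred over item j';
    P(i > j) = sigmoid(u[i] - u[j])  (Bradley-Terry model).
    """
    ll = 0.0
    for i, j in prefs:
        ll += -np.log1p(np.exp(-(u[i] - u[j])))  # log sigmoid(u_i - u_j)
    return ll

# Usage: plug into a MAP / Laplace fit of utilities over candidate designs.
u = np.array([0.3, 1.2, -0.5])
prefs = [(1, 0), (1, 2), (0, 2)]
print(preference_log_likelihood(u, prefs))
```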
- PG-LBO: Enhancing High-Dimensional Bayesian Optimization with Pseudo-Label and Gaussian Process Guidance [31.585328335396607]
Current mainstream methods overlook the potential of utilizing a pool of unlabeled data to construct the latent space.
We propose a novel method to effectively utilize unlabeled data with the guidance of labeled data.
Our proposed method outperforms existing VAE-BO algorithms in various optimization scenarios.
arXiv Detail & Related papers (2023-12-28T11:57:58Z)
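The PG-LBO summary points to pseudo-labelling unlabelled latent points under Gaussian-process guidance. A minimal scikit-learn sketch of that step; the confidence filter and its threshold are illustrative assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def pseudo_label(z_labeled, y_labeled, z_unlabeled, max_std=0.2):
    """Assign GP-posterior pseudo-labels to unlabelled latent points,
    keeping only predictions the GP is confident about."""
    gp = GaussianProcessRegressor().fit(z_labeled, y_labeled)
    mean, std = gp.predict(z_unlabeled, return_std=True)
    keep = std < max_std                     # confidence filter (assumption)
    return z_unlabeled[keep], mean[keep]

# Usage: the pseudo-labelled pairs join the labelled pool when
# training the VAE latent space.
rng = np.random.default_rng(0)
zl, yl = rng.normal(size=(20, 4)), rng.normal(size=20)
zu = rng.normal(size=(100, 4))
z_new, y_new = pseudo_label(zl, yl, zu)
print(len(z_new), "pseudo-labelled points")
```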
- Scalable Bayesian Meta-Learning through Generalized Implicit Gradients [64.21628447579772]
The implicit Bayesian meta-learning (iBaML) method not only broadens the scope of learnable priors, but also quantifies the associated uncertainty.
Analytical error bounds are established to demonstrate the precision and efficiency of the generalized implicit gradient over the explicit one.
arXiv Detail & Related papers (2023-03-31T02:10:30Z)
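The iBaML entry rests on implicit gradients: the meta-gradient comes from the implicit function theorem by solving a linear system rather than unrolling inner-loop optimisation. A tiny numpy sketch for a proximally regularised quadratic inner task; the quadratic objective and closed-form solve are illustrative assumptions:

```python
import numpy as np

# Illustrative inner task: f(w) = 0.5 w^T A w - b^T w, so the Hessian is A
# and the inner optimum solves (A + lam*I) w = b + lam*theta.
A = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, -1.0])
lam = 1.0

def inner_solution(theta):
    return np.linalg.solve(A + lam * np.eye(2), b + lam * theta)

def implicit_meta_gradient(theta, outer_grad_w):
    """Implicit-function-theorem meta-gradient:
    dw*/dtheta = lam * (H + lam I)^{-1}, hence
    dL/dtheta = lam * (H + lam I)^{-1} @ dL/dw*  (H symmetric here)."""
    return lam * np.linalg.solve(A + lam * np.eye(2), outer_grad_w)

theta = np.zeros(2)
w_star = inner_solution(theta)
# e.g. outer loss L = 0.5 ||w* - 1||^2, so dL/dw* = w* - 1
print(implicit_meta_gradient(theta, outer_grad_w=w_star - 1.0))
```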
- Making Linear MDPs Practical via Contrastive Representation Learning [101.75885788118131]
It is common to address the curse of dimensionality in Markov decision processes (MDPs) by exploiting low-rank representations.
We consider an alternative definition of linear MDPs that automatically ensures normalization while allowing efficient representation learning.
We demonstrate superior performance over existing state-of-the-art model-based and model-free algorithms on several benchmarks.
arXiv Detail & Related papers (2022-07-14T18:18:02Z)
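A minimal PyTorch sketch of the kind of contrastive representation learning the linear-MDP entry refers to: the observed next state acts as the positive for its (state, action) pair, and other next states in the batch serve as negatives (InfoNCE). The two linear encoders are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

phi = nn.Linear(6, 16)   # encodes (state, action) pairs -> feature
mu = nn.Linear(4, 16)    # encodes next states           -> feature

def infonce_loss(sa, s_next):
    """InfoNCE: the true next state is the positive; other rows are negatives."""
    logits = phi(sa) @ mu(s_next).T            # (B, B) similarity matrix
    labels = torch.arange(sa.size(0))          # positives on the diagonal
    return F.cross_entropy(logits, labels)

sa = torch.randn(32, 6)       # concatenated (state, action)
s_next = torch.randn(32, 4)
loss = infonce_loss(sa, s_next)
loss.backward()
```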
- Incremental Semi-Supervised Learning Through Optimal Transport [0.0]
We propose a novel approach for transductive semi-supervised learning, using a complete bipartite edge-weighted graph.
The proposed approach uses the regularized optimal transport between empirical measures defined on labelled and unlabelled data points in order to obtain an affinity matrix from the optimal transport plan.
arXiv Detail & Related papers (2021-03-22T15:31:53Z)
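A minimal sketch of the entry's core step using the POT library: compute an entropically regularised transport plan between the labelled and unlabelled empirical measures and use it as an affinity matrix for label propagation. The uniform marginals and the propagation rule are illustrative assumptions:

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def ot_affinity_labels(x_l, y_l, x_u, reg=0.1):
    """Affinity matrix from the regularised OT plan, then label propagation."""
    a = np.full(len(x_l), 1.0 / len(x_l))      # uniform labelled marginal
    b = np.full(len(x_u), 1.0 / len(x_u))      # uniform unlabelled marginal
    M = ot.dist(x_l, x_u)                      # squared-Euclidean cost matrix
    plan = ot.sinkhorn(a, b, M, reg)           # entropic OT plan = affinities
    W = plan / plan.sum(axis=0, keepdims=True) # normalise per unlabelled point
    onehot = np.eye(y_l.max() + 1)[y_l]        # labels as one-hot rows
    return (W.T @ onehot).argmax(axis=1)       # propagate labels through W

rng = np.random.default_rng(0)
x_l = rng.normal(size=(10, 2)); y_l = rng.integers(0, 2, size=10)
x_u = rng.normal(size=(25, 2))
print(ot_affinity_labels(x_l, y_l, x_u))
```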
- Efficient Semi-Implicit Variational Inference [65.07058307271329]
We propose an efficient and scalable method for semi-implicit variational inference (SIVI).
Our method maps SIVI's evidence lower bound to a form that admits lower-variance gradient estimates.
arXiv Detail & Related papers (2021-01-15T11:39:09Z)
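Semi-implicit variational inference builds the variational family hierarchically: an implicit mixing distribution feeds the parameters of an explicit conditional layer, and log q(z) is approximated by averaging the conditional density over mixing draws. A minimal PyTorch sketch of sampling plus the standard SIVI surrogate; the architectures are illustrative assumptions, and this is not the paper's accelerated estimator:

```python
import torch
import torch.nn as nn

mix = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))  # eps -> mu(eps)
log_sigma = torch.zeros(2, requires_grad=True)                      # shared scale

def sample_z():
    """Semi-implicit sample: implicit mixing noise -> explicit Gaussian layer."""
    eps = torch.randn(4)
    mu = mix(eps)
    return mu + log_sigma.exp() * torch.randn(2), mu

def sivi_log_q(z, mu0, K=16):
    """Standard SIVI surrogate for log q(z): average the conditional density
    over the generating mixing draw and K fresh ones."""
    mus = torch.stack([mu0] + [mix(torch.randn(4)) for _ in range(K)])
    normal = torch.distributions.Normal(mus, log_sigma.exp())
    log_q = normal.log_prob(z).sum(-1)                 # (K+1,) conditional log-densities
    return torch.logsumexp(log_q, dim=0) - torch.log(torch.tensor(K + 1.0))

z, mu0 = sample_z()
print(sivi_log_q(z, mu0))
```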
- High-Dimensional Bayesian Optimization via Nested Riemannian Manifolds [0.0]
We propose to exploit the geometry of non-Euclidean search spaces, which often arise in a variety of domains, to learn structure-preserving mappings.
Our approach features geometry-aware Gaussian processes that jointly learn a nested-manifold embedding and a representation of the objective function in the latent space.
arXiv Detail & Related papers (2020-10-21T11:24:11Z)
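A minimal numpy sketch of the geometry-aware ingredient in the entry above: a GP whose kernel is driven by geodesic rather than Euclidean distance, here on the unit sphere. The manifold and kernel form are illustrative assumptions; a geodesic squared-exponential kernel is not guaranteed positive definite on every manifold, which is part of what geometry-aware constructions must address:

```python
import numpy as np

def sphere_geodesic_dist(X, Y):
    """Geodesic (arc-length) distance between unit vectors on S^{d-1}."""
    cos = np.clip(X @ Y.T, -1.0, 1.0)
    return np.arccos(cos)

def geodesic_kernel(X, Y, lengthscale=0.5):
    # Illustrative geodesic kernel; positive definiteness must be verified
    # per manifold (hence the structure-preserving mappings above).
    d = sphere_geodesic_dist(X, Y)
    return np.exp(-(d / lengthscale) ** 2)

# GP posterior mean at test points, conditioned on points on the sphere.
rng = np.random.default_rng(0)
X = rng.normal(size=(15, 3)); X /= np.linalg.norm(X, axis=1, keepdims=True)
y = np.sin(X[:, 0])
Xs = rng.normal(size=(5, 3)); Xs /= np.linalg.norm(Xs, axis=1, keepdims=True)
K = geodesic_kernel(X, X) + 1e-6 * np.eye(len(X))   # jitter for stability
mean = geodesic_kernel(Xs, X) @ np.linalg.solve(K, y)
print(mean)
```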
- Adaptive Learning of the Optimal Batch Size of SGD [52.50880550357175]
We propose a method capable of learning the optimal batch size adaptively throughout its iterations for strongly convex and smooth functions.
Our method does this provably, and in our experiments with synthetic and real data robustly exhibits nearly optimal behaviour.
We generalize our method to several new batch strategies not considered in the literature before, including a sampling suitable for distributed implementations.
arXiv Detail & Related papers (2020-05-03T14:28:32Z)
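A minimal numpy sketch of one standard adaptation rule in this family, the "norm test": grow the batch when the variance of the gradient estimate is large relative to its norm. The paper's own rule and its guarantees for strongly convex, smooth objectives differ; this is only illustrative:

```python
import numpy as np

def next_batch_size(per_example_grads, batch_size, theta=1.0, growth=2):
    """Norm-test heuristic: if the estimated variance of the batch-mean
    gradient exceeds theta^2 * ||g||^2, the batch is too noisy -- grow it."""
    g = per_example_grads.mean(axis=0)
    var = per_example_grads.var(axis=0).sum() / len(per_example_grads)
    if var > theta ** 2 * np.dot(g, g):
        return batch_size * growth
    return batch_size

rng = np.random.default_rng(0)
grads = rng.normal(loc=0.1, scale=1.0, size=(64, 10))  # 64 per-example gradients
print(next_batch_size(grads, batch_size=64))
```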
- Re-Examining Linear Embeddings for High-Dimensional Bayesian Optimization [20.511115436145467]
We identify several crucial issues and misconceptions about the use of linear embeddings for BO.
We show empirically that properly addressing these issues significantly improves the efficacy of linear embeddings for BO.
arXiv Detail & Related papers (2020-01-31T05:02:34Z)
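A minimal numpy sketch of the linear-embedding construction the entry above re-examines (REMBO-style): search in a random low-dimensional subspace and map candidates back to the high-dimensional box by clipping, which is exactly one of the design choices the paper scrutinises. The toy objective is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 100, 5                       # ambient and embedding dimensions
A = rng.normal(size=(D, d))         # random linear embedding

def to_ambient(y, low=-1.0, high=1.0):
    """Map a low-dimensional candidate into the [low, high]^D search box."""
    return np.clip(A @ y, low, high)  # clipping distorts the embedding --
                                      # one of the issues the paper examines

def objective(x):                     # toy black-box: only 5 dims matter
    return -np.sum((x[:5] - 0.3) ** 2)

# Random search in the embedded space stands in for the BO inner loop.
best = max((rng.normal(size=d) for _ in range(200)),
           key=lambda y: objective(to_ambient(y)))
print(objective(to_ambient(best)))
```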
This list is automatically generated from the titles and abstracts of the papers on this site.