Bayesian task embedding for few-shot Bayesian optimization
- URL: http://arxiv.org/abs/2001.00637v1
- Date: Thu, 2 Jan 2020 21:46:48 GMT
- Title: Bayesian task embedding for few-shot Bayesian optimization
- Authors: Steven Atkinson and Sayan Ghosh and Natarajan Chennimalai-Kumar and
Genghis Khan and Liping Wang
- Abstract summary: We describe a method for Bayesian optimization by which one may incorporate data from multiple systems whose quantitative interrelationships are unknown a priori.
All general (nonreal-valued) features of the systems are associated with continuous latent variables that enter as inputs into a single metamodel that simultaneously learns the response surfaces of all of the systems.
We explain how the resulting probabilistic metamodel may be used for Bayesian optimization tasks and demonstrate its implementation on a variety of synthetic and real-world examples.
- Score: 2.722899166098863
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We describe a method for Bayesian optimization by which one may incorporate
data from multiple systems whose quantitative interrelationships are unknown a
priori. All general (nonreal-valued) features of the systems are associated
with continuous latent variables that enter as inputs into a single metamodel
that simultaneously learns the response surfaces of all of the systems.
Bayesian inference is used to determine appropriate beliefs regarding the
latent variables. We explain how the resulting probabilistic metamodel may be
used for Bayesian optimization tasks and demonstrate its implementation on a
variety of synthetic and real-world examples, comparing its performance under
zero-, one-, and few-shot settings against traditional Bayesian optimization,
which usually requires substantially more data from the system of interest.
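To make the modeling idea concrete, here is a minimal sketch (illustrative only, under assumed details; not the authors' code): each discrete system t is assigned a continuous latent embedding z_t, which is concatenated onto the real-valued inputs x so that one GP metamodel jointly learns all response surfaces. For brevity the embeddings are point-estimated by maximizing the GP marginal likelihood, whereas the paper maintains full Bayesian beliefs over them.

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between row-stacked inputs A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def neg_log_marginal_likelihood(z_flat, X, y, task_ids, n_tasks, d_z, noise=1e-2):
    """GP NLL (up to a constant) as a function of the per-task embeddings."""
    Z = z_flat.reshape(n_tasks, d_z)
    Xa = np.hstack([X, Z[task_ids]])        # augmented inputs [x, z_t]
    K = rbf_kernel(Xa, Xa) + noise * np.eye(len(y))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum()

# Toy data: three related 1-d tasks (shifted sines), ten points each.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (30, 1))
task_ids = np.repeat(np.arange(3), 10)
y = np.sin(6 * X[:, 0] + 0.5 * task_ids) + 0.05 * rng.standard_normal(30)

n_tasks, d_z = 3, 2
res = minimize(neg_log_marginal_likelihood, rng.standard_normal(n_tasks * d_z),
               args=(X, y, task_ids, n_tasks, d_z), method="L-BFGS-B")
print("learned task embeddings:\n", res.x.reshape(n_tasks, d_z))
```

Similar tasks should end up with nearby embeddings, which is what lets a few observations from a new system borrow strength from the others.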
Related papers
- Multi-Objective Causal Bayesian Optimization [2.5311562666866494]
We propose Multi-Objective Causal Bayesian Optimization (MO-CBO) to identify optimal interventions within a known multi-target causal graph.
We show that MO-CBO can be decomposed into several traditional multi-objective optimization tasks.
The proposed method is validated on both synthetic and real-world causal graphs.
arXiv Detail & Related papers (2025-02-20T17:26:16Z) - Information-theoretic Bayesian Optimization: Survey and Tutorial [2.3931689873603603]
This paper surveys information-theoretic acquisition functions, whose performance typically exceeds that of other acquisition functions.
We also cover how information-theoretic acquisition functions can be adapted to complex optimization scenarios such as multi-objective, constrained, non-myopic, multi-fidelity, parallel, and asynchronous settings.
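As one concrete example of this family (a hedged sketch, not code from the survey), Max-value Entropy Search scores a candidate by the expected information gained about the optimum value y*, approximated with Monte-Carlo samples of y* (e.g., maxima of GP posterior draws on a grid):

```python
import numpy as np
from scipy.stats import norm

def mes_acquisition(mu, sigma, ystar_samples):
    """Max-value Entropy Search score for candidates with GP posterior mean
    `mu` and stddev `sigma`, given samples of the optimum value y*."""
    sigma = np.maximum(sigma, 1e-9)
    gamma = (ystar_samples[:, None] - mu[None, :]) / sigma[None, :]   # (K, n)
    cdf = np.maximum(norm.cdf(gamma), 1e-12)
    return (gamma * norm.pdf(gamma) / (2 * cdf) - np.log(cdf)).mean(axis=0)

# Toy usage with a made-up posterior over five candidates and four y* samples;
# the next query would be the argmax of the returned scores.
mu = np.array([0.1, 0.4, 0.3, 0.0, 0.2])
sigma = np.array([0.5, 0.1, 0.3, 0.6, 0.2])
print(mes_acquisition(mu, sigma, np.array([0.8, 0.9, 0.7, 1.1])))
```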
arXiv Detail & Related papers (2025-01-22T10:54:15Z) - Structural Entropy Guided Probabilistic Coding [52.01765333755793]
We propose a novel structural entropy-guided probabilistic coding model, named SEPC.
We incorporate the relationship between latent variables into the optimization by proposing a structural entropy regularization loss.
Experimental results across 12 natural language understanding tasks, including both classification and regression tasks, demonstrate the superior performance of SEPC.
arXiv Detail & Related papers (2024-12-12T00:37:53Z) - An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to capture potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z) - Simulation Based Bayesian Optimization [0.6526824510982799]
This paper introduces Simulation Based Bayesian Optimization (SBBO) as a novel approach to optimizing acquisition functions.
SBBO allows the use of surrogate models tailored for spaces with discrete variables.
We empirically demonstrate the effectiveness of the SBBO method using various choices of surrogate models.
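One way to picture the approach (an illustrative sketch with hypothetical names, not the paper's implementation): treat exp(a(x)) of an acquisition a as an unnormalized density over discrete configurations and draw the next query with Metropolis-Hastings instead of gradient-based maximization, which discrete variables rule out:

```python
import numpy as np

def mh_acquisition_sampler(acq, init, n_choices, n_steps=2000, seed=0):
    """Sample configurations x ~ exp(acq(x)) with a symmetric one-coordinate
    proposal; return the best configuration visited along the chain."""
    rng = np.random.default_rng(seed)
    x = np.array(init)
    best, best_val = x.copy(), acq(x)
    for _ in range(n_steps):
        prop = x.copy()
        i = rng.integers(len(x))
        prop[i] = rng.integers(n_choices[i])       # resample one coordinate
        if np.log(rng.uniform()) < acq(prop) - acq(x):
            x = prop
        if acq(x) > best_val:
            best, best_val = x.copy(), acq(x)
    return best

# Toy acquisition over ten binary variables, peaked at configurations summing to 5.
acq = lambda x: -abs(float(np.sum(x)) - 5.0)
print(mh_acquisition_sampler(acq, init=np.zeros(10, dtype=int), n_choices=[2] * 10))
```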
arXiv Detail & Related papers (2024-01-19T16:56:11Z) - Bayesian Optimization with Informative Covariance [13.113313427848828]
We propose novel informative covariance functions for optimization, leveraging nonstationarity to encode preferences for certain regions of the search space.
We demonstrate that the proposed functions can increase the sample efficiency of Bayesian optimization in high dimensions, even under weak prior information.
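A minimal sketch of one such construction (a simplification under assumptions, not the paper's exact covariance): scale a stationary base kernel by a positive function s(.) that is larger near a region believed to contain the optimum; since k(x, x') = s(x) s(x') k0(x, x') multiplies a PSD kernel by an outer product of positive values, it remains a valid covariance:

```python
import numpy as np

def base_rbf(A, B, ls=0.2):
    """Stationary squared-exponential base kernel."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def informative_kernel(A, B, anchor, ls=0.2, boost=2.0, width=0.3):
    """Nonstationary covariance with signal variance amplified near `anchor`,
    the region the prior favors as likely to contain the optimum."""
    s = lambda X: 1.0 + boost * np.exp(
        -((X - anchor) ** 2).sum(-1) / (2 * width ** 2))
    return s(A)[:, None] * s(B)[None, :] * base_rbf(A, B, ls)

# Covariance over a 1-d grid with extra prior variance near x = 0.7.
X = np.linspace(0, 1, 50)[:, None]
K = informative_kernel(X, X, anchor=np.array([0.7]))
```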
arXiv Detail & Related papers (2022-08-04T15:05:11Z) - Pre-training helps Bayesian optimization too [49.28382118032923]
We seek an alternative practice for setting functional priors.
In particular, we consider the scenario where we have data from similar functions that allow us to pre-train a tighter distribution a priori.
Our results show that our method is able to locate good hyperparameters at least 3 times more efficiently than the best competing methods.
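A hedged sketch of the pre-training step (illustrative, not the paper's code): fit one set of GP hyperparameters by minimizing the summed negative log marginal likelihood across the related datasets, then reuse them as the prior for Bayesian optimization on the new task:

```python
import numpy as np
from scipy.optimize import minimize

def gp_nll(log_params, X, y):
    """Negative log marginal likelihood (up to a constant) of an RBF GP."""
    ls, var, noise = np.exp(log_params)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = var * np.exp(-0.5 * d2 / ls ** 2) + noise * np.eye(len(y))
    L = np.linalg.cholesky(K)
    a = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ a + np.log(np.diag(L)).sum()

def pretrain_prior(datasets, init=np.zeros(3)):
    """datasets: list of (X, y) pairs from functions similar to the target."""
    total = lambda p: sum(gp_nll(p, X, y) for X, y in datasets)
    res = minimize(total, init, method="Nelder-Mead")
    return dict(zip(["lengthscale", "variance", "noise"], np.exp(res.x)))
```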
arXiv Detail & Related papers (2022-07-07T04:42:54Z) - HyperImpute: Generalized Iterative Imputation with Automatic Model
Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
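The core loop can be sketched as follows (a simplification; the actual framework adds automatic per-column model selection on top, and any regressor could stand in for the linear model): regress each incomplete column on the others and refresh the imputed entries until they stabilize:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def iterative_impute(X, n_iters=10):
    """Column-wise iterative imputation; assumes every column has at least
    a few observed entries."""
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):                  # crude mean initialization
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_iters):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            others = np.delete(X, j, axis=1)
            model = LinearRegression().fit(others[~miss[:, j]], X[~miss[:, j], j])
            X[miss[:, j], j] = model.predict(others[miss[:, j]])
    return X
```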
arXiv Detail & Related papers (2022-06-15T19:10:35Z) - Surrogate modeling for Bayesian optimization beyond a single Gaussian
process [62.294228304646516]
We propose a novel Bayesian surrogate model to balance exploration with exploitation of the search space.
To endow function sampling with scalability, random feature-based kernel approximation is leveraged per GP model.
To further establish convergence of the proposed EGP-TS to the global optimum, analysis is conducted based on the notion of Bayesian regret.
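A small sketch of the scalability trick (assumed details, not the paper's code): with random Fourier features, a GP posterior function draw collapses to sampling the weights of a Bayesian linear model, which Thompson sampling can then maximize cheaply; an ensemble would keep one such model per candidate kernel:

```python
import numpy as np

def rff_thompson_sample(X, y, n_features=200, ls=0.5, noise=1e-2, seed=0):
    """Return one posterior function draw of an RFF-approximated RBF GP."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_features, X.shape[1])) / ls
    b = rng.uniform(0, 2 * np.pi, n_features)
    phi = lambda Z: np.sqrt(2.0 / n_features) * np.cos(Z @ W.T + b)
    P = phi(X)
    A = P.T @ P + noise * np.eye(n_features)
    mean = np.linalg.solve(A, P.T @ y)        # posterior mean of the weights
    cov = noise * np.linalg.inv(A)            # posterior covariance
    w = rng.multivariate_normal(mean, cov)    # one posterior weight sample
    return lambda Z: phi(Z) @ w               # maximize this to pick the query
```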
arXiv Detail & Related papers (2022-05-27T16:43:10Z) - A Lagrangian Duality Approach to Active Learning [119.36233726867992]
We consider the batch active learning problem, where only a subset of the training data is labeled.
We formulate the learning problem using constrained optimization, where each constraint bounds the performance of the model on labeled samples.
We show, via numerical experiments, that our proposed approach performs similarly to or better than state-of-the-art active learning methods.
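A loose sketch of the formulation (a deliberate simplification with hypothetical names): impose one constraint per labeled sample and run projected dual ascent; large multipliers then flag the samples that most constrain the model, which is the kind of dual signal such an approach can exploit when choosing what to label next:

```python
import numpy as np

def primal_dual_train(f_obj, g_constraints, theta, eps=0.1,
                      lr=1e-2, rho=1e-1, steps=500):
    """Minimize f_obj(theta) s.t. g_constraints(theta)[i] <= eps for all i,
    via gradient descent on theta and projected ascent on multipliers lam.
    Gradients are numerical to keep the sketch dependency-free."""
    lam = np.zeros(len(g_constraints(theta)))
    lagrangian = lambda th: f_obj(th) + lam @ (g_constraints(th) - eps)
    for _ in range(steps):
        grad = np.array([(lagrangian(theta + h) - lagrangian(theta - h)) / 2e-5
                         for h in 1e-5 * np.eye(len(theta))])
        theta = theta - lr * grad
        lam = np.maximum(0.0, lam + rho * (g_constraints(theta) - eps))
    return theta, lam   # lam[i] large => constraint i binds hardest
```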
arXiv Detail & Related papers (2022-02-08T19:18:49Z) - Approximate Bayesian Optimisation for Neural Networks [6.921210544516486]
A body of work has sought to automate machine learning algorithms, highlighting the importance of model choice.
Addressing analytical tractability and computational feasibility in a principled fashion is necessary to ensure both efficiency and applicability.
arXiv Detail & Related papers (2021-08-27T19:03:32Z) - Are we Forgetting about Compositional Optimisers in Bayesian
Optimisation? [66.39551991177542]
This paper examines Bayesian optimisation, a sample-efficient methodology for global optimisation.
Within this framework, a crucial performance-determining subroutine is maximisation of the acquisition function.
We highlight the empirical advantages of a compositional approach to acquisition function maximisation across 3958 individual experiments.
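For context, the kind of baseline routine the compositional optimisers are benchmarked against can be sketched as multi-start gradient ascent on the acquisition function (illustrative only, with numerical gradients; the paper's compositional formulation is more elaborate):

```python
import numpy as np

def multistart_maximize(acq, bounds, n_starts=16, steps=200, lr=1e-2, seed=0):
    """Maximize acq over a box via random restarts plus gradient ascent."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    best_x, best_val = None, -np.inf
    for _ in range(n_starts):
        x = rng.uniform(lo, hi)
        for _ in range(steps):
            g = np.array([(acq(x + h) - acq(x - h)) / 2e-6
                          for h in 1e-6 * np.eye(len(x))])
            x = np.clip(x + lr * g, lo, hi)
        if acq(x) > best_val:
            best_x, best_val = x, acq(x)
    return best_x

# Toy stand-in acquisition on [0, 1]^2 with its maximum at (0.3, 0.3).
acq = lambda x: -np.sum((x - 0.3) ** 2)
print(multistart_maximize(acq, bounds=(np.zeros(2), np.ones(2))))
```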
arXiv Detail & Related papers (2020-12-15T12:18:38Z) - Preferential Batch Bayesian Optimization [16.141259199997005]
Preferential batch Bayesian optimization (PBBO) is a new framework that allows finding the optimum of a latent function of interest.
We show how the acquisitions developed under this framework generalize and augment previous approaches in Bayesian optimization.
arXiv Detail & Related papers (2020-03-25T14:59:15Z) - Practical Bayesian Optimization of Objectives with Conditioning
Variables [1.0497128347190048]
We consider the more general case where a user is faced with multiple problems that each need to be optimized conditional on a state variable.
Similarity across objectives boosts optimization of each objective in two ways.
We propose a framework for conditional optimization: ConBO.
arXiv Detail & Related papers (2020-02-23T22:06:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.