Density Ratio Estimation-based Bayesian Optimization with
Semi-Supervised Learning
- URL: http://arxiv.org/abs/2305.15612v2
- Date: Fri, 6 Oct 2023 17:13:24 GMT
- Title: Density Ratio Estimation-based Bayesian Optimization with
Semi-Supervised Learning
- Authors: Jungtaek Kim
- Abstract summary: We propose density ratio estimation-based Bayesian optimization with semi-supervised learning.
We present experimental results for our method and several baseline methods in two distinct scenarios: unlabeled point sampling and a fixed-size pool.
- Score: 5.346298077607419
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian optimization has attracted huge attention from diverse research
areas in science and engineering, since it can efficiently find a global
optimum of an expensive-to-evaluate black-box function. In general, a
probabilistic regression model, e.g., a Gaussian process or a Bayesian neural
network, is widely used as a surrogate function to model an explicit
distribution over function evaluations, given an input point at which to
estimate and a training dataset. Beyond probabilistic regression-based Bayesian
optimization, density ratio estimation-based Bayesian optimization has been
suggested, which estimates the density ratio between the group of points
relatively close to a global optimum and the group relatively far from it.
Developing this line of research further, a supervised classifier can be
employed to estimate a class probability for the two groups instead of a
density ratio. However, the supervised classifiers used in this strategy are
prone to be overconfident about a global solution candidate. To solve this
problem, we propose density ratio estimation-based Bayesian optimization with
semi-supervised learning. Finally, we present experimental results for our
method and several baseline methods in two distinct scenarios: unlabeled point
sampling and a fixed-size pool.
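The classifier-based strategy described above can be sketched in a few lines: label the observed points whose objective values fall in the best quantile as "good" and the rest as "bad", fit a classifier, and propose the candidate with the highest predicted class probability. This is a minimal illustrative sketch, not the paper's algorithm; the nearest-centroid classifier and the helper names `fit_threshold_classifier` and `propose_next` are assumptions chosen for brevity, standing in for any probabilistic classifier.

```python
import numpy as np

def fit_threshold_classifier(X, y, gamma=0.25):
    """Label the top gamma-quantile of observations as 'good' (1) and the
    rest as 'bad' (0), then fit a classifier on those labels. A simple
    nearest-centroid rule stands in for any probabilistic classifier."""
    tau = np.quantile(y, gamma)              # minimization: low y is good
    labels = (y <= tau).astype(int)
    centroids = {c: X[labels == c].mean(axis=0) for c in (0, 1)}

    def predict_proba(x):
        # Softmax over negative distances to the two class centroids,
        # giving a rough estimate of P(good | x).
        d = np.array([np.linalg.norm(x - centroids[c]) for c in (0, 1)])
        e = np.exp(-d)
        return e[1] / e.sum()

    return predict_proba

def propose_next(predict_proba, candidates):
    """Pick the candidate maximizing the estimated class probability,
    which plays the role of the acquisition function here."""
    scores = [predict_proba(x) for x in candidates]
    return candidates[int(np.argmax(scores))]

rng = np.random.default_rng(0)
f = lambda x: np.sum((x - 0.5) ** 2)         # toy black-box objective
X = rng.random((20, 2))
y = np.array([f(x) for x in X])
for _ in range(10):                          # Bayesian optimization loop
    proba = fit_threshold_classifier(X, y)
    x_next = propose_next(proba, rng.random((100, 2)))
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next))
best = X[np.argmin(y)]
```

The semi-supervised variant proposed in the paper would additionally feed unlabeled candidate points into the classifier's training step to temper its overconfidence; that component is omitted here for brevity.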
Related papers
- An Asymptotically Optimal Coordinate Descent Algorithm for Learning Bayesian Networks from Gaussian Models [6.54203362045253]
We study the problem of learning networks from continuous observational data, generated according to a linear Gaussian structural equation model.
We propose a new coordinate descent algorithm that converges to the optimal objective value of the $\ell$-penalized maximum likelihood.
arXiv Detail & Related papers (2024-08-21T20:18:03Z) - Collaborative Heterogeneous Causal Inference Beyond Meta-analysis [68.4474531911361]
We propose a collaborative inverse propensity score estimator for causal inference with heterogeneous data.
Our method shows significant improvements over the methods based on meta-analysis when heterogeneity increases.
arXiv Detail & Related papers (2024-04-24T09:04:36Z) - On diffusion-based generative models and their error bounds: The log-concave case with full convergence estimates [5.13323375365494]
We provide theoretical guarantees for the convergence behaviour of diffusion-based generative models under strongly log-concave data.
Our class of functions used for score estimation is made of Lipschitz continuous functions avoiding any Lipschitzness assumption on the score function.
This approach yields the best known convergence rate for our sampling algorithm.
arXiv Detail & Related papers (2023-11-22T18:40:45Z) - Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
arXiv Detail & Related papers (2023-11-08T00:10:21Z) - A Mean Field Approach to Empirical Bayes Estimation in High-dimensional
Linear Regression [8.345523969593492]
We study empirical Bayes estimation in high-dimensional linear regression.
We adopt a variational empirical Bayes approach, introduced originally in Carbonetto and Stephens (2012) and Kim et al. (2022).
This provides the first rigorous empirical Bayes method in a high-dimensional regression setting without sparsity.
arXiv Detail & Related papers (2023-09-28T20:51:40Z) - Sobolev Space Regularised Pre Density Models [51.558848491038916]
We propose a new approach to non-parametric density estimation that is based on regularizing a Sobolev norm of the density.
This method is statistically consistent, and makes the inductive validation model clear and consistent.
arXiv Detail & Related papers (2023-07-25T18:47:53Z) - Approximate Bayesian Optimisation for Neural Networks [6.921210544516486]
A body of work has sought to automate machine learning algorithms, highlighting the importance of model choice.
Addressing both analytical tractability and computational feasibility is necessary to ensure efficiency and applicability.
arXiv Detail & Related papers (2021-08-27T19:03:32Z) - Variational Refinement for Importance Sampling Using the Forward
Kullback-Leibler Divergence [77.06203118175335]
Variational Inference (VI) is a popular alternative to exact sampling in Bayesian inference.
Importance sampling (IS) is often used to fine-tune and de-bias the estimates of approximate Bayesian inference procedures.
We propose a novel combination of optimization and sampling techniques for approximate Bayesian inference.
arXiv Detail & Related papers (2021-06-30T11:00:24Z) - Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
arXiv Detail & Related papers (2021-06-22T16:07:02Z) - BORE: Bayesian Optimization by Density-Ratio Estimation [34.22533785573784]
We cast the expected improvement (EI) function as a binary classification problem, building on the link between class-probability estimation and density-ratio estimation.
This reformulation provides numerous advantages, not least in terms of versatility and scalability.
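The link BORE exploits can be checked numerically with Bayes' rule: if a fraction $\gamma$ of points is labeled "good" with density $\ell(x)$ and the rest "bad" with density $g(x)$, the class probability is $\pi(x) = \gamma\,\ell(x) / (\gamma\,\ell(x) + (1-\gamma)\,g(x))$, which equals $\gamma$ times the $\gamma$-relative density ratio. The Gaussian densities below are illustrative stand-ins, not taken from the paper.

```python
import math

def normal_pdf(x, mu, sigma):
    """Univariate Gaussian density, used as a stand-in for the two groups."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

gamma = 0.25
l = lambda x: normal_pdf(x, 0.0, 1.0)   # density of the 'good' group
g = lambda x: normal_pdf(x, 2.0, 1.0)   # density of the 'bad' group

x = 0.3
# Class probability via Bayes' rule with prior P(good) = gamma.
pi = gamma * l(x) / (gamma * l(x) + (1 - gamma) * g(x))
# Gamma-relative density ratio, the quantity BORE targets.
relative_ratio = l(x) / (gamma * l(x) + (1 - gamma) * g(x))
# The two differ only by the constant factor gamma, so maximizing the
# classifier output is equivalent to maximizing the relative ratio.
assert abs(relative_ratio - pi / gamma) < 1e-12
```

Because the two quantities are monotone transforms of each other, any classifier that estimates $\pi(x)$ well can serve directly as the acquisition function.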
arXiv Detail & Related papers (2021-02-17T20:04:11Z) - Efficient Ensemble Model Generation for Uncertainty Estimation with
Bayesian Approximation in Segmentation [74.06904875527556]
We propose a generic and efficient segmentation framework to construct ensemble segmentation models.
In the proposed method, ensemble models can be efficiently generated by using the layer selection method.
We also devise a new pixel-wise uncertainty loss, which improves the predictive performance.
arXiv Detail & Related papers (2020-05-21T16:08:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.