Constrained Bayesian Optimization Under Partial Observations: Balanced
Improvements and Provable Convergence
- URL: http://arxiv.org/abs/2312.03212v2
- Date: Sat, 23 Dec 2023 01:35:11 GMT
- Title: Constrained Bayesian Optimization Under Partial Observations: Balanced
Improvements and Provable Convergence
- Authors: Shengbo Wang and Ke Li
- Abstract summary: We endeavor to design an efficient and provable method for expensive POCOPs under the framework of constrained Bayesian optimization.
We present an improved design of the acquisition functions that introduces balanced exploration during optimization.
We propose a Gaussian process embedding different likelihoods as the surrogate model for a partially observable constraint.
- Score: 6.461785985849886
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The partially observable constrained optimization problems (POCOPs) impede
data-driven optimization techniques since an infeasible solution of POCOPs can
provide little information about the objective as well as the constraints. We
endeavor to design an efficient and provable method for expensive POCOPs under
the framework of constrained Bayesian optimization. Our method consists of two
key components. Firstly, we present an improved design of the acquisition
functions that introduces balanced exploration during optimization. We
rigorously study the convergence properties of this design to demonstrate its
effectiveness. Secondly, we propose a Gaussian process embedding different
likelihoods as the surrogate model for a partially observable constraint. This
model leads to a more accurate representation of the feasible regions compared
to traditional classification-based models. Our proposed method is empirically
studied on both synthetic and real-world problems. The results demonstrate the
competitiveness of our method for solving POCOPs.
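The paper's balanced acquisition design and likelihood-embedding surrogate are not reproduced below; as a point of reference, here is a minimal sketch of the classical constrained-BO baseline it builds on, where a GP regressor models the objective, a GP classifier models feasibility, and candidates maximize expected improvement weighted by the probability of feasibility. The toy problem and all names are illustrative assumptions.

```python
# Minimal constrained-BO sketch (classical EI x P(feasible) baseline, not the
# paper's balanced acquisition). Toy 1-D minimization; all names illustrative.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor, GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

def objective(x):            # hypothetical expensive objective (to minimize)
    return np.sin(3 * x) + 0.5 * x

def feasible(x):             # partially observable constraint: only a 0/1 label
    return (np.cos(2 * x) > -0.3).astype(int)

X = np.linspace(0.1, 2.9, 8).reshape(-1, 1)   # initial design (both classes present)
y = objective(X).ravel()
c = feasible(X).ravel()

for _ in range(10):
    gp_f = GaussianProcessRegressor(kernel=RBF(0.5), normalize_y=True).fit(X, y)
    gp_c = GaussianProcessClassifier(kernel=RBF(0.5)).fit(X, c)

    cand = np.linspace(0, 3, 200).reshape(-1, 1)
    mu, sd = gp_f.predict(cand, return_std=True)
    best = y[c == 1].min()                            # best feasible value so far
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
    pof = gp_c.predict_proba(cand)[:, 1]               # probability of feasibility
    x_next = cand[np.argmax(ei * pof)]

    X = np.vstack([X, x_next[None, :]])
    y = np.append(y, objective(x_next)[0])
    c = np.append(c, feasible(x_next)[0])

print("best feasible value:", y[c == 1].min())
```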
Related papers
- On Sampling Strategies for Spectral Model Sharding [7.185534285278903]
In this work, we present two sampling strategies for such sharding.
The first produces unbiased estimators of the original weights, while the second aims to minimize the squared approximation error.
We demonstrate that both of these methods can lead to improved performance on various commonly used datasets.
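As a hedged illustration of the first (unbiased) strategy, the sketch below samples rank-one SVD components of a weight matrix with probabilities proportional to the singular values and applies importance weights so the shard is an unbiased estimator of the original matrix; the paper's exact estimator may differ.

```python
# Hypothetical sketch: sample rank-1 SVD components of a weight matrix with
# importance weights so the sparse shard is an unbiased estimator of W.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))

U, s, Vt = np.linalg.svd(W, full_matrices=False)
p = s / s.sum()                       # sampling probabilities ~ singular values
k = 8                                 # shard budget
idx = rng.choice(len(s), size=k, replace=True, p=p)

# E[shard] = sum_i p_i * s_i u_i v_i^T / p_i = W, so the estimator is unbiased.
shard = sum(s[i] / p[i] * np.outer(U[:, i], Vt[i]) for i in idx) / k
print("single-shard relative error:", np.linalg.norm(shard - W) / np.linalg.norm(W))
```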
arXiv Detail & Related papers (2024-10-31T16:37:25Z)
- Efficient Fairness-Performance Pareto Front Computation [51.558848491038916]
We show that optimal fair representations possess several useful structural properties.
We then show that the resulting approximation problems can be solved efficiently via concave programming methods.
arXiv Detail & Related papers (2024-09-26T08:46:48Z)
- Learning Constrained Optimization with Deep Augmented Lagrangian Methods [54.22290715244502]
A machine learning (ML) model is trained to emulate a constrained optimization solver.
This paper proposes an alternative approach, in which the ML model is trained to predict dual solution estimates directly.
This enables an end-to-end training scheme in which the dual objective serves as the loss function, driving solution estimates toward primal feasibility and emulating a Dual Ascent method.
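For context, here is a minimal sketch of the classical Dual Ascent loop the learned model emulates, on a toy equality-constrained quadratic program; the dual objective ascended here is the quantity used as a training loss. The problem instance is illustrative.

```python
# Textbook dual ascent on min 1/2 x^T Q x + c^T x  s.t.  A x = b (toy instance).
import numpy as np

Q = np.array([[3.0, 0.5], [0.5, 2.0]])        # positive definite
c = np.array([1.0, -2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

lam = np.zeros(1)
alpha = 0.5                                    # dual step size
for _ in range(100):
    x = np.linalg.solve(Q, -(c + A.T @ lam))   # argmin_x of the Lagrangian
    lam = lam + alpha * (A @ x - b)            # ascend the dual objective

print("x* =", x, " residual A x - b =", A @ x - b)
```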
arXiv Detail & Related papers (2024-03-06T04:43:22Z)
- Optimizing $CO_{2}$ Capture in Pressure Swing Adsorption Units: A Deep Neural Network Approach with Optimality Evaluation and Operating Maps for Decision-Making [0.0]
This study focuses on enhancing Pressure Swing Adsorption units for carbon dioxide capture.
We developed and implemented a multiple-input, single-output (MISO) framework comprising two deep neural network (DNN) models.
This approach delineated feasible operational regions (FORs) and highlighted the spectrum of optimal decision-making scenarios.
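The paper's PSA process models are not reproduced; the sketch below only illustrates, on synthetic data with hypothetical variable names, the MISO pattern of two networks, one predicting a purity-like objective and one predicting feasibility, followed by a grid sweep that traces an operating map.

```python
# Hypothetical MISO sketch: two small networks map operating inputs
# (pressure, cycle time) to a purity-like objective and a feasibility label;
# a grid sweep then traces a feasible operating region. Synthetic data only.
import numpy as np
from sklearn.neural_network import MLPRegressor, MLPClassifier

rng = np.random.default_rng(0)
X = rng.uniform([1.0, 10.0], [10.0, 100.0], size=(500, 2))   # (pressure, cycle time)
purity = 0.7 + 0.02 * X[:, 0] - 0.001 * X[:, 1] + 0.01 * rng.standard_normal(500)
feas = (X[:, 0] * 8 - X[:, 1] > 0).astype(int)               # made-up feasibility rule

net_obj = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, purity)
net_feas = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, feas)

# Operating map: evaluate both networks on a grid of conditions.
P, T = np.meshgrid(np.linspace(1, 10, 40), np.linspace(10, 100, 40))
grid = np.column_stack([P.ravel(), T.ravel()])
ok = net_feas.predict(grid).astype(bool)
best = grid[ok][np.argmax(net_obj.predict(grid[ok]))]
print("best feasible operating point (pressure, cycle time):", best)
```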
arXiv Detail & Related papers (2023-12-06T19:43:37Z)
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed, formulating the objective as the logistic loss between the real data and artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
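As a reference point, here is a minimal sketch of the standard NCE objective for an unnormalized 1-D Gaussian whose log-normalizer is learned as a free parameter; the compositional reformulation of the direct MLE objective studied in the paper is not shown.

```python
# Standard NCE sketch: fit the unnormalized model log p(x) = -x^2/(2 s^2) + c,
# where c absorbs the unknown log-normalizer, by logistic discrimination
# against N(0, 2) noise. Plain-numpy gradient descent; illustrative only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x_data = rng.normal(0.0, 1.0, size=2000)             # true scale = 1
x_noise = rng.normal(0.0, 2.0, size=2000)

log_s, c = 0.5, 0.0                                   # log-scale, log-normalizer
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

def logit(x, log_s, c):
    # G(x) = log p_model(x) - log p_noise(x)
    log_model = -x**2 / (2 * np.exp(2 * log_s)) + c
    return log_model - norm.logpdf(x, scale=2.0)

lr = 0.1
for _ in range(500):
    for x, label in ((x_data, 1.0), (x_noise, 0.0)):
        err = sigmoid(logit(x, log_s, c)) - label     # d(logistic loss)/dG
        c -= lr * err.mean()
        log_s -= lr * (err * x**2 * np.exp(-2 * log_s)).mean()

print("fitted scale:", np.exp(log_s), " fitted normalizer:", np.exp(c))
```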
arXiv Detail & Related papers (2023-06-13T01:18:16Z)
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable parts separately, linearizing only the smooth components.
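A minimal sketch of the linearization idea under illustrative assumptions: minimizing g(x) + ||x||_1 over a box, only the smooth g is linearized, and each step solves the resulting (here separable) subproblem exactly before averaging, in the conditional-gradient style.

```python
# Linearization sketch: minimize g(x) + ||x||_1 over the box [-1, 1]^n by
# linearizing only the smooth g; the non-smooth l1 term is kept exact in the
# per-step subproblem, which is separable here. Toy quadratic g; illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 20
M = rng.standard_normal((n, n)); Q = M @ M.T / n + np.eye(n)
u = rng.standard_normal(n)

g = lambda x: 0.5 * x @ Q @ x - u @ x
grad_g = lambda x: Q @ x - u

def subproblem(d):
    # argmin over v in [-1, 1] of d*v + |v|, coordinate-wise:
    # v = -1 if d > 1, v = +1 if d < -1, else v = 0.
    return np.where(d > 1, -1.0, np.where(d < -1, 1.0, 0.0))

x = np.zeros(n)
for t in range(200):
    v = subproblem(grad_g(x))
    gamma = 2.0 / (t + 2)                  # standard conditional-gradient step
    x = (1 - gamma) * x + gamma * v

print("objective:", g(x) + np.abs(x).sum())
```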
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
- Differentiable Multi-Target Causal Bayesian Experimental Design [43.76697029708785]
We introduce a gradient-based approach for the problem of Bayesian optimal experimental design to learn causal models in a batch setting.
Existing methods rely on greedy approximations to construct a batch of experiments.
We propose a conceptually simple end-to-end gradient-based optimization procedure to acquire a set of optimal intervention target-state pairs.
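The paper's design objective is not reproduced; the sketch below only illustrates the enabling trick for end-to-end gradient-based selection of discrete intervention targets, relaxing a one-hot choice with Gumbel-softmax so a stand-in differentiable utility can be optimized directly.

```python
# Hedged sketch of differentiable discrete selection: Gumbel-softmax relaxes a
# one-hot choice of intervention target so a (stand-in) differentiable utility
# can be optimized end to end. Not the paper's design objective.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
utility_per_target = torch.tensor([0.1, 0.9, 0.3, 0.2, 0.5])  # stand-in utility

logits = torch.zeros(5, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

for _ in range(300):
    w = F.gumbel_softmax(logits, tau=0.5, hard=False)   # soft one-hot sample
    loss = -(w * utility_per_target).sum()              # maximize expected utility
    opt.zero_grad()
    loss.backward()
    opt.step()

print("selected target:", logits.argmax().item())       # expect index 1
```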
arXiv Detail & Related papers (2023-02-21T11:32:59Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
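For contrast with the folded backward pass, here is a minimal sketch of plain algorithm unrolling: autograd records T gradient steps of a least-squares solver, so the approximate solution can be differentiated with respect to the problem data.

```python
# Plain unrolling sketch: autograd differentiates through T gradient steps of a
# least-squares solver, giving d(solution)/d(b). The paper's folded analytical
# backward pass is not shown; this is the baseline it improves on.
import torch

torch.manual_seed(0)
A = torch.randn(10, 5)
b = torch.randn(10, requires_grad=True)

x = torch.zeros(5)
step = 1.0 / torch.linalg.matrix_norm(A, 2) ** 2   # safe step size (1/L)
for _ in range(100):                                # unrolled iterations
    x = x - step * A.T @ (A @ x - b)                # each op stays on the tape

loss = x.sum()
loss.backward()                                     # backprop through the unroll
print("d loss / d b:", b.grad)
```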
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- Batched Data-Driven Evolutionary Multi-Objective Optimization Based on Manifold Interpolation [6.560512252982714]
We propose a framework for implementing batched data-driven evolutionary multi-objective optimization.
It is general enough that any off-the-shelf evolutionary multi-objective optimization algorithm can be applied in a plug-in manner.
Our proposed framework features faster convergence and stronger resilience to various Pareto front (PF) shapes.
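The manifold model itself is not reproduced; as a rough illustration of interpolation-based batch proposal, the sketch below filters a population to its non-dominated members and interpolates random pairs in decision space to form the next batch. All details are illustrative.

```python
# Rough sketch of interpolation-based batch proposal: keep the non-dominated
# members of a population, then interpolate random pairs in decision space to
# form the next evaluation batch. The paper's manifold model is not shown.
import numpy as np

rng = np.random.default_rng(0)

def pareto_mask(F):
    # True where no other point dominates (minimization of all objectives).
    n = len(F)
    return np.array([not any(np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
                             for j in range(n) if j != i) for i in range(n)])

X = rng.uniform(0, 1, size=(50, 3))                             # decision vectors
F = np.column_stack([X[:, 0], 1 - np.sqrt(X[:, 0]) + X[:, 1]])  # toy objectives

nd = X[pareto_mask(F)]
batch = []
for _ in range(8):                                       # batch size 8
    i, j = rng.choice(len(nd), size=2, replace=False)
    t = rng.uniform()
    batch.append((1 - t) * nd[i] + t * nd[j])            # interpolate a pair
print("proposed batch shape:", np.array(batch).shape)
```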
arXiv Detail & Related papers (2021-09-12T23:54:26Z)
- Bayesian Optimisation for Constrained Problems [0.0]
We propose a novel variant of the well-known Knowledge Gradient acquisition function that allows it to handle constraints.
We empirically compare the new algorithm with four other state-of-the-art constrained Bayesian optimisation algorithms and demonstrate its superior performance.
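Here is a hedged Monte Carlo sketch of the (unconstrained) Knowledge Gradient on a discrete candidate grid, where the value of sampling a point is the expected gain in the best posterior mean after a fantasy observation; the constrained variant proposed in the paper additionally accounts for feasibility, which is omitted here.

```python
# Monte Carlo Knowledge Gradient sketch on a discrete grid:
# KG(x) = E[max_j mu_new(j)] - max_j mu(j), estimated by refitting the GP on
# fantasy observations at x. The paper's constraint handling is not reproduced.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
f = lambda x: -np.sin(3 * x) - x**2 + 0.7 * x          # toy objective (maximized)

X = np.array([[0.1], [0.8], [1.5]])
y = f(X).ravel()
grid = np.linspace(0.0, 2.0, 25).reshape(-1, 1)

# optimizer=None fixes the kernel, so refitting equals posterior conditioning.
gp = GaussianProcessRegressor(kernel=RBF(0.3), alpha=1e-4, optimizer=None).fit(X, y)
best_now = gp.predict(grid).max()

kg = np.zeros(len(grid))
for i, x in enumerate(grid):                            # candidate to sample
    m, s = gp.predict(x[None, :], return_std=True)
    vals = []
    for _ in range(10):                                 # fantasy observations
        y_f = rng.normal(m[0], s[0])
        gp_f = GaussianProcessRegressor(kernel=RBF(0.3), alpha=1e-4,
                                        optimizer=None).fit(
            np.vstack([X, x[None, :]]), np.append(y, y_f))
        vals.append(gp_f.predict(grid).max())
    kg[i] = np.mean(vals) - best_now

print("next sample:", grid[np.argmax(kg)])
```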
arXiv Detail & Related papers (2021-05-27T15:43:09Z)
- Optimal Bayesian experimental design for subsurface flow problems [77.34726150561087]
We propose a novel approach for developing a polynomial chaos expansion (PCE) surrogate model for the design utility function.
This technique yields a response surface of reasonable quality for the targeted objective function at a computational budget comparable to a few single-point evaluations.
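A minimal sketch of a PCE surrogate fit under illustrative assumptions: a scalar Gaussian input, a probabilists' Hermite basis, and least-squares coefficients from a handful of model evaluations; the paper's design-utility target and construction details are not shown.

```python
# Minimal PCE sketch: fit a polynomial chaos surrogate of a 1-D model with a
# Gaussian input using probabilists' Hermite polynomials and least squares.
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)
model = lambda xi: np.exp(0.3 * xi) + 0.1 * xi**2       # stand-in expensive model

xi = rng.standard_normal(40)                            # samples of the input
Phi = hermevander(xi, deg=5)                            # He_0..He_5 basis matrix
coef, *_ = np.linalg.lstsq(Phi, model(xi), rcond=None)  # least-squares fit

xi_test = rng.standard_normal(1000)
pce = hermevander(xi_test, deg=5) @ coef
err = np.sqrt(np.mean((pce - model(xi_test))**2))
print("PCE coefficients:", np.round(coef, 3), " RMSE:", err)
```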
arXiv Detail & Related papers (2020-08-10T09:42:59Z)