Posterior Inference in Latent Space for Scalable Constrained Black-box Optimization
- URL: http://arxiv.org/abs/2507.00480v1
- Date: Tue, 01 Jul 2025 06:55:36 GMT
- Title: Posterior Inference in Latent Space for Scalable Constrained Black-box Optimization
- Authors: Kiyoung Om, Kyuil Sim, Taeyoung Yun, Hyeongyu Kang, Jinkyoo Park
- Abstract summary: Generative model-based approaches have emerged as a promising alternative for constrained optimization. We propose a new framework to overcome their scalability and mode-collapse issues. Our method achieves superior performance on various synthetic and real-world constrained black-box optimization tasks.
- Score: 14.037021165033778
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Optimizing high-dimensional black-box functions under black-box constraints is a pervasive task in a wide range of scientific and engineering problems. These problems are typically harder than unconstrained problems due to hard-to-find feasible regions. While Bayesian optimization (BO) methods have been developed to solve such problems, they often struggle with the curse of dimensionality. Recently, generative model-based approaches have emerged as a promising alternative for constrained optimization. However, they suffer from poor scalability and are vulnerable to mode collapse, particularly when the target distribution is highly multi-modal. In this paper, we propose a new framework to overcome these challenges. Our method iterates through two stages. First, we train flow-based models to capture the data distribution and surrogate models that predict both function values and constraint violations with uncertainty quantification. Second, we cast the candidate selection problem as a posterior inference problem to effectively search for promising candidates that have high objective values while not violating the constraints. During posterior inference, we find that the posterior distribution is highly multi-modal and has a large plateau due to constraints, especially when constraint feedback is given as binary indicators of feasibility. To mitigate this issue, we amortize the sampling from the posterior distribution in the latent space of flow-based models, which is much smoother than that in the data space. We empirically demonstrate that our method achieves superior performance on various synthetic and real-world constrained black-box optimization tasks. Our code is publicly available at https://github.com/umkiyoung/CiBO.
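The abstract describes a two-stage loop: (1) fit a flow-based generative model and uncertainty-aware surrogates on the data collected so far, then (2) sample new candidates from a posterior that upweights high predicted value and feasibility, with sampling carried out in the flow's latent space. Below is a minimal, self-contained sketch of that loop, not the authors' implementation: a fixed affine map stands in for the flow, a bootstrap ridge ensemble for the surrogates, and self-normalized importance sampling in latent space for the amortized posterior sampler. All names and hyperparameters (`fit_flow`, `fit_ensemble`, `select_candidates`, `beta`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4

# Toy black-box problem (hidden from the optimizer): maximize f subject
# to feasibility, with constraint feedback given only as a 0/1 indicator.
f = lambda x: -np.sum((x - 1.0) ** 2, axis=-1)
feas = lambda x: (np.sum(x, axis=-1) <= 1.0).astype(float)

# --- Stage 1 stand-ins ---------------------------------------------------
# The paper trains a flow-based model x = flow(z), z ~ N(0, I); a fixed
# affine map plays that role here so the sketch stays self-contained.
def fit_flow(X):
    mu = X.mean(axis=0)
    L = np.linalg.cholesky(np.cov(X.T) + 1e-3 * np.eye(dim))
    return mu, L

def flow(z, mu, L):
    return mu + z @ L.T  # latent space -> data space

# Surrogates with uncertainty: a small bootstrap ensemble of ridge models.
def fit_ensemble(X, y, k=5):
    phi = lambda X: np.hstack([X, np.ones((len(X), 1))])
    models = []
    for _ in range(k):
        idx = rng.integers(0, len(X), len(X))
        A, b = phi(X[idx]), y[idx]
        models.append(np.linalg.solve(A.T @ A + 1e-2 * np.eye(dim + 1), A.T @ b))
    return lambda X: np.stack([phi(X) @ w for w in models])  # shape (k, n)

# --- Stage 2: posterior sampling in latent space --------------------------
# Target: p(z) * exp(beta * value) * P(feasible). The paper amortizes this
# with a learned sampler; self-normalized importance sampling stands in.
def select_candidates(mu, L, f_ens, c_ens, n=2000, beta=3.0, batch=8):
    z = rng.standard_normal((n, dim))
    x = flow(z, mu, L)
    fp, cp = f_ens(x), c_ens(x)
    ucb = fp.mean(axis=0) + fp.std(axis=0)        # optimistic objective value
    p_feas = np.clip(cp.mean(axis=0), 1e-3, 1.0)  # soft feasibility estimate
    logw = beta * ucb + np.log(p_feas)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    pick = rng.choice(n, size=batch, replace=False, p=w)
    return x[pick]

# --- Outer loop ------------------------------------------------------------
X = rng.standard_normal((32, dim))
y, c = f(X), feas(X)
for it in range(10):
    mu, L = fit_flow(X)
    f_ens, c_ens = fit_ensemble(X, y), fit_ensemble(X, c)
    X_new = select_candidates(mu, L, f_ens, c_ens)
    X = np.vstack([X, X_new])
    y, c = np.r_[y, f(X_new)], np.r_[c, feas(X_new)]
    best = y[c > 0.5].max() if (c > 0.5).any() else float("-inf")
    print(f"iter {it}: best feasible value so far = {best:.3f}")
```

Sampling in latent space matters here because the indicator-like feasibility term makes the data-space posterior plateau-ridden and multi-modal, whereas the landscape under the flow's Gaussian prior is smoother.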
Related papers
- Posterior Inference with Diffusion Models for High-dimensional Black-box Optimization [17.92257026306603]
Generative models have emerged as a way to solve black-box optimization problems.
We introduce DiBO, a novel framework for solving high-dimensional black-box optimization problems.
Our method outperforms state-of-the-art baselines across various synthetic and real-world black-box optimization tasks.
arXiv Detail & Related papers (2025-02-24T04:19:15Z)
- Non-Myopic Multi-Objective Bayesian Optimization [64.31753000439514]
We consider the problem of finite-horizon sequential experimental design to solve multi-objective optimization (MOO) problems.
This problem arises in many real-world applications, including materials design.
We propose the first set of non-myopic methods for MOO problems.
arXiv Detail & Related papers (2024-12-11T04:05:29Z)
- Model Ensembling for Constrained Optimization [7.4351710906830375]
We consider a setting in which we wish to ensemble models for multidimensional output predictions that are in turn used for downstream optimization.
More precisely, we imagine we are given a number of models mapping a state space to multidimensional real-valued predictions.
These predictions form the coefficients of a linear objective that we would like to optimize under specified constraints.
We apply multicalibration techniques that lead to two provably efficient and convergent algorithms.
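A toy illustration of the pipeline just described, under stated assumptions: three hypothetical models predict a coefficient vector from a state, the predictions are combined (here by a plain average, standing in for the paper's multicalibration-based ensembling), and the result becomes the objective of a downstream linear program solved with `scipy.optimize.linprog`.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Three hypothetical models mapping a 3-dim state to d predicted coefficients.
d = 5
models = [lambda s, W=rng.standard_normal((d, 3)): W @ s for _ in range(3)]

state = rng.standard_normal(3)
preds = np.stack([m(state) for m in models])  # shape (num_models, d)

# A plain average stands in for the paper's multicalibration-based ensembling
# rule; the point is only that the ensembled prediction becomes the
# coefficient vector of the downstream LP.
c_hat = preds.mean(axis=0)

# Maximize c_hat @ x subject to sum(x) <= 1 and 0 <= x <= 1
# (linprog minimizes, hence the sign flip).
res = linprog(-c_hat, A_ub=np.ones((1, d)), b_ub=[1.0], bounds=[(0, 1)] * d)
print("optimizer:", res.x, "objective:", -res.fun)
```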
arXiv Detail & Related papers (2024-05-27T01:48:07Z)
- Boundary Exploration for Bayesian Optimization With Unknown Physical Constraints [37.095510211590984]
We propose BE-CBO, a new Bayesian optimization method that efficiently explores the boundary between feasible and infeasible designs.
Our method demonstrates superior performance against state-of-the-art methods through comprehensive experiments on synthetic and real-world benchmarks.
arXiv Detail & Related papers (2024-02-12T14:59:40Z)
- Primal Dual Continual Learning: Balancing Stability and Plasticity through Adaptive Memory Allocation [86.8475564814154]
We show that it is both possible and beneficial to undertake the constrained optimization problem directly.
We focus on memory-based methods, where a small subset of samples from previous tasks can be stored in a replay buffer.
We show that dual variables indicate the sensitivity of the optimal value of the continual learning problem with respect to constraint perturbations.
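A minimal primal-dual sketch of the idea, on a two-dimensional toy rather than an actual replay buffer: the new-task loss is minimized subject to an old-task loss budget, with gradient descent on the Lagrangian and dual ascent on the multiplier. The functions and step sizes below are illustrative, not the paper's algorithm.

```python
import numpy as np

# Toy continual-learning stand-in: minimize the current-task loss f
# subject to the old-task (replay) loss g1 staying below a budget eps.
theta = np.zeros(2)
f  = lambda th: np.sum((th - np.array([2.0, 0.0])) ** 2)  # new task
g1 = lambda th: np.sum((th - np.array([0.0, 1.0])) ** 2)  # old task
eps = 1.5

grad_f  = lambda th: 2 * (th - np.array([2.0, 0.0]))
grad_g1 = lambda th: 2 * (th - np.array([0.0, 1.0]))

lam, eta_p, eta_d = 0.0, 0.05, 0.1
for t in range(500):
    # Primal descent on the Lagrangian f + lam * (g1 - eps) ...
    theta -= eta_p * (grad_f(theta) + lam * grad_g1(theta))
    # ... and dual ascent: lam grows while the constraint is violated.
    lam = max(0.0, lam + eta_d * (g1(theta) - eps))

print(f"theta={theta}, old-task loss={g1(theta):.3f} (budget {eps}), lam={lam:.3f}")
```

The converged multiplier `lam` approximates how much the optimal value would change if the budget `eps` were perturbed, which is the sensitivity role of dual variables mentioned above.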
arXiv Detail & Related papers (2023-09-29T21:23:27Z)
- Sarah Frank-Wolfe: Methods for Constrained Optimization with Best Rates and Practical Features [65.64276393443346]
The Frank-Wolfe (FW) method is a popular approach for solving optimization problems with structured constraints.
We present two new variants of the FW algorithm for stochastic finite-sum minimization.
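For orientation, here is the classic Frank-Wolfe template the paper builds on, shown on the probability simplex with exact gradients. The paper's contribution is plugging SARAH-style variance-reduced stochastic gradients into this template, which the sketch below does not do.

```python
import numpy as np

# Classic Frank-Wolfe: each step solves a linear problem over the constraint
# set (on the simplex, just pick the best vertex) and moves toward that
# vertex, so iterates stay feasible with no projection step.
def frank_wolfe(grad, x0, steps=200):
    x = x0.copy()
    for k in range(steps):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0          # argmin <g, s> over the simplex
        x += 2.0 / (k + 2) * (s - x)   # standard step size 2/(k+2)
    return x

# Example: minimize ||x - b||^2 over the probability simplex.
b = np.array([0.1, 0.7, 0.4])
x_star = frank_wolfe(lambda x: 2 * (x - b), np.ones(3) / 3)
print(x_star)
```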
arXiv Detail & Related papers (2023-04-23T20:05:09Z)
- Symmetric Tensor Networks for Generative Modeling and Constrained Combinatorial Optimization [72.41480594026815]
Constrained optimization problems abound in industry, from portfolio optimization to logistics.
One of the major roadblocks in solving these problems is the presence of non-trivial hard constraints which limit the valid search space.
In this work, we encode arbitrary integer-valued equality constraints of the form Ax=b directly into U(1) symmetric tensor networks (TNs) and leverage their applicability as quantum-inspired generative models.
arXiv Detail & Related papers (2022-11-16T18:59:54Z)
- Distributionally Robust Bayesian Optimization with $\varphi$-divergences [45.48814080654241]
We consider robustness against data-shift in $\varphi$-divergences, which subsumes many popular choices, such as the Total Variation and the extant Kullback-Leibler divergence.
We show that the DRO-BO problem in this setting is equivalent to a finite-dimensional optimization problem which, even in the continuous context setting, can be easily implemented with provable sublinear regret bounds.
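In standard DRO notation (paraphrased, not quoted from the paper), robustness to data shift within a $\varphi$-divergence ball of radius $\epsilon$ around the reference context distribution $P$ means optimizing

$$\max_{x \in \mathcal{X}} \;\; \inf_{Q:\, D_{\varphi}(Q \,\|\, P) \le \epsilon} \; \mathbb{E}_{c \sim Q}\big[f(x, c)\big],$$

and the equivalence claimed above is that the inner infimum reduces to a finite-dimensional problem, so the outer maximization can be handled with ordinary BO machinery while retaining sublinear regret.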
arXiv Detail & Related papers (2022-03-04T04:34:52Z)
- Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z)
- Nearly Dimension-Independent Sparse Linear Bandit over Small Action Spaces via Best Subset Selection [71.9765117768556]
We consider the contextual bandit problem under the high-dimensional linear model.
This setting finds essential applications such as personalized recommendation, online advertisement, and personalized medicine.
We propose doubly growing epochs and estimate the parameter using the best subset selection method.
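A small sketch of the best-subset-selection step named above, under the assumption of exhaustive enumeration (tractable only for small dimension and sparsity, which is the small-action-space regime): every support of size `s` is fit by least squares and the best residual wins. The bandit algorithm's epoch scheduling is not reproduced here.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)

# Best subset selection for a sparse linear model: enumerate supports of
# size s, fit ordinary least squares on each, keep the best fit.
def best_subset(X, y, s):
    d = X.shape[1]
    best = (np.inf, None, None)
    for S in combinations(range(d), s):
        XS = X[:, list(S)]
        w, *_ = np.linalg.lstsq(XS, y, rcond=None)
        rss = np.sum((y - XS @ w) ** 2)
        if rss < best[0]:
            best = (rss, S, w)
    return best[1], best[2]

# Example: 8 features, only 2 active; the support (1, 5) is recovered.
X = rng.standard_normal((60, 8))
w_true = np.zeros(8)
w_true[[1, 5]] = [2.0, -1.0]
y = X @ w_true + 0.1 * rng.standard_normal(60)
print(best_subset(X, y, 2))
```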
arXiv Detail & Related papers (2020-09-04T04:10:39Z)
- Projection & Probability-Driven Black-Box Attack [205.9923346080908]
Existing black-box attacks suffer from the need for excessive queries in the high-dimensional space.
We propose Projection & Probability-driven Black-box Attack (PPBA) to tackle this problem.
Our method requires at most 24% fewer queries with a higher attack success rate compared with state-of-the-art approaches.
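A hedged sketch of the "projection" half of the idea: rather than searching over all pixels, draw the perturbation in a small low-frequency subspace (here via an inverse DCT, one common choice for such sensing matrices) and map it back to image space, shrinking the effective query dimension. The probability-driven part, which adapts sampling from query history, is omitted; the array sizes and DCT choice are illustrative assumptions, not taken from the summary above.

```python
import numpy as np
from scipy.fft import idctn

rng = np.random.default_rng(4)

H = W = 32
k = 8  # keep only the k x k lowest-frequency DCT coefficients

# Sample in the low-dimensional frequency subspace...
coeffs = np.zeros((H, W))
coeffs[:k, :k] = rng.standard_normal((k, k))

# ...and map back to image space: a smooth, low-frequency perturbation.
delta = idctn(coeffs, norm="ortho")  # shape (H, W)
print(delta.shape, float(np.abs(delta).max()))
```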
arXiv Detail & Related papers (2020-05-08T03:37:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.