Parallel Predictive Entropy Search for Multi-objective Bayesian
Optimization with Constraints
- URL: http://arxiv.org/abs/2004.00601v2
- Date: Thu, 1 Jul 2021 14:29:30 GMT
- Title: Parallel Predictive Entropy Search for Multi-objective Bayesian
Optimization with Constraints
- Authors: Eduardo C. Garrido-Merchán, Daniel Hernández-Lobato
- Abstract summary: Real-world problems often involve the optimization of several objectives under multiple constraints.
This article introduces PPESMOC, an information-based batch method for the simultaneous optimization of multiple black-box functions under several constraints.
Iteratively, PPESMOC selects a batch of input locations at which to evaluate the black-boxes.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-world problems often involve the optimization of several objectives
under multiple constraints. An example is the hyper-parameter tuning of
machine learning algorithms: one may wish to minimize both the estimated
generalization error of a deep neural network and its prediction time, under
the constraint that the network can be implemented on a chip whose area is
below some size. Here, both the objectives and the constraint are black
boxes, i.e.,
functions whose analytical expressions are unknown and are expensive to
evaluate. Bayesian optimization (BO) methodologies have given state-of-the-art
results for the optimization of black-boxes. Nevertheless, most BO methods are
sequential and evaluate the objectives and the constraints at just one input
location, iteratively. Sometimes, however, we may have resources to evaluate
several configurations in parallel. However, no parallel BO method has
been proposed to deal with the optimization of multiple objectives under
several constraints. If the expensive evaluations can be carried out in
parallel (as when a cluster of computers is available), sequential evaluations
result in a waste of resources. This article introduces PPESMOC, Parallel
Predictive Entropy Search for Multi-objective Bayesian Optimization with
Constraints, an information-based batch method for the simultaneous
optimization of multiple expensive-to-evaluate black-box functions under the
presence of several constraints. Iteratively, PPESMOC selects a batch of input
locations at which to evaluate the black-boxes so as to maximally reduce the
entropy of the Pareto set of the optimization problem. We present empirical
evidence in the form of synthetic, benchmark and real-world experiments that
illustrate the effectiveness of PPESMOC.
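The batch loop that PPESMOC instantiates can be sketched in a few lines. The following is a minimal, self-contained illustration of the generic structure only (candidates, batch selection, parallel evaluation): the objective, the distance-based acquisition score, and the greedy batch construction are all toy stand-ins, whereas PPESMOC scores the whole batch jointly by the expected reduction in the entropy of the Pareto set.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def expensive_objective(x):
    # Stand-in for one expensive black-box evaluation (e.g. training a model).
    return (x - 0.3) ** 2

def acquisition(x, evaluated):
    # Toy exploration score: distance to the nearest already-chosen point.
    # PPESMOC instead scores a whole batch by the expected entropy reduction
    # of the Pareto set of the constrained multi-objective problem.
    return min((abs(x - xe) for xe in evaluated), default=1.0)

def select_batch(evaluated, batch_size, n_candidates=200, rng=random):
    # Greedy batch construction: pick the best candidate, pretend it was
    # evaluated, and repeat, so the batch spreads out. (PPESMOC optimizes
    # the batch jointly rather than greedily.)
    pool = list(evaluated)
    candidates = [rng.random() for _ in range(n_candidates)]
    batch = []
    for _ in range(batch_size):
        best = max(candidates, key=lambda x: acquisition(x, pool))
        batch.append(best)
        pool.append(best)
        candidates.remove(best)
    return batch

def parallel_bo(n_iters=5, batch_size=4, seed=0):
    rng = random.Random(seed)
    xs, ys = [], []
    with ThreadPoolExecutor(max_workers=batch_size) as workers:
        for _ in range(n_iters):
            batch = select_batch(xs, batch_size, rng=rng)
            # Evaluate the whole batch in parallel, as on a cluster.
            ys.extend(workers.map(expensive_objective, batch))
            xs.extend(batch)
    return xs, ys
```

The point of the batch setting is visible in `workers.map`: all `batch_size` evaluations run concurrently, so wall-clock cost per iteration is that of one evaluation rather than `batch_size` of them.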
Related papers
- Parallel Bayesian Optimization Using Satisficing Thompson Sampling for
Time-Sensitive Black-Box Optimization [0.0]
We propose satisficing Thompson sampling-based parallel BO approaches, including synchronous and asynchronous versions.
We shift the target from an optimal solution to a satisficing solution that is easier to learn.
The effectiveness of the proposed methods is demonstrated on a fast-charging design problem of Lithium-ion batteries.
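The satisficing idea above can be illustrated on a toy problem. The sketch below applies it to a Bernoulli multi-armed bandit (an assumption for illustration only; the paper targets synchronous and asynchronous parallel BO): instead of always playing the argmax of the posterior samples, the rule accepts any arm whose sample clears a satisficing threshold.

```python
import random

def satisficing_ts_step(successes, failures, threshold, rng):
    # One posterior sample per arm (Beta posterior for Bernoulli rewards).
    samples = [rng.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    # Satisficing rule: any arm whose sample clears the threshold is good
    # enough; only fall back to the argmax when none does.
    good = [i for i, v in enumerate(samples) if v >= threshold]
    if good:
        return rng.choice(good)
    return max(range(len(samples)), key=samples.__getitem__)

def run(true_means, threshold=0.6, n_rounds=500, seed=0):
    rng = random.Random(seed)
    k = len(true_means)
    succ, fail = [0] * k, [0] * k
    for _ in range(n_rounds):
        arm = satisficing_ts_step(succ, fail, threshold, rng)
        if rng.random() < true_means[arm]:
            succ[arm] += 1
        else:
            fail[arm] += 1
    return succ, fail
```

Because a satisficing arm only needs to clear the threshold, the sampler can commit to it without resolving which arm is exactly optimal, which is the sense in which a satisficing solution "is easier to learn".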
arXiv Detail & Related papers (2023-10-19T07:03:51Z) - Optimizing Solution-Samplers for Combinatorial Problems: The Landscape
of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z) - Polynomial-Model-Based Optimization for Blackbox Objectives [0.0]
Black-box optimization seeks to find optimal parameters for systems such that a pre-defined objective function is minimized.
PMBO is a novel black-box optimization method that finds the minimum by fitting a polynomial surrogate to the objective function.
PMBO is benchmarked against other state-of-the-art algorithms for a given set of artificial, analytical functions.
arXiv Detail & Related papers (2023-09-01T14:11:03Z) - Scalable Bayesian optimization with high-dimensional outputs using
randomized prior networks [3.0468934705223774]
We propose a deep learning framework for BO and sequential decision making based on bootstrapped ensembles of neural architectures with randomized priors.
We show that the proposed framework can approximate functional relationships between design variables and quantities of interest, even in cases where the latter take values in high-dimensional vector spaces or even infinite-dimensional function spaces.
We test the proposed framework against state-of-the-art methods for BO and demonstrate superior performance across several challenging tasks with high-dimensional outputs.
arXiv Detail & Related papers (2023-02-14T18:55:21Z) - Symmetric Tensor Networks for Generative Modeling and Constrained
Combinatorial Optimization [72.41480594026815]
Constrained optimization problems abound in industry, from portfolio optimization to logistics.
One of the major roadblocks in solving these problems is the presence of non-trivial hard constraints which limit the valid search space.
In this work, we encode arbitrary integer-valued equality constraints of the form Ax=b directly into U(1) symmetric tensor networks (TNs) and leverage their applicability as quantum-inspired generative models.
arXiv Detail & Related papers (2022-11-16T18:59:54Z) - Sample-Then-Optimize Batch Neural Thompson Sampling [50.800944138278474]
We introduce two algorithms for black-box optimization based on the Thompson sampling (TS) policy.
To choose an input query, we only need to train an NN and then choose the query by maximizing the trained NN.
Our algorithms sidestep the need to invert the large parameter matrix yet still preserve the validity of the TS policy.
arXiv Detail & Related papers (2022-10-13T09:01:58Z) - Tree ensemble kernels for Bayesian optimization with known constraints
over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z) - Constrained multi-objective optimization of process design parameters in
settings with scarce data: an application to adhesive bonding [48.7576911714538]
Finding the optimal process parameters for an adhesive bonding process is challenging.
In such data-scarce settings, traditional evolutionary approaches (such as genetic algorithms) are ill-suited to solve the problem.
In this research, we successfully applied specific machine learning techniques to emulate the objective and constraint functions.
arXiv Detail & Related papers (2021-12-16T10:14:39Z) - Learning How to Optimize Black-Box Functions With Extreme Limits on the
Number of Function Evaluations [3.0969191504482243]
We consider black-box optimization in which only an extremely limited number of function evaluations, on the order of around 100, are affordable.
We propose an original method that uses established approaches to propose a set of points for each batch and then down-selects from these candidate points to the number of trials that can be run in parallel.
We achieve an average reduction of 50% of normalized cost, which is a highly significant improvement in performance.
arXiv Detail & Related papers (2021-03-18T15:30:15Z) - Max-value Entropy Search for Multi-Objective Bayesian Optimization with
Constraints [44.25245545568633]
In aviation power system design applications, we need to find the designs that trade-off total energy and the mass while satisfying specific thresholds for motor temperature and voltage of cells.
We propose a new approach, referred to as Max-value Entropy Search for Multi-objective Optimization with Constraints (MESMOC), to solve this problem.
MESMOC employs an output-space entropy based acquisition function to efficiently select the sequence of inputs for evaluation to uncover high-quality Pareto-set solutions.
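MESMOC, like PPESMOC above, searches for the feasible Pareto set. As a minimal, self-contained illustration of that notion (minimization convention, with a hypothetical feasibility predicate standing in for the black-box constraints), evaluated points can be filtered as follows:

```python
def dominates(a, b):
    # a dominates b if a is no worse in every objective
    # and strictly better in at least one (minimization).
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def constrained_pareto_front(points, feasible):
    # Keep only feasible points that no other feasible point dominates.
    feas = [p for p in points if feasible(p)]
    return [p for p in feas
            if not any(dominates(q, p) for q in feas if q != p)]
```

For example, with objective vectors `[(1, 5), (2, 2), (3, 1), (4, 4), (2, 3)]` and the hypothetical constraint `p[0] + p[1] <= 6`, the point `(4, 4)` is infeasible and `(2, 3)` is dominated by `(2, 2)`, leaving the front `[(1, 5), (2, 2), (3, 1)]`.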
arXiv Detail & Related papers (2020-09-01T05:00:01Z) - MOPS-Net: A Matrix Optimization-driven Network for Task-Oriented 3D Point
Cloud Downsampling [86.42733428762513]
MOPS-Net is a novel, interpretable, matrix-optimization-driven deep learning method for task-oriented 3D point cloud downsampling.
We show that MOPS-Net can achieve favorable performance against state-of-the-art deep learning-based methods over various tasks.
arXiv Detail & Related papers (2020-05-01T14:01:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.