A penalisation method for batch multi-objective Bayesian optimisation
with application in heat exchanger design
- URL: http://arxiv.org/abs/2206.13326v1
- Date: Mon, 27 Jun 2022 14:16:54 GMT
- Title: A penalisation method for batch multi-objective Bayesian optimisation
with application in heat exchanger design
- Authors: Andrei Paleyes, Henry B. Moss, Victor Picheny, Piotr Zulawski, Felix
Newman
- Abstract summary: We present a batch acquisition function that enables multi-objective Bayesian optimisation methods to efficiently exploit parallel processing resources.
We show that by encouraging batch diversity through penalising evaluations with similar predicted objective values, HIPPO is able to cheaply build large batches of informative points.
- Score: 3.867356784754811
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present HIghly Parallelisable Pareto Optimisation (HIPPO) -- a batch
acquisition function that enables multi-objective Bayesian optimisation methods
to efficiently exploit parallel processing resources. Multi-Objective Bayesian
Optimisation (MOBO) is a very efficient tool for tackling expensive black-box
problems. However, most MOBO algorithms are designed as purely sequential
strategies, and existing batch approaches are prohibitively expensive for all
but the smallest of batch sizes. We show that by encouraging batch diversity
through penalising evaluations with similar predicted objective values, HIPPO
is able to cheaply build large batches of informative points. Our extensive
experimental validation demonstrates that HIPPO is at least as efficient as
existing alternatives whilst incurring an order of magnitude lower
computational overhead and scaling easily to batch sizes considerably higher
than currently supported in the literature. Additionally, we demonstrate the
application of HIPPO to a challenging heat exchanger design problem, stressing
the real-world utility of our highly parallelisable approach to MOBO.
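To make the penalisation idea concrete, the following is a minimal sketch, not the authors' implementation, of greedy batch construction with objective-space penalties: each point already in the batch down-weights the acquisition value of candidates whose predicted objective values are similar. The names `base_acquisition`, `predict`, the Gaussian-shaped penaliser, and its `lengthscale` are all illustrative assumptions.

```python
import numpy as np

def gaussian_penalty(pred, anchor, lengthscale=1.0):
    """Multiplier in [0, 1): near 0 when the predicted objective vector
    `pred` is close to an already-selected point's prediction `anchor`,
    near 1 when it is far away."""
    sq_dist = np.sum((pred - anchor) ** 2) / lengthscale ** 2
    return 1.0 - np.exp(-0.5 * sq_dist)

def build_batch(candidates, base_acquisition, predict, batch_size, lengthscale=1.0):
    """Greedily assemble a batch. Assumes a non-negative base acquisition;
    penalties are applied in *objective space*, so diversity is encouraged
    among predicted outcomes rather than inputs."""
    batch, batch_preds = [], []
    for _ in range(batch_size):
        best_x, best_score = None, -np.inf
        for x in candidates:
            score = base_acquisition(x)
            pred = predict(x)  # posterior mean of the objective vector at x
            for anchor in batch_preds:
                # A previously selected point with a similar prediction
                # drives the score towards zero, discouraging duplicates.
                score *= gaussian_penalty(pred, anchor, lengthscale)
            if score > best_score:
                best_x, best_score = x, score
        batch.append(best_x)
        batch_preds.append(predict(best_x))
    return batch
```

Because each penalty needs only the model's predicted objective values at already-selected points, growing the batch adds negligible cost on top of a single sequential acquisition step, which is consistent with the scalability claim in the abstract.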
Related papers
- Optima: Optimizing Effectiveness and Efficiency for LLM-Based Multi-Agent System [75.25394449773052]
Large Language Model (LLM) based multi-agent systems (MAS) show remarkable potential in collaborative problem-solving.
Yet they still face critical challenges: low communication efficiency, poor scalability, and a lack of effective parameter-updating optimization methods.
We present Optima, a novel framework that addresses these issues by significantly enhancing both communication efficiency and task effectiveness.
arXiv Detail & Related papers (2024-10-10T17:00:06Z)
- Training Greedy Policy for Proposal Batch Selection in Expensive Multi-Objective Combinatorial Optimization [52.80408805368928]
We introduce a novel greedy-style subset selection algorithm for batch acquisition.
Our experiments on red fluorescent proteins show that the proposed method matches baseline performance with 1.69x fewer queries.
arXiv Detail & Related papers (2024-06-21T05:57:08Z)
- Cost-Sensitive Multi-Fidelity Bayesian Optimization with Transfer of Learning Curve Extrapolation [55.75188191403343]
We introduce a utility function, predefined by each user, that describes the trade-off between the cost and performance of BO.
We validate our algorithm on various learning curve (LC) datasets and find that it outperforms all previous multi-fidelity BO and transfer-BO baselines we consider.
arXiv Detail & Related papers (2024-05-28T07:38:39Z)
- Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model built on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO).
Compared to the classic GP-BO method, PoPBO has lower costs and better robustness to noise, which we verify through extensive experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z)
- Adaptive Batch Sizes for Active Learning: A Probabilistic Numerics Approach [28.815294991377645]
Active learning parallelization is widely used, but typically relies on fixing the batch size throughout experimentation.
This fixed approach is inefficient because of a dynamic trade-off between cost and speed.
We propose a novel probabilistic numerics framework that adaptively changes batch sizes.
arXiv Detail & Related papers (2023-06-09T12:17:18Z)
- Large-Batch, Iteration-Efficient Neural Bayesian Design Optimization [37.339567743948955]
We present a novel Bayesian optimization framework specifically tailored to address the limitations of BO.
Our key contribution is a highly scalable, sample-based acquisition function that performs a non-dominated sorting of objectives.
We show that our acquisition function in combination with different Bayesian neural network surrogates is effective in data-intensive environments with a minimal number of iterations.
arXiv Detail & Related papers (2023-06-01T19:10:57Z)
- Advancing Model Pruning via Bi-level Optimization [89.88761425199598]
Iterative magnitude pruning (IMP) is the predominant pruning method for successfully finding 'winning tickets'.
One-shot pruning methods have been developed, but these schemes are usually unable to find winning tickets as good as those from IMP.
We propose a bi-level optimization (BLO)-oriented pruning method, termed BiP, and show that the underlying formulation is a special class of BLO problems with a bi-linear problem structure.
arXiv Detail & Related papers (2022-10-08T19:19:29Z)
- Multi-objective hyperparameter optimization with performance uncertainty [62.997667081978825]
This paper presents results on multi-objective hyperparameter optimization of machine learning algorithms under uncertainty in their evaluation.
We combine the sampling strategy of Tree-structured Parzen Estimators (TPE) with the metamodel obtained after training a Gaussian Process Regression (GPR) with heterogeneous noise.
Experimental results on three analytical test functions and three ML problems show the improvement over multi-objective TPE and GPR.
arXiv Detail & Related papers (2022-09-09T14:58:43Z)
- Enhancing Explainability of Hyperparameter Optimization via Bayesian Algorithm Execution [13.037647287689438]
We study the combination of HPO with interpretable machine learning (IML) methods such as partial dependence plots.
We propose a modified HPO method which efficiently searches for optimum global predictive performance.
Our method returns more reliable explanations of the underlying black-box without a loss of optimization performance.
arXiv Detail & Related papers (2022-06-11T07:12:04Z)
- Batch Sequential Adaptive Designs for Global Optimization [5.825138898746968]
Efficient global optimization (EGO) is one of the most popular sequential adaptive design (SAD) methods for expensive black-box optimization problems.
For multi-point EGO methods, heavy computation and point clustering are the main obstacles.
In this work, a novel batch SAD method, named "accelerated EGO", is proposed using a refined sampling/importance resampling (SIR) method.
The efficiency of the proposed SAD method is validated on nine classic test functions with dimensions ranging from 2 to 12.
arXiv Detail & Related papers (2020-10-21T01:11:35Z)
- Differentiable Expected Hypervolume Improvement for Parallel Multi-Objective Bayesian Optimization [11.956059322407437]
We leverage recent advances in programming models and hardware acceleration for multi-objective BO using Expected Hypervolume Improvement (EHVI).
We derive a novel formulation of q-Expected Hypervolume Improvement (qEHVI), an acquisition function that extends EHVI to the parallel, constrained evaluation setting.
Our empirical evaluation demonstrates that qEHVI is computationally tractable in many practical scenarios and outperforms state-of-the-art multi-objective BO algorithms at a fraction of their wall time.
arXiv Detail & Related papers (2020-06-09T06:57:47Z)
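Several entries above, qEHVI in particular, revolve around hypervolume improvement. As a point of reference, here is a minimal sketch for two minimisation objectives; the dominance check, the rectangle-sweep area computation, and all function names are illustrative assumptions, and qEHVI's actual contribution (an exact, differentiable Monte Carlo estimator of the batch expectation of this quantity) is not reproduced here.

```python
import numpy as np

def pareto_front(points):
    """Non-dominated subset for minimisation: drop any point that is
    at least as bad in every objective and strictly worse in one."""
    pts = np.asarray(points, dtype=float)
    keep = [
        p for i, p in enumerate(pts)
        if not any(np.all(q <= p) and np.any(q < p)
                   for j, q in enumerate(pts) if j != i)
    ]
    return np.array(keep)

def hypervolume_2d(front, ref):
    """Area dominated by a 2-objective front and bounded by the reference
    point `ref`: sort by the first objective, then sweep, adding the
    rectangle each point contributes beyond the previous ones."""
    if len(front) == 0:
        return 0.0
    front = front[np.argsort(front[:, 0])]
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in front:
        hv += max(ref[0] - f1, 0.0) * max(prev_f2 - f2, 0.0)
        prev_f2 = min(prev_f2, f2)
    return hv

def hypervolume_improvement(candidate, observed, ref):
    """HVI(x) = HV(front of observed plus x) - HV(front of observed)."""
    base = hypervolume_2d(pareto_front(observed), ref)
    extended = hypervolume_2d(pareto_front(list(observed) + [candidate]), ref)
    return extended - base
```

In expected-hypervolume methods the candidate's objective values are unknown, so this improvement is averaged over the surrogate posterior; qEHVI makes that expectation exact and differentiable jointly for q candidates, which is what allows whole batches to be optimised by gradient methods.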
This list is automatically generated from the titles and abstracts of the papers on this site.