Information-Theoretic Multi-Objective Bayesian Optimization with
Continuous Approximations
- URL: http://arxiv.org/abs/2009.05700v3
- Date: Mon, 23 Nov 2020 01:46:09 GMT
- Title: Information-Theoretic Multi-Objective Bayesian Optimization with
Continuous Approximations
- Authors: Syrine Belakaria, Aryan Deshwal, Janardhan Rao Doppa
- Abstract summary: We propose information-Theoretic Multi-Objective Bayesian Optimization with Continuous Approximations (iMOCA) to solve this problem.
Our experiments on diverse synthetic and real-world benchmarks show that iMOCA significantly improves over existing single-fidelity methods.
- Score: 44.25245545568633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many real-world applications involve black-box optimization of multiple
objectives using continuous function approximations that trade off accuracy and
resource cost of evaluation. For example, in rocket launching research, we need
to find designs that trade off return-time and angular distance using
continuous-fidelity simulators (e.g., varying a tolerance parameter to trade off
simulation time and accuracy) for design evaluations. The goal is to
approximate the optimal Pareto set by minimizing the cost of evaluations. In
this paper, we propose a novel approach referred to as information-Theoretic
Multi-Objective Bayesian Optimization with Continuous Approximations (iMOCA)
to solve this problem. The key idea is to select the sequence of input and
function approximations for multiple objectives which maximize the information
gain per unit cost for the optimal Pareto front. Our experiments on diverse
synthetic and real-world benchmarks show that iMOCA significantly improves over
existing single-fidelity methods.
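As a rough sketch of the selection criterion described in the abstract (the notation here is ours, not necessarily the paper's): at each iteration, an iMOCA-style method would pick the input $x$ and continuous-fidelity vector $z$ that maximize the information gain about the optimal Pareto front per unit evaluation cost, e.g.

$$ (x_t, z_t) = \arg\max_{x,\, z} \; \frac{I\big(\{y_{x,z}\};\, \mathcal{F}^{*} \mid D_{t-1}\big)}{C(x, z)}, $$

where $\mathcal{F}^{*}$ is the (unknown) optimal Pareto front, $D_{t-1}$ the evaluations collected so far, $y_{x,z}$ the approximate objective observations obtained at fidelity $z$, and $C(x, z)$ the cost of that evaluation.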
Related papers
- MAP: Low-compute Model Merging with Amortized Pareto Fronts via Quadratic Approximation [80.47072100963017]
We introduce a novel and low-compute algorithm, Model Merging with Amortized Pareto Front (MAP).
MAP efficiently identifies a set of scaling coefficients for merging multiple models, reflecting the trade-offs involved.
We also introduce Bayesian MAP for scenarios with a relatively low number of tasks and Nested MAP for situations with a high number of tasks, further reducing the computational cost of evaluation.
arXiv Detail & Related papers (2024-06-11T17:55:25Z)
- Cost-Sensitive Multi-Fidelity Bayesian Optimization with Transfer of Learning Curve Extrapolation [55.75188191403343]
We introduce a utility function, predefined by each user, which describes the trade-off between cost and performance of BO.
We validate our algorithm on various learning-curve (LC) datasets and find that it outperforms all the previous multi-fidelity BO and transfer-BO baselines we consider.
arXiv Detail & Related papers (2024-05-28T07:38:39Z)
- Interactive Hyperparameter Optimization in Multi-Objective Problems via
Preference Learning [65.51668094117802]
We propose a human-centered interactive HPO approach tailored towards multi-objective machine learning (ML).
Instead of relying on the user guessing the most suitable indicator for their needs, our approach automatically learns an appropriate indicator.
arXiv Detail & Related papers (2023-09-07T09:22:05Z)
- Knowledge Gradient for Multi-Objective Bayesian Optimization with Decoupled Evaluations [0.0]
In some cases, it is possible to evaluate the objectives separately, and a different latency or evaluation cost can be associated with each objective.
We propose a scalarization-based knowledge gradient acquisition function which accounts for the different evaluation costs of the objectives.
arXiv Detail & Related papers (2023-02-02T18:33:34Z)
- $\{\text{PF}\}^2\text{ES}$: Parallel Feasible Pareto Frontier Entropy
Search for Multi-Objective Bayesian Optimization Under Unknown Constraints [4.672142224503371]
We present a novel information-theoretic acquisition function for multi-objective Bayesian optimization.
$\{\text{PF}\}^2\text{ES}$ provides a low-cost and accurate estimate of the mutual information for the parallel setting.
We benchmark $\{\text{PF}\}^2\text{ES}$ across synthetic and real-life problems.
arXiv Detail & Related papers (2022-04-11T21:06:23Z)
- Multi-Fidelity Multi-Objective Bayesian Optimization: An Output Space
Entropy Search Approach [44.25245545568633]
We study the novel problem of black-box optimization of multiple objectives via multi-fidelity function evaluations.
Our experiments on several synthetic and real-world benchmark problems show that MF-OSEMO, with both approximations, significantly improves over the state-of-the-art single-fidelity algorithms.
arXiv Detail & Related papers (2020-11-02T06:59:04Z)
- Optimal Bayesian experimental design for subsurface flow problems [77.34726150561087]
We propose a novel approach for the development of a polynomial chaos expansion (PCE) surrogate model for the design utility function.
This novel technique enables the derivation of a reasonable quality response surface for the targeted objective function with a computational budget comparable to several single-point evaluations.
arXiv Detail & Related papers (2020-08-10T09:42:59Z)
- Resource Aware Multifidelity Active Learning for Efficient Optimization [0.8717253904965373]
This paper introduces the Resource Aware Active Learning (RAAL) strategy to accelerate the optimization of black box functions.
The RAAL strategy optimally seeds multiple points at each iteration, allowing for a major speed-up of the optimization task.
arXiv Detail & Related papers (2020-07-09T10:01:32Z)
- Multi-Fidelity Bayesian Optimization via Deep Neural Networks [19.699020509495437]
In many applications, the objective function can be evaluated at multiple fidelities to enable a trade-off between the cost and accuracy.
We propose Deep Neural Network Multi-Fidelity Bayesian Optimization (DNN-MFBO) that can flexibly capture all kinds of complicated relationships between the fidelities.
We show the advantages of our method in both synthetic benchmark datasets and real-world applications in engineering design.
arXiv Detail & Related papers (2020-07-06T23:28:40Z)