Multi-fidelity Bayesian Optimization in Engineering Design
- URL: http://arxiv.org/abs/2311.13050v1
- Date: Tue, 21 Nov 2023 23:22:11 GMT
- Title: Multi-fidelity Bayesian Optimization in Engineering Design
- Authors: Bach Do and Ruda Zhang
- Abstract summary: Multi-fidelity Bayesian optimization (MF BO) resides at the intersection of multi-fidelity optimization (MFO) and Bayesian optimization (BO).
MF BO has found a niche in solving expensive engineering design optimization problems.
The paper surveys recent developments in two essential ingredients of MF BO: GP-based MF surrogates and acquisition functions.
- Score: 3.9160947065896803
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Residing at the intersection of multi-fidelity optimization (MFO) and Bayesian
optimization (BO), MF BO has found a niche in solving expensive engineering
design optimization problems, thanks to its advantages in incorporating
physical and mathematical understandings of the problems, saving resources,
addressing the exploitation-exploration trade-off, considering uncertainty, and
supporting parallel computing. The increasing number of works dedicated to MF
BO suggests the need for a comprehensive review of this advanced optimization
technique. In this paper, we survey recent developments of two essential
ingredients of MF BO: Gaussian process (GP) based MF surrogates and acquisition
functions. We first categorize the existing MF modeling methods and MFO
strategies to locate MF BO in a large family of surrogate-based optimization
and MFO algorithms. We then exploit the common properties shared between the
methods from each ingredient of MF BO to describe important GP-based MF
surrogate models and review various acquisition functions. By doing so, we
expect to provide a structured understanding of MF BO. Finally, we attempt to
reveal important aspects that require further research for applications of MF
BO in solving intricate yet important design optimization problems, including
constrained optimization, high-dimensional optimization, optimization under
uncertainty, and multi-objective optimization.
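To make the two ingredients named in the abstract concrete, the following is a minimal, self-contained sketch of a multi-fidelity BO loop: a GP surrogate per fidelity, an expected-improvement (EI) acquisition function, and a simple cost-aware fidelity choice. This is an illustrative toy, not any surveyed method's actual implementation; the objective `f_hi`, its cheap biased approximation `f_lo`, the kernel length scale, costs, and the screening threshold `tau` are all invented for the example, and real MF BO typically couples the fidelities through a single surrogate (e.g. co-kriging) rather than fitting independent GPs.

```python
import math
import numpy as np

def rbf(A, B, ell=0.25):
    """Squared-exponential kernel on 1-D inputs (unit signal variance)."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """Posterior mean and standard deviation of a zero-mean GP at Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    Ks = rbf(X, Xs)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v ** 2, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sd, best):
    """EI acquisition for minimization, given the incumbent `best`."""
    z = (best - mu) / sd
    Phi = 0.5 * (1.0 + np.array([math.erf(t / math.sqrt(2.0)) for t in z]))
    phi = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return (best - mu) * Phi + sd * phi

def f_hi(x):  # "expensive" objective to minimize (toy stand-in)
    return np.sin(3 * x) + x ** 2

def f_lo(x):  # cheap, biased low-fidelity approximation (toy stand-in)
    return f_hi(x) + 0.3 * np.cos(5 * x)

def mf_bo(budget=40.0, tau=0.2, seed=0):
    """Toy MF BO on [-1, 1]: EI proposes a point from the high-fidelity GP;
    the cheap fidelity screens it first whenever the low-fidelity GP is
    still uncertain there, so the cost budget is spent mostly on lo queries."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(-1.0, 1.0, 201)
    cost = {"lo": 1.0, "hi": 5.0}
    f = {"lo": f_lo, "hi": f_hi}
    X = {"lo": list(rng.uniform(-1, 1, 3)), "hi": list(rng.uniform(-1, 1, 2))}
    Y = {m: [float(f[m](x)) for x in X[m]] for m in X}
    spent = sum(cost[m] * len(X[m]) for m in X)
    while spent < budget:
        mu, sd = gp_posterior(np.array(X["hi"]), np.array(Y["hi"]), grid)
        ei = expected_improvement(mu, sd, min(Y["hi"]))
        xs = float(grid[int(np.argmax(ei))])
        # fidelity choice: screen with the cheap model until it is confident at xs
        _, sd_lo = gp_posterior(np.array(X["lo"]), np.array(Y["lo"]), np.array([xs]))
        m = "lo" if sd_lo[0] > tau else "hi"
        X[m].append(xs)
        Y[m].append(float(f[m](xs)))
        spent += cost[m]
    return min(Y["hi"]), spent

best, spent = mf_bo()
print(f"best high-fidelity value {best:.3f} at total cost {spent:.1f}")
```

The cost-aware rule here (cheap screening before committing a high-fidelity evaluation) is only one of many fidelity-management strategies; the surveyed acquisition functions instead fold the fidelity choice directly into the acquisition, e.g. by maximizing expected information gain per unit cost.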
Related papers
- Enhanced Bayesian Optimization via Preferential Modeling of Abstract
Properties [49.351577714596544]
We propose a human-AI collaborative Bayesian framework to incorporate expert preferences about unmeasured abstract properties into surrogate modeling.
We provide an efficient strategy that can also handle any incorrect/misleading expert bias in preferential judgments.
arXiv Detail & Related papers (2024-02-27T09:23:13Z) - Multi-Fidelity Methods for Optimization: A Survey [12.659229934111975]
Multi-fidelity optimization (MFO) balances high-fidelity accuracy with computational efficiency through a hierarchical fidelity approach.
We delve deep into the foundational principles and methodologies of MFO, focusing on three core components -- multi-fidelity surrogate models, fidelity management strategies, and optimization techniques.
This survey highlights the diverse applications of MFO across several key domains, including machine learning, engineering design optimization, and scientific discovery.
arXiv Detail & Related papers (2024-02-15T00:52:34Z) - End-to-End Learning for Fair Multiobjective Optimization Under
Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
arXiv Detail & Related papers (2024-02-12T16:33:35Z) - Physics-Aware Multifidelity Bayesian Optimization: a Generalized Formulation [0.0]
Multifidelity Bayesian methods (MFBO) make it possible to include costly high-fidelity responses for only a sub-selection of queries.
State-of-the-art methods rely on a purely data-driven search and do not include explicit information about the physical context.
This paper acknowledges that prior knowledge about the physical domains of engineering problems can be leveraged to accelerate these data-driven searches.
arXiv Detail & Related papers (2023-12-10T09:11:53Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for conditional stochastic optimization in the federated learning setting.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z) - Multi-Fidelity Bayesian Optimization with Unreliable Information Sources [12.509709549771385]
We propose rMFBO (robust MFBO) to make GP-based MFBO schemes robust to the addition of unreliable information sources.
We demonstrate the effectiveness of the proposed methodology on a number of numerical benchmarks.
We expect rMFBO to be particularly useful to reliably include human experts with varying knowledge within BO processes.
arXiv Detail & Related papers (2022-10-25T11:47:33Z) - Sequential Information Design: Markov Persuasion Process and Its
Efficient Reinforcement Learning [156.5667417159582]
This paper proposes a novel model of sequential information design, namely the Markov persuasion process (MPP).
Planning in MPPs faces the unique challenge in finding a signaling policy that is simultaneously persuasive to the myopic receivers and inducing the optimal long-term cumulative utilities of the sender.
We design a provably efficient no-regret learning algorithm, the Optimism-Pessimism Principle for Persuasion Process (OP4), which features a novel combination of both optimism and pessimism principles.
arXiv Detail & Related papers (2022-02-22T05:41:43Z) - Multi-Fidelity Multi-Objective Bayesian Optimization: An Output Space
Entropy Search Approach [44.25245545568633]
We study the novel problem of blackbox optimization of multiple objectives via multi-fidelity function evaluations.
Our experiments on several synthetic and real-world benchmark problems show that MF-OSEMO, with both approximations, significantly improves over the state-of-the-art single-fidelity algorithms.
arXiv Detail & Related papers (2020-11-02T06:59:04Z) - Multi-Fidelity Bayesian Optimization via Deep Neural Networks [19.699020509495437]
In many applications, the objective function can be evaluated at multiple fidelities to enable a trade-off between the cost and accuracy.
We propose Deep Neural Network Multi-Fidelity Bayesian Optimization (DNN-MFBO) that can flexibly capture all kinds of complicated relationships between the fidelities.
We show the advantages of our method in both synthetic benchmark datasets and real-world applications in engineering design.
arXiv Detail & Related papers (2020-07-06T23:28:40Z) - On the Global Optimality of Model-Agnostic Meta-Learning [133.16370011229776]
Model-agnostic meta-learning (MAML) formulates meta-learning as a bilevel optimization problem, where the inner level solves each subtask based on a shared prior.
We characterize the optimality of the stationary points attained by MAML for both reinforcement learning and supervised learning, where the inner-level and outer-level problems are solved via first-order optimization methods.
arXiv Detail & Related papers (2020-06-23T17:33:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.