Multi-Fidelity Methods for Optimization: A Survey
- URL: http://arxiv.org/abs/2402.09638v1
- Date: Thu, 15 Feb 2024 00:52:34 GMT
- Title: Multi-Fidelity Methods for Optimization: A Survey
- Authors: Ke Li and Fan Li
- Abstract summary: Multi-fidelity optimization (MFO) balances high-fidelity accuracy with computational efficiency through a hierarchical fidelity approach.
We delve deep into the foundational principles and methodologies of MFO, focusing on three core components -- multi-fidelity surrogate models, fidelity management strategies, and optimization techniques.
This survey highlights the diverse applications of MFO across several key domains, including machine learning, engineering design optimization, and scientific discovery.
- Score: 12.659229934111975
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-world black-box optimization often involves time-consuming or costly
experiments and simulations. Multi-fidelity optimization (MFO) stands out as a
cost-effective strategy that balances high-fidelity accuracy with computational
efficiency through a hierarchical fidelity approach. This survey presents a
systematic exploration of MFO, underpinned by a novel text mining framework
based on a pre-trained language model. We delve deep into the foundational
principles and methodologies of MFO, focusing on three core components --
multi-fidelity surrogate models, fidelity management strategies, and
optimization techniques. Additionally, this survey highlights the diverse
applications of MFO across several key domains, including machine learning,
engineering design optimization, and scientific discovery, showcasing the
adaptability and effectiveness of MFO in tackling complex computational
challenges. Furthermore, we also envision several emerging challenges and
prospects in the MFO landscape, spanning scalability, the composition of lower
fidelities, and the integration of human-in-the-loop approaches at the
algorithmic level. We also address critical issues related to benchmarking and
the advancement of open science within the MFO community. Overall, this survey
aims to catalyze further research and foster collaborations in MFO, setting the
stage for future innovations and breakthroughs in the field.
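The hierarchical-fidelity idea in the abstract can be illustrated with a minimal sketch: screen many candidates with a cheap low-fidelity model, then spend the expensive high-fidelity budget only on the most promising ones. The two objective functions and the promotion rule below are toy assumptions made for illustration, not methods from the survey.

```python
import numpy as np

# Toy two-fidelity minimization: the low-fidelity model is cheap but
# biased; the high-fidelity model is treated as the expensive ground truth.

def high_fidelity(x):
    """Expensive 'ground truth' objective (to be minimized)."""
    return (x - 0.3) ** 2 + 0.05 * np.sin(20 * x)

def low_fidelity(x):
    """Cheap, biased approximation, correlated with high_fidelity."""
    return (x - 0.25) ** 2

def multi_fidelity_minimize(n_cheap=200, n_expensive=5, seed=0):
    rng = np.random.default_rng(seed)
    candidates = rng.uniform(0.0, 1.0, size=n_cheap)
    # Fidelity management: rank all candidates with the cheap model ...
    cheap_scores = low_fidelity(candidates)
    promoted = candidates[np.argsort(cheap_scores)[:n_expensive]]
    # ... and evaluate only the top few at high fidelity.
    expensive_scores = high_fidelity(promoted)
    best = promoted[np.argmin(expensive_scores)]
    return float(best), float(np.min(expensive_scores))

x_best, f_best = multi_fidelity_minimize()
```

Real MFO methods replace the fixed promotion rule with a learned multi-fidelity surrogate and an acquisition function that trades off fidelity cost against information gain; this sketch only shows the budget-allocation pattern.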
Related papers
- Deep Insights into Automated Optimization with Large Language Models and Evolutionary Algorithms [3.833708891059351]

Large Language Models (LLMs) and Evolutionary Algorithms (EAs) offer a promising new approach to overcoming the limitations of traditional optimization methods and making optimization more automated.
LLMs act as dynamic agents that can generate, refine, and interpret optimization strategies.
EAs efficiently explore complex solution spaces through evolutionary operators.
arXiv Detail & Related papers (2024-10-28T09:04:49Z)
- Unraveling the Versatility and Impact of Multi-Objective Optimization: Algorithms, Applications, and Trends for Solving Complex Real-World Problems [4.023511716339818]
Multi-Objective Optimization (MOO) techniques have become increasingly popular in recent years.
This paper examines recently developed MOO-based algorithms.
In real-world case studies, MOO algorithms address complicated decision-making challenges.
arXiv Detail & Related papers (2024-06-29T15:19:46Z)
- RLEMMO: Evolutionary Multimodal Optimization Assisted By Deep Reinforcement Learning [8.389454219309837]
Multimodal optimization problems (MMOPs) require finding all optimal solutions, which is challenging under a limited budget of function evaluations.
We propose RLEMMO, a Meta-Black-Box Optimization framework, which maintains a population of solutions and incorporates a reinforcement learning agent.
With a novel reward mechanism that encourages both quality and diversity, RLEMMO can be effectively trained using a policy gradient algorithm.
arXiv Detail & Related papers (2024-04-12T05:02:49Z)
- Physics-Aware Multifidelity Bayesian Optimization: a Generalized Formulation [0.0]
Multifidelity Bayesian optimization (MFBO) methods allow costly high-fidelity responses to be included for only a sub-selection of queries.
State-of-the-art methods rely on a purely data-driven search and do not include explicit information about the physical context.
This paper acknowledges that prior knowledge about the physical domains of engineering problems can be leveraged to accelerate these data-driven searches.
arXiv Detail & Related papers (2023-12-10T09:11:53Z)
- The Efficiency Spectrum of Large Language Models: An Algorithmic Survey [54.19942426544731]
The rapid growth of Large Language Models (LLMs) has been a driving force in transforming various domains.
This paper examines the multi-faceted dimensions of efficiency essential for the end-to-end algorithmic development of LLMs.
arXiv Detail & Related papers (2023-12-01T16:00:25Z)
- Multi-fidelity Bayesian Optimization in Engineering Design [3.9160947065896803]
Multi-fidelity Bayesian optimization (MF BO) combines multi-fidelity optimization (MFO) with Bayesian optimization (BO).
MF BO has found a niche in solving expensive engineering design optimization problems.
This paper reviews recent developments of two essential ingredients of MF BO: GP-based MF surrogates and acquisition functions.
arXiv Detail & Related papers (2023-11-21T23:22:11Z)
- A Survey of Contextual Optimization Methods for Decision Making under Uncertainty [47.73071218563257]
This review article identifies three main frameworks for learning policies from data and discusses their strengths and limitations.
We present the existing models and methods under a uniform notation and terminology and classify them according to the three main frameworks.
arXiv Detail & Related papers (2023-06-17T15:21:02Z)
- A survey on multi-objective hyperparameter optimization algorithms for Machine Learning [62.997667081978825]
This article presents a systematic survey of the literature published between 2014 and 2020 on multi-objective HPO algorithms.
We distinguish between metaheuristic-based algorithms, metamodel-based algorithms, and approaches using a mixture of both.
We also discuss the quality metrics used to compare multi-objective HPO procedures and present future research directions.
arXiv Detail & Related papers (2021-11-23T10:22:30Z)
- A Field Guide to Federated Optimization [161.3779046812383]
Federated learning and analytics are distributed approaches for collaboratively learning models (or statistics) from decentralized data.
This paper provides recommendations and guidelines on formulating, designing, evaluating and analyzing federated optimization algorithms.
arXiv Detail & Related papers (2021-07-14T18:09:08Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcementment learning for mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Investigating Bi-Level Optimization for Learning and Vision from a Unified Perspective: A Survey and Beyond [114.39616146985001]
In machine learning and computer vision, despite different motivations and mechanisms, many complex problems contain a series of closely related subproblems.
In this paper, we first uniformly express these complex learning and vision problems from the perspective of Bi-Level Optimization (BLO).
Then we construct a value-function-based single-level reformulation and establish a unified algorithmic framework to understand and formulate mainstream gradient-based BLO methodologies.
arXiv Detail & Related papers (2021-01-27T16:20:23Z)
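The gradient-based bi-level optimization pattern described in the last entry can be sketched minimally: an inner problem is (approximately) solved by a few gradient steps, and the outer variable is updated using a numerically estimated hypergradient of the outer objective through that inner solve. The toy objectives below are assumptions made for illustration, not the paper's formulation.

```python
# Toy bi-level problem:
#   inner:  y*(x) = argmin_y (y - x)^2            (solved by gradient descent)
#   outer:  minimize F(x) = (y*(x) - 1)^2 + 0.1 * x^2

def inner_solve(x, y0=0.0, steps=50, lr=0.2):
    """Approximate y*(x) by gradient descent on (y - x)^2."""
    y = y0
    for _ in range(steps):
        y -= lr * 2.0 * (y - x)
    return y

def outer_objective(x):
    """F(x) = f(x, y*(x)) with f(x, y) = (y - 1)^2 + 0.1 * x^2."""
    y = inner_solve(x)
    return (y - 1.0) ** 2 + 0.1 * x ** 2

def blo_descent(x=3.0, steps=100, lr=0.1, eps=1e-5):
    for _ in range(steps):
        # Hypergradient of the outer objective, estimated by central
        # finite differences through the unrolled inner solve.
        grad = (outer_objective(x + eps) - outer_objective(x - eps)) / (2 * eps)
        x -= lr * grad
    return x

x_opt = blo_descent()
```

Since y*(x) = x here, the outer objective reduces to (x - 1)^2 + 0.1 x^2, whose minimizer is x = 1/1.1 ≈ 0.909; the sketch converges to that value. Practical gradient-based BLO methods replace the finite-difference hypergradient with implicit differentiation or backpropagation through the unrolled inner loop.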
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.