Resource Aware Multifidelity Active Learning for Efficient Optimization
- URL: http://arxiv.org/abs/2007.04674v1
- Date: Thu, 9 Jul 2020 10:01:32 GMT
- Title: Resource Aware Multifidelity Active Learning for Efficient Optimization
- Authors: Francesco Grassi, Giorgio Manganini, Michele Garraffa, Laura Mainini
- Abstract summary: This paper introduces the Resource Aware Active Learning (RAAL) strategy to accelerate the optimization of black box functions.
The RAAL strategy optimally seeds multiple points at each iteration, allowing for a major speed-up of the optimization task.
- Score: 0.8717253904965373
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional methods for black box optimization require a considerable number
of evaluations, which can be time consuming, impractical, and often infeasible
for many engineering applications that rely on accurate representations and
expensive models to evaluate. Bayesian Optimization (BO) methods search for the
global optimum by progressively (actively) learning a surrogate model of the
objective function along the search path. Bayesian optimization can be
accelerated through multifidelity approaches which leverage multiple black-box
approximations of the objective functions that can be computationally cheaper
to evaluate, but still provide relevant information to the search task. Further
computational benefits are offered by the availability of parallel and
distributed computing architectures whose optimal usage is an open opportunity
within the context of active learning. This paper introduces the Resource Aware
Active Learning (RAAL) strategy, a multifidelity Bayesian scheme to accelerate
the optimization of black box functions. At each optimization step, the RAAL
procedure computes the set of best sample locations and the associated fidelity
sources that maximize the information gain to acquire during the
parallel/distributed evaluation of the objective function, while accounting for
the limited computational budget. The scheme is demonstrated for a variety of
benchmark problems and results are discussed for both single fidelity and
multifidelity settings. In particular, we observe that the RAAL strategy
optimally seeds multiple points at each iteration allowing for a major speed up
of the optimization task.
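The batch-selection idea in the abstract (pick sample locations and fidelity sources that maximize information gain per unit of a limited budget) can be illustrated with a minimal greedy sketch. This is an illustrative assumption, not the paper's actual RAAL formulation: a toy one-dimensional RBF Gaussian-process surrogate stands in for the real model, posterior variance stands in for information gain, and the fidelity weights `rho` are hypothetical.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel between two 1-D point sets.
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / ls**2)

def posterior_var(x_train, x_cand, noise=1e-6):
    # GP posterior variance at candidate points (zero-mean, unit-variance prior).
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    k = rbf(x_train, x_cand)
    K_inv = np.linalg.inv(K)
    return 1.0 - np.sum(k * (K_inv @ k), axis=0)

def greedy_batch(x_train, x_cand, fidelities, budget):
    """Greedily pick (location, fidelity) pairs until the budget runs out.

    fidelities: dict level -> (cost, rho), where rho in (0, 1] is an assumed
    correlation of that fidelity with the exact objective. The score is
    variance * rho**2 / cost, i.e. information gained per unit of budget.
    """
    batch, spent = [], 0.0
    x_tr = x_train.copy()
    while True:
        var = posterior_var(x_tr, x_cand)
        best = None
        for level, (cost, rho) in fidelities.items():
            if spent + cost > budget:
                continue  # this fidelity no longer fits in the budget
            i = int(np.argmax(var))
            gain = rho**2 * var[i] / cost
            if best is None or gain > best[0]:
                best = (gain, i, level, cost)
        if best is None:
            break  # nothing affordable remains
        _, i, level, cost = best
        batch.append((float(x_cand[i]), level))
        spent += cost
        # Fantasize the evaluation so later picks spread across the domain.
        x_tr = np.append(x_tr, x_cand[i])
    return batch, spent
```

A usage sketch: with two already-evaluated points, a candidate grid, and two hypothetical fidelities (`"lo"` cheap but loosely correlated, `"hi"` expensive but exact), `greedy_batch` returns a batch of points and fidelity labels whose total cost never exceeds the budget, ready for parallel evaluation.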
Related papers
- Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses.
arXiv Detail & Related papers (2024-06-12T16:58:41Z)
- Localized Zeroth-Order Prompt Optimization [54.964765668688806]
We propose a novel algorithm, namely localized zeroth-order prompt optimization (ZOPO)
ZOPO incorporates a Neural Tangent Kernel-based derived Gaussian process into standard zeroth-order optimization for an efficient search of well-performing local optima in prompt optimization.
Remarkably, ZOPO outperforms existing baselines in terms of both the optimization performance and the query efficiency.
arXiv Detail & Related papers (2024-03-05T14:18:15Z)
- MORL-Prompt: An Empirical Analysis of Multi-Objective Reinforcement Learning for Discrete Prompt Optimization [49.60729578316884]
RL-based techniques can be used to search for prompts that maximize a set of user-specified reward functions.
Current techniques focus on maximizing the average of reward functions, which does not necessarily lead to prompts that achieve balance across rewards.
In this paper, we adapt several techniques for multi-objective optimization to RL-based discrete prompt optimization.
arXiv Detail & Related papers (2024-02-18T21:25:09Z)
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
- qPOTS: Efficient batch multiobjective Bayesian optimization via Pareto optimal Thompson sampling [0.0]
A sample-efficient approach to solving multiobjective optimization is via Gaussian process (GP) surrogates.
We propose a simple but effective Thompson sampling based approach where new candidate(s) are chosen from the Pareto frontier of a random GP sample.
Our approach demonstrates strong empirical performance over the state of the art, both in terms of accuracy and computational efficiency, on synthetic as well as real-world experiments.
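The Thompson-sampling step described above can be sketched in a few lines. This is a simplified illustration under stated assumptions, not the qPOTS implementation: independent per-point posterior draws stand in for a full joint GP sample path, objectives are minimized, and all names are hypothetical.

```python
import numpy as np

def pareto_mask(Y):
    # True for rows of Y (objective vectors to minimize) that no other
    # row dominates, i.e. the non-dominated (Pareto) set.
    n = len(Y)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        dominated = np.all(Y <= Y[i], axis=1) & np.any(Y < Y[i], axis=1)
        if dominated.any():
            mask[i] = False
    return mask

def thompson_pick(mu, sd, rng):
    """One Thompson-sampling pick over a candidate grid.

    mu, sd: posterior mean and standard deviation of each objective at each
    candidate, shape (n_candidates, n_objectives). Draws one sample of all
    objectives and returns the index of a random sampled-Pareto point.
    """
    Y = mu + sd * rng.standard_normal(mu.shape)
    front = np.flatnonzero(pareto_mask(Y))
    return rng.choice(front)
```

With the randomness switched off (`sd = 0`), the pick always lands on the true non-dominated set of `mu`; with nonzero `sd`, posterior uncertainty drives exploration, which is the point of the Thompson-sampling design.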
arXiv Detail & Related papers (2023-10-24T12:35:15Z)
- Learning Regions of Interest for Bayesian Optimization with Adaptive Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
arXiv Detail & Related papers (2023-07-25T09:45:47Z)
- A Robust Multi-Objective Bayesian Optimization Framework Considering Input Uncertainty [0.0]
In real-life applications like engineering design, the designer often wants to take multiple objectives as well as input uncertainty into account.
We introduce a novel Bayesian optimization framework to efficiently perform multi-objective optimization considering input uncertainty.
arXiv Detail & Related papers (2022-02-25T17:45:26Z)
- Batch Multi-Fidelity Bayesian Optimization with Deep Auto-Regressive Networks [17.370056935194786]
We propose Batch Multi-fidelity Bayesian Optimization with Deep Auto-Regressive Networks (BMBO-DARN)
We use a set of Bayesian neural networks to construct a fully auto-regressive model, which is expressive enough to capture strong yet complex relationships across all fidelities.
We develop a simple yet efficient batch querying method, without any search over fidelities.
arXiv Detail & Related papers (2021-06-18T02:55:48Z)
- Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information [78.78486761923855]
In many real-world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z)
- Information-Theoretic Multi-Objective Bayesian Optimization with Continuous Approximations [44.25245545568633]
We propose Information-Theoretic Multi-Objective Bayesian Optimization with Continuous Approximations (iMOCA) to solve this problem.
Our experiments on diverse synthetic and real-world benchmarks show that iMOCA significantly improves over existing single-fidelity methods.
arXiv Detail & Related papers (2020-09-12T01:46:03Z)
- Multi-Fidelity Bayesian Optimization via Deep Neural Networks [19.699020509495437]
In many applications, the objective function can be evaluated at multiple fidelities to enable a trade-off between the cost and accuracy.
We propose Deep Neural Network Multi-Fidelity Bayesian Optimization (DNN-MFBO) that can flexibly capture all kinds of complicated relationships between the fidelities.
We show the advantages of our method in both synthetic benchmark datasets and real-world applications in engineering design.
arXiv Detail & Related papers (2020-07-06T23:28:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.