Resource Aware Multifidelity Active Learning for Efficient Optimization
- URL: http://arxiv.org/abs/2007.04674v1
- Date: Thu, 9 Jul 2020 10:01:32 GMT
- Title: Resource Aware Multifidelity Active Learning for Efficient Optimization
- Authors: Francesco Grassi, Giorgio Manganini, Michele Garraffa, Laura Mainini
- Abstract summary: This paper introduces the Resource Aware Active Learning (RAAL) strategy to accelerate the optimization of black box functions.
The RAAL strategy optimally seeds multiple points at each iteration, allowing for a major speed-up of the optimization task.
- Score: 0.8717253904965373
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional methods for black box optimization require a considerable number
of evaluations, which can be time-consuming, impractical, and often infeasible
for many engineering applications that rely on accurate representations and
expensive models to evaluate. Bayesian Optimization (BO) methods search for the
global optimum by progressively (actively) learning a surrogate model of the
objective function along the search path. Bayesian optimization can be
accelerated through multifidelity approaches which leverage multiple black-box
approximations of the objective functions that can be computationally cheaper
to evaluate, but still provide relevant information to the search task. Further
computational benefits are offered by the availability of parallel and
distributed computing architectures whose optimal usage is an open opportunity
within the context of active learning. This paper introduces the Resource Aware
Active Learning (RAAL) strategy, a multifidelity Bayesian scheme to accelerate
the optimization of black box functions. At each optimization step, the RAAL
procedure computes the set of best sample locations and the associated fidelity
sources that maximize the information gain acquired during the
parallel/distributed evaluation of the objective function, while accounting for
the limited computational budget. The scheme is demonstrated for a variety of
benchmark problems and results are discussed for both single fidelity and
multifidelity settings. In particular, we observe that the RAAL strategy
optimally seeds multiple points at each iteration, allowing for a major speed-up
of the optimization task.
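To make the budget-aware batch selection concrete, the following is a minimal sketch of the general idea, not the RAAL acquisition itself: it assumes independent Gaussian-process surrogates per fidelity (a stand-in for a proper multifidelity model), a hypothetical per-fidelity cost table, and a greedy expected-improvement-per-cost rule for filling the parallel batch. All names, costs, and the acquisition choice are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch only: greedy, budget-aware selection of (location, fidelity)
# pairs for a parallel evaluation batch. This is NOT the RAAL acquisition from the
# paper; it stands in for the idea of maximizing information gained per unit cost.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Assumed evaluation costs: the low-fidelity source is cheap, the high-fidelity one expensive.
FIDELITY_COSTS = {0: 1.0, 1: 5.0}


def expected_improvement(mu, sigma, best_y):
    """Standard expected improvement for minimization, used here as a simple
    stand-in for an information-gain acquisition."""
    sigma = np.maximum(sigma, 1e-12)
    z = (best_y - mu) / sigma
    return (best_y - mu) * norm.cdf(z) + sigma * norm.pdf(z)


def budgeted_batch(models, candidates, best_y, budget):
    """Greedily pick (candidate, fidelity) pairs with the highest
    acquisition-per-cost ratio until the evaluation budget is spent."""
    scored = []
    for fid, gp in models.items():
        mu, sigma = gp.predict(candidates, return_std=True)
        acq = expected_improvement(mu, sigma, best_y)
        for i in range(len(candidates)):
            scored.append((acq[i] / FIDELITY_COSTS[fid], FIDELITY_COSTS[fid], i, fid))
    batch, spent = [], 0.0
    for ratio, cost, i, fid in sorted(scored, reverse=True):
        if spent + cost <= budget:
            batch.append((candidates[i], fid))
            spent += cost
    return batch


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy 1-D problem: the low fidelity is a noisy version of the high fidelity.
    X_lo = rng.uniform(-2, 2, (20, 1))
    y_lo = np.sin(3 * X_lo[:, 0]) + 0.3 * rng.normal(size=20)
    X_hi = rng.uniform(-2, 2, (6, 1))
    y_hi = np.sin(3 * X_hi[:, 0])
    models = {0: GaussianProcessRegressor(kernel=RBF()).fit(X_lo, y_lo),
              1: GaussianProcessRegressor(kernel=RBF()).fit(X_hi, y_hi)}
    candidates = np.linspace(-2, 2, 50).reshape(-1, 1)
    print(budgeted_batch(models, candidates, best_y=y_hi.min(), budget=10.0))
```

In the paper's setting, this greedy per-cost heuristic would be replaced by RAAL's optimal seeding of multiple points and fidelities per iteration; the sketch only shows where the evaluation budget enters the batch-selection step.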
Related papers
- Unlearning as multi-task optimization: A normalized gradient difference approach with an adaptive learning rate [105.86576388991713]
We introduce a normalized gradient difference (NGDiff) algorithm, enabling us to have better control over the trade-off between the objectives.
We provide a theoretical analysis and empirically demonstrate the superior performance of NGDiff among state-of-the-art unlearning methods on the TOFU and MUSE datasets.
arXiv Detail & Related papers (2024-10-29T14:41:44Z)
- Optima: Optimizing Effectiveness and Efficiency for LLM-Based Multi-Agent System [75.25394449773052]
Large Language Model (LLM) based multi-agent systems (MAS) show remarkable potential in collaborative problem-solving.
Yet they still face critical challenges: low communication efficiency, poor scalability, and a lack of effective parameter-updating optimization methods.
We present Optima, a novel framework that addresses these issues by significantly enhancing both communication efficiency and task effectiveness.
arXiv Detail & Related papers (2024-10-10T17:00:06Z)
- Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses.
arXiv Detail & Related papers (2024-06-12T16:58:41Z)
- Localized Zeroth-Order Prompt Optimization [54.964765668688806]
We propose a novel algorithm, namely localized zeroth-order prompt optimization (ZOPO)
ZOPO incorporates a Neural Tangent Kernel-derived Gaussian process into standard zeroth-order optimization for an efficient search for well-performing local optima in prompt optimization.
Remarkably, ZOPO outperforms existing baselines in terms of both the optimization performance and the query efficiency.
arXiv Detail & Related papers (2024-03-05T14:18:15Z)
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
- A Robust Multi-Objective Bayesian Optimization Framework Considering Input Uncertainty [0.0]
In real-life applications like engineering design, the designer often wants to take multiple objectives as well as input uncertainty into account.
We introduce a novel Bayesian optimization framework to efficiently perform multi-objective optimization considering input uncertainty.
arXiv Detail & Related papers (2022-02-25T17:45:26Z)
- Batch Multi-Fidelity Bayesian Optimization with Deep Auto-Regressive Networks [17.370056935194786]
We propose Batch Multi-fidelity Bayesian Optimization with Deep Auto-Regressive Networks (BMBO-DARN)
We use a set of Bayesian neural networks to construct a fully auto-regressive model, which is expressive enough to capture strong yet complex relationships across all fidelities.
We develop a simple yet efficient batch querying method, without any search over fidelities.
arXiv Detail & Related papers (2021-06-18T02:55:48Z)
- Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information [78.78486761923855]
In many real world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output (a generic sketch of this style of information-based query selection follows after this list).
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z)
- Multi-Fidelity Multi-Objective Bayesian Optimization: An Output Space Entropy Search Approach [44.25245545568633]
We study the novel problem of blackbox optimization of multiple objectives via multi-fidelity function evaluations.
Our experiments on several synthetic and real-world benchmark problems show that MF-OSEMO, with both approximations, significantly improves over the state-of-the-art single-fidelity algorithms.
arXiv Detail & Related papers (2020-11-02T06:59:04Z)
- Information-Theoretic Multi-Objective Bayesian Optimization with Continuous Approximations [44.25245545568633]
We propose Information-Theoretic Multi-Objective Bayesian Optimization with Continuous Approximations (iMOCA) to solve this problem.
Our experiments on diverse synthetic and real-world benchmarks show that iMOCA significantly improves over existing single-fidelity methods.
arXiv Detail & Related papers (2020-09-12T01:46:03Z)
- Multi-Fidelity Bayesian Optimization via Deep Neural Networks [19.699020509495437]
In many applications, the objective function can be evaluated at multiple fidelities to enable a trade-off between the cost and accuracy.
We propose Deep Neural Network Multi-Fidelity Bayesian Optimization (DNN-MFBO) that can flexibly capture all kinds of complicated relationships between the fidelities.
We show the advantages of our method in both synthetic benchmark datasets and real-world applications in engineering design.
arXiv Detail & Related papers (2020-07-06T23:28:40Z)
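Several of the entries above (InfoBAX, the output-space entropy-search methods) select queries by how informative they are about a quantity of interest. The sketch below is a generic, heavily simplified illustration of that idea, not an implementation of any of those methods: it estimates, by Monte Carlo over joint GP posterior samples with Gaussian entropy approximations, the mutual information between observing f at each candidate and the (discretized) location of the minimizer. All names and numerical choices are assumptions for illustration.

```python
# Illustrative sketch only: a crude Monte Carlo estimate of the mutual information
# between observing f at a candidate point and the discretized minimizer location.
# It is NOT InfoBAX or any specific entropy-search method.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF


def gaussian_entropy(values):
    """Differential entropy (nats) of a Gaussian fitted to the samples."""
    return 0.5 * np.log(2 * np.pi * np.e * (np.var(values) + 1e-9))


def mi_with_argmin(gp, grid, n_samples=500, seed=0):
    """For every grid point, estimate I(f(x); argmin index) from joint posterior samples."""
    paths = gp.sample_y(grid, n_samples=n_samples, random_state=seed)  # shape (len(grid), n_samples)
    argmins = paths.argmin(axis=0)                                     # minimizer index for each sample path
    mi = np.zeros(len(grid))
    for i in range(len(grid)):
        marginal = gaussian_entropy(paths[i])
        conditional = 0.0
        for a in np.unique(argmins):
            mask = argmins == a
            # Weight each conditional entropy by the empirical probability of that minimizer.
            conditional += mask.mean() * gaussian_entropy(paths[i, mask])
        mi[i] = marginal - conditional
    return mi


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(-2, 2, (8, 1))
    y = np.sin(3 * X[:, 0])
    gp = GaussianProcessRegressor(kernel=RBF()).fit(X, y)
    grid = np.linspace(-2, 2, 40).reshape(-1, 1)
    scores = mi_with_argmin(gp, grid)
    print("next query:", grid[scores.argmax()])
```

A real BAX- or entropy-search-style method would use a more careful estimator of this mutual information and would account for observation noise and evaluation cost; the sketch only shows the shape of the computation.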