A portfolio approach to massively parallel Bayesian optimization
- URL: http://arxiv.org/abs/2110.09334v2
- Date: Mon, 3 Apr 2023 09:44:41 GMT
- Title: A portfolio approach to massively parallel Bayesian optimization
- Authors: Mickael Binois (ACUMES, JAD), Nicholson Collier (ANL), Jonathan Ozik
(ANL)
- Abstract summary: One way to reduce the time of conducting optimization studies is to evaluate designs in parallel rather than just one-at-a-time.
For expensive-to-evaluate black-boxes, batch versions of Bayesian optimization have been proposed.
Experiments show orders of magnitude speed improvements over existing methods with similar or better performance.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One way to reduce the time of conducting optimization studies is to evaluate
designs in parallel rather than just one-at-a-time. For expensive-to-evaluate
black-boxes, batch versions of Bayesian optimization have been proposed. They
work by building a surrogate model of the black-box to simultaneously select
multiple designs via an infill criterion. Still, despite the increased
availability of computing resources that enable large-scale parallelism,
strategies that work for selecting a few tens of parallel designs for
evaluation do not scale well, because the cost of jointly selecting larger
batches grows quickly. The problem is even more acute when the black-box is
noisy, since noise calls for more evaluations as well as repeated experiments.
Here we propose a scalable strategy that natively keeps up with massive
batching, built around the exploration/exploitation trade-off and a portfolio
allocation. We compare the approach with related methods on noisy functions,
for mono- and multi-objective
optimization tasks. These experiments show orders of magnitude speed
improvements over existing methods with similar or better performance.
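The abstract describes the strategy only at a high level. As a rough illustration of the portfolio idea, the sketch below splits a large batch across acquisition "members" that span the exploration/exploitation spectrum (here, confidence bounds with different beta values); the candidate pool, the even slot split, and all names are illustrative assumptions, not the paper's exact allocation scheme.

```python
# A minimal sketch of portfolio-style batch selection for parallel BO,
# assuming a minimization problem and a finite candidate pool. The even
# split of batch slots across UCB-style members is an illustrative
# simplification, not the paper's allocation rule.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def portfolio_batch(X_obs, y_obs, candidates, batch_size, betas=(0.0, 1.0, 3.0)):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X_obs, y_obs)
    mu, sd = gp.predict(candidates, return_std=True)
    taken = np.zeros(len(candidates), dtype=bool)
    chosen = []
    # Divide the batch slots evenly among portfolio members; cost stays
    # linear in the batch size instead of combinatorial.
    counts = [len(s) for s in np.array_split(np.arange(batch_size), len(betas))]
    for beta, n_slots in zip(betas, counts):
        score = mu - beta * sd  # beta=0: exploit; large beta: explore
        for i in np.argsort(score):
            if n_slots == 0:
                break
            if not taken[i]:
                taken[i] = True
                chosen.append(i)
                n_slots -= 1
    return candidates[np.array(chosen)]

# Toy usage on a noisy 1-d function.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (10, 1))
y = np.sin(6 * X[:, 0]) + 0.1 * rng.standard_normal(10)
pool = rng.uniform(0, 1, (500, 1))
print(portfolio_batch(X, y, pool, batch_size=9).shape)  # (9, 1)
```

Because each member ranks candidates independently, selection stays cheap even as batches grow to the massive sizes the abstract targets.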
Related papers
- Enhancing the Reasoning Ability of Multimodal Large Language Models via Mixed Preference Optimization [65.64108848398696]
We introduce a preference optimization process to enhance the multimodal reasoning capabilities of MLLMs.
We develop a simple yet effective method, termed Mixed Preference Optimization (MPO), which boosts multimodal CoT performance.
Our model, InternVL2-8B-MPO, achieves an accuracy of 67.0 on MathVista, outperforming InternVL2-8B by 8.7 points and achieving performance comparable to the 10x larger InternVL2-76B.
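The summary does not spell out what is mixed. As a hedged sketch only: one common way to mix preference optimization with supervised signals is a weighted sum of a DPO-style pairwise term and a language-modeling term on the preferred response; the weights and terms below are assumptions, not necessarily MPO's formulation.

```python
# Hypothetical mixed preference objective; the weights and terms are
# illustrative, not necessarily the MPO paper's exact loss.
import torch
import torch.nn.functional as F

def mixed_preference_loss(lp_chosen, lp_rejected, lm_nll_chosen,
                          beta=0.1, w_pref=1.0, w_gen=0.1):
    # lp_chosen / lp_rejected: policy-vs-reference log-probability ratios
    # of the preferred and dispreferred responses.
    pref = -F.logsigmoid(beta * (lp_chosen - lp_rejected)).mean()
    # Plain negative log-likelihood on the preferred response keeps the
    # model anchored to fluent generations.
    return w_pref * pref + w_gen * lm_nll_chosen

print(mixed_preference_loss(torch.tensor([1.2]), torch.tensor([0.3]),
                            torch.tensor(2.5)))
```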
arXiv Detail & Related papers (2024-11-15T18:59:27Z)
- Margin Matching Preference Optimization: Enhanced Model Alignment with Granular Feedback [64.67540769692074]
Large language models (LLMs) fine-tuned with alignment techniques, such as reinforcement learning from human feedback, have been instrumental in developing some of the most capable AI systems to date.
We introduce an approach called Margin Matching Preference Optimization (MMPO), which incorporates relative quality margins into optimization, leading to improved LLM policies and reward models.
Experiments with both human and AI feedback data demonstrate that MMPO consistently outperforms baseline methods, often by a substantial margin, on popular benchmarks including MT-bench and RewardBench.
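A minimal sketch of the margin idea, assuming the granular feedback is a per-pair quality gap that is turned into a soft preference target instead of a hard 0/1 label (the temperature `tau` and all names are illustrative):

```python
# Margin-aware preference loss sketch: larger quality gaps yield harder
# targets, smaller gaps softer ones. Illustrative, not MMPO's exact loss.
import torch
import torch.nn.functional as F

def margin_matching_loss(logit_gap, quality_gap, tau=1.0):
    # logit_gap: model preference logit (chosen minus rejected).
    # quality_gap: granular feedback, e.g. a score difference between
    # the two responses.
    target = torch.sigmoid(quality_gap / tau)
    return F.binary_cross_entropy_with_logits(logit_gap, target)

# Toy usage with three comparisons of decreasing quality gap.
print(margin_matching_loss(torch.tensor([2.0, -0.5, 0.1]),
                           torch.tensor([3.0, 0.2, 0.0])))
```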
arXiv Detail & Related papers (2024-10-04T04:56:11Z)
- MAP: Low-compute Model Merging with Amortized Pareto Fronts via Quadratic Approximation [80.47072100963017]
We introduce a novel and low-compute algorithm, Model Merging with Amortized Pareto Front (MAP).
MAP efficiently identifies a set of scaling coefficients for merging multiple models, reflecting the trade-offs involved.
We also introduce Bayesian MAP for scenarios with a relatively low number of tasks and Nested MAP for situations with a high number of tasks, further reducing the computational cost of evaluation.
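A minimal sketch of the amortization step, under the assumption that each task loss is fitted with a quadratic surrogate in the scaling coefficients so that trade-offs can be scanned without re-evaluating the merged model; the fitting and scanning details are illustrative, not the paper's exact procedure.

```python
# Quadratic surrogate for a task loss as a function of merging
# coefficients, plus a brute-force Pareto scan. Illustrative only.
import numpy as np

def fit_quadratic_surrogate(C, losses):
    # C: (n, k) sampled scaling coefficients; losses: (n,) true task
    # losses of the corresponding merged models.
    n, k = C.shape
    cols = [np.ones(n)] + [C[:, i] for i in range(k)]
    cols += [C[:, i] * C[:, j] for i in range(k) for j in range(i, k)]
    w, *_ = np.linalg.lstsq(np.stack(cols, axis=1), losses, rcond=None)
    def predict(c):
        feats = [1.0] + list(c) + [c[i] * c[j] for i in range(k)
                                   for j in range(i, k)]
        return float(np.asarray(feats) @ w)
    return predict

def pareto_indices(points):
    # Indices of points not dominated by any other (all objectives minimized).
    return [i for i, p in enumerate(points)
            if not any(np.all(q <= p) and np.any(q < p) for q in points)]

# Toy usage: two task surrogates over 2 merging coefficients.
rng = np.random.default_rng(1)
C = rng.uniform(0, 1, (40, 2))
f1 = fit_quadratic_surrogate(C, (C[:, 0] - 1.0) ** 2)
f2 = fit_quadratic_surrogate(C, (C[:, 1] - 1.0) ** 2 + C[:, 0])
grid = rng.uniform(0, 1, (200, 2))
front = pareto_indices(np.array([[f1(c), f2(c)] for c in grid]))
print(len(front))  # size of the amortized front
```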
arXiv Detail & Related papers (2024-06-11T17:55:25Z)
- Large-Batch, Iteration-Efficient Neural Bayesian Design Optimization [37.339567743948955]
We present a novel Bayesian optimization framework specifically tailored to address the limitations of standard BO in large-batch settings where iterations are costly.
Our key contribution is a highly scalable, sample-based acquisition function that performs a non-dominated sorting of objectives.
We show that our acquisition function in combination with different Bayesian neural network surrogates is effective in data-intensive environments with a minimal number of iterations.
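The acquisition is only named here; as a hedged sketch, batch selection by non-dominated sorting over posterior samples of the objectives could look as follows (the sampling source, e.g. one forward pass per Bayesian-neural-network draw, and the front-by-front fill are assumptions):

```python
# Non-dominated sorting of sampled objectives, filling a large batch
# front by front. A sketch, not the paper's exact acquisition.
import numpy as np

def non_dominated_ranks(F):
    # F: (n, m) objective values (minimization); rank 0 = best front.
    ranks = np.full(len(F), -1)
    remaining = np.arange(len(F))
    r = 0
    while remaining.size:
        sub = F[remaining]
        nd = [i for i, p in enumerate(sub)
              if not any(np.all(q <= p) and np.any(q < p) for q in sub)]
        ranks[remaining[nd]] = r
        remaining = np.delete(remaining, nd)
        r += 1
    return ranks

def select_batch(obj_samples, batch_size):
    # obj_samples: (n_candidates, m) posterior draws of each objective.
    return np.argsort(non_dominated_ranks(obj_samples))[:batch_size]

print(select_batch(np.random.default_rng(0).random((50, 2)), 8))
```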
arXiv Detail & Related papers (2023-06-01T19:10:57Z)
- Transfer Learning for Bayesian Optimization: A Survey [29.229660973338145]
Bayesian optimization (BO) is a powerful tool that models and optimizes expensive "black-box" functions.
Researchers in the BO community propose to incorporate the spirit of transfer learning to accelerate the optimization process.
arXiv Detail & Related papers (2023-02-12T14:37:25Z)
- Combining Multi-Fidelity Modelling and Asynchronous Batch Bayesian Optimization [10.29946890434873]
This paper proposes an algorithm combining multi-fidelity and asynchronous batch methods.
We empirically study the algorithm behavior, and show it can outperform single-fidelity batch methods and multi-fidelity sequential methods.
As an application, we consider designing electrode materials for optimal performance in pouch cells using experiments with coin cells to approximate battery performance.
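The summary names the combination without mechanics. A minimal sketch of the asynchronous half, using the standard kriging-believer trick to hallucinate in-flight evaluations (the multi-fidelity side is reduced to a comment; nothing here is claimed to be the paper's exact rule):

```python
# Asynchronous suggestion with hallucinated pending points. A cost-aware
# multi-fidelity variant would additionally divide the score by the
# chosen fidelity's evaluation cost. Illustrative sketch only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def async_next_point(X_obs, y_obs, X_pending, candidates, beta=2.0):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X_obs, y_obs)
    if len(X_pending):
        # Kriging believer: pretend pending designs returned the current
        # posterior mean, so the next pick avoids busy locations.
        y_hal = gp.predict(X_pending)
        gp = GaussianProcessRegressor(normalize_y=True).fit(
            np.vstack([X_obs, X_pending]), np.concatenate([y_obs, y_hal]))
    mu, sd = gp.predict(candidates, return_std=True)
    return candidates[np.argmin(mu - beta * sd)]  # LCB, minimization
```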
arXiv Detail & Related papers (2022-11-11T12:02:40Z)
- A Pareto-optimal compositional energy-based model for sampling and optimization of protein sequences [55.25331349436895]
Deep generative models have emerged as a popular machine learning-based approach for inverse problems in the life sciences.
These problems often require sampling new designs that satisfy multiple properties of interest in addition to learning the data distribution.
arXiv Detail & Related papers (2022-10-19T19:04:45Z)
- Optimizer Amalgamation [124.33523126363728]
We are motivated to study a new problem named Optimizer Amalgamation: how can we best combine a pool of "teacher" optimizers into a single "student" optimizer with stronger problem-specific performance?
First, we define three differentiable mechanisms to amalgamate a pool of analytical optimizers by gradient descent.
To reduce the variance of the amalgamation process, we also explore methods to stabilize it by perturbing the target.
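As a hedged sketch of one such differentiable mechanism, assume the student applies a softmax-weighted average of the update directions proposed by the analytical teachers, with the weights learned by gradient descent (this is the simplest of several conceivable mechanisms, not necessarily one of the paper's three):

```python
# Softmax-weighted amalgamation of teacher optimizer updates; the weights
# are learnable, so the mechanism is differentiable end to end.
import torch

class AmalgamatedStep(torch.nn.Module):
    def __init__(self, n_teachers):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.zeros(n_teachers))

    def forward(self, teacher_updates):
        # teacher_updates: (n_teachers, dim) parameter deltas proposed by
        # analytical rules such as SGD, momentum, or Adam.
        w = torch.softmax(self.logits, dim=0)
        return (w[:, None] * teacher_updates).sum(dim=0)

step = AmalgamatedStep(3)
print(step(torch.randn(3, 5)).shape)  # torch.Size([5])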
arXiv Detail & Related papers (2022-03-12T16:07:57Z)
- Design-Bench: Benchmarks for Data-Driven Offline Model-Based Optimization [82.02008764719896]
Black-box model-based optimization problems are ubiquitous in a wide range of domains, such as the design of proteins, DNA sequences, aircraft, and robots.
We present Design-Bench, a benchmark for offline MBO with a unified evaluation protocol and reference implementations of recent methods.
Our benchmark includes a suite of diverse and realistic tasks derived from real-world optimization problems in biology, materials science, and robotics.
arXiv Detail & Related papers (2022-02-17T05:33:27Z)
- Batched Data-Driven Evolutionary Multi-Objective Optimization Based on Manifold Interpolation [6.560512252982714]
We propose a framework for implementing batched data-driven evolutionary multi-objective optimization.
The framework is general enough that any off-the-shelf evolutionary multi-objective optimization algorithm can be applied in a plug-in manner.
The proposed framework features faster convergence and stronger resilience to various Pareto front (PF) shapes.
arXiv Detail & Related papers (2021-09-12T23:54:26Z)
- Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interacting with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
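A minimal sketch of the gradient part, assuming an RBF-kernel GP fitted to observed returns: its posterior mean has an analytic gradient, which can replace noisy random-perturbation estimates (length-scale, noise, and step size below are illustrative, not the paper's settings).

```python
# Analytic gradient of a GP posterior mean with an RBF kernel, used as a
# policy-ascent direction. Sketch only; hyperparameters are illustrative.
import numpy as np

def gp_mean_grad(x, X, y, ell=1.0, noise=1e-3):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * d2 / ell**2) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    kx = np.exp(-0.5 * ((x - X) ** 2).sum(-1) / ell**2)   # (n,)
    dk = -(x - X) / ell**2 * kx[:, None]                  # (n, d)
    return dk.T @ alpha

# Toy usage: ascend the modeled return surface.
rng = np.random.default_rng(2)
X = rng.normal(size=(30, 2))          # evaluated policy parameters
y = -(X ** 2).sum(axis=1)             # returns peak at the origin
theta = np.array([1.0, -1.0])
for _ in range(50):
    theta += 0.1 * gp_mean_grad(theta, X, y)
print(theta)  # moves toward the high-return region near the origin
```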
arXiv Detail & Related papers (2021-06-22T16:07:02Z)
- Parallel Predictive Entropy Search for Multi-objective Bayesian Optimization with Constraints [0.0]
Real-world problems often involve the optimization of several objectives under multiple constraints.
This article introduces PPESMOC, an information-based batch method for the simultaneous optimization of multiple black-box functions under constraints.
Iteratively, PPESMOC selects a batch of input locations at which to evaluate the black-boxes.
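PPESMOC's acquisition is an entropy-based quantity that cannot be reproduced in brief; as a hedged stand-in showing only the shape of constrained batch selection, the sketch below picks one candidate per joint posterior draw, keeping sampled-feasible points and preferring sampled Pareto-optimal ones (a Thompson-sampling-style simplification, not the paper's criterion).

```python
# Thompson-style constrained batch selection; a simplification standing
# in for PPESMOC's information-based criterion.
import numpy as np

def constrained_ts_batch(f_samples, c_samples, batch_size):
    # f_samples: (B, n, m) objective draws; c_samples: (B, n, q)
    # constraint draws; feasibility means c <= 0. Needs batch_size <= B.
    batch = []
    for b in range(batch_size):
        F, C = f_samples[b], c_samples[b]
        idx = np.flatnonzero(np.all(C <= 0, axis=1))
        if idx.size == 0:
            # No sampled-feasible candidate: pick the least infeasible.
            batch.append(int(np.argmin(C.max(axis=1))))
            continue
        sub = F[idx]
        nd = [i for i, p in enumerate(sub)
              if not any(np.all(q <= p) and np.any(q < p) for q in sub)]
        batch.append(int(idx[nd[b % len(nd)]]))
    return batch

rng = np.random.default_rng(3)
print(constrained_ts_batch(rng.normal(size=(4, 100, 2)),
                           rng.normal(size=(4, 100, 1)), 4))
```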
arXiv Detail & Related papers (2020-04-01T17:37:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.