Knowledge Gradient for Multi-Objective Bayesian Optimization with Decoupled Evaluations
- URL: http://arxiv.org/abs/2302.01310v2
- Date: Wed, 09 Oct 2024 08:50:08 GMT
- Title: Knowledge Gradient for Multi-Objective Bayesian Optimization with Decoupled Evaluations
- Authors: Jack M. Buckingham, Sebastian Rojas Gonzalez, Juergen Branke,
- Abstract summary: In some cases, it is possible to evaluate the objectives separately, and a different latency or evaluation cost can be associated with each objective.
We propose a scalarization-based knowledge gradient acquisition function that accounts for the different evaluation costs of the objectives.
- Abstract: Multi-objective Bayesian optimization aims to find the Pareto front of trade-offs between a set of expensive objectives while collecting as few samples as possible. In some cases, it is possible to evaluate the objectives separately, and a different latency or evaluation cost can be associated with each objective. This decoupling of the objectives presents an opportunity to learn the Pareto front faster by avoiding unnecessary, expensive evaluations. We propose a scalarization based knowledge gradient acquisition function which accounts for the different evaluation costs of the objectives. We prove asymptotic consistency of the estimator of the optimum for an arbitrary, D-dimensional, real compact search space and show empirically that the algorithm performs comparably with the state of the art and significantly outperforms versions which always evaluate both objectives.
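Since no implementation accompanies this abstract, the following is a minimal, illustrative sketch of the idea: one Gaussian process per objective, a random Chebyshev scalarization, and a Monte-Carlo knowledge gradient normalized by each objective's evaluation cost, so that only the more informative-per-cost objective is queried. The discretized search space, synthetic objectives, costs, fantasy scheme, and helper names are assumptions, and scikit-learn GPs stand in for whatever surrogates the authors use.

```python
# Illustrative sketch of a cost-aware, scalarization-based knowledge gradient for
# decoupled bi-objective Bayesian optimization (not the authors' implementation).
# Search space, objectives, costs, and fantasy scheme are toy assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X_cand = np.linspace(0.0, 1.0, 101).reshape(-1, 1)          # discretized search space
costs = [1.0, 5.0]                                          # per-objective evaluation cost
f = [lambda x: np.sin(6 * x[:, 0]),                         # cheap stand-ins for the
     lambda x: np.cos(4 * x[:, 0])]                         # expensive black boxes

def scalarize(Y, w, rho=0.05):
    """Augmented Chebyshev scalarization of an (n, 2) array (both maximized)."""
    return np.min(w * Y, axis=1) + rho * np.sum(w * Y, axis=1)

def fit_gp(X, y):
    return GaussianProcessRegressor(RBF(0.2), alpha=1e-6, normalize_y=True).fit(X, y)

def kg_per_cost(models, X_obs, Y_obs, x_new, obj, w, n_fantasies=8):
    """Monte-Carlo knowledge gradient of evaluating only objective `obj` at `x_new`,
    normalized by that objective's evaluation cost."""
    mu = np.column_stack([m.predict(X_cand) for m in models])
    best_now = scalarize(mu, w).max()
    mean, std = models[obj].predict(x_new.reshape(1, -1), return_std=True)
    gains = []
    for _ in range(n_fantasies):
        y_fant = rng.normal(mean[0], std[0])                 # imagined observation
        m_fant = fit_gp(np.vstack([X_obs[obj], x_new]),
                        np.append(Y_obs[obj], y_fant))
        mu_fant = mu.copy()
        mu_fant[:, obj] = m_fant.predict(X_cand)
        gains.append(scalarize(mu_fant, w).max() - best_now)
    return max(np.mean(gains), 0.0) / costs[obj]

# Initial design: both objectives observed at the same few points.
X_obs = [X_cand[::25].copy(), X_cand[::25].copy()]
Y_obs = [f[0](X_obs[0]), f[1](X_obs[1])]

for _ in range(5):
    models = [fit_gp(X_obs[j], Y_obs[j]) for j in range(2)]
    w = rng.dirichlet([1.0, 1.0])                            # random scalarization weights
    # Score a coarse grid of (candidate, objective) pairs by KG per unit cost.
    scores = [(kg_per_cost(models, X_obs, Y_obs, x, j, w), i, j)
              for i, x in enumerate(X_cand[::10]) for j in range(2)]
    _, i_best, j_best = max(scores)
    x_next = X_cand[::10][i_best]
    # Decoupled evaluation: query only the selected objective.
    X_obs[j_best] = np.vstack([X_obs[j_best], x_next])
    Y_obs[j_best] = np.append(Y_obs[j_best], f[j_best](x_next.reshape(1, -1)))
    print(f"evaluated objective {j_best} at x={x_next[0]:.2f}")
```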
Related papers
- Learning Recommender Systems with Soft Target: A Decoupled Perspective [49.83787742587449]
We propose a novel decoupled soft-label optimization framework that considers the objectives as two separate aspects by leveraging soft labels.
We present a sensible soft-label generation algorithm based on label propagation, which explores users' latent interests in unobserved feedback via neighbors.
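A generic, heavily simplified sketch of the neighbour-based propagation idea (not the paper's actual algorithm; the interaction matrix, cosine-similarity weights, and blending factor are assumptions):

```python
# Generic sketch of soft-label generation: propagate observed implicit feedback
# to unobserved items through item-item neighbours (illustrative only).
import numpy as np

R = np.array([[1, 0, 1, 0],        # observed implicit feedback (users x items)
              [0, 1, 1, 0],
              [1, 0, 0, 1]], dtype=float)

# Cosine similarity between items, used as propagation weights.
norms = np.linalg.norm(R, axis=0, keepdims=True) + 1e-12
S = (R / norms).T @ (R / norms)
np.fill_diagonal(S, 0.0)
S = S / (S.sum(axis=1, keepdims=True) + 1e-12)

alpha = 0.5                        # how much propagated signal to blend in
soft_labels = (1 - alpha) * R + alpha * (R @ S)     # soft targets incl. unobserved entries
print(np.round(soft_labels, 2))
```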
arXiv Detail & Related papers (2024-10-09T04:20:15Z)
- Deep Pareto Reinforcement Learning for Multi-Objective Recommender Systems [60.91599969408029]
Optimizing multiple objectives simultaneously is an important task for recommendation platforms.
Existing multi-objective recommender systems do not systematically consider the dynamic relationships among objectives.
arXiv Detail & Related papers (2024-07-04T02:19:49Z)
- MAP: Low-compute Model Merging with Amortized Pareto Fronts via Quadratic Approximation [80.47072100963017]
We introduce a novel and low-compute algorithm, Model Merging with Amortized Pareto Front (MAP).
MAP efficiently identifies a set of scaling coefficients for merging multiple models, reflecting the trade-offs involved.
We also introduce Bayesian MAP for scenarios with a relatively low number of tasks and Nested MAP for situations with a high number of tasks, further reducing the computational cost of evaluation.
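A hedged sketch of the amortization idea (not the MAP implementation; a single scalar merging coefficient replaces per-task coefficient vectors, and the task metrics are synthetic stand-ins):

```python
# Fit a cheap quadratic surrogate of each task metric over sampled merging
# coefficients, then read the Pareto front off the surrogate instead of
# re-evaluating every merged model (illustrative only).
import numpy as np

def true_metrics(c):                # stand-in for expensively evaluating a merged model
    return np.array([1 - (c - 0.3) ** 2, 1 - (c - 0.8) ** 2])

# A handful of expensive evaluations at sampled merging coefficients.
c_samples = np.linspace(0, 1, 7)
M = np.array([true_metrics(c) for c in c_samples])                 # (7, 2) task metrics

# Least-squares quadratic fit m_k(c) ~ a_k c^2 + b_k c + d_k per task k.
A = np.column_stack([c_samples ** 2, c_samples, np.ones_like(c_samples)])
coef, *_ = np.linalg.lstsq(A, M, rcond=None)                        # (3, 2) coefficients

# Evaluate the surrogate densely and keep non-dominated points (both maximized).
c_grid = np.linspace(0, 1, 201)
pred = np.column_stack([c_grid ** 2, c_grid, np.ones_like(c_grid)]) @ coef
pareto = [i for i, p in enumerate(pred) if not np.any(np.all(pred > p, axis=1))]
print("Pareto-optimal merging coefficients:", np.round(c_grid[pareto], 2))
```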
arXiv Detail & Related papers (2024-06-11T17:55:25Z)
- Controllable Preference Optimization: Toward Controllable Multi-Objective Alignment [103.12563033438715]
Alignment in artificial intelligence pursues consistency between model responses and human preferences as well as values.
Existing alignment techniques are mostly unidirectional, leading to suboptimal trade-offs and poor flexibility over various objectives.
We introduce controllable preference optimization (CPO), which explicitly specifies preference scores for different objectives.
arXiv Detail & Related papers (2024-02-29T12:12:30Z)
- Pareto Set Learning for Expensive Multi-Objective Optimization [5.419608513284392]
Expensive multi-objective optimization problems can be found in many real-world applications.
This paper develops a novel learning-based method to approximate the whole Pareto set for MOBO.
arXiv Detail & Related papers (2022-10-16T09:41:54Z)
- Alleviating Search Bias in Bayesian Evolutionary Optimization with Many Heterogeneous Objectives [9.139734850798124]
We deal with multi-objective optimization problems with heterogeneous objectives (HE-MOPs).
A new acquisition function that mitigates search bias towards the fast objectives is suggested.
We demonstrate the effectiveness of the proposed algorithm by testing it on widely used multi-/many-objective benchmark problems.
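Since the summary does not spell out the acquisition function, the following is only a generic sketch of one way to counteract bias toward a frequently evaluated fast objective; the scaling rule, UCB-style score, and toy data are assumptions, not the paper's formulation.

```python
# Generic sketch: inflate the predictive uncertainty of whichever objective is
# under-sampled (the slow one) when scoring candidates, to damp search bias
# toward the frequently evaluated fast objective (illustrative only).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def bias_aware_score(models, n_obs, X_cand):
    """UCB-style score whose exploration term is scaled by how under-sampled
    each objective is relative to the most-sampled one."""
    scarcity = np.max(n_obs) / np.asarray(n_obs, dtype=float)   # >= 1, larger for slow objectives
    total = np.zeros(len(X_cand))
    for j, m in enumerate(models):
        mean, std = m.predict(X_cand, return_std=True)
        total += mean + 2.0 * scarcity[j] * std                 # inflate neglected uncertainty
    return total

rng = np.random.default_rng(1)
X_cand = np.linspace(0, 1, 50).reshape(-1, 1)
X_fast, X_slow = rng.random((20, 1)), rng.random((4, 1))        # fast vs. slow objective data
models = [GaussianProcessRegressor().fit(X_fast, np.sin(6 * X_fast[:, 0])),
          GaussianProcessRegressor().fit(X_slow, np.cos(4 * X_slow[:, 0]))]
x_next = X_cand[np.argmax(bias_aware_score(models, [len(X_fast), len(X_slow)], X_cand))]
print("next design point:", x_next)
```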
arXiv Detail & Related papers (2022-08-25T17:07:40Z)
- Learning to Plan Optimistically: Uncertainty-Guided Deep Exploration via Latent Model Ensembles [73.15950858151594]
This paper presents Latent Optimistic Value Exploration (LOVE), a strategy that enables deep exploration through optimism in the face of uncertain long-term rewards.
We combine latent world models with value function estimation to predict infinite-horizon returns and recover associated uncertainty via ensembling.
We apply LOVE to visual robot control tasks in continuous action spaces and demonstrate on average more than 20% improved sample efficiency in comparison to state-of-the-art and other exploration objectives.
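A toy sketch of the ensemble-based optimism, with random return predictions standing in for the learned latent-model/value ensemble (not the LOVE implementation):

```python
# Act on the upper bound implied by ensemble disagreement: optimism in the face
# of uncertain long-term rewards (illustrative stand-in for LOVE).
import numpy as np

rng = np.random.default_rng(0)
n_actions, n_members = 5, 8

# Each ensemble member predicts an infinite-horizon return for every candidate action.
returns = rng.normal(loc=rng.normal(size=n_actions), scale=0.3,
                     size=(n_members, n_actions))

mean = returns.mean(axis=0)                  # expected return per action
std = returns.std(axis=0)                    # epistemic uncertainty from disagreement
optimistic_value = mean + 1.0 * std          # optimism in the face of uncertainty

action = int(np.argmax(optimistic_value))    # explore where the return might be high
print("chosen action:", action, "values:", np.round(optimistic_value, 2))
```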
arXiv Detail & Related papers (2020-10-27T22:06:57Z)
- Information-Theoretic Multi-Objective Bayesian Optimization with Continuous Approximations [44.25245545568633]
We propose Information-Theoretic Multi-Objective Bayesian Optimization with Continuous Approximations (iMOCA) to solve this problem.
Our experiments on diverse synthetic and real-world benchmarks show that iMOCA significantly improves over existing single-fidelity methods.
arXiv Detail & Related papers (2020-09-12T01:46:03Z)
- Resource Aware Multifidelity Active Learning for Efficient Optimization [0.8717253904965373]
This paper introduces the Resource Aware Active Learning (RAAL) strategy to accelerate the optimization of black box functions.
The RAAL strategy optimally seeds multiple points at each iteration, allowing for a major speed-up of the optimization task.
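A schematic sketch of budgeted multi-point seeding (not the RAAL algorithm itself; the candidates, fidelity costs, and acquisition values are assumptions):

```python
# Greedily add (point, fidelity) pairs by acquisition value per unit cost until
# a shared per-iteration resource budget is exhausted (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
candidates = np.linspace(0, 1, 20)
fidelity_cost = {"low": 1.0, "high": 4.0}
budget = 10.0

acq = {f: rng.random(len(candidates)) * (0.5 if f == "low" else 1.0)
       for f in fidelity_cost}                       # toy acquisition per fidelity

batch, spent, picked = [], 0.0, set()
scored = sorted(((acq[f][i] / c, i, f) for f, c in fidelity_cost.items()
                 for i in range(len(candidates))), reverse=True)
for value_per_cost, i, f in scored:
    if i not in picked and spent + fidelity_cost[f] <= budget:
        batch.append((float(candidates[i]), f))
        picked.add(i)
        spent += fidelity_cost[f]
print("seeded batch:", batch, "total cost:", spent)
```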
arXiv Detail & Related papers (2020-07-09T10:01:32Z)
- FlexiBO: A Decoupled Cost-Aware Multi-Objective Optimization Approach for Deep Neural Networks [4.596221278839825]
We develop a novel multi-objective optimization algorithm, called Flexible Multi-Objective Bayesian Optimization (FlexiBO), to address this issue.
We evaluate FlexiBO on seven state-of-the-art DNNs for image recognition, natural language processing (NLP), and speech-to-text translation.
arXiv Detail & Related papers (2020-01-18T03:26:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.