Pareto Set Learning for Expensive Multi-Objective Optimization
- URL: http://arxiv.org/abs/2210.08495v1
- Date: Sun, 16 Oct 2022 09:41:54 GMT
- Title: Pareto Set Learning for Expensive Multi-Objective Optimization
- Authors: Xi Lin, Zhiyuan Yang, Xiaoyuan Zhang, Qingfu Zhang
- Abstract summary: Expensive multi-objective optimization problems can be found in many real-world applications.
This paper develops a novel learning-based method to approximate the whole Pareto set for MOBO.
- Score: 5.419608513284392
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Expensive multi-objective optimization problems can be found in many
real-world applications, where their objective function evaluations involve
expensive computations or physical experiments. It is desirable to obtain an
approximate Pareto front with a limited evaluation budget. Multi-objective
Bayesian optimization (MOBO) has been widely used for finding a finite set of
Pareto optimal solutions. However, it is well-known that the whole Pareto set
is on a continuous manifold and can contain infinite solutions. The structural
properties of the Pareto set are not well exploited in existing MOBO methods,
and the finite-set approximation may not contain the most preferred solution(s)
for decision-makers. This paper develops a novel learning-based method to
approximate the whole Pareto set for MOBO, which generalizes the
decomposition-based multi-objective optimization algorithm (MOEA/D) from finite
populations to models. We design a simple and powerful acquisition search
method based on the learned Pareto set, which naturally supports batch
evaluation. In addition, with our proposed model, decision-makers can readily
explore any trade-off area in the approximate Pareto set for flexible
decision-making. This work represents the first attempt to model the Pareto set
for expensive multi-objective optimization. Experimental results on different
synthetic and real-world problems demonstrate the effectiveness of our proposed
method.
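The central idea, training a single model that maps any trade-off preference to an (approximately) Pareto-optimal solution, can be sketched on a toy two-objective problem. The objectives, the linear model, and the subgradient training loop below are illustrative assumptions, not the paper's actual surrogate-based MOBO setup:

```python
import numpy as np

# Two cheap analytic objectives standing in for expensive black-box functions.
def f1(x):
    return x ** 2            # minimized at x = 0

def f2(x):
    return (x - 2) ** 2      # minimized at x = 2
# The Pareto set of (f1, f2) is the whole interval [0, 2].

# Preference-conditioned Pareto set model x(w) = theta[0] + theta[1] * w,
# where w in (0, 1) is the weight on f1 and (1 - w) the weight on f2.
theta = np.array([1.0, 0.0])

prefs = np.linspace(0.01, 0.99, 99)  # training preferences
lr = 0.02
for _ in range(3000):
    grad = np.zeros(2)
    for w in prefs:
        x = theta[0] + theta[1] * w
        # Weighted-Tchebycheff scalarization: max(w * f1(x), (1 - w) * f2(x)).
        # Take the subgradient of the active term w.r.t. x, chained through
        # the model parameters.
        if w * f1(x) > (1 - w) * f2(x):
            dx = 2 * w * x
        else:
            dx = 2 * (1 - w) * (x - 2)
        grad += dx * np.array([1.0, w])
    theta -= lr * grad / len(prefs)

# The trained model maps any preference to an (approximately) Pareto-optimal x,
# so a decision-maker can explore the whole trade-off curve continuously.
for w in (0.1, 0.5, 0.9):
    print(f"w = {w:.1f} -> x = {theta[0] + theta[1] * w:.2f}")
```

Here the preference-conditioned model plays the role that a finite population plays in MOEA/D: instead of maintaining one solution per weight vector, a single model covers all weight vectors at once.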
Related papers
- Preference-Optimized Pareto Set Learning for Blackbox Optimization [1.9628841617148691]
No single solution exists that can optimize all the objectives simultaneously.
In a typical MOO problem, the goal is to find a set of optimum solutions (Pareto set) that trades off the preferences among objectives.
Our formulation leads to a bilevel optimization problem that can be solved by e.g. differentiable cross-entropy methods.
arXiv Detail & Related papers (2024-08-19T13:23:07Z)
- Towards Efficient Pareto Set Approximation via Mixture of Experts Based Model Fusion [53.33473557562837]
Solving multi-objective optimization problems for large deep neural networks is a challenging task due to the complexity of the loss landscape and the expensive computational cost.
We propose a practical and scalable approach to solve this problem via mixture of experts (MoE) based model fusion.
By ensembling the weights of specialized single-task models, the MoE module can effectively capture the trade-offs between multiple objectives.
arXiv Detail & Related papers (2024-06-14T07:16:18Z)
- MAP: Low-compute Model Merging with Amortized Pareto Fronts via Quadratic Approximation [80.47072100963017]
We introduce a novel and low-compute algorithm, Model Merging with Amortized Pareto Front (MAP).
MAP efficiently identifies a set of scaling coefficients for merging multiple models, reflecting the trade-offs involved.
We also introduce Bayesian MAP for scenarios with a relatively low number of tasks and Nested MAP for situations with a high number of tasks, further reducing the computational cost of evaluation.
arXiv Detail & Related papers (2024-06-11T17:55:25Z)
- UCB-driven Utility Function Search for Multi-objective Reinforcement Learning [75.11267478778295]
In Multi-objective Reinforcement Learning (MORL), agents are tasked with optimising decision-making behaviours that trade off multiple objectives.
We focus on the case of linear utility functions parameterised by weight vectors w.
We introduce a method based on Upper Confidence Bound to efficiently search for the most promising weight vectors during different stages of the learning process.
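The UCB-driven search over weight vectors can be illustrated with a standard UCB1 bandit over a small candidate set. The candidate weights, the noisy utility function, and all constants below are hypothetical stand-ins for what would, in MORL, come from evaluating a learned policy:

```python
import math
import random

random.seed(0)

# Candidate weight vectors (w, 1 - w) for a two-objective problem.
candidates = [0.1, 0.3, 0.5, 0.7, 0.9]

# Hypothetical noisy "utility" of training with a given weight vector;
# it peaks at w = 0.7 in this toy setup.
def utility(w):
    return 1.0 - 2.0 * abs(w - 0.7) + random.gauss(0, 0.05)

counts = [0] * len(candidates)
sums = [0.0] * len(candidates)

for t in range(1, 501):
    # UCB1: play each arm once, then pick the arm maximizing
    # empirical mean + exploration bonus.
    if 0 in counts:
        i = counts.index(0)
    else:
        i = max(range(len(candidates)),
                key=lambda j: sums[j] / counts[j]
                + math.sqrt(2 * math.log(t) / counts[j]))
    counts[i] += 1
    sums[i] += utility(candidates[i])

# The most-pulled arm is the most promising weight vector found.
best = max(range(len(candidates)), key=lambda j: counts[j])
print("most-pulled weight:", candidates[best])
```

The exploration bonus shrinks as an arm accumulates pulls, so evaluation budget concentrates on the weight vectors whose utility estimates remain competitive.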
arXiv Detail & Related papers (2024-05-01T09:34:42Z) - Multi-Objective Bayesian Optimization with Active Preference Learning [18.066263838953223]
We propose a Bayesian optimization (BO) approach to identifying the most preferred solution in a multi-objective optimization (MOO) problem.
To minimize the interaction cost with the decision maker (DM), we also propose an active learning strategy for the preference estimation.
arXiv Detail & Related papers (2023-11-22T15:24:36Z) - Dealing with Structure Constraints in Evolutionary Pareto Set Learning [11.242036067940289]
In many real-world applications, it could be desirable to have structure constraints on the entire optimal solution set.
We make the first attempt to incorporate the structure constraints into the whole solution set by a single Pareto set model.
With our proposed method, the decision-makers can flexibly trade off the optimality with preferred structures among all solutions.
arXiv Detail & Related papers (2023-10-31T12:53:56Z) - Multi-Objective GFlowNets [59.16787189214784]
We study the problem of generating diverse candidates in the context of Multi-Objective Optimization.
In many applications of machine learning such as drug discovery and material design, the goal is to generate candidates which simultaneously optimize a set of potentially conflicting objectives.
We propose Multi-Objective GFlowNets (MOGFNs), a novel method for generating diverse optimal solutions, based on GFlowNets.
arXiv Detail & Related papers (2022-10-23T16:15:36Z) - Pareto Set Learning for Neural Multi-objective Combinatorial
Optimization [6.091096843566857]
Multiobjective combinatorial optimization (MOCO) problems can be found in many real-world applications.
We develop a learning-based approach to approximate the whole Pareto set for a given MOCO problem without further search procedure.
Our proposed method significantly outperforms some other methods on the multiobjective traveling salesman problem, the multiobjective vehicle routing problem, and the multiobjective knapsack problem in terms of solution quality, speed, and model efficiency.
arXiv Detail & Related papers (2022-03-29T09:26:22Z) - Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z) - Pareto Navigation Gradient Descent: a First-Order Algorithm for
Optimization in Pareto Set [17.617944390196286]
Modern machine learning applications, such as multi-task learning, require finding optimal model parameters to trade-off multiple objective functions.
We propose a first-order algorithm that approximately solves OPT-in-Pareto using only gradient information.
arXiv Detail & Related papers (2021-10-17T04:07:04Z) - Provable Multi-Objective Reinforcement Learning with Generative Models [98.19879408649848]
We study the problem of single policy MORL, which learns an optimal policy given the preference of objectives.
Existing methods require strong assumptions such as exact knowledge of the multi-objective decision process.
We propose a new algorithm called model-based envelope value iteration (EVI), which generalizes the enveloped multi-objective $Q$-learning algorithm.
arXiv Detail & Related papers (2020-11-19T22:35:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.