UMOEA/D: A Multiobjective Evolutionary Algorithm for Uniform Pareto
Objectives based on Decomposition
- URL: http://arxiv.org/abs/2402.09486v1
- Date: Wed, 14 Feb 2024 08:09:46 GMT
- Title: UMOEA/D: A Multiobjective Evolutionary Algorithm for Uniform Pareto
Objectives based on Decomposition
- Authors: Xiaoyuan Zhang and Xi Lin and Yichi Zhang and Yifan Chen and Qingfu
Zhang
- Abstract summary: Multiobjective optimization (MOO) is prevalent in numerous applications.
Previous methods commonly utilize the set of Pareto objectives (particles on the PF) to represent the entire PF.
We suggest in this paper constructing \emph{uniformly distributed} objectives on the PF, so as to alleviate the limited diversity found in previous MOO approaches.
- Score: 19.13435817442015
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multiobjective optimization (MOO) is prevalent in numerous applications, in
which a Pareto front (PF) is constructed to display optima under various
preferences. Previous methods commonly utilize the set of Pareto objectives
(particles on the PF) to represent the entire PF. However, the empirical
distribution of the Pareto objectives on the PF is rarely studied, which
implicitly impedes the generation of diverse and representative Pareto
objectives in previous methods. To bridge the gap, we suggest in this paper
constructing \emph{uniformly distributed} Pareto objectives on the PF, so as to
alleviate the limited diversity found in previous MOO approaches. We are the
first to formally define the concept of "uniformity" for an MOO problem. We
maximize the minimal pairwise distance between Pareto objectives on the Pareto
front using a neural network, resulting in both asymptotically and
non-asymptotically uniform Pareto
objectives. Our proposed method is validated through experiments on real-world
and synthetic problems, which demonstrate its efficacy in generating
high-quality uniform Pareto objectives and show performance exceeding existing
state-of-the-art methods.
The detailed model implementation and the code are scheduled to be
open-sourced upon publication.
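The max-min criterion described in the abstract can be illustrated with a minimal sketch. The function below, the toy bi-objective front, and all names are illustrative assumptions for exposition, not the authors' neural-network implementation: it simply scores a candidate set of Pareto objectives by its smallest pairwise distance, the quantity a max-min formulation would maximize.

```python
import numpy as np

def min_pairwise_distance(points: np.ndarray) -> float:
    """Smallest Euclidean distance between any two objective vectors.

    A set of Pareto objectives is (informally) more uniform when this
    value is larger, so a max-min formulation seeks to maximize it.
    """
    n = len(points)
    dists = [
        np.linalg.norm(points[i] - points[j])
        for i in range(n)
        for j in range(i + 1, n)
    ]
    return min(dists)

# Toy bi-objective front y2 = 1 - y1: evenly spaced objectives score
# higher under the max-min criterion than clustered ones.
t_uniform = np.linspace(0.0, 1.0, 5)
uniform = np.stack([t_uniform, 1.0 - t_uniform], axis=1)

t_clustered = np.array([0.0, 0.05, 0.1, 0.15, 1.0])
clustered = np.stack([t_clustered, 1.0 - t_clustered], axis=1)

assert min_pairwise_distance(uniform) > min_pairwise_distance(clustered)
```

In the paper's setting the points are produced by a model rather than fixed, so this scalar would serve as the objective being maximized rather than a post-hoc score.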
Related papers
- How to Find the Exact Pareto Front for Multi-Objective MDPs? [28.70863169250383]
Multi-objective Markov Decision Processes (MDPs) are receiving increasing attention, as real-world decision-making problems often involve conflicting objectives that cannot be addressed by a single-objective MDP.
The Pareto front identifies the set of policies that cannot be dominated, providing a foundation for finding optimal solutions that can efficiently adapt to various preferences.
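The non-domination notion above can be made concrete with a generic sketch (a standard definition for maximization over finite sets, not the MDP-specific algorithm of this paper; all names are illustrative):

```python
import numpy as np

def dominates(a, b) -> bool:
    """True if objective vector a Pareto-dominates b (maximization):
    a is at least as good in every objective and strictly better in one."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a >= b) and np.any(a > b))

def pareto_front(points):
    """Non-dominated subset of a finite list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# (2, 2) dominates every other point, so it alone forms the front.
assert pareto_front([(1, 2), (2, 1), (0, 0), (2, 2)]) == [(2, 2)]
```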
arXiv Detail & Related papers (2024-10-21T01:03:54Z)
- Pareto Low-Rank Adapters: Efficient Multi-Task Learning with Preferences [49.14535254003683]
PaLoRA is a novel parameter-efficient method that augments the original model with task-specific low-rank adapters.
Our experimental results show that PaLoRA outperforms MTL and PFL baselines across various datasets.
arXiv Detail & Related papers (2024-07-10T21:25:51Z)
- A Newton Method for Hausdorff Approximations of the Pareto Front within Multi-objective Evolutionary Algorithms [3.2888428450190044]
We propose a set-based Newton method for Hausdorff approximations of the Pareto front to be used within multi-objective evolutionary algorithms.
We show the benefit of the Newton method as a post-processing step on several benchmark test functions and different base evolutionary algorithms.
arXiv Detail & Related papers (2024-05-09T12:34:34Z)
- UCB-driven Utility Function Search for Multi-objective Reinforcement Learning [75.11267478778295]
In Multi-objective Reinforcement Learning (MORL), agents are tasked with optimising decision-making behaviours.
We focus on the case of linear utility functions parameterised by weight vectors w.
We introduce a method based on Upper Confidence Bound to efficiently search for the most promising weight vectors during different stages of the learning process.
arXiv Detail & Related papers (2024-05-01T09:34:42Z)
- Divide and Conquer: Provably Unveiling the Pareto Front with Multi-Objective Reinforcement Learning [2.5115843173830252]
We introduce IPRO, a principled algorithm that decomposes the task of finding the Pareto front into a sequence of single-objective problems.
Empirical evaluations demonstrate that IPRO matches or outperforms methods that require additional domain knowledge.
By leveraging problem-specific single-objective solvers, our approach holds promise for applications beyond multi-objective reinforcement learning.
arXiv Detail & Related papers (2024-02-11T12:35:13Z)
- Variance-Preserving-Based Interpolation Diffusion Models for Speech Enhancement [53.2171981279647]
We present a framework that encapsulates both the VP- and variance-exploding (VE)-based diffusion methods.
To improve performance and ease model training, we analyze the common difficulties encountered in diffusion models.
We evaluate our model against several methods using a public benchmark to showcase the effectiveness of our approach.
arXiv Detail & Related papers (2023-06-14T14:22:22Z)
- Multi-Objective GFlowNets [59.16787189214784]
We study the problem of generating diverse candidates in the context of Multi-Objective Optimization.
In many applications of machine learning such as drug discovery and material design, the goal is to generate candidates which simultaneously optimize a set of potentially conflicting objectives.
We propose Multi-Objective GFlowNets (MOGFNs), a novel method for generating diverse optimal solutions, based on GFlowNets.
arXiv Detail & Related papers (2022-10-23T16:15:36Z)
- Pareto Manifold Learning: Tackling multiple tasks via ensembles of single-task models [50.33956216274694]
In Multi-Task Learning (MTL), tasks may compete and limit the performance achieved on each other, rather than guiding the optimization to a solution.
We propose Pareto Manifold Learning, an ensembling method in weight space.
arXiv Detail & Related papers (2022-10-18T11:20:54Z)
- Multi-Objective Policy Gradients with Topological Constraints [108.10241442630289]
We present a new policy-gradient algorithm for topological MDPs (TMDPs) obtained by a simple extension of the proximal policy optimization (PPO) algorithm.
We demonstrate this on a real-world multiple-objective navigation problem with an arbitrary ordering of objectives both in simulation and on a real robot.
arXiv Detail & Related papers (2022-09-15T07:22:58Z)
- Pareto Navigation Gradient Descent: a First-Order Algorithm for Optimization in Pareto Set [17.617944390196286]
Modern machine learning applications, such as multi-task learning, require finding optimal model parameters to trade-off multiple objective functions.
We propose a first-order algorithm that approximately solves OPT-in-Pareto using only gradient information.
arXiv Detail & Related papers (2021-10-17T04:07:04Z)
- Self-Evolutionary Optimization for Pareto Front Learning [34.17125297176668]
Multi-objective optimization (MOO) approaches have been proposed for multitasking problems.
Recent MOO methods approximate multiple optimal solutions (Pareto front) with a single unified model.
We show that Pareto front learning (PFL) can be re-formulated into another MOO problem with multiple objectives, each of which corresponds to different preference weights for the tasks.
arXiv Detail & Related papers (2021-10-07T13:38:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.