PMGDA: A Preference-based Multiple Gradient Descent Algorithm
- URL: http://arxiv.org/abs/2402.09492v2
- Date: Fri, 16 Feb 2024 14:01:44 GMT
- Title: PMGDA: A Preference-based Multiple Gradient Descent Algorithm
- Authors: Xiaoyuan Zhang and Xi Lin and Qingfu Zhang
- Abstract summary: It is desirable in many multi-objective machine learning applications, such as multi-task learning, to find a solution that fits a given preference of a decision maker.
This paper proposes a novel predict-and-correct framework for locating a solution that fits the preference of a decision maker.
- Score: 12.600588000788214
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is desirable in many multi-objective machine learning applications, such
as multi-task learning with conflicting objectives and multi-objective
reinforcement learning, to find a Pareto solution that matches a given
preference of a decision maker. These problems are often large-scale and have
gradient information available, yet existing algorithms cannot handle them
well. To tackle this critical issue, this paper proposes a novel
predict-and-correct framework for locating a Pareto solution that fits the
preference of a decision maker. In the proposed framework, a constraint
function is introduced into the search process to align the solution with a
user-specified preference; this constraint can be optimized simultaneously with the
objective functions. Experimental results show that the proposed method can
efficiently find a particular Pareto solution under the demand of a decision
maker on standard multi-objective benchmarks, multi-task learning, and
multi-objective reinforcement learning problems with thousands of
decision variables.
Code is available at: https://github.com/xzhang2523/pmgda. The code is
currently provided in the attached pgmda.rar file and will be open-sourced
after publication.
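The predict-and-correct idea in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the two quadratic toy objectives, the equal-weight "predict" step, the penalty-based "correct" step, and the hyperparameters (step size, penalty weight `mu`) are all illustrative assumptions. The correction here simply penalizes the deviation of the objective vector (f1, f2) from the preference ray p, i.e. it drives p[1]*f1 - p[0]*f2 toward zero while descending both objectives.

```python
import numpy as np

# Toy two-objective problem whose Pareto set is the segment between a and b.
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
f1 = lambda x: np.sum((x - a) ** 2)
f2 = lambda x: np.sum((x - b) ** 2)

# Preference vector: we want the objective vector (f1, f2) to lie on the ray
# spanned by p, i.e. p[1] * f1 == p[0] * f2 (here f2 should be 3x f1).
p = np.array([0.25, 0.75])

x = np.array([0.5, 0.5])   # initial point
lr, mu = 0.01, 50.0        # step size and penalty weight (tuning assumptions)

for _ in range(3000):
    g1 = 2 * (x - a)       # gradient of f1
    g2 = 2 * (x - b)       # gradient of f2
    # "Predict": equal-weight descent direction on both objectives.
    d = 0.5 * (g1 + g2)
    # "Correct": gradient of the penalty 0.5 * r**2, where r measures how far
    # the objective vector is from the preference ray.
    r = p[1] * f1(x) - p[0] * f2(x)
    gc = r * (p[1] * g1 - p[0] * g2)
    x = x - lr * (d + mu * gc)
```

On this toy problem the iterate converges to a point on the Pareto segment (where x[0] + x[1] = 1) whose objective vector is nearly aligned with p, so f2(x) is roughly three times f1(x). The actual PMGDA correction step solves for a descent direction satisfying the constraint rather than using a fixed penalty weight.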
Related papers
- Towards Efficient Pareto Set Approximation via Mixture of Experts Based Model Fusion [53.33473557562837]
Solving multi-objective optimization problems for large deep neural networks is a challenging task due to the complexity of the loss landscape and the expensive computational cost.
We propose a practical and scalable approach to solve this problem via mixture of experts (MoE) based model fusion.
By ensembling the weights of specialized single-task models, the MoE module can effectively capture the trade-offs between multiple objectives.
arXiv Detail & Related papers (2024-06-14T07:16:18Z) - UCB-driven Utility Function Search for Multi-objective Reinforcement Learning [75.11267478778295]
In Multi-objective Reinforcement Learning (MORL) agents are tasked with optimising decision-making behaviours.
We focus on the case of linear utility functions parameterised by weight vectors w.
We introduce a method based on Upper Confidence Bound to efficiently search for the most promising weight vectors during different stages of the learning process.
arXiv Detail & Related papers (2024-05-01T09:34:42Z) - Optimizing Solution-Samplers for Combinatorial Problems: The Landscape
of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z) - Pareto Set Learning for Neural Multi-objective Combinatorial
Optimization [6.091096843566857]
Multiobjective optimization (MOCO) problems can be found in many real-world applications.
We develop a learning-based approach to approximate the whole Pareto set for a given MOCO problem without further search procedure.
Our proposed method significantly outperforms other methods on the multi-objective traveling salesman problem, the multi-conditioned vehicle routing problem, and the multi-knapsack problem in terms of solution quality, speed, and model efficiency.
arXiv Detail & Related papers (2022-03-29T09:26:22Z) - Scalable Uni-directional Pareto Optimality for Multi-Task Learning with
Constraints [4.4044968357361745]
We propose a scalable solver for Multi-Objective Optimization (MOO) problems, including support for optimization under constraints.
An important application of this is to estimate high-dimensional runtime for neural classification tasks.
arXiv Detail & Related papers (2021-10-28T21:35:59Z) - Discovering Diverse Solutions in Deep Reinforcement Learning [84.45686627019408]
Reinforcement learning algorithms are typically limited to learning a single solution of a specified task.
We propose an RL method that can learn infinitely many solutions by training a policy conditioned on a continuous or discrete low-dimensional latent variable.
arXiv Detail & Related papers (2021-03-12T04:54:31Z) - Multi-objective Conflict-based Search for Multi-agent Path Finding [10.354181009277623]
Multi-objective path planners typically compute an ensemble of paths while optimizing a single objective, such as path length.
This article presents an approach named Multi-objective Conflict-based Search (MO-CBS) that bypasses this so-called curse of dimensionality.
arXiv Detail & Related papers (2021-01-11T10:42:38Z) - Provable Multi-Objective Reinforcement Learning with Generative Models [98.19879408649848]
We study the problem of single-policy MORL, which learns an optimal policy given a preference over the objectives.
Existing methods require strong assumptions such as exact knowledge of the multi-objective decision process.
We propose a new algorithm called model-based envelope value iteration (EVI), which generalizes the enveloped multi-objective $Q$-learning algorithm.
arXiv Detail & Related papers (2020-11-19T22:35:31Z) - Pareto Multi-Task Learning [53.90732663046125]
Multi-task learning is a powerful method for solving multiple correlated tasks simultaneously.
It is often impossible to find one single solution to optimize all the tasks, since different tasks might conflict with each other.
Recently, a novel method was proposed to find a single Pareto optimal solution with a good trade-off among different tasks by casting multi-task learning as multi-objective optimization.
arXiv Detail & Related papers (2019-12-30T08:58:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.