Improving Pareto Front Learning via Multi-Sample Hypernetworks
- URL: http://arxiv.org/abs/2212.01130v7
- Date: Fri, 28 Apr 2023 18:32:25 GMT
- Title: Improving Pareto Front Learning via Multi-Sample Hypernetworks
- Authors: Long P. Hoang, Dung D. Le, Tran Anh Tuan, Tran Ngoc Thang
- Abstract summary: We propose a novel PFL framework, PHN-HVI, which employs a hypernetwork to generate multiple solutions from a set of diverse trade-off preferences.
The experimental results on several MOO machine learning tasks show that the proposed framework significantly outperforms the baselines.
- Score: 4.129225533930966
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pareto Front Learning (PFL) was recently introduced as an effective approach
to obtain a mapping function from a given trade-off vector to a solution on the
Pareto front, which solves the multi-objective optimization (MOO) problem. Due
to the inherent trade-off between conflicting objectives, PFL offers a flexible
approach in many scenarios in which the decision makers cannot specify the
preference of one Pareto solution over another, and must switch between them
depending on the situation. However, existing PFL methods ignore the
relationship between the solutions during the optimization process, which
hinders the quality of the obtained front. To overcome this issue, we propose a
novel PFL framework, PHN-HVI, which employs a hypernetwork to generate
multiple solutions from a set of diverse trade-off preferences and enhance the
quality of the Pareto front by maximizing the Hypervolume indicator defined by
these solutions. The experimental results on several MOO machine learning tasks
show that the proposed framework significantly outperforms the baselines in
producing the trade-off Pareto front.
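To make the mechanism concrete, the sketch below illustrates the core idea in PyTorch: a hypernetwork maps sampled preference vectors to the weights of a small target network, and the resulting loss vectors are pushed to enlarge the 2-D hypervolume they dominate with respect to a reference point. This is a minimal illustration under stated assumptions, not the authors' implementation; the network sizes, data, reference point, and number of sampled preferences are all hypothetical.

```python
# Minimal sketch of the PHN-HVI idea, assuming a PyTorch setting. Sizes, data,
# the reference point, and the number of sampled preferences are hypothetical.
import torch
import torch.nn as nn

TARGET_IN, TARGET_HIDDEN, TARGET_OUT = 4, 16, 1  # hypothetical target-net sizes


class HyperNet(nn.Module):
    """Maps a preference vector on the simplex to a flat weight vector
    parameterizing a small target MLP."""

    def __init__(self, n_objectives: int = 2):
        super().__init__()
        self.n_weights = (TARGET_IN * TARGET_HIDDEN + TARGET_HIDDEN
                          + TARGET_HIDDEN * TARGET_OUT + TARGET_OUT)
        self.body = nn.Sequential(
            nn.Linear(n_objectives, 64), nn.ReLU(),
            nn.Linear(64, self.n_weights),
        )

    def forward(self, pref: torch.Tensor) -> torch.Tensor:
        return self.body(pref)


def target_forward(flat_w: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    """Run the target MLP with weights generated by the hypernetwork."""
    i = 0
    w1 = flat_w[i:i + TARGET_IN * TARGET_HIDDEN].view(TARGET_HIDDEN, TARGET_IN)
    i += TARGET_IN * TARGET_HIDDEN
    b1 = flat_w[i:i + TARGET_HIDDEN]
    i += TARGET_HIDDEN
    w2 = flat_w[i:i + TARGET_HIDDEN * TARGET_OUT].view(TARGET_OUT, TARGET_HIDDEN)
    i += TARGET_HIDDEN * TARGET_OUT
    b2 = flat_w[i:i + TARGET_OUT]
    return torch.relu(x @ w1.T + b1) @ w2.T + b2


def hypervolume_2d(losses: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
    """Exact 2-D hypervolume dominated by `losses` w.r.t. reference point `ref`
    (minimization). Assumes all losses lie below `ref`; differentiable in the
    loss values, so it can be maximized by gradient ascent."""
    pts = losses[torch.argsort(losses[:, 0])]  # sort by first objective
    hv = losses.new_zeros(())
    prev_y = ref[1]
    for xi, yi in pts:
        width = torch.clamp(ref[0] - xi, min=0.0)
        height = torch.clamp(prev_y - yi, min=0.0)  # 0 for dominated points
        hv = hv + width * height
        prev_y = torch.minimum(prev_y, yi)
    return hv


# One training step on synthetic data with two conflicting regression targets.
hnet = HyperNet()
opt = torch.optim.Adam(hnet.parameters(), lr=1e-3)
ref = torch.tensor([3.0, 3.0])  # assumed upper bound on both losses
x = torch.randn(32, TARGET_IN)
y1, y2 = torch.randn(32, 1), torch.randn(32, 1)

prefs = torch.distributions.Dirichlet(torch.ones(2)).sample((5,))  # 5 trade-offs
loss_vecs = []
for r in prefs:
    pred = target_forward(hnet(r), x)
    loss_vecs.append(torch.stack([((pred - y1) ** 2).mean(),
                                  ((pred - y2) ** 2).mean()]))
losses = torch.stack(loss_vecs)           # (5, 2) loss matrix, one row per sample
hv_loss = -hypervolume_2d(losses, ref)    # maximize HV = minimize its negative
opt.zero_grad(); hv_loss.backward(); opt.step()
```

Because the hypervolume is a single scalar that couples all sampled solutions, each update accounts for the relationship between solutions on the front, which is exactly what the abstract argues existing PFL methods ignore. The paper's full method also uses the sampled preferences to steer each solution toward its trade-off region; that term is omitted from this sketch.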
Related papers
- A-FedPD: Aligning Dual-Drift is All Federated Primal-Dual Learning Needs [57.35402286842029]
We propose a novel Aligned Federated Primal Dual (A-FedPD) method, which constructs virtual dual updates to align global and local clients.
We provide a comprehensive analysis of the A-FedPD method's efficiency for protracted unparticipated clients.
arXiv Detail & Related papers (2024-09-27T17:00:32Z) - Preference-Optimized Pareto Set Learning for Blackbox Optimization [1.9628841617148691]
No single solution exists that can optimize all the objectives simultaneously.
In a typical MOO problem, the goal is to find a set of optimum solutions (Pareto set) that trades off the preferences among objectives.
Our formulation leads to a bilevel optimization problem that can be solved by e.g. differentiable cross-entropy methods.
arXiv Detail & Related papers (2024-08-19T13:23:07Z) - Pareto Low-Rank Adapters: Efficient Multi-Task Learning with Preferences [49.14535254003683]
PaLoRA is a novel parameter-efficient method that augments the original model with task-specific low-rank adapters.
Our experimental results show that PaLoRA outperforms MTL and PFL baselines across various datasets.
arXiv Detail & Related papers (2024-07-10T21:25:51Z) - UCB-driven Utility Function Search for Multi-objective Reinforcement Learning [75.11267478778295]
In Multi-objective Reinforcement Learning (MORL), agents are tasked with optimising decision-making behaviours that trade off between multiple objectives.
We focus on the case of linear utility functions parameterised by weight vectors w.
We introduce a method based on Upper Confidence Bound to efficiently search for the most promising weight vectors during different stages of the learning process.
arXiv Detail & Related papers (2024-05-01T09:34:42Z) - Interactive Hyperparameter Optimization in Multi-Objective Problems via Preference Learning [65.51668094117802]
We propose a human-centered interactive HPO approach tailored towards multi-objective machine learning (ML).
Instead of relying on the user guessing the most suitable indicator for their needs, our approach automatically learns an appropriate indicator.
arXiv Detail & Related papers (2023-09-07T09:22:05Z) - Pareto Manifold Learning: Tackling multiple tasks via ensembles of single-task models [50.33956216274694]
In Multi-Task Learning (MTL), tasks may compete and limit the performance achieved on each other, rather than guiding the optimization to a solution.
We propose Pareto Manifold Learning, an ensembling method in weight space.
arXiv Detail & Related papers (2022-10-18T11:20:54Z) - Self-Evolutionary Optimization for Pareto Front Learning [34.17125297176668]
Multi-objective optimization (MOO) approaches have been proposed for multitasking problems.
Recent MOO methods approximate multiple optimal solutions (Pareto front) with a single unified model.
We show that PFL can be re-formulated into another MOO problem with multiple objectives, each of which corresponds to different preference weights for the tasks.
arXiv Detail & Related papers (2021-10-07T13:38:57Z) - Low-Latency Federated Learning over Wireless Channels with Differential Privacy [142.5983499872664]
In federated learning (FL), model training is distributed over clients and local models are aggregated by a central server.
In this paper, we aim to minimize FL training delay over wireless channels, constrained by overall training performance as well as each client's differential privacy (DP) requirement.
arXiv Detail & Related papers (2021-06-20T13:51:18Z) - Multi-Objective Learning to Predict Pareto Fronts Using Hypervolume Maximization [0.0]
Real-world problems are often multi-objective with decision-makers unable to specify a priori which trade-off between the conflicting objectives is preferable.
We propose a novel learning approach to estimate the Pareto front by maximizing the dominated hypervolume (HV) of the average loss vectors corresponding to a set of learners.
In our approach, the set of learners are trained multi-objectively with a dynamic loss function, wherein each learner's losses are weighted by their HV maximizing gradients.
Experiments on three different multi-objective tasks show that the outputs of the set of learners are indeed well-spread on the Pareto front.
arXiv Detail & Related papers (2021-02-08T20:41:21Z)
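The dynamic weighting described in this last entry can be sketched briefly: take the partial derivatives of the hypervolume with respect to each learner's loss vector and use them, detached, as per-loss weights. The snippet below is one plausible reading of that description, not the authors' exact scheme; it reuses the hypervolume_2d helper from the sketch near the top, and the loss values are made up for illustration.

```python
import torch

# Illustrative only: `hypervolume_2d` is the helper defined in the sketch above,
# and these loss vectors stand in for the outputs of a set of learners.
losses = torch.tensor([[0.9, 0.3], [0.5, 0.6], [0.2, 1.1]], requires_grad=True)
ref = torch.tensor([2.0, 2.0])

hv = hypervolume_2d(losses, ref)
(hv_grads,) = torch.autograd.grad(hv, losses)  # d(HV)/d(loss) per learner
weights = (-hv_grads).detach()                 # larger weight = larger HV gain;
                                               # dominated points get weight 0
dynamic_loss = (weights * losses).sum()        # scalar objective; in training,
# dynamic_loss.backward() would push each learner along its HV-improving direction.
```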