Voronoi-grid-based Pareto Front Learning and Its Application to Collaborative Federated Learning
- URL: http://arxiv.org/abs/2505.20648v1
- Date: Tue, 27 May 2025 02:53:14 GMT
- Title: Voronoi-grid-based Pareto Front Learning and Its Application to Collaborative Federated Learning
- Authors: Mengmeng Chen, Xiaohu Wu, Qiqi Liu, Tiantian He, Yew-Soon Ong, Yaochu Jin, Qicheng Lao, Han Yu
- Abstract summary: We introduce PHN-HVVS, which decomposes the design space into Voronoi grids and deploys a genetic algorithm for Voronoi grid partitioning within high-dimensional space. Results on multiple MOO machine learning tasks demonstrate that PHN-HVVS outperforms the baselines significantly.
- Score: 45.66663796692696
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-objective optimization (MOO) arises extensively in machine learning and aims to find a set of Pareto-optimal solutions, called the Pareto front; it is fundamental, for example, to multiple avenues of research in federated learning (FL). Pareto Front Learning (PFL) is a powerful method, implemented using Pareto Hypernetworks (PHNs), for approximating the Pareto front. It learns a mapping from a given preference vector to a solution on the Pareto front. However, most existing PFL approaches still face two challenges: (a) sampling rays in high-dimensional spaces; (b) failing to cover the entire Pareto front when it has a convex shape. Here, we introduce a novel PFL framework, called PHN-HVVS, which decomposes the design space into Voronoi grids and deploys a genetic algorithm (GA) for Voronoi grid partitioning within high-dimensional space. We put forward a new loss function that contributes to more extensive coverage of the resultant Pareto front and maximizes the HV indicator. Experimental results on multiple MOO machine learning tasks demonstrate that PHN-HVVS significantly outperforms the baselines in generating the Pareto front. We also illustrate that PHN-HVVS advances the methodologies of several recent problems in the FL field. The code is available at https://github.com/buptcmm/phnhvvs.
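To make the PFL setup concrete, here is a minimal PyTorch-style sketch of a preference-conditioned hypernetwork trained with linear scalarization. This is a generic illustration of the PHN idea described in the abstract, not the authors' PHN-HVVS architecture: the `ParetoHypernetwork` class, layer sizes, and toy objectives are all invented for the example, and PHN-HVVS replaces the scalarized loss with an HV-maximizing one.

```python
import torch
import torch.nn as nn

class ParetoHypernetwork(nn.Module):
    """Maps a preference ray on the simplex to the weights of a small target net."""
    def __init__(self, n_objectives, target_sizes, hidden=64):
        super().__init__()
        self.target_sizes = target_sizes  # (out_dim, in_dim) per target layer
        n_out = sum(o * i + o for o, i in target_sizes)
        self.net = nn.Sequential(nn.Linear(n_objectives, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_out))

    def forward(self, ray, x):
        flat, h, k = self.net(ray), x, 0
        for j, (o, i) in enumerate(self.target_sizes):
            W = flat[k:k + o * i].view(o, i); k += o * i
            b = flat[k:k + o]; k += o
            h = h @ W.t() + b
            if j < len(self.target_sizes) - 1:
                h = torch.relu(h)  # nonlinearity on hidden layers only
        return h

phn = ParetoHypernetwork(n_objectives=2, target_sizes=[(16, 4), (2, 16)])
opt = torch.optim.Adam(phn.parameters(), lr=1e-3)
x = torch.randn(32, 4)
ray = torch.distributions.Dirichlet(torch.ones(2)).sample()  # sample a preference ray
out = phn(ray, x)
losses = torch.stack([out[:, 0].pow(2).mean(), out[:, 1].abs().mean()])  # toy objectives
opt.zero_grad()
(ray * losses).sum().backward()  # linear scalarization; PHN-HVVS uses an HV-based loss
opt.step()
```

Sampling the ray from a Dirichlet distribution is the standard trick for covering the preference simplex; challenge (a) above is precisely that such sampling degrades in high-dimensional objective spaces.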
Related papers
- Pareto Front Shape-Agnostic Pareto Set Learning in Multi-Objective Optimization [6.810571151954673]
Existing methods rely on the mapping of preference vectors in the objective space to optimal solutions in the decision space.
Our proposed method can handle any shape of the Pareto front and learn the Pareto set without requiring prior knowledge.
arXiv Detail & Related papers (2024-08-11T14:09:40Z)
- Pareto Low-Rank Adapters: Efficient Multi-Task Learning with Preferences [49.14535254003683]
We introduce PaLoRA, a novel parameter-efficient method that addresses multi-task trade-offs in machine learning.
Our experiments show that PaLoRA outperforms state-of-the-art MTL and PFL baselines across various datasets.
arXiv Detail & Related papers (2024-07-10T21:25:51Z)
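For intuition about the adapter mechanism, the sketch below mixes one low-rank adapter per task with preference weights at forward time. The `PreferenceLoRALinear` class and all sizes are invented for illustration; PaLoRA's actual parameterization and training schedule are described in the paper.

```python
import torch
import torch.nn as nn

class PreferenceLoRALinear(nn.Module):
    """Shared base layer plus one low-rank adapter per task/objective;
    the adapters are mixed by the preference vector at forward time."""
    def __init__(self, d_in, d_out, n_tasks, rank=4):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)  # shared backbone weight stays frozen
        self.A = nn.Parameter(torch.randn(n_tasks, rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_tasks, d_out, rank))

    def forward(self, x, pref):
        # pref: (n_tasks,) simplex weights; mix the task-specific low-rank updates
        delta = torch.einsum("t,tor,tri->oi", pref, self.B, self.A)
        return x @ (self.base.weight + delta).t() + self.base.bias

layer = PreferenceLoRALinear(16, 8, n_tasks=2)
y = layer(torch.randn(4, 16), torch.tensor([0.3, 0.7]))  # trade-off (0.3, 0.7)
```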
- UCB-driven Utility Function Search for Multi-objective Reinforcement Learning [75.11267478778295]
In Multi-objective Reinforcement Learning (MORL), agents are tasked with optimising decision-making behaviours.
We focus on the case of linear utility functions parameterised by weight vectors w.
We introduce a method based on Upper Confidence Bound to efficiently search for the most promising weight vectors during different stages of the learning process.
arXiv Detail & Related papers (2024-05-01T09:34:42Z)
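The weight-vector search can be pictured as a bandit problem: each candidate weight vector is an arm, and its noisy utility is the reward. Below is a generic UCB1 sketch over a fixed grid of candidates; the paper's actual method differs in how candidates are generated and evaluated, and `evaluate` here is a stand-in for an expensive MORL training run.

```python
import numpy as np

rng = np.random.default_rng(0)

def ucb_weight_search(candidate_ws, evaluate, rounds=100, c=2.0):
    """Pick weight vectors by UCB1: explore under-sampled arms, exploit good ones."""
    n = len(candidate_ws)
    counts = np.zeros(n)
    means = np.zeros(n)
    for t in range(1, rounds + 1):
        if t <= n:                                   # play every arm once first
            i = t - 1
        else:
            ucb = means + c * np.sqrt(np.log(t) / counts)
            i = int(np.argmax(ucb))
        r = evaluate(candidate_ws[i])
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]       # incremental mean update
    return candidate_ws[int(np.argmax(means))]

# Toy usage: 2-objective weights on the simplex, noisy utility oracle.
ws = [np.array([a, 1 - a]) for a in np.linspace(0, 1, 11)]
best = ucb_weight_search(ws, lambda w: -(w[0] - 0.3) ** 2 + rng.normal(0, 0.01))
```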
- A Hyper-Transformer model for Controllable Pareto Front Learning with Split Feasibility Constraints [2.07180164747172]
We develop a hyper-transformer (Hyper-Trans) model for Controllable Pareto Front Learning (CPFL) with Split Feasibility Constraints (SFC).
We show that in computational experiments the Hyper-Trans model yields smaller MED errors than the Hyper-MLP model.
arXiv Detail & Related papers (2024-02-04T10:21:03Z)
- A Unified Algebraic Perspective on Lipschitz Neural Networks [88.14073994459586]
This paper introduces a novel perspective unifying various types of 1-Lipschitz neural networks.
We show that many existing techniques can be derived and generalized via finding analytical solutions of a common semidefinite programming (SDP) condition.
Our approach, called SDP-based Lipschitz Layers (SLL), allows us to design non-trivial yet efficient generalizations of convex potential layers.
arXiv Detail & Related papers (2023-03-06T14:31:09Z)
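The SLL construction can be sketched from the residual form reported for this line of work, h(x) = x - 2W T^{-1} sigma(W^T x + b) with T = diag(sum_j |W^T W|_ij q_j / q_i). The code below is an unofficial transcription under that reading (square W, ReLU sigma, q kept positive via exp), not the reference implementation; consult the paper for the SDP derivation and the exact parameterization.

```python
import torch
import torch.nn as nn

class SLLLayer(nn.Module):
    """Residual layer that is 1-Lipschitz by construction (sketch of an SDP-based
    Lipschitz Layer): x - 2 W T^{-1} relu(W^T x + b)."""
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Parameter(torch.randn(dim, dim) * 0.1)
        self.b = nn.Parameter(torch.zeros(dim))
        self.log_q = nn.Parameter(torch.zeros(dim))  # q > 0 via exponentiation

    def forward(self, x):                            # x: (batch, dim)
        q = torch.exp(self.log_q)
        abs_WtW = torch.abs(self.W.t() @ self.W)
        T = torch.sum(abs_WtW * (q[None, :] / q[:, None]), dim=1)  # diagonal of T
        return x - 2.0 * (torch.relu(x @ self.W + self.b) / T) @ self.W.t()

y = SLLLayer(8)(torch.randn(4, 8))  # toy forward pass
```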
- Improving Pareto Front Learning via Multi-Sample Hypernetworks [4.129225533930966]
We propose a novel PFL framework, PHN-HVI, which employs a hypernetwork to generate multiple solutions from a set of diverse trade-off preferences.
The experimental results on several MOO machine learning tasks show that the proposed framework significantly outperforms the baselines.
arXiv Detail & Related papers (2022-12-02T12:19:12Z)
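The HV indicator that PHN-HVI (and PHN-HVVS above) maximizes has a simple closed form in two dimensions: the area dominated by the front relative to a reference point. A minimal sweep-line version, assuming both objectives are minimized:

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Area dominated by a 2-objective (minimization) front w.r.t. a reference
    point: sort by the first objective and sum the rectangular slices."""
    pts = np.asarray(sorted(points, key=lambda p: p[0]))
    hv, best_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < best_f2:                      # point is non-dominated in the sweep
            hv += (ref[0] - f1) * (best_f2 - f2)
            best_f2 = f2
    return hv

front = [(0.2, 0.8), (0.5, 0.4), (0.9, 0.1)]
print(hypervolume_2d(front, ref=(1.0, 1.0)))  # 0.39; larger is better
```

In higher dimensions exact HV computation grows expensive, which is one reason HV-maximizing methods pair it with careful partitioning of the preference space.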
- Pareto Manifold Learning: Tackling multiple tasks via ensembles of single-task models [50.33956216274694]
In Multi-Task Learning (MTL), tasks may compete and limit the performance achieved on each other, rather than guiding the optimization to a solution.
We propose Pareto Manifold Learning, an ensembling method in weight space.
arXiv Detail & Related papers (2022-10-18T11:20:54Z)
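The weight-space ensembling idea can be sketched as a convex combination of single-task model parameters; varying the mixing coefficients traces out a family of trade-off models. This shows only the interpolation step, with the `interpolate_weights` helper invented for illustration; the paper additionally learns where in weight space such combinations are Pareto-optimal.

```python
import copy
import torch
import torch.nn as nn

def interpolate_weights(models, alpha):
    """Convex combination of per-task model weights: one point on a weight-space
    manifold for simplex mixing coefficients alpha."""
    merged = copy.deepcopy(models[0])
    with torch.no_grad():
        for name, p in merged.named_parameters():
            p.copy_(sum(a * dict(m.named_parameters())[name]
                        for a, m in zip(alpha, models)))
    return merged

task_models = [nn.Linear(8, 2) for _ in range(2)]  # stand-ins for single-task models
mixed = interpolate_weights(task_models, alpha=(0.3, 0.7))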
- Learning the Pareto Front with Hypernetworks [44.72371822514582]
Multi-objective optimization (MOO) problems are prevalent in machine learning.
These problems have a set of optimal solutions, called the Pareto front, where each point on the front represents a different trade-off between possibly conflicting objectives.
Recent MOO methods can target a specific desired ray in loss space; however, most approaches still face two grave limitations.
arXiv Detail & Related papers (2020-10-08T16:39:20Z)