pFL-Bench: A Comprehensive Benchmark for Personalized Federated Learning
- URL: http://arxiv.org/abs/2206.03655v2
- Date: Fri, 10 Jun 2022 03:22:57 GMT
- Title: pFL-Bench: A Comprehensive Benchmark for Personalized Federated Learning
- Authors: Daoyuan Chen, Dawei Gao, Weirui Kuang, Yaliang Li, Bolin Ding
- Abstract summary: We propose the first comprehensive pFL benchmark, pFL-Bench, for rapid, reproducible, standardized and thorough pFL evaluation.
The proposed benchmark contains more than 10 datasets in diverse application domains with unified data partition and realistic heterogeneous settings.
We highlight the benefits and potential of state-of-the-art pFL methods and hope pFL-Bench enables further pFL research and broad applications.
- Score: 42.819532536636835
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Personalized Federated Learning (pFL), which utilizes and deploys distinct
local models, has gained increasing attention in recent years due to its
success in handling the statistical heterogeneity of FL clients. However,
standardized evaluation and systematic analysis of diverse pFL methods remain
a challenge. Firstly, the highly varied datasets, FL simulation settings and
pFL implementations prevent fast and fair comparisons of pFL methods. Secondly,
the effectiveness and robustness of pFL methods are under-explored in various
practical scenarios, such as generalization to new clients and participation
of resource-limited clients. Finally, the current pFL literature diverges in the
adopted evaluation and ablation protocols. To tackle these challenges, we
propose the first comprehensive pFL benchmark, pFL-Bench, for facilitating
rapid, reproducible, standardized and thorough pFL evaluation. The proposed
benchmark contains more than 10 datasets in diverse application domains with
unified data partition and realistic heterogeneous settings; a modular and
easy-to-extend pFL codebase with more than 20 competitive pFL baseline
implementations; and systematic evaluations under containerized environments in
terms of generalization, fairness, system overhead, and convergence. We
highlight the benefits and potential of state-of-the-art pFL methods and hope
pFL-Bench enables further pFL research and broad applications that would
otherwise be difficult owing to the absence of a dedicated benchmark. The code
is released at
https://github.com/alibaba/FederatedScope/tree/master/benchmark/pFL-Bench.
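The statistical heterogeneity that pFL targets can be illustrated with a toy model-splitting scheme: clients share and average a global "body" parameter while each keeps a personalized "head" local. This is a minimal sketch in plain Python; the names, the scalar "model", and the update rule are hypothetical and are not pFL-Bench's actual API.

```python
# Illustrative sketch of personalized FL (pFL): clients share a global
# "body" parameter, but each client keeps its own personalized "head".
# All names and the toy scalar "model" are hypothetical.

def local_update(body, head, data, lr=0.1):
    """One toy gradient step fitting body + head to the client's mean label."""
    pred = body + head
    grad = pred - sum(data) / len(data)   # d/dpred of 0.5 * (pred - mean)^2
    return body - lr * grad, head - lr * grad

def pfl_round(body, heads, client_data):
    """One communication round: local updates, then average only the body."""
    new_bodies = []
    for cid, data in client_data.items():
        b, h = local_update(body, heads[cid], data)
        new_bodies.append(b)
        heads[cid] = h                    # the personalized head stays local
    return sum(new_bodies) / len(new_bodies), heads

clients = {"a": [1.0, 1.2], "b": [3.0, 2.8]}   # statistically heterogeneous
body, heads = 0.0, {"a": 0.0, "b": 0.0}
for _ in range(50):
    body, heads = pfl_round(body, heads, clients)
# Personalized predictions body + heads[cid] now differ across clients,
# each tracking its own local data distribution.
```

A single shared model would be pulled toward the mean of all clients' labels; the per-client heads are what absorb the heterogeneity here.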
Related papers
- FuseFL: One-Shot Federated Learning through the Lens of Causality with Progressive Model Fusion [48.90879664138855]
One-shot Federated Learning (OFL) significantly reduces communication costs in FL by aggregating trained models only once.
However, the performance of advanced OFL methods still falls far behind that of standard FL.
We propose a novel learning approach to endow OFL with superb performance and low communication and storage costs, termed as FuseFL.
arXiv Detail & Related papers (2024-10-27T09:07:10Z) - Sharp Bounds for Sequential Federated Learning on Heterogeneous Data [5.872735527071425]
There are two paradigms in Federated Learning (FL): parallel FL (PFL) and sequential FL (SFL).
In contrast to PFL, the convergence theory of SFL on heterogeneous data is still lacking.
We derive upper bounds for strongly convex, general convex and non-convex objective functions.
We compare the upper bounds of SFL with those of PFL on heterogeneous data.
arXiv Detail & Related papers (2024-05-02T09:58:49Z) - PFLlib: Personalized Federated Learning Algorithm Library [27.954706790789434]
PFLlib is a comprehensive pFL algorithm library with an integrated evaluation platform.
We implement 34 state-of-the-art FL algorithms, including 7 classic tFL algorithms and 27 pFL algorithms.
PFLlib has already gained 850 stars and 199 forks on GitHub.
arXiv Detail & Related papers (2023-12-08T12:03:08Z) - Convergence Analysis of Sequential Federated Learning on Heterogeneous Data [5.872735527071425]
There are two categories of methods in Federated Learning (FL) for joint training across multiple clients: i) parallel FL (PFL), where clients train models in a parallel manner; and ii) sequential FL (SFL), where clients train in a sequential manner.
In this paper, we establish convergence guarantees for SFL on heterogeneous data, which were previously lacking.
Experimental results validate the counterintuitive analysis result that SFL outperforms PFL on extremely heterogeneous data in cross-device settings.
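The parallel-versus-sequential distinction above can be sketched with scalar toy models. All names are hypothetical, and this toy ignores the stochasticity and neural-network training that the paper's actual bounds account for:

```python
# Toy contrast of the two FL paradigms (illustrative only):
# - parallel FL (PFL): all clients start each round from the same global
#   model; their updated models are averaged.
# - sequential FL (SFL): the model is passed from client to client within
#   a round, each training on top of the previous client's result.

def step(model, target, lr=0.5):
    return model - lr * (model - target)     # one gradient step on 0.5*(m-t)^2

targets = [0.0, 4.0]                         # heterogeneous client data

def pfl_round(model):
    return sum(step(model, t) for t in targets) / len(targets)

def sfl_round(model):
    for t in targets:
        model = step(model, t)               # hand the model to the next client
    return model

m_pfl = m_sfl = 10.0
for _ in range(20):
    m_pfl, m_sfl = pfl_round(m_pfl), sfl_round(m_sfl)
# PFL converges to the average of the client targets; SFL's fixed point
# is biased toward the clients visited later in the sequence.
```

Even this toy shows why the two paradigms need separate convergence analyses: the SFL iterate depends on the visiting order, while the PFL iterate does not.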
arXiv Detail & Related papers (2023-11-06T14:48:51Z) - Bayesian Federated Learning: A Survey [54.40136267717288]
Federated learning (FL) demonstrates its advantages in integrating distributed infrastructure, communication, computing and learning in a privacy-preserving manner.
The robustness and capabilities of existing FL methods are challenged by limited and dynamic data and conditions.
Bayesian FL (BFL) has emerged as a promising approach to address these issues.
arXiv Detail & Related papers (2023-04-26T03:41:17Z) - Hierarchical Personalized Federated Learning Over Massive Mobile Edge Computing Networks [95.39148209543175]
We propose hierarchical PFL (HPFL), an algorithm for deploying PFL over massive MEC networks.
HPFL combines the objectives of training loss minimization and round latency minimization while jointly determining the optimal bandwidth allocation.
arXiv Detail & Related papers (2023-03-19T06:00:05Z) - Vertical Semi-Federated Learning for Efficient Online Advertising [50.18284051956359]
Semi-VFL (Vertical Semi-Federated Learning) is proposed as a practical way to apply VFL in industry.
We build an inference-efficient single-party student model applicable to the whole sample space.
New representation distillation methods are designed to extract cross-party feature correlations for both the overlapped and non-overlapped data.
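As a rough illustration of representation distillation in general (not Semi-VFL's specific losses, which are designed separately for overlapped and non-overlapped data), a student model can be trained to match a teacher's intermediate representations, for example with a mean-squared-error objective. All names here are hypothetical:

```python
# Generic representation-distillation objective (illustrative only).

def distill_loss(student_repr, teacher_repr):
    """Mean squared error between student and teacher feature vectors."""
    assert len(student_repr) == len(teacher_repr)
    return sum((s - t) ** 2 for s, t in zip(student_repr, teacher_repr)) / len(student_repr)

def distill_step(student_repr, teacher_repr, lr=0.5):
    """One gradient step pulling the student's features toward the teacher's."""
    n = len(student_repr)
    return [s - lr * 2 * (s - t) / n          # gradient of the MSE above
            for s, t in zip(student_repr, teacher_repr)]

student = [0.0, 0.0]
teacher = [1.0, -1.0]                         # fixed teacher representation
for _ in range(30):
    student = distill_step(student, teacher)  # student features converge
```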
arXiv Detail & Related papers (2022-09-30T17:59:27Z) - Achieving Personalized Federated Learning with Sparse Local Models [75.76854544460981]
Federated learning (FL) is vulnerable to heterogeneously distributed data.
To counter this issue, personalized FL (PFL) was proposed to produce dedicated local models for each individual user.
Existing PFL solutions either generalize poorly across different model architectures or incur substantial extra computation and memory costs.
We propose FedSpa, a novel PFL scheme that employs personalized sparse masks to customize sparse local models on the edge.
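The personalized-sparse-mask idea can be sketched as follows: each client updates only the entries of the shared weights that its binary mask selects, yielding a sparse, client-specific local model. This is a hedged toy sketch; the names and the tiny dense weight vector are hypothetical, not FedSpa's actual implementation:

```python
# Sketch of personalized sparse masks: each client trains only the
# entries of the shared weights selected by its own binary mask.

def masked_update(weights, mask, grads, lr=0.1):
    """Apply a gradient step only where the client's mask is 1."""
    return [w - lr * g if m else w            # masked-out entries stay frozen
            for w, g, m in zip(weights, grads, mask)]

def sparsify(weights, mask):
    """The client's personalized local model: a masked view of the weights."""
    return [w if m else 0.0 for w, m in zip(weights, mask)]

global_w = [1.0, 1.0, 1.0, 1.0]               # shared dense weights
mask_a = [1, 1, 0, 0]                         # client a's personalized mask
mask_b = [0, 0, 1, 1]                         # client b's personalized mask
grads = [0.5, 0.5, 0.5, 0.5]                  # toy per-entry gradients

local_a = sparsify(masked_update(global_w, mask_a, grads), mask_a)
local_b = sparsify(masked_update(global_w, mask_b, grads), mask_b)
```

Each client stores and communicates only its masked entries, which is where the computation and memory savings come from.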
arXiv Detail & Related papers (2022-01-27T08:43:11Z) - EasyFL: A Low-code Federated Learning Platform For Dummies [21.984721627569783]
We propose the first low-code Federated Learning (FL) platform, EasyFL, to enable users with various levels of expertise to experiment and prototype FL applications with little coding.
With only a few lines of code, EasyFL empowers them with many out-of-the-box functionalities to accelerate experimentation and deployment.
Our implementation shows that EasyFL requires only three lines of code to build a vanilla FL application, at least 10x fewer than other platforms require.
arXiv Detail & Related papers (2021-05-17T04:15:55Z) - Towards Personalized Federated Learning [20.586573091790665]
We present a unique taxonomy dividing PFL techniques into data-based and model-based approaches.
We highlight their key ideas, and envision promising future trajectories of research towards new PFL architectural design.
arXiv Detail & Related papers (2021-03-01T02:45:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.