FS-Real: Towards Real-World Cross-Device Federated Learning
- URL: http://arxiv.org/abs/2303.13363v1
- Date: Thu, 23 Mar 2023 15:37:17 GMT
- Title: FS-Real: Towards Real-World Cross-Device Federated Learning
- Authors: Daoyuan Chen, Dawei Gao, Yuexiang Xie, Xuchen Pan, Zitao Li, Yaliang
Li, Bolin Ding, Jingren Zhou
- Abstract summary: Federated Learning (FL) aims to train high-quality models in collaboration with distributed clients while not uploading their local data.
There is still a considerable gap between the flourishing FL research and real-world scenarios, mainly caused by the characteristics of heterogeneous devices and their scales.
We propose an efficient and scalable prototyping system for real-world cross-device FL, FS-Real.
- Score: 60.91678132132229
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) aims to train high-quality models in collaboration
with distributed clients while not uploading their local data, which attracts
increasing attention in both academia and industry. However, there is still a
considerable gap between the flourishing FL research and real-world scenarios,
mainly caused by the characteristics of heterogeneous devices and their scales.
Most existing works conduct evaluations with homogeneous devices, which are
mismatched with the diversity and variability of heterogeneous devices in
real-world scenarios. Moreover, it is challenging to conduct research and
development at scale with heterogeneous devices due to limited resources and
complex software stacks. These two key factors are important yet underexplored
in FL research as they directly impact the FL training dynamics and final
performance, making the effectiveness and usability of FL algorithms unclear.
To bridge the gap, in this paper, we propose an efficient and scalable
prototyping system for real-world cross-device FL, FS-Real. It supports
heterogeneous device runtimes, includes a parallelism- and robustness-enhanced
FL server, and provides implementations of, and extensibility for, advanced FL
utility features such as personalization, communication compression, and
asynchronous aggregation. To demonstrate the usability and efficiency of
FS-Real, we conduct
extensive experiments with various device distributions, quantify and analyze
the effects of heterogeneous devices at various scales, and further provide
insights and open discussions about real-world FL scenarios. Our system is
released to help pave the way for further real-world FL research and broad
applications involving diverse devices and scales.
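The abstract names server-side aggregation and communication compression as core utility features. A minimal, hypothetical sketch of how these pieces typically fit together is below: clients sparsify their updates with top-k selection before upload, and the server applies a FedAvg-style weighted average. The function names (`topk_compress`, `fedavg`) and the flat-list "model" are illustrative stand-ins, not FS-Real's actual API.

```python
# Illustrative only: top-k sparsification (a common compression baseline)
# feeding a FedAvg-style weighted average on the server.

def topk_compress(update, k):
    """Keep only the k largest-magnitude entries of a flat update vector."""
    idx = sorted(range(len(update)), key=lambda i: abs(update[i]), reverse=True)[:k]
    return {i: update[i] for i in idx}  # sparse {index: value} encoding

def fedavg(global_model, sparse_updates, weights):
    """Apply the weight-normalized average of (sparse) client updates."""
    total = sum(weights)
    new_model = list(global_model)
    for upd, w in zip(sparse_updates, weights):
        for i, v in upd.items():
            new_model[i] += (w / total) * v
    return new_model

model = [0.0, 0.0, 0.0, 0.0]
u1 = topk_compress([0.4, -0.1, 0.05, 0.9], k=2)  # keeps indices 3 and 0
u2 = topk_compress([0.2, 0.8, -0.6, 0.1], k=2)   # keeps indices 1 and 2
model = fedavg(model, [u1, u2], weights=[1.0, 1.0])
```

In a real system the sparse encoding is what crosses the network, cutting per-round communication from the full model size to roughly k indices plus values per client.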
Related papers
- Can We Theoretically Quantify the Impacts of Local Updates on the Generalization Performance of Federated Learning? [50.03434441234569]
Federated Learning (FL) has gained significant popularity due to its effectiveness in training machine learning models across diverse sites without requiring direct data sharing.
While various algorithms have shown that FL with local updates is a communication-efficient distributed learning framework, the generalization performance of FL with local updates has received comparatively less attention.
arXiv Detail & Related papers (2024-09-05T19:00:18Z) - Federated Learning for 6G: Paradigms, Taxonomy, Recent Advances and
Insights [52.024964564408]
This paper examines the added-value of implementing Federated Learning throughout all levels of the protocol stack.
It presents important FL applications, addresses hot topics, and provides
valuable insights and explicit guidance for future research and development.
Our concluding remarks aim to leverage the synergy between FL and future 6G,
while highlighting FL's potential to revolutionize the wireless industry.
arXiv Detail & Related papers (2023-12-07T20:39:57Z) - An Empirical Study of Federated Learning on IoT-Edge Devices: Resource
Allocation and Heterogeneity [2.055204980188575]
Federated Learning (FL) is a distributed approach in which a single server and multiple clients collaboratively build an ML model without moving data away from clients.
In this study, we systematically conduct extensive experiments on a large network of IoT and edge devices (called IoT-Edge devices) to present FL real-world characteristics.
arXiv Detail & Related papers (2023-05-31T13:16:07Z) - Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation
and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recent FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which brings FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
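The MAML-style view of FL sketched in this summary can be illustrated with a toy first-order example: each client takes one inner gradient step on its local loss before computing the meta-gradient the server averages. Everything here is a hypothetical stand-in (client "data" reduced to a scalar target t, loss (w - t)^2), not the paper's actual algorithm or hyperparameters.

```python
# Illustrative first-order MAML-style FL round on a scalar model w.
# Client loss: L_t(w) = (w - t)^2, so grad L_t(w) = 2 * (w - t).

def client_meta_grad(w, t, alpha):
    g = 2 * (w - t)             # inner gradient at the global model
    w_adapted = w - alpha * g   # one-step local adaptation
    return 2 * (w_adapted - t)  # first-order meta-gradient at the adapted point

def server_round(w, targets, alpha=0.1, beta=0.5):
    grads = [client_meta_grad(w, t, alpha) for t in targets]
    return w - beta * sum(grads) / len(grads)

w = 0.0
for _ in range(50):
    w = server_round(w, targets=[1.0, 3.0])
# w converges toward the targets' mean (2.0): a global model from which
# each client can adapt quickly with a single local gradient step.
```

The point of the MAML framing is exactly this: the server optimizes for post-adaptation performance, not for the raw global loss.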
arXiv Detail & Related papers (2023-03-23T02:42:10Z) - Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of on-device storage on FL performance has not yet been explored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
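One classic baseline for keeping a bounded, representative sample of a data stream, relevant to the limited on-device storage setting this summary describes, is reservoir sampling (Algorithm R). The sketch below is a generic illustration of that baseline, not the selection policy proposed in the paper.

```python
# Illustrative reservoir sampling (Algorithm R): maintain a uniform random
# sample of a data stream in a fixed-capacity on-device buffer.
import random

def reservoir_update(buffer, capacity, sample, n_seen):
    """Process the n_seen-th stream item (1-based count including this one)."""
    if len(buffer) < capacity:
        buffer.append(sample)
    else:
        j = random.randrange(n_seen)  # uniform over all items seen so far
        if j < capacity:
            buffer[j] = sample        # replace a stored item with probability capacity/n_seen
    return buffer

random.seed(0)
buf = []
for i, x in enumerate(range(1000), start=1):
    reservoir_update(buf, capacity=20, sample=x, n_seen=i)
# buf now holds 20 items drawn uniformly from the 1000-item stream
```

Each stored item ends up in the buffer with equal probability (capacity / stream length), which makes the buffer an unbiased training set regardless of how long the stream runs.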
arXiv Detail & Related papers (2022-09-01T03:27:33Z) - Efficient Split-Mix Federated Learning for On-Demand and In-Situ
Customization [107.72786199113183]
Federated learning (FL) provides a distributed learning framework for multiple participants to collaborate learning without sharing raw data.
In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.
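The "in-situ customization of model sizes" idea can be pictured with a toy sketch: the model is split into several small base subnetworks trained jointly, and at deployment a device combines as many bases as its resource budget allows. The scalar "bases" and function names below are hypothetical illustrations, not the paper's architecture.

```python
# Illustrative in-situ model-size customization: a device averages the
# predictions of as many base subnetworks as its budget permits.

def base_predict(params, x):
    return params * x  # stand-in for one base subnetwork (a 1-D linear map)

def customized_predict(bases, x, budget):
    chosen = bases[:budget]  # in-situ: pick a budget-sized subset of bases
    return sum(base_predict(p, x) for p in chosen) / len(chosen)

bases = [1.0, 2.0, 3.0]        # three jointly trained base "subnetworks"
small = customized_predict(bases, 2.0, budget=2)  # low-resource device
full = customized_predict(bases, 2.0, budget=3)   # full-capacity device
```

The appeal is that one training run yields a whole family of deployable model sizes, with no per-device retraining.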
arXiv Detail & Related papers (2022-03-18T04:58:34Z) - FLAME: Federated Learning Across Multi-device Environments [9.810211000961647]
Federated Learning (FL) enables distributed training of machine learning models while keeping personal data on user devices private.
We propose FLAME, a user-centered FL training approach to counter statistical and system heterogeneity in multi-device environments.
Our experiment results show that FLAME outperforms various baselines by 4.8-33.8% higher F-1 score, 1.02-2.86x greater energy efficiency, and up to 2.02x speedup in convergence.
arXiv Detail & Related papers (2022-02-17T22:23:56Z) - On the Impact of Device and Behavioral Heterogeneity in Federated
Learning [5.038980064083677]
Federated learning (FL) is becoming a popular paradigm for collaborative learning over distributed, private datasets owned by non-trusting entities.
This paper describes the challenge of performing training over largely heterogeneous datasets, devices, and networks.
We conduct an empirical study spanning close to 1.5K unique configurations on five popular FL benchmarks.
arXiv Detail & Related papers (2021-02-15T12:04:38Z) - Flower: A Friendly Federated Learning Research Framework [18.54638343801354]
Federated Learning (FL) has emerged as a promising technique for edge devices to collaboratively learn a shared prediction model.
We present Flower -- a comprehensive FL framework that distinguishes itself from existing platforms by offering new facilities to execute large-scale FL experiments.
arXiv Detail & Related papers (2020-07-28T17:59:07Z) - Characterizing Impacts of Heterogeneity in Federated Learning upon
Large-Scale Smartphone Data [23.67491703843822]
Federated learning (FL) is an emerging, privacy-preserving machine learning paradigm, drawing tremendous attention in academia and industry.
A unique characteristic of FL is heterogeneity, which resides in the various hardware specifications and dynamic states across the participating devices.
We conduct extensive experiments to compare the performance of state-of-the-art FL algorithms under heterogeneity-aware and heterogeneity-unaware settings.
arXiv Detail & Related papers (2020-06-12T07:49:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.