Flower: A Friendly Federated Learning Research Framework
- URL: http://arxiv.org/abs/2007.14390v5
- Date: Sat, 5 Mar 2022 20:30:32 GMT
- Title: Flower: A Friendly Federated Learning Research Framework
- Authors: Daniel J. Beutel, Taner Topal, Akhil Mathur, Xinchi Qiu, Javier
Fernandez-Marques, Yan Gao, Lorenzo Sani, Kwing Hei Li, Titouan Parcollet,
Pedro Porto Buarque de Gusmão, Nicholas D. Lane
- Abstract summary: Federated Learning (FL) has emerged as a promising technique for edge devices to collaboratively learn a shared prediction model.
We present Flower -- a comprehensive FL framework that distinguishes itself from existing platforms by offering new facilities to execute large-scale FL experiments.
- Score: 18.54638343801354
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) has emerged as a promising technique for edge devices
to collaboratively learn a shared prediction model, while keeping their
training data on the device, thereby decoupling the ability to do machine
learning from the need to store the data in the cloud. However, FL is difficult
to implement realistically, both in terms of scale and systems heterogeneity.
Although there are a number of research frameworks available to simulate FL
algorithms, they do not support the study of scalable FL workloads on
heterogeneous edge devices.
In this paper, we present Flower -- a comprehensive FL framework that
distinguishes itself from existing platforms by offering new facilities to
execute large-scale FL experiments and consider richly heterogeneous FL device
scenarios. Our experiments show that Flower can run FL experiments with up to
15 million clients using only a pair of high-end GPUs. Researchers can then seamlessly
migrate experiments to real devices to examine other parts of the design space.
We believe Flower provides the community with a critical new tool for FL study
and development.
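The training loop the abstract describes, in which clients learn collaboratively while keeping data on-device, is typically realized as federated averaging (FedAvg). Below is a minimal, self-contained sketch of one such loop on a toy one-parameter model; the function names and setup are illustrative assumptions for exposition, not Flower's actual API.

```python
# Sketch of federated averaging (FedAvg): clients train locally on private
# data and send only model updates to the server, which aggregates them
# weighted by each client's dataset size. Illustrative only, NOT Flower's API.
from typing import List, Tuple

Weights = List[float]  # a flat parameter vector stands in for a real model

def local_update(global_w: Weights, data: List[Tuple[float, float]],
                 lr: float = 0.1) -> Weights:
    """One local SGD pass on a 1-parameter linear model y = w * x."""
    w = list(global_w)
    for x, y in data:
        grad = 2 * (w[0] * x - y) * x   # d/dw of the squared error
        w[0] -= lr * grad
    return w

def fedavg(updates: List[Weights], sizes: List[int]) -> Weights:
    """Aggregate client weights, weighted by local dataset size."""
    total = sum(sizes)
    dim = len(updates[0])
    return [sum(u[i] * n for u, n in zip(updates, sizes)) / total
            for i in range(dim)]

# Two clients whose private data follows y = 2x; raw data never leaves them.
client_data = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
global_w: Weights = [0.0]
for _ in range(20):  # communication rounds
    updates = [local_update(global_w, d) for d in client_data]
    global_w = fedavg(updates, [len(d) for d in client_data])
print(round(global_w[0], 2))  # converges toward w = 2
```

In a real framework this pattern is the same, only the parameter vector is a full model and clients run on separate devices; the key design point FedAvg fixes is that only parameters, never raw data, cross the network.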
Related papers
- pfl-research: simulation framework for accelerating research in Private Federated Learning [6.421821657238535]
pfl-research is a fast, modular, and easy-to-use Python framework for simulating Federated learning (FL)
It supports a range of simulation setups, including PyTorch and non-neural network models, and is tightly integrated with state-of-the-art algorithms.
We release a suite of benchmarks that evaluates an algorithm's overall performance on a diverse set of realistic scenarios.
arXiv Detail & Related papers (2024-04-09T16:23:01Z)
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
With the advent of Foundation Models (FMs), however, the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
arXiv Detail & Related papers (2024-01-09T10:22:23Z)
- An Empirical Study of Federated Learning on IoT-Edge Devices: Resource Allocation and Heterogeneity [2.055204980188575]
Federated Learning (FL) is a distributed approach in which a single server and multiple clients collaboratively build an ML model without moving data away from clients.
In this study, we systematically conduct extensive experiments on a large network of IoT and edge devices (called IoT-Edge devices) to present FL real-world characteristics.
arXiv Detail & Related papers (2023-05-31T13:16:07Z)
- FS-Real: Towards Real-World Cross-Device Federated Learning [60.91678132132229]
Federated Learning (FL) aims to train high-quality models in collaboration with distributed clients while not uploading their local data.
There is still a considerable gap between flourishing FL research and real-world scenarios, caused mainly by the characteristics of heterogeneous devices and their scale.
We propose an efficient and scalable prototyping system for real-world cross-device FL, FS-Real.
arXiv Detail & Related papers (2023-03-23T15:37:17Z)
- Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recently, FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which gives FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
arXiv Detail & Related papers (2023-03-23T02:42:10Z)
- FLamby: Datasets and Benchmarks for Cross-Silo Federated Learning in Realistic Healthcare Settings [51.09574369310246]
Federated Learning (FL) is a novel approach enabling several clients holding sensitive data to collaboratively train machine learning models.
We propose a novel cross-silo dataset suite focused on healthcare, FLamby, to bridge the gap between theory and practice of cross-silo FL.
Our flexible and modular suite allows researchers to easily download datasets, reproduce results and re-use the different components for their research.
arXiv Detail & Related papers (2022-10-10T12:17:30Z)
- Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization [107.72786199113183]
Federated learning (FL) provides a distributed learning framework for multiple participants to collaborate learning without sharing raw data.
In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.
arXiv Detail & Related papers (2022-03-18T04:58:34Z)
- FL_PyTorch: optimization research simulator for federated learning [1.6114012813668934]
Federated Learning (FL) has emerged as a promising technique for edge devices to collaboratively learn a shared machine learning model.
FL_PyTorch is a suite of open-source software written in Python that builds on top of PyTorch, one of the most popular deep learning (DL) research frameworks.
arXiv Detail & Related papers (2022-02-07T12:18:28Z)
- On-device Federated Learning with Flower [22.719117235237036]
Federated Learning (FL) allows edge devices to collaboratively learn a shared prediction model while keeping their training data on the device.
Despite the algorithmic advancements in FL, the support for on-device training of FL algorithms on edge devices remains poor.
We present an exploration of on-device FL on various smartphones and embedded devices using the Flower framework.
arXiv Detail & Related papers (2021-04-07T10:42:14Z)
- FedML: A Research Library and Benchmark for Federated Machine Learning [55.09054608875831]
Federated learning (FL) is a rapidly growing research field in machine learning.
Existing FL libraries cannot adequately support diverse algorithmic development.
We introduce FedML, an open research library and benchmark to facilitate FL algorithm development and fair performance comparison.
arXiv Detail & Related papers (2020-07-27T13:02:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.