FL_PyTorch: optimization research simulator for federated learning
- URL: http://arxiv.org/abs/2202.03099v1
- Date: Mon, 7 Feb 2022 12:18:28 GMT
- Title: FL_PyTorch: optimization research simulator for federated learning
- Authors: Konstantin Burlachenko, Samuel Horváth, Peter Richtárik
- Abstract summary: Federated Learning (FL) has emerged as a promising technique for edge devices to collaboratively learn a shared machine learning model.
FL_PyTorch is a suite of open-source software written in Python that builds on top of one of the most popular research Deep Learning (DL) frameworks, PyTorch.
- Score: 1.6114012813668934
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) has emerged as a promising technique for edge devices
to collaboratively learn a shared machine learning model while keeping training
data locally on the device, thereby removing the need to store and access the
full data in the cloud. However, FL is difficult to implement, test and deploy
in practice considering heterogeneity in common edge device settings, making it
fundamentally hard for researchers to efficiently prototype and test their
optimization algorithms. In this work, our aim is to alleviate this problem by
introducing FL_PyTorch: a suite of open-source software written in Python that
builds on top of one of the most popular research Deep Learning (DL)
frameworks, PyTorch. We built FL_PyTorch as a research simulator for FL to enable fast
development, prototyping and experimenting with new and existing FL
optimization algorithms. Our system supports abstractions that provide
researchers with a sufficient level of flexibility to experiment with existing
and novel approaches to advance the state-of-the-art. Furthermore, FL_PyTorch
is a simple-to-use console system that can run several clients simultaneously
on local CPUs or GPU(s), and even on remote compute devices, without requiring
any distributed implementation from the user. FL_PyTorch also offers
a Graphical User Interface. For new methods, researchers only provide the
centralized implementation of their algorithm. To showcase the possibilities
and usefulness of our system, we experiment with several well-known
state-of-the-art FL algorithms and a few of the most common FL datasets.
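The abstract's core idea, that researchers provide only a centralized implementation and the simulator handles the federated execution, can be illustrated with a minimal FedAvg-style loop. This is a hypothetical sketch of the pattern such simulators automate, using a scalar model y = w * x; none of these names come from FL_PyTorch's actual API.

```python
def local_train(w, data, lr=0.05, steps=10):
    # A client's local optimizer: a few SGD steps on its private data.
    for _ in range(steps):
        for x, y in data:
            grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

def fedavg_round(w_global, client_datasets):
    # Server broadcasts the global weight, each client trains locally,
    # and the server averages the results (the FedAvg aggregation rule).
    local_ws = [local_train(w_global, d) for d in client_datasets]
    return sum(local_ws) / len(local_ws)

# Two simulated clients whose data is consistent with w = 3 (y = 3x);
# the data never leaves the "client", only the trained weight does.
clients = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0)]]
w = 0.0
for _ in range(20):
    w = fedavg_round(w, clients)
print(round(w, 3))  # converges toward 3.0
```

The point of a dedicated simulator is that the `fedavg_round` plumbing (client scheduling, device placement, aggregation) is provided once, so a researcher only swaps in a new `local_train` or aggregation rule.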
Related papers
- Where is the Testbed for my Federated Learning Research? [3.910931245706272]
We present CoLExT, a real-world testbed for federated learning (FL) research.
CoLExT is designed to streamline experimentation with custom FL algorithms in a rich testbed configuration space.
Through an initial investigation involving popular FL algorithms running on CoLExT, we reveal previously unknown trade-offs, inefficiencies, and programming bugs.
arXiv Detail & Related papers (2024-07-19T09:34:04Z) - pfl-research: simulation framework for accelerating research in Private Federated Learning [6.421821657238535]
pfl-research is a fast, modular, and easy-to-use Python framework for simulating Federated Learning (FL).
It supports TensorFlow, PyTorch, and non-neural network models, and is tightly integrated with state-of-the-art FL algorithms.
We release a suite of benchmarks that evaluates an algorithm's overall performance on a diverse set of realistic scenarios.
arXiv Detail & Related papers (2024-04-09T16:23:01Z) - Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
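The event-triggered idea behind this entry can be sketched independently of SAGA itself: a client transmits its local weight only when it has drifted past a threshold since the last upload, which reduces communication. A hypothetical scalar illustration, not the paper's actual algorithm:

```python
def event_triggered_uploads(weights, threshold=0.5):
    # Keep only the weights a client would actually transmit under an
    # event-triggered rule: send when the current weight has moved more
    # than `threshold` away from the last transmitted value.
    last_sent = weights[0]
    sent = [weights[0]]          # the first value is always transmitted
    for w in weights[1:]:
        if abs(w - last_sent) > threshold:
            sent.append(w)
            last_sent = w
    return sent

# Five local updates, but only three cross the threshold and get uploaded.
print(event_triggered_uploads([0.0, 0.2, 0.6, 0.7, 1.3]))  # [0.0, 0.6, 1.3]
```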
arXiv Detail & Related papers (2024-02-28T03:27:10Z) - Federated Fine-Tuning of LLMs on the Very Edge: The Good, the Bad, the Ugly [62.473245910234304]
This paper takes a hardware-centric approach to explore how Large Language Models can be brought to modern edge computing systems.
We provide a micro-level hardware benchmark, compare the model FLOP utilization to a state-of-the-art data center GPU, and study the network utilization in realistic conditions.
arXiv Detail & Related papers (2023-10-04T20:27:20Z) - FS-Real: Towards Real-World Cross-Device Federated Learning [60.91678132132229]
Federated Learning (FL) aims to train high-quality models in collaboration with distributed clients while not uploading their local data.
There is still a considerable gap between the flourishing FL research and real-world scenarios, mainly caused by the characteristics and scale of heterogeneous devices.
We propose an efficient and scalable prototyping system for real-world cross-device FL, FS-Real.
arXiv Detail & Related papers (2023-03-23T15:37:17Z) - Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recent FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which brings FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
arXiv Detail & Related papers (2023-03-23T02:42:10Z) - TorchFL: A Performant Library for Bootstrapping Federated Learning Experiments [4.075095403704456]
We introduce TorchFL, a performant library for bootstrapping federated learning experiments.
Built on a bottom-up design using PyTorch and Lightning, TorchFL provides ready-to-use abstractions for models, datasets, and FL algorithms.
arXiv Detail & Related papers (2022-11-01T20:31:55Z) - NVIDIA FLARE: Federated Learning from Simulation to Real-World [11.490933081543787]
We created NVIDIA FLARE as an open-source software development kit (SDK) to make it easier for data scientists to use FL in their research and real-world applications.
The SDK includes solutions for state-of-the-art FL algorithms and federated machine learning approaches.
arXiv Detail & Related papers (2022-10-24T14:30:50Z) - FLamby: Datasets and Benchmarks for Cross-Silo Federated Learning in
Realistic Healthcare Settings [51.09574369310246]
Federated Learning (FL) is a novel approach enabling several clients holding sensitive data to collaboratively train machine learning models.
We propose a novel cross-silo dataset suite focused on healthcare, FLamby, to bridge the gap between theory and practice of cross-silo FL.
Our flexible and modular suite allows researchers to easily download datasets, reproduce results and re-use the different components for their research.
arXiv Detail & Related papers (2022-10-10T12:17:30Z) - Flower: A Friendly Federated Learning Research Framework [18.54638343801354]
Federated Learning (FL) has emerged as a promising technique for edge devices to collaboratively learn a shared prediction model.
We present Flower -- a comprehensive FL framework that distinguishes itself from existing platforms by offering new facilities to execute large-scale FL experiments.
arXiv Detail & Related papers (2020-07-28T17:59:07Z) - FedML: A Research Library and Benchmark for Federated Machine Learning [55.09054608875831]
Federated learning (FL) is a rapidly growing research field in machine learning.
Existing FL libraries cannot adequately support diverse algorithmic development.
We introduce FedML, an open research library and benchmark to facilitate FL algorithm development and fair performance comparison.
arXiv Detail & Related papers (2020-07-27T13:02:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.