Hierarchical and Decentralised Federated Learning
- URL: http://arxiv.org/abs/2304.14982v1
- Date: Fri, 28 Apr 2023 17:06:50 GMT
- Title: Hierarchical and Decentralised Federated Learning
- Authors: Omer Rana, Theodoros Spyridopoulos, Nathaniel Hudson, Matt Baughman,
Kyle Chard, Ian Foster, Aftab Khan
- Abstract summary: Hierarchical Federated Learning extends the traditional FL process to enable more efficient model aggregation.
It can improve performance and reduce costs, whilst also enabling FL to be deployed in environments not well-suited to traditional FL.
H-FL will be crucial to future FL solutions as it can aggregate and distribute models at multiple levels, optimally balancing the trade-off between locality dependence and robustness to global anomalies.
- Score: 3.055801139718484
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning has shown enormous promise as a way of training ML models
in distributed environments while reducing communication costs and protecting
data privacy. However, the rise of complex cyber-physical systems, such as the
Internet of Things, presents new challenges that are not addressed by
traditional FL methods. Hierarchical Federated Learning extends the traditional FL process
to enable more efficient model aggregation based on application needs or
characteristics of the deployment environment (e.g., resource capabilities
and/or network connectivity). It illustrates the benefits of balancing
processing across the cloud-edge continuum. Hierarchical Federated Learning is
likely to be a key enabler for a wide range of applications, such as smart
farming and smart energy management, as it can improve performance and reduce
costs, whilst also enabling FL workflows to be deployed in environments that
are not well-suited to traditional FL. Model aggregation algorithms, software
frameworks, and infrastructures will need to be designed and implemented to
make such solutions accessible to researchers and engineers across a growing
set of domains.
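To make the multi-level aggregation described above concrete, here is a minimal sketch of two-level hierarchical FedAvg. It assumes models can be represented as flat NumPy weight vectors; the function names (weighted_average, edge_aggregate, cloud_aggregate) and the weighting-by-sample-count scheme are illustrative choices, not an interface defined in the paper.

```python
# Illustrative two-level hierarchical FedAvg: clients -> edge -> cloud.
# Models are stand-in NumPy weight vectors; all names are hypothetical.
import numpy as np

def weighted_average(models, sample_counts):
    """FedAvg-style weighted mean of model weight vectors."""
    total = sum(sample_counts)
    return sum(m * (n / total) for m, n in zip(models, sample_counts))

def edge_aggregate(client_models, client_samples):
    """Level 1: an edge aggregator combines the clients attached to it,
    and reports how many samples it represents."""
    return weighted_average(client_models, client_samples), sum(client_samples)

def cloud_aggregate(edge_results):
    """Level 2: the cloud combines per-edge models, weighted by the
    number of samples each edge site represents."""
    models = [m for m, _ in edge_results]
    counts = [n for _, n in edge_results]
    return weighted_average(models, counts)

# Example round with two edge sites serving a few clients each.
rng = np.random.default_rng(0)
site_a = edge_aggregate([rng.normal(size=4) for _ in range(3)], [100, 50, 25])
site_b = edge_aggregate([rng.normal(size=4) for _ in range(2)], [200, 80])
global_model = cloud_aggregate([site_a, site_b])
print(global_model)
```

The same pattern extends to deeper hierarchies: each tier simply treats the tier below it as its set of clients, which is what lets aggregation be placed wherever the cloud-edge continuum makes it cheapest.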
H-FL also introduces a number of new challenges. For instance, there are
implicit infrastructural challenges, since intermediate aggregation points
must be provisioned and maintained across the network. There is also a
trade-off between generalised models and personalised models. If there exist
geographical patterns in the data (e.g., soil conditions in a smart farm are
likely related to the geography of the region itself), then it is crucial
that models used locally can account for their own locality in addition to a
globally-learned model. H-FL will be crucial to future FL solutions as it can
aggregate and distribute models at multiple levels, optimally balancing the
trade-off between locality dependence and robustness to global anomalies.
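One simple way to serve this locality/globality trade-off, sketched below under the same flat-weight-vector assumption as above, is to interpolate between the globally aggregated weights and a locally fine-tuned copy. The mixing coefficient alpha is a hypothetical hyper-parameter, not something the paper prescribes.

```python
import numpy as np

def personalise(global_weights: np.ndarray,
                local_weights: np.ndarray,
                alpha: float = 0.7) -> np.ndarray:
    """Blend a locally fine-tuned model with the globally learned one.

    alpha near 1 favours locality (e.g., region-specific soil conditions);
    alpha near 0 favours the global model and its robustness to anomalies
    that are only visible in the aggregate.
    """
    return alpha * local_weights + (1.0 - alpha) * global_weights

# e.g., a farm-level model that leans on local data but keeps a global prior:
# farm_model = personalise(global_w, local_w, alpha=0.8)
```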
Related papers
- Federated Learning in Practice: Reflections and Projections [17.445826363802997]
Federated Learning (FL) is a machine learning technique that enables multiple entities to collaboratively learn a shared model without exchanging their local data.
Production systems from organizations like Google, Apple, and Meta demonstrate the real-world applicability of FL.
We propose a redefined FL framework that prioritizes privacy principles rather than rigid definitions.
arXiv Detail & Related papers (2024-10-11T15:10:38Z)
- Advances in APPFL: A Comprehensive and Extensible Federated Learning Framework [1.4206132527980742]
Federated learning (FL) is a distributed machine learning paradigm enabling collaborative model training while preserving data privacy.
We present the recent advances in developing APPFL, a framework and benchmarking suite for federated learning.
We demonstrate the capabilities of APPFL through extensive experiments evaluating various aspects of FL, including communication efficiency, privacy preservation, computational performance, and resource utilization.
arXiv Detail & Related papers (2024-09-17T22:20:26Z)
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FMs), however, the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
arXiv Detail & Related papers (2024-01-09T10:22:23Z)
- Deep Equilibrium Models Meet Federated Learning [71.57324258813675]
This study explores the problem of Federated Learning (FL) by utilizing the Deep Equilibrium (DEQ) models instead of conventional deep learning networks.
We claim that incorporating DEQ models into the federated learning framework naturally addresses several open problems in FL.
To the best of our knowledge, this study is the first to establish a connection between DEQ models and federated learning.
arXiv Detail & Related papers (2023-05-29T22:51:40Z)
- Vertical Federated Learning over Cloud-RAN: Convergence Analysis and System Optimization [82.12796238714589]
We propose a novel cloud radio access network (Cloud-RAN) based vertical FL system to enable fast and accurate model aggregation.
We characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions.
We establish a system optimization framework by joint transceiver and fronthaul quantization design, for which successive convex approximation and alternate convex search based system optimization algorithms are developed.
arXiv Detail & Related papers (2023-05-04T09:26:03Z)
- Fed-FSNet: Mitigating Non-I.I.D. Federated Learning via Fuzzy Synthesizing Network [19.23943687834319]
Federated learning (FL) has emerged as a promising privacy-preserving distributed machine learning framework.
We propose a novel FL training framework, dubbed Fed-FSNet, using a properly designed Fuzzy Synthesizing Network (FSNet) to mitigate the Non-I.I.D. at-the-source issue.
arXiv Detail & Related papers (2022-08-21T18:40:51Z)
- Introducing Federated Learning into Internet of Things ecosystems -- preliminary considerations [0.31402652384742363]
Federated learning (FL) was proposed to facilitate the training of models in a distributed environment.
It supports the protection of (local) data privacy and uses local resources for model training.
arXiv Detail & Related papers (2022-07-15T18:48:57Z)
- Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization [107.72786199113183]
Federated learning (FL) provides a distributed learning framework for multiple participants to collaborate learning without sharing raw data.
In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.
arXiv Detail & Related papers (2022-03-18T04:58:34Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraints.
We propose a data-free knowledge distillation method to fine-tune the global model on the server (FedFTG); see the sketch after this list.
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
- Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We present the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model-training mechanism for building a distributed control and aggregation methodology across regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)
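As referenced in the FedFTG entry above, the following is a hedged, simplified sketch of server-side data-free knowledge distillation, not the authors' implementation: a generator synthesizes pseudo-inputs, and the global model is fine-tuned to match the averaged client predictions on them. The model and generator architectures, step counts, and learning rates are all placeholders.

```python
# Simplified data-free distillation loop in the spirit of FedFTG.
# global_model, client_models, and generator are assumed PyTorch modules
# (classifiers emitting logits; generator maps noise -> inputs).
import torch
import torch.nn.functional as F

def server_finetune(global_model, client_models, generator,
                    steps=200, batch_size=64, z_dim=100, lr=1e-3):
    opt_student = torch.optim.Adam(global_model.parameters(), lr=lr)
    opt_gen = torch.optim.Adam(generator.parameters(), lr=lr)
    for _ in range(steps):
        # 1) Generator step: synthesize inputs on which the global model
        #    still disagrees with the client ensemble (maximize the gap).
        x = generator(torch.randn(batch_size, z_dim))
        with torch.no_grad():  # teachers act as a fixed ensemble
            teacher = torch.stack([m(x) for m in client_models]).mean(dim=0)
        student = global_model(x)
        gap = F.kl_div(F.log_softmax(student, dim=1),
                       F.softmax(teacher, dim=1), reduction="batchmean")
        opt_gen.zero_grad()
        (-gap).backward()  # generator seeks disagreement
        opt_gen.step()
        # 2) Student step: fit the global model to the ensemble on a fresh
        #    synthetic batch (knowledge distillation, no real data needed).
        with torch.no_grad():
            x = generator(torch.randn(batch_size, z_dim))
            teacher = torch.stack([m(x) for m in client_models]).mean(dim=0)
        student = global_model(x)
        distill = F.kl_div(F.log_softmax(student, dim=1),
                           F.softmax(teacher, dim=1), reduction="batchmean")
        opt_student.zero_grad()  # also clears stray grads from step 1
        distill.backward()
        opt_student.step()
    return global_model
```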