Towards Cooperative Federated Learning over Heterogeneous Edge/Fog Networks
- URL: http://arxiv.org/abs/2303.08361v1
- Date: Wed, 15 Mar 2023 04:41:36 GMT
- Title: Towards Cooperative Federated Learning over Heterogeneous Edge/Fog Networks
- Authors: Su Wang, Seyyedali Hosseinalipour, Vaneet Aggarwal, Christopher G. Brinton, David J. Love, Weifeng Su, and Mung Chiang
- Abstract summary: Federated learning (FL) has been promoted as a popular technique for training machine learning (ML) models over edge/fog networks.
Traditional implementations of FL have largely neglected the potential for inter-network cooperation.
We advocate for cooperative federated learning (CFL), a cooperative edge/fog ML paradigm built on device-to-device (D2D) and device-to-server (D2S) interactions.
- Score: 49.19502459827366
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) has been promoted as a popular technique for training
machine learning (ML) models over edge/fog networks. Traditional
implementations of FL have largely neglected the potential for inter-network
cooperation, treating edge/fog devices and other infrastructure participating
in ML as separate processing elements. Consequently, FL has been vulnerable to
several dimensions of network heterogeneity, such as varying computation
capabilities, communication resources, data qualities, and privacy demands. We
advocate for cooperative federated learning (CFL), a cooperative edge/fog ML
paradigm built on device-to-device (D2D) and device-to-server (D2S)
interactions. Through D2D and D2S cooperation, CFL counteracts network
heterogeneity by enabling a model/data/resource pooling mechanism, which
yields substantial improvements in ML model training quality and network
resource efficiency. We propose a set of core
methodologies that form the foundation of D2D and D2S cooperation and present
preliminary experiments that demonstrate their benefits. We also discuss new FL
functionalities enabled by this cooperative framework such as the integration
of unlabeled data and heterogeneous device privacy into ML model training.
Finally, we describe some open research directions at the intersection of
cooperative edge/fog and FL.
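To make the D2D/D2S interplay concrete, here is a minimal sketch of one CFL-style round, assuming a toy linear model, a hand-picked D2D topology, and FedAvg-style D2S aggregation; none of these specifics come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed): 6 edge devices, each holding a 4-parameter local model.
num_devices, dim = 6, 4
local_models = [rng.normal(size=dim) for _ in range(num_devices)]
data_sizes = np.array([50, 10, 80, 20, 60, 30])  # heterogeneous data quantities

# Assumed D2D topology: each device can exchange models with its listed neighbors.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}

def d2d_pool(models, neighbors):
    """D2D cooperation: each device averages its model with its neighbors' models."""
    return [np.mean([models[i]] + [models[j] for j in neighbors[i]], axis=0)
            for i in range(len(models))]

def d2s_aggregate(models, sizes):
    """D2S cooperation: FedAvg-style aggregation weighted by local data size."""
    weights = sizes / sizes.sum()
    return sum(w * m for w, m in zip(weights, models))

pooled = d2d_pool(local_models, neighbors)        # devices pool models over D2D links
global_model = d2s_aggregate(pooled, data_sizes)  # server aggregates over D2S links
print("global model after one CFL round:", np.round(global_model, 3))
```

The D2D step shrinks the spread among local models before the server aggregates them, which is one intuition for why cooperation mitigates data and resource heterogeneity.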
Related papers
- Device Sampling and Resource Optimization for Federated Learning in Cooperative Edge Networks [17.637761046608]
Federated learning (FedL) distributes machine learning (ML) across worker devices by having them train local models that are periodically aggregated by a server.
Conventional FedL, however, ignores two important characteristics of contemporary wireless networks: (i) the network may contain heterogeneous communication/computation resources, and (ii) there may be significant overlaps in devices' local data distributions.
We develop a novel optimization methodology that jointly accounts for these factors via intelligent device sampling complemented by device-to-device (D2D) offloading.
arXiv Detail & Related papers (2023-11-07T21:17:59Z)
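A hedged illustration of the entry above: greedily sample devices by an assumed utility-per-cost score, then let each unsampled device offload to a sampled D2D neighbor. The scores, costs, topology, and greedy heuristic are made up for the sketch; the paper solves this jointly as an optimization.

```python
import numpy as np

rng = np.random.default_rng(1)
num_devices = 8
utility = rng.uniform(0.1, 1.0, num_devices)  # assumed data-diversity score per device
cost = rng.uniform(0.5, 2.0, num_devices)     # assumed comm/compute cost per device
links = rng.random((num_devices, num_devices)) < 0.4
links = links | links.T                       # assumed symmetric D2D connectivity

def sample_devices(utility, cost, budget):
    """Greedy heuristic: pick devices by utility-per-cost until the budget is spent."""
    chosen, spent = [], 0.0
    for i in np.argsort(-utility / cost):
        if spent + cost[i] <= budget:
            chosen.append(int(i))
            spent += cost[i]
    return chosen

sampled = sample_devices(utility, cost, budget=4.0)

# D2D offloading: each unsampled device ships representative data to a sampled neighbor.
offload = {i: next(j for j in sampled if links[i, j])
           for i in range(num_devices)
           if i not in sampled and any(links[i, j] for j in sampled)}

print("sampled devices:", sampled)
print("offload map (unsampled -> sampled):", offload)
```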
- Semi-Federated Learning: Convergence Analysis and Optimization of A Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve the resulting intractable optimization problem and provide closed-form solutions for the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z)
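Stripped of the beamforming details, SemiFL's core idea can be caricatured as blending a centralized gradient (computed at the BS on data it receives) with a federated average of device gradients. The sketch below is an assumed toy rendering on least-squares regression, not the paper's algorithm; the equal blending weight is also an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 3
w = np.zeros(dim)  # shared linear-regression model

# Assumed split: the BS holds some raw data (centralized learning),
# while two devices keep their data local (federated learning).
def make_data(n):
    X = rng.normal(size=(n, dim))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=n)
    return X, y

bs_data = make_data(100)          # data offloaded to the base station
device_data = [make_data(40), make_data(40)]

def grad(w, X, y):
    """Least-squares gradient."""
    return 2 * X.T @ (X @ w - y) / len(y)

lr = 0.05
for _ in range(200):
    g_cl = grad(w, *bs_data)                                         # CL gradient at the BS
    g_fl = np.mean([grad(w, X, y) for X, y in device_data], axis=0)  # FL average
    w -= lr * 0.5 * (g_cl + g_fl)  # blend the two learning modes (assumed equal weight)

print("learned weights:", np.round(w, 2))
```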
- Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recent work has interpreted FL within a Model-Agnostic Meta-Learning (MAML) framework, which brings FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
arXiv Detail & Related papers (2023-03-23T02:42:10Z)
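For intuition, here is a toy sketch of the MAML view of FL: optimize a global initialization so that one local gradient step adapts it well to each device. The quadratic per-device losses and all constants are assumptions of the sketch, chosen so the MAML gradient has a simple closed form.

```python
import numpy as np

rng = np.random.default_rng(3)
dim = 2
theta = np.zeros(dim)                              # global (meta) initialization
optima = [rng.normal(size=dim) for _ in range(5)]  # assumed per-device optima

def loss_grad(w, opt):
    """Gradient of the quadratic loss ||w - opt||^2 (a stand-in for a local loss)."""
    return 2 * (w - opt)

alpha, beta = 0.1, 0.05          # inner (adaptation) and outer (meta) step sizes
for _ in range(300):
    meta_grad = np.zeros(dim)
    for opt in optima:
        adapted = theta - alpha * loss_grad(theta, opt)  # one-step local adaptation
        # For quadratic losses, the MAML gradient is (1 - 2*alpha) times the
        # gradient evaluated at the adapted point.
        meta_grad += (1 - 2 * alpha) * loss_grad(adapted, opt)
    theta -= beta * meta_grad / len(optima)

# In this symmetric toy case, the meta-initialization converges to the mean optimum.
print("meta-initialization:", np.round(theta, 3))
print("mean device optimum:", np.round(np.mean(optima, axis=0), 3))
```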
- Federated Learning and Meta Learning: Approaches, Applications, and Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, as well as their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z)
- Distributed Machine Learning in D2D-Enabled Heterogeneous Networks: Architectures, Performance, and Open Challenges [12.62400578837111]
This article introduces two innovative hybrid distributed machine learning architectures, namely, hybrid split FL (HSFL) and hybrid federated SL (HFSL).
HSFL and HFSL combine the strengths of both FL and split learning (SL) in D2D-enabled heterogeneous wireless networks.
Our simulations reveal notable reductions in communication/computation costs and training delays as compared to conventional FL and SL.
arXiv Detail & Related papers (2022-06-04T04:20:51Z)
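The split-learning half of these hybrids works by cutting the model at a layer: the device computes the front layers and transmits the activations; the helper (a server in plain SL, possibly a D2D peer in HSFL/HFSL) finishes the pass and returns gradients at the cut. A minimal sketch with an assumed two-layer network and made-up dimensions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed split: the device owns layer W1, the helper owns layer W2.
W1 = rng.normal(scale=0.5, size=(4, 8))   # device-side layer
W2 = rng.normal(scale=0.5, size=(8, 1))   # helper-side layer

X = rng.normal(size=(32, 4))
y = (X[:, :1] > 0).astype(float)          # toy binary labels

lr = 0.1
for _ in range(200):
    # Device: forward through its layers, transmit the "smashed" activations.
    h = np.tanh(X @ W1)
    # Helper: finish the forward pass and compute the output-layer gradient.
    pred = 1 / (1 + np.exp(-(h @ W2)))    # sigmoid output
    d_out = (pred - y) / len(y)           # grad of cross-entropy w.r.t. logits
    gW2 = h.T @ d_out
    d_h = d_out @ W2.T                    # gradient returned over the split
    # Device: backpropagate through its own layers only.
    gW1 = X.T @ (d_h * (1 - h**2))
    W2 -= lr * gW2
    W1 -= lr * gW1

print("train accuracy:", ((pred > 0.5).astype(float) == y).mean())
```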
- Parallel Successive Learning for Dynamic Distributed Model Training over Heterogeneous Wireless Networks [50.68446003616802]
Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices.
We develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions.
Our analysis sheds light on the notion of cold vs. warmed-up models and on model inertia in distributed machine learning.
arXiv Detail & Related papers (2022-02-07T05:11:01Z)
- Device Sampling for Heterogeneous Federated Learning: Theory, Algorithms, and Implementation [24.084053136210027]
We develop a sampling methodology based on graph convolutional networks (GCNs).
We find that our methodology, while sampling less than 5% of all devices, substantially outperforms conventional federated learning (FedL) in terms of both trained model accuracy and required resource utilization.
arXiv Detail & Related papers (2021-01-04T05:59:50Z)
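A hedged sketch of the scoring step the entry above implies: propagate device attributes over the connectivity graph with standard GCN layers, then keep the top-scoring handful of devices. The graph, attributes, and weights below are random and untrained, purely to show the mechanics; the paper learns this mapping.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)            # self-loops, as in standard GCN layers

# Degree-normalized adjacency: D^{-1/2} A D^{-1/2}
d = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(d, d))

X = rng.normal(size=(n, 3))         # assumed device attributes (compute, channel, data)
W1 = rng.normal(size=(3, 4))        # untrained GCN weights, illustrative only
W2 = rng.normal(size=(4, 1))

h = np.maximum(A_hat @ X @ W1, 0)   # GCN layer 1 with ReLU
scores = (A_hat @ h @ W2).ravel()   # GCN layer 2 -> one sampling score per device

k = 2                               # keep only a small fraction of devices
selected = np.argsort(-scores)[:k]
print("selected devices:", selected)
```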
- From Federated to Fog Learning: Distributed Machine Learning over Heterogeneous Wireless Networks [71.23327876898816]
Federated learning has emerged as a technique for training ML models at the network edge by leveraging processing capabilities across the nodes that collect the data.
We advocate a new learning paradigm called fog learning which will intelligently distribute ML model training across the continuum of nodes from edge devices to cloud servers.
arXiv Detail & Related papers (2020-06-07T05:11:18Z)
- Federated Learning with Cooperating Devices: A Consensus Approach for Massive IoT Networks [8.456633924613456]
Federated learning (FL) is emerging as a new paradigm to train machine learning models in distributed systems.
The paper proposes a fully distributed (or server-less) learning approach in which the FL algorithms leverage the cooperation of devices that perform data operations inside the network.
The approach lays the groundwork for integrating FL within 5G-and-beyond networks characterized by decentralized connectivity and computing.
arXiv Detail & Related papers (2019-12-27T15:16:04Z)
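The server-less mechanic above reduces to consensus averaging: each device repeatedly mixes its model with its D2D neighbors' using a doubly stochastic weight matrix, and all devices converge to the network-wide average with no aggregator. A minimal sketch on an assumed ring topology:

```python
import numpy as np

rng = np.random.default_rng(6)
n, dim = 6, 3
models = rng.normal(size=(n, dim))   # row i: device i's local model parameters
target = models.mean(axis=0)         # what a central server would have computed

# Assumed ring topology; 1/3 weight to self and to each ring neighbor gives a
# doubly stochastic mixing matrix, a standard choice in consensus schemes.
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):
        W[i, j % n] = 1 / 3

for _ in range(50):                  # gossip rounds over D2D links only
    models = W @ models

print("device 0 after consensus:", np.round(models[0], 4))
print("centralized average:    ", np.round(target, 4))
```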
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.