From Federated to Fog Learning: Distributed Machine Learning over
Heterogeneous Wireless Networks
- URL: http://arxiv.org/abs/2006.03594v3
- Date: Fri, 23 Oct 2020 14:42:05 GMT
- Title: From Federated to Fog Learning: Distributed Machine Learning over
Heterogeneous Wireless Networks
- Authors: Seyyedali Hosseinalipour and Christopher G. Brinton and Vaneet
Aggarwal and Huaiyu Dai and Mung Chiang
- Abstract summary: Federated learning has emerged as a technique for training ML models at the network edge by leveraging processing capabilities across the nodes that collect the data.
We advocate a new learning paradigm called fog learning which will intelligently distribute ML model training across the continuum of nodes from edge devices to cloud servers.
- Score: 71.23327876898816
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning (ML) tasks are becoming ubiquitous in today's network
applications. Federated learning has emerged recently as a technique for
training ML models at the network edge by leveraging processing capabilities
across the nodes that collect the data. There are several challenges with
employing conventional federated learning in contemporary networks, due to the
significant heterogeneity in compute and communication capabilities that exist
across devices. To address this, we advocate a new learning paradigm called fog
learning which will intelligently distribute ML model training across the
continuum of nodes from edge devices to cloud servers. Fog learning enhances
federated learning along three major dimensions: network, heterogeneity, and
proximity. It considers a multi-layer hybrid learning framework consisting of
heterogeneous devices with various proximities. It accounts for the topology
structures of the local networks among the heterogeneous nodes at each network
layer, orchestrating them for collaborative/cooperative learning through
device-to-device (D2D) communications. This migrates from star network
topologies used for parameter transfers in federated learning to more
distributed topologies at scale. We discuss several open research directions to
realizing fog learning.
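To make the contrast above concrete, the following is a minimal sketch of the multi-layer aggregation pattern the abstract describes: devices train locally, an intermediate (e.g., edge-server) node averages each cluster, and the cloud averages the clusters. Everything here, the least-squares local objective, the two-cluster topology, and the helper names local_update and weighted_average, is an assumption made for illustration; it is not the paper's algorithm.
```python
# Minimal sketch of multi-layer ("fog") model aggregation, assuming a simple
# linear-regression objective and a fixed two-layer topology.
import numpy as np

rng = np.random.default_rng(0)
DIM = 5  # model dimension (assumed for illustration)

def local_update(w, X, y, lr=0.1, steps=5):
    """Plain local gradient descent on a least-squares loss, standing in for device training."""
    w = w.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def weighted_average(models, weights):
    """FedAvg-style aggregation: convex combination of model vectors."""
    return np.average(np.stack(models), axis=0, weights=np.asarray(weights, dtype=float))

# Assumed topology: the cloud aggregates two edge clusters, each holding three devices.
w_true = rng.normal(size=DIM)
clusters = []
for _ in range(2):
    devices = []
    for _ in range(3):
        X = rng.normal(size=(20, DIM))
        devices.append((X, X @ w_true + 0.1 * rng.normal(size=20)))
    clusters.append(devices)

w_global = np.zeros(DIM)
for rnd in range(10):
    cluster_models, cluster_sizes = [], []
    for devices in clusters:
        # Layer 1: each device trains locally, then the edge node aggregates its cluster.
        local_models = [local_update(w_global, X, y) for X, y in devices]
        sizes = [len(y) for _, y in devices]
        cluster_models.append(weighted_average(local_models, sizes))
        cluster_sizes.append(sum(sizes))
    # Layer 2: the cloud aggregates the per-cluster models (the only aggregation a
    # conventional star-topology FedAvg round would perform, directly over devices).
    w_global = weighted_average(cluster_models, cluster_sizes)
    print(f"round {rnd}: distance to w_true = {np.linalg.norm(w_global - w_true):.4f}")
```
Conventional federated learning collapses the two layers into a single star, with every device uploading directly to one aggregator. Under fog learning's D2D view, devices in a cluster could instead reach the cluster average by exchanging parameters with one another rather than through an explicit edge node, as in the consensus-based sketch after the related-papers list.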
Related papers
- Towards Cooperative Federated Learning over Heterogeneous Edge/Fog
Networks [49.19502459827366]
Federated learning (FL) has been promoted as a popular technique for training machine learning (ML) models over edge/fog networks.
Traditional implementations of FL have largely neglected the potential for inter-network cooperation.
We advocate for cooperative federated learning (CFL), a cooperative edge/fog ML paradigm built on device-to-device (D2D) and device-to-server (D2S) interactions.
arXiv Detail & Related papers (2023-03-15T04:41:36Z)
- UAV-Aided Decentralized Learning over Mesh Networks [23.612400109629544]
Decentralized learning empowers wireless network devices to collaboratively train a machine learning (ML) model relying solely on device-to-device (D2D) communication.
The local connectivity of real-world mesh networks, due to the limited communication range of their wireless nodes, undermines the efficiency of decentralized learning protocols.
We propose an optimized UAV trajectory, defined as a sequence of waypoints that the UAV visits sequentially in order to transfer intelligence across sparsely connected groups of users.
arXiv Detail & Related papers (2022-03-02T10:39:40Z)
- Parallel Successive Learning for Dynamic Distributed Model Training over Heterogeneous Wireless Networks [50.68446003616802]
Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices.
We develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions.
Our analysis sheds light on the notion of cold vs. warmed up models, and model inertia in distributed machine learning.
arXiv Detail & Related papers (2022-02-07T05:11:01Z)
- Distributed Learning in Wireless Networks: Recent Progress and Future Challenges [170.35951727508225]
Next-generation wireless networks will enable many machine learning (ML) tools and applications to analyze various types of data collected by edge devices.
Distributed learning and inference techniques have been proposed as a means to enable edge devices to collaboratively train ML models without raw data exchanges.
This paper provides a comprehensive study of how distributed learning can be efficiently and effectively deployed over wireless edge networks.
arXiv Detail & Related papers (2021-04-05T20:57:56Z)
- Multi-Stage Hybrid Federated Learning over Large-Scale D2D-Enabled Fog Networks [61.30171206892684]
We develop a hybrid of intra- and inter-layer model learning that considers the network as a multi-layer cluster-based structure.
MH-FL considers the topology structures among the nodes in the clusters, including local networks formed via device-to-device (D2D) communications.
It orchestrates the devices at different network layers in a collaborative/cooperative manner to form local consensus on the model parameters.
arXiv Detail & Related papers (2020-07-18T20:03:07Z)
- Distributed Learning on Heterogeneous Resource-Constrained Devices [3.6187468775839373]
We consider a distributed system consisting of a heterogeneous set of devices, ranging from low-end to high-end.
We propose the first approach that enables distributed learning in such a heterogeneous system.
Applying our approach, each device employs a neural network (NN) with a topology that fits its capabilities; however, parts of these NNs share the same topology, so that their parameters can be jointly learned.
arXiv Detail & Related papers (2020-06-09T16:58:49Z)
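The heterogeneous-device entry above describes models sized to each device's capability that overlap in a shared part whose parameters are learned jointly. A minimal sketch of that pattern is given below, with linear models standing in for the neural networks and with assumed block sizes (SHARED, DEVICE_WIDTHS); only the shared parameter block is averaged across devices, while each device keeps its private block local. This illustrates the general idea, not the paper's method.
```python
# Simplified sketch of heterogeneity-aware distributed learning: each device fits a
# model sized to its capability, but all models contain a common "shared" block that
# is averaged across devices each round. Sizes and names are assumptions.
import numpy as np

rng = np.random.default_rng(1)
SHARED = 4                      # width of the jointly learned (shared) block -- assumed
DEVICE_WIDTHS = [0, 2, 6]       # extra, device-specific features per device -- assumed

def sgd_step(w_shared, w_private, X, y, lr=0.05):
    """One gradient step on a least-squares loss over [shared | private] feature blocks."""
    Xs = X[:, :SHARED]
    Xp = X[:, SHARED:SHARED + len(w_private)]
    resid = Xs @ w_shared + Xp @ w_private - y
    return (w_shared - lr * Xs.T @ resid / len(y),
            w_private - lr * Xp.T @ resid / len(y))

# Synthetic data: every device observes all features but can only model as many
# extra ones as its capability allows (its DEVICE_WIDTHS entry).
TOTAL = SHARED + max(DEVICE_WIDTHS)
w_true = rng.normal(size=TOTAL)
datasets = []
for _ in DEVICE_WIDTHS:
    X = rng.normal(size=(50, TOTAL))
    datasets.append((X, X @ w_true + 0.05 * rng.normal(size=50)))

w_shared = np.zeros(SHARED)
w_privates = [np.zeros(width) for width in DEVICE_WIDTHS]

for _ in range(30):
    shared_updates = []
    for i, (X, y) in enumerate(datasets):
        ws, wp = w_shared.copy(), w_privates[i]
        for _ in range(5):                       # local training on each device
            ws, wp = sgd_step(ws, wp, X, y)
        shared_updates.append(ws)
        w_privates[i] = wp                       # private blocks never leave their devices
    w_shared = np.mean(shared_updates, axis=0)   # only the shared block is learned jointly

# Low-end devices cannot model the extra features, so the shared estimate is only
# approximate; the point of the sketch is the shared/private split, not accuracy.
print("shared-block distance to ground truth:",
      np.linalg.norm(w_shared - w_true[:SHARED]))
```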
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address the open problems in these areas, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
- Federated Learning with Cooperating Devices: A Consensus Approach for Massive IoT Networks [8.456633924613456]
Federated learning (FL) is emerging as a new paradigm to train machine learning models in distributed systems.
The paper proposes a fully distributed (or server-less) learning approach: the proposed FL algorithms leverage the cooperation of devices that perform data operations inside the network.
The approach lays the groundwork for integration of FL within 5G and beyond networks characterized by decentralized connectivity and computing.
arXiv Detail & Related papers (2019-12-27T15:16:04Z)
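The server-less, consensus-based setup in the last entry can be sketched as decentralized gradient descent: each device alternates a local gradient step with an averaging (consensus) step over its D2D neighbors, and no aggregation server appears anywhere. The ring topology, Metropolis mixing weights, and least-squares losses below are assumptions chosen to keep the example self-contained; this is not the paper's specific algorithm.
```python
# Minimal sketch of server-less, consensus-based learning: devices alternate local
# gradient steps with averaging over D2D neighbors. Topology, mixing weights, and
# losses are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(2)
N, DIM = 6, 4                    # number of devices and model dimension (assumed)

# Ring topology: device i can exchange parameters only with i-1 and i+1 (mod N).
neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}

# Metropolis-Hastings weights give a symmetric, doubly stochastic mixing matrix,
# so repeated mixing drives all devices toward a common (consensus) model.
W = np.zeros((N, N))
for i in range(N):
    for j in neighbors[i]:
        W[i, j] = 1.0 / (1 + max(len(neighbors[i]), len(neighbors[j])))
    W[i, i] = 1.0 - W[i].sum()

# Each device holds its own least-squares problem around a common ground truth.
w_true = rng.normal(size=DIM)
data = []
for _ in range(N):
    X = rng.normal(size=(25, DIM))
    data.append((X, X @ w_true + 0.1 * rng.normal(size=25)))

models = np.zeros((N, DIM))      # one model per device, no server anywhere
for _ in range(200):
    grads = np.stack([X.T @ (X @ w - y) / len(y) for (X, y), w in zip(data, models)])
    models = W @ (models - 0.05 * grads)   # local step followed by neighbor averaging

spread = np.linalg.norm(models - models.mean(axis=0))
print(f"consensus spread: {spread:.4f}, distance to w_true: "
      f"{np.linalg.norm(models.mean(axis=0) - w_true):.4f}")
```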