Distributed Machine Learning in D2D-Enabled Heterogeneous Networks:
Architectures, Performance, and Open Challenges
- URL: http://arxiv.org/abs/2206.01906v2
- Date: Sat, 4 Nov 2023 11:44:24 GMT
- Title: Distributed Machine Learning in D2D-Enabled Heterogeneous Networks:
Architectures, Performance, and Open Challenges
- Authors: Zhipeng Cheng, Xuwei Fan, Minghui Liwang, Ning Chen, Xiaoyu Xia,
Xianbin Wang
- Abstract summary: This article introduces two innovative hybrid distributed machine learning architectures, namely, hybrid split FL (HSFL) and hybrid federated SL (HFSL).
HSFL and HFSL combine the strengths of both FL and SL in D2D-enabled heterogeneous wireless networks.
Our simulations reveal notable reductions in communication/computation costs and training delays as compared to conventional FL and SL.
- Score: 12.62400578837111
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ever-growing concerns regarding data privacy have led to a paradigm shift
in machine learning (ML) architectures from centralized to distributed
approaches, giving rise to federated learning (FL) and split learning (SL) as
the two predominant privacy-preserving ML mechanisms. However, implementing FL
or SL in device-to-device (D2D)-enabled heterogeneous networks with diverse
clients presents substantial challenges, including architecture scalability and
prolonged training delays. To address these challenges, this article introduces
two innovative hybrid distributed ML architectures, namely, hybrid split FL
(HSFL) and hybrid federated SL (HFSL). Such architectures combine the strengths
of both FL and SL in D2D-enabled heterogeneous wireless networks. We provide a
comprehensive analysis of the performance and advantages of HSFL and HFSL,
while also highlighting open challenges for future exploration. We support our
proposals with preliminary simulations using three datasets in non-IID
(non-independent and identically distributed) settings, demonstrating the
feasibility of our
architectures. Our simulations reveal notable reductions in
communication/computation costs and training delays as compared to conventional
FL and SL.
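The article itself contains no code; purely as a toy illustration of the two mechanisms being hybridized, the sketch below shows one split-learning step (only cut-layer activations and their gradients cross the device-server link) alongside FedAvg-style aggregation in PyTorch. The model, the cut point, and all names are illustrative assumptions, not the HSFL/HFSL designs.

```python
# Illustrative only: the two primitives that HSFL/HFSL hybridize.
import copy
import torch
import torch.nn as nn

client_part = nn.Sequential(nn.Linear(20, 64), nn.ReLU())  # device-side layers
server_part = nn.Sequential(nn.Linear(64, 10))             # server-side layers
loss_fn = nn.CrossEntropyLoss()

def sl_step(x, y, c_opt, s_opt):
    """One split-learning step: only the cut-layer activations and their
    gradients cross the device-server link."""
    c_opt.zero_grad(); s_opt.zero_grad()
    smashed = client_part(x)                 # device forward up to the cut layer
    act = smashed.detach().requires_grad_()  # "transmitted" activations
    loss = loss_fn(server_part(act), y)      # server forward + loss
    loss.backward()                          # server backward fills act.grad
    smashed.backward(act.grad)               # gradient "returned" to the device
    c_opt.step(); s_opt.step()
    return loss.item()

def fedavg(client_states):
    """FL aggregation: parameter-wise average of client model states."""
    avg = copy.deepcopy(client_states[0])
    for k in avg:
        avg[k] = torch.stack([s[k].float() for s in client_states]).mean(0)
    return avg
```

In HSFL and HFSL, as the abstract describes, heterogeneous clients would be split between FL-style and SL-style participation over D2D links, rather than every client using a single mechanism.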
Related papers
- FedConv: Enhancing Convolutional Neural Networks for Handling Data
Heterogeneity in Federated Learning [34.37155882617201]
Federated learning (FL) is an emerging paradigm in machine learning, where a shared model is collaboratively learned using data from multiple devices.
We systematically investigate the impact of different architectural elements, such as activation functions and normalization layers, on the performance within heterogeneous FL.
Our findings indicate that with strategic architectural modifications, pure CNNs can achieve a level of robustness that matches or even exceeds that of ViTs.
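As a hedged illustration of the kind of architectural-element swap investigated here (not FedConv's actual design), the toy PyTorch block below makes a small CNN's normalization layer and activation configurable; replacing BatchNorm with GroupNorm is one commonly studied modification under non-IID client data.

```python
# Illustrative: a CNN block whose normalization and activation are swappable,
# the kind of architectural element compared in heterogeneous-FL studies.
import torch.nn as nn

def make_block(cin, cout, norm="group", act="relu"):
    norms = {"group": nn.GroupNorm(8, cout), "batch": nn.BatchNorm2d(cout)}
    acts = {"relu": nn.ReLU(), "gelu": nn.GELU()}
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), norms[norm], acts[act])

# BatchNorm statistics are known to drift under non-IID client data,
# so a GroupNorm/GELU variant is one natural swap to compare:
model = nn.Sequential(make_block(3, 64), make_block(64, 64, norm="group", act="gelu"),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10))
```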
arXiv Detail & Related papers (2023-10-06T17:57:50Z)
- Semi-Federated Learning: Convergence Analysis and Optimization of A
Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve the resulting intractable optimization problem, providing closed-form solutions for the beamformers.
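A minimal sketch of the semi-federated idea under strong simplifications: a centralized gradient step on BS-collected data is blended with a FedAvg update from the devices. The blending weight `alpha` is a made-up knob; the paper's beamforming design and two-stage optimization are omitted.

```python
# Illustrative: merge a centralized (BS-side) gradient step with a FedAvg update.
import torch

def semifl_round(global_w, client_ws, bs_grad, lr=0.1, alpha=0.5):
    """alpha blends the FL average of device models with a CL step at the BS."""
    fl_w = {k: torch.stack([w[k] for w in client_ws]).mean(0) for k in global_w}
    cl_w = {k: global_w[k] - lr * bs_grad[k] for k in global_w}
    return {k: alpha * fl_w[k] + (1 - alpha) * cl_w[k] for k in global_w}
```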
arXiv Detail & Related papers (2023-10-04T03:32:39Z)
- Synergies Between Federated Learning and O-RAN: Towards an Elastic
Virtualized Architecture for Multiple Distributed Machine Learning Services [7.477830365234231]
We introduce a generic FL paradigm over NextG networks, called dynamic multi-service FL (DMS-FL).
We propose a novel distributed ML architecture called elastic FL (EV-FL).
arXiv Detail & Related papers (2023-04-14T19:21:42Z)
- Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation
and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recently, FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which brings FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
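As a rough sketch of the MAML view of FL (standard PyTorch and plain SGD are assumptions here), the global model acts as a meta-initialization that each device personalizes with a few local steps:

```python
# Illustrative: MAML-style fast adaptation -- the global FL model is a
# meta-initialization that a device adapts with a handful of local SGD steps.
import copy
import torch

def adapt(global_model, loader, loss_fn, steps=5, lr=1e-2):
    model = copy.deepcopy(global_model)   # start from the meta-initialization
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for step, (x, y) in enumerate(loader):
        if step == steps:                 # only a few personalization steps
            break
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model                          # device-personalized model
```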
arXiv Detail & Related papers (2023-03-23T02:42:10Z)
- Towards Cooperative Federated Learning over Heterogeneous Edge/Fog
Networks [49.19502459827366]
Federated learning (FL) has been promoted as a popular technique for training machine learning (ML) models over edge/fog networks.
Traditional implementations of FL have largely neglected the potential for inter-network cooperation.
We advocate for cooperative federated learning (CFL), a cooperative edge/fog ML paradigm built on device-to-device (D2D) and device-to-server (D2S) interactions.
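A toy sketch of the D2D/D2S interplay (illustrative only, not the paper's CFL methodology): devices first gossip-average with their D2D neighbors, after which only a subset of devices uploads to the server.

```python
# Illustrative: a D2D consensus (gossip) pass before D2S aggregation.
import torch

def d2d_mix(models, neighbors):
    """Each device averages its model with those of its D2D neighbors."""
    mixed = []
    for i, m in enumerate(models):
        group = [m] + [models[j] for j in neighbors[i]]
        mixed.append({k: torch.stack([g[k] for g in group]).mean(0) for k in m})
    return mixed

def cfl_round(models, neighbors, uploaders):
    models = d2d_mix(models, neighbors)   # local D2D cooperation
    return {k: torch.stack([models[i][k] for i in uploaders]).mean(0)
            for k in models[0]}           # only some devices reach the server
```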
arXiv Detail & Related papers (2023-03-15T04:41:36Z)
- Parallel Successive Learning for Dynamic Distributed Model Training over
Heterogeneous Wireless Networks [50.68446003616802]
Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices.
We develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions.
Our analysis sheds light on the notion of cold vs. warmed-up models and on model inertia in distributed machine learning.
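The model-inertia notion can be illustrated, very loosely, as a convex blend between a device's current (warmed-up) model and a freshly received global model; the `inertia` coefficient below is a hypothetical knob, not PSL's actual mechanism.

```python
# Illustrative: "model inertia" as a convex blend of local and global weights.
import torch

def inertial_update(local_w, global_w, inertia=0.7):
    """inertia -> 1 keeps the warm local model; inertia -> 0 adopts the
    incoming (possibly cold) global model."""
    return {k: inertia * local_w[k] + (1 - inertia) * global_w[k] for k in local_w}
```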
arXiv Detail & Related papers (2022-02-07T05:11:01Z)
- Joint Superposition Coding and Training for Federated Learning over
Multi-Width Neural Networks [52.93232352968347]
This paper aims to integrate two synergistic technologies: federated learning (FL) and width-adjustable slimmable neural networks (SNNs).
FL preserves data privacy by exchanging the locally trained models of mobile devices. Training SNNs is, however, non-trivial, particularly under wireless connections with time-varying channel conditions.
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
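A minimal sketch of superposition training on a width-slimmable model, assuming the half-width network is a nested slice of the full one; the superposition-coding side of SlimFL's aggregation is not modeled here.

```python
# Illustrative: superposition training (ST) -- one backward pass jointly trains
# the full-width model and its nested half-width submodel.
import torch.nn as nn
import torch.nn.functional as F

class SlimMLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(20, 64)
        self.fc2 = nn.Linear(64, 10)

    def forward(self, x, width=1.0):
        h = int(64 * width)  # number of active hidden units at this width
        z = F.relu(F.linear(x, self.fc1.weight[:h], self.fc1.bias[:h]))
        return F.linear(z, self.fc2.weight[:, :h], self.fc2.bias)

def st_step(model, x, y, opt, loss_fn):
    opt.zero_grad()
    loss = loss_fn(model(x, 1.0), y) + loss_fn(model(x, 0.5), y)
    loss.backward()  # gradients from both widths superpose on shared weights
    opt.step()
```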
arXiv Detail & Related papers (2021-12-05T11:17:17Z)
- Mobility-Aware Cluster Federated Learning in Hierarchical Wireless
Networks [81.83990083088345]
We develop a theoretical model to characterize the hierarchical federated learning (HFL) algorithm in wireless networks.
Our analysis proves that the learning performance of HFL deteriorates drastically with highly mobile users.
To circumvent these issues, we propose a mobility-aware cluster federated learning (MACFL) algorithm.
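A toy sketch of the hierarchical, mobility-aware idea (1-D user positions, nearest-edge clustering, and plain averaging are all assumptions here, not MACFL's actual scheme): users are re-clustered to edge servers each round before edge- and then cloud-level aggregation.

```python
# Illustrative: two-level hierarchical FL with per-round re-clustering of
# mobile users to their nearest edge server (toy 1-D mobility model).
import torch

def hfl_round(user_ws, positions, edge_xs):
    clusters = {e: [] for e in range(len(edge_xs))}
    for i, p in enumerate(positions):   # re-cluster as users move
        clusters[min(range(len(edge_xs)), key=lambda e: abs(p - edge_xs[e]))].append(i)
    # Edge-level aggregation over each non-empty cluster...
    edge_ws = [{k: torch.stack([user_ws[i][k] for i in ids]).mean(0)
                for k in user_ws[0]} for ids in clusters.values() if ids]
    # ...then cloud-level aggregation over the edge models.
    return {k: torch.stack([w[k] for w in edge_ws]).mean(0) for k in user_ws[0]}
```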
arXiv Detail & Related papers (2021-08-20T10:46:58Z)
- Federated Learning for Physical Layer Design [38.46522285374866]
Federated learning (FL) has recently been proposed as a distributed learning scheme.
FL is more communication-efficient and privacy-preserving than centralized learning (CL).
This article discusses the recent advances in FL-based training for physical layer design problems.
arXiv Detail & Related papers (2021-02-23T16:22:53Z)