D2D-Enabled Data Sharing for Distributed Machine Learning at Wireless
Network Edge
- URL: http://arxiv.org/abs/2001.11342v1
- Date: Tue, 28 Jan 2020 01:49:09 GMT
- Title: D2D-Enabled Data Sharing for Distributed Machine Learning at Wireless
Network Edge
- Authors: Xiaoran Cai, Xiaopeng Mo, Junyang Chen, and Jie Xu
- Abstract summary: Mobile edge learning is an emerging technique that enables distributed edge devices to collaborate in training shared machine learning models.
This paper proposes a new device-to-device (D2D) enabled data sharing approach, in which different edge devices share their data samples with each other over communication links.
Numerical results show that the proposed data sharing design significantly reduces the training delay, and also enhances the training accuracy when the data samples are non-independent and identically distributed (non-IID) among edge devices.
- Score: 6.721448732174383
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mobile edge learning is an emerging technique that enables distributed edge
devices to collaborate in training shared machine learning models by exploiting
their local data samples and communication and computation resources. To deal
with the straggler dilemma faced by this technique, this paper proposes a new
device-to-device (D2D) enabled data sharing approach, in which different edge
devices share their data samples with each other over communication links, in
order to properly adjust their computation loads and thereby increase the training
speed. Under this setup, we optimize the radio resource allocation for both
data sharing and distributed training, with the objective of minimizing the
total training delay under fixed numbers of local and global iterations.
Numerical results show that the proposed data sharing design significantly
reduces the training delay, and also enhances the training accuracy when the
data samples are non-independent and identically distributed (non-IID) among
edge devices.
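To make the mechanism concrete, here is a minimal Python sketch of the straggler-mitigation idea, under simplifying assumptions of my own: each device i holds n_i samples and computes at speed f_i, a local iteration takes n_i / f_i time, and D2D transfers are free. The function names are illustrative, and the paper's actual design jointly optimizes the radio resource allocation, which this toy model omits.

```python
# Minimal sketch: rebalance sample counts so that n_i is proportional to f_i,
# equalizing per-iteration compute time across devices. Simplified model only;
# communication cost and radio resource allocation are ignored here.

def balance_loads(samples, speeds):
    """Return target sample counts and the D2D transfers needed to reach them."""
    total = sum(samples)
    total_speed = sum(speeds)
    targets = [round(total * f / total_speed) for f in speeds]
    # Positive entries mean "receive samples"; negative mean "send samples".
    transfers = [t - n for t, n in zip(targets, samples)]
    return targets, transfers

# Example: equal data on a fast device (f = 4) and a slow straggler (f = 1).
samples = [500, 500]
speeds = [4.0, 1.0]
targets, transfers = balance_loads(samples, speeds)
print(targets)    # [800, 200]
print(transfers)  # [300, -300]
# Per-iteration delay drops from max(500/4, 500/1) = 500
# to max(800/4, 200/1) = 200 time units.
```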
Related papers
- Unsupervised Federated Optimization at the Edge: D2D-Enabled Learning without Labels [14.696896223432507]
Federated learning (FL) is a popular solution for distributed machine learning (ML).
CF-CL employs local device cooperation where either explicit (i.e., raw data) or implicit (i.e., embeddings) information is exchanged through device-to-device (D2D) communications.
arXiv Detail & Related papers (2024-04-15T15:17:38Z)
- Efficient Asynchronous Federated Learning with Sparsification and Quantization [55.6801207905772]
Federated Learning (FL) is attracting increasing attention as a way to collaboratively train a machine learning model without transferring raw data.
FL generally exploits a parameter server and a large number of edge devices throughout the model training process.
We propose TEASQ-Fed, which exploits edge devices that asynchronously participate in the training process by actively applying for tasks (a generic sparsification-plus-quantization sketch follows below).
arXiv Detail & Related papers (2023-12-23T07:47:07Z)
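The following is a hedged Python sketch of generic top-k sparsification followed by uniform quantization, the two compression steps named in the title above; the exact scheme TEASQ-Fed uses may differ, and all function names here are illustrative.

```python
import numpy as np

def sparsify_topk(grad, k):
    """Keep only the k largest-magnitude entries; transmit (indices, values)."""
    idx = np.argsort(np.abs(grad))[-k:]
    return idx, grad[idx]

def quantize_uniform(values, bits=8):
    """Map float values onto 2^bits - 1 uniform levels over their range."""
    lo, hi = float(values.min()), float(values.max())
    levels = 2 ** bits - 1
    codes = np.round((values - lo) / (hi - lo + 1e-12) * levels)
    return codes.astype(np.uint8), lo, hi

def dequantize(codes, lo, hi, bits=8):
    """Reconstruct approximate values from the transmitted codes."""
    levels = 2 ** bits - 1
    return codes.astype(np.float32) / levels * (hi - lo) + lo

grad = np.random.randn(10_000).astype(np.float32)
idx, vals = sparsify_topk(grad, k=100)   # keep 1% of the entries
codes, lo, hi = quantize_uniform(vals)   # 8-bit codes instead of 32-bit floats
approx = dequantize(codes, lo, hi)       # lossy reconstruction at the server
```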
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part, pruned and shared with all devices to learn data representations, and a personalized part that is fine-tuned for a specific device (see the sketch below).
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
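As a rough illustration of the split described above, the PyTorch sketch below keeps a shared "global" backbone (the part that would be pruned and aggregated) separate from a local "personalized" head. Layer names, sizes, and the split point are assumptions for illustration; the paper's pruning procedure is not reproduced.

```python
import torch.nn as nn

class SplitModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Global part: learns shared data representations; pruned and
        # averaged across all devices by the server.
        self.backbone = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
        # Personalized part: fine-tuned on-device, never uploaded.
        self.head = nn.Linear(256, 10)

    def forward(self, x):
        return self.head(self.backbone(x))

def shared_state(model):
    """Select only the backbone parameters for server-side aggregation."""
    return {k: v for k, v in model.state_dict().items()
            if k.startswith("backbone")}
```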
- Collaborative Learning with a Drone Orchestrator [79.75113006257872]
A swarm of intelligent wireless devices trains a shared neural network model with the help of a drone.
The proposed framework achieves a significant speedup in training, leading to average savings of 24% and 87% in the drone hovering time.
arXiv Detail & Related papers (2023-03-03T23:46:25Z)
- Parallel Successive Learning for Dynamic Distributed Model Training over Heterogeneous Wireless Networks [50.68446003616802]
Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices.
We develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions.
Our analysis sheds light on the notion of cold vs. warmed-up models and on model inertia in distributed machine learning.
arXiv Detail & Related papers (2022-02-07T05:11:01Z)
- Semi-Decentralized Federated Edge Learning with Data and Device Heterogeneity [6.341508488542275]
Federated edge learning (FEEL) has attracted much attention as a privacy-preserving paradigm to effectively incorporate the distributed data at the network edge for training deep learning models.
In this paper, we investigate a novel framework of FEEL, namely semi-decentralized federated edge learning (SD-FEEL), where multiple edge servers are employed to collectively coordinate a large number of client nodes.
By exploiting the low-latency communication among edge servers for efficient model sharing, SD-FEEL can incorporate more training data while enjoying much lower latency than conventional federated learning (a toy version of this two-level aggregation follows below).
arXiv Detail & Related papers (2021-12-20T03:06:08Z)
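A toy Python version of this two-level aggregation pattern follows: clients average within each edge server's cluster, then the edge servers average among themselves over their low-latency links. Models are plain numpy vectors, and the cluster sizes are arbitrary illustrative choices, not SD-FEEL's actual protocol details.

```python
import numpy as np

def intra_cluster_average(client_models):
    """Each edge server aggregates the models of its own clients."""
    return np.mean(client_models, axis=0)

def inter_server_average(server_models):
    """Edge servers synchronize with one another over low-latency links."""
    return np.mean(server_models, axis=0)

rng = np.random.default_rng(0)
# Two edge servers, each coordinating three clients with 4-dim "models".
clusters = [[rng.normal(size=4) for _ in range(3)] for _ in range(2)]

server_models = [intra_cluster_average(c) for c in clusters]
global_model = inter_server_average(server_models)
```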
- Collaborative Learning over Wireless Networks: An Introductory Overview [84.09366153693361]
We will mainly focus on collaborative training across wireless devices.
Many distributed optimization algorithms have been developed over the last decades.
They provide data locality; that is, a joint model can be trained collaboratively while the data available at each participating device remains local.
arXiv Detail & Related papers (2021-12-07T20:15:39Z)
- Dynamic Network-Assisted D2D-Aided Coded Distributed Learning [59.29409589861241]
We propose a novel device-to-device (D2D)-aided coded federated learning method (D2D-CFL) for load balancing across devices.
We derive an optimal compression rate for achieving minimum processing time and establish its connection with the convergence time.
Our proposed method is beneficial for real-time collaborative applications, where the users continuously generate training data.
arXiv Detail & Related papers (2021-11-26T18:44:59Z)
- Jointly Learning from Decentralized (Federated) and Centralized Data to Mitigate Distribution Shift [2.9965560298318468]
Federated Learning (FL) is an increasingly used paradigm where learning takes place collectively on edge devices.
Yet a distribution shift may still exist; the on-device training examples may not cover some data inputs expected to be encountered at inference time.
This paper proposes a way to mitigate this shift: selective usage of datacenter data, mixed in with FL (see the sketch below).
arXiv Detail & Related papers (2021-11-23T20:51:24Z)
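A minimal Python sketch of the mixing idea above: each training batch combines mostly on-device examples with a small slice of datacenter examples. The dc_fraction knob and function names are assumptions for illustration, not the paper's actual selection policy.

```python
import random

def mixed_batch(device_data, datacenter_data, batch_size, dc_fraction=0.2):
    """Sample a batch that is (1 - dc_fraction) on-device data
    plus dc_fraction datacenter data."""
    n_dc = int(batch_size * dc_fraction)
    batch = random.sample(device_data, batch_size - n_dc)
    batch += random.sample(datacenter_data, n_dc)
    random.shuffle(batch)
    return batch

# Toy usage with integer stand-ins for training examples.
device_data = list(range(100))
datacenter_data = list(range(1000, 1100))
print(mixed_batch(device_data, datacenter_data, batch_size=10))
```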
- Towards Efficient Scheduling of Federated Mobile Devices under Computational and Statistical Heterogeneity [16.069182241512266]
This paper studies the implementation of distributed learning on mobile devices.
We use data as a tuning knob and propose two efficient algorithms to schedule different workloads.
Compared with common benchmarks, the proposed algorithms achieve a 2-100x speedup, a 2-7% accuracy gain, and improve the convergence rate by more than 100% on CIFAR10.
arXiv Detail & Related papers (2020-05-25T18:21:51Z)
- An On-Device Federated Learning Approach for Cooperative Model Update between Edge Devices [2.99321624683618]
A neural-network-based on-device learning approach was recently proposed, in which edge devices train on incoming data at runtime to update their models.
In this paper, we focus on OS-ELM to sequentially train a model based on recent samples and combine it with an autoencoder for anomaly detection.
We extend it to on-device federated learning so that edge devices can exchange their trained results and update their models using those collected from other edge devices (see the sketch below).
arXiv Detail & Related papers (2020-02-27T18:15:38Z)
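As a hedged sketch of the cooperative update, the snippet below has a device merge its own trained parameters with those received from peers by plain averaging; the paper's actual OS-ELM merge operates on its sequential least-squares state and is more involved than this illustration.

```python
import numpy as np

def merge_trained_results(own_params, received_params):
    """Average own parameters with those collected from peer devices."""
    return np.mean([own_params, *received_params], axis=0)

own_beta = np.random.randn(64, 10)                        # this device's result
peer_betas = [np.random.randn(64, 10) for _ in range(3)]  # from peer devices
merged_beta = merge_trained_results(own_beta, peer_betas)
```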