Collaborative Learning with a Drone Orchestrator
- URL: http://arxiv.org/abs/2303.02266v2
- Date: Thu, 10 Aug 2023 18:55:20 GMT
- Title: Collaborative Learning with a Drone Orchestrator
- Authors: Mahdi Boloursaz Mashhadi, Mahnoosh Mahdavimoghadam, Rahim Tafazolli,
Walid Saad
- Abstract summary: A swarm of intelligent wireless devices train a shared neural network model with the help of a drone.
The proposed framework achieves a significant speedup in training, saving an average 24% and 87% of the drone hovering time on the image recognition and semantic segmentation tasks, respectively.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, the problem of drone-assisted collaborative learning is
considered. In this scenario, a swarm of intelligent wireless devices train a
shared neural network (NN) model with the help of a drone. Using its sensors,
each device records samples from its environment to gather a local dataset for
training. The training data is severely heterogeneous, as different devices have
different amounts of data and different sensor noise levels. The intelligent
devices iteratively train the NN on their local datasets and exchange the model
parameters with the drone for aggregation. For this system, the convergence
rate of collaborative learning is derived while accounting for data heterogeneity,
sensor noise levels, and communication errors, and the drone trajectory that
maximizes the final accuracy of the trained NN is then obtained. The proposed
trajectory optimization approach is aware of both the devices' data
characteristics (i.e., local dataset size and noise level) and their wireless
channel conditions, and it significantly improves the convergence rate and final
accuracy in comparison with baselines that consider only data characteristics
or only channel conditions. Compared to state-of-the-art baselines, the proposed
approach achieves an average 3.85% and 3.54% improvement in the final accuracy
of the trained NN on benchmark datasets for image recognition and semantic
segmentation tasks, respectively. Moreover, the proposed framework achieves a
significant speedup in training, saving an average of 24% and 87% of the drone
hovering time (and, with it, communication overhead and battery usage) for
these two tasks, respectively.
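The training loop described above follows the federated averaging pattern: each device runs local updates, the drone fuses the uploaded parameters, and the fused model is sent back for the next round. The paper derives its aggregation weights and trajectory from a convergence analysis; the sketch below is only a minimal illustration, with an invented weighting (`n_samples`, `noise_levels`, and `success_probs` are hypothetical inputs), of how dataset size, sensor noise, and channel quality could jointly shape the drone-side fusion step.

```python
import numpy as np

def aggregate(updates, n_samples, noise_levels, success_probs):
    """Drone-side weighted aggregation of device model parameters.

    updates:       list of 1-D parameter vectors, one per device
    n_samples:     local dataset size of each device
    noise_levels:  per-device sensor noise variance (lower is better)
    success_probs: probability that each device's upload is received
                   correctly, set by the drone-device channel (and hence
                   by the drone trajectory)
    """
    # Hypothetical weighting: more data and cleaner sensors count for more,
    # and unreliable links are discounted. The paper instead derives its
    # weights from a convergence-rate analysis.
    w = np.asarray(n_samples, dtype=float) / (1.0 + np.asarray(noise_levels))
    w *= np.asarray(success_probs)
    w /= w.sum()
    return sum(wi * ui for wi, ui in zip(w, updates))

# One communication round: three devices with heterogeneous data and links.
rng = np.random.default_rng(0)
updates = [rng.normal(size=4) for _ in range(3)]
print(aggregate(updates,
                n_samples=[1000, 200, 500],
                noise_levels=[0.1, 0.5, 0.2],
                success_probs=[0.9, 0.6, 0.8]))
```

A trajectory optimizer in the paper's spirit would then steer the drone so that the `success_probs` of data-rich, low-noise devices are pushed toward one.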
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
In neuromorphic computing, spiking neural networks (SNNs) perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
Recent advances in hardware and software have demonstrated that embedding a few bits of payload in each spike exchanged between the spiking neurons can further enhance inference accuracy.
This paper investigates a wireless neuromorphic split computing architecture employing multi-level SNNs.
arXiv Detail & Related papers (2024-11-07T14:08:35Z)
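The multi-level-spike idea above can be pictured as quantizing each emitted spike's amplitude so it carries a few bits instead of one binary event. The encoder below is a toy stand-in (the threshold and quantizer are chosen here purely for illustration), not the paper's actual scheme.

```python
import numpy as np

def multilevel_spikes(activations, threshold=0.2, bits=2):
    """Toy multi-level spike encoder: activations below the threshold stay
    silent (no spike); the surviving spikes are quantized to 2**bits
    amplitude levels, so each spike carries a small bit payload."""
    a = np.asarray(activations, dtype=float)
    spikes = np.where(np.abs(a) >= threshold, a, 0.0)  # silence weak inputs
    scale = np.max(np.abs(spikes))
    if scale == 0.0:
        return spikes                                  # nothing fired
    levels = 2 ** bits - 1
    return np.round(spikes / scale * levels) / levels * scale

print(multilevel_spikes([0.05, 0.8, -0.4, 0.1, 0.6]))
```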
- An Efficient Privacy-aware Split Learning Framework for Satellite Communications [33.608696987158424]
We propose a novel framework for more efficient split learning (SL) in satellite communications.
Our approach, Dynamic Topology Informed Pruning, combines differential privacy with graph and model pruning to optimize graph neural networks for distributed learning.
Our framework not only significantly improves the operational efficiency of satellite communications but also establishes a new benchmark in privacy-aware distributed learning.
arXiv Detail & Related papers (2024-09-13T04:59:35Z)
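Dynamic Topology Informed Pruning is summarized above only at a high level. As a rough picture of how pruning and differential privacy compose, the sketch below magnitude-prunes a weight vector, clips the survivors to bound sensitivity, and adds Gaussian noise before the update leaves the device. All constants are illustrative, and the paper's graph-aware criterion is not reproduced.

```python
import numpy as np

def prune_and_privatize(weights, keep_frac=0.5, clip=1.0, sigma=0.8, rng=None):
    """Toy combination of magnitude pruning and the Gaussian mechanism,
    far simpler than Dynamic Topology Informed Pruning."""
    rng = rng or np.random.default_rng()
    w = np.asarray(weights, dtype=float)
    threshold = np.quantile(np.abs(w), 1.0 - keep_frac)
    mask = np.abs(w) >= threshold            # keep only the largest weights
    w = np.clip(w * mask, -clip, clip)       # bound per-weight sensitivity
    return w + mask * rng.normal(scale=sigma * clip, size=w.shape)

print(prune_and_privatize([0.05, -1.4, 0.3, 2.2, -0.01, 0.9]))
```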
- Robust Low-Cost Drone Detection and Classification in Low SNR Environments [0.9087641068861043]
We evaluate various convolutional neural networks (CNNs) for their ability to detect and classify drones.
We demonstrate a low-cost drone detection system built from a standard computer, a software-defined radio (SDR), and an antenna.
arXiv Detail & Related papers (2024-06-26T12:50:55Z)
- Convolutional Neural Networks for the classification of glitches in gravitational-wave data streams [52.77024349608834]
We classify transient noise signals (i.e., glitches) and gravitational waves in data from the Advanced LIGO detectors.
We use models with a supervised learning approach, trained from scratch using the Gravity Spy dataset.
We also explore a self-supervised approach, pre-training models with automatically generated pseudo-labels.
arXiv Detail & Related papers (2023-03-24T11:12:37Z)
- Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of on-device storage on FL performance has not yet been explored.
In this work, we take the first step toward online data selection for FL with limited on-device storage.
arXiv Detail & Related papers (2022-09-01T03:27:33Z)
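A concrete way to picture online data selection under a storage cap is a fixed-capacity buffer that must decide, one streamed sample at a time, what to keep. The buffer below scores samples by a stand-in criterion (a toy "loss"); the paper's actual sample valuation differs.

```python
import heapq

class OnlineSampleBuffer:
    """Fixed-capacity buffer that keeps the stream samples judged most
    valuable for local training. The scoring rule (current model loss)
    is a hypothetical stand-in for the paper's selection criterion."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []   # min-heap of (score, seq, sample)
        self._seq = 0     # tie-breaker so samples are never compared

    def offer(self, sample, score):
        item = (score, self._seq, sample)
        self._seq += 1
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, item)
        elif score > self._heap[0][0]:
            heapq.heapreplace(self._heap, item)  # evict lowest-scoring sample

    def samples(self):
        return [s for _, _, s in self._heap]

# Stream 10 samples through a 3-slot buffer, scoring each by a toy "loss".
buf = OnlineSampleBuffer(capacity=3)
for i in range(10):
    buf.offer(sample=f"x{i}", score=(i * 7) % 10)
print(buf.samples())
```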
- On Addressing Heterogeneity in Federated Learning for Autonomous Vehicles Connected to a Drone Orchestrator [32.61132332561498]
We envision a federated learning (FL) scenario in service of improving the performance of autonomous road vehicles.
We focus on accelerating the learning of a particular class of critical objects (COs) that may harm the nominal operation of an autonomous vehicle.
arXiv Detail & Related papers (2021-08-05T16:25:48Z)
- Over-the-Air Federated Learning from Heterogeneous Data [107.05618009955094]
Federated learning (FL) is a framework for distributed learning of centralized models.
We develop a Convergent OTA FL (COTAF) algorithm, which enhances the common local stochastic gradient descent (SGD) FL algorithm.
We numerically show that the precoding induced by COTAF notably improves the convergence rate and the accuracy of models trained via OTA FL.
arXiv Detail & Related papers (2020-09-27T08:28:25Z)
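In over-the-air (OTA) FL, all devices transmit analog model updates simultaneously and the channel itself sums them, so the server receives a noisy aggregate. The round below sketches that pipeline with a simplified, COTAF-style precoder: updates are scaled to meet a power budget before transmission and de-scaled after reception. The normalization rule here is a stand-in for COTAF's time-varying precoding factor.

```python
import numpy as np

def ota_round(updates, power=1.0, noise_std=0.1, rng=None):
    """One over-the-air aggregation round with a simplified precoder."""
    rng = rng or np.random.default_rng()
    # Precoding: normalize by the largest update energy so each analog
    # transmission respects the power constraint (assumed known to all).
    alpha = np.sqrt(power) / max(np.linalg.norm(u) for u in updates)
    tx = [alpha * u for u in updates]
    # The multiple-access channel superimposes the signals and adds noise.
    rx = sum(tx) + rng.normal(scale=noise_std, size=tx[0].shape)
    # Server de-precodes and averages the superimposed updates.
    return rx / (alpha * len(updates))

rng = np.random.default_rng(1)
updates = [rng.normal(size=5) for _ in range(4)]
print(ota_round(updates, rng=rng))
```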
- Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
arXiv Detail & Related papers (2020-07-26T14:37:51Z)
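FOLB's device sampling can be pictured as biasing each round's participant draw toward devices whose updates are expected to help convergence most. The sketch below uses gradient norms as a stand-in importance score; FOLB's actual gradient-information-based criterion is not reproduced here.

```python
import numpy as np

def sample_devices(grad_norms, k, rng=None):
    """Pick k devices for the next round, favoring those whose last local
    gradients were largest. Probability proportional to gradient norm is
    a simplified stand-in for FOLB's intelligent sampling rule."""
    rng = rng or np.random.default_rng()
    p = np.asarray(grad_norms, dtype=float)
    p = p / p.sum()
    return rng.choice(len(grad_norms), size=k, replace=False, p=p)

# Five devices reported these gradient norms after the previous round.
print(sample_devices([0.2, 1.5, 0.7, 0.1, 0.9], k=2))
```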
- Towards Efficient Scheduling of Federated Mobile Devices under Computational and Statistical Heterogeneity [16.069182241512266]
This paper studies the implementation of distributed learning on mobile devices.
We use data as a tuning knob and propose two time-efficient algorithms to schedule different workloads.
Compared with the common benchmarks, the proposed algorithms achieve a 2-100x speedup epoch-wise, a 2-7% accuracy gain, and improve the convergence rate by more than 100% on CIFAR10.
arXiv Detail & Related papers (2020-05-25T18:21:51Z)
- D2D-Enabled Data Sharing for Distributed Machine Learning at Wireless Network Edge [6.721448732174383]
Mobile edge learning is an emerging technique that enables distributed edge devices to collaborate in training shared machine learning models.
This paper proposes a new device-to-device (D2D) enabled data sharing approach, in which different edge devices share their data samples with each other over communication links.
Numerical results show that the proposed data sharing design significantly reduces the training delay and also enhances the training accuracy when the data samples are non-independent and identically distributed (non-IID) among edge devices.
arXiv Detail & Related papers (2020-01-28T01:49:09Z)
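The D2D idea is that a little raw-data exchange between neighbors can dilute non-IID skew before training starts. The sketch below shares a uniform random fraction of each device's samples with every peer; the paper optimizes what to share and with whom, which this toy version does not attempt.

```python
import random

def d2d_share(local_data, share_frac=0.1, rng=None):
    """Hypothetical D2D exchange: every device broadcasts a random fraction
    of its local samples to every neighbor, diluting label skew before
    distributed training begins."""
    rng = rng or random.Random(0)
    datasets = {d: list(samples) for d, samples in local_data.items()}
    for sender, samples in local_data.items():
        shared = rng.sample(samples, max(1, int(share_frac * len(samples))))
        for receiver in datasets:
            if receiver != sender:
                datasets[receiver].extend(shared)
    return datasets

# Two devices with disjoint (non-IID) labels exchange 50% of their samples.
local = {"dev0": [("img", 0)] * 4, "dev1": [("img", 1)] * 4}
print({d: len(s) for d, s in d2d_share(local, share_frac=0.5).items()})
```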
This list is automatically generated from the titles and abstracts of the papers on this site.