Federated Learning with Downlink Device Selection
- URL: http://arxiv.org/abs/2107.03510v1
- Date: Wed, 7 Jul 2021 22:42:39 GMT
- Title: Federated Learning with Downlink Device Selection
- Authors: Mohammad Mohammadi Amiri, Sanjeev R. Kulkarni, H. Vincent Poor
- Abstract summary: We study federated edge learning, where a global model is trained collaboratively using privacy-sensitive data at the edge of a wireless network.
A parameter server (PS) keeps track of the global model and shares it with the wireless edge devices for training using their private local data.
We consider device selection based on downlink channels over which the PS shares the global model with the devices.
- Score: 92.14944020945846
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: We study federated edge learning, where a global model is trained
collaboratively using privacy-sensitive data at the edge of a wireless network.
A parameter server (PS) keeps track of the global model and shares it with the
wireless edge devices for training using their private local data. The devices
then transmit their local model updates, which are used to update the global
model, to the PS. This procedure, which involves transmission over both PS-to-device and device-to-PS links, continues until the global model converges or no participating devices remain. In this study, we consider device selection
based on downlink channels over which the PS shares the global model with the
devices. Assuming digital downlink transmission, we design a partial device participation framework in which a subset of the devices is selected for training at each iteration. Because the broadcast channel is shared, the participating devices can then obtain a better estimate of the global model than under full device participation, at the price of updating the global model with respect to a smaller set of data. At each
iteration, the PS broadcasts different quantized global model updates to
different participating devices based on the last global model estimates
available at the devices. We investigate the best number of participating
devices through experiments on image classification using the MNIST dataset with a biased data distribution.
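Concretely, one training round of the scheme described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: the top-k SNR selection rule, the uniform quantizer, and the placeholder local update are all simplifying assumptions standing in for the paper's channel-based device selection and per-device quantized broadcasts.

```python
import numpy as np

def quantize(x, bits=4):
    # Uniform scalar quantization; a stand-in for the paper's
    # per-device quantized broadcast of the global model update.
    scale = np.max(np.abs(x)) + 1e-12
    levels = 2 ** (bits - 1)
    return np.round(x / scale * levels) / levels * scale

def local_update(model, lr=0.1):
    # Placeholder for local SGD on a device's private data; a real
    # implementation would compute gradients on the local dataset.
    fake_grad = np.random.randn(*model.shape)
    return model - lr * fake_grad

def training_round(global_model, downlink_snr, k):
    # Partial participation: pick the k devices with the strongest
    # downlink channels (one plausible reading of downlink-based selection).
    selected = np.argsort(downlink_snr)[-k:]
    updates = []
    for _ in selected:
        # Each selected device receives a quantized estimate of the global
        # model (the paper tailors this per device; simplified here),
        # trains locally, and returns its local model update.
        estimate = quantize(global_model)
        updates.append(local_update(estimate) - estimate)
    # The PS averages the returned local updates into the global model.
    return global_model + np.mean(updates, axis=0)

# Example: 10 devices, 5 selected per round, a 100-parameter model.
model = np.zeros(100)
for _ in range(3):
    snr = np.random.rand(10)
    model = training_round(model, snr, k=5)
```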
Related papers
- Federated Learning with MMD-based Early Stopping for Adaptive GNSS Interference Classification [4.674584508653125]
Federated learning (FL) enables multiple devices to collaboratively train a global model while maintaining data on local servers.
We propose an FL approach using few-shot learning and aggregation of the model weights on a global server.
An exemplary application of FL is orchestrating machine learning models along highways for interference classification based on snapshots from global navigation satellite system (GNSS) receivers.
arXiv Detail & Related papers (2024-10-21T06:43:04Z)
- Parallel Successive Learning for Dynamic Distributed Model Training over Heterogeneous Wireless Networks [50.68446003616802]
Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices.
We develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions.
Our analysis sheds light on the notion of cold vs. warmed up models, and model inertia in distributed machine learning.
arXiv Detail & Related papers (2022-02-07T05:11:01Z)
- A New Distributed Method for Training Generative Adversarial Networks [22.339672721348382]
This paper proposes a new framework for training GANs in a distributed fashion.
Each device computes a local discriminator using local data; a single server aggregates their results and computes a global GAN.
Numerical results obtained using three popular datasets demonstrate that the proposed framework can outperform a state-of-the-art framework in terms of convergence speed.
arXiv Detail & Related papers (2021-07-19T08:38:10Z)
- Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
arXiv Detail & Related papers (2020-07-26T14:37:51Z)
- Federated Learning With Quantized Global Model Updates [84.55126371346452]
We study federated learning, which enables mobile devices to utilize their local datasets to train a global model.
We introduce a lossy FL (LFL) algorithm, in which both the global model and the local model updates are quantized before being transmitted (see the quantizer sketch after this list).
arXiv Detail & Related papers (2020-06-18T16:55:20Z)
- Convergence of Update Aware Device Scheduling for Federated Learning at the Wireless Edge [84.55126371346452]
We study federated learning at the wireless edge, where power-limited devices with local datasets collaboratively train a joint model with the help of a remote parameter server (PS).
We design novel scheduling and resource allocation policies that decide on the subset of the devices to transmit at each round.
The results of numerical experiments show that the proposed scheduling policy, based on both the channel conditions and the significance of the local model updates, provides better long-term performance than scheduling policies based on only one of the two metrics (see the illustrative scoring sketch after this list).
arXiv Detail & Related papers (2020-01-28T15:15:22Z)
- Think Locally, Act Globally: Federated Learning with Local and Global Representations [92.68484710504666]
Federated learning is a method of training models on private data distributed over multiple devices.
We propose a new federated learning algorithm that jointly learns compact local representations on each device and a global model across all devices.
We also evaluate on the task of personalized mood prediction from real-world mobile data where privacy is key.
arXiv Detail & Related papers (2020-01-06T12:40:21Z)
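For the quantization and scheduling ideas referenced in the entries above, the following minimal sketch (an assumption-laden illustration, not code from any of the papers) shows an unbiased stochastic quantizer in the spirit of the lossy FL (LFL) entry, and a hypothetical scheduling score that trades off channel quality against the significance of a local update, as in the update-aware scheduling entry.

```python
import numpy as np

def stochastic_quantize(x, bits=4):
    # Unbiased stochastic rounding onto a uniform grid, so that
    # E[stochastic_quantize(x)] = x; a common choice when model
    # updates must be compressed before transmission.
    scale = np.max(np.abs(x)) + 1e-12
    levels = 2 ** (bits - 1)
    y = x / scale * levels
    low = np.floor(y)
    q = low + (np.random.rand(*x.shape) < (y - low))
    return q / levels * scale

def scheduling_score(snr_db, update, alpha=0.5):
    # Hypothetical score trading off channel quality (SNR in dB)
    # against update significance (l2 norm); alpha is an assumed weight,
    # not a value taken from the scheduling paper.
    return alpha * snr_db + (1 - alpha) * np.linalg.norm(update)
```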
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.