Communication Efficient Distributed Learning over Wireless Channels
- URL: http://arxiv.org/abs/2209.01682v1
- Date: Sun, 4 Sep 2022 19:41:21 GMT
- Title: Communication Efficient Distributed Learning over Wireless Channels
- Authors: Idan Achituve and Wenbo Wang and Ethan Fetaya and Amir Leshem
- Abstract summary: Vertical distributed learning exploits the local features collected by multiple learning workers to form a better global model.
We propose a novel hierarchical distributed learning framework, where each worker separately learns a low-dimensional embedding of its locally observed data.
We show that the proposed learning framework is able to achieve almost the same model accuracy as the learning model using the concatenation of all the raw outputs from the learning workers.
- Score: 35.90632878033643
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vertical distributed learning exploits the local features collected by
multiple learning workers to form a better global model. However, the exchange
of data between the workers and the model aggregator for parameter training
incurs a heavy communication burden, especially when the learning system is
built upon capacity-constrained wireless networks. In this paper, we propose a
novel hierarchical distributed learning framework in which each worker separately
learns a low-dimensional embedding of its locally observed data. The workers then
perform communication-efficient distributed max-pooling to transmit the
synthesized input to the aggregator. For data exchange over a
shared wireless channel, we propose an opportunistic carrier sensing-based
protocol to implement the max-pooling operation for the output data from all
the learning workers. Our simulation experiments show that the proposed
learning framework is able to achieve almost the same model accuracy as the
learning model using the concatenation of all the raw outputs from the learning
workers, while requiring a communication load that is independent of the number
of workers.
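The distributed max-pooling step can be pictured with a short simulation. The sketch below is an assumption-laden illustration of opportunistic carrier sensing: on each embedding dimension, every worker maps its value to a backoff delay that decreases with the value, so the worker holding the maximum transmits first while the others sense the busy channel and stay silent. The linear delay mapping, the value range [-1, 1], and the per-dimension contention rounds are illustrative choices, not details taken from the paper.

```python
import numpy as np

def backoff_time(value, v_min, v_max, t_max=1.0):
    """Map an embedding value to a backoff delay: larger values wait less.

    The linear decreasing mapping is an assumption for illustration; the
    abstract only states that the protocol lets the maximum be identified
    via carrier sensing on a shared channel.
    """
    norm = (value - v_min) / (v_max - v_min + 1e-12)  # normalize to [0, 1]
    return t_max * (1.0 - norm)                       # max value -> delay ~0

def distributed_max_pool(worker_embeddings, v_min=-1.0, v_max=1.0):
    """Simulate opportunistic carrier-sensing max-pooling per dimension.

    worker_embeddings: array of shape (num_workers, embed_dim).
    For each dimension, the worker whose backoff timer expires first
    transmits its value; the others sense the busy channel and stay silent.
    The aggregator thus receives the element-wise maximum with a number of
    transmissions per dimension that does not grow with the worker count.
    """
    num_workers, embed_dim = worker_embeddings.shape
    pooled = np.empty(embed_dim)
    for d in range(embed_dim):
        delays = [backoff_time(worker_embeddings[w, d], v_min, v_max)
                  for w in range(num_workers)]
        winner = int(np.argmin(delays))           # first timer to expire
        pooled[d] = worker_embeddings[winner, d]  # only the winner transmits
    return pooled

# Toy usage: 8 workers, 4-dimensional embeddings in [-1, 1].
rng = np.random.default_rng(0)
embeddings = rng.uniform(-1.0, 1.0, size=(8, 4))
print(distributed_max_pool(embeddings))  # per-dimension maximum
print(embeddings.max(axis=0))            # reference element-wise max
```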
Related papers
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup
for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - Adaptive Parameterization of Deep Learning Models for Federated Learning [85.82002651944254]
Federated Learning offers a way to train deep neural networks in a distributed fashion.
It incurs a communication overhead as the model parameters or gradients need to be exchanged regularly during training.
In this paper, we propose to utilise parallel Adapters for Federated Learning.
arXiv Detail & Related papers (2023-02-06T17:30:33Z) - Decentralized Learning with Multi-Headed Distillation [12.90857834791378]
Decentralized learning with private data is a central problem in machine learning.
We propose a novel distillation-based decentralized learning technique that allows multiple agents with private non-iid data to learn from each other.
arXiv Detail & Related papers (2022-11-28T21:01:43Z) - Collaborative Learning of Distributions under Heterogeneity and
Communication Constraints [35.82172666266493]
In machine learning, users often have to collaborate to learn distributions that generate the data.
We propose a novel two-stage method named SHIFT: First, the users collaborate by communicating with the server to learn a central distribution.
Then, the learned central distribution is fine-tuned to estimate the individual distributions of users.
arXiv Detail & Related papers (2022-06-01T18:43:06Z) - RelaySum for Decentralized Deep Learning on Heterogeneous Data [71.36228931225362]
In decentralized machine learning, workers compute model updates on their local data.
Because the workers only communicate with few neighbors without central coordination, these updates propagate progressively over the network.
This paradigm enables distributed training on networks without all-to-all connectivity, helping to protect data privacy as well as to reduce the communication cost of distributed training in data centers.
arXiv Detail & Related papers (2021-10-08T14:55:32Z) - FedKD: Communication Efficient Federated Learning via Knowledge
Distillation [56.886414139084216]
Federated learning is widely used to learn intelligent models from decentralized data.
In federated learning, clients need to communicate their local model updates in each iteration of model learning.
We propose a communication efficient federated learning method based on knowledge distillation.
arXiv Detail & Related papers (2021-08-30T15:39:54Z) - Communication-Efficient Federated Learning via Predictive Coding [38.778944321534084]
Federated learning can enable remote workers to collaboratively train a shared machine learning model.
The communication overhead is a critical bottleneck due to limited power and bandwidth.
We propose a predictive coding based communication scheme for federated learning.
arXiv Detail & Related papers (2021-08-02T14:12:19Z) - Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local-updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
arXiv Detail & Related papers (2021-02-14T05:36:25Z) - Multi-modal AsynDGAN: Learn From Distributed Medical Image Data without
Sharing Private Information [55.866673486753115]
We propose an extendable and elastic learning framework to preserve privacy and security.
The proposed framework is named distributed Asynchronized Discriminator Generative Adversarial Networks (AsynDGAN).
arXiv Detail & Related papers (2020-12-15T20:41:24Z)