FEDZIP: A Compression Framework for Communication-Efficient Federated
Learning
- URL: http://arxiv.org/abs/2102.01593v1
- Date: Tue, 2 Feb 2021 16:33:44 GMT
- Title: FEDZIP: A Compression Framework for Communication-Efficient Federated
Learning
- Authors: Amirhossein Malekijoo, Mohammad Javad Fadaeieslam, Hanieh Malekijou,
Morteza Homayounfar, Farshid Alizadeh-Shabdiz, Reza Rawassizadeh
- Abstract summary: Federated Learning is an implementation of decentralized machine learning for wireless devices.
It assigns the learning process independently to each client.
We propose a novel framework, FedZip, that significantly decreases the size of the deep-learning weight updates transferred between clients and their servers.
- Score: 2.334824705384299
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning marks a turning point in the implementation of
decentralized machine learning (especially deep learning) for wireless devices
by protecting users' privacy and safeguarding raw data from third-party access.
It assigns the learning process independently to each client. First, clients
locally train a machine learning model based on local data. Next, clients
transfer local updates of model weights and biases (training data) to a server.
Then, the server aggregates updates (received from clients) to create a global
learning model. However, the continuous transfer between clients and the server
increases communication costs and is inefficient from a resource utilization
perspective due to the large number of parameters (weights and biases) used by
deep learning models. The cost of communication becomes a greater concern when
the number of contributing clients and communication rounds increases. In this
work, we propose a novel framework, FedZip, that significantly decreases the size of the deep-learning weight updates transferred between clients and their servers. FedZip implements Top-z sparsification, uses
quantization with clustering, and implements compression with three different
encoding methods. FedZip outperforms state-of-the-art compression frameworks, reaching compression rates of up to 1085x and preserving up to 99% of bandwidth and 99% of energy for clients during communication.
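To make the three-stage pipeline concrete, here is a minimal Python/NumPy sketch of a FedZip-style update compressor. The concrete choices (Top-z treated as a magnitude-based fraction, a plain 1-D k-means for the clustering quantization, and a bits-per-symbol estimate standing in for the entropy encoder) are illustrative assumptions, not the paper's exact algorithm.

# A minimal sketch of a FedZip-style compressor for one weight tensor.
# Assumptions (not the paper's exact choices): Top-z keeps the fraction z of
# largest-magnitude entries, quantization is a plain 1-D k-means, and the
# final encoding stage is only estimated via bits-per-symbol.
import numpy as np

def top_z_sparsify(w: np.ndarray, z: float = 0.01) -> np.ndarray:
    """Keep only the fraction z of entries with the largest magnitude."""
    flat = np.abs(w).ravel()
    k = max(1, int(z * flat.size))
    threshold = np.partition(flat, -k)[-k]
    return np.where(np.abs(w) >= threshold, w, 0.0)

def cluster_quantize(w: np.ndarray, n_clusters: int = 8, iters: int = 10) -> np.ndarray:
    """Snap the non-zero entries to n_clusters centroids (1-D k-means)."""
    values = w[w != 0]
    if values.size == 0:
        return w
    centroids = np.linspace(values.min(), values.max(), n_clusters)
    for _ in range(iters):
        idx = np.abs(values[:, None] - centroids[None, :]).argmin(axis=1)
        for c in range(n_clusters):
            if np.any(idx == c):
                centroids[c] = values[idx == c].mean()
    quantized = np.zeros_like(w)
    quantized[w != 0] = centroids[idx]
    return quantized

def compress_update(w: np.ndarray, z: float = 0.01, n_clusters: int = 8):
    """Sparsify, quantize, and report a conservative compression ratio."""
    quantized = cluster_quantize(top_z_sparsify(w, z), n_clusters)
    # With n_clusters symbols plus an explicit zero symbol, an entropy coder
    # (e.g. Huffman) needs at most ceil(log2(n_clusters + 1)) bits per weight;
    # exploiting the heavily skewed symbol distribution shrinks this further.
    bits_per_symbol = int(np.ceil(np.log2(n_clusters + 1)))
    ratio = (w.size * 32) / (quantized.size * bits_per_symbol)  # vs. float32
    return quantized, ratio

if __name__ == "__main__":
    layer = np.random.randn(256, 128).astype(np.float32)
    _, ratio = compress_update(layer, z=0.01, n_clusters=8)
    print(f"compression ratio before entropy coding: {ratio:.1f}x")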
Related papers
- FedFetch: Faster Federated Learning with Adaptive Downstream Prefetching [7.264549907717153]
Federated learning (FL) is a machine learning paradigm that facilitates massively distributed model training with end-user data on edge devices directed by a central server.
We introduce FedFetch, a strategy to mitigate the download time overhead caused by combining client sampling and compression techniques.
We empirically show that adding FedFetch to communication-efficient FL techniques reduces end-to-end training time by 1.26x and download time by 4.49x across compression techniques with heterogeneous client settings.
arXiv Detail & Related papers (2025-04-21T18:17:05Z)
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training over various data sources without gathering their local data.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z)
- Robust and Actively Secure Serverless Collaborative Learning [48.01929996757643]
Collaborative machine learning (ML) is widely used to enable institutions to learn better models from distributed data.
While collaborative approaches to learning intuitively protect user data, they remain vulnerable to either the server, the clients, or both.
We propose a peer-to-peer (P2P) learning scheme that is secure against malicious servers and robust to malicious clients.
arXiv Detail & Related papers (2023-10-25T14:43:03Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
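To illustrate the contrast in communication patterns, the following back-of-the-envelope Python sketch compares what crosses the wire: in FL, one full model update per round; in SL, cut-layer activations per batch, with gradients of the same shape coming back. The model, cut-layer, and batch sizes are made-up assumptions purely for illustration.

# Back-of-the-envelope payload comparison for the two settings described
# above; all sizes below are invented for illustration.
model_params = 1_000_000            # parameters in the full model (float32)
cut_layer_width = 256               # activations per sample at the split point
batch_size, batches_per_epoch = 32, 100

# FL: each round uploads one full model update.
fl_upload_mb = model_params * 4 / 1e6

# SL: every batch uploads cut-layer activations ("smashed data"), and
# gradients of the same shape come back during backpropagation.
sl_upload_mb = batch_size * cut_layer_width * 4 * batches_per_epoch / 1e6

print(f"FL upload per round: {fl_upload_mb:.1f} MB")
print(f"SL upload per epoch: {sl_upload_mb:.1f} MB (plus matching downloads)")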
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- FedLite: A Scalable Approach for Federated Learning on Resource-constrained Clients [41.623518032533035]
In split learning, only a small part of the model is stored and trained on clients, while the remaining, larger part of the model stays at the server.
This paper addresses the resulting communication overhead by compressing the additional communication using a novel clustering scheme accompanied by a gradient correction method.
arXiv Detail & Related papers (2022-01-28T00:09:53Z)
- Comfetch: Federated Learning of Large Networks on Constrained Clients via Sketching [28.990067638230254]
Federated learning (FL) is a popular paradigm for private and collaborative model training on the edge.
We propose a novel algorithm, Comfetch, which allows clients to train large networks using sketched representations of the global neural network.
arXiv Detail & Related papers (2021-09-17T04:48:42Z)
- FedKD: Communication Efficient Federated Learning via Knowledge Distillation [56.886414139084216]
Federated learning is widely used to learn intelligent models from decentralized data.
In federated learning, clients need to communicate their local model updates in each iteration of model learning.
We propose a communication efficient federated learning method based on knowledge distillation.
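In many distillation-based FL schemes, only a compact student model (or its update) is communicated while a larger teacher stays on the device, which is where the communication savings come from. The sketch below is a textbook knowledge-distillation objective in NumPy with assumed hyperparameters, not FedKD's specific formulation.

# A generic knowledge-distillation loss (soft targets + hard labels).
# Hyperparameters and shapes are assumptions for illustration only.
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=3.0, alpha=0.5):
    """alpha * soft-target KL term + (1 - alpha) * hard-label cross-entropy."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    kl = np.sum(p_teacher * (np.log(p_teacher + 1e-12) - np.log(p_student + 1e-12)), axis=-1)
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return alpha * temperature ** 2 * kl.mean() + (1 - alpha) * ce.mean()

# Example: 4 samples, 10 classes; only the (much smaller) student's weights
# would be uploaded to the server each round.
rng = np.random.default_rng(0)
loss = distillation_loss(rng.normal(size=(4, 10)), rng.normal(size=(4, 10)),
                         labels=np.array([1, 3, 5, 7]))
print(f"distillation loss: {loss:.3f}")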
arXiv Detail & Related papers (2021-08-30T15:39:54Z)
- RingFed: Reducing Communication Costs in Federated Learning on Non-IID Data [3.7416826310878024]
Federated learning is used to protect the privacy of each client by exchanging model parameters rather than raw data.
This article proposes RingFed, a novel framework to reduce communication overhead during the training process of federated learning.
Experiments on two different public datasets show that RingFed has fast convergence, high model accuracy, and low communication cost.
arXiv Detail & Related papers (2021-07-19T13:43:10Z)
- A Family of Hybrid Federated and Centralized Learning Architectures in Machine Learning [7.99536002595393]
We propose hybrid federated and centralized learning (HFCL) for machine learning tasks.
In HFCL, only the clients with sufficient resources employ FL, while the remaining ones send their datasets to the PS, which trains the model on their behalf.
The HFCL frameworks outperform FL with up to 20% improvement in learning accuracy when only half of the clients perform FL, while incurring 50% less communication overhead than CL.
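A minimal sketch of the hybrid routing described above, assuming a made-up per-client compute score and threshold: clients above the threshold run FL locally and upload only model updates, the rest upload their datasets to the parameter server (PS).

# Hypothetical client partitioning for an HFCL-style round; the score field
# and the threshold are invented for illustration.
def split_clients(clients, resource_threshold=1.0):
    """Clients at or above the threshold do FL; the rest send data to the PS."""
    fl_clients = [c for c in clients if c["compute_score"] >= resource_threshold]
    cl_clients = [c for c in clients if c["compute_score"] < resource_threshold]
    return fl_clients, cl_clients

clients = [
    {"id": 0, "compute_score": 2.5},  # trains locally, uploads a model update
    {"id": 1, "compute_score": 0.4},  # uploads its dataset to the PS instead
    {"id": 2, "compute_score": 1.1},
]
fl_clients, cl_clients = split_clients(clients)
print("FL clients:", [c["id"] for c in fl_clients])                        # -> [0, 2]
print("Centralized (PS-trained) clients:", [c["id"] for c in cl_clients])  # -> [1]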
arXiv Detail & Related papers (2021-05-07T14:28:33Z)
- Training Recommender Systems at Scale: Communication-Efficient Model and Data Parallelism [56.78673028601739]
We propose a compression framework called Dynamic Communication Thresholding (DCT) for communication-efficient hybrid training.
DCT reduces communication by at least 100x and 20x during data parallelism (DP) and model parallelism (MP), respectively.
It improves end-to-end training time for a state-of-the-art industrial recommender model by 37%, without any loss in performance.
arXiv Detail & Related papers (2020-10-18T01:44:42Z)
- Information-Theoretic Bounds on the Generalization Error and Privacy Leakage in Federated Learning [96.38757904624208]
Machine learning algorithms on mobile networks can be grouped into three different categories.
The main objective of this work is to provide an information-theoretic framework for all of the aforementioned learning paradigms.
arXiv Detail & Related papers (2020-05-05T21:23:45Z)