Federated learning compression designed for lightweight communications
- URL: http://arxiv.org/abs/2310.14693v1
- Date: Mon, 23 Oct 2023 08:36:21 GMT
- Title: Federated learning compression designed for lightweight communications
- Authors: Lucas Grativol Ribeiro (IMT Atlantique - MEE, Lab_STICC_BRAIn,
Lab-STICC_2AI, LHC), Mathieu Leonardon (IMT Atlantique - MEE,
Lab_STICC_BRAIn), Guillaume Muller, Virginie Fresse, Matthieu Arzel (IMT
Atlantique - MEE, Lab-STICC_2AI)
- Abstract summary: Federated Learning (FL) is a promising distributed method for edge-level machine learning.
In this paper, we investigate the impact of compression techniques on FL for a typical image classification task.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a promising distributed method for edge-level
machine learning, particularly for privacy-sensitive applications such as those
in military and medical domains, where client data cannot be shared or
transferred to a cloud computing server. In many use-cases, communication cost
is a major challenge in FL due to its inherently intensive network usage. Client
devices, such as smartphones or Internet of Things (IoT) nodes, have limited
resources in terms of energy, computation, and memory. To address these
hardware constraints, lightweight models and compression techniques such as
pruning and quantization are commonly adopted in centralised paradigms. In this
paper, we investigate the impact of compression techniques on FL for a typical
image classification task. Going further, we demonstrate that a straightforward
method can compress messages by up to 50% with less than 1% accuracy loss,
competing with state-of-the-art techniques.
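To make the message-compression idea concrete, here is a minimal sketch, assuming a generic pruning-plus-quantization pipeline rather than the paper's exact method: a client keeps only the largest-magnitude entries of its update and sends them as 8-bit codes. The names compress_update and decompress_update, the keep ratio, and the 32-bit index encoding are illustrative assumptions.

```python
# Hedged sketch (not the paper's exact pipeline): magnitude pruning plus
# uniform 8-bit quantization of a client update before transmission.
import numpy as np

def compress_update(update: np.ndarray, keep_ratio: float = 0.5):
    """Keep the largest-magnitude entries, then quantize them to uint8."""
    flat = update.ravel()
    k = max(1, int(keep_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]        # top-k by magnitude
    vals = flat[idx]
    lo, hi = vals.min(), vals.max()
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    codes = np.round((vals - lo) / scale).astype(np.uint8)  # 8-bit codes
    return idx.astype(np.uint32), codes, np.float32(lo), np.float32(scale), update.shape

def decompress_update(idx, codes, lo, scale, shape):
    """Server-side reconstruction of the sparse, dequantized update."""
    flat = np.zeros(int(np.prod(shape)), dtype=np.float32)
    flat[idx] = codes.astype(np.float32) * scale + lo
    return flat.reshape(shape)

# Usage: compare the dense payload with the compressed one.
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 128)).astype(np.float32)
idx, codes, lo, scale, shape = compress_update(w, keep_ratio=0.5)
w_hat = decompress_update(idx, codes, lo, scale, shape)
sent = idx.nbytes + codes.nbytes + 8  # 8 bytes for lo and scale
print(f"sent {sent} of {w.nbytes} bytes ({100 * sent / w.nbytes:.0f}%)")
```

With a 0.5 keep ratio and 32-bit indices this sends roughly 62% of the dense payload; a lower keep ratio or tighter index coding would be needed to reach the 50% figure quoted in the abstract.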
Related papers
- A Survey on Transformer Compression [84.18094368700379]
Transformer plays a vital role in the realms of natural language processing (NLP) and computer vision (CV)
Model compression methods reduce the memory and computational cost of Transformer.
This survey provides a comprehensive review of recent compression methods, with a specific focus on their application to Transformer-based models.
arXiv Detail & Related papers (2024-02-05T12:16:28Z)
- Towards Hardware-Specific Automatic Compression of Neural Networks [0.0]
Pruning and quantization are the major approaches for compressing neural networks today.
Effective compression policies consider the influence of the specific hardware architecture on the compression methods used.
We propose an algorithmic framework called Galen to search for such policies via reinforcement learning, utilizing both pruning and quantization.
arXiv Detail & Related papers (2022-12-15T13:34:02Z)
- A Machine Learning Framework for Distributed Functional Compression over Wireless Channels in IoT [13.385373310554327]
IoT devices generate enormous amounts of data, and together with state-of-the-art machine learning techniques they will revolutionize cyber-physical systems.
Traditional cloud-based methods that focus on transferring data to a central location either for training or inference place enormous strain on network resources.
We develop, to the best of our knowledge, the first machine learning framework for distributed functional compression over both the Gaussian Multiple Access Channel (GMAC) and AWGN channels.
arXiv Detail & Related papers (2022-01-24T06:38:39Z)
- ProgFed: Effective, Communication, and Computation Efficient Federated Learning by Progressive Training [65.68511423300812]
We propose ProgFed, a progressive training framework for efficient and effective federated learning.
ProgFed inherently reduces computation and two-way communication costs while maintaining the strong performance of the final models.
Our results show that ProgFed converges at the same rate as standard training on full models.
arXiv Detail & Related papers (2021-10-11T14:45:00Z)
- Supervised Compression for Resource-constrained Edge Computing Systems [26.676557573171618]
Full-scale deep neural networks are often too resource-intensive in terms of energy and storage.
This paper adopts ideas from knowledge distillation and neural image compression to compress intermediate feature representations more efficiently.
It achieves better supervised rate-distortion performance while also maintaining smaller end-to-end latency.
arXiv Detail & Related papers (2021-08-21T11:10:29Z)
- Analyzing and Mitigating JPEG Compression Defects in Deep Learning [69.04777875711646]
We present a unified study of the effects of JPEG compression on a range of common tasks and datasets.
We show that there is a significant penalty on common performance metrics for high compression.
arXiv Detail & Related papers (2020-11-17T20:32:57Z)
- PowerGossip: Practical Low-Rank Communication Compression in Decentralized Deep Learning [62.440827696638664]
We introduce a simple algorithm that directly compresses the model differences between neighboring workers.
Inspired by PowerSGD for centralized deep learning, this algorithm uses power iteration steps to maximize the information transferred per bit (see the sketch just below this entry).
arXiv Detail & Related papers (2020-08-04T09:14:52Z)
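As a hedged illustration of the power-step idea, and under the assumption of a rank-1 exchange (the authors' implementation may differ), two neighboring workers can approximate their model difference with one power-iteration step and exchange only the two factor vectors; all names below are illustrative.

```python
# Hedged rank-1 sketch in the spirit of PowerGossip: compress the difference
# between two workers' weight matrices with one power-iteration step.
import numpy as np

def power_compress(diff: np.ndarray, q: np.ndarray):
    """One power step: approximate diff (m x n) by the rank-1 product p q_new^T."""
    p = diff @ q                      # m floats, sent to the neighbor
    norm = np.linalg.norm(p)
    if norm > 0:
        p = p / norm
    q_new = diff.T @ p                # n floats, also sent
    return p, q_new                   # m + n floats instead of m * n

# Usage: workers exchange p and q_new and reuse q_new as next round's warm start.
rng = np.random.default_rng(1)
x_a = rng.normal(size=(64, 32))       # worker A's weight matrix
x_b = rng.normal(size=(64, 32))       # worker B's weight matrix
q = rng.normal(size=32)               # warm-start vector, kept across rounds
p, q_new = power_compress(x_a - x_b, q)
approx = np.outer(p, q_new)           # rank-1 estimate of x_a - x_b
```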
- ALF: Autoencoder-based Low-rank Filter-sharing for Efficient Convolutional Neural Networks [63.91384986073851]
We propose the autoencoder-based low-rank filter-sharing technique (ALF).
ALF shows a reduction of 70% in network parameters, 61% in operations and 41% in execution time, with minimal loss in accuracy.
arXiv Detail & Related papers (2020-07-27T09:01:22Z)
- Wireless Communications for Collaborative Federated Learning [160.82696473996566]
Internet of Things (IoT) devices may not be able to transmit their collected data to a central controller for training machine learning models.
Google's seminal FL algorithm requires all devices to be directly connected with a central controller.
This paper introduces a novel FL framework, called collaborative FL (CFL), which enables edge devices to implement FL with less reliance on a central controller.
arXiv Detail & Related papers (2020-06-03T20:00:02Z)
- A flexible framework for communication-efficient machine learning: from HPC to IoT [13.300503079779952]
Communication efficiency is now needed in a variety of different system architectures.
We propose a flexible framework which adapts the compression level to the true gradient at each iteration.
Our framework is easy to adapt from one technology to the next by modeling how the communication cost depends on the compression level for the specific technology.
arXiv Detail & Related papers (2020-03-13T16:49:08Z)
- Ternary Compression for Communication-Efficient Federated Learning [17.97683428517896]
Federated learning provides a potential solution to privacy-preserving and secure machine learning.
We propose a ternary federated averaging protocol (T-FedAvg) to reduce the upstream and downstream communication of federated learning systems.
Our results show that the proposed T-FedAvg is effective in reducing communication costs and can even achieve slightly better performance on non-IID data (a ternarization sketch follows this entry).
arXiv Detail & Related papers (2020-03-07T11:55:34Z)
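Here is a minimal sketch of ternary update compression in the spirit of T-FedAvg, assuming a mean-magnitude threshold and a single shared scale per tensor (the protocol's actual quantizer may differ); the function names and threshold factor are illustrative.

```python
# Hedged sketch of ternary compression: each coordinate of an update is
# mapped to {-1, 0, +1} codes plus one float scale, so it can be sent with
# about 1.6 bits instead of 32.
import numpy as np

def ternarize(update: np.ndarray, thresh_factor: float = 0.7):
    """Map an update to int8 codes in {-1, 0, +1} and a single float scale."""
    delta = thresh_factor * np.mean(np.abs(update))   # sparsifying threshold
    codes = np.zeros(update.shape, dtype=np.int8)
    codes[update > delta] = 1
    codes[update < -delta] = -1
    mask = codes != 0
    scale = float(np.mean(np.abs(update[mask]))) if mask.any() else 0.0
    return codes, np.float32(scale)

def deternarize(codes: np.ndarray, scale) -> np.ndarray:
    """Reconstruct the update as scale * codes."""
    return codes.astype(np.float32) * scale

# Usage: ternarize a simulated update and inspect its sparsity.
rng = np.random.default_rng(2)
w = rng.normal(scale=0.05, size=1000).astype(np.float32)
codes, scale = ternarize(w)
w_hat = deternarize(codes, scale)
print("nonzero fraction:", float(np.mean(codes != 0)))
```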
This list is automatically generated from the titles and abstracts of the papers on this site.