DoCoFL: Downlink Compression for Cross-Device Federated Learning
- URL: http://arxiv.org/abs/2302.00543v2
- Date: Thu, 13 Jul 2023 16:03:19 GMT
- Title: DoCoFL: Downlink Compression for Cross-Device Federated Learning
- Authors: Ron Dorfman, Shay Vargaftik, Yaniv Ben-Itzhak, Kfir Y. Levy
- Abstract summary: $\textsf{DoCoFL}$ is a new framework for downlink compression in the cross-device setting.
It offers significant bi-directional bandwidth reduction while achieving accuracy competitive with that of a baseline without any compression.
- Score: 12.363097878376644
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many compression techniques have been proposed to reduce the communication
overhead of Federated Learning training procedures. However, these are
typically designed for compressing model updates, which are expected to decay
throughout training. As a result, such methods are inapplicable to downlink
(i.e., from the parameter server to clients) compression in the cross-device
setting, where heterogeneous clients $\textit{may appear only once}$ during
training and thus must download the model parameters. Accordingly, we propose
$\textsf{DoCoFL}$ -- a new framework for downlink compression in the
cross-device setting. Importantly, $\textsf{DoCoFL}$ can be seamlessly combined
with many uplink compression schemes, rendering it suitable for bi-directional
compression. Through extensive evaluation, we show that $\textsf{DoCoFL}$
offers significant bi-directional bandwidth reduction while achieving accuracy
competitive with that of a baseline without any compression.
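As a rough illustration of the bi-directional setting described in the abstract, the sketch below pairs a downlink compressor applied to the full parameter vector (which, unlike model updates, does not decay over training) with an independent uplink compressor applied to the client's update. This is a hedged toy example, not the $\textsf{DoCoFL}$ algorithm; the function names and the specific quantizers are illustrative assumptions.

```python
# Toy bi-directional compression round in federated learning.
# NOT the DoCoFL algorithm: downlink uses stochastic uniform quantization of
# the full parameter vector, uplink uses 1-bit sign compression of the update.
import numpy as np

def quantize_downlink(params: np.ndarray, bits: int = 4) -> np.ndarray:
    """Stochastic uniform quantization of the full model parameters."""
    lo, hi = params.min(), params.max()
    levels = 2 ** bits - 1
    scaled = (params - lo) / (hi - lo + 1e-12) * levels
    q = np.floor(scaled + np.random.rand(*scaled.shape))  # unbiased stochastic rounding
    return q / levels * (hi - lo) + lo

def compress_uplink(update: np.ndarray) -> np.ndarray:
    """1-bit sign compression of the client's update, rescaled by mean magnitude."""
    return np.sign(update) * np.abs(update).mean()

# One simulated round: the server ships compressed parameters, the client
# trains locally (stubbed with noise here) and returns a compressed update.
server_params = np.random.randn(1000)
client_params = quantize_downlink(server_params, bits=4)
local_update = np.random.randn(1000) * 0.01  # stand-in for local SGD steps
server_params += compress_uplink(local_update)
```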
Related papers
- Lightweight Correlation-Aware Table Compression [58.50312417249682]
$\texttt{Virtual}$ is a framework that integrates seamlessly with existing open formats.
Experiments on data-gov datasets show that $\texttt{Virtual}$ reduces file sizes by up to 40% compared to Apache Parquet.
arXiv Detail & Related papers (2024-10-17T22:28:07Z)
- Lossy and Lossless (L$^2$) Post-training Model Size Compression [12.926354646945397]
We propose a post-training model size compression method that combines lossy and lossless compression in a unified way.
Our method can achieve a stable $10\times$ compression ratio without sacrificing accuracy and a $20\times$ compression ratio with minor accuracy loss in a short time.
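The combination of lossy and lossless stages described above can be pictured with a deliberately simple pipeline: uniform 8-bit quantization (lossy) followed by `zlib` entropy coding of the quantized bytes (lossless). This is only a sketch of the general idea under assumed settings, not the paper's method.

```python
# Minimal lossy-plus-lossless post-training compression sketch (illustrative only).
import numpy as np
import zlib

def compress(weights: np.ndarray, bits: int = 8):
    lo, hi = float(weights.min()), float(weights.max())
    levels = 2 ** bits - 1
    q = np.round((weights - lo) / (hi - lo + 1e-12) * levels).astype(np.uint8)  # lossy
    return zlib.compress(q.tobytes()), (lo, hi, weights.shape)                  # lossless

def decompress(blob: bytes, meta, bits: int = 8) -> np.ndarray:
    lo, hi, shape = meta
    levels = 2 ** bits - 1
    q = np.frombuffer(zlib.decompress(blob), dtype=np.uint8).reshape(shape)
    return q.astype(np.float32) / levels * (hi - lo) + lo

w = np.random.randn(10_000).astype(np.float32)
blob, meta = compress(w)
w_hat = decompress(blob, meta)
print("compressed size ratio:", len(blob) / w.nbytes, "max abs error:", np.abs(w - w_hat).max())
```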
arXiv Detail & Related papers (2023-08-08T14:10:16Z)
- Deep Lossy Plus Residual Coding for Lossless and Near-lossless Image Compression [85.93207826513192]
We propose a unified and powerful deep lossy plus residual (DLPR) coding framework for both lossless and near-lossless image compression.
We solve the joint lossy and residual compression problem using a VAE-based approach.
In the near-lossless mode, we quantize the original residuals to satisfy a given $\ell_\infty$ error bound.
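The near-lossless mode above can be illustrated with the standard uniform-binning argument: mapping an integer residual to the nearest multiple of $2\tau+1$ guarantees an $\ell_\infty$ reconstruction error of at most $\tau$. The snippet below sketches only this guarantee, not the DLPR codec itself.

```python
# Quantize integer residuals under a hard L-infinity error bound tau (illustrative sketch).
import numpy as np

def quantize_residual(residual: np.ndarray, tau: int):
    """Map each integer residual to the nearest multiple of (2*tau + 1)."""
    step = 2 * tau + 1
    index = np.floor_divide(residual + tau, step)  # bin index (the symbol to entropy-code)
    return index, index * step                     # (symbols, reconstruction)

residual = np.random.randint(-50, 51, size=1000)
tau = 2
symbols, recon = quantize_residual(residual, tau)
assert np.abs(residual - recon).max() <= tau       # the near-lossless guarantee
print("distinct symbols:", len(np.unique(symbols)), "max error:", np.abs(residual - recon).max())
```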
arXiv Detail & Related papers (2022-09-11T12:11:56Z)
- L$_0$onie: Compressing COINs with L$_0$-constraints [0.4568777157687961]
Implicit Neural Representations (INR) have motivated research on domain-agnostic compression techniques.
We propose a sparsity-constrained extension of the COIN compression method.
arXiv Detail & Related papers (2022-07-08T22:24:56Z)
- Compressed-VFL: Communication-Efficient Learning with Vertically Partitioned Data [15.85259386116784]
We propose Compressed Vertical Federated Learning (C-VFL) for communication-efficient training on vertically partitioned data.
We show experimentally that C-VFL can reduce communication by over 90% without a significant decrease in accuracy.
arXiv Detail & Related papers (2022-06-16T17:34:07Z)
- ProgFed: Effective, Communication, and Computation Efficient Federated Learning by Progressive Training [65.68511423300812]
We propose ProgFed, a progressive training framework for efficient and effective federated learning.
ProgFed inherently reduces computation and two-way communication costs while maintaining the strong performance of the final models.
Our results show that ProgFed converges at the same rate as standard training on full models.
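A minimal sketch of what progressive training can look like in code is given below. It is an illustrative toy with synthetic data and an arbitrary growth schedule, not the authors' implementation; it assumes PyTorch and uses per-stage auxiliary heads so that early rounds only compute and communicate a prefix of the model.

```python
# Toy progressive training loop: only the first `num_active` stages are trained
# (and would need to be communicated) early on, and more stages are enabled over time.
import torch
import torch.nn as nn

stages = nn.ModuleList([nn.Sequential(nn.Linear(32, 32), nn.ReLU()) for _ in range(4)])
aux_heads = nn.ModuleList([nn.Linear(32, 10) for _ in range(4)])  # lightweight head per stage

def forward_partial(x: torch.Tensor, num_active: int) -> torch.Tensor:
    for stage in stages[:num_active]:
        x = stage(x)
    return aux_heads[num_active - 1](x)

total_rounds, num_stages = 100, len(stages)
for rnd in range(total_rounds):
    num_active = min(num_stages, 1 + rnd * num_stages // total_rounds)  # grow the model over time
    logits = forward_partial(torch.randn(8, 32), num_active)
    loss = nn.functional.cross_entropy(logits, torch.randint(0, 10, (8,)))
    loss.backward()  # gradients (and hence updates) exist only for the active prefix
```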
arXiv Detail & Related papers (2021-10-11T14:45:00Z)
- CD-SGD: Distributed Stochastic Gradient Descent with Compression and Delay Compensation [3.0786359925181315]
Communication overhead is the key challenge for distributed training.
Gradient compression techniques can greatly alleviate the impact of communication overhead.
However, gradient compression introduces extra cost, which delays the next training iteration.
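A common way to offset the extra cost that compression introduces is to keep a local memory of whatever the compressor dropped and add it back before the next step. The sketch below shows that generic error-feedback pattern with top-$k$ sparsification; it is not claimed to be the CD-SGD algorithm, whose delay-compensation scheme is specific to the paper.

```python
# Generic compressed SGD with error feedback (illustrative, not CD-SGD itself).
import numpy as np

def topk_compress(grad: np.ndarray, k: int) -> np.ndarray:
    """Keep only the k largest-magnitude entries of the gradient."""
    out = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    out[idx] = grad[idx]
    return out

dim, k, lr = 1000, 50, 0.1
params = np.random.randn(dim)
error = np.zeros(dim)                     # memory of what compression dropped so far
for step in range(200):
    grad = params.copy()                  # stand-in gradient of f(x) = 0.5 * ||x||^2
    compensated = grad + error            # add back previously dropped mass
    sent = topk_compress(compensated, k)  # only `sent` would be communicated
    error = compensated - sent            # remember the residual for the next step
    params -= lr * sent
print("final parameter norm:", np.linalg.norm(params))
```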
arXiv Detail & Related papers (2021-06-21T01:15:12Z)
- Towards Compact CNNs via Collaborative Compression [166.86915086497433]
We propose a Collaborative Compression scheme, which jointly performs channel pruning and tensor decomposition to compress CNN models.
We achieve 52.9% FLOPs reduction by removing 48.4% parameters on ResNet-50 with only a Top-1 accuracy drop of 0.56% on ImageNet 2012.
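As a back-of-the-envelope illustration of why the two ingredients named above are complementary, the sketch below applies magnitude-based channel pruning followed by a truncated SVD to a single weight matrix. The paper's Collaborative Compression optimizes the two jointly; this toy does not attempt that, and all shapes and ratios are arbitrary assumptions.

```python
# Channel pruning + low-rank decomposition on one weight matrix (illustrative sketch).
import numpy as np

W = np.random.randn(256, 512)                       # out_channels x in_features

# 1) Channel pruning: keep the 75% of output channels with the largest L1 norm.
keep = np.argsort(np.abs(W).sum(axis=1))[-192:]
W_pruned = W[keep]

# 2) Decomposition: truncated SVD of the remaining weights.
U, s, Vt = np.linalg.svd(W_pruned, full_matrices=False)
rank = 64
W_lowrank = (U[:, :rank] * s[:rank]) @ Vt[:rank]    # rank-64 approximation

orig_params = W.size
compressed_params = U[:, :rank].size + rank + Vt[:rank].size
print("parameter ratio:", compressed_params / orig_params,
      "relative error:", np.linalg.norm(W_pruned - W_lowrank) / np.linalg.norm(W_pruned))
```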
arXiv Detail & Related papers (2021-05-24T12:07:38Z)
- Compressed Communication for Distributed Training: Adaptive Methods and System [13.244482588437972]
Communication overhead severely hinders the scalability of distributed machine learning systems.
Recently, there has been a growing interest in using gradient compression to reduce the communication overhead.
In this paper, we first introduce a novel adaptive gradient method with gradient compression.
arXiv Detail & Related papers (2021-05-17T13:41:47Z)
- Learning Scalable $\ell_\infty$-constrained Near-lossless Image Compression via Joint Lossy Image and Residual Compression [118.89112502350177]
We propose a novel framework for learning $\ell_\infty$-constrained near-lossless image compression.
We derive the probability model of the quantized residual by quantizing the learned probability model of the original residual.
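The derivation described above can be pictured concretely: once residuals are binned for near-lossless coding, the probability of each quantized symbol is the total probability mass of the residual values falling in its bin. The snippet below sketches that with a stand-in (not learned) residual distribution; it is an assumption-laden illustration, not the paper's model.

```python
# Derive a quantized-residual distribution from a residual distribution (illustrative).
import numpy as np

tau = 2
step = 2 * tau + 1
support = np.arange(-50, 51)                      # integer residual values
logits = -np.abs(support) / 5.0                   # stand-in for a learned probability model
p_residual = np.exp(logits) / np.exp(logits).sum()

bins = np.floor_divide(support + tau, step)       # bin index of each residual value
symbols = np.unique(bins)
p_symbol = np.array([p_residual[bins == b].sum() for b in symbols])
assert np.isclose(p_symbol.sum(), 1.0)            # still a valid distribution for entropy coding
print("residual symbols:", len(support), "-> quantized symbols:", len(symbols))
```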
arXiv Detail & Related papers (2021-03-31T11:53:36Z)
- GAN Slimming: All-in-One GAN Compression by A Unified Optimization Framework [94.26938614206689]
We propose the first unified optimization framework combining multiple compression means for GAN compression, dubbed GAN Slimming.
We apply GS to compress CartoonGAN, a state-of-the-art style transfer network, by up to 47 times, with minimal visual quality degradation.
arXiv Detail & Related papers (2020-08-25T14:39:42Z)