Domain Adaptation Regularization for Spectral Pruning
- URL: http://arxiv.org/abs/1912.11853v3
- Date: Tue, 25 Aug 2020 09:08:08 GMT
- Title: Domain Adaptation Regularization for Spectral Pruning
- Authors: Laurent Dillard, Yosuke Shinya, Taiji Suzuki
- Abstract summary: Domain Adaptation (DA) addresses this issue by allowing knowledge learned on one labeled source distribution to be transferred to a target distribution, possibly unlabeled.
We show that our method outperforms an existing compression method studied in the DA setting by a large margin for high compression rates.
Although our work is based on one specific compression method, we also outline some general guidelines for improving compression in the DA setting.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) have recently been achieving state-of-the-art
performance on a variety of computer vision related tasks. However, their
computational cost limits their ability to be implemented in embedded systems
with restricted resources or strict latency constraints. Model compression has
therefore been an active field of research to overcome this issue.
Additionally, DNNs typically require massive amounts of labeled data to be
trained. This represents a second limitation to their deployment. Domain
Adaptation (DA) addresses this issue by allowing knowledge learned on one
labeled source distribution to be transferred to a target distribution,
possibly unlabeled. In this paper, we investigate possible improvements to
compression methods in the DA setting. We focus on a compression method that was
previously developed in the context of a single data distribution and show
that, with a careful choice of data to use during compression and additional
regularization terms directly related to DA objectives, it is possible to
improve compression results. We also show that our method outperforms an
existing compression method studied in the DA setting by a large margin for
high compression rates. Although our work is based on one specific compression
method, we also outline some general guidelines for improving compression in
the DA setting.
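To make the idea concrete, below is a minimal sketch of data-driven pruning in the spirit of spectral pruning, with a simple domain-alignment penalty standing in for the DA-oriented regularization terms described above. The variance-based scoring, the `alpha`/`beta` hyperparameters, and the function names are illustrative assumptions, not the authors' implementation (which selects nodes via the layer's empirical covariance spectrum).

```python
# Illustrative sketch only: prune hidden nodes using statistics computed on a
# mixture of source and target activations, minus a crude domain-shift penalty.
import numpy as np

rng = np.random.default_rng(0)
acts_src = rng.normal(size=(512, 128))  # stand-in source-domain activations
acts_tgt = rng.normal(size=(512, 128))  # stand-in target-domain activations

def node_scores(acts_src, acts_tgt, alpha=0.5, beta=0.1):
    # alpha mixes source/target second moments (which data to compress on);
    # beta penalizes nodes whose mean activation shifts across domains,
    # a stand-in for regularization tied to DA objectives.
    energy = alpha * np.var(acts_src, axis=0) + (1 - alpha) * np.var(acts_tgt, axis=0)
    shift = (acts_src.mean(axis=0) - acts_tgt.mean(axis=0)) ** 2
    return energy - beta * shift

def prune(acts_src, acts_tgt, keep_ratio=0.25):
    scores = node_scores(acts_src, acts_tgt)
    k = max(1, int(keep_ratio * scores.size))
    return np.sort(np.argsort(scores)[-k:])  # indices of nodes to keep

kept = prune(acts_src, acts_tgt)
print(f"kept {kept.size} of {acts_src.shape[1]} nodes")
```

The sketch only conveys the two levers the abstract emphasizes: which data the compression statistics are computed on, and an added regularizer tied to the DA objective.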
Related papers
- ODDN: Addressing Unpaired Data Challenges in Open-World Deepfake Detection on Online Social Networks
We propose the open-world deepfake detection network (ODDN), which comprises open-world data aggregation (ODA) and compression-discard gradient correction (CGC).
ODA effectively aggregates correlations between compressed and raw samples through both fine-grained and coarse-grained analyses.
CGC applies a compression-discard gradient correction to further enhance performance across diverse compression methods in online social networks (OSNs).
arXiv Detail & Related papers (2024-10-24T12:32:22Z)
- Sparse $L^1$-Autoencoders for Scientific Data Compression
We introduce effective data compression methods by developing autoencoders using high-dimensional latent spaces that are $L^1$-regularized.
We show how these information-rich latent spaces can be used to mitigate blurring and other artifacts, yielding highly effective compression methods for scientific data.
arXiv Detail & Related papers (2024-05-23T07:48:00Z)
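As a hedged illustration of the $L^1$ idea above, here is a sketch of an autoencoder trained with an $L^1$ penalty on a latent code that is wider than the input; the layer sizes and `l1_weight` are arbitrary assumptions, not values from the paper.

```python
# Sketch: autoencoder with an L1-regularized, high-dimensional latent space.
# Sizes and the l1_weight coefficient are illustrative, not from the paper.
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    def __init__(self, d_in=1024, d_latent=2048):
        super().__init__()
        # Latent space is wider than the input; sparsity does the compressing.
        self.enc = nn.Sequential(nn.Linear(d_in, d_latent), nn.ReLU())
        self.dec = nn.Linear(d_latent, d_in)

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

model = SparseAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
l1_weight = 1e-4  # strength of the sparsity penalty (assumed)

x = torch.randn(64, 1024)  # stand-in batch of scientific data
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x) + l1_weight * z.abs().mean()
loss.backward()
opt.step()
```

Only the latent entries that survive the penalty need to be stored, which is where the compression comes from.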
- Data-Aware Gradient Compression for DML in Communication-Constrained Mobile Computing
This work derives the convergence rate of distributed machine learning with non-uniform compression.
We propose DAGC-R, which assigns conservative compression to workers handling larger data volumes.
Our experiments confirm that DAGC-A and DAGC-R can speed up training by up to 16.65% and 25.43%, respectively.
arXiv Detail & Related papers (2023-11-13T13:24:09Z)
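A minimal sketch of the underlying idea, assuming a top-$k$ sparsifier and a rule that gives workers holding more data a milder (more conservative) compression ratio; the assignment rule and constants are guesses for illustration, not the DAGC-R policy.

```python
# Sketch: data-aware gradient compression. Workers holding more data get a
# larger top-k budget (less aggressive compression). Purely illustrative.
import numpy as np

def topk_compress(grad, ratio):
    """Keep the `ratio` fraction of largest-magnitude entries, zero the rest."""
    k = max(1, int(ratio * grad.size))
    out = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    out[idx] = grad[idx]
    return out

def assign_ratios(data_volumes, base=0.01, cap=0.1):
    """Hypothetical rule: compression ratio grows with a worker's data share."""
    shares = np.asarray(data_volumes, dtype=float)
    shares /= shares.sum()
    return np.clip(base + shares, base, cap)

rng = np.random.default_rng(0)
grads = [rng.normal(size=10_000) for _ in range(4)]   # one gradient per worker
ratios = assign_ratios([100, 400, 1000, 4000])        # uneven data volumes
compressed = [topk_compress(g, r) for g, r in zip(grads, ratios)]
print([f"{r:.3f}" for r in ratios])
```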
- Optimal Compression of Unit Norm Vectors in the High Distortion Regime
We investigate how to compress a unit norm vector into the minimum number of bits while still allowing an acceptable level of distortion in recovery.
Our study considers both biased and unbiased compression methods and determines the optimal compression rates.
While the results are a mix of new and known, they are compiled in this paper for completeness.
arXiv Detail & Related papers (2023-07-16T04:23:57Z)
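As one well-known baseline in this design space (not the paper's optimal scheme), here is a sketch of an unbiased stochastic quantizer that sends roughly one low-bit (ternary) symbol per coordinate of a unit-norm vector.

```python
# Sketch: low-bit stochastic quantization of a unit-norm vector (QSGD-style,
# one quantization level). Unbiased: E[quantize(x)] == x. A standard baseline,
# not the optimal scheme derived in the paper.
import numpy as np

def quantize_1bit(x, rng):
    """Each coordinate becomes sign(x_i) with prob |x_i|, else 0 (||x|| = 1)."""
    keep = rng.random(x.size) < np.abs(x)
    return np.sign(x) * keep

rng = np.random.default_rng(0)
x = rng.normal(size=64)
x /= np.linalg.norm(x)                      # unit-norm input

avg = np.mean([quantize_1bit(x, rng) for _ in range(20_000)], axis=0)
print("max |bias| over coords:", np.abs(avg - x).max())          # near zero
print("distortion ||q - x||^2 (one draw):",
      np.linalg.norm(quantize_1bit(x, rng) - x) ** 2)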
- Unrolled Compressed Blind-Deconvolution
Sparse multichannel blind deconvolution (S-MBD) arises frequently in many engineering applications such as radar/sonar/ultrasound imaging.
We propose a compression method that enables blind recovery from far fewer measurements than the full received time-domain signal.
arXiv Detail & Related papers (2022-09-28T15:16:58Z)
- Implicit Neural Representations for Image Compression
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
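To illustrate the INR idea, here is a minimal sketch that overfits a tiny MLP to a single image, mapping pixel coordinates to RGB values; the stored network weights then act as the compressed image. The architecture and training details are assumptions, and the quantization-aware retraining and entropy coding steps from the pipeline are omitted.

```python
# Sketch: image compression via an implicit neural representation (INR).
# The network f(x, y) -> (r, g, b) is overfit to one image; its weights are
# the "code". Quantization-aware retraining and entropy coding are omitted.
import torch
import torch.nn as nn

H = W = 32
img = torch.rand(H, W, 3)                        # stand-in image in [0, 1]

# Coordinate grid normalized to [-1, 1].
ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                        torch.linspace(-1, 1, W), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
target = img.reshape(-1, 3)

inr = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 3), nn.Sigmoid())
opt = torch.optim.Adam(inr.parameters(), lr=1e-3)

for step in range(200):                          # overfit to this one image
    opt.zero_grad()
    loss = nn.functional.mse_loss(inr(coords), target)
    loss.backward()
    opt.step()
print("final MSE:", loss.item())
```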
- Analyzing and Mitigating JPEG Compression Defects in Deep Learning
We present a unified study of the effects of JPEG compression on a range of common tasks and datasets.
We show that high compression imposes a significant penalty on common performance metrics.
arXiv Detail & Related papers (2020-11-17T20:32:57Z)
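A sketch of the kind of measurement such a study involves: re-encode inputs at decreasing JPEG quality with Pillow and track a metric. The pixel-space MSE here is a placeholder for a real task metric.

```python
# Sketch: measuring how JPEG quality affects a model's inputs.
# Uses Pillow for encoding; the "metric" is a placeholder pixel-space MSE.
import io
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
arr = (rng.random((64, 64, 3)) * 255).astype(np.uint8)
original = Image.fromarray(arr)

for quality in (95, 75, 50, 25, 10):
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)
    size = buf.tell()                       # compressed size in bytes
    buf.seek(0)
    degraded = np.asarray(Image.open(buf), dtype=np.float32)
    # Placeholder "metric": MSE versus the original. In the study this would
    # be a task metric (e.g., detection mAP) on a real dataset.
    mse = np.mean((degraded - arr.astype(np.float32)) ** 2)
    print(f"quality={quality:3d}  bytes={size:6d}  mse={mse:8.2f}")
```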
- Linear Convergent Decentralized Optimization with Compression
Existing decentralized algorithms with compression mainly focus on compressing DGD-type algorithms.
Motivated by primal-dual algorithms, this paper proposes LEAD, the first LinEAr convergent Decentralized algorithm with compression.
arXiv Detail & Related papers (2020-07-01T04:35:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.