Probing Image Compression For Class-Incremental Learning
- URL: http://arxiv.org/abs/2403.06288v1
- Date: Sun, 10 Mar 2024 18:58:14 GMT
- Title: Probing Image Compression For Class-Incremental Learning
- Authors: Justin Yang, Zhihao Duan, Andrew Peng, Yuning Huang, Jiangpeng He,
Fengqing Zhu
- Abstract summary: Continual machine learning (ML) systems rely on storing representative samples, also known as exemplars, within a limited memory constraint to maintain the performance on previously learned data.
In this paper, we explore the use of image compression as a strategy to enhance the buffer's capacity, thereby increasing exemplar diversity.
We introduce a new framework to incorporate image compression for continual ML, including a pre-processing data compression step and an efficient compression rate/algorithm selection method.
- Score: 8.711266563753846
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image compression emerges as a pivotal tool in the efficient handling and
transmission of digital images. Its ability to substantially reduce file size
not only facilitates enhanced data storage capacity but also potentially brings
advantages to the development of continual machine learning (ML) systems, which
learn new knowledge incrementally from sequential data. Continual ML systems
often rely on storing representative samples, also known as exemplars, within a
limited memory budget to maintain performance on previously learned
data. These methods are known as memory replay-based algorithms and have proven
effective at mitigating the detrimental effects of catastrophic forgetting.
Nonetheless, the limited memory buffer size often falls short of adequately
representing the entire data distribution. In this paper, we explore the use of
image compression as a strategy to enhance the buffer's capacity, thereby
increasing exemplar diversity. However, directly using compressed exemplars
introduces domain shift during continual ML, marked by a discrepancy between
compressed training data and uncompressed testing data. Additionally, it is
essential to determine the appropriate compression algorithm and select the
most effective rate for continual ML systems to balance the trade-off between
exemplar quality and quantity. To this end, we introduce a new framework to
incorporate image compression for continual ML, including a pre-processing data
compression step and an efficient compression rate/algorithm selection method.
We conduct extensive experiments on the CIFAR-100 and ImageNet datasets and
show that our method significantly improves image classification accuracy in
continual ML settings.
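As a concrete illustration of the quality/quantity trade-off the abstract describes, the following minimal Python sketch counts how many JPEG-compressed exemplars fit in a fixed byte budget at different quality settings. The codec choice, the budget, and the helper names are illustrative assumptions, not the authors' implementation:

```python
# Sketch: a fixed byte budget holds more exemplars at lower JPEG quality.
# Illustrative only; the paper also considers other codecs and an efficient
# rate/algorithm selection method not reproduced here.
import io
import os
from PIL import Image

def jpeg_bytes(img, quality):
    """Encode a PIL image as JPEG at the given quality setting."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.getvalue()

def buffer_capacity(images, quality, budget_bytes):
    """Count how many exemplars fit in a fixed byte budget at this quality."""
    used = count = 0
    for img in images:
        size = len(jpeg_bytes(img, quality))
        if used + size > budget_bytes:
            break
        used += size
        count += 1
    return count

if __name__ == "__main__":
    # Noise images stand in for real exemplars (e.g. ImageNet crops).
    images = [Image.frombytes("RGB", (224, 224), os.urandom(224 * 224 * 3))
              for _ in range(100)]
    for q in (95, 50, 30):
        print(f"quality={q}: {buffer_capacity(images, q, budget_bytes=1_000_000)} exemplars")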
Related papers
- Scaling Training Data with Lossy Image Compression [8.05574597775852]
In computer vision, images are inherently analog, but are always stored in a digital format using a finite number of bits.
We propose a 'storage scaling law' that describes the joint evolution of test error with sample size and number of bits per image.
We prove that this law holds within a stylized model for image compression, and verify it empirically on two computer vision tasks.
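The summary does not state the law's functional form. Purely as an illustrative assumption, and not the paper's stated law, one plausible shape combines a power-law decay in sample size with a penalty that vanishes as bits per image grow:

```latex
% Illustrative assumption only, not the paper's stated law.
% E(n, b): test error with n samples stored at b bits per image;
% E_inf is the irreducible error; a, c, alpha, beta > 0 are fitted constants.
E(n, b) \approx E_{\infty} + a\, n^{-\alpha} + c\, b^{-\beta}
```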
arXiv Detail & Related papers (2024-07-25T11:19:55Z)
- Learned Image Compression for HE-stained Histopathological Images via Stain Deconvolution [33.69980388844034]
In this paper, we show that the commonly used JPEG algorithm is not best suited for further compression.
We propose Stain Quantized Latent Compression, a novel DL-based histopathology data compression approach.
We show that our approach yields superior performance in a classification downstream task, compared to traditional approaches like JPEG.
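The stain-deconvolution step can be sketched with scikit-image's standard Ruifrok-Johnston transform, which separates an H&E slide into per-stain channels; the paper's learned latent quantization is not reproduced here, and the function names are illustrative:

```python
# Sketch of stain deconvolution: split an H&E image into hematoxylin/eosin
# channels before compression. A real codec would quantize the HED channels.
import numpy as np
from skimage.color import rgb2hed, hed2rgb

def split_stains(rgb):
    """Return per-stain channels (hematoxylin, eosin, DAB residual)."""
    hed = rgb2hed(rgb)  # shape (H, W, 3), one stain per channel
    return hed[..., 0], hed[..., 1], hed[..., 2]

def roundtrip(rgb):
    """Deconvolve and re-compose without quantization (lossless check)."""
    return hed2rgb(rgb2hed(rgb))

if __name__ == "__main__":
    img = np.random.rand(64, 64, 3)  # stand-in for an H&E tile in [0, 1]
    h, e, d = split_stains(img)
    print(h.shape, np.abs(roundtrip(img) - img).max())
```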
arXiv Detail & Related papers (2024-06-18T13:47:17Z)
- Deep learning based Image Compression for Microscopy Images: An Empirical Study [3.915183869199319]
This study analyzes classic and deep learning based image compression methods and their impact on deep learning based image processing models.
To compress images while preserving downstream performance, multiple classical lossy image compression techniques are compared with several AI-based compression models.
We found that AI-based compression techniques largely outperform the classic ones and will minimally affect the downstream label-free task in 2D cases.
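A minimal version of the rate-distortion measurement such a study performs, using JPEG as the classical baseline; the AI-based codecs the paper evaluates are not reproduced, and names are illustrative:

```python
# Sketch: encode at several JPEG qualities and measure rate (bits per pixel)
# against PSNR, the basic comparison underlying codec studies like this one.
import io
import numpy as np
from PIL import Image

def rate_distortion(img, quality):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    decoded = np.asarray(Image.open(io.BytesIO(buf.getvalue())), dtype=np.float64)
    original = np.asarray(img, dtype=np.float64)
    mse = np.mean((original - decoded) ** 2)
    psnr = 10 * np.log10(255.0 ** 2 / max(mse, 1e-12))
    bpp = 8 * len(buf.getvalue()) / (img.width * img.height)
    return bpp, psnr

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = Image.fromarray(rng.integers(0, 256, (128, 128, 3), dtype=np.uint8))
    for q in (10, 50, 90):
        print(q, rate_distortion(img, q))
```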
arXiv Detail & Related papers (2023-11-02T16:00:32Z)
- Extreme Image Compression using Fine-tuned VQGANs [43.43014096929809]
We introduce vector quantization (VQ)-based generative models into the image compression domain.
The codebook learned by the VQGAN model yields strong expressive capacity.
The proposed framework outperforms state-of-the-art codecs in terms of perceptual quality-oriented metrics.
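The core vector-quantization step can be sketched in a few lines: each latent vector is replaced by the index of its nearest codebook entry, so only integer indices need to be stored. The encoder, decoder, and codebook training are omitted, and shapes are illustrative:

```python
# Sketch of the VQ step at the heart of VQGAN-style compression.
import numpy as np

def vq_encode(latents, codebook):
    """latents: (N, D); codebook: (K, D). Returns (N,) nearest-entry indices."""
    d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def vq_decode(indices, codebook):
    """Reconstruct latents by codebook lookup."""
    return codebook[indices]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    codebook = rng.normal(size=(512, 16))   # K=512 entries of dimension 16
    latents = rng.normal(size=(1024, 16))   # e.g. a 32x32 latent grid
    idx = vq_encode(latents, codebook)      # 1024 ints at ~log2(512) bits each
    recon = vq_decode(idx, codebook)
    print(idx.dtype, recon.shape)
```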
arXiv Detail & Related papers (2023-07-17T06:14:19Z)
- A Unified Image Preprocessing Framework For Image Compression [5.813935823171752]
We propose a unified image compression preprocessing framework, called Kuchen, to improve the performance of existing codecs.
The framework consists of a hybrid data labeling system along with a learning-based backbone to simulate personalized preprocessing.
Results demonstrate that modern codecs optimized by our unified preprocessing framework consistently improve the efficiency of state-of-the-art compression.
arXiv Detail & Related papers (2022-08-15T10:41:00Z)
- Crowd Counting on Heavily Compressed Images with Curriculum Pre-Training [90.76576712433595]
Applying lossy compression on images processed by deep neural networks can lead to significant accuracy degradation.
Inspired by the curriculum learning paradigm, we present a novel training approach called curriculum pre-training (CPT) for crowd counting on compressed images.
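A minimal sketch of such a curriculum, assuming JPEG quality as the severity knob; the schedule values are illustrative, not the paper's CPT settings:

```python
# Sketch: begin training on lightly compressed images and progressively
# lower the JPEG quality as epochs advance.
import io
from PIL import Image

CURRICULUM = [(0, 90), (5, 60), (10, 30)]  # (start_epoch, jpeg_quality)

def quality_for_epoch(epoch):
    """Return the quality of the latest curriculum stage already reached."""
    q = CURRICULUM[0][1]
    for start, quality in CURRICULUM:
        if epoch >= start:
            q = quality
    return q

def degrade(img, quality):
    """Round-trip an image through JPEG at the given quality."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return Image.open(io.BytesIO(buf.getvalue()))

if __name__ == "__main__":
    img = Image.new("RGB", (64, 64), (128, 64, 32))
    for epoch in (0, 5, 12):
        q = quality_for_epoch(epoch)
        print(epoch, q, degrade(img, q).size)
```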
arXiv Detail & Related papers (2022-08-15T08:43:21Z)
- Memory Replay with Data Compression for Continual Learning [80.95444077825852]
We propose memory replay with data compression to reduce the storage cost of old training samples.
We extensively validate this across several benchmarks of class-incremental learning and in a realistic scenario of object detection for autonomous driving.
arXiv Detail & Related papers (2022-02-14T10:26:23Z)
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
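A toy version of the INR recipe: overfit a small coordinate MLP to one image, then apply naive 8-bit post-training weight quantization. The paper's quantization-aware retraining and entropy coding are omitted, and the architecture is an assumption:

```python
# Sketch: an MLP maps pixel coordinates to RGB; quantized weights, not
# pixels, become the stored representation.
import torch
import torch.nn as nn

def coordinate_grid(h, w):
    """Return (h*w, 2) normalized (x, y) coordinates in [-1, 1]."""
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    return torch.stack([xs, ys], dim=-1).reshape(-1, 2)

net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 3))
coords = coordinate_grid(32, 32)
target = torch.rand(32 * 32, 3)  # stand-in for a real image's pixels
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):  # overfit the single image
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(coords), target)
    loss.backward()
    opt.step()

# Naive 8-bit post-training weight quantization (no retraining).
with torch.no_grad():
    for p in net.parameters():
        scale = p.abs().max().clamp(min=1e-8) / 127.0
        p.copy_(torch.round(p / scale) * scale)
print(float(nn.functional.mse_loss(net(coords), target)))
```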
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
- Analyzing and Mitigating JPEG Compression Defects in Deep Learning [69.04777875711646]
We present a unified study of the effects of JPEG compression on a range of common tasks and datasets.
We show that there is a significant penalty on common performance metrics under heavy compression.
arXiv Detail & Related papers (2020-11-17T20:32:57Z)
- Modeling Lost Information in Lossy Image Compression [72.69327382643549]
Lossy image compression is one of the most commonly used operators for digital images.
We propose a novel invertible framework called Invertible Lossy Compression (ILC) to largely mitigate the information loss problem.
arXiv Detail & Related papers (2020-06-22T04:04:56Z)
- Discernible Image Compression [124.08063151879173]
This paper aims to produce compressed images by pursuing both appearance and perceptual consistency.
Based on the encoder-decoder framework, we propose using a pre-trained CNN to extract features of the original and compressed images.
Experiments on benchmarks demonstrate that images compressed by using the proposed method can also be well recognized by subsequent visual recognition and detection models.
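The feature-consistency idea can be sketched with an off-the-shelf pre-trained CNN: penalize the distance between features of the original and the compressed image, so a codec tuned with this loss keeps recognition-relevant content. The backbone choice and loss form are assumptions, not the paper's exact design:

```python
# Sketch: MSE between pooled CNN features of original vs. compressed batches.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

backbone = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
features = torch.nn.Sequential(*list(backbone.children())[:-1])  # drop classifier

def perceptual_gap(original, compressed):
    """Feature-space distance between original and compressed images."""
    with torch.no_grad():
        f_orig = features(original).flatten(1)   # (N, 512) reference features
    f_comp = features(compressed).flatten(1)     # gradients flow to the codec
    return F.mse_loss(f_comp, f_orig)

if __name__ == "__main__":
    x = torch.rand(2, 3, 224, 224)
    x_hat = (x + 0.05 * torch.randn_like(x)).clamp(0, 1)  # stand-in decode
    print(float(perceptual_gap(x, x_hat)))
```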
arXiv Detail & Related papers (2020-02-17T07:35:08Z)