HFLIC: Human Friendly Perceptual Learned Image Compression with
Reinforced Transform
- URL: http://arxiv.org/abs/2305.07519v4
- Date: Thu, 18 May 2023 08:52:49 GMT
- Title: HFLIC: Human Friendly Perceptual Learned Image Compression with
Reinforced Transform
- Authors: Peirong Ning, Wei Jiang, Ronggang Wang
- Abstract summary: Current learning-based image compression methods often sacrifice human-friendly compression and require long decoding times.
We propose enhancements to the backbone network and loss function of an existing image compression model, focusing on improving human perception and efficiency.
- Score: 16.173583505483272
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, there has been rapid development in learned image
compression techniques that prioritize rate-distortion-perceptual compression,
preserving fine details even at lower bit-rates. However, current
learning-based image compression methods often sacrifice human-friendly
compression and require long decoding times. In this paper, we propose
enhancements to the backbone network and loss function of an existing image
compression model, focusing on improving human perception and efficiency. Our
proposed approach achieves competitive subjective results compared to
state-of-the-art end-to-end learned image compression methods and classic
methods, while requiring less decoding time and offering human-friendly
compression. Through empirical evaluation, we demonstrate the effectiveness of
our proposed method in achieving outstanding performance, with more than 25%
bit-rate saving at the same subjective quality.
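The abstract attributes the perceptual gains to changes in the backbone and the loss function but does not spell the loss out here. As an illustrative sketch only (not the authors' released code), the Python below shows a common form of rate-distortion-perceptual objective for a learned codec: an estimated bits-per-pixel term from the entropy model's latent likelihoods plus a weighted combination of pixel-wise MSE and an LPIPS perceptual distance. The weights `lmbda` and `beta`, the use of LPIPS, and the CompressAI-style `likelihoods` input are assumptions; the 255² scaling assumes images in [0, 1].

```python
# Minimal sketch of a rate-distortion-perceptual training objective,
# assuming a learned codec that returns a reconstruction x_hat and an
# iterable of latent likelihood tensors from its entropy model.
# LPIPS stands in for the paper's (unspecified) perceptual term;
# lmbda and beta are illustrative weights, not the authors' values.
import math
import torch
import torch.nn.functional as F
import lpips  # pip install lpips

perceptual = lpips.LPIPS(net="alex")  # learned perceptual distance

def rd_perceptual_loss(x, x_hat, likelihoods, lmbda=0.01, beta=1.0):
    """x, x_hat: images in [0, 1]; likelihoods: iterable of latent likelihood tensors."""
    n, _, h, w = x.shape
    num_pixels = n * h * w
    # Rate: estimated bits per pixel from the latent likelihoods.
    bpp = sum(torch.log(l).sum() for l in likelihoods) / (-math.log(2) * num_pixels)
    # Distortion: pixel-wise MSE plus a perceptual distance (LPIPS expects [-1, 1]).
    mse = F.mse_loss(x_hat, x)
    percep = perceptual(2 * x_hat - 1, 2 * x - 1).mean()
    return bpp + lmbda * (255 ** 2 * mse + beta * percep)
```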
Related papers
- Transferable Learned Image Compression-Resistant Adversarial Perturbations [66.46470251521947]
Adversarial attacks can readily disrupt the image classification system, revealing the vulnerability of DNN-based recognition tasks.
We introduce a new pipeline that targets image classification models that utilize learned image compressors as pre-processing modules.
arXiv Detail & Related papers (2024-01-06T03:03:28Z)
- Streaming Lossless Volumetric Compression of Medical Images Using Gated Recurrent Convolutional Neural Network [0.0]
This paper introduces a hardware-friendly streaming lossless volumetric compression framework.
We propose a gated recurrent convolutional neural network that combines diverse convolutional structures and fusion gate mechanisms.
Our method exhibits robust generalization ability and competitive compression speed.
arXiv Detail & Related papers (2023-11-27T07:19:09Z)
- A Unified Image Preprocessing Framework For Image Compression [5.813935823171752]
We propose a unified image compression preprocessing framework, called Kuchen, to improve the performance of existing codecs.
The framework consists of a hybrid data labeling system along with a learning-based backbone to simulate personalized preprocessing.
Results demonstrate that modern codecs optimized by our unified preprocessing framework consistently improve the efficiency of state-of-the-art compression.
arXiv Detail & Related papers (2022-08-15T10:41:00Z)
- Crowd Counting on Heavily Compressed Images with Curriculum Pre-Training [90.76576712433595]
Applying lossy compression on images processed by deep neural networks can lead to significant accuracy degradation.
Inspired by the curriculum learning paradigm, we present a novel training approach called curriculum pre-training (CPT) for crowd counting on compressed images.
arXiv Detail & Related papers (2022-08-15T08:43:21Z)
- Estimating the Resize Parameter in End-to-end Learned Image Compression [50.20567320015102]
We describe a search-free resizing framework that can further improve the rate-distortion tradeoff of recent learned image compression models.
Our results show that our new resizing parameter estimation framework can provide a Bjontegaard-Delta rate (BD-rate) improvement of about 10% against leading perceptual quality engines (see the BD-rate sketch after this list).
arXiv Detail & Related papers (2022-04-26T01:35:02Z)
- ELIC: Efficient Learned Image Compression with Unevenly Grouped Space-Channel Contextual Adaptive Coding [9.908820641439368]
We propose an efficient model, ELIC, to achieve state-of-the-art speed and compression ability.
With superior performance, the proposed model also supports extremely fast preview decoding and progressive decoding.
arXiv Detail & Related papers (2022-03-21T11:19:50Z)
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
- Learned Image Compression for Machine Perception [17.40776913809306]
We develop a framework that produces a compression format suitable for both human perception and machine perception.
We show that representations can be learned that simultaneously optimize for compression and performance on core vision tasks.
arXiv Detail & Related papers (2021-11-03T14:39:09Z)
- Analyzing and Mitigating JPEG Compression Defects in Deep Learning [69.04777875711646]
We present a unified study of the effects of JPEG compression on a range of common tasks and datasets.
We show that there is a significant penalty on common performance metrics for high compression.
arXiv Detail & Related papers (2020-11-17T20:32:57Z)
- Early Exit or Not: Resource-Efficient Blind Quality Enhancement for Compressed Images [54.40852143927333]
Lossy image compression is pervasively conducted to save communication bandwidth, resulting in undesirable compression artifacts.
We propose a resource-efficient blind quality enhancement (RBQE) approach for compressed images.
Our approach can automatically decide to terminate or continue enhancement according to the assessed quality of enhanced images.
arXiv Detail & Related papers (2020-06-30T07:38:47Z)
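The 25% figure in the abstract above and the 10% figure in the resize-parameter entry are bit-rate savings of the kind usually reported as Bjontegaard-Delta rate (BD-rate). As a reference only (not code from any of the listed papers), the sketch below shows the standard BD-rate computation: fit each codec's rate-distortion curve with a cubic polynomial in the log-rate domain, integrate over the overlapping quality range, and report the average percentage rate difference. The function name and the choice of a PSNR-like quality axis are assumptions.

```python
# Hedged sketch of the standard Bjontegaard-Delta rate (BD-rate) calculation:
# the average % bit-rate difference between a test codec and a reference codec
# at equal quality, using cubic fits in the log-rate domain.
import numpy as np

def bd_rate(rate_ref, qual_ref, rate_test, qual_test):
    """rate_*: bit-rates (e.g. bpp); qual_*: a monotone quality score (e.g. PSNR)."""
    log_ref, log_test = np.log(rate_ref), np.log(rate_test)
    # Fit log-rate as a cubic polynomial of quality for each codec.
    p_ref = np.polyfit(qual_ref, log_ref, 3)
    p_test = np.polyfit(qual_test, log_test, 3)
    # Integrate both fits over the overlapping quality interval.
    lo = max(min(qual_ref), min(qual_test))
    hi = min(max(qual_ref), max(qual_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_log_diff = (int_test - int_ref) / (hi - lo)
    return (np.exp(avg_log_diff) - 1) * 100  # negative = bit-rate saving

# Example: a return value of -25.0 corresponds to a 25% bit-rate saving at equal quality.
```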