Continuous Patch Stitching for Block-wise Image Compression
- URL: http://arxiv.org/abs/2502.16795v1
- Date: Mon, 24 Feb 2025 03:11:59 GMT
- Title: Continuous Patch Stitching for Block-wise Image Compression
- Authors: Zifu Zhang, Shengxi Li, Henan Liu, Mai Xu, Ce Zhu
- Abstract summary: We propose a novel continuous patch stitching (CPS) framework for block-wise image compression. Our CPS framework achieves state-of-the-art performance against existing baselines, whilst requiring less than half of the computing resources of existing models.
- Score: 56.97857167461269
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most recently, learned image compression methods have outpaced traditional hand-crafted standard codecs. However, their inference typically requires the whole image as input, at the cost of heavy computing resources, especially for high-resolution image compression; otherwise, block artefacts can appear when images are compressed block by block with existing learned image compression methods. To address this issue, we propose a novel continuous patch stitching (CPS) framework for block-wise image compression that achieves seamless patch stitching and mathematically eliminates block artefacts, thus significantly reducing the computing resources required to compress images. More specifically, the proposed CPS framework is realised with padding-free operations throughout, together with a newly established parallel overlapping stitching strategy that provides a general upper bound for ensuring continuity. Upon this, we further propose functional residual blocks with even-sized kernels to achieve down-sampling and up-sampling, together with bottleneck residual blocks that retain feature size to increase network depth. Experimental results demonstrate that our CPS framework achieves state-of-the-art performance against existing baselines, whilst requiring less than half of the computing resources of existing models. Our code shall be released upon acceptance.
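To make the overlapping-stitching idea concrete, below is a minimal, illustrative sketch of block-wise processing with linearly blended overlaps, where the blending weights of neighbouring patches sum to one inside each overlap so that no hard block boundary remains. This is a generic overlap-and-blend illustration written for this summary, not the authors' padding-free CPS implementation; the function name `stitch_patches`, the linear ramp weights, and the patch/overlap sizes are illustrative assumptions.

```python
import numpy as np

def stitch_patches(patches, positions, image_shape, patch_size, overlap):
    """Blend overlapping patches so that neighbouring patches' weights sum to 1
    inside each overlap, removing hard block boundaries (no block artefacts)."""
    H, W = image_shape
    out = np.zeros((H, W), dtype=np.float64)
    weight_sum = np.zeros((H, W), dtype=np.float64)

    # 1-D weight profile: linear ramp over the overlap, flat (=1) in the middle.
    ramp = np.ones(patch_size)
    if overlap > 0:
        ramp[:overlap] = np.arange(1, overlap + 1) / (overlap + 1)
        ramp[-overlap:] = ramp[:overlap][::-1]
    w2d = np.outer(ramp, ramp)  # separable 2-D weight window

    for patch, (y, x) in zip(patches, positions):
        out[y:y + patch_size, x:x + patch_size] += patch * w2d
        weight_sum[y:y + patch_size, x:x + patch_size] += w2d

    # Normalising by the accumulated weights keeps image borders correct,
    # where a pixel may be covered by only a single patch.
    return out / np.maximum(weight_sum, 1e-8)


# Toy usage: split a smooth test image into overlapping 8x8 patches,
# "process" each patch independently (identity here), and stitch them back.
img = np.linspace(0.0, 1.0, 32 * 32).reshape(32, 32)
size, ov = 8, 4
stride = size - ov
positions = [(y, x) for y in range(0, 32 - size + 1, stride)
             for x in range(0, 32 - size + 1, stride)]
patches = [img[y:y + size, x:x + size] for (y, x) in positions]
reconstructed = stitch_patches(patches, positions, img.shape, size, ov)
assert np.allclose(reconstructed, img)  # seamless: no visible block boundaries
```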
Related papers
- Hierarchical Semantic Compression for Consistent Image Semantic Restoration [62.97519327310638]
We propose a novel hierarchical semantic compression (HSC) framework that operates purely within intrinsic semantic spaces from generative models. Experimental results demonstrate that the proposed HSC framework achieves state-of-the-art performance on subjective quality and consistency for human vision.
arXiv Detail & Related papers (2025-02-24T03:20:44Z)
- UniCompress: Enhancing Multi-Data Medical Image Compression with Knowledge Distillation [59.3877309501938]
Implicit Neural Representation (INR) networks have shown remarkable versatility due to their flexible compression ratios.
We introduce a codebook containing frequency domain information as a prior input to the INR network.
This enhances the representational power of INR and provides distinctive conditioning for different image blocks.
arXiv Detail & Related papers (2024-05-27T05:52:13Z)
- Streaming Lossless Volumetric Compression of Medical Images Using Gated Recurrent Convolutional Neural Network [0.0]
This paper introduces a hardware-friendly streaming lossless volumetric compression framework.
We propose a gated recurrent convolutional neural network that combines diverse convolutional structures and fusion gate mechanisms.
Our method exhibits robust generalization ability and competitive compression speed.
arXiv Detail & Related papers (2023-11-27T07:19:09Z)
- Exploring Resolution Fields for Scalable Image Compression with Uncertainty Guidance [47.96024424475888]
In this work, we explore the potential of resolution fields in scalable image compression.
We propose the reciprocal pyramid network (RPN) that fulfills the need for more adaptable and versatile compression.
Experiments show the superiority of RPN against existing classical and deep learning-based scalable codecs.
arXiv Detail & Related papers (2023-06-15T08:26:24Z)
- Random-Access Neural Compression of Material Textures [1.2971248363246106]
We propose a novel neural compression technique specifically designed for material textures.
We unlock two more levels of detail, i.e., 16x more texels, using low compression.
Our method allows on-demand, real-time decompression with random access, enabling compression on disk and memory.
arXiv Detail & Related papers (2023-05-26T17:16:22Z)
- Deep Lossy Plus Residual Coding for Lossless and Near-lossless Image Compression [85.93207826513192]
We propose a unified and powerful deep lossy plus residual (DLPR) coding framework for both lossless and near-lossless image compression.
We solve the joint lossy and residual compression problem in the approach of VAEs.
In the near-lossless mode, we quantize the original residuals to satisfy a given $\ell_\infty$ error bound.
arXiv Detail & Related papers (2022-09-11T12:11:56Z)
- Identity Preserving Loss for Learned Image Compression [0.0]
This work proposes an end-to-end image compression framework that learns domain-specific features to achieve higher compression ratios.
We present a novel Identity Preserving Reconstruction (IPR) loss function which achieves Bits-Per-Pixel (BPP) values that are 38% and 42% of CRF-23 HEVC compression.
We show at-par recognition performance on the LFW dataset with an unseen recognition model while retaining a lower BPP value of 38% of CRF-23 HEVC compression.
arXiv Detail & Related papers (2022-04-22T18:01:01Z)
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
- Learned Block-based Hybrid Image Compression [33.44942603425436]
Recent works on learned image compression perform encoding and decoding processes in a full-resolution manner.
Full-resolution inference often causes the out-of-memory (OOM) problem with limited GPU resources.
This paper provides a learned block-based hybrid image compression framework.
arXiv Detail & Related papers (2020-12-17T12:47:39Z)
- Modeling Lost Information in Lossy Image Compression [72.69327382643549]
Lossy image compression is one of the most commonly used operators for digital images.
We propose a novel invertible framework called Invertible Lossy Compression (ILC) to largely mitigate the information loss problem.
arXiv Detail & Related papers (2020-06-22T04:04:56Z)