A Fast Quantum Image Compression Algorithm based on Taylor Expansion
- URL: http://arxiv.org/abs/2502.10684v1
- Date: Sat, 15 Feb 2025 06:03:49 GMT
- Title: A Fast Quantum Image Compression Algorithm based on Taylor Expansion
- Authors: Vu Tuan Hai, Huynh Ho Thi Mong Trinh, Pham Hoai Luan,
- Abstract summary: In this study, we upgrade a quantum image compression algorithm within parameterized quantum circuits.
Our approach encodes image data as unitary operator parameters and applies the quantum compilation algorithm to emulate the encryption process.
Experimental results on benchmark images, including Lenna and Cameraman, show that our method achieves up to 86% reduction in the number of iterations.
- Score: 0.0
- Abstract: With the increasing demand for storing images, traditional image compression methods face challenges in balancing compressed size against image quality. However, hybrid quantum-classical models can overcome this weakness by exploiting the advantages of qubits. In this study, we upgrade a quantum image compression algorithm built on parameterized quantum circuits. Our approach encodes image data as unitary operator parameters and applies the quantum compilation algorithm to emulate the encryption process. By utilizing a first-order Taylor expansion, we significantly reduce both the computational cost and the loss compared with the previous version. Experimental results on benchmark images, including Lenna and Cameraman, show that our method achieves up to an 86% reduction in the number of iterations while maintaining a lower compression loss, especially for high-resolution images. The results confirm that the proposed algorithm provides an efficient and scalable image compression mechanism, making it a promising candidate for future image processing applications.
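To make the idea above concrete, here is a minimal NumPy sketch, not the authors' implementation: pixel intensities become RY rotation angles (the "unitary operator parameters"), a smaller parameterized state is trained to match the pixel-encoded state, and a first-order Taylor expansion of the loss predicts the effect of each update without an extra circuit evaluation. The RY product encoding, infidelity loss, finite-difference gradient, and parameter sharing are all illustrative assumptions.
```python
import numpy as np

def ry(theta):
    # Single-qubit RY rotation matrix.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def product_state(angles):
    # |psi> = RY(a_1)|0> (x) ... (x) RY(a_n)|0>.
    state = np.array([1.0])
    for a in angles:
        state = np.kron(state, ry(a) @ np.array([1.0, 0.0]))
    return state

# Toy "image": 4 pixel intensities in [0, 1] mapped to angles in [0, pi].
pixels = np.array([0.2, 0.8, 0.5, 0.9])
target = product_state(np.pi * pixels)

# Compressed ansatz: 2 trainable angles, each shared by 2 qubits.
def ansatz(phi):
    return product_state([phi[0], phi[0], phi[1], phi[1]])

def loss(phi):
    # Infidelity between the compressed state and the pixel-encoded state.
    return 1.0 - np.abs(target @ ansatz(phi)) ** 2

phi, lr, eps = np.zeros(2), 0.5, 1e-6
for step in range(60):
    base = loss(phi)
    # Finite-difference gradient (a stand-in for parameter-shift rules on hardware).
    grad = np.array([(loss(phi + eps * np.eye(2)[k]) - base) / eps for k in range(2)])
    # First-order Taylor prediction of the post-update loss: no extra evaluation needed.
    predicted = base - lr * grad @ grad
    phi -= lr * grad
    if step % 15 == 0:
        print(f"step {step:2d}  loss {loss(phi):.4f}  Taylor estimate {predicted:.4f}")
```
On hardware, the gradient would come from parameter-shift rules rather than finite differences; the first-order estimate is the kind of shortcut that trades extra circuit evaluations for a cheap linear prediction, in the spirit of the abstract's claimed reduction in iterations.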
Related papers
- DeepHQ: Learned Hierarchical Quantizer for Progressive Deep Image Coding [27.875207681547074]
Progressive image coding (PIC) aims to compress an image at multiple quality levels into a single bitstream.
Research on neural network (NN)-based PIC is in its early stages.
We propose an NN-based progressive coding method that, for the first time, uses learned quantization step sizes for each quantization layer (a toy sketch of per-layer step sizes follows this entry).
arXiv Detail & Related papers (2024-08-22T06:32:53Z)
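The knob DeepHQ learns is illustrated by this toy NumPy sketch: each quantization layer has its own step size, which controls the distortion/rate balance. The latent values, the fixed step sizes, and the symbol-count rate proxy are invented for demonstration; the learning loop and entropy coding are omitted.
```python
import numpy as np

rng = np.random.default_rng(0)
latents = [rng.normal(size=64), rng.normal(size=64)]  # two "quantization layers"
steps = [0.5, 0.1]                                    # per-layer step sizes (fixed here, learned in DeepHQ)

for layer, (y, delta) in enumerate(zip(latents, steps)):
    y_hat = np.round(y / delta) * delta               # uniform quantization with this layer's step
    mse = float(np.mean((y - y_hat) ** 2))            # distortion introduced by this step size
    symbols = len(np.unique(np.round(y / delta)))     # crude proxy for the rate
    print(f"layer {layer}: step={delta}  mse={mse:.4f}  distinct symbols={symbols}")
```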
- MISC: Ultra-low Bitrate Image Semantic Compression Driven by Large Multimodal Model [78.4051835615796]
This paper proposes a method called Multimodal Image Semantic Compression.
It consists of an LMM encoder for extracting the semantic information of the image, a map encoder to locate the regions corresponding to that semantic information, an image encoder that generates an extremely compressed bitstream, and a decoder that reconstructs the image from the above information.
It can achieve optimal consistency and perception results while saving roughly 50%, which gives it strong potential applications in the next generation of storage and communication.
arXiv Detail & Related papers (2024-02-26T17:11:11Z)
- Efficient quantum image representation and compression circuit using zero-discarded state preparation approach [9.653976364051564]
A zero-discarded state connection novel enhanced quantum representation (ZSCNEQR) circuit is introduced to further reduce complexity.
The proposed method requires 11.76% fewer qubits than the most recent existing method.
arXiv Detail & Related papers (2023-06-22T02:18:56Z)
- Advance quantum image representation and compression using DCTEFRQI approach [0.5735035463793007]
We have proposed a DCTEFRQI (Direct Cosine Transform Efficient Flexible Representation of Quantum Image) algorithm to represent and compress gray images efficiently.
The objective of this work is to represent and compress gray images of various sizes on a quantum computer by using the DCT (Discrete Cosine Transform) and the EFRQI (Efficient Flexible Representation of Quantum Image) approach together (a classical DCT sketch follows this entry).
arXiv Detail & Related papers (2022-08-30T13:54:09Z)
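A hedged, classical-side sketch of the DCT step that DCTEFRQI pairs with the EFRQI quantum encoding: transform a toy 8x8 block, keep only the largest coefficients, and reconstruct. The block contents and the keep-count are arbitrary, and the quantum-representation half of the method is not shown.
```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(1)
block = rng.integers(0, 256, size=(8, 8)).astype(float)   # toy gray-image block

coeffs = dctn(block, norm="ortho")                         # 2D DCT of the block
keep = 16                                                  # keep 16 of 64 coefficients
threshold = np.sort(np.abs(coeffs).ravel())[-keep]
compressed = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)

recon = idctn(compressed, norm="ortho")                    # reconstruct from the kept coefficients
print("kept coefficients:", np.count_nonzero(compressed))
print("mean absolute error:", np.mean(np.abs(block - recon)))
```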
- Crowd Counting on Heavily Compressed Images with Curriculum Pre-Training [90.76576712433595]
Applying lossy compression to images processed by deep neural networks can lead to significant accuracy degradation.
Inspired by the curriculum learning paradigm, we present a novel training approach called curriculum pre-training (CPT) for crowd counting on compressed images.
arXiv Detail & Related papers (2022-08-15T08:43:21Z)
- Wavelet Feature Maps Compression for Image-to-Image CNNs [3.1542695050861544]
We propose a novel approach for compressing high-resolution activation maps, integrated with point-wise convolutions.
We achieve compression rates equivalent to 1-4 bit activation quantization with relatively small and much more graceful degradation in performance.
arXiv Detail & Related papers (2022-05-24T20:29:19Z)
- Neural JPEG: End-to-End Image Compression Leveraging a Standard JPEG Encoder-Decoder [73.48927855855219]
We propose a system that learns to improve the encoding performance by enhancing its internal neural representations on both the encoder and decoder ends.
Experiments demonstrate that our approach successfully improves the rate-distortion performance over JPEG across various quality metrics.
arXiv Detail & Related papers (2022-01-27T20:20:03Z)
- Modeling Image Quantization Tradeoffs for Optimal Compression [0.0]
Lossy compression algorithms target tradeoffs by quantizing high-frequency data to increase compression rates.
We propose a new method of optimizing quantization tables using Deep Learning and a minimax loss function.
arXiv Detail & Related papers (2021-12-14T07:35:22Z)
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs, including quantization, quantization-aware retraining, and entropy coding (a simplified coordinate-regression sketch follows this entry).
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
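As a rough intuition pump for INR-based compression, here is a deliberately simplified sketch: pixel values are regressed from Fourier features of their (x, y) coordinates, and "compression" amounts to storing the fitted weights instead of the pixels. A real INR uses an MLP, and the paper's pipeline adds quantization, quantization-aware retraining, and entropy coding, none of which appear here; the image, feature width, and frequency scale are arbitrary.
```python
import numpy as np

rng = np.random.default_rng(2)
H = W = 16
img = rng.random((H, W))                            # toy grayscale image in [0, 1]

ys, xs = np.mgrid[0:H, 0:W]
coords = np.stack([ys.ravel() / H, xs.ravel() / W], axis=1)   # (H*W, 2) normalized coordinates

# Random Fourier features of the coordinates; B could be regenerated from a seed.
B = rng.normal(scale=4.0, size=(2, 32))
feats = np.concatenate([np.sin(2 * np.pi * coords @ B),
                        np.cos(2 * np.pi * coords @ B)], axis=1)  # (H*W, 64)

w, *_ = np.linalg.lstsq(feats, img.ravel(), rcond=None)           # "fit" the representation
recon = (feats @ w).reshape(H, W)

print("stored parameters:", w.size, "vs pixels:", img.size)
mse = np.mean((img - recon) ** 2)
print("reconstruction PSNR (dB):", 10 * np.log10(1.0 / mse))
```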
- Modeling Lost Information in Lossy Image Compression [72.69327382643549]
Lossy image compression is one of the most commonly used operators for digital images.
We propose a novel invertible framework called Invertible Lossy Compression (ILC) to largely mitigate the information loss problem.
arXiv Detail & Related papers (2020-06-22T04:04:56Z)
- Quantization Guided JPEG Artifact Correction [69.04777875711646]
We develop a novel architecture for artifact correction that uses the JPEG file's quantization matrix (a snippet for reading these tables follows this entry).
This allows our single model to achieve state-of-the-art performance over models trained for specific quality settings.
arXiv Detail & Related papers (2020-04-17T00:10:08Z)
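For a quick look at the signal this model conditions on, Pillow can read a JPEG's quantization tables directly; the path below is a placeholder, and the artifact-correction network itself is not shown.
```python
from PIL import Image

img = Image.open("photo.jpg")        # placeholder path to any JPEG file
for table_id, table in img.quantization.items():
    print(f"table {table_id}: {len(table)} entries, first 8: {list(table)[:8]}")
```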