Metaheuristic-based Energy-aware Image Compression for Mobile App
Development
- URL: http://arxiv.org/abs/2212.06313v2
- Date: Thu, 20 Apr 2023 14:38:48 GMT
- Title: Metaheuristic-based Energy-aware Image Compression for Mobile App
Development
- Authors: Seyed Jalaleddin Mousavirad, Luís A. Alexandre
- Abstract summary: We propose a novel objective function for population-based JPEG image compression.
Second, to tackle the lack of comprehensive coverage, we suggest a novel representation.
Third, we provide a comprehensive benchmark on 22 state-of-the-art and recently-introduced PBMH algorithms.
- Score: 1.933681537640272
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The JPEG standard is widely used in different image processing applications.
One of the main components of the JPEG standard is the quantisation table (QT)
since it plays a vital role in the image properties such as image quality and
file size. In recent years, several efforts based on population-based
metaheuristic (PBMH) algorithms have been performed to find the proper QT(s)
for a specific image, although they do not take the user's preferences into
account in advance. Take, for example, an Android developer who prefers a
small image file, while the optimisation process yields a high-quality image
with a huge file size. Another pitfall of the current works is a lack of
comprehensive coverage, meaning that the QT(s) cannot provide all possible
combinations of file size and quality. Therefore, this paper aims to
propose three distinct contributions. First, to incorporate the user's
preferences into the compression process, the file size of the output image
can be controlled by the user in advance. To this end, we propose a novel
objective function for
population-based JPEG image compression. Second, to tackle the lack of
comprehensive coverage, we suggest a novel representation. Our proposed
representation can not only provide more comprehensive coverage but also find
the proper value for the quality factor for a specific image without any
background knowledge. Both changes in representation and objective function are
independent of the search strategies and can be used with any type of
population-based metaheuristic (PBMH) algorithm. Therefore, as the third
contribution, we also provide a comprehensive benchmark on 22 state-of-the-art
and recently-introduced PBMH algorithms. Our extensive experiments on
different benchmark images, evaluated with different criteria, show that our
novel formulation for JPEG image compression works effectively.
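Because both the new representation and the new objective function are independent
of the search strategy, the core idea can be pictured with a small, hedged sketch.
The snippet below is not the authors' code: it assumes a simplified candidate that
encodes only the JPEG quality factor (the paper's representation goes further),
uses Pillow and NumPy, and the names fitness and optimise_quality are illustrative.
The fitness sums a distortion term and a penalty for missing a user-specified
target file size, mirroring the user-controlled file-size idea described above.

# Hedged sketch (not the authors' implementation): a file-size-aware fitness
# for population-based JPEG compression, plus a toy search loop standing in
# for any PBMH algorithm. Assumes an RGB Pillow image as input.
import io

import numpy as np
from PIL import Image


def fitness(image, quality, target_bytes, alpha=1.0, beta=1.0):
    """Lower is better: reconstruction error plus a file-size penalty."""
    quality = int(np.clip(quality, 1, 95))
    buf = io.BytesIO()
    image.save(buf, format="JPEG", quality=quality)
    size = buf.tell()                                  # compressed size in bytes
    buf.seek(0)
    decoded = np.asarray(Image.open(buf), dtype=np.float64)
    original = np.asarray(image, dtype=np.float64)
    mse = np.mean((original - decoded) ** 2)               # distortion term
    size_penalty = abs(size - target_bytes) / target_bytes  # user-size term
    return alpha * mse + beta * size_penalty


def optimise_quality(image, target_bytes, pop_size=16, iters=20, seed=0):
    """Toy population-based search over quality factors (PBMH stand-in)."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(1, 96, size=pop_size)               # initial population
    best_q, best_f = int(pop[0]), float("inf")
    for _ in range(iters):
        scores = [fitness(image, int(q), target_bytes) for q in pop]
        i = int(np.argmin(scores))
        if scores[i] < best_f:
            best_q, best_f = int(pop[i]), scores[i]
        # resample the population around the best candidate found so far
        pop = np.clip(best_q + rng.integers(-8, 9, size=pop_size), 1, 95)
    return best_q, best_f


# Usage (hypothetical file name): aim for roughly a 40 kB output.
# img = Image.open("photo.png").convert("RGB")
# q, f = optimise_quality(img, target_bytes=40_000)

Weighting the two terms (alpha, beta) is the knob that lets a developer who cares
mostly about file size, such as the Android example in the abstract, bias the
search toward smaller outputs.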
Related papers
- Beyond Learned Metadata-based Raw Image Reconstruction [86.1667769209103]
Raw images have distinct advantages over sRGB images, e.g., linearity and fine-grained quantization levels.
They are not widely adopted by general users due to their substantial storage requirements.
We propose a novel framework that learns a compact representation in the latent space, serving as metadata.
arXiv Detail & Related papers (2023-06-21T06:59:07Z)
- Raw Image Reconstruction with Learned Compact Metadata [61.62454853089346]
We propose a novel framework to learn a compact representation in the latent space serving as the metadata in an end-to-end manner.
We show how the proposed raw image compression scheme can adaptively allocate more bits to image regions that are important from a global perspective.
arXiv Detail & Related papers (2023-02-25T05:29:45Z)
- High-Perceptual Quality JPEG Decoding via Posterior Sampling [13.238373528922194]
We propose a different paradigm for JPEG artifact correction.
We aim to obtain sharp, detailed, and visually pleasing reconstructions that remain consistent with the compressed input.
Our solution offers a diverse set of plausible and fast reconstructions for a given input with perfect consistency.
arXiv Detail & Related papers (2022-11-21T19:47:59Z)
- Perceptual Quality Assessment for Fine-Grained Compressed Images [38.615746092795625]
We propose a full-reference image quality assessment (FR-IQA) method for compressed images of fine-grained levels.
The proposed method is validated on the fine-grained compression image quality assessment (FGIQA) database.
arXiv Detail & Related papers (2022-06-08T12:56:45Z)
- Variable-Rate Deep Image Compression through Spatially-Adaptive Feature Transform [58.60004238261117]
We propose a versatile deep image compression network based on Spatial Feature Transform (SFT, arXiv:1804.02815).
A single model covers a wide range of compression rates, controlled by arbitrary pixel-wise quality maps.
The proposed framework allows us to perform task-aware image compressions for various tasks.
arXiv Detail & Related papers (2021-08-21T17:30:06Z)
- Learning to Improve Image Compression without Changing the Standard Decoder [100.32492297717056]
We propose learning to improve the encoding performance with the standard decoder.
Specifically, a frequency-domain pre-editing method is proposed to optimize the distribution of DCT coefficients.
We do not modify the JPEG decoder and therefore our approach is applicable when viewing images with the widely used standard JPEG decoder.
arXiv Detail & Related papers (2020-09-27T19:24:42Z)
- Quantization Guided JPEG Artifact Correction [69.04777875711646]
We develop a novel architecture for artifact correction using the JPEG file's quantization matrix.
This allows our single model to achieve state-of-the-art performance over models trained for specific quality settings.
arXiv Detail & Related papers (2020-04-17T00:10:08Z)
- Optimizing JPEG Quantization for Classification Networks [32.20485214224392]
We show that a simple sorted random sampling method can exceed the performance of the standard JPEG Q-table (a hedged sketch of one possible reading of this idea appears after this list).
New Q-tables can improve the compression rate by 10% to 200% at fixed accuracy, or improve accuracy by up to 2% at the same compression rate.
arXiv Detail & Related papers (2020-03-05T19:13:06Z)
- Discernible Image Compression [124.08063151879173]
This paper aims to produce compressed images by pursuing both appearance and perceptual consistency.
Based on the encoder-decoder framework, we propose using a pre-trained CNN to extract features of the original and compressed images.
Experiments on benchmarks demonstrate that images compressed by using the proposed method can also be well recognized by subsequent visual recognition and detection models.
arXiv Detail & Related papers (2020-02-17T07:35:08Z)
- Saliency Driven Perceptual Image Compression [6.201592931432016]
The paper demonstrates that commonly used evaluation metrics such as MS-SSIM and PSNR are inadequate for judging the performance of image compression techniques.
A new metric is proposed, which is learned on perceptual similarity data specific to image compression.
The model not only generates images which are visually better but also gives superior performance for subsequent computer vision tasks.
arXiv Detail & Related papers (2020-02-12T13:43:17Z)
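As a side note to the "Optimizing JPEG Quantization for Classification Networks"
entry above, the sorted random sampling it mentions can be pictured with a small
sketch. This is only an assumption about what such a sampler might look like, not
code from that paper: random quantisation steps are drawn, sorted, and laid out
along the JPEG zig-zag scan so that low-frequency DCT coefficients receive the
smallest steps; zigzag_indices and sorted_random_qtable are illustrative names.

# Hedged illustration (an assumption, not the cited paper's method): sample
# random quantisation steps and sort them along the JPEG zig-zag order so
# that low-frequency DCT coefficients receive the smallest steps.
import numpy as np


def zigzag_indices(n=8):
    """Return (row, col) pairs of an n x n block in zig-zag scan order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))


def sorted_random_qtable(rng, low=1, high=100):
    """Draw 64 random steps, sort ascending, place them along the zig-zag."""
    values = np.sort(rng.integers(low, high + 1, size=64))
    table = np.zeros((8, 8), dtype=int)
    for v, (r, c) in zip(values, zigzag_indices()):
        table[r, c] = int(v)
    return table


rng = np.random.default_rng(0)
print(sorted_random_qtable(rng))   # smallest steps sit in the top-left corner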