Learned Image Compression with Text Quality Enhancement
- URL: http://arxiv.org/abs/2402.08643v1
- Date: Tue, 13 Feb 2024 18:20:04 GMT
- Title: Learned Image Compression with Text Quality Enhancement
- Authors: Chih-Yu Lai, Dung Tran, and Kazuhito Koishida
- Abstract summary: We propose to minimize a novel text logit loss designed to quantify the disparity in text between the original and reconstructed images.
Our findings reveal significant enhancements in the quality of reconstructed text upon integration of the proposed loss function with appropriate weighting.
- Score: 14.105456271662328
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Learned image compression has gained widespread popularity for its
efficiency in achieving ultra-low bit rates. Yet, images containing substantial
textual content, particularly screen-content images (SCI), often suffer from
text distortion at such compression levels. To address this, we propose to
minimize a novel text logit loss designed to quantify the disparity in text
between the original and reconstructed images, thereby improving the perceptual
quality of the reconstructed text. Through rigorous experimentation across
diverse datasets and employing state-of-the-art algorithms, our findings reveal
significant enhancements in the quality of reconstructed text upon integration
of the proposed loss function with appropriate weighting. Notably, we achieve a
Bjontegaard delta (BD) rate of -32.64% for Character Error Rate (CER) and
-28.03% for Word Error Rate (WER) on average by applying the text logit loss
for two screenshot datasets. Additionally, we present quantitative metrics
tailored for evaluating text quality in image compression tasks. Our findings
underscore the efficacy and potential applicability of our proposed text logit
loss function across various text-aware image compression contexts.
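The abstract describes the text logit loss only at a high level. As a rough, hedged illustration of the general idea, one can penalize the divergence between the logits a frozen text recognizer produces for the original and the reconstructed image. The recognizer interface, the KL-divergence form, and the loss weights below are assumptions for this sketch, not the authors' exact formulation.
```python
# Minimal sketch of a "text logit loss" (assumptions noted above): compare
# per-character logits from a frozen, differentiable text recognizer run on
# the original and the reconstructed image.
import torch
import torch.nn.functional as F

def text_logit_loss(recognizer, original, reconstructed):
    """KL divergence between per-character logit distributions.

    recognizer:    frozen text-recognition model mapping (B, 3, H, W) images
                   to logits of shape (B, seq_len, num_chars).
    original:      ground-truth images.
    reconstructed: codec output; gradients flow back through it.
    """
    with torch.no_grad():                    # targets come from the original
        target = F.softmax(recognizer(original), dim=-1)
    pred_log = F.log_softmax(recognizer(reconstructed), dim=-1)
    return F.kl_div(pred_log, target, reduction="batchmean")

# Training objective with the text term given "appropriate weighting" as the
# abstract puts it (lambda values here are placeholders):
# loss = rate + lam_d * distortion + lam_t * text_logit_loss(ocr, x, x_hat)
```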
Related papers
- Neural Image Compression with Text-guided Encoding for both Pixel-level and Perceptual Fidelity [18.469136842357095]
We develop a new text-guided image compression algorithm that achieves both high perceptual and pixel-wise fidelity.
By doing so, we avoid decoding based on text-guided generative models.
Our method can achieve high pixel-level and perceptual quality, with either human- or machine-generated captions.
arXiv Detail & Related papers (2024-03-05T13:15:01Z)
- Semantic Ensemble Loss and Latent Refinement for High-Fidelity Neural Image Compression [58.618625678054826]
This study presents an enhanced neural compression method designed for optimal visual fidelity.
We have trained our model with a sophisticated semantic ensemble loss, integrating Charbonnier loss, perceptual loss, style loss, and a non-binary adversarial loss.
Our empirical findings demonstrate that this approach significantly improves the statistical fidelity of neural image compression.
arXiv Detail & Related papers (2024-01-25T08:11:27Z)
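The summary above names the four loss terms but not their weights or backbones. The sketch below is a hedged illustration of such an ensemble; the Gram-matrix style term, the feature-space perceptual term, the non-saturating adversarial stand-in, and all weights are assumptions, not that paper's configuration.
```python
# Illustrative semantic ensemble loss combining Charbonnier, perceptual,
# style, and adversarial terms (weights and backbones are assumptions).
import torch

def charbonnier(x, y, eps=1e-6):
    # Smooth L1-like distortion term.
    return torch.mean(torch.sqrt((x - y) ** 2 + eps ** 2))

def gram(feat):
    # Gram matrix over (B, C, H, W) features, used for the style term.
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def ensemble_loss(x, y, feats_x, feats_y, disc_score_y,
                  w=(1.0, 0.1, 250.0, 1e-3)):
    """feats_x/feats_y: matched feature lists from a frozen backbone (e.g. VGG);
    disc_score_y: discriminator output on the reconstruction y."""
    perceptual = sum(torch.mean((fx - fy) ** 2)
                     for fx, fy in zip(feats_x, feats_y))
    style = sum(torch.mean((gram(fx) - gram(fy)) ** 2)
                for fx, fy in zip(feats_x, feats_y))
    adversarial = -disc_score_y.mean()  # stand-in for the non-binary term
    return (w[0] * charbonnier(x, y) + w[1] * perceptual
            + w[2] * style + w[3] * adversarial)
```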
- ENTED: Enhanced Neural Texture Extraction and Distribution for Reference-based Blind Face Restoration [51.205673783866146]
We present ENTED, a new framework for blind face restoration that aims to restore high-quality and realistic portrait images.
We utilize a texture extraction and distribution framework to transfer high-quality texture features between the degraded input and reference image.
The StyleGAN-like architecture in our framework requires high-quality latent codes to generate realistic images.
arXiv Detail & Related papers (2024-01-13T04:54:59Z)
- Enhancing Scene Text Detectors with Realistic Text Image Synthesis Using Diffusion Models [63.99110667987318]
We present DiffText, a pipeline that seamlessly blends foreground text with the background's intrinsic features.
Even with fewer text instances, our synthesized text images consistently surpass other synthetic data in aiding text detectors.
arXiv Detail & Related papers (2023-11-28T06:51:28Z)
- Perceptual Image Compression with Cooperative Cross-Modal Side Information [53.356714177243745]
We propose a novel deep image compression method with text-guided side information to achieve a better rate-perception-distortion tradeoff.
Specifically, we employ the CLIP text encoder and an effective Semantic-Spatial Aware block to fuse the text and image features.
arXiv Detail & Related papers (2023-11-23T08:31:11Z)
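The internals of that paper's Semantic-Spatial Aware block are not described here, so the sketch below uses plain cross-attention as a stand-in for fusing CLIP text features into a compression latent; the module name, dimensions, and residual wiring are assumptions.
```python
# Hedged sketch of fusing CLIP text features into an image-compression
# latent via cross-attention (a stand-in, not the paper's exact block).
import torch
import torch.nn as nn

class TextImageFusion(nn.Module):
    def __init__(self, img_channels=192, text_dim=512, heads=4):
        super().__init__()
        self.proj = nn.Linear(text_dim, img_channels)
        self.attn = nn.MultiheadAttention(img_channels, heads,
                                          batch_first=True)

    def forward(self, img_feat, text_tokens):
        # img_feat: (B, C, H, W) latent from the analysis transform
        # text_tokens: (B, T, text_dim) from a CLIP text encoder
        b, c, h, w = img_feat.shape
        q = img_feat.flatten(2).transpose(1, 2)  # (B, H*W, C) queries
        kv = self.proj(text_tokens)              # project text to C dims
        fused, _ = self.attn(q, kv, kv)          # image attends to text
        return (q + fused).transpose(1, 2).reshape(b, c, h, w)
```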
- Multi-Modality Deep Network for Extreme Learned Image Compression [31.532613540054697]
We propose a multimodal machine learning method for text-guided image compression, in which semantic information from the text serves as a prior to guide image compression.
In addition, we adopt the image-text attention module and image-request complement module to better fuse image and text features, and propose an improved multimodal semantic-consistent loss to produce semantically complete reconstructions.
arXiv Detail & Related papers (2023-04-26T14:22:59Z)
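As a hedged sketch in the spirit of the semantic-consistent loss named above: one way to encourage semantically complete reconstructions is to keep the reconstruction's embedding close to the guiding text's embedding in a shared image-text space. The use of CLIP-style encoders and the cosine formulation here are assumptions for illustration.
```python
# Hedged sketch of a semantic-consistency loss: pull the reconstruction's
# embedding toward the guiding text embedding in a joint image-text space.
import torch
import torch.nn.functional as F

def semantic_consistent_loss(image_encoder, recon, text_emb):
    """1 - cosine similarity in the joint embedding space.

    image_encoder: frozen image tower of a CLIP-style model.
    recon:         reconstructed images, (B, 3, H, W).
    text_emb:      precomputed text embeddings, (B, D).
    """
    img_emb = F.normalize(image_encoder(recon), dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    return (1.0 - (img_emb * text_emb).sum(dim=-1)).mean()
```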
- Semantic-Preserving Augmentation for Robust Image-Text Retrieval [27.2916415148638]
RVSE consists of novel image-based and text-based augmentation techniques, called semantic-preserving augmentation for image (SPAugI) and text (SPAugT).
Since SPAugI and SPAugT change the original data in a way that preserves its semantic information, the feature extractors are forced to generate semantically aware embedding vectors.
From extensive experiments using benchmark datasets, we show that RVSE outperforms conventional retrieval schemes in terms of image-text retrieval performance.
arXiv Detail & Related papers (2023-03-10T03:50:44Z)
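The internals of SPAugI and SPAugT are not given here; the following is a loose illustration of the semantic-preserving idea, with every specific operation (photometric jitter, blur, synonym swaps) being an assumption rather than that paper's design.
```python
# Hedged illustration of "semantic-preserving" augmentation: perturb
# appearance or surface form while keeping meaning intact.
import random
import torchvision.transforms as T

# Image side: photometric changes that leave scene content intact.
spaug_image = T.Compose([
    T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    T.GaussianBlur(kernel_size=3),
])

def spaug_text(caption: str, synonyms: dict) -> str:
    # Text side: swap one word for a synonym so sentence meaning survives.
    words = caption.split()
    if not words:
        return caption
    i = random.randrange(len(words))
    words[i] = synonyms.get(words[i], words[i])
    return " ".join(words)
```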
- Extreme Generative Image Compression by Learning Text Embedding from Diffusion Models [13.894251782142584]
We propose a generative image compression method that demonstrates the potential of saving an image as a short text embedding.
Our method outperforms other state-of-the-art deep learning methods in terms of both perceptual quality and diversity.
arXiv Detail & Related papers (2022-11-14T22:54:19Z)
- Scene Text Image Super-Resolution via Content Perceptual Loss and Criss-Cross Transformer Blocks [48.81850740907517]
We present TATSR, a Text-Aware Text Super-Resolution framework.
It effectively learns the unique text characteristics using Criss-Cross Transformer Blocks (CCTBs) and a novel Content Perceptual (CP) Loss.
It outperforms state-of-the-art methods in terms of both recognition accuracy and human perception.
arXiv Detail & Related papers (2022-10-13T11:48:45Z)
- Scene Text Image Super-Resolution in the Wild [112.90416737357141]
Low-resolution text images are often seen in natural scenes such as documents captured by mobile phones.
Previous single image super-resolution (SISR) methods are trained on synthetic low-resolution images.
We propose a real scene text SR dataset, termed TextZoom.
It contains paired real low-resolution and high-resolution images captured by cameras with different focal lengths in the wild.
arXiv Detail & Related papers (2020-05-07T09:18:59Z)