Restoration of the JPEG Maximum Lossy Compressed Face Images with
Hourglass Block based on Early Stopping Discriminator
- URL: http://arxiv.org/abs/2306.12757v1
- Date: Thu, 22 Jun 2023 09:21:48 GMT
- Title: Restoration of the JPEG Maximum Lossy Compressed Face Images with
Hourglass Block based on Early Stopping Discriminator
- Authors: Jongwook Si and Sungyoung Kim
- Abstract summary: This paper addresses the restoration of JPEG images that have suffered significant loss due to maximum compression, using a GAN-based network method.
The network incorporates two loss functions, LF Loss and HF Loss, to generate natural and high-performance images.
Results show that the blocking phenomenon in lossy compressed images was removed, and recognizable identities were generated.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: When a JPEG image is compressed using a lossy compression method
with a high compression rate, a blocking phenomenon can occur in the image,
making it necessary to restore the image to its original quality. In particular,
restoring compressed images that are unrecognizable presents an innovative
challenge. Therefore, this paper aims to address the restoration of JPEG images
that have suffered significant loss due to maximum compression using a
GAN-based network method. The generator in this network is based on the U-Net
architecture and features a newly presented hourglass structure that can
preserve the characteristics of deep layers. Additionally, the network
incorporates two loss functions, LF Loss and HF Loss, to generate natural and
high-performance images. HF Loss uses a pretrained VGG-16 network and is
configured using a specific layer that best represents features, which can
enhance performance for the high-frequency region. LF Loss, on the other hand,
is used to handle the low-frequency region. These two loss functions facilitate
the generation of images by the generator that can deceive the discriminator
while accurately generating both high and low-frequency regions. The results
show that the blocking phenomenon in lossy compressed images was removed, and
recognizable identities were generated. This study represents a significant
improvement over previous research in terms of image restoration performance.
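For concreteness, below is a minimal PyTorch sketch of the components the abstract describes: a generic hourglass block and the two loss terms. Several choices are assumptions rather than the paper's exact design: the VGG-16 layer used for HF Loss (relu3_3 here; the paper says only "a specific layer that best represents features"), the low-pass filter behind LF Loss (average pooling here), and the internal structure of the hourglass block.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights


class HourglassBlock(nn.Module):
    """Generic hourglass block: downsample, transform, upsample, then
    merge with the input via a skip connection so that deep-layer
    features are preserved. Assumes even spatial dimensions."""

    def __init__(self, channels):
        super().__init__()
        self.down = nn.Conv2d(channels, channels, 3, stride=2, padding=1)
        self.mid = nn.Conv2d(channels, channels, 3, padding=1)
        self.up = nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1)

    def forward(self, x):
        y = F.relu(self.down(x))
        y = F.relu(self.mid(y))
        return x + self.up(y)  # skip connection preserves input features


class HFLoss(nn.Module):
    """High-frequency loss: L1 distance in the feature space of a frozen
    pretrained VGG-16. The cut at features[:16] (relu3_3) is an assumed
    choice; inputs are assumed to be ImageNet-normalized RGB tensors."""

    def __init__(self):
        super().__init__()
        self.vgg = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
        for p in self.vgg.parameters():
            p.requires_grad = False

    def forward(self, fake, real):
        return F.l1_loss(self.vgg(fake), self.vgg(real))


def lf_loss(fake, real, kernel_size=9):
    """Low-frequency loss: L1 distance between low-pass-filtered images,
    using average pooling as a cheap blur (an assumed choice)."""
    def blur(t):
        return F.avg_pool2d(t, kernel_size, stride=1, padding=kernel_size // 2)
    return F.l1_loss(blur(fake), blur(real))
```

In training, the generator objective would combine the adversarial term with weighted LF and HF terms; the weighting between the terms is not reproduced here.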
Related papers
- Study of the gOMP Algorithm for Recovery of Compressed Sensed
Hyperspectral Images
Since the image pixels are not strictly sparse, this work studies a data sparsification pre-processing stage prior to compression to ensure the sparsity of the pixels (a minimal gOMP sketch appears after this list).
arXiv Detail & Related papers (2024-01-26T11:20:11Z)
- Semantic Ensemble Loss and Latent Refinement for High-Fidelity Neural Image Compression
This study presents an enhanced neural compression method designed for optimal visual fidelity.
We have trained our model with a sophisticated semantic ensemble loss, integrating Charbonnier loss, perceptual loss, style loss, and a non-binary adversarial loss.
Our empirical findings demonstrate that this approach significantly improves the statistical fidelity of neural image compression.
arXiv Detail & Related papers (2024-01-25T08:11:27Z)
- Learned Lossless Compression for JPEG via Frequency-Domain Prediction
We propose a novel framework for learned lossless compression of JPEG images.
To enable learning in the frequency domain, DCT coefficients are partitioned into groups to utilize implicit local redundancy.
An autoencoder-like architecture is designed based on weight-shared blocks to realize entropy modeling of the grouped DCT coefficients.
arXiv Detail & Related papers (2023-03-05T13:15:28Z)
- Improving Multi-generation Robustness of Learned Image Compression
We show that learned image compression (LIC) can achieve performance comparable to the first compression of BPG even after 50 rounds of re-encoding, without any change to the network structure.
arXiv Detail & Related papers (2022-10-31T03:26:11Z)
- Convolutional Neural Network (CNN) to reduce construction loss in JPEG compression caused by Discrete Fourier Transform (DFT)
Convolutional Neural Networks (CNNs) have received more attention than most other types of deep neural networks.
In this work, an effective image compression method is proposed using autoencoders.
arXiv Detail & Related papers (2022-08-26T12:46:16Z)
- Reducing Redundancy in the Bottleneck Representation of the Autoencoders
Autoencoders are a type of unsupervised neural network that can be used to solve various tasks.
We propose a scheme to explicitly penalize feature redundancies in the bottleneck representation (see the sketch after this list).
We tested our approach across different tasks: dimensionality reduction using three different datasets, image compression using the MNIST dataset, and image denoising using Fashion-MNIST.
arXiv Detail & Related papers (2022-02-09T18:48:02Z)
- Spatially-Adaptive Image Restoration using Distortion-Guided Networks
We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
arXiv Detail & Related papers (2021-08-19T11:02:25Z)
- Training a Better Loss Function for Image Restoration
We show that a single natural image is sufficient to train a lightweight feature extractor that outperforms state-of-the-art loss functions in single image super resolution.
We propose a novel Multi-Scale Discriminative Feature (MDF) loss comprising a series of discriminators, trained to penalize errors introduced by a generator.
arXiv Detail & Related papers (2021-03-26T17:29:57Z)
- Learning Better Lossless Compression Using Lossy Compression
We leverage the powerful lossy image compression algorithm BPG to build a lossless image compression system.
We model the distribution of the residual with a convolutional neural network-based probabilistic model that is conditioned on the BPG reconstruction.
Finally, the image is stored using the concatenation of the bitstreams produced by BPG and the learned residual coder.
arXiv Detail & Related papers (2020-03-23T11:21:52Z)
- Blur, Noise, and Compression Robust Generative Adversarial Networks
We propose blur, noise, and compression robust GAN (BNCR-GAN) to learn a clean image generator directly from degraded images.
Inspired by NR-GAN, BNCR-GAN uses a multiple-generator model composed of image, blur-kernel, noise, and quality-factor generators.
We demonstrate the effectiveness of BNCR-GAN through large-scale comparative studies on CIFAR-10 and a generality analysis on FFHQ.
arXiv Detail & Related papers (2020-03-17T17:56:22Z)
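As referenced in the gOMP entry above, here is a minimal NumPy sketch of generalized orthogonal matching pursuit, which recovers a sparse vector x from measurements y = Ax by selecting several atoms per iteration and refitting by least squares. The defaults for n_select and n_iter are illustrative, not values from the paper.

```python
import numpy as np


def gomp(A, y, n_select=4, n_iter=10, tol=1e-6):
    """Generalized OMP: each iteration picks the n_select columns of A
    most correlated with the current residual, then refits the signal
    on the accumulated support by least squares."""
    support, x_s = [], np.zeros(0)
    residual = y.astype(float).copy()
    for _ in range(n_iter):
        corr = np.abs(A.T @ residual)
        corr[support] = 0.0                    # ignore atoms already chosen
        new = np.argsort(corr)[-n_select:]
        support = sorted(set(support) | set(new.tolist()))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
        if np.linalg.norm(residual) < tol:
            break
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x
```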
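Likewise, the bottleneck-redundancy entry states only that feature redundancies are explicitly penalized; one plausible form of such a penalty, sketched below, sums the squared off-diagonal entries of the bottleneck feature correlation matrix. This is an assumption, not the paper's exact formulation.

```python
import torch


def redundancy_penalty(z, eps=1e-8):
    """Penalize pairwise correlations between bottleneck features.
    z: (batch, features) activations of the bottleneck layer."""
    z = z - z.mean(dim=0, keepdim=True)         # center each feature
    z = z / (z.std(dim=0, keepdim=True) + eps)  # scale to unit variance
    c = (z.T @ z) / z.shape[0]                  # feature correlation matrix
    off_diag = c - torch.diag(torch.diag(c))    # drop self-correlations
    return (off_diag ** 2).sum() / z.shape[1]
```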