Deep Attentive Generative Adversarial Network for Photo-Realistic Image De-Quantization
- URL: http://arxiv.org/abs/2004.03150v1
- Date: Tue, 7 Apr 2020 06:45:01 GMT
- Title: Deep Attentive Generative Adversarial Network for Photo-Realistic Image De-Quantization
- Authors: Yang Zhang, Changhui Hu, and Xiaobo Lu
- Abstract summary: De-quantization can improve the visual quality of a low bit-depth image displayed on a high bit-depth screen.
This paper proposes the DAGAN algorithm, which performs super-resolution on image intensity resolution.
The DenseResAtt module consists of dense residual blocks equipped with a self-attention mechanism.
- Score: 25.805568996596783
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most current display devices support eight-bit or higher bit-depth. However,
the images generated by most multimedia tools do not reach this bit-depth
standard. De-quantization can improve the visual quality of a low bit-depth
image displayed on a high bit-depth screen. This paper proposes the DAGAN
algorithm, which performs super-resolution on image intensity resolution, a
dimension orthogonal to spatial resolution, realizing photo-realistic
de-quantization through end-to-end learning. To our knowledge, this is the
first attempt to apply the Generative Adversarial Network (GAN) framework to
image de-quantization. Specifically, we propose the Dense Residual
Self-attention (DenseResAtt) module, which consists of dense residual blocks
equipped with a self-attention mechanism, to focus attention on high-frequency
information. Moreover, connecting sequential DenseResAtt modules in series
forms a deep attentive network with superior discriminative learning ability
for image de-quantization, modeling representative feature maps to recover as
much useful information as possible. In addition, because the adversarial
learning framework can reliably produce high-quality natural images, a
specified content loss together with the adversarial loss is back-propagated
to optimize the training of the model. As a result, DAGAN is able to generate
photo-realistic high bit-depth images without banding artifacts. Experimental
results on several public benchmarks show that the DAGAN algorithm achieves
excellent visual quality and satisfactory quantitative performance.
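To make the problem concrete, the following minimal NumPy sketch (not the authors' code) shows why naive de-quantization leaves banding artifacts: linearly rescaling low bit-depth codes to a higher bit-depth cannot restore the discarded low-order information, which is exactly the detail DAGAN learns to hallucinate. The function names and the 4-bit/8-bit setting are illustrative assumptions, not from the paper.

```python
import numpy as np

def quantize(img8, bits):
    """Reduce an 8-bit image (uint8 array) to `bits` bit-depth by dropping low-order bits."""
    shift = 8 - bits
    return (img8 >> shift) << shift

def naive_dequantize(img_low, low_bits, high_bits=8):
    """Naive de-quantization baseline: linearly rescale the low bit-depth codes.

    This cannot recover the lost low-order information, so smooth gradients
    remain visibly banded -- the artifact a learned model like DAGAN targets.
    """
    low_max = (1 << low_bits) - 1
    high_max = (1 << high_bits) - 1
    codes = img_low >> (8 - low_bits)  # recover the low-bit quantization codes
    return (codes.astype(np.float64) * high_max / low_max).round().astype(np.uint8)

# A smooth horizontal ramp: every 8-bit level 0..255 appears once.
ramp = np.arange(256, dtype=np.uint8).reshape(1, 256)

low = quantize(ramp, 4)            # 4-bit version: only 16 distinct levels
restored = naive_dequantize(low, 4)

print(len(np.unique(ramp)))        # 256 distinct levels in the original
print(len(np.unique(low)))         # 16 levels -> visible banding
print(len(np.unique(restored)))    # still 16: rescaling restores range, not detail
```

The restored image spans the full 0-255 range again, but it still contains only 16 distinct levels; a learned de-quantizer instead predicts plausible intermediate values to smooth the bands.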
Related papers
- Large-Scale Data-Free Knowledge Distillation for ImageNet via Multi-Resolution Data Generation [53.95204595640208]
Data-Free Knowledge Distillation (DFKD) is an advanced technique that enables knowledge transfer from a teacher model to a student model without relying on original training data.
Previous approaches have generated synthetic images at high resolutions without leveraging information from real images.
MUSE generates images at lower resolutions while using Class Activation Maps (CAMs) to ensure that the generated images retain critical, class-specific features.
arXiv Detail & Related papers (2024-11-26T02:23:31Z) - High-Precision Dichotomous Image Segmentation via Probing Diffusion Capacity [69.32473738284374]
We propose DiffDIS, a diffusion-driven segmentation model that taps into the potential of the pre-trained U-Net within diffusion models.
By leveraging the robust generalization capabilities and the rich, versatile image representation prior of the SD models, we significantly reduce inference time while preserving high-fidelity, detailed generation.
Experiments on the DIS5K dataset demonstrate the superiority of DiffDIS, achieving state-of-the-art results through a streamlined inference process.
arXiv Detail & Related papers (2024-10-14T02:49:23Z) - Research on Image Super-Resolution Reconstruction Mechanism based on Convolutional Neural Network [8.739451985459638]
Super-resolution algorithms transform one or more sets of low-resolution images captured from the same scene into high-resolution images.
The extraction of image features and nonlinear mapping methods in the reconstruction process remain challenging for existing algorithms.
The objective is to recover high-quality, high-resolution images from low-resolution images.
arXiv Detail & Related papers (2024-07-18T06:50:39Z) - Rank-Enhanced Low-Dimensional Convolution Set for Hyperspectral Image Denoising [50.039949798156826]
This paper tackles the challenging problem of hyperspectral (HS) image denoising.
We propose the rank-enhanced low-dimensional convolution set (Re-ConvSet).
We then incorporate Re-ConvSet into the widely-used U-Net architecture to construct an HS image denoising method.
arXiv Detail & Related papers (2022-07-09T13:35:12Z) - Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper pursues the holistic goal of maintaining spatially-precise high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z) - High-Frequency aware Perceptual Image Enhancement [0.08460698440162888]
We introduce a novel deep neural network suitable for multi-scale analysis and propose efficient model-agnostic methods.
Our model can be applied to multi-scale image enhancement problems including denoising, deblurring and single image super-resolution.
arXiv Detail & Related papers (2021-05-25T07:33:14Z) - Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z) - Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.