High-Frequency aware Perceptual Image Enhancement
- URL: http://arxiv.org/abs/2105.11711v1
- Date: Tue, 25 May 2021 07:33:14 GMT
- Title: High-Frequency aware Perceptual Image Enhancement
- Authors: Hyungmin Roh and Myungjoo Kang
- Abstract summary: We introduce a novel deep neural network suitable for multi-scale analysis and propose efficient model-agnostic methods.
Our model can be applied to multi-scale image enhancement problems including denoising, deblurring and single image super-resolution.
- Score: 0.08460698440162888
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we introduce a novel deep neural network suitable for
multi-scale analysis and propose efficient model-agnostic methods that help the
network extract information from high-frequency domains to reconstruct clearer
images. Our model can be applied to multi-scale image enhancement problems
including denoising, deblurring and single image super-resolution. Experiments
on SIDD, Flickr2K, DIV2K, and REDS datasets show that our method achieves
state-of-the-art performance on each task. Furthermore, we show that our model
can overcome the over-smoothing problem commonly observed in existing
PSNR-oriented methods and generate more natural high-resolution images by
applying adversarial training.
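The abstract does not spell out how high-frequency information is extracted, but a common model-agnostic way to emphasize high-frequency content is to take the residual between an image and a Gaussian-blurred copy of itself. The sketch below illustrates that idea only; the function names, kernel size, and sigma are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Separable 1-D Gaussian kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    k = np.exp(-0.5 * (ax / sigma) ** 2)
    return k / k.sum()

def high_frequency(image, size=5, sigma=1.0):
    """Return the high-frequency residual: image minus its Gaussian blur.

    `image` is a 2-D float array; the blur is applied separably along
    rows and then columns, with edge padding to preserve the shape.
    """
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    # Separable convolution: filter rows first, then columns.
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)
    return image - blurred
```

A residual like this is zero on flat regions and large near edges and texture, which is exactly the detail that PSNR-oriented training tends to over-smooth.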
Related papers
- Diffusion Model Based Visual Compensation Guidance and Visual Difference Analysis for No-Reference Image Quality Assessment [82.13830107682232]
We propose leveraging a novel class of state-of-the-art (SOTA) generative models, which exhibit the capability to model intricate relationships.
We devise a new diffusion restoration network that leverages the produced enhanced image and noise-containing images.
Two visual evaluation branches are designed to comprehensively analyze the obtained high-level feature information.
arXiv Detail & Related papers (2024-02-22T09:39:46Z)
- ACDMSR: Accelerated Conditional Diffusion Models for Single Image Super-Resolution [84.73658185158222]
We propose a diffusion model-based super-resolution method called ACDMSR.
Our method adapts the standard diffusion model to perform super-resolution through a deterministic iterative denoising process.
Our approach generates more visually realistic counterparts for low-resolution images, emphasizing its effectiveness in practical scenarios.
arXiv Detail & Related papers (2023-07-03T06:49:04Z)
- RBSR: Efficient and Flexible Recurrent Network for Burst Super-Resolution [57.98314517861539]
Burst super-resolution (BurstSR) aims at reconstructing a high-resolution (HR) image from a sequence of low-resolution (LR) and noisy images.
In this paper, we suggest fusing cues frame-by-frame with an efficient and flexible recurrent network.
arXiv Detail & Related papers (2023-06-30T12:14:13Z)
- Cross-resolution Face Recognition via Identity-Preserving Network and Knowledge Distillation [12.090322373964124]
Cross-resolution face recognition is a challenging problem for modern deep face recognition systems.
This paper proposes a new approach that enforces the network to focus on the discriminative information stored in the low-frequency components of a low-resolution image.
arXiv Detail & Related papers (2023-03-15T14:52:46Z)
- Rank-Enhanced Low-Dimensional Convolution Set for Hyperspectral Image Denoising [50.039949798156826]
This paper tackles the challenging problem of hyperspectral (HS) image denoising.
We propose a rank-enhanced low-dimensional convolution set (Re-ConvSet).
We then incorporate Re-ConvSet into the widely-used U-Net architecture to construct an HS image denoising method.
arXiv Detail & Related papers (2022-07-09T13:35:12Z)
- Single Image Internal Distribution Measurement Using Non-Local Variational Autoencoder [11.985083962982909]
This paper proposes a novel image-specific solution, namely the non-local variational autoencoder (NLVAE).
NLVAE is introduced as a self-supervised strategy that reconstructs high-resolution images using disentangled information from the non-local neighbourhood.
Experimental results on seven benchmark datasets demonstrate the effectiveness of the NLVAE model.
arXiv Detail & Related papers (2022-04-02T18:43:55Z)
- Interpretable Deep Multimodal Image Super-Resolution [23.48305854574444]
Multimodal image super-resolution (SR) is the reconstruction of a high-resolution image given a low-resolution observation with the aid of another image modality.
We present a multimodal deep network design that integrates coupled sparse priors and allows the effective fusion of information from another modality into the reconstruction process.
arXiv Detail & Related papers (2020-09-07T14:08:35Z)
- Deep Iterative Residual Convolutional Network for Single Image Super-Resolution [31.934084942626257]
We propose a deep Iterative Super-Resolution Residual Convolutional Network (ISRResCNet).
It exploits the powerful image regularization and large-scale optimization techniques by training the deep network in an iterative manner with a residual learning approach.
Our method, with only a few trainable parameters, improves results for different scaling factors in comparison with state-of-the-art methods.
arXiv Detail & Related papers (2020-09-07T12:54:14Z)
- Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline [100.5353614588565]
We propose to incorporate the domain knowledge of the LDR image formation pipeline into our model.
We model the HDR-to-LDR image formation pipeline as (1) dynamic range clipping, (2) a non-linear mapping from a camera response function, and (3) quantization.
We demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms.
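The three-stage formation pipeline named in the summary can be sketched as a forward simulation; the gamma curve below stands in for the camera response function (real CRFs are calibrated per camera), and the `exposure` parameter and function name are illustrative assumptions.

```python
import numpy as np

def hdr_to_ldr(hdr, exposure=1.0, gamma=2.2, levels=256):
    """Simulate the HDR-to-LDR formation pipeline in three stages:
    (1) dynamic range clipping, (2) a non-linear camera response
    (approximated here by a gamma curve), (3) quantization.
    """
    # (1) Scale by exposure and clip the dynamic range to [0, 1].
    clipped = np.clip(hdr * exposure, 0.0, 1.0)
    # (2) Non-linear camera response function (gamma is a stand-in
    # for a learned or calibrated CRF).
    responded = clipped ** (1.0 / gamma)
    # (3) Quantize to `levels` discrete values (8-bit by default).
    return np.round(responded * (levels - 1)) / (levels - 1)
```

Reversing these three stages in order (dequantization, inverse CRF, dynamic-range expansion) is what single-image HDR reconstruction must learn to do.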
arXiv Detail & Related papers (2020-04-02T17:59:04Z)
- Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
- Multimodal Deep Unfolding for Guided Image Super-Resolution [23.48305854574444]
Deep learning methods rely on training data to learn an end-to-end mapping from a low-resolution input to a high-resolution output.
We propose a multimodal deep learning design that incorporates sparse priors and allows the effective integration of information from another image modality into the network architecture.
Our solution relies on a novel deep unfolding operator, performing steps similar to an iterative algorithm for convolutional sparse coding with side information.
arXiv Detail & Related papers (2020-01-21T14:41:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.