Self-Adaptively Learning to Demoire from Focused and Defocused Image Pairs
- URL: http://arxiv.org/abs/2011.02055v2
- Date: Thu, 5 Nov 2020 10:19:40 GMT
- Title: Self-Adaptively Learning to Demoire from Focused and Defocused Image Pairs
- Authors: Lin Liu, Shanxin Yuan, Jianzhuang Liu, Liping Bao, Gregory Slabaugh,
Qi Tian
- Abstract summary: Moire artifacts are common in digital photography, resulting from the interference between high-frequency scene content and the color filter array of the camera.
Existing deep learning-based demoireing methods trained on large-scale datasets are limited in handling various complex moire patterns.
We propose a self-adaptive learning method for demoireing a high-frequency image, with the help of an additional defocused moire-free blur image.
- Score: 97.67638106818613
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Moire artifacts are common in digital photography, resulting from the
interference between high-frequency scene content and the color filter array of
the camera. Existing deep learning-based demoireing methods trained on large
scale datasets are limited in handling various complex moire patterns, and
mainly focus on demoireing of photos taken of digital displays. Moreover,
obtaining moire-free ground-truth in natural scenes is difficult but needed for
training. In this paper, we propose a self-adaptive learning method for
demoireing a high-frequency image, with the help of an additional defocused
moire-free blur image. Given an image degraded with moire artifacts and a
moire-free blur image, our network predicts a moire-free clean image and a blur
kernel with a self-adaptive strategy that does not require an explicit training
stage, instead performing test-time adaptation. Our model has two sub-networks
and works iteratively. During each iteration, one sub-network takes the moire
image as input, removing moire patterns and restoring image details, and the
other sub-network estimates the blur kernel from the blur image. The two
sub-networks are jointly optimized. Extensive experiments demonstrate that our
method outperforms state-of-the-art methods and can produce high-quality
demoired results. It can generalize well to the task of removing moire
artifacts caused by display screens. In addition, we build a new moire dataset,
including images with screen and texture moire artifacts. As far as we know,
this is the first dataset with real texture moire patterns.
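The joint optimization described in the abstract — one estimate of the moire-free image, one estimate of the blur kernel, both refined together at test time under a reblur-consistency objective — can be sketched in a toy 1-D form. Everything below is illustrative, not the paper's method: the two sub-networks are replaced by directly optimized variables (a clean signal `x` and a 3-tap kernel `k`), the observations are synthetic, and only the reblur-consistency loss is used.

```python
import numpy as np

def circ_conv(x, k):
    # Circular "same" convolution with a 3-tap kernel centered at index 1:
    # out[i] = sum_j k[j] * x[i - j + 1]
    return sum(kj * np.roll(x, j - 1) for j, kj in enumerate(k))

n = 128
t = np.arange(n)
x_true = np.sin(2 * np.pi * t / 32)                  # smooth scene content
k_true = np.array([0.25, 0.5, 0.25])                 # "defocus" blur kernel
y_blur = circ_conv(x_true, k_true)                   # moire-free but blurred view
y_moire = x_true + 0.3 * np.sin(2 * np.pi * t / 4)   # high-frequency moire pattern

def loss(x, k):
    # Reblur consistency: blurring the clean estimate with the estimated
    # kernel should reproduce the defocused observation.
    return float(np.sum((circ_conv(x, k) - y_blur) ** 2))

# Test-time adaptation: no pre-training, just joint gradient descent
# on this one image pair.
x = y_moire.copy()           # init the clean estimate with the moire image
k = np.full(3, 1.0 / 3.0)    # init the kernel as a uniform blur
loss0 = loss(x, k)
for _ in range(3000):
    r = circ_conv(x, k) - y_blur                     # reblur residual
    # Analytic gradients of the sum-of-squares reblur loss:
    gx = 2 * sum(kj * np.roll(r, -(j - 1)) for j, kj in enumerate(k))
    gk = 2 * np.array([np.dot(r, np.roll(x, j - 1)) for j in range(3)])
    x -= 0.4 * gx
    k -= 0.001 * gk
    k = np.clip(k, 0.0, None)                        # project k back to a valid
    k /= k.sum()                                     # (non-negative, unit-sum) kernel
```

Because the defocused view contains no moire, any moire left in `x` shows up in the reblur residual and is driven down, while the low-frequency scene content that both observations share is preserved — the same intuition the abstract's two-branch iteration relies on.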
Related papers
- Learning Image Demoireing from Unpaired Real Data [55.273845966244714]
This paper focuses on addressing the issue of image demoireing.
We attempt to learn a demoireing model from unpaired real data, i.e., moire images associated with irrelevant clean images.
We introduce an adaptive denoise method to eliminate the low-quality pseudo moire images that adversely impact the learning of demoireing models.
arXiv Detail & Related papers (2024-01-05T09:26:35Z)
- Learning Subject-Aware Cropping by Outpainting Professional Photos [69.0772948657867]
We propose a weakly-supervised approach to learn what makes a high-quality subject-aware crop from professional stock images.
Our insight is to combine a library of stock images with a modern, pre-trained text-to-image diffusion model.
We are able to automatically generate a large dataset of cropped-uncropped training pairs to train a cropping model.
arXiv Detail & Related papers (2023-12-19T11:57:54Z)
- Saliency Guided Contrastive Learning on Scene Images [71.07412958621052]
We leverage the saliency map derived from the model's output during learning to highlight discriminative regions and guide the whole contrastive learning.
Our method significantly improves the performance of self-supervised learning on scene images, by +1.1 Top-1 accuracy in ImageNet linear evaluation, and by +4.3 and +2.2 in semi-supervised learning with 1% and 10% of ImageNet labels, respectively.
arXiv Detail & Related papers (2023-02-22T15:54:07Z)
- Unsupervised Learning of Monocular Depth and Ego-Motion Using Multiple Masks [14.82498499423046]
A new unsupervised learning method of depth and ego-motion using multiple masks from monocular video is proposed in this paper.
The depth estimation network and the ego-motion estimation network are trained according to the constraints of depth and ego-motion without truth values.
The experiments on KITTI dataset show our method achieves good performance in terms of depth and ego-motion.
arXiv Detail & Related papers (2021-04-01T12:29:23Z)
- Unsupervised Single-Image Reflection Separation Using Perceptual Deep Image Priors [6.333390830515411]
We propose a novel unsupervised framework for single-image reflection separation.
We optimize the parameters of two cross-coupled deep convolutional networks on a target image to generate two exclusive background and reflection layers.
Our results show that our method significantly outperforms the closest unsupervised method in the literature for removing reflections from single images.
arXiv Detail & Related papers (2020-09-01T21:08:30Z)
- Wavelet-Based Dual-Branch Network for Image Demoireing [148.91145614517015]
We design a wavelet-based dual-branch network (WDNet) with a spatial attention mechanism for image demoireing.
Our network removes moire patterns in the wavelet domain to separate the frequencies of moire patterns from the image content.
Experiments demonstrate the effectiveness of our method, and we further show that WDNet generalizes to removing moire artifacts on non-screen images.
arXiv Detail & Related papers (2020-07-14T16:44:30Z)
- Self-Supervised Linear Motion Deblurring [112.75317069916579]
Deep convolutional neural networks are state-of-the-art for image deblurring.
We present a differentiable reblur model for self-supervised motion deblurring.
Our experiments demonstrate that self-supervised single-image deblurring is feasible.
arXiv Detail & Related papers (2020-02-10T20:15:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.