Self-Supervised Linear Motion Deblurring
- URL: http://arxiv.org/abs/2002.04070v1
- Date: Mon, 10 Feb 2020 20:15:21 GMT
- Title: Self-Supervised Linear Motion Deblurring
- Authors: Peidong Liu, Joel Janai, Marc Pollefeys, Torsten Sattler and Andreas
Geiger
- Abstract summary: Deep convolutional neural networks are state-of-the-art for image deblurring.
We present a differentiable reblur model for self-supervised motion deblurring.
Our experiments demonstrate that self-supervised single-image deblurring is feasible.
- Score: 112.75317069916579
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Motion blurry images challenge many computer vision algorithms, e.g., feature
detection, motion estimation, or object recognition. Deep convolutional neural
networks are state-of-the-art for image deblurring. However, obtaining training
data with corresponding sharp and blurry image pairs can be difficult. In this
paper, we present a differentiable reblur model for self-supervised motion
deblurring, which enables the network to learn from real-world blurry image
sequences without relying on sharp images for supervision. Our key insight is
that motion cues obtained from consecutive images yield sufficient information
to inform the deblurring task. We therefore formulate deblurring as an inverse
rendering problem, taking into account the physical image formation process: we
first predict two deblurred images from which we estimate the corresponding
optical flow. Using these predictions, we re-render the blurred images and
minimize the difference with respect to the original blurry inputs. We use both
synthetic and real datasets for experimental evaluation. Our experiments
demonstrate that self-supervised single image deblurring is really feasible and
leads to visually compelling results.
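The reblur objective described above can be summarized in a short training-loss sketch. The PyTorch-style code below is a minimal illustration under simplifying assumptions (a linear, symmetric blur trajectory, an L1 photometric loss, and placeholder deblur_net/flow_net modules); it is not the authors' released implementation.

```python
# Minimal sketch of the self-supervised reblur loss: deblur two consecutive
# blurry frames, estimate flow between the predictions, re-render the blur,
# and compare against the observed blurry inputs. Architectures, the number
# of warping steps, and the loss choice are illustrative assumptions.
import torch
import torch.nn.functional as F

def warp(img, flow):
    """Backward-warp img (B, C, H, W) with a dense flow field (B, 2, H, W)."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(img.device)   # (2, H, W), (x, y)
    coords = grid.unsqueeze(0) + flow                            # sample locations
    # normalize coordinates to [-1, 1] for grid_sample
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid_norm = torch.stack((coords_x, coords_y), dim=-1)        # (B, H, W, 2)
    return F.grid_sample(img, grid_norm, align_corners=True)

def reblur(sharp, flow, n_steps=9):
    """Approximate a blurry image by averaging the sharp prediction warped
    along fractions of the estimated flow (linear-motion assumption)."""
    acc = torch.zeros_like(sharp)
    for i in range(n_steps):
        t = i / (n_steps - 1) - 0.5        # exposure samples in [-0.5, 0.5]
        acc = acc + warp(sharp, t * flow)
    return acc / n_steps

def self_supervised_loss(deblur_net, flow_net, blurry1, blurry2):
    sharp1, sharp2 = deblur_net(blurry1), deblur_net(blurry2)    # predicted sharp frames
    flow12 = flow_net(sharp1, sharp2)                            # flow between predictions
    flow21 = flow_net(sharp2, sharp1)
    reblur1, reblur2 = reblur(sharp1, flow12), reblur(sharp2, flow21)
    # photometric consistency with the original blurry observations
    return F.l1_loss(reblur1, blurry1) + F.l1_loss(reblur2, blurry2)
```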
Related papers
- Learning Robust Multi-Scale Representation for Neural Radiance Fields
from Unposed Images [65.41966114373373]
We present an improved solution to the neural image-based rendering problem in computer vision.
The proposed approach can synthesize a realistic image of the scene from a novel viewpoint at test time.
arXiv Detail & Related papers (2023-11-08T08:18:23Z)
- Blind Motion Deblurring with Pixel-Wise Kernel Estimation via Kernel Prediction Networks [0.0]
We propose a learning-based motion deblurring method based on dense non-uniform motion blur estimation.
We train the networks on sharp/blurry pairs synthesized according to a convolution-based, non-uniform motion blur degradation model.
arXiv Detail & Related papers (2023-08-05T20:23:13Z)
- Rethinking Motion Deblurring Training: A Segmentation-Based Method for Simulating Non-Uniform Motion Blurred Images [0.0]
We propose an efficient procedural methodology to generate sharp/blurred image pairs.
This allows generating virtually unlimited realistic and diverse training pairs.
We observed superior generalization performance for the ultimate task of deblurring real motion-blurred photos.
arXiv Detail & Related papers (2022-09-26T13:20:35Z)
- Neural Maximum A Posteriori Estimation on Unpaired Data for Motion Deblurring [87.97330195531029]
We propose a Neural Maximum A Posteriori (NeurMAP) estimation framework for training neural networks to recover blind motion information and sharp content from unpaired data.
The proposed NeurMAP can be combined with existing deblurring neural networks and is the first framework that enables training image deblurring networks on unpaired datasets.
arXiv Detail & Related papers (2022-04-26T08:09:47Z)
- Mix-up Self-Supervised Learning for Contrast-agnostic Applications [33.807005669824136]
We present the first mix-up self-supervised learning framework for contrast-agnostic applications.
We address the low variance across images via cross-domain mix-up and build the pretext task on image reconstruction and transparency prediction.
arXiv Detail & Related papers (2022-04-02T16:58:36Z)
- Unsupervised Learning of Monocular Depth and Ego-Motion Using Multiple Masks [14.82498499423046]
This paper proposes a new unsupervised method for learning depth and ego-motion from monocular video using multiple masks.
The depth estimation network and the ego-motion estimation network are trained using depth and ego-motion constraints, without ground-truth values.
Experiments on the KITTI dataset show that the method achieves good performance in depth and ego-motion estimation.
arXiv Detail & Related papers (2021-04-01T12:29:23Z)
- Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image can be of practical interest for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z)
- SIR: Self-supervised Image Rectification via Seeing the Same Scene from Multiple Different Lenses [82.56853587380168]
We propose a novel self-supervised image rectification (SIR) method based on the insight that the rectified results of distorted images of the same scene taken with different lenses should be identical.
We leverage a differentiable warping module to generate the rectified images and re-distorted images from the distortion parameters.
Our method achieves comparable or even better performance than the supervised baseline method and representative state-of-the-art methods.
arXiv Detail & Related papers (2020-11-30T08:23:25Z)
- Deblurring by Realistic Blurring [110.54173799114785]
We propose a new method that combines two GAN models: a learning-to-blur GAN (BGAN) and a learning-to-deblur GAN (DBGAN).
The first model, BGAN, learns how to blur sharp images with unpaired sharp and blurry image sets, and then guides the second model, DBGAN, to learn how to correctly deblur such images.
As an additional contribution, this paper also introduces a Real-World Blurred Image (RWBI) dataset including diverse blurry images.
arXiv Detail & Related papers (2020-04-04T05:25:15Z)
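As a contrast to the self-supervised reblur objective above, the BGAN/DBGAN entry describes a two-stage scheme trained from unpaired sharp and blurry sets. The sketch below is one plausible reading under stated assumptions (placeholder generator and discriminator modules, an assumed loss weighting, discriminator updates omitted for brevity); it is not the paper's actual training code.

```python
# Hedged sketch of the two-GAN idea: BGAN learns to synthesize realistic blur
# from sharp images (adversarially, against unpaired real blurry images), and
# the resulting (sharp, synthetic-blurry) pairs then supervise DBGAN.
import torch
import torch.nn.functional as F

def generator_step(bgan_g, bgan_d, dbgan_g, dbgan_d, sharp, opt_bgan, opt_dbgan):
    # BGAN generator: make blurred versions of sharp images look like real blurry ones.
    fake_blurry = bgan_g(sharp)
    d_fake = bgan_d(fake_blurry)
    loss_bgan = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    opt_bgan.zero_grad()
    loss_bgan.backward()
    opt_bgan.step()

    # DBGAN generator: deblur BGAN's output back to the known sharp image,
    # plus an adversarial term to keep restorations realistic.
    restored = dbgan_g(fake_blurry.detach())
    d_restored = dbgan_d(restored)
    loss_content = F.l1_loss(restored, sharp)
    loss_adv = F.binary_cross_entropy_with_logits(d_restored, torch.ones_like(d_restored))
    loss_dbgan = loss_content + 0.01 * loss_adv   # weighting is an assumption
    opt_dbgan.zero_grad()
    loss_dbgan.backward()
    opt_dbgan.step()
    return loss_bgan.item(), loss_dbgan.item()
```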
This list is automatically generated from the titles and abstracts of the papers on this site.