Motion Deblurring using Spatiotemporal Phase Aperture Coding
- URL: http://arxiv.org/abs/2002.07483v1
- Date: Tue, 18 Feb 2020 10:46:14 GMT
- Title: Motion Deblurring using Spatiotemporal Phase Aperture Coding
- Authors: Shay Elmalem, Raja Giryes and Emanuel Marom
- Abstract summary: We propose a computational imaging approach for motion deblurring.
The trajectory of the motion is encoded in an intermediate optical image.
The color cues serve as prior information for a blind deblurring process.
- Score: 34.76550131783525
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Motion blur is a known issue in photography, as it limits the exposure time
while capturing moving objects. Extensive research has been carried out to
compensate for it. In this work, a computational imaging approach for motion
deblurring is proposed and demonstrated. Using dynamic phase-coding in the lens
aperture during the image acquisition, the trajectory of the motion is encoded
in an intermediate optical image. This encoding embeds both the motion
direction and extent by coloring the spatial blur of each object. The color
cues serve as prior information for a blind deblurring process, implemented
using a convolutional neural network (CNN) trained to utilize such coding for
image restoration. We demonstrate the advantage of the proposed approach over
blind deblurring with no coding and other solutions that use coded acquisition,
in both simulation and real-world experiments.
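To illustrate the encoding idea described above, the following is a minimal simulation sketch (not the authors' implementation): the dynamic phase code is approximated by hypothetical time-varying per-channel exposure weights, so that the blur trail of a moving object is coloured according to when each part of the trail was exposed, embedding both the motion direction and extent. The image, trajectory, and weighting below are illustrative assumptions only.
```python
# Minimal simulation sketch, NOT the authors' implementation.
# Assumption: the dynamic phase code is approximated by time-varying per-channel
# exposure weights (red early, green mid, blue late), so the blur trail of a
# moving object is coloured according to when each part of it was exposed.
import numpy as np


def color_coded_blur(sharp_rgb, trajectory, n_steps=32):
    """Accumulate an exposure whose colour weighting varies along the motion path.

    sharp_rgb  : float array (H, W, 3), latent sharp image (hypothetical input)
    trajectory : callable t -> (dy, dx) pixel shift at normalised time t in [0, 1]
    """
    coded = np.zeros_like(sharp_rgb)
    for i in range(n_steps):
        t = i / (n_steps - 1)
        dy, dx = trajectory(t)
        # Integer pixel shift stands in for the object's motion during this sub-exposure.
        shifted = np.roll(sharp_rgb, (int(round(dy)), int(round(dx))), axis=(0, 1))
        # Hypothetical chromatic weighting of the sub-exposure: early -> red,
        # middle -> green, late -> blue, which colours the blur by time.
        weights = np.array([(1.0 - t) ** 2, 4.0 * t * (1.0 - t), t ** 2])
        weights /= weights.sum()
        coded += shifted * weights[None, None, :]
    return coded / n_steps


# Example: an object moving 20 pixels to the right during the exposure.
img = np.random.rand(64, 64, 3)                      # stand-in for a sharp frame
blurred = color_coded_blur(img, lambda t: (0.0, 20.0 * t))
print(blurred.shape)                                 # (64, 64, 3)
```
A restoration CNN such as the one described in the abstract would then be trained on pairs of such coded-blurred and sharp images; that part is omitted here.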
Related papers
- Motion-adaptive Separable Collaborative Filters for Blind Motion Deblurring [71.60457491155451]
Eliminating image blur produced by various kinds of motion has been a challenging problem.
We propose a novel real-world deblurring filtering model called the Motion-adaptive Separable Collaborative Filter.
Our method provides an effective solution for real-world motion blur removal and achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-04-19T19:44:24Z)
- Treating Motion as Option with Output Selection for Unsupervised Video Object Segmentation [17.71871884366252]
Video object segmentation (VOS) aims to detect the most salient object in a video without external guidance about the object.
Recent methods collaboratively use motion cues extracted from optical flow maps with appearance cues extracted from RGB images.
We propose a novel motion-as-option network by treating motion cues as optional.
arXiv Detail & Related papers (2023-09-26T09:34:13Z)
- Deep Dynamic Scene Deblurring from Optical Flow [53.625999196063574]
Deblurring can provide visually more pleasant pictures and make photography more convenient.
It is difficult to model the non-uniform blur mathematically.
We develop a convolutional neural network (CNN) to restore the sharp images from the deblurred features.
arXiv Detail & Related papers (2023-01-18T06:37:21Z)
- A Constrained Deformable Convolutional Network for Efficient Single Image Dynamic Scene Blind Deblurring with Spatially-Variant Motion Blur Kernels Estimation [12.744989551644744]
We propose a novel constrained deformable convolutional network (CDCN) for efficient single image dynamic scene blind deblurring.
CDCN simultaneously achieves accurate spatially-variant motion blur kernel estimation and high-quality image restoration.
arXiv Detail & Related papers (2022-08-23T03:28:21Z)
- Efficient Video Deblurring Guided by Motion Magnitude [37.25713728458234]
We propose a novel framework that utilizes the motion magnitude prior (MMP) as guidance for efficient deep video deblurring.
The MMP consists of both spatial and temporal blur level information, which can be further integrated into an efficient recurrent neural network (RNN) for video deblurring.
arXiv Detail & Related papers (2022-07-27T08:57:48Z)
- Implicit Motion Handling for Video Camouflaged Object Detection [60.98467179649398]
We propose a new video camouflaged object detection (VCOD) framework.
It can exploit both short-term and long-term temporal consistency to detect camouflaged objects from video frames.
arXiv Detail & Related papers (2022-03-14T17:55:41Z)
- Video Reconstruction from a Single Motion Blurred Image using Learned Dynamic Phase Coding [34.76550131783525]
We propose a hybrid optical-digital method for video reconstruction using a single motion-blurred image.
We use a learned dynamic phase-coding in the lens aperture during the image acquisition to encode the motion trajectories.
The proposed computational camera generates a sharp frame burst of the scene at various frame rates from a single coded motion-blurred image.
arXiv Detail & Related papers (2021-12-28T02:06:44Z)
- Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image can be of practical interest for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z)
- Generative Modelling of BRDF Textures from Flash Images [50.660026124025265]
We learn a latent space for easy capture, semantic editing, and consistent, efficient reproduction of visual material appearance.
In a second step, conditioned on the material code, our method produces an infinite and diverse spatial field of BRDF model parameters.
arXiv Detail & Related papers (2021-02-23T18:45:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.