Video Reconstruction from a Single Motion Blurred Image using Learned
Dynamic Phase Coding
- URL: http://arxiv.org/abs/2112.14768v1
- Date: Tue, 28 Dec 2021 02:06:44 GMT
- Title: Video Reconstruction from a Single Motion Blurred Image using Learned
Dynamic Phase Coding
- Authors: Erez Yosef, Shay Elmalem, Raja Giryes
- Abstract summary: We propose a hybrid optical-digital method for video reconstruction using a single motion-blurred image.
We use a learned dynamic phase-coding in the lens aperture during the image acquisition to encode the motion trajectories.
The proposed computational camera generates a sharp frame burst of the scene at various frame rates from a single coded motion-blurred image.
- Score: 34.76550131783525
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Video reconstruction from a single motion-blurred image is a challenging
problem, which can enhance existing cameras' capabilities. Recently, several
works addressed this task using conventional imaging and deep learning. Yet,
such purely digital methods are inherently limited by direction ambiguity
and noise sensitivity. Some works proposed to address these limitations using
non-conventional image sensors; however, such sensors are extremely rare and
expensive. To circumvent these limitations with simpler means, we propose a
hybrid optical-digital method for video reconstruction that requires only
simple modifications to existing optical systems. We use a learned dynamic
phase-coding in the lens aperture during the image acquisition to encode the
motion trajectories, which serve as prior information for the video
reconstruction process. The proposed computational camera generates a sharp
frame burst of the scene at various frame rates from a single coded
motion-blurred image, using an image-to-video convolutional neural network. We
present advantages and improved performance compared to existing methods, using
both simulations and a real-world camera prototype.
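To make the image-to-video step concrete, the following is a minimal PyTorch sketch, not the authors' actual architecture: a hypothetical encoder-decoder receives the coded blurred image together with a broadcast timestamp channel and predicts the sharp frame at that relative time, so sweeping the timestamp over a grid yields a burst at any desired frame rate. All names and layer sizes here are assumptions.

import torch
import torch.nn as nn

class CodedImageToFrame(nn.Module):
    """Hypothetical sketch: predict one sharp frame at relative time t
    from a single phase-coded motion-blurred image (3-channel RGB)."""
    def __init__(self, ch=32):
        super().__init__()
        # +1 input channel for the broadcast timestamp t in [0, 1]
        self.encoder = nn.Sequential(
            nn.Conv2d(4, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, coded_img, t):
        # coded_img: (B, 3, H, W); t: (B,) relative times in [0, 1]
        b, _, h, w = coded_img.shape
        t_map = t.view(b, 1, 1, 1).expand(b, 1, h, w)
        x = torch.cat([coded_img, t_map], dim=1)
        return self.decoder(self.encoder(x))

model = CodedImageToFrame()
coded = torch.randn(1, 3, 64, 64)          # stand-in for a coded capture
times = torch.linspace(0.0, 1.0, steps=7)  # 7-frame burst; any rate works
burst = [model(coded, t.unsqueeze(0)) for t in times]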
Related papers
- GANESH: Generalizable NeRF for Lensless Imaging [12.985055542373791]
We introduce GANESH, a novel framework designed to enable simultaneous refinement and novel view synthesis from lensless images.
Unlike existing methods that require scene-specific training, our approach supports on-the-fly inference without retraining on each scene.
To facilitate research in this area, we also present the first multi-view lensless dataset, LenslessScenes.
arXiv Detail & Related papers (2024-11-07T15:47:07Z)
- Lightweight High-Speed Photography Built on Coded Exposure and Implicit Neural Representation of Videos [34.152901518593396]
The demand for compact cameras capable of recording high-speed scenes with high resolution is steadily increasing.
However, achieving such capabilities often entails high bandwidth requirements, resulting in bulky, heavy systems unsuitable for low-capacity platforms.
We propose a novel approach to address these challenges by combining the classical coded exposure imaging technique with the emerging implicit neural representation for videos.
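A minimal sketch of the combination described above, under assumed details: a tiny implicit video network f(x, y, t) is fitted so that integrating its frames under a known flutter-shutter code reproduces the single coded-exposure snapshot. The code pattern, resolution, and network are all hypothetical stand-ins.

import torch
import torch.nn as nn

class VideoINR(nn.Module):
    """Hypothetical implicit video: maps (x, y, t) to a gray intensity."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, coords):          # coords: (N, 3) in [0, 1]^3
        return self.net(coords)

H = W = 32
T = 8
code = torch.tensor([1., 0., 1., 1., 0., 1., 0., 1.])  # assumed shutter code
ys, xs = torch.meshgrid(torch.linspace(0, 1, H),
                        torch.linspace(0, 1, W), indexing="ij")
snapshot = torch.rand(H, W)  # stand-in for the real coded-exposure capture

inr = VideoINR()
opt = torch.optim.Adam(inr.parameters(), lr=1e-3)
for step in range(100):
    # Render all T latent frames and integrate them under the shutter code.
    frames = []
    for k in range(T):
        t = torch.full((H * W, 1), k / (T - 1))
        coords = torch.cat([xs.reshape(-1, 1), ys.reshape(-1, 1), t], dim=1)
        frames.append(inr(coords).reshape(H, W))
    rendered = sum(c * f for c, f in zip(code, frames)) / code.sum()
    loss = ((rendered - snapshot) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()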
arXiv Detail & Related papers (2023-11-22T03:41:13Z)
- Joint Video Multi-Frame Interpolation and Deblurring under Unknown Exposure Time [101.91824315554682]
In this work, we aim ambitiously for a more realistic and challenging task: joint video multi-frame interpolation and deblurring under unknown exposure time.
We first adopt a variant of supervised contrastive learning to construct an exposure-aware representation from input blurred frames.
We then build our video reconstruction network upon the exposure and motion representation by progressive exposure-adaptive convolution and motion refinement.
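As an illustration of the contrastive step, here is a hedged PyTorch sketch of a supervised contrastive loss in which embeddings sharing the same discretized exposure-time label are treated as positives; the embedding source, label binning, and temperature are assumptions, not the paper's exact recipe.

import torch
import torch.nn.functional as F

def sup_contrastive_loss(z, exposure_labels, tau=0.1):
    """Hypothetical sketch: embeddings of blurred frames captured with the
    same (discretized) exposure time are pulled together, others pushed apart."""
    z = F.normalize(z, dim=1)                      # (N, D) unit embeddings
    sim = z @ z.t() / tau                          # pairwise similarities
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool)
    pos = (exposure_labels.unsqueeze(0) == exposure_labels.unsqueeze(1)) & ~eye
    # log-softmax over all other samples, averaged over positive pairs
    logprob = sim - torch.logsumexp(
        sim.masked_fill(eye, float('-inf')), dim=1, keepdim=True)
    return -(logprob[pos]).mean()

z = torch.randn(8, 64, requires_grad=True)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])    # assumed exposure-time bins
loss = sup_contrastive_loss(z, labels)
loss.backward()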
arXiv Detail & Related papers (2023-03-27T09:43:42Z)
- DynIBaR: Neural Dynamic Image-Based Rendering [79.44655794967741]
We address the problem of synthesizing novel views from a monocular video depicting a complex dynamic scene.
We adopt a volumetric image-based rendering framework that synthesizes new viewpoints by aggregating features from nearby views.
We demonstrate significant improvements over state-of-the-art methods on dynamic scene datasets.
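A rough sketch of the aggregation idea, with heavy simplifications (a single shared pinhole projection stands in for per-view camera extrinsics): samples along one ray gather bilinear features from nearby views, the averaged features are decoded to color and density, and the ray is alpha-composited.

import torch
import torch.nn as nn
import torch.nn.functional as F

def project(points, cam):
    """Assumed pinhole projection; points: (S, 3) in camera coordinates.
    Returns (S, 2) normalized image coordinates in [-1, 1]."""
    uv = points[:, :2] / points[:, 2:3].clamp(min=1e-6)
    return (uv / cam["half_fov_tan"]).clamp(-1, 1)

S, V, C = 16, 3, 8                        # ray samples, nearby views, feat dim
feats = torch.randn(V, C, 32, 32)         # per-view feature maps (stand-ins)
points = torch.rand(S, 3) + torch.tensor([0., 0., 1.])  # samples on one ray
cam = {"half_fov_tan": 1.0}

gathered = []
for v in range(V):
    # Same projection reused per view for brevity; real IBR warps per camera.
    grid = project(points, cam).view(1, S, 1, 2)
    f = F.grid_sample(feats[v:v+1], grid, align_corners=True)  # (1, C, S, 1)
    gathered.append(f.view(C, S).t())                          # (S, C)
agg = torch.stack(gathered).mean(dim=0)    # average features across views

decoder = nn.Linear(C, 4)                  # RGB + density per sample
out = decoder(agg)
rgb, sigma = torch.sigmoid(out[:, :3]), F.softplus(out[:, 3])
alpha = 1 - torch.exp(-sigma)              # unit step size assumed
trans = torch.cumprod(torch.cat([torch.ones(1), 1 - alpha[:-1]]), dim=0)
pixel = (trans.unsqueeze(1) * alpha.unsqueeze(1) * rgb).sum(dim=0)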
arXiv Detail & Related papers (2022-11-20T20:57:02Z)
- Learning rich optical embeddings for privacy-preserving lensless image classification [17.169529483306103]
We exploit the unique multiplexing property of lensless optics by casting them as an encoder that produces learned embeddings directly at the camera sensor.
We do so in the context of image classification, where we jointly optimize the encoder's parameters and those of an image classifier in an end-to-end fashion.
Our experiments show that jointly learning the lensless optical encoder and the digital processing allows for lower resolution embeddings at the sensor, and hence better privacy as it is much harder to recover meaningful images from these measurements.
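A minimal sketch of the joint optimization, under the common assumption that the lensless measurement can be modeled as convolution of the scene with a learnable non-negative point-spread function; the PSF model, pooling classifier, and sizes are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class OpticalEncoder(nn.Module):
    """Hypothetical stand-in for the lensless optic: the measurement is
    modeled as convolution of the scene with a learnable PSF."""
    def __init__(self, psf_size=15):
        super().__init__()
        self.psf = nn.Parameter(torch.rand(1, 1, psf_size, psf_size))

    def forward(self, x):                 # x: (B, 1, H, W) scene
        psf = torch.relu(self.psf)        # physical PSFs are non-negative
        psf = psf / psf.sum()             # energy conservation
        return F.conv2d(x, psf, padding=self.psf.shape[-1] // 2)

optic = OpticalEncoder()
classifier = nn.Sequential(
    nn.AdaptiveAvgPool2d(8), nn.Flatten(), nn.Linear(64, 10))

params = list(optic.parameters()) + list(classifier.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

scene = torch.rand(4, 1, 28, 28)          # stand-in images
target = torch.randint(0, 10, (4,))
logits = classifier(optic(scene))         # classify the sensor measurement
loss = F.cross_entropy(logits, target)    # end-to-end: gradients reach the PSF
loss.backward(); opt.step()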
arXiv Detail & Related papers (2022-06-03T07:38:09Z)
- Restoration of Video Frames from a Single Blurred Image with Motion Understanding [69.90724075337194]
We propose a novel framework to generate clean video frames from a single motion-blurred image.
We formulate video restoration from a single blurred image as an inverse problem by setting clean image sequence and their respective motion as latent factors.
Our framework is based on an encoder-decoder structure with spatial transformer network modules.
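To illustrate the encoder-decoder-with-STN idea (details assumed, not the paper's exact design): shared encoder features are warped by one spatial transformer per output frame, then decoded into that frame.

import torch
import torch.nn as nn
import torch.nn.functional as F

class STN(nn.Module):
    """Hypothetical spatial transformer block: predicts an affine warp from
    the feature map and resamples the features with it."""
    def __init__(self, ch):
        super().__init__()
        self.loc = nn.Sequential(
            nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(ch * 16, 6))
        # Initialize the localization head to the identity transform.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

class BlurToFrames(nn.Module):
    """Sketch: emit n frames from one blurred image; one STN per output
    frame models per-frame motion (layer sizes assumed)."""
    def __init__(self, n_frames=5, ch=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        self.stns = nn.ModuleList(STN(ch) for _ in range(n_frames))
        self.decoder = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, blurred):
        feat = self.encoder(blurred)
        return [self.decoder(stn(feat)) for stn in self.stns]

frames = BlurToFrames()(torch.rand(1, 3, 64, 64))   # list of 5 sharp frames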
arXiv Detail & Related papers (2021-04-19T08:32:57Z)
- Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image, though usually treated as an artifact, can provide useful cues for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
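A bare-bones sketch of the end-to-end formulation, with an assumed toy network: a CNN regresses a dense two-channel displacement field from the blurred image alone and is supervised with average endpoint error.

import torch
import torch.nn as nn

# Hypothetical sketch: a plain CNN regresses a dense 2-channel flow field
# (pixel displacement over the exposure) directly from one blurred image.
flow_net = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 2, 3, padding=1),           # (dx, dy) per pixel
)

blurred = torch.rand(2, 3, 64, 64)            # stand-in blurred images
gt_flow = torch.randn(2, 2, 64, 64)           # stand-in ground-truth flow
pred = flow_net(blurred)
# Average endpoint error: the usual supervision for flow regression.
epe = torch.norm(pred - gt_flow, dim=1).mean()
epe.backward()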
arXiv Detail & Related papers (2021-03-04T12:45:18Z)
- Motion-blurred Video Interpolation and Extrapolation [72.3254384191509]
We present a novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end manner.
To ensure temporal coherence across predicted frames and address potential temporal ambiguity, we propose a simple, yet effective flow-based rule.
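One way such a rule could look, as a hedged stand-in (the paper's actual rule is flow-based; plain photometric error is used here for brevity): each recovered burst is determined only up to time reversal, so the direction whose first frame connects more smoothly to the previous burst is kept.

import torch

def photometric_error(a, b):
    """Mean absolute difference between two frames (a simplification of a
    flow-warped comparison)."""
    return (a - b).abs().mean()

def orient_burst(prev_last, burst):
    """Hypothetical ordering rule: pick the direction of the new burst whose
    first frame best continues the last frame of the previous burst."""
    fwd = photometric_error(prev_last, burst[0])
    bwd = photometric_error(prev_last, burst[-1])
    return burst if fwd <= bwd else burst[::-1]

prev_last = torch.rand(3, 64, 64)
burst = [torch.rand(3, 64, 64) for _ in range(5)]   # order-ambiguous frames
ordered = orient_burst(prev_last, burst)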
arXiv Detail & Related papers (2021-03-04T12:18:25Z)
- Exploiting Raw Images for Real-Scene Super-Resolution [105.18021110372133]
We study the problem of real-scene single image super-resolution to bridge the gap between synthetic data and real captured images.
We propose a method to generate more realistic training data by mimicking the imaging process of digital cameras.
We also develop a two-branch convolutional neural network to exploit the radiance information originally recorded in raw images.
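A minimal sketch of a two-branch design in this spirit, with assumed channel counts: one branch processes the packed raw mosaic for detail restoration, the other the camera-processed RGB for color, and their features are fused.

import torch
import torch.nn as nn

class TwoBranchSR(nn.Module):
    """Hypothetical sketch: one branch restores detail from the raw mosaic,
    the other extracts color cues from the processed RGB; sizes assumed."""
    def __init__(self, ch=32):
        super().__init__()
        self.raw_branch = nn.Sequential(
            nn.Conv2d(4, ch, 3, padding=1), nn.ReLU(),   # packed RGGB input
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.rgb_branch = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        self.fuse = nn.Conv2d(2 * ch, 3, 3, padding=1)

    def forward(self, raw_packed, rgb):
        a = self.raw_branch(raw_packed)                  # (B, ch, H, W)
        b = self.rgb_branch(rgb)                         # (B, ch, H, W)
        return self.fuse(torch.cat([a, b], dim=1))

sr = TwoBranchSR()
out = sr(torch.rand(1, 4, 32, 32), torch.rand(1, 3, 32, 32))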
arXiv Detail & Related papers (2021-02-02T16:10:15Z)
- FlatNet: Towards Photorealistic Scene Reconstruction from Lensless Measurements [31.353395064815892]
We propose a non-iterative deep learning based reconstruction approach that results in orders of magnitude improvement in image quality for lensless reconstructions.
Our approach, called FlatNet, lays down a framework for reconstructing high-quality photorealistic images from mask-based lensless cameras.
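A hedged sketch of the non-iterative two-stage idea, assuming a separable measurement model: a trainable separable inversion maps the measurement to image space in one shot, and a small CNN refines the intermediate estimate. Shapes and layers are illustrative only.

import torch
import torch.nn as nn

class NonIterativeLensless(nn.Module):
    """Hypothetical two-stage sketch: a trainable separable inversion maps
    the sensor measurement into image space in a single step, and a small
    CNN then cleans up the intermediate estimate."""
    def __init__(self, meas_hw=(64, 64), img_hw=(32, 32)):
        super().__init__()
        self.W1 = nn.Parameter(torch.randn(img_hw[0], meas_hw[0]) * 0.01)
        self.W2 = nn.Parameter(torch.randn(meas_hw[1], img_hw[1]) * 0.01)
        self.refine = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, y):                 # y: (B, 1, Hm, Wm) measurement
        x0 = self.W1 @ y @ self.W2        # learned one-shot inversion
        return self.refine(x0)            # perceptual clean-up stage

net = NonIterativeLensless()
recon = net(torch.rand(2, 1, 64, 64))     # reconstructed (2, 1, 32, 32)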
arXiv Detail & Related papers (2020-10-29T09:20:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.