Restoration of Video Frames from a Single Blurred Image with Motion Understanding
- URL: http://arxiv.org/abs/2104.09134v1
- Date: Mon, 19 Apr 2021 08:32:57 GMT
- Title: Restoration of Video Frames from a Single Blurred Image with Motion Understanding
- Authors: Dawit Mureja Argaw, Junsik Kim, Francois Rameau, Chaoning Zhang, In So
Kweon
- Abstract summary: We propose a novel framework to generate clean video frames from a single motion-blurred image.
We formulate video restoration from a single blurred image as an inverse problem by setting the clean image sequence and its respective motion as latent factors.
Our framework is based on an encoder-decoder structure with spatial transformer network modules.
- Score: 69.90724075337194
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel framework to generate clean video frames from a single
motion-blurred image. While a broad range of literature focuses on recovering a
single image from a blurred image, in this work we tackle a more challenging
task, i.e., video restoration from a blurred image. We formulate video
restoration from a single blurred image as an inverse problem by setting the
clean image sequence and its respective motion as latent factors, and the blurred
image as an observation. Our framework is based on an encoder-decoder structure
with spatial transformer network modules to restore a video sequence and its
underlying motion in an end-to-end manner. We design a loss function and
regularizers with complementary properties to stabilize the training and
analyze variant models of the proposed network. The effectiveness and
transferability of our network are highlighted through a large set of
experiments on two different types of datasets: camera rotation blurs generated
from panorama scenes and dynamic motion blurs in high speed videos.
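The inverse problem described in the abstract rests on the standard blur formation model: a motion-blurred observation is (approximately) the temporal average of the latent sharp frames over the exposure window. A minimal NumPy sketch of that forward model, with illustrative frame shapes and counts that are assumptions rather than values from the paper:

```python
import numpy as np

# Latent factors: a sequence of N sharp frames (values in [0, 1]).
# N, H, W are illustrative assumptions, not from the paper.
rng = np.random.default_rng(0)
N, H, W = 7, 4, 4
sharp_frames = rng.random((N, H, W))

# Forward (observation) model: the blurred image is the temporal
# average of the sharp frames over the exposure time.
blurred = sharp_frames.mean(axis=0)

# Restoration is the inverse problem: given only `blurred`, recover
# the frame sequence and its underlying motion. It is ill-posed;
# for instance, reversing the frame order yields the same observation.
permuted = sharp_frames[::-1]
assert np.allclose(permuted.mean(axis=0), blurred)
```

Because averaging discards frame order and per-frame detail, many distinct sequences explain the same observation, which is why the paper pairs its loss function with complementary regularizers to stabilize training.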
Related papers
- Aggregating Long-term Sharp Features via Hybrid Transformers for Video Deblurring [76.54162653678871]
We propose a video deblurring method that leverages both neighboring frames and present sharp frames using hybrid Transformers for feature aggregation.
Our proposed method outperforms state-of-the-art video deblurring methods as well as event-driven video deblurring methods in terms of quantitative metrics and visual quality.
arXiv Detail & Related papers (2023-09-13T16:12:11Z)
- Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation [93.18163456287164]
This paper proposes a novel text-guided video-to-video translation framework to adapt image models to videos.
Our framework achieves global style and local texture temporal consistency at a low cost.
arXiv Detail & Related papers (2023-06-13T17:52:23Z)
- Neural Image Re-Exposure [86.42475408644822]
An improper shutter may lead to a blurry image, video discontinuity, or rolling shutter artifact.
We propose a neural network-based image re-exposure framework.
It consists of an encoder for visual latent space construction, a re-exposure module for aggregating information to neural film with a desired shutter strategy, and a decoder for 'developing' neural film into a desired image.
arXiv Detail & Related papers (2023-05-23T01:55:37Z)
- Unfolding a blurred image [36.519356428362286]
We learn motion representation from sharp videos in an unsupervised manner.
We then train a convolutional recurrent video autoencoder network that performs a surrogate task of video reconstruction.
It is employed for guided training of a motion encoder for blurred images.
This network extracts embedded motion information from the blurred image to generate a sharp video in conjunction with the trained recurrent video decoder.
arXiv Detail & Related papers (2022-01-28T09:39:55Z)
- Video Reconstruction from a Single Motion Blurred Image using Learned Dynamic Phase Coding [34.76550131783525]
We propose a hybrid optical-digital method for video reconstruction using a single motion-blurred image.
We use a learned dynamic phase-coding in the lens aperture during the image acquisition to encode the motion trajectories.
The proposed computational camera generates a sharp frame burst of the scene at various frame rates from a single coded motion-blurred image.
arXiv Detail & Related papers (2021-12-28T02:06:44Z)
- Affine-modeled video extraction from a single motion blurred image [3.0080996413230667]
A motion-blurred image is the temporal average of multiple sharp frames over the exposure time.
In this work, we report a generalized video extraction method using affine motion modeling.
Experiments on both public datasets and real captured data validate the state-of-the-art performance of the reported technique.
arXiv Detail & Related papers (2021-04-08T13:59:14Z)
- Motion-blurred Video Interpolation and Extrapolation [72.3254384191509]
We present a novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end manner.
To ensure temporal coherence across predicted frames and address potential temporal ambiguity, we propose a simple, yet effective flow-based rule.
arXiv Detail & Related papers (2021-03-04T12:18:25Z)
- Video Reconstruction by Spatio-Temporal Fusion of Blurred-Coded Image Pair [16.295479896947853]
Recovering video from a single motion-blurred image is a very ill-posed problem.
The traditional coded exposure framework is better-posed, but it only samples a fraction of the space-time volume.
We propose to use the complementary information present in the fully-exposed image along with the coded exposure image to recover a high fidelity video.
arXiv Detail & Related papers (2020-10-20T06:08:42Z)
- Task-agnostic Temporally Consistent Facial Video Editing [84.62351915301795]
We propose a task-agnostic, temporally consistent facial video editing framework.
Based on a 3D reconstruction model, our framework is designed to handle several editing tasks in a more unified and disentangled manner.
Compared with the state-of-the-art facial image editing methods, our framework generates video portraits that are more photo-realistic and temporally smooth.
arXiv Detail & Related papers (2020-07-03T02:49:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.