Lightweight High-Speed Photography Built on Coded Exposure and Implicit
Neural Representation of Videos
- URL: http://arxiv.org/abs/2311.13134v1
- Date: Wed, 22 Nov 2023 03:41:13 GMT
- Title: Lightweight High-Speed Photography Built on Coded Exposure and Implicit
Neural Representation of Videos
- Authors: Zhihong Zhang, Runzhao Yang, Jinli Suo, Yuxiao Cheng, Qionghai Dai
- Abstract summary: A coded exposure setup that encodes a frame sequence into a blurry snapshot can serve as a lightweight solution.
However, restoring motion from blur is quite challenging due to the high ill-posedness of motion blur decomposition.
We develop a novel self-recursive neural network to sequentially retrieve the latent video sequence from the blurry image.
- Score: 36.64080221546024
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Compact cameras that record high-speed scenes at high resolution are
in great demand, but the required high bandwidth often leads to bulky, heavy
systems, limiting their applications on low-capacity platforms. Adopting a
coded exposure setup to encode a frame sequence into a blurry snapshot and
retrieve the latent sharp video afterward can serve as a lightweight solution.
However, restoring motion from blur is quite challenging due to the severe
ill-posedness of motion-blur decomposition, the intrinsic ambiguity in motion
direction, and the diversity of motions in natural videos. In this work, by
leveraging the classical coded exposure imaging technique and the emerging
implicit neural representation of videos, we embed motion-direction cues into
the blurry image during the imaging process and develop a novel self-recursive
neural network that sequentially retrieves the latent video sequence from the
blurry image using the embedded cues. To validate the
effectiveness and efficiency of the proposed framework, we conduct extensive
experiments on benchmark datasets and real-captured blurry images. The results
demonstrate that our proposed framework significantly outperforms existing
methods in quality and flexibility. The code for our work is available at
https://github.com/zhihongz/BDINR
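
To make the imaging model concrete, below are two minimal Python sketches of the two ingredients named in the title. They are illustrative assumptions only: the shutter code, network sizes, and function names are made up here and do not reproduce the BDINR implementation in the linked repository.

First, coded exposure image formation: the sensor integrates the latent sharp frames while a binary shutter code flutters the shutter open and closed, so the captured snapshot is a code-weighted temporal sum of the frames; the code is what embeds the motion cues the decoder later exploits. A sketch, assuming a hand-picked 8-bit code:

```python
import numpy as np

def coded_exposure_snapshot(frames, code):
    """Simulate coded exposure: integrate a sharp frame sequence
    while a binary shutter code opens and closes the shutter.

    frames: (T, H, W, C) float array, latent sharp video in [0, 1]
    code:   (T,) 0/1 array, per-sub-frame shutter state (1 = open)
    """
    frames = np.asarray(frames, dtype=np.float64)
    code = np.asarray(code, dtype=np.float64)
    assert frames.shape[0] == code.shape[0], "one code bit per frame"
    # Code-weighted temporal sum; normalizing by the open time keeps
    # the snapshot in the same intensity range as the input frames.
    return np.tensordot(code, frames, axes=(0, 0)) / max(code.sum(), 1.0)

# Toy usage: 8 latent frames and an illustrative 8-bit shutter code.
video = np.random.rand(8, 64, 64, 3)
code = np.array([1, 0, 1, 1, 0, 0, 1, 1])  # assumption, not BDINR's code
snapshot = coded_exposure_snapshot(video, code)
print(snapshot.shape)  # (64, 64, 3)
```

Second, an implicit neural representation of video: a coordinate MLP that maps a normalized (t, x, y) query to an RGB value, so frames at arbitrary times can be decoded one by one. This is a generic video-INR sketch, not the paper's self-recursive network:

```python
import torch
import torch.nn as nn

class VideoINR(nn.Module):
    """A minimal implicit neural representation of a video:
    an MLP mapping normalized coordinates (t, x, y) to RGB."""

    def __init__(self, hidden=256, depth=4):
        super().__init__()
        dims = [3] + [hidden] * depth + [3]
        layers = []
        for i in range(len(dims) - 1):
            layers.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                layers.append(nn.ReLU())
        self.mlp = nn.Sequential(*layers)

    def forward(self, coords):
        # coords: (N, 3) with entries in [0, 1]; output: (N, 3) RGB
        return torch.sigmoid(self.mlp(coords))

# Querying the same representation at different times t is what lets a
# decoder emit the latent frames sequentially from a single snapshot.
model = VideoINR()
coords = torch.rand(1024, 3)  # random (t, x, y) query points
rgb = model(coords)
print(rgb.shape)  # torch.Size([1024, 3])
```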
Related papers
- Towards Real-world Event-guided Low-light Video Enhancement and Deblurring [39.942568142125126]
Event cameras have emerged as a promising solution for improving image quality in low-light environments.
We introduce an end-to-end framework to effectively handle these tasks.
Our framework incorporates a module to efficiently leverage temporal information from events and frames.
arXiv Detail & Related papers (2024-08-27T09:44:54Z) - Event-based Continuous Color Video Decompression from Single Frames [38.59798259847563]
We present ContinuityCam, a novel approach to generate a continuous video from a single static RGB image, using an event camera.
Our approach combines continuous long-range motion modeling with a feature-plane-based neural integration model, enabling frame prediction at arbitrary times within the events.
arXiv Detail & Related papers (2023-11-30T18:59:23Z) - Pix2HDR -- A pixel-wise acquisition and deep learning-based synthesis approach for high-speed HDR videos [2.275097126764287]
High-speed high dynamic range (HDR) video is challenging because the camera's frame rate restricts its dynamic range.
Existing methods sacrifice speed to acquire multi-exposure frames, yet misaligned motion in these frames can still pose challenges for HDR fusion algorithms.
Our method greatly enhances the vision system's adaptability and performance in dynamic conditions.
arXiv Detail & Related papers (2023-10-24T19:27:35Z) - Neural Image Re-Exposure [86.42475408644822]
An improper shutter may lead to a blurry image, video discontinuity, or rolling shutter artifacts.
We propose a neural network-based image re-exposure framework.
It consists of an encoder for visual latent space construction, a re-exposure module for aggregating information to neural film with a desired shutter strategy, and a decoder for 'developing' neural film into a desired image.
arXiv Detail & Related papers (2023-05-23T01:55:37Z) - Joint Video Multi-Frame Interpolation and Deblurring under Unknown
Exposure Time [101.91824315554682]
In this work, we aim ambitiously for a more realistic and challenging task: joint video multi-frame interpolation and deblurring under unknown exposure time.
We first adopt a variant of supervised contrastive learning to construct an exposure-aware representation from input blurred frames.
We then build our video reconstruction network upon the exposure and motion representation by progressive exposure-adaptive convolution and motion refinement.
arXiv Detail & Related papers (2023-03-27T09:43:42Z) - Towards Interpretable Video Super-Resolution via Alternating
Optimization [115.85296325037565]
We study a practical space-time video super-resolution (STVSR) problem which aims at generating a high-framerate high-resolution sharp video from a low-framerate blurry video.
We propose an interpretable STVSR framework by leveraging both model-based and learning-based methods.
arXiv Detail & Related papers (2022-07-21T21:34:05Z) - Video Reconstruction from a Single Motion Blurred Image using Learned
Dynamic Phase Coding [34.76550131783525]
We propose a hybrid optical-digital method for video reconstruction using a single motion-blurred image.
We use a learned dynamic phase-coding in the lens aperture during the image acquisition to encode the motion trajectories.
The proposed computational camera generates a sharp frame burst of the scene at various frame rates from a single coded motion-blurred image.
arXiv Detail & Related papers (2021-12-28T02:06:44Z) - Restoration of Video Frames from a Single Blurred Image with Motion
Understanding [69.90724075337194]
We propose a novel framework to generate clean video frames from a single motion-blurred image.
We formulate video restoration from a single blurred image as an inverse problem by setting clean image sequence and their respective motion as latent factors.
Our framework is based on an encoder-decoder structure with spatial transformer network modules.
arXiv Detail & Related papers (2021-04-19T08:32:57Z) - Motion-blurred Video Interpolation and Extrapolation [72.3254384191509]
We present a novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end manner.
To ensure temporal coherence across predicted frames and address potential temporal ambiguity, we propose a simple, yet effective flow-based rule.
arXiv Detail & Related papers (2021-03-04T12:18:25Z) - Exposure Trajectory Recovery from Motion Blur [90.75092808213371]
Motion blur in dynamic scenes is an important yet challenging research topic.
In this paper, we define exposure trajectories, which represent the motion information contained in a blurry image.
A novel motion offset estimation framework is proposed to model pixel-wise displacements of the latent sharp image.
arXiv Detail & Related papers (2020-10-06T05:23:33Z) - Prior-enlightened and Motion-robust Video Deblurring [29.158836861982742]
We propose a PRiOr-enlightened and MOTION-robust deblurring model (PROMOTION) suitable for challenging blurs.
We use 3D group convolution to efficiently encode heterogeneous prior information.
We also design priors representing the blur distribution to better handle non-uniform blur in the spatio-temporal domain.
arXiv Detail & Related papers (2020-03-25T04:16:56Z)