Recovering Continuous Scene Dynamics from A Single Blurry Image with Events
- URL: http://arxiv.org/abs/2304.02695v1
- Date: Wed, 5 Apr 2023 18:44:17 GMT
- Title: Recovering Continuous Scene Dynamics from A Single Blurry Image with Events
- Authors: Zhangyi Cheng, Xiang Zhang, Lei Yu, Jianzhuang Liu, Wen Yang, Gui-Song Xia
- Abstract summary: An Implicit Video Function (IVF) is learned to represent a single motion blurred image with concurrent events.
A dual attention transformer is proposed to efficiently leverage merits from both modalities.
The proposed network is trained only with the supervision of ground-truth images of limited referenced timestamps.
- Score: 58.7185835546638
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper aims at demystifying a single motion-blurred image with events and
revealing the temporally continuous scene dynamics hidden behind the motion blur.
To this end, an Implicit Video Function (IVF) is learned to represent a single
motion-blurred image with concurrent events, enabling restoration of the latent
sharp image at arbitrary timestamps within the exposure interval.
Specifically, a dual attention transformer is proposed to efficiently leverage
merits from both modalities, i.e., the high temporal resolution of event
features and the smoothness of image features, alleviating temporal ambiguities
while suppressing the event noise. The proposed network is trained only with
the supervision of ground-truth images of limited referenced timestamps.
Motion- and texture-guided supervisions are employed simultaneously to enhance
restorations of the non-referenced timestamps and improve the overall
sharpness. Experiments on synthetic, semi-synthetic, and real-world datasets
demonstrate that our proposed method outperforms state-of-the-art methods by a
large margin in terms of both objective PSNR and SSIM measurements and
subjective evaluations.
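The IVF itself is a learned network, but the physical relation it exploits between a blurry image and concurrent events can be illustrated with the classical event double integral (EDI) model from prior event-based deblurring work: each latent frame relates to a reference frame through the exponentiated integral of events, and the blurry image is the temporal average of the latent frames. A minimal NumPy sketch of that relation (the function name, the contrast threshold `c`, and the pre-accumulated event counts are illustrative assumptions, not the paper's method):

```python
import numpy as np

def edi_reconstruct(blurry, event_counts, c=0.2):
    """Recover latent frames from a blurry image via the event
    double integral (EDI) relation.

    blurry:       (H, W) blurry image, the temporal average of the
                  latent frames over the exposure
    event_counts: (N, H, W) signed event counts integrated from the
                  reference time up to each of N sample timestamps
    c:            event contrast threshold (assumed known)
    """
    # L(t) = L(t0) * exp(c * E(t)) ties each latent frame to the
    # reference-time frame through the integrated events.
    ratios = np.exp(c * event_counts)        # (N, H, W)
    # B = L(t0) * mean_t exp(c * E(t))  =>  solve for L(t0).
    latent_t0 = blurry / ratios.mean(axis=0)
    # The latent frame at any sampled timestamp then follows.
    return latent_t0[None] * ratios          # (N, H, W)
```

A learned IVF replaces this hand-crafted exponential model with a network queried at a continuous timestamp, which is what makes restoration at arbitrary (not just sampled) times possible.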
Related papers
- Non-Uniform Exposure Imaging via Neuromorphic Shutter Control [9.519068512865463]
We propose a novel Neuromorphic Shutter Control (NSC) system to avoid motion blurs and alleviate instant noises.
To stabilize the inconsistent Signal-to-Noise Ratio (SNR), we propose an event-based image denoising network within a self-supervised learning paradigm.
arXiv Detail & Related papers (2024-04-22T08:28:41Z)
- Motion Blur Decomposition with Cross-shutter Guidance [33.72961622720793]
Motion blur is an artifact under insufficient illumination where exposure time has to be prolonged so as to collect more photons for a bright enough image.
Recent research has aimed at decomposing a blurry image into multiple sharp images with spatial and temporal coherence.
We propose to utilize the ordered scanline-wise delay in a rolling shutter image to robustify motion decomposition of a single blurry image.
arXiv Detail & Related papers (2024-04-01T13:55:40Z)
- Joint Video Multi-Frame Interpolation and Deblurring under Unknown Exposure Time [101.91824315554682]
In this work, we aim ambitiously for a more realistic and challenging task: joint video multi-frame interpolation and deblurring under unknown exposure time.
We first adopt a variant of supervised contrastive learning to construct an exposure-aware representation from input blurred frames.
We then build our video reconstruction network upon the exposure and motion representation by progressive exposure-adaptive convolution and motion refinement.
arXiv Detail & Related papers (2023-03-27T09:43:42Z)
- Event-based Image Deblurring with Dynamic Motion Awareness [10.81953574179206]
We introduce the first dataset containing pairs of real RGB blur images and related events during the exposure time.
Our results show better overall robustness when using events, with PSNR improvements of up to 1.57 dB on synthetic data and 1.08 dB on real event data.
arXiv Detail & Related papers (2022-08-24T09:39:55Z)
- Unifying Motion Deblurring and Frame Interpolation with Events [11.173687810873433]
Slow shutter speed and long exposure time of frame-based cameras often cause visual blur and loss of inter-frame information, degenerating the overall quality of captured videos.
We present a unified framework of event-based motion deblurring and frame enhancement for blurry video enhancement, where the extremely low latency of events is leveraged to alleviate motion blur and facilitate intermediate frame prediction.
By exploring the mutual constraints among blurry frames, latent images, and event streams, we further propose a self-supervised learning framework to enable network training with real-world blurry videos and events.
arXiv Detail & Related papers (2022-03-23T03:43:12Z)
- TimeLens: Event-based Video Frame Interpolation [54.28139783383213]
We introduce Time Lens, a novel method that leverages the advantages of both synthesis-based and flow-based approaches.
We show an up to 5.21 dB improvement in terms of PSNR over state-of-the-art frame-based and event-based methods.
arXiv Detail & Related papers (2021-06-14T10:33:47Z)
- Blind Motion Deblurring Super-Resolution: When Dynamic Spatio-Temporal Learning Meets Static Image Understanding [87.5799910153545]
Single-image super-resolution (SR) and multi-frame SR are two ways to super resolve low-resolution images.
A Blind Motion Deblurring Super-Resolution Network is proposed to learn dynamic spatio-temporal information from single static motion-blurred images.
arXiv Detail & Related papers (2021-05-27T11:52:45Z)
- Motion-blurred Video Interpolation and Extrapolation [72.3254384191509]
We present a novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end manner.
To ensure temporal coherence across predicted frames and address potential temporal ambiguity, we propose a simple, yet effective flow-based rule.
arXiv Detail & Related papers (2021-03-04T12:18:25Z)
- Exposure Trajectory Recovery from Motion Blur [90.75092808213371]
Motion blur in dynamic scenes is an important yet challenging research topic.
In this paper, we define exposure trajectories, which represent the motion information contained in a blurry image.
A novel motion offset estimation framework is proposed to model pixel-wise displacements of the latent sharp image.
arXiv Detail & Related papers (2020-10-06T05:23:33Z)
- Single Image Optical Flow Estimation with an Event Camera [38.92408855196647]
Event cameras are bio-inspired sensors that report intensity changes in microsecond resolution.
We propose a single image (potentially blurred) and events based optical flow estimation approach.
arXiv Detail & Related papers (2020-04-01T11:28:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.