Context-Aware Video Reconstruction for Rolling Shutter Cameras
- URL: http://arxiv.org/abs/2205.12912v1
- Date: Wed, 25 May 2022 17:05:47 GMT
- Title: Context-Aware Video Reconstruction for Rolling Shutter Cameras
- Authors: Bin Fan, Yuchao Dai, Zhiyuan Zhang, Qi Liu, Mingyi He
- Abstract summary: In this paper, we propose a context-aware GS video reconstruction architecture.
We first estimate the bilateral motion field so that the pixels of the two RS frames are warped to a common GS frame.
Then, a refinement scheme is proposed to guide the GS frame synthesis along with bilateral occlusion masks to produce high-fidelity GS video frames.
- Score: 52.28710992548282
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the ubiquity of rolling shutter (RS) cameras, it is becoming
increasingly attractive to recover the latent global shutter (GS) video from
two consecutive RS frames, which also places a higher demand on realism.
Existing solutions, using deep neural networks or optimization, achieve
promising performance. However, these methods generate intermediate GS frames
through image warping based on the RS model, which inevitably results in black
holes and noticeable motion artifacts. In this paper, we alleviate these issues
by proposing a context-aware GS video reconstruction architecture that enables
occlusion reasoning, motion compensation, and temporal abstraction.
and temporal abstraction. Specifically, we first estimate the bilateral motion
field so that the pixels of the two RS frames are warped to a common GS frame
accordingly. Then, a refinement scheme is proposed to guide the GS frame
synthesis along with bilateral occlusion masks to produce high-fidelity GS
video frames at arbitrary times. Furthermore, we derive an approximated
bilateral motion field model, which can serve as an alternative to provide a
simple but effective GS frame initialization for related tasks. Experiments on
synthetic and real data show that our approach achieves superior performance
over state-of-the-art methods in terms of objective metrics and subjective
visual quality. Code is available at https://github.com/GitCVfb/CVR.
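As a reading aid, the warp-and-fuse step described above can be pictured in a few lines of PyTorch. This is a minimal sketch under our own assumptions (the bilateral motion fields and occlusion masks are taken as given, and all function names are ours), not the CVR implementation, which lives in the linked repository.

```python
import torch
import torch.nn.functional as F

def backward_warp(img, flow):
    """Backward-warp `img` (B,C,H,W) with a pixel-displacement field (B,2,H,W):
    output(x, y) = img(x + flow_x, y + flow_y), sampled bilinearly."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=img.dtype, device=img.device),
        torch.arange(w, dtype=img.dtype, device=img.device),
        indexing="ij",
    )
    x = xs.unsqueeze(0) + flow[:, 0]
    y = ys.unsqueeze(0) + flow[:, 1]
    # Normalize sampling locations to [-1, 1], as grid_sample expects.
    grid = torch.stack((2 * x / (w - 1) - 1, 2 * y / (h - 1) - 1), dim=-1)
    return F.grid_sample(img, grid, align_corners=True)

def fuse_gs_frame(rs0, rs1, flow_t0, flow_t1, mask0, mask1):
    """Blend the two RS frames, warped to GS time t, with bilateral
    occlusion masks in [0, 1] assumed to sum to 1 per pixel."""
    warped0 = backward_warp(rs0, flow_t0)  # RS frame 0 -> GS frame at time t
    warped1 = backward_warp(rs1, flow_t1)  # RS frame 1 -> GS frame at time t
    return mask0 * warped0 + mask1 * warped1
```

Per the abstract, such a blend only initializes the GS frame; the paper's refinement scheme then guides a learned synthesis step to produce the final high-fidelity output.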
Related papers
- Self-supervised Learning of Event-guided Video Frame Interpolation for Rolling Shutter Frames [6.62974666987451]
This paper makes the first attempt to tackle the challenging task of recovering arbitrary frame rate latent global shutter (GS) frames from two consecutive rolling shutter (RS) frames.
We propose a novel self-supervised framework that leverages events to guide RS frame correction and video frame interpolation (VFI) in a unified framework.
arXiv Detail & Related papers (2023-06-27T14:30:25Z)
- Self-supervised Learning to Bring Dual Reversed Rolling Shutter Images Alive [56.70381414277253]
We propose a Self-supervised learning framework for Dual RS distortion Correction (SelfDRSC).
A DRSC network can be trained to generate a high framerate GS video based only on dual RS images with reversed distortions.
On real-world RS cases, our SelfDRSC can produce high framerate GS videos with finer correction textures and better temporal consistency.
arXiv Detail & Related papers (2023-05-31T13:55:00Z)
- Rolling Shutter Inversion: Bring Rolling Shutter Images to High Framerate Global Shutter Video [111.08121952640766]
This paper presents a novel deep-learning based solution to the RS temporal super-resolution problem.
By leveraging the multi-view geometry relationship of the RS imaging process, our framework successfully achieves high framerate GS generation.
Our method can produce high-quality GS image sequences with rich details, outperforming the state-of-the-art methods.
arXiv Detail & Related papers (2022-10-06T16:47:12Z)
- Combining Internal and External Constraints for Unrolling Shutter in Videos [10.900978946948095]
We propose a space-time solution to the RS problem.
We observe that an RS video and its corresponding GS video tend to share the exact same xt slices -- up to a known sub-frame temporal shift.
This makes it possible to constrain the GS output video using video-specific constraints imposed by the RS input video.
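A toy NumPy sketch of this shared-slice property (our illustration, not code from the paper; `row_delay` stands for the known per-row readout delay, in samples of a densely sampled GS video):

```python
import numpy as np

def xt_slice(video, row):
    """xt slice of a (T, H, W) grayscale video at a fixed image row -> (T, W)."""
    return video[:, row, :]

def rs_slice_from_gs(gs_video, row, row_delay):
    """Toy version of the shared-slice constraint: the RS xt slice at `row`
    is the GS xt slice shifted in time by that row's known readout delay.

    Assumes the GS video is sampled densely enough that the shift lands on
    whole samples; np.roll wraps at the ends, so only the interior of the
    slice is meaningful in this sketch.
    """
    shift = int(round(row_delay * row))  # sub-frame temporal shift, in samples
    return np.roll(xt_slice(gs_video, row), shift, axis=0)
```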
arXiv Detail & Related papers (2022-07-24T12:01:27Z)
- Bringing Rolling Shutter Images Alive with Dual Reversed Distortion [75.78003680510193]
Rolling shutter (RS) distortion can be interpreted as the result of picking a row of pixels from instant global shutter (GS) frames over time.
We develop a novel end-to-end model, IFED, to generate dual optical flow sequence through iterative learning of the velocity field during the RS time.
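That row-picking interpretation admits a direct NumPy sketch (our illustration, not the paper's code): an RS frame is assembled by copying each row from the instantaneous GS frame captured while that row was being read out.

```python
import numpy as np

def synthesize_rs_frame(gs_stack):
    """Build one RS frame from a dense stack of instantaneous GS frames.

    gs_stack: (T, H, W); row y of the output is copied from the GS frame
    at a time index proportional to y, mimicking top-to-bottom readout.
    """
    t, h, w = gs_stack.shape
    rs = np.empty((h, w), dtype=gs_stack.dtype)
    for y in range(h):
        rs[y] = gs_stack[round(y / h * (t - 1)), y]
    return rs
```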
arXiv Detail & Related papers (2022-03-12T14:57:49Z)
- TimeLens: Event-based Video Frame Interpolation [54.28139783383213]
We introduce Time Lens, a novel method that leverages the advantages of both synthesis-based and flow-based approaches.
We show an up to 5.21 dB improvement in terms of PSNR over state-of-the-art frame-based and event-based methods.
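For reference, PSNR here is the standard peak signal-to-noise ratio; a 5.21 dB gain corresponds to roughly a 3.3x reduction in mean squared error, since 10^(5.21/10) ≈ 3.3. A minimal implementation:

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    diff = np.asarray(pred, np.float64) - np.asarray(target, np.float64)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
```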
arXiv Detail & Related papers (2021-06-14T10:33:47Z)
- Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video Super-Resolution [95.26202278535543]
A simple solution is to split it into two sub-tasks: video frame interpolation (VFI) and video super-resolution (VSR).
However, temporal interpolation and spatial super-resolution are intra-related in this task.
We propose a one-stage space-time video super-resolution framework, which directly synthesizes an HR slow-motion video from an LFR, LR video.
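The contrast between the two-sub-task baseline and the one-stage design can be summarized with hypothetical interfaces (names are illustrative, not the paper's API):

```python
def two_stage(lfr_lr_video, vfi_model, vsr_model):
    """Baseline: interpolate frames first, then super-resolve each frame."""
    hfr_lr = vfi_model(lfr_lr_video)  # temporal interpolation at low resolution
    return vsr_model(hfr_lr)          # spatial super-resolution per frame

def one_stage(lfr_lr_video, st_model):
    """One-stage: a single network maps the LFR, LR input directly to an
    HFR, HR output, letting temporal and spatial cues share features."""
    return st_model(lfr_lr_video)
```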
arXiv Detail & Related papers (2020-02-26T16:59:48Z)