Rolling Shutter Inversion: Bring Rolling Shutter Images to High
Framerate Global Shutter Video
- URL: http://arxiv.org/abs/2210.03040v1
- Date: Thu, 6 Oct 2022 16:47:12 GMT
- Title: Rolling Shutter Inversion: Bring Rolling Shutter Images to High
Framerate Global Shutter Video
- Authors: Bin Fan, Yuchao Dai and Hongdong Li
- Abstract summary: This paper presents a novel deep-learning based solution to the RS temporal super-resolution problem.
By leveraging the multi-view geometry relationship of the RS imaging process, our framework successfully achieves high framerate GS generation.
Our method can produce high-quality GS image sequences with rich details, outperforming the state-of-the-art methods.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A single rolling-shutter (RS) image may be viewed as a row-wise combination
of a sequence of global-shutter (GS) images captured by a (virtual) moving GS
camera within the exposure duration. Although RS cameras are widely used, the
RS effect causes obvious image distortion especially in the presence of fast
camera motion, hindering downstream computer vision tasks. In this paper, we
propose to invert the RS image capture mechanism, i.e., recovering a continuous
high framerate GS video from two time-consecutive RS frames. We call this task
the RS temporal super-resolution (RSSR) problem. The RSSR is a very challenging
task, and to our knowledge, no practical solution exists to date. This paper
presents a novel deep-learning based solution. By leveraging the multi-view
geometry relationship of the RS imaging process, our learning-based framework
successfully achieves high framerate GS generation. Specifically, three novel
contributions can be identified: (i) novel formulations for bidirectional RS
undistortion flows under both constant-velocity and constant-acceleration
motion models; (ii) a simple linear scaling operation that bridges the RS
undistortion flow and regular optical flow; and (iii) a new mutual conversion
scheme between the varying RS undistortion flows corresponding to different
scanlines. Our method also exploits the underlying spatial-temporal geometric
relationships within a deep learning framework, where no additional supervision
is required beyond the necessary middle-scanline GS image. Building upon these
contributions, we present the very first rolling-shutter temporal
super-resolution deep network that is able to recover high framerate GS videos
from just two RS frames. Extensive experimental results on both synthetic and
real data show that our proposed method can produce high-quality GS image
sequences with rich details, outperforming the state-of-the-art methods.
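The row-wise capture model described in the abstract can be sketched in a few lines. The toy code below is an illustration, not the authors' implementation: it assembles an RS image by copying each scanline from the virtual GS frame exposed at that row's readout time, and includes a hypothetical version of the linear-scaling idea that relates a scanline's undistortion displacement to the regular inter-frame optical flow under a constant-velocity motion model. The function names and the linear row-to-time mapping are assumptions made for this sketch.

```python
import numpy as np

def simulate_rolling_shutter(gs_frames):
    """Toy RS formation model: row y of the RS image is copied from the
    virtual GS frame whose timestamp matches that row's readout time,
    assuming a linear scanline-to-time mapping over the exposure."""
    T, H, W = gs_frames.shape
    rs = np.empty((H, W), dtype=gs_frames.dtype)
    for y in range(H):
        t = round(y * (T - 1) / (H - 1))  # row index -> GS frame index
        rs[y] = gs_frames[t, y]
    return rs

def undistortion_flow(optical_flow, row, target_row, H):
    """Hypothetical linear-scaling sketch: under constant velocity, the
    displacement moving a pixel on scanline `row` to the virtual GS frame
    of `target_row` is a scalar multiple of the regular optical flow
    between the two RS frames (readout-time gap in frame intervals)."""
    lam = (target_row - row) / H
    return lam * optical_flow

# Demo: 4 virtual GS "frames" of a gradient shifting one pixel per frame.
H, W, T = 4, 5, 4
base = np.tile(np.arange(W), (H, 1))
gs = np.stack([np.roll(base, t, axis=1) for t in range(T)])
rs = simulate_rolling_shutter(gs)
print(rs)  # each successive RS row is shifted one more pixel than gs[0]
```

Because later scanlines are read out later, the simulated RS image shows each row displaced progressively further, which is exactly the skew distortion the paper aims to invert.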
Related papers
- SelfDRSC++: Self-Supervised Learning for Dual Reversed Rolling Shutter Correction [72.05587640928879]
We propose an enhanced Self-supervised learning framework for Dual reversed RS distortion Correction (SelfDRSC++).
We introduce a lightweight DRSC network that incorporates a bidirectional correlation matching block to refine the joint optimization of optical flows and corrected RS features.
To effectively train the DRSC network, we propose a self-supervised learning strategy that ensures cycle consistency between input and reconstructed dual reversed RS images.
arXiv Detail & Related papers (2024-08-21T08:17:22Z)
- Self-supervised Learning of Event-guided Video Frame Interpolation for Rolling Shutter Frames [6.62974666987451]
This paper makes the first attempt to tackle the challenging task of recovering arbitrary frame rate latent global shutter (GS) frames from two consecutive rolling shutter (RS) frames.
We propose a novel self-supervised framework that leverages events to guide RS frame correction and VFI in a unified framework.
arXiv Detail & Related papers (2023-06-27T14:30:25Z)
- Self-supervised Learning to Bring Dual Reversed Rolling Shutter Images Alive [56.70381414277253]
We propose a Self-supervised learning framework for Dual RS distortion Correction (SelfDRSC).
A DRSC network can be learned to generate a high framerate GS video based only on dual RS images with reversed distortions.
On real-world RS cases, our SelfDRSC produces high framerate videos with finer correction textures and better temporal consistency.
arXiv Detail & Related papers (2023-05-31T13:55:00Z)
- Self-Supervised Scene Dynamic Recovery from Rolling Shutter Images and Events [63.984927609545856]
An Event-based Inter/intra-frame Compensator (E-IC) is proposed to predict the per-pixel dynamics between arbitrary time intervals.
We show that the proposed method achieves state-of-the-art results and remarkable performance for event-based RS2GS inversion in real-world scenarios.
arXiv Detail & Related papers (2023-04-14T05:30:02Z)
- Context-Aware Video Reconstruction for Rolling Shutter Cameras [52.28710992548282]
In this paper, we propose a context-aware GS video reconstruction architecture.
We first estimate the bilateral motion field so that the pixels of the two RS frames are warped to a common GS frame.
Then, a refinement scheme is proposed to guide the GS frame synthesis along with bilateral occlusion masks to produce high-fidelity GS video frames.
arXiv Detail & Related papers (2022-05-25T17:05:47Z)
- Bringing Rolling Shutter Images Alive with Dual Reversed Distortion [75.78003680510193]
Rolling shutter (RS) distortion can be interpreted as the result of picking a row of pixels from instant global shutter (GS) frames over time.
We develop a novel end-to-end model, IFED, to generate dual optical flow sequence through iterative learning of the velocity field during the RS time.
arXiv Detail & Related papers (2022-03-12T14:57:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.