Bringing Rolling Shutter Images Alive with Dual Reversed Distortion
- URL: http://arxiv.org/abs/2203.06451v2
- Date: Tue, 15 Mar 2022 09:34:53 GMT
- Title: Bringing Rolling Shutter Images Alive with Dual Reversed Distortion
- Authors: Zhihang Zhong, Mingdeng Cao, Xiao Sun, Zhirong Wu, Zhongyi Zhou,
Yinqiang Zheng, Stephen Lin, Imari Sato
- Abstract summary: Rolling shutter (RS) distortion can be interpreted as the result of picking a row of pixels from instant global shutter (GS) frames over time.
We develop a novel end-to-end model, IFED, to generate dual optical flow sequences through iterative learning of the velocity field during the RS time.
- Score: 75.78003680510193
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rolling shutter (RS) distortion can be interpreted as the result of picking a
row of pixels from instant global shutter (GS) frames over time during the
exposure of the RS camera. This means that the information of each instant GS
frame is partially, yet sequentially, embedded into the row-dependent
distortion. Inspired by this fact, we address the challenging task of reversing
this process, i.e., extracting undistorted GS frames from images suffering from
RS distortion. However, since RS distortion is coupled with other factors such
as readout settings and the relative velocity of scene elements to the camera,
models that only exploit the geometric correlation between temporally adjacent
images suffer from poor generality in processing data with different readout
settings and dynamic scenes with both camera motion and object motion. In this
paper, instead of two consecutive frames, we propose to exploit a pair of
images captured by dual RS cameras with reversed RS directions for this highly
challenging task. Grounded on the symmetric and complementary nature of dual
reversed distortion, we develop a novel end-to-end model, IFED, to generate
dual optical flow sequences through iterative learning of the velocity field
during the RS time. Extensive experimental results demonstrate that IFED is
superior to naive cascade schemes, as well as the state-of-the-art which
utilizes adjacent RS images. Most importantly, although it is trained on a
synthetic dataset, IFED is shown to be effective at retrieving GS frame
sequences from real-world RS distorted images of dynamic scenes.
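The abstract describes RS formation as sampling one image row at a time from a sequence of instant GS frames, with a second camera reading rows in the reversed direction. Below is a minimal NumPy sketch (a hypothetical illustration for this listing, not the authors' IFED code) of how such a dual reversed RS pair could be synthesized from a GS frame stack, assuming the readout spans the whole stack and ignoring per-row exposure and readout gaps.
```python
# Minimal sketch of the RS formation model described in the abstract:
# each row of an RS image is sampled from a different instant GS frame,
# and a second camera with reversed readout scans rows bottom-to-top.
# Hypothetical illustration only; not the authors' IFED implementation.
import numpy as np


def synthesize_dual_rs(gs_frames: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """gs_frames: (T, H, W, C) stack of instant global-shutter frames.
    Returns (rs_top_to_bottom, rs_bottom_to_top), each of shape (H, W, C)."""
    t, h, w, c = gs_frames.shape
    # Map each readout step to the GS frame captured at that instant.
    row_times = np.linspace(0, t - 1, h).round().astype(int)

    rs_t2b = np.empty((h, w, c), dtype=gs_frames.dtype)
    rs_b2t = np.empty((h, w, c), dtype=gs_frames.dtype)
    for step, ti in enumerate(row_times):
        rs_t2b[step] = gs_frames[ti, step]                  # scans top-to-bottom
        rs_b2t[h - 1 - step] = gs_frames[ti, h - 1 - step]  # scans bottom-to-top
    return rs_t2b, rs_b2t


if __name__ == "__main__":
    # Example with a random 240-frame, 480x640 GS stack.
    gs = np.random.rand(240, 480, 640, 3).astype(np.float32)
    rs1, rs2 = synthesize_dual_rs(gs)
    print(rs1.shape, rs2.shape)  # (480, 640, 3) (480, 640, 3)
```
Reversing this sampling, i.e., recovering the GS frame sequence from the dual reversed RS pair, is the task the paper addresses with IFED.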
Related papers
- Self-supervised Learning of Event-guided Video Frame Interpolation for
Rolling Shutter Frames [6.62974666987451]
This paper makes the first attempt to tackle the challenging task of recovering arbitrary frame rate latent global shutter (GS) frames from two consecutive rolling shutter (RS) frames.
We propose a novel self-supervised framework that leverages events to guide RS frame correction and VFI in a unified framework.
arXiv Detail & Related papers (2023-06-27T14:30:25Z) - Self-supervised Learning to Bring Dual Reversed Rolling Shutter Images
Alive [56.70381414277253]
We propose a self-supervised learning framework for dual RS distortion correction (SelfDRSC).
A DRSC network can be learned to generate a high framerate GS video based only on dual RS images with reversed distortions.
On real-world RS cases, our SelfDRSC can produce high framerate videos with finer correction textures and better temporal consistency.
arXiv Detail & Related papers (2023-05-31T13:55:00Z) - Self-Supervised Scene Dynamic Recovery from Rolling Shutter Images and
Events [63.984927609545856]
An Event-based Inter/intra-frame Compensator (E-IC) is proposed to predict the per-pixel dynamics between arbitrary time intervals.
We show that the proposed method achieves state-of-the-art results and performs remarkably well for event-based RS2GS inversion in real-world scenarios.
arXiv Detail & Related papers (2023-04-14T05:30:02Z) - Towards Nonlinear-Motion-Aware and Occlusion-Robust Rolling Shutter
Correction [54.00007868515432]
Existing methods face challenges in estimating an accurate correction field due to their uniform-velocity assumption.
We propose a geometry-based Quadratic Rolling Shutter (QRS) motion solver, which precisely estimates the high-order correction field of individual pixels.
Our method surpasses the state-of-the-art by +4.98 dB, +0.77 dB, and +4.33 dB in PSNR on the Carla-RS, Fastec-RS, and BS-RSC datasets, respectively.
arXiv Detail & Related papers (2023-03-31T15:09:18Z) - Rolling Shutter Inversion: Bring Rolling Shutter Images to High
Framerate Global Shutter Video [111.08121952640766]
This paper presents a novel deep-learning based solution to the RS temporal super-resolution problem.
By leveraging the multi-view geometry relationship of the RS imaging process, our framework successfully achieves high framerate GS generation.
Our method can produce high-quality GS image sequences with rich details, outperforming the state-of-the-art methods.
arXiv Detail & Related papers (2022-10-06T16:47:12Z) - Learning Adaptive Warping for Real-World Rolling Shutter Correction [52.45689075940234]
This paper proposes the first real-world rolling shutter (RS) correction dataset, BS-RSC, and a corresponding model to correct the RS frames in a distorted video.
Videos captured by consumer mobile devices with CMOS-based sensors often exhibit rolling shutter effects when relative motion occurs during acquisition.
arXiv Detail & Related papers (2022-04-29T05:13:50Z) - Deep network for rolling shutter rectification [25.170821013431958]
We propose an end-to-end deep neural network for the challenging task of single image RS rectification.
Our network consists of a motion block, a trajectory module, a row block, an RS rectification module and an RS regeneration module.
Experiments on synthetic and real datasets reveal that our network outperforms prior art both qualitatively and quantitatively.
arXiv Detail & Related papers (2021-12-12T06:40:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.