Deep network for rolling shutter rectification
- URL: http://arxiv.org/abs/2112.06170v1
- Date: Sun, 12 Dec 2021 06:40:34 GMT
- Title: Deep network for rolling shutter rectification
- Authors: Praveen K, Lokesh Kumar T, and A.N. Rajagopalan
- Abstract summary: We propose an end-to-end deep neural network for the challenging task of single image RS rectification.
Our network consists of a motion block, a trajectory module, a row block, an RS rectification module and an RS regeneration module.
Experiments on synthetic and real datasets reveal that our network outperforms prior art both qualitatively and quantitatively.
- Score: 25.170821013431958
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: CMOS sensors employ a row-wise acquisition mechanism while imaging a scene,
which can result in undesired motion artifacts known as rolling shutter (RS)
distortions in the captured image. Existing single image RS rectification
methods attempt to account for these distortions either by using algorithms
tailored to a specific class of scenes, which requires knowledge of the intrinsic
camera parameters, or by using a learning-based framework that relies on known
ground-truth motion parameters. In this paper, we propose an end-to-end deep
neural network for the
challenging task of single image RS rectification. Our network consists of a
motion block, a trajectory module, a row block, an RS rectification module and
an RS regeneration module (which is used only during training). The motion
block predicts camera pose for every row of the input RS distorted image while
the trajectory module fits estimated motion parameters to a third-order
polynomial. The row block predicts the camera motion that must be associated
with every pixel in the target, i.e., the RS-rectified image. Finally, the RS
rectification module uses the motion trajectory and the output of the row block
to warp the input RS image into a distortion-free image. For faster convergence
during training, we additionally use an RS regeneration module which compares
the input RS image with the ground truth image distorted by estimated motion
parameters. The end-to-end formulation in our model does not constrain the
estimated motion to ground-truth motion parameters, thereby successfully
rectifying the RS images with complex real-life camera motion. Experiments on
synthetic and real datasets reveal that our network outperforms prior art both
qualitatively and quantitatively.
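As a concrete illustration of the trajectory module described above, the sketch below fits independently estimated per-row motion parameters to a third-order polynomial in normalized row time and re-evaluates a smooth trajectory at every row. The 6-DoF pose parameterization, the function name, and the least-squares fit via NumPy are assumptions made for this sketch; the abstract specifies only that per-row motion estimates are fit to a third-order polynomial.

```python
import numpy as np

def fit_row_trajectory(row_poses: np.ndarray, degree: int = 3) -> np.ndarray:
    """Smooth per-row camera motion estimates with a low-order polynomial.

    row_poses : (H, D) array holding one D-dimensional motion vector per image
                row (e.g. D = 6 for a translation + rotation parameterization;
                this parameterization is an assumption of the sketch).
    Returns an (H, D) trajectory evaluated at every row index.
    """
    H, D = row_poses.shape
    t = np.linspace(0.0, 1.0, H)        # normalized readout time of each row
    smoothed = np.empty_like(row_poses)
    for d in range(D):
        coeffs = np.polyfit(t, row_poses[:, d], deg=degree)  # least-squares cubic fit
        smoothed[:, d] = np.polyval(coeffs, t)
    return smoothed

# Example: noisy per-row estimates standing in for a hypothetical motion block
rng = np.random.default_rng(0)
noisy_poses = np.cumsum(rng.normal(scale=1e-3, size=(480, 6)), axis=0)
trajectory = fit_row_trajectory(noisy_poses)  # smooth third-order motion trajectory
```

The RS rectification module would then combine such a smooth trajectory with the row block's per-pixel row assignment to warp the input RS image to its distortion-free counterpart; that warping step is omitted here because it depends on camera intrinsics and scene assumptions not detailed in the abstract.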
Related papers
- SelfDRSC++: Self-Supervised Learning for Dual Reversed Rolling Shutter Correction [72.05587640928879]
We propose an enhanced Self-supervised learning framework for Dual reversed RS distortion Correction (SelfDRSC++).
We introduce a lightweight DRSC network that incorporates a bidirectional correlation matching block to refine the joint optimization of optical flows and corrected RS features.
To effectively train the DRSC network, we propose a self-supervised learning strategy that ensures cycle consistency between input and reconstructed dual reversed RS images.
arXiv Detail & Related papers (2024-08-21T08:17:22Z)
- Single Image Rolling Shutter Removal with Diffusion Models [46.57721145372241]
We present RS-Diffusion, the first Diffusion Models-based method for single-frame Rolling Shutter (RS) correction.
In this work, we present an "image-to-motion" framework via diffusion techniques, with a designed patch-attention module.
In addition, we present the RS-Real dataset, comprised of captured RS frames alongside their corresponding Global Shutter (GS) ground-truth pairs.
arXiv Detail & Related papers (2024-07-03T08:25:02Z)
- Self-Supervised Scene Dynamic Recovery from Rolling Shutter Images and Events [63.984927609545856]
An Event-based Inter/intra-frame Compensator (E-IC) is proposed to predict the per-pixel dynamic between arbitrary time intervals.
We show that the proposed method achieves state-of-the-art and shows remarkable performance for event-based RS2GS inversion in real-world scenarios.
arXiv Detail & Related papers (2023-04-14T05:30:02Z)
- Data Consistent Deep Rigid MRI Motion Correction [9.551748050454378]
Motion artifacts are a pervasive problem in MRI, leading to misdiagnosis or mischaracterization in population-level imaging studies.
Current retrospective rigid intra-slice motion correction techniques jointly optimize estimates of the image and the motion parameters.
In this paper, we use a deep network to reduce the joint image-motion parameter search to a search over rigid motion parameters alone.
arXiv Detail & Related papers (2023-01-25T00:21:31Z)
- Rolling Shutter Inversion: Bring Rolling Shutter Images to High Framerate Global Shutter Video [111.08121952640766]
This paper presents a novel deep-learning based solution to the RS temporal super-resolution problem.
By leveraging the multi-view geometry relationship of the RS imaging process, our framework successfully achieves high framerate GS generation.
Our method can produce high-quality GS image sequences with rich details, outperforming the state-of-the-art methods.
arXiv Detail & Related papers (2022-10-06T16:47:12Z)
- Learning Adaptive Warping for Real-World Rolling Shutter Correction [52.45689075940234]
This paper proposes the first real-world rolling shutter (RS) correction dataset, BS-RSC, and a corresponding model to correct the RS frames in a distorted video.
Video capture with CMOS-based sensors on consumer mobile devices often exhibits rolling shutter effects when relative movements occur during the acquisition process.
arXiv Detail & Related papers (2022-04-29T05:13:50Z)
- Bringing Rolling Shutter Images Alive with Dual Reversed Distortion [75.78003680510193]
Rolling shutter (RS) distortion can be interpreted as the result of picking a row of pixels from instant global shutter (GS) frames over time.
We develop a novel end-to-end model, IFED, to generate a dual optical flow sequence through iterative learning of the velocity field during the RS time.
arXiv Detail & Related papers (2022-03-12T14:57:49Z)
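The last entry above interprets RS distortion as picking one row of pixels from an instantaneous global shutter frame at each readout time. Below is a minimal sketch of that formation model, assuming one GS frame per image row (i.e., the GS stack is sampled at the row readout rate); the function name and the synthetic translating scene are illustrative, not taken from any of the papers listed.

```python
import numpy as np

def synthesize_rs(gs_frames: np.ndarray) -> np.ndarray:
    """Assemble a rolling shutter image from a stack of global shutter frames.

    gs_frames : (N, H, W) array with one GS frame per row readout instant,
                where this sketch assumes N == H (one frame per image row).
    Row i of the RS image is copied from row i of GS frame i.
    """
    num_frames, height = gs_frames.shape[0], gs_frames.shape[1]
    assert num_frames == height, "sketch assumes one GS frame per image row"
    return np.stack([gs_frames[i, i] for i in range(height)], axis=0)

# Example: a horizontally translating gradient scene produces skewed RS rows
H, W = 64, 96
base = np.tile(np.linspace(0.0, 1.0, W), (H, 1))
gs_stack = np.stack([np.roll(base, shift=i // 4, axis=1) for i in range(H)])
rs_image = synthesize_rs(gs_stack)   # each row is sampled at a later scene instant
```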