Self-supervised Learning to Bring Dual Reversed Rolling Shutter Images
Alive
- URL: http://arxiv.org/abs/2305.19862v3
- Date: Thu, 14 Sep 2023 15:39:30 GMT
- Title: Self-supervised Learning to Bring Dual Reversed Rolling Shutter Images
Alive
- Authors: Wei Shang, Dongwei Ren, Chaoyu Feng, Xiaotao Wang, Lei Lei, Wangmeng
Zuo
- Abstract summary: We propose a Self-supervised learning framework for Dual RS distortions Correction (SelfDRSC)
A DRSC network can be learned to generate a high framerate GS video based only on dual RS images with reversed distortions.
On real-world RS cases, our SelfDRSC can produce high framerate GS videos with finer correction textures and better temporal consistency.
- Score: 56.70381414277253
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern consumer cameras usually employ the rolling shutter (RS) mechanism,
where images are captured by scanning scenes row-by-row, yielding RS
distortions for dynamic scenes. To correct RS distortions, existing methods
adopt a fully supervised learning scheme, requiring high framerate global
shutter (GS) images to be collected as ground-truth supervision. In this paper, we
propose a Self-supervised learning framework for Dual reversed RS distortions
Correction (SelfDRSC), where a DRSC network can be learned to generate a high
framerate GS video only based on dual RS images with reversed distortions. In
particular, a bidirectional distortion warping module is proposed for
reconstructing dual reversed RS images, and then a self-supervised loss can be
deployed to train DRSC network by enhancing the cycle consistency between input
and reconstructed dual reversed RS images. Besides start and end RS scanning
time, GS images at arbitrary intermediate scanning time can also be supervised
in SelfDRSC, thus enabling the learned DRSC network to generate a high
framerate GS video. Moreover, a simple yet effective self-distillation strategy
is introduced in self-supervised loss for mitigating boundary artifacts in
generated GS images. On the synthetic dataset, SelfDRSC achieves better or
comparable quantitative metrics in comparison to state-of-the-art methods
trained in a fully supervised manner. On real-world RS cases, our SelfDRSC
can produce high framerate GS videos with finer correction textures and better
temporal consistency. The source code and trained models are made publicly
available at https://github.com/shangwei5/SelfDRSC. We also provide an
implementation in HUAWEI Mindspore at
https://github.com/Hunter-Will/SelfDRSC-mindspore.
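The core self-supervised signal in the abstract — enforcing cycle consistency between the input and reconstructed dual reversed RS images — can be sketched roughly as follows. This is an illustrative NumPy sketch under simplifying assumptions, not the released implementation; the function and argument names are hypothetical.

```python
import numpy as np

def cycle_consistency_loss(rs_t2b, rs_b2t, recon_t2b, recon_b2t):
    """Mean absolute error between each input RS image (top-to-bottom
    and bottom-to-top scanned) and its reconstruction, obtained by
    re-warping the predicted GS frames back into RS geometry."""
    return (np.abs(recon_t2b - rs_t2b).mean()
            + np.abs(recon_b2t - rs_b2t).mean())
```

In the paper's framework, the reconstructions would come from the bidirectional distortion warping module applied to the network's GS predictions; here they are simply passed in as arrays.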
Related papers
- SelfDRSC++: Self-Supervised Learning for Dual Reversed Rolling Shutter Correction [72.05587640928879]
We propose an enhanced Self-supervised learning framework for Dual reversed RS distortion Correction (SelfDRSC++)
We introduce a lightweight DRSC network that incorporates a bidirectional correlation matching block to refine the joint optimization of optical flows and corrected RS features.
To effectively train the DRSC network, we propose a self-supervised learning strategy that ensures cycle consistency between input and reconstructed dual reversed RS images.
arXiv Detail & Related papers (2024-08-21T08:17:22Z)
- Self-supervised Learning of Event-guided Video Frame Interpolation for Rolling Shutter Frames [6.62974666987451]
This paper makes the first attempt to tackle the challenging task of recovering arbitrary frame rate latent global shutter (GS) frames from two consecutive rolling shutter (RS) frames.
We propose a novel self-supervised framework that leverages events to guide RS frame correction and video frame interpolation (VFI) in a unified framework.
arXiv Detail & Related papers (2023-06-27T14:30:25Z)
- Self-Supervised Scene Dynamic Recovery from Rolling Shutter Images and Events [63.984927609545856]
Event-based Inter/intra-frame Compensator (E-IC) is proposed to predict the per-pixel dynamic between arbitrary time intervals.
We show that the proposed method achieves state-of-the-art results and shows remarkable performance for event-based RS-to-GS inversion in real-world scenarios.
arXiv Detail & Related papers (2023-04-14T05:30:02Z)
- Rolling Shutter Inversion: Bring Rolling Shutter Images to High Framerate Global Shutter Video [111.08121952640766]
This paper presents a novel deep-learning based solution to the RS temporal super-resolution problem.
By leveraging the multi-view geometry relationship of the RS imaging process, our framework successfully achieves high framerate GS generation.
Our method can produce high-quality GS image sequences with rich details, outperforming the state-of-the-art methods.
arXiv Detail & Related papers (2022-10-06T16:47:12Z)
- Context-Aware Video Reconstruction for Rolling Shutter Cameras [52.28710992548282]
In this paper, we propose a context-aware GS video reconstruction architecture.
We first estimate the bilateral motion field so that the pixels of the two RS frames are warped to a common GS frame.
Then, a refinement scheme is proposed to guide the GS frame synthesis along with bilateral occlusion masks to produce high-fidelity GS video frames.
arXiv Detail & Related papers (2022-05-25T17:05:47Z)
- Bringing Rolling Shutter Images Alive with Dual Reversed Distortion [75.78003680510193]
Rolling shutter (RS) distortion can be interpreted as the result of picking a row of pixels from instant global shutter (GS) frames over time.
We develop a novel end-to-end model, IFED, to generate dual optical flow sequence through iterative learning of the velocity field during the RS time.
arXiv Detail & Related papers (2022-03-12T14:57:49Z)
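The row-picking interpretation of RS distortion in the entry above (each RS row is taken from a different instant GS frame) can be illustrated with a small simulation. This is a hypothetical sketch assuming exactly one GS frame per scanline; it is not code from any of the listed papers.

```python
import numpy as np

def simulate_rs(gs_frames, bottom_to_top=False):
    """Build an RS image by taking row i from the GS frame captured at
    that row's scan time. `gs_frames` has shape (n, h, w) with n == h,
    i.e. one GS frame per scanline. Reversed scanning (bottom_to_top)
    yields the 'dual reversed' distortion."""
    n, h, w = gs_frames.shape
    assert n == h, "expects one GS frame per scanline"
    rows = []
    for i in range(h):
        t = (h - 1 - i) if bottom_to_top else i  # scan time of row i
        rows.append(gs_frames[t, i, :])
    return np.stack(rows)
```

Feeding the same GS sequence through both scan orders produces the two reversed RS images that SelfDRSC and IFED take as input.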