Self-Supervised Scene Dynamic Recovery from Rolling Shutter Images and Events
- URL: http://arxiv.org/abs/2304.06930v2
- Date: Wed, 19 Apr 2023 16:11:38 GMT
- Title: Self-Supervised Scene Dynamic Recovery from Rolling Shutter Images and Events
- Authors: Yangguang Wang, Xiang Zhang, Mingyuan Lin, Lei Yu, Boxin Shi, Wen Yang, and Gui-Song Xia
- Abstract summary: An Event-based Inter/intra-frame Compensator (E-IC) is proposed to predict the per-pixel dynamic between arbitrary time intervals.
We show that the proposed method achieves state-of-the-art results and remarkable performance for event-based RS2GS inversion in real-world scenarios.
- Score: 63.984927609545856
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Scene Dynamic Recovery (SDR) by inverting distorted Rolling Shutter (RS)
images to an undistorted high frame-rate Global Shutter (GS) video is a
severely ill-posed problem due to the missing temporal dynamic information in
both RS intra-frame scanlines and inter-frame exposures, particularly when
prior knowledge about camera/object motions is unavailable. Commonly used
artificial assumptions on scenes/motions and data-specific characteristics are
prone to producing sub-optimal solutions in real-world scenarios. To address
this challenge, we propose an event-based SDR network within a self-supervised
learning paradigm, i.e., SelfUnroll. We leverage the extremely high temporal
resolution of event cameras to provide accurate inter/intra-frame dynamic
information. Specifically, an Event-based Inter/intra-frame Compensator (E-IC)
is proposed to predict the per-pixel dynamic between arbitrary time intervals,
including the temporal transition and spatial translation. Exploring
connections in terms of RS-RS, RS-GS, and GS-RS, we explicitly formulate mutual
constraints with the proposed E-IC, resulting in supervision without
ground-truth GS images. Extensive evaluations on synthetic and real datasets
demonstrate that the proposed method achieves state-of-the-art results and
remarkable performance for event-based RS2GS inversion in real-world scenarios.
The dataset and code are available at https://w3un.github.io/selfunroll/.
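To make the mutual-constraint idea concrete, here is a minimal PyTorch-style sketch of how an RS-RS consistency term and an RS-GS/GS-RS round trip could supervise a compensator without ground-truth GS frames. The `e_ic` callable, its signature, and the loss choices are hypothetical stand-ins for the paper's E-IC, not the released implementation.

```python
import torch.nn.functional as F

def self_supervised_losses(e_ic, rs_a, rs_b, events, t_rows_a, t_rows_b, t_gs):
    """Illustrative RS-RS and RS-GS/GS-RS constraints (all names assumed).

    e_ic(img, events, t_src, t_tgt): hypothetical compensator that warps
    `img` from per-pixel exposure times t_src to target times t_tgt,
    using event-derived motion between the two time maps.
    """
    # RS-RS: warping frame A onto frame B's scanline times should
    # reproduce the other observed RS frame.
    rs_a_to_b = e_ic(rs_a, events, t_rows_a, t_rows_b)
    loss_rs_rs = F.l1_loss(rs_a_to_b, rs_b)

    # RS-GS followed by GS-RS: estimate a latent GS frame at a single
    # timestamp t_gs, then map it back to A's scanline times; the
    # round trip should reconstruct the observed RS frame A.
    gs_hat = e_ic(rs_a, events, t_rows_a, t_gs)      # RS -> GS
    rs_a_rt = e_ic(gs_hat, events, t_gs, t_rows_a)   # GS -> RS
    loss_cycle = F.l1_loss(rs_a_rt, rs_a)

    return loss_rs_rs + loss_cycle
```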
Related papers
- HR-INR: Continuous Space-Time Video Super-Resolution via Event Camera [22.208120663778043]
Continuous space-time super-resolution (C-STVSR) aims to simultaneously enhance resolution and frame rate at an arbitrary scale.
We propose a novel C-STVSR framework, called HR-INR, which captures both holistic dependencies and regional motions based on implicit neural representation (INR).
We then propose a novel INR-based decoder with temporal embeddings to capture long-term dependencies with a larger temporal perception field.
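For readers unfamiliar with implicit neural representations, a minimal space-time decoder is just an MLP over positionally encoded (x, y, t) coordinates. The sketch below assumes illustrative layer sizes and a plain Fourier encoding; it is not HR-INR's actual decoder, which additionally conditions on learned features and temporal embeddings.

```python
import torch
import torch.nn as nn

class CoordinateDecoder(nn.Module):
    """Minimal INR decoder: normalized (x, y, t) -> RGB (illustrative)."""

    def __init__(self, n_freqs: int = 6, hidden: int = 128):
        super().__init__()
        self.n_freqs = n_freqs
        in_dim = 3 * 2 * n_freqs  # sin/cos per frequency per coordinate
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # RGB
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (N, 3) with (x, y, t) normalized to [-1, 1].
        freqs = 2.0 ** torch.arange(self.n_freqs, device=coords.device)
        ang = coords[..., None] * freqs                      # (N, 3, n_freqs)
        enc = torch.cat([ang.sin(), ang.cos()], dim=-1).flatten(1)
        return self.mlp(enc)

# Usage: query any continuous space-time location.
# rgb = CoordinateDecoder()(torch.rand(1024, 3) * 2 - 1)
```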
arXiv Detail & Related papers (2024-05-22T06:51:32Z)
- Implicit Event-RGBD Neural SLAM [54.74363487009845]
Implicit neural SLAM has achieved remarkable progress recently.
Existing methods face significant challenges in non-ideal scenarios.
We propose EN-SLAM, the first event-RGBD implicit neural SLAM framework.
arXiv Detail & Related papers (2023-11-18T08:48:58Z)
- An Asynchronous Linear Filter Architecture for Hybrid Event-Frame Cameras [9.69495347826584]
We present an asynchronous linear filter architecture, fusing event and frame camera data, for HDR video reconstruction and spatial convolution.
The proposed AKF pipeline outperforms other state-of-the-art methods in both absolute intensity error (69.4% reduction) and image similarity indexes (average 35.5% improvement).
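A minimal sketch of the underlying fusion pattern, assuming a scalar Kalman-style filter per pixel in log-intensity space: events drive asynchronous predict steps, and frame pixels act as noisy absolute measurements. All constants and names below are illustrative, not the paper's AKF.

```python
from dataclasses import dataclass

@dataclass
class PixelState:
    """Per-pixel filter state in log-intensity space (illustrative)."""
    x: float = 0.0   # estimated log intensity
    p: float = 1.0   # estimate variance

EVENT_VAR = 0.01     # assumed per-event noise (contrast-threshold jitter)
FRAME_VAR = 0.05     # assumed frame measurement noise

def on_event(s: PixelState, polarity: int, contrast: float = 0.1) -> None:
    # Predict step: an event asynchronously shifts log intensity by
    # one signed contrast threshold and inflates uncertainty.
    s.x += polarity * contrast
    s.p += EVENT_VAR

def on_frame(s: PixelState, log_intensity: float) -> None:
    # Update step: fuse the frame's absolute measurement with the
    # event-propagated estimate via the standard Kalman gain.
    k = s.p / (s.p + FRAME_VAR)
    s.x += k * (log_intensity - s.x)
    s.p *= (1.0 - k)
```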
arXiv Detail & Related papers (2023-09-03T12:37:59Z)
- Self-supervised Learning of Event-guided Video Frame Interpolation for Rolling Shutter Frames [6.62974666987451]
This paper makes the first attempt to tackle the challenging task of recovering arbitrary frame rate latent global shutter (GS) frames from two consecutive rolling shutter (RS) frames.
We propose a novel self-supervised framework that leverages events to guide RS frame correction and video frame interpolation (VFI) in a unified manner.
arXiv Detail & Related papers (2023-06-27T14:30:25Z)
- Alignment-free HDR Deghosting with Semantics Consistent Transformer [76.91669741684173]
High dynamic range imaging aims to retrieve information from multiple low-dynamic range inputs to generate realistic output.
Existing methods often focus on the spatial misalignment across input frames caused by the foreground and/or camera motion.
We propose a novel alignment-free network with a Semantics Consistent Transformer (SCTNet) with both spatial and channel attention modules.
arXiv Detail & Related papers (2023-05-29T15:03:23Z)
- Rolling Shutter Inversion: Bring Rolling Shutter Images to High Framerate Global Shutter Video [111.08121952640766]
This paper presents a novel deep-learning based solution to the RS temporal super-resolution problem.
By leveraging the multi-view geometry relationship of the RS imaging process, our framework successfully achieves high framerate GS generation.
Our method can produce high-quality GS image sequences with rich details, outperforming the state-of-the-art methods.
arXiv Detail & Related papers (2022-10-06T16:47:12Z) - Learning Adaptive Warping for Real-World Rolling Shutter Correction [52.45689075940234]
This paper proposes the first real-world rolling shutter (RS) correction dataset, BS-RSC, and a corresponding model to correct the RS frames in a distorted video.
Mobile devices in the consumer market with CMOS-based sensors for video capture often result in rolling shutter effects when relative movements occur during the video acquisition process.
arXiv Detail & Related papers (2022-04-29T05:13:50Z) - Bringing Rolling Shutter Images Alive with Dual Reversed Distortion [75.78003680510193]
Rolling shutter (RS) distortion can be interpreted as the result of picking a row of pixels from instant global shutter (GS) frames over time.
We develop a novel end-to-end model, IFED, to generate dual optical flow sequences through iterative learning of the velocity field during the RS time.
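That row-picking view is easy to make concrete: given a dense stack of instantaneous GS frames, a synthetic RS frame takes each row from the GS frame whose timestamp matches that row's readout time. A minimal NumPy sketch of this forward model (a simulation aid under assumed timing, not IFED itself):

```python
import numpy as np

def simulate_rolling_shutter(gs_stack: np.ndarray,
                             readout_frac: float = 1.0) -> np.ndarray:
    """Compose an RS frame by picking rows from a GS frame stack.

    gs_stack: (T, H, W[, C]) instantaneous GS frames spanning one RS
    readout. readout_frac scales how much of the stack duration the
    readout sweep covers (1.0 = rows span the full stack).
    """
    t, h = gs_stack.shape[0], gs_stack.shape[1]
    rs = np.empty_like(gs_stack[0])
    for row in range(h):
        # Row `row` is exposed at a time proportional to its index.
        src = int(round(readout_frac * row / max(h - 1, 1) * (t - 1)))
        rs[row] = gs_stack[src, row]
    return rs
```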
arXiv Detail & Related papers (2022-03-12T14:57:49Z)