TimeReplayer: Unlocking the Potential of Event Cameras for Video
Interpolation
- URL: http://arxiv.org/abs/2203.13859v1
- Date: Fri, 25 Mar 2022 18:57:42 GMT
- Title: TimeReplayer: Unlocking the Potential of Event Cameras for Video
Interpolation
- Authors: Weihua He, Kaichao You, Zhendong Qiao, Xu Jia, Ziyang Zhang, Wenhui
Wang, Huchuan Lu, Yaoyuan Wang, Jianxing Liao
- Abstract summary: The event camera is a new device that enables video interpolation in the presence of arbitrarily complex motion.
This paper proposes a novel TimeReplayer algorithm to interpolate videos captured by commodity cameras with events.
- Score: 78.99283105497489
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recording fast motion at a high FPS (frames per second) requires expensive
high-speed cameras. As an alternative, interpolating low-FPS videos from
commodity cameras has attracted significant attention. If only low-FPS videos
are available, motion assumptions (linear or quadratic) are necessary to infer
intermediate frames, but these fail to model complex motions. The event camera, a
new camera whose pixels produce events of brightness change at a temporal
resolution of $\mu s$ ($10^{-6}$ second), is a game-changing device that
enables video interpolation in the presence of arbitrarily complex motion. Since
the event camera is a novel sensor, its potential has not been fulfilled due to the
lack of processing algorithms. The pioneering work Time Lens introduced event
cameras to video interpolation by designing optical devices to collect a large
amount of paired training data of high-speed frames and events, which is too
costly to scale. To fully unlock the potential of event cameras, this paper
proposes a novel TimeReplayer algorithm to interpolate videos captured by
commodity cameras with events. It is trained in an unsupervised,
cycle-consistent style, removing the need for high-speed training data and
bringing the additional ability of video extrapolation. Its state-of-the-art
results and demo videos in the supplementary material reveal the promising future of
event-based vision.
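The unsupervised cycle-consistent training is the key idea that removes the need for paired high-speed ground truth. Below is a minimal PyTorch-style sketch of one common way such a cycle can be instantiated: synthesize the midpoints of two adjacent real-frame intervals with event guidance, then interpolate between the two synthesized frames, which must land back on the real middle frame. All names (`EventInterpNet`, `cycle_loss`), the triplet scheme, and the voxel-grid event encoding are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of unsupervised cycle-consistent training for event-guided
# frame interpolation. EventInterpNet, cycle_loss, and the voxel-grid event
# encoding are illustrative assumptions, not the paper's actual design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EventInterpNet(nn.Module):
    """Toy stand-in for an event-guided interpolator: maps two boundary
    frames plus an event voxel grid to the frame midway between them."""
    def __init__(self, frame_ch=3, event_ch=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * frame_ch + event_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, frame_ch, 3, padding=1),
        )

    def forward(self, f_a, f_b, ev):
        return self.net(torch.cat([f_a, f_b, ev], dim=1))

def cycle_loss(model, f0, f1, f2, ev01, ev12, ev_mid):
    """One step on three consecutive *real* low-FPS frames f0, f1, f2.
    ev01, ev12, ev_mid are event voxel grids over the time windows
    [0,1], [1,2], and [0.5,1.5] respectively."""
    g_a = model(f0, f1, ev01)         # synthesized frame at t = 0.5
    g_b = model(f1, f2, ev12)         # synthesized frame at t = 1.5
    f1_hat = model(g_a, g_b, ev_mid)  # midpoint of the two synthesized frames
    # Interpolating between the synthesized frames must reproduce the real
    # middle frame f1 -- supervision comes for free, no high-FPS data needed.
    return F.l1_loss(f1_hat, f1)

# Smoke test with random tensors (batch 1, 3x64x64 frames, 5-bin voxel grids).
model = EventInterpNet()
f0, f1, f2 = (torch.rand(1, 3, 64, 64) for _ in range(3))
ev01, ev12, ev_mid = (torch.rand(1, 5, 64, 64) for _ in range(3))
cycle_loss(model, f0, f1, f2, ev01, ev12, ev_mid).backward()
```

Because only real low-FPS frames ever serve as supervision targets, this style of training sidesteps the costly optical rigs Time Lens needed to collect paired high-speed data.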
Related papers
- EF-3DGS: Event-Aided Free-Trajectory 3D Gaussian Splatting [76.02450110026747]
Event cameras, inspired by biological vision, record pixel-wise intensity changes asynchronously with high temporal resolution.
We propose Event-Aided Free-Trajectory 3DGS, which seamlessly integrates the advantages of event cameras into 3DGS.
We evaluate our method on the public Tanks and Temples benchmark and a newly collected real-world dataset, RealEv-DAVIS.
arXiv Detail & Related papers (2024-10-20T13:44:24Z)
- Deblur e-NeRF: NeRF from Motion-Blurred Events under High-speed or Low-light Conditions [56.84882059011291]
We propose Deblur e-NeRF, a novel method to reconstruct blur-minimal NeRFs from motion-blurred events.
We also introduce a novel threshold-normalized total variation loss to improve the regularization of large textureless patches.
arXiv Detail & Related papers (2024-09-26T15:57:20Z)
- Investigating Event-Based Cameras for Video Frame Interpolation in Sports [59.755469098797406]
We present a first investigation of event-based Video Frame Interpolation (VFI) models for generating sports slow-motion videos.
In particular, we design and implement a bi-camera recording setup, comprising an RGB and an event-based camera, to capture sports videos and to temporally align and spatially register the two cameras.
Our experimental validation demonstrates that TimeLens, an off-the-shelf event-based VFI model, can effectively generate slow-motion footage for sports videos.
arXiv Detail & Related papers (2024-07-02T15:39:08Z)
- TimeRewind: Rewinding Time with Image-and-Events Video Diffusion [10.687722181495065]
This paper addresses the novel challenge of "rewinding" time from a single captured image to recover the fleeting moments missed just before the shutter button is pressed.
We overcome this challenge by leveraging the emerging technology of neuromorphic event cameras, which capture motion information with high temporal resolution.
Our proposed framework introduces an event motion adaptor conditioned on event camera data, guiding the diffusion model to generate videos that are visually coherent and physically grounded in the captured events.
arXiv Detail & Related papers (2024-03-20T17:57:02Z)
- Event-based Continuous Color Video Decompression from Single Frames [38.59798259847563]
We present ContinuityCam, a novel approach to generate a continuous video from a single static RGB image, using an event camera.
Our approach combines continuous long-range motion modeling with a feature-plane-based neural integration model, enabling frame prediction at arbitrary times within the events.
arXiv Detail & Related papers (2023-11-30T18:59:23Z)
- EGVD: Event-Guided Video Deraining [57.59935209162314]
We propose an end-to-end learning-based network to unlock the potential of the event camera for video deraining.
We build a real-world dataset consisting of rainy videos and temporally synchronized event streams.
arXiv Detail & Related papers (2023-09-29T13:47:53Z)
- EvConv: Fast CNN Inference on Event Camera Inputs For High-Speed Robot Perception [1.3869227429939426]
Event cameras capture visual information with a high temporal resolution and a wide dynamic range.
Current convolutional neural network inference on event camera streams cannot keep pace with the high speeds at which event cameras operate.
This paper presents EvConv, a new approach to enable fast inference on CNNs for inputs from event cameras.
arXiv Detail & Related papers (2023-03-08T15:47:13Z)
- Event-guided Deblurring of Unknown Exposure Time Videos [31.992673443516235]
Event cameras can capture apparent motion with a high temporal resolution.
We propose a novel Exposure Time-based Event Selection module to selectively use event features.
Our method achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-12-13T19:46:17Z)
- Video Frame Interpolation without Temporal Priors [91.04877640089053]
Video frame interpolation aims to synthesize non-existent intermediate frames in a video sequence.
The temporal priors of videos, i.e., frames per second (FPS) and frame exposure time, may vary across different camera sensors.
We devise a novel optical flow refinement strategy to produce better synthesis results.
arXiv Detail & Related papers (2021-12-02T12:13:56Z)