Deep Slow Motion Video Reconstruction with Hybrid Imaging System
- URL: http://arxiv.org/abs/2002.12106v2
- Date: Tue, 21 Apr 2020 11:05:44 GMT
- Title: Deep Slow Motion Video Reconstruction with Hybrid Imaging System
- Authors: Avinash Paliwal and Nima Khademi Kalantari
- Abstract summary: Current techniques increase the frame rate of standard videos through frame interpolation by assuming linear object motion, which is not valid in challenging cases.
We propose a two-stage deep learning system consisting of alignment and appearance estimation.
We train our model on synthetically generated hybrid videos and show high-quality results on a variety of test scenes.
- Score: 12.340049542098148
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Slow motion videos are becoming increasingly popular, but capturing
high-resolution videos at extremely high frame rates requires professional
high-speed cameras. To mitigate this problem, current techniques increase the
frame rate of standard videos through frame interpolation by assuming linear
object motion which is not valid in challenging cases. In this paper, we
address this problem using two video streams as input: an auxiliary video with
high frame rate and low spatial resolution, providing temporal information, in
addition to the standard main video with low frame rate and high spatial
resolution. We propose a two-stage deep learning system consisting of alignment
and appearance estimation that reconstructs high resolution slow motion video
from the hybrid video input. For alignment, we propose to compute flows between
the missing frame and two existing frames of the main video by utilizing the
content of the auxiliary video frames. For appearance estimation, we propose to
combine the warped and auxiliary frames using a context and occlusion aware
network. We train our model on synthetically generated hybrid videos and show
high-quality results on a variety of test scenes. To demonstrate practicality,
we show the performance of our system on two real dual camera setups with small
baseline.
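To make the two-stage design concrete, below is a minimal PyTorch sketch of the pipeline the abstract describes: a flow network estimates motion from the missing frame to its two neighboring keyframes with the help of the auxiliary stream, the keyframes are backward-warped, and a fusion network predicts the final frame. All module names, layer sizes, and input conventions are illustrative assumptions, not the authors' released implementation.
```python
# Minimal sketch of the two-stage system described in the abstract.
# All module names, layer sizes, and input conventions are illustrative
# assumptions, not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def backward_warp(frame, flow):
    """Warp a (B,3,H,W) frame with a dense (B,2,H,W) flow field."""
    _, _, h, w = frame.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=frame.device),
        torch.arange(w, device=frame.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys)).float().unsqueeze(0) + flow
    # grid_sample expects sampling locations normalized to [-1, 1].
    gx = 2.0 * grid[:, 0] / (w - 1) - 1.0
    gy = 2.0 * grid[:, 1] / (h - 1) - 1.0
    return F.grid_sample(frame, torch.stack((gx, gy), dim=3),
                         align_corners=True)

class AlignmentStage(nn.Module):
    """Stage 1: predict flows from the missing frame to the previous and
    next keyframes, guided by the upsampled auxiliary frame at time t."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(9, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 4, 3, padding=1),  # two (dx, dy) flow fields
        )

    def forward(self, aux_t, key_prev, key_next):
        flows = self.net(torch.cat([aux_t, key_prev, key_next], dim=1))
        return flows[:, :2], flows[:, 2:]

class AppearanceStage(nn.Module):
    """Stage 2: stand-in for the context- and occlusion-aware network
    that fuses the two warped keyframes with the auxiliary frame."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(9, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, warped_prev, warped_next, aux_t):
        return self.net(torch.cat([warped_prev, warped_next, aux_t], dim=1))

def reconstruct_missing_frame(align, appear, key_prev, key_next, aux_t):
    """aux_t: auxiliary frame at the missing time, bilinearly upsampled
    to the resolution of the main-video keyframes."""
    flow_p, flow_n = align(aux_t, key_prev, key_next)
    warped_p = backward_warp(key_prev, flow_p)
    warped_n = backward_warp(key_next, flow_n)
    return appear(warped_p, warped_n, aux_t)
```
In the actual system the high-frame-rate auxiliary stream also informs flow estimation between its own frames; the sketch collapses that guidance into a single CNN so the two-stage structure stays visible.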
Related papers
- HSTR-Net: Reference Based Video Super-resolution with Dual Cameras [2.4749083496491675]
This paper proposes a dual camera system for the generation of high spatio-temporal resolution (HSTR) video using reference-based super-resolution (RefSR).
One camera captures high spatial resolution low frame rate (HSLF) video while the other captures low spatial resolution high frame rate (LSHF) video simultaneously for the same scene.
A novel deep learning architecture is proposed to fuse HSLF and LSHF video feeds and synthesize HSTR video frames.
arXiv Detail & Related papers (2023-10-18T16:37:01Z)
- Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation [93.18163456287164]
This paper proposes a novel text-guided video-to-video translation framework to adapt image models to videos.
Our framework achieves global style and local texture temporal consistency at a low cost.
arXiv Detail & Related papers (2023-06-13T17:52:23Z)
- Towards Interpretable Video Super-Resolution via Alternating Optimization [115.85296325037565]
We study a practical space-time video super-resolution (STVSR) problem, which aims at generating a high-frame-rate, high-resolution sharp video from a low-frame-rate blurry video.
We propose an interpretable STVSR framework by leveraging both model-based and learning-based methods.
arXiv Detail & Related papers (2022-07-21T21:34:05Z)
- Memory-Augmented Non-Local Attention for Video Super-Resolution [61.55700315062226]
We propose a novel video super-resolution method that aims at generating high-fidelity high-resolution (HR) videos from low-resolution (LR) ones.
Previous methods predominantly leverage temporal neighbor frames to assist the super-resolution of the current frame.
In contrast, we devise a cross-frame non-local attention mechanism that allows video super-resolution without frame alignment.
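As a rough illustration of that idea, the sketch below implements plain cross-frame non-local attention: every position in the current low-resolution frame attends to all positions of a neighboring frame, so no explicit flow-based alignment step is needed. It omits the paper's memory augmentation, and all layer names and sizes are hypothetical.
```python
# Plain cross-frame non-local attention: each query position in the
# current frame attends to every position of a neighbor frame.
# Illustrative only; not the paper's memory-augmented design.
import torch
import torch.nn as nn

class CrossFrameNonLocal(nn.Module):
    def __init__(self, channels, inner=32):
        super().__init__()
        self.q = nn.Conv2d(channels, inner, 1)
        self.k = nn.Conv2d(channels, inner, 1)
        self.v = nn.Conv2d(channels, channels, 1)

    def forward(self, current, neighbor):
        b, c, h, w = current.shape
        q = self.q(current).flatten(2).transpose(1, 2)   # (B, HW, inner)
        k = self.k(neighbor).flatten(2)                  # (B, inner, HW)
        v = self.v(neighbor).flatten(2).transpose(1, 2)  # (B, HW, C)
        attn = torch.softmax(q @ k / (q.shape[-1] ** 0.5), dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return current + out  # residual fusion of neighbor information
```
Note that full non-local attention is quadratic in the number of pixel positions, so in practice it is typically applied to downsampled features or local patches.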
arXiv Detail & Related papers (2021-08-25T05:12:14Z)
- Zooming SlowMo: An Efficient One-Stage Framework for Space-Time Video Super-Resolution [100.11355888909102]
Space-time video super-resolution aims at generating a high-resolution (HR) slow-motion video from a low-resolution (LR) and low frame rate (LFR) video sequence.
We present a one-stage space-time video super-resolution framework, which can directly reconstruct an HR slow-motion video sequence from an input LR and LFR video.
arXiv Detail & Related papers (2021-04-15T17:59:23Z)
- Motion-blurred Video Interpolation and Extrapolation [72.3254384191509]
We present a novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end manner.
To ensure temporal coherence across predicted frames and address potential temporal ambiguity, we propose a simple, yet effective flow-based rule.
arXiv Detail & Related papers (2021-03-04T12:18:25Z)
- ALANET: Adaptive Latent Attention Network for Joint Video Deblurring and Interpolation [38.52446103418748]
We introduce a novel architecture, Adaptive Latent Attention Network (ALANET), which synthesizes sharp high frame-rate videos.
We employ a combination of self-attention and cross-attention modules between consecutive frames in the latent space to generate an optimized representation for each frame.
Our method performs favorably against various state-of-the-art approaches, even though we tackle a much more difficult problem.
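A minimal sketch of that self-/cross-attention combination over per-frame latents is shown below; the layer sizes and the use of nn.MultiheadAttention are assumptions for illustration, not ALANET's actual architecture.
```python
# Combining self-attention (within a frame's latent) and cross-attention
# (against the previous frame's latent), then mixing the two contexts.
# Sizes and module choices are illustrative assumptions.
import torch
import torch.nn as nn

class LatentFrameAttention(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mix = nn.Linear(2 * dim, dim)

    def forward(self, cur, prev):
        # cur, prev: (B, N, dim) latent tokens for consecutive frames.
        s, _ = self.self_attn(cur, cur, cur)        # within-frame context
        c, _ = self.cross_attn(cur, prev, prev)     # neighbor-frame context
        return self.mix(torch.cat([s, c], dim=-1))  # fused representation
```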
arXiv Detail & Related papers (2020-08-31T21:11:53Z)
- Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video Super-Resolution [95.26202278535543]
A simple solution is to split it into two sub-tasks: video frame interpolation (VFI) and video super-resolution (VSR). However, temporal synthesis and spatial super-resolution are intra-related in this task.
We propose a one-stage space-time video super-resolution framework, which directly synthesizes an HR slow-motion video from an LFR, LR video.
arXiv Detail & Related papers (2020-02-26T16:59:48Z)