Neural Re-rendering for Full-frame Video Stabilization
- URL: http://arxiv.org/abs/2102.06205v2
- Date: Fri, 12 Feb 2021 20:32:36 GMT
- Title: Neural Re-rendering for Full-frame Video Stabilization
- Authors: Yu-Lun Liu, Wei-Sheng Lai, Ming-Hsuan Yang, Yung-Yu Chuang, Jia-Bin Huang
- Abstract summary: We present an algorithm for full-frame video stabilization by first estimating dense warp fields.
Full-frame stabilized frames can then be synthesized by fusing warped contents from neighboring frames.
- Score: 144.9918806873405
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing video stabilization methods either require aggressive cropping of
frame boundaries or generate distortion artifacts on the stabilized frames. In
this work, we present an algorithm for full-frame video stabilization by first
estimating dense warp fields. Full-frame stabilized frames can then be
synthesized by fusing warped contents from neighboring frames. The core
technical novelty lies in our learning-based hybrid-space fusion that
alleviates artifacts caused by optical flow inaccuracy and fast-moving objects.
We validate the effectiveness of our method on the NUS and selfie video
datasets. Extensive experiment results demonstrate the merits of our approach
over prior video stabilization methods.
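As a rough illustration of the warp-and-fuse pipeline the abstract describes, the sketch below backward-warps neighboring frames with dense flow fields and blends them with a validity-weighted average. This is a simplified stand-in for the paper's learned hybrid-space fusion; all function names and the nearest-neighbor sampling are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def backward_warp(frame, flow):
    """Warp a frame toward the stabilized view with a dense flow field.
    Nearest-neighbor sampling is used for brevity; real pipelines use
    differentiable bilinear sampling."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    sx = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return frame[sy, sx]

def fuse_frames(warped, masks):
    """Blend warped neighbor frames with a validity-weighted average,
    so regions one warped frame leaves uncovered are filled by others."""
    stack = np.stack(warped).astype(float)              # (n, h, w, 3)
    weights = np.stack(masks).astype(float)[..., None]  # (n, h, w, 1)
    return (stack * weights).sum(0) / np.maximum(weights.sum(0), 1e-8)
```

With all-valid masks and zero flow, fusion reduces to a plain average of the neighboring frames; the learned fusion in the paper instead weights contributions to suppress flow errors and fast-moving objects.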
Related papers
- Fast Full-frame Video Stabilization with Iterative Optimization [21.962533235492625]
We propose an iterative optimization-based learning approach using synthetic datasets for video stabilization.
We develop a two-level (coarse-to-fine) stabilizing algorithm based on the probabilistic flow field.
We take a divide-and-conquer approach and propose a novel multiframe fusion strategy to render full-frame stabilized views.
arXiv Detail & Related papers (2023-07-24T13:24:19Z)
- Minimum Latency Deep Online Video Stabilization [77.68990069996939]
We present a novel camera path optimization framework for the task of online video stabilization.
In this work, we adopt recent off-the-shelf high-quality deep motion models for motion estimation to recover the camera trajectory.
Our approach significantly outperforms state-of-the-art online methods both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-12-05T07:37:32Z)
- Video Frame Interpolation without Temporal Priors [91.04877640089053]
Video frame interpolation aims to synthesize non-existent intermediate frames in a video sequence.
The temporal priors of videos, i.e. frames per second (FPS) and frame exposure time, may vary across different camera sensors.
We devise a novel optical flow refinement strategy for better synthesizing results.
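For context, the fixed temporal prior such methods question can be sketched as the conventional linear-motion midpoint blend below: each endpoint frame is backward-warped by half the inter-frame flow and the results are averaged. This closed form is an illustrative assumption about standard interpolation, not the paper's learned refinement strategy.

```python
import numpy as np

def midpoint_frame(f0, f1, flow01):
    """Synthesize the t=0.5 frame under a linear-motion prior:
    backward-warp each endpoint by half the inter-frame flow, then blend.
    Nearest-neighbor sampling is used for brevity."""
    h, w = f0.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]

    def warp(frame, fx, fy):
        sx = np.clip(np.round(xs + fx).astype(int), 0, w - 1)
        sy = np.clip(np.round(ys + fy).astype(int), 0, h - 1)
        return frame[sy, sx]

    w0 = warp(f0, 0.5 * flow01[..., 0], 0.5 * flow01[..., 1])
    w1 = warp(f1, -0.5 * flow01[..., 0], -0.5 * flow01[..., 1])
    return 0.5 * (w0.astype(float) + w1.astype(float))
```

When FPS or exposure time varies, the half-flow assumption baked into this blend no longer holds, which is the failure mode the paper targets.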
arXiv Detail & Related papers (2021-12-02T12:13:56Z)
- Self-Supervised Real-time Video Stabilization [100.00816752529045]
We propose a novel method of real-time video stabilization.
It transforms a shaky video into a stabilized one in real time, as if the footage had been captured on a gimbal.
arXiv Detail & Related papers (2021-11-10T22:49:56Z)
- Out-of-boundary View Synthesis Towards Full-Frame Video Stabilization [82.56853587380168]
Warping-based video stabilizers smooth the camera trajectory by constraining each pixel's displacement and warping stable frames from unstable ones.
Out-of-boundary view synthesis (OVS) can be integrated into existing warping-based stabilizers as a plug-and-play module to significantly improve the cropping ratio of the stabilized results.
arXiv Detail & Related papers (2021-08-20T08:07:47Z)
- Motion-blurred Video Interpolation and Extrapolation [72.3254384191509]
We present a novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end manner.
To ensure temporal coherence across predicted frames and address potential temporal ambiguity, we propose a simple, yet effective flow-based rule.
arXiv Detail & Related papers (2021-03-04T12:18:25Z)
- Deep Motion Blind Video Stabilization [4.544151613454639]
This work aims to declutter this over-complicated formulation of video stabilization with the help of a novel dataset.
We successfully learn motion-blind full-frame video stabilization by employing strictly conventional generative techniques.
Our method achieves a $\sim 3\times$ speed-up over the currently available fastest video stabilization methods.
arXiv Detail & Related papers (2020-11-19T07:26:06Z)
- Adaptively Meshed Video Stabilization [32.68960056325736]
This paper proposes an adaptively meshed method to stabilize a shaky video based on all of its feature trajectories and an adaptive blocking strategy.
We estimate the mesh-based transformations of each frame by solving a two-stage optimization problem.
arXiv Detail & Related papers (2020-06-14T06:51:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.