Deep Motion Blind Video Stabilization
- URL: http://arxiv.org/abs/2011.09697v2
- Date: Fri, 22 Oct 2021 08:28:08 GMT
- Title: Deep Motion Blind Video Stabilization
- Authors: Muhammad Kashif Ali, Sangjoon Yu, Tae Hyun Kim
- Abstract summary: This work aims to declutter this over-complicated formulation of video stabilization with the help of a novel dataset.
We successfully learn motion blind full-frame video stabilization by employing strictly conventional generative techniques.
Our method achieves a $\sim3\times$ speed-up over the fastest currently available video stabilization methods.
- Score: 4.544151613454639
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the advances in the field of generative models in computer vision,
video stabilization still lacks a pure regressive deep-learning-based
formulation. Deep video stabilization is generally formulated with the help of
explicit motion estimation modules due to the lack of a dataset containing
pairs of videos with similar perspective but different motion. Therefore, the
deep learning approaches for this task have difficulties in the pixel-level
synthesis of latent stabilized frames, and resort to motion estimation modules
for indirect transformations of the unstable frames to stabilized frames,
leading to the loss of visual content near the frame boundaries. In this work,
we aim to declutter this over-complicated formulation of video stabilization
with the help of a novel dataset that contains pairs of training videos with
similar perspective but different motion, and verify its effectiveness by
successfully learning motion blind full-frame video stabilization using
strictly conventional generative techniques; we further improve stability
through a curriculum-learning-inspired adversarial training strategy.
Through extensive experimentation, we show the quantitative and qualitative
advantages of the proposed approach over state-of-the-art video stabilization
approaches. Moreover, our method achieves a $\sim3\times$ speed-up over the
fastest currently available video stabilization methods.
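The abstract points at two concrete ingredients: a generator that regresses full stabilized frames directly from a window of unstable frames (no motion estimation module), and an adversarial term that is phased in on a curriculum schedule. The following PyTorch sketch illustrates that training idea; the architecture, window size, losses, and ramp schedule are all illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' released code) of motion-blind
# full-frame stabilization training: a generator regresses the stabilized
# frame from a window of unstable frames, with a curriculum-weighted
# adversarial term. All names and hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

WINDOW = 5  # number of consecutive unstable frames per sample (assumed)

class StabilizerG(nn.Module):
    """Toy stand-in for a full-frame frame-synthesis generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 * WINDOW, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, frames):  # frames: (B, WINDOW*3, H, W)
        return torch.sigmoid(self.net(frames))

class PatchD(nn.Module):
    """Toy patch discriminator judging synthesized stabilized frames."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def adv_weight(epoch, ramp_epochs=20, w_max=0.01):
    # Curriculum: pure pixel regression first, adversarial pressure phased in.
    return w_max * min(1.0, epoch / ramp_epochs)

G, D = StabilizerG(), PatchD()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

# Dummy batch standing in for the paired dataset: a window of unstable frames
# plus a ground-truth frame with similar perspective but different motion.
unstable = torch.rand(2, 3 * WINDOW, 64, 64)
stable_gt = torch.rand(2, 3, 64, 64)

for epoch in range(2):  # tiny demo loop
    # Discriminator step: real stabilized frames vs. synthesized ones.
    fake = G(unstable)
    real_logits, fake_logits = D(stable_gt), D(fake.detach())
    d_loss = 0.5 * (
        F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
        + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    )
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: pixel regression + curriculum-weighted adversarial term.
    fake = G(unstable)
    fake_logits = D(fake)
    g_loss = F.l1_loss(fake, stable_gt) + adv_weight(epoch) * \
        F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The curriculum weight keeps early training dominated by the pixel-level regression loss, so the generator first learns plausible full-frame synthesis before the discriminator starts pushing for sharper, more stable outputs.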
Related papers
- FRESCO: Spatial-Temporal Correspondence for Zero-Shot Video Translation [85.29772293776395]
We introduce FRESCO, which applies intra-frame correspondence alongside inter-frame correspondence to establish a more robust spatial-temporal constraint.
This enhancement ensures a more consistent transformation of semantically similar content across frames.
Our approach involves an explicit update of features to achieve high spatial-temporal consistency with the input video.
arXiv Detail & Related papers (2024-03-19T17:59:18Z)
- Harnessing Meta-Learning for Improving Full-Frame Video Stabilization [8.208892438376388]
We introduce a novel approach to enhance the performance of pixel-level synthesis solutions for video stabilization by adapting these models to individual input video sequences.
The proposed adaptation exploits low-level visual cues during test-time to improve both the stability and quality of resulting videos.
arXiv Detail & Related papers (2024-03-06T12:31:02Z)
- Video Dynamics Prior: An Internal Learning Approach for Robust Video Enhancements [83.5820690348833]
We present a framework for low-level vision tasks that does not require any external training data corpus.
Our approach learns neural modules by optimizing over a corrupted sequence, leveraging the spatio-temporal coherence and internal statistics of videos.
arXiv Detail & Related papers (2023-12-13T01:57:11Z)
- Fast Full-frame Video Stabilization with Iterative Optimization [21.962533235492625]
We propose an iterative optimization-based learning approach using synthetic datasets for video stabilization.
We develop a two-level (coarse-to-fine) stabilizing algorithm based on the probabilistic flow field.
We take a divide-and-conquer approach and propose a novel multiframe fusion strategy to render full-frame stabilized views.
arXiv Detail & Related papers (2023-07-24T13:24:19Z)
- Minimum Latency Deep Online Video Stabilization [77.68990069996939]
We present a novel camera path optimization framework for the task of online video stabilization.
In this work, we adopt recent off-the-shelf high-quality deep motion models for motion estimation to recover the camera trajectory (a minimal sketch of this classic estimate-then-smooth pipeline appears after this list).
Our approach significantly outperforms state-of-the-art online methods both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-12-05T07:37:32Z)
- Self-Supervised Real-time Video Stabilization [100.00816752529045]
We propose a novel method of real-time video stabilization.
It transforms a shaky video into a stabilized video, as if it had been stabilized with a gimbal, in real time.
arXiv Detail & Related papers (2021-11-10T22:49:56Z)
- Motion-blurred Video Interpolation and Extrapolation [72.3254384191509]
We present a novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end manner.
To ensure temporal coherence across predicted frames and address potential temporal ambiguity, we propose a simple, yet effective flow-based rule.
arXiv Detail & Related papers (2021-03-04T12:18:25Z)
- Neural Re-rendering for Full-frame Video Stabilization [144.9918806873405]
We present an algorithm for full-frame video stabilization by first estimating dense warp fields.
Full-frame stabilized frames can then be synthesized by fusing warped contents from neighboring frames.
arXiv Detail & Related papers (2021-02-11T18:59:45Z)
- Adaptively Meshed Video Stabilization [32.68960056325736]
This paper proposes an adaptively meshed method to stabilize a shaky video based on all of its feature trajectories and an adaptive blocking strategy.
We estimate the mesh-based transformations of each frame by solving a two-stage optimization problem.
arXiv Detail & Related papers (2020-06-14T06:51:23Z)
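For contrast with the motion-blind formulation above, several of the related methods (e.g., Minimum Latency Deep Online Video Stabilization, Adaptively Meshed Video Stabilization) follow the classic explicit pipeline: estimate a camera trajectory, smooth it, and warp each frame toward the smoothed path. Below is a minimal NumPy sketch of the path-smoothing step, assuming (for simplicity) a trajectory of cumulative per-frame 2D translations; it is a generic illustration, not any of the cited papers' code.

```python
# Minimal sketch of the camera-path smoothing step used by explicit
# (motion-estimation-based) stabilizers. Assumes the trajectory has already
# been recovered as per-frame 2D translations; purely illustrative.
import numpy as np

def smooth_path(path, radius=15):
    """Box-filter smoothing of a camera path.

    path: (N, 2) array of cumulative per-frame translations (x, y).
    Returns the smoothed path; (smoothed - path) is the per-frame
    correction to warp each frame toward the stable trajectory.
    """
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(path, ((radius, radius), (0, 0)), mode="edge")
    return np.stack(
        [np.convolve(padded[:, i], kernel, mode="valid") for i in range(2)],
        axis=1,
    )

# Toy trajectory: smooth pan plus high-frequency jitter (hand shake).
n = 120
t = np.arange(n)
jitter = np.random.default_rng(0).normal(scale=3.0, size=(n, 2))
path = np.stack([0.5 * t, 0.1 * t], axis=1) + jitter

smoothed = smooth_path(path)
correction = smoothed - path  # shift to apply to each frame before cropping
print(correction[:3])
```

The per-frame correction computed here must be realized by warping and cropping each frame, which is exactly the boundary content loss that the motion-blind full-frame formulation above is designed to avoid.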