Fast Full-frame Video Stabilization with Iterative Optimization
- URL: http://arxiv.org/abs/2307.12774v2
- Date: Mon, 31 Jul 2023 08:42:20 GMT
- Title: Fast Full-frame Video Stabilization with Iterative Optimization
- Authors: Weiyue Zhao, Xin Li, Zhan Peng, Xianrui Luo, Xinyi Ye, Hao Lu, Zhiguo Cao
- Abstract summary: We propose an iterative optimization-based learning approach using synthetic datasets for video stabilization.
We develop a two-level (coarse-to-fine) stabilizing algorithm based on the probabilistic flow field.
We take a divide-and-conquer approach and propose a novel multiframe fusion strategy to render full-frame stabilized views.
- Score: 21.962533235492625
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Video stabilization refers to the problem of transforming a shaky video into
a visually pleasing one. The question of how to strike a good trade-off between
visual quality and computational speed has remained one of the open challenges
in video stabilization. Inspired by the analogy between wobbly frames and
jigsaw puzzles, we propose an iterative optimization-based learning approach
using synthetic datasets for video stabilization, which consists of two
interacting submodules: motion trajectory smoothing and full-frame outpainting.
First, we develop a two-level (coarse-to-fine) stabilizing algorithm based on
the probabilistic flow field. The confidence map associated with the estimated
optical flow is exploited to guide the search for shared regions through
backpropagation. Second, we take a divide-and-conquer approach and propose a
novel multiframe fusion strategy to render full-frame stabilized views. An
important new insight brought about by our iterative optimization approach is
that the target video can be interpreted as the fixed point of nonlinear
mapping for video stabilization. We formulate video stabilization as a problem
of minimizing the amount of jerkiness in motion trajectories, which guarantees
convergence with the help of fixed-point theory. Extensive experimental results
are reported to demonstrate the superiority of the proposed approach in terms
of computational speed and visual quality. The code will be available on
GitHub.
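To make the fixed-point reading of the abstract concrete, below is a minimal sketch on a 1-D toy camera trajectory. It is an illustration only, not the paper's method: `smooth_step` is a hypothetical stand-in for the learned stabilizing mapping, `stabilize_fixed_point` simply iterates it until the trajectory stops changing, and jerkiness is measured as the sum of squared second-order differences.

```python
import numpy as np

def jerkiness(traj):
    """Sum of squared second-order differences: a simple proxy for motion jerkiness."""
    return float(np.sum(np.diff(traj, n=2, axis=0) ** 2))

def smooth_step(traj, alpha=0.5):
    """One smoothing pass (hypothetical stand-in for the learned stabilizer):
    each interior point moves toward the average of its neighbours."""
    out = traj.copy()
    out[1:-1] = (1 - alpha) * traj[1:-1] + alpha * 0.5 * (traj[:-2] + traj[2:])
    return out

def stabilize_fixed_point(traj, tol=1e-6, max_iter=200):
    """Iterate the mapping until the trajectory is (approximately) a fixed point,
    i.e. applying the stabilizer again no longer changes it."""
    x = np.asarray(traj, dtype=float)
    for _ in range(max_iter):
        x_next = smooth_step(x)
        if np.max(np.abs(x_next - x)) < tol:
            break
        x = x_next
    return x

# Toy usage: a shaky 1-D trajectory becomes smoother after the iteration.
shaky = np.cumsum(np.random.randn(100))
stable = stabilize_fixed_point(shaky)
print(jerkiness(shaky), ">", jerkiness(stable))
```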
Related papers
- Minimum Latency Deep Online Video Stabilization [77.68990069996939]
We present a novel camera path optimization framework for the task of online video stabilization.
In this work, we adopt recent off-the-shelf high-quality deep motion models for motion estimation to recover the camera trajectory.
Our approach significantly outperforms state-of-the-art online methods both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-12-05T07:37:32Z)
- Motion-blurred Video Interpolation and Extrapolation [72.3254384191509]
We present a novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end manner.
To ensure temporal coherence across predicted frames and address potential temporal ambiguity, we propose a simple, yet effective flow-based rule.
arXiv Detail & Related papers (2021-03-04T12:18:25Z)
- Neural Re-rendering for Full-frame Video Stabilization [144.9918806873405]
We present an algorithm for full-frame video stabilization by first estimating dense warp fields.
Full-frame stabilized frames can then be synthesized by fusing warped contents from neighboring frames.
arXiv Detail & Related papers (2021-02-11T18:59:45Z)
- FLAVR: Flow-Agnostic Video Representations for Fast Frame Interpolation [97.99012124785177]
FLAVR is a flexible and efficient architecture that uses 3D space-time convolutions to enable end-to-end learning and inference for video frame interpolation.
We demonstrate that FLAVR can serve as a useful self-supervised pretext task for action recognition, optical flow estimation, and motion magnification.
arXiv Detail & Related papers (2020-12-15T18:59:30Z)
- Deep Motion Blind Video Stabilization [4.544151613454639]
This work aims to declutter this over-complicated formulation of video stabilization with the help of a novel dataset.
We successfully learn motion-blind full-frame video stabilization by employing strictly conventional generative techniques.
Our method achieves a $\sim 3\times$ speed-up over the fastest currently available video stabilization methods.
arXiv Detail & Related papers (2020-11-19T07:26:06Z)
- Cinematic-L1 Video Stabilization with a Log-Homography Model [0.0]
We present a method for stabilizing handheld video that simulates the camera motions cinematographers achieve with equipment like tripods, dollies, and Steadicams.
Our method is computationally efficient, running at 300 fps on an iPhone XS, and yields high-quality results.
arXiv Detail & Related papers (2020-11-16T18:10:57Z)
- Towards Fast, Accurate and Stable 3D Dense Face Alignment [73.01620081047336]
We propose a novel regression framework named 3DDFA-V2 that strikes a balance among speed, accuracy, and stability.
We present a virtual synthesis method that transforms a single still image into a short video incorporating in-plane and out-of-plane face movement.
arXiv Detail & Related papers (2020-09-21T15:37:37Z)
- Enhanced Quadratic Video Interpolation [56.54662568085176]
We propose an enhanced quadratic video (EQVI) model to handle more complicated scenes and motion patterns.
To further boost the performance, we devise a novel multi-scale fusion network (MS-Fusion) which can be regarded as a learnable augmentation process.
The proposed EQVI model won the first place in the AIM 2020 Video Temporal Super-Resolution Challenge.
arXiv Detail & Related papers (2020-09-10T02:31:50Z)
- Adaptively Meshed Video Stabilization [32.68960056325736]
This paper proposes an adaptively meshed method to stabilize a shaky video based on all of its feature trajectories and an adaptive blocking strategy.
We estimate the mesh-based transformations of each frame by solving a two-stage optimization problem.
arXiv Detail & Related papers (2020-06-14T06:51:23Z)