Minimum Latency Deep Online Video Stabilization
- URL: http://arxiv.org/abs/2212.02073v3
- Date: Tue, 15 Aug 2023 07:55:10 GMT
- Title: Minimum Latency Deep Online Video Stabilization
- Authors: Zhuofan Zhang, Zhen Liu, Ping Tan, Bing Zeng, Shuaicheng Liu
- Abstract summary: We present a novel camera path optimization framework for the task of online video stabilization.
In this work, we adopt recent off-the-shelf high-quality deep motion models for motion estimation to recover the camera trajectory.
Our approach significantly outperforms state-of-the-art online methods both qualitatively and quantitatively.
- Score: 77.68990069996939
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel camera path optimization framework for the task of online
video stabilization. Typically, a stabilization pipeline consists of three
steps: motion estimation, path smoothing, and novel view rendering. Most
previous methods concentrate on motion estimation, proposing various global or
local motion models. In contrast, path optimization receives relatively less
attention, especially in the important online setting, where no future frames
are available. In this work, we adopt recent off-the-shelf high-quality deep
motion models for motion estimation to recover the camera trajectory and focus
on the latter two steps. Our network takes a short 2D camera path in a sliding
window as input and outputs the stabilizing warp field of the last frame in the
window, which warps the coming frame to its stabilized position. A hybrid loss
is designed to constrain spatial and temporal consistency. In addition,
we build a motion dataset that contains stable and unstable motion pairs for
the training. Extensive experiments demonstrate that our approach significantly
outperforms state-of-the-art online methods both qualitatively and
quantitatively and achieves comparable performance to offline methods. Our code
and dataset are available at https://github.com/liuzhen03/NNDVS
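The sliding-window design described in the abstract is simple enough to sketch. Below is a minimal, hypothetical Python/PyTorch sketch of the online inference loop: a buffer holds the most recent window of 2D camera-path motions, a network predicts the stabilizing warp field for the latest frame, and that field warps the incoming frame. The names `motion_net` and `path_net`, the window size, and the grid-based warping are illustrative assumptions, not the authors' actual API; see the linked repository for the real implementation.

```python
# Hypothetical sketch of the sliding-window online stabilization loop
# described in the abstract. Names, shapes, and window size are
# illustrative assumptions; the authors' code is at
# https://github.com/liuzhen03/NNDVS.
from collections import deque

import torch
import torch.nn.functional as F


def stabilize_stream(frames, motion_net, path_net, window=16):
    """Warp each incoming frame using a warp field predicted from the
    recent 2D camera path only (no future frames -> online setting)."""
    path_buffer = deque(maxlen=window)  # sliding window of inter-frame motions
    prev = None
    for frame in frames:  # frame: (1, 3, H, W) tensor
        if prev is not None:
            # Off-the-shelf deep motion model estimates the frame-to-frame
            # motion, summarized here as a 2D camera-path increment.
            path_buffer.append(motion_net(prev, frame))
        prev = frame
        if len(path_buffer) < window:
            yield frame  # warm-up: not enough path history yet
            continue
        # The path network maps the short 2D camera path to the stabilizing
        # warp field of the *last* frame in the window.
        path = torch.stack(list(path_buffer), dim=1)  # (1, window, ...)
        warp_field = path_net(path)                   # (1, H, W, 2) sampling grid
        yield F.grid_sample(frame, warp_field, align_corners=True)
```

Because the window contains only past frames, each output waits on nothing but one motion estimate and one network forward pass, which is the minimum-latency property the title refers to.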
Related papers
- Fast Full-frame Video Stabilization with Iterative Optimization [21.962533235492625]
We propose an iterative optimization-based learning approach using synthetic datasets for video stabilization.
We develop a two-level (coarse-to-fine) stabilizing algorithm based on the probabilistic flow field.
We take a divide-and-conquer approach and propose a novel multiframe fusion strategy to render full-frame stabilized views.
arXiv Detail & Related papers (2023-07-24T13:24:19Z)
- Self-Supervised Real-time Video Stabilization [100.00816752529045]
We propose a novel method of real-time video stabilization.
It transforms a shaky video into a stabilized video in real time, as if it had been stabilized with a gimbal.
arXiv Detail & Related papers (2021-11-10T22:49:56Z)
- TimeLens: Event-based Video Frame Interpolation [54.28139783383213]
We introduce Time Lens, a novel method that leverages the advantages of both synthesis-based and flow-based approaches.
We show an up to 5.21 dB improvement in terms of PSNR over state-of-the-art frame-based and event-based methods.
arXiv Detail & Related papers (2021-06-14T10:33:47Z)
- Motion-blurred Video Interpolation and Extrapolation [72.3254384191509]
We present a novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end manner.
To ensure temporal coherence across predicted frames and address potential temporal ambiguity, we propose a simple, yet effective flow-based rule.
arXiv Detail & Related papers (2021-03-04T12:18:25Z)
- Neural Re-rendering for Full-frame Video Stabilization [144.9918806873405]
We present an algorithm for full-frame video stabilization by first estimating dense warp fields.
Full-frame stabilized frames can then be synthesized by fusing warped contents from neighboring frames.
arXiv Detail & Related papers (2021-02-11T18:59:45Z)
- Deep Motion Blind Video Stabilization [4.544151613454639]
This work aims to simplify the over-complicated formulation of video stabilization with the help of a novel dataset.
We successfully learn motion blind full-frame video stabilization through employing strictly conventional generative techniques.
Our method achieves a $\sim3\times$ speed-up over the fastest currently available video stabilization methods.
arXiv Detail & Related papers (2020-11-19T07:26:06Z)
- Cinematic-L1 Video Stabilization with a Log-Homography Model [0.0]
We present a method for stabilizing handheld video that simulates the camera motions cinematographers achieve with equipment like tripods, dollies, and Steadicams.
Our method is computationally efficient, running at 300 fps on an iPhone XS, and yields high-quality results.
arXiv Detail & Related papers (2020-11-16T18:10:57Z)
- Towards Fast, Accurate and Stable 3D Dense Face Alignment [73.01620081047336]
We propose a novel regression framework named 3DDFA-V2 that strikes a balance among speed, accuracy, and stability.
We present a virtual synthesis method that transforms a still image into a short video incorporating in-plane and out-of-plane face movement.
arXiv Detail & Related papers (2020-09-21T15:37:37Z)
- Real-Time Selfie Video Stabilization [47.228417712587934]
We propose a novel real-time selfie video stabilization method.
Our method is completely automatic and runs at 26 fps.
Compared to previous offline selfie video stabilization methods, our approach produces comparable quality while being orders of magnitude faster.
arXiv Detail & Related papers (2020-09-04T04:41:05Z)