Real-Time Selfie Video Stabilization
- URL: http://arxiv.org/abs/2009.02007v2
- Date: Wed, 16 Jun 2021 22:04:42 GMT
- Title: Real-Time Selfie Video Stabilization
- Authors: Jiyang Yu, Ravi Ramamoorthi, Keli Cheng, Michel Sarkis, Ning Bi
- Abstract summary: We propose a novel real-time selfie video stabilization method.
Our method is completely automatic and runs at 26 fps.
Compared to previous offline selfie video methods, our approach produces comparable quality with a speed improvement of orders of magnitude.
- Score: 47.228417712587934
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel real-time selfie video stabilization method. Our method is
completely automatic and runs at 26 fps. We use a 1D linear convolutional
network to directly infer the rigid moving least squares warping which
implicitly balances between the global rigidity and local flexibility. Our
network structure is specifically designed to stabilize the background and
foreground at the same time, while providing optional control of stabilization
focus (relative importance of foreground vs. background) to the users. To train
our network, we collect a selfie video dataset with 1005 videos, which is
significantly larger than previous selfie video datasets. We also propose a
grid approximation method to the rigid moving least squares warping that
enables the real-time frame warping. Our method is fully automatic and produces
visually and quantitatively better results than previous real-time general
video stabilization methods. Compared to previous offline selfie video methods,
our approach produces comparable quality with a speed improvement of orders of
magnitude.
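The abstract references two standard ingredients: rigid moving least squares (MLS) warping and a grid approximation that keeps per-frame warping cheap. Below is a minimal NumPy sketch of the classical closed-form rigid MLS (Schaefer et al. 2006) evaluated only on a coarse grid of vertices — not the paper's learned network; the function name, control points, and `grid_step` are illustrative assumptions.

```python
import numpy as np

def rigid_mls_grid(p, q, h, w, grid_step=16, alpha=1.0, eps=1e-8):
    """Rigid moving-least-squares warp field (Schaefer et al. 2006),
    evaluated on a coarse grid (the 'grid approximation').
    p, q: (N, 2) arrays of control points (x, y) before / after.
    Returns an array of shape (Gy, Gx, 2) of warped grid vertices."""
    # Coarse grid vertices: MLS is evaluated only here, not per pixel.
    gy = np.arange(0, h + grid_step, grid_step, dtype=np.float64)
    gx = np.arange(0, w + grid_step, grid_step, dtype=np.float64)
    vx, vy = np.meshgrid(gx, gy)
    v = np.stack([vx, vy], axis=-1)                    # (Gy, Gx, 2)

    # Inverse-distance MLS weights per vertex and control point.
    d2 = ((v[..., None, :] - p) ** 2).sum(-1)          # (Gy, Gx, N)
    wgt = 1.0 / (d2 ** alpha + eps)
    wsum = wgt.sum(-1, keepdims=True)

    # Weighted centroids and centered control points.
    p_star = (wgt[..., None] * p).sum(-2) / wsum       # (Gy, Gx, 2)
    q_star = (wgt[..., None] * q).sum(-2) / wsum
    p_hat = p - p_star[..., None, :]                   # (Gy, Gx, N, 2)
    q_hat = q - q_star[..., None, :]

    # Best weighted 2D rotation: theta = atan2(b, a) maximizes
    # sum_i w_i * (q_hat_i . R p_hat_i).
    a = (wgt * (q_hat * p_hat).sum(-1)).sum(-1)
    b = (wgt * (q_hat[..., 1] * p_hat[..., 0]
                - q_hat[..., 0] * p_hat[..., 1])).sum(-1)
    theta = np.arctan2(b, a)
    c, s = np.cos(theta), np.sin(theta)

    # Rigid transform per vertex: f(v) = R (v - p*) + q*.
    rel = v - p_star
    fx = c * rel[..., 0] - s * rel[..., 1] + q_star[..., 0]
    fy = s * rel[..., 0] + c * rel[..., 1] + q_star[..., 1]
    return np.stack([fx, fy], axis=-1)
```

In practice, a full-resolution warp is then obtained by bilinearly interpolating this coarse field per pixel; evaluating MLS only at grid vertices is what makes the approximation real-time friendly.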
Related papers
- Robust Frame-to-Frame Camera Rotation Estimation in Crowded Scenes [8.061773364318313]
We present an approach to estimating camera rotation in crowded, real-world scenes from handheld monocular video.
We provide a new dataset and benchmark, with high-accuracy, rigorously verified ground truth, on 17 video sequences.
This represents a strong new performance point for crowded scenes, an important setting for computer vision.
arXiv Detail & Related papers (2023-09-15T17:44:07Z) - Minimum Latency Deep Online Video Stabilization [77.68990069996939]
We present a novel camera path optimization framework for the task of online video stabilization.
In this work, we adopt recent off-the-shelf high-quality deep motion models for motion estimation to recover the camera trajectory.
Our approach significantly outperforms state-of-the-art online methods both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-12-05T07:37:32Z) - GPU-accelerated SIFT-aided source identification of stabilized videos [63.084540168532065]
We exploit the parallelization capabilities of Graphics Processing Units (GPUs) in the framework of stabilised frames inversion.
We propose to exploit SIFT features to estimate the camera momentum and to identify less stabilized temporal segments.
Experiments confirm the effectiveness of the proposed approach in reducing the required computational time and improving the source identification accuracy.
arXiv Detail & Related papers (2022-07-29T07:01:31Z) - Video Frame Interpolation without Temporal Priors [91.04877640089053]
Video frame interpolation aims to synthesize non-existent intermediate frames in a video sequence.
The temporal priors of videos, i.e., frames per second (FPS) and frame exposure time, may vary across different camera sensors.
We devise a novel optical flow refinement strategy for better synthesizing results.
arXiv Detail & Related papers (2021-12-02T12:13:56Z) - Self-Supervised Real-time Video Stabilization [100.00816752529045]
We propose a novel method of real-time video stabilization.
It transforms a shaky video into a stabilized video in real time, as if the camera had been mounted on a gimbal.
arXiv Detail & Related papers (2021-11-10T22:49:56Z) - Neural Re-rendering for Full-frame Video Stabilization [144.9918806873405]
We present an algorithm for full-frame video stabilization by first estimating dense warp fields.
Full-frame stabilized frames can then be synthesized by fusing warped contents from neighboring frames.
arXiv Detail & Related papers (2021-02-11T18:59:45Z) - Cinematic-L1 Video Stabilization with a Log-Homography Model [0.0]
We present a method for stabilizing handheld video that simulates the camera motions cinematographers achieve with equipment like tripods, dollies, and Steadicams.
Our method is computationally efficient, running at 300 fps on an iPhone XS, and yields high-quality results.
arXiv Detail & Related papers (2020-11-16T18:10:57Z)
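Several of the entries above revolve around smoothing an estimated camera trajectory. As a toy illustration only — not the optimization frameworks of the papers listed — here is a Gaussian low-pass filter on a 1D camera path; `smooth_path` and its parameters are hypothetical.

```python
import numpy as np

def smooth_path(path, sigma=5.0, radius=15):
    """Gaussian smoothing of a 1D camera trajectory (e.g. per-frame
    x translation). A stabilizer would then warp each frame by the
    difference between the smoothed and the raw path."""
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (t / sigma) ** 2)
    kernel /= kernel.sum()                      # normalize to sum 1
    padded = np.pad(path, radius, mode="edge")  # avoid boundary shrink
    return np.convolve(padded, kernel, mode="valid")
```

Methods such as L1 path optimization or learned camera-path models replace this simple low-pass with objectives that mimic tripod, dolly, or Steadicam motion.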
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.