Self-Supervised Real-time Video Stabilization
- URL: http://arxiv.org/abs/2111.05980v1
- Date: Wed, 10 Nov 2021 22:49:56 GMT
- Title: Self-Supervised Real-time Video Stabilization
- Authors: Jinsoo Choi, Jaesik Park, In So Kweon
- Abstract summary: We propose a novel method of real-time video stabilization.
It transforms a shaky video to a stabilized video as if it were stabilized via gimbals in real-time.
- Score: 100.00816752529045
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Videos are a popular media form, where online video streaming has recently
gathered much popularity. In this work, we propose a novel method of real-time
video stabilization - transforming a shaky video to a stabilized video as if it
were stabilized via gimbals in real-time. Our framework is trainable in a
self-supervised manner, which does not require data captured with special
hardware setups (i.e., two cameras on a stereo rig or additional motion
sensors). Our framework consists of a transformation estimator between given
frames for global stability adjustments, followed by scene parallax reduction
module via spatially smoothed optical flow for further stability. Then, a
margin inpainting module fills in the missing margin regions created during
stabilization to reduce the amount of post-cropping. These sequential steps
reduce distortion and margin cropping to a minimum while enhancing stability.
Hence, our approach outperforms state-of-the-art real-time video stabilization
methods as well as offline methods that require camera trajectory optimization.
Our method takes approximately 24.3 ms per frame, yielding 41 fps regardless
of resolution (e.g., 480p or 1080p).
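The parallax reduction step above hinges on spatially smoothing a dense optical-flow field. As a minimal illustration (not the authors' implementation; the function names and the separable-Gaussian choice are assumptions), such smoothing can be sketched in NumPy:

```python
import numpy as np

def gaussian_kernel(radius, sigma):
    # 1-D Gaussian kernel, normalized to sum to 1
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def smooth_flow(flow, radius=2, sigma=1.0):
    """Spatially smooth a dense optical-flow field of shape (H, W, 2)
    with a separable Gaussian, damping local parallax motion while
    preserving the dominant global motion."""
    k = gaussian_kernel(radius, sigma)
    out = flow.astype(np.float64).copy()
    pad = radius
    for c in range(2):  # smooth each flow channel (u, v) independently
        ch = np.pad(out[..., c], pad, mode="edge")
        # horizontal pass, then vertical pass
        ch = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, ch)
        ch = np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, ch)
        out[..., c] = ch[pad:-pad, pad:-pad]
    return out
```

A constant (purely global) flow field passes through unchanged, while isolated parallax spikes are spread out and attenuated.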
Related papers
- Fast Full-frame Video Stabilization with Iterative Optimization [21.962533235492625]
We propose an iterative optimization-based learning approach using synthetic datasets for video stabilization.
We develop a two-level (coarse-to-fine) stabilizing algorithm based on the probabilistic flow field.
We take a divide-and-conquer approach and propose a novel multiframe fusion strategy to render full-frame stabilized views.
arXiv Detail & Related papers (2023-07-24T13:24:19Z)
- Minimum Latency Deep Online Video Stabilization [77.68990069996939]
We present a novel camera path optimization framework for the task of online video stabilization.
In this work, we adopt recent off-the-shelf high-quality deep motion models for motion estimation to recover the camera trajectory.
Our approach significantly outperforms state-of-the-art online methods both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-12-05T07:37:32Z)
- GPU-accelerated SIFT-aided source identification of stabilized videos [63.084540168532065]
We exploit the parallelization capabilities of Graphics Processing Units (GPUs) in the framework of stabilised frames inversion.
We propose to exploit SIFT features to estimate the camera momentum and to identify less stabilized temporal segments.
Experiments confirm the effectiveness of the proposed approach in reducing the required computational time and improving the source identification accuracy.
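Once SIFT keypoints are matched across consecutive frames (the feature extraction itself is omitted here), the per-frame camera momentum can be estimated from the match displacements. A minimal sketch, assuming a median-displacement estimator and hypothetical names, not the paper's actual method:

```python
import numpy as np

def camera_momentum(pts_prev, pts_next):
    """Estimate global camera motion between two frames as the median
    displacement of matched keypoints; the median resists mismatched
    features and independently moving foreground objects."""
    disp = np.asarray(pts_next, dtype=float) - np.asarray(pts_prev, dtype=float)
    return np.median(disp, axis=0)
```

A sequence of such per-frame vectors could then be thresholded to flag temporal segments with large residual motion, i.e., the less stabilized segments.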
arXiv Detail & Related papers (2022-07-29T07:01:31Z)
- Out-of-boundary View Synthesis Towards Full-Frame Video Stabilization [82.56853587380168]
Warping-based video stabilizers smooth the camera trajectory by constraining each pixel's displacement and warping frames from unstable ones.
OVS (out-of-boundary view synthesis) can be integrated into existing warping-based stabilizers as a plug-and-play module to significantly improve the cropping ratio of the stabilized results.
arXiv Detail & Related papers (2021-08-20T08:07:47Z)
- Neural Re-rendering for Full-frame Video Stabilization [144.9918806873405]
We present an algorithm for full-frame video stabilization by first estimating dense warp fields.
Full-frame stabilized frames can then be synthesized by fusing warped contents from neighboring frames.
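The fusion step described above can be illustrated with a mask-weighted average of warped neighbor frames. This is only a schematic (uniform averaging and all names are assumptions, not the paper's learned re-rendering):

```python
import numpy as np

def fuse_warped(frames, masks, eps=1e-8):
    """Fuse warped neighbor frames into one full frame.

    frames: list of (H, W) arrays warped into the target view.
    masks:  list of (H, W) validity masks (1 where the warp produced a
            valid pixel, 0 in out-of-view regions).
    Pixels covered by several neighbors are averaged; uncovered pixels
    stay 0 and would still need inpainting."""
    num = np.zeros_like(frames[0], dtype=float)
    den = np.zeros_like(frames[0], dtype=float)
    for f, m in zip(frames, masks):
        num += f * m
        den += m
    return np.where(den > 0, num / np.maximum(den, eps), 0.0)
```

Regions valid in only one neighbor take that neighbor's content directly, which is how the missing margins of the target frame get filled.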
arXiv Detail & Related papers (2021-02-11T18:59:45Z)
- Deep Motion Blind Video Stabilization [4.544151613454639]
This work aims to declutter this over-complicated formulation of video stabilization with the help of a novel dataset.
We successfully learn motion blind full-frame video stabilization through employing strictly conventional generative techniques.
Our method achieves a $\sim 3\times$ speed-up over the currently available fastest video stabilization methods.
arXiv Detail & Related papers (2020-11-19T07:26:06Z)
- Real-Time Selfie Video Stabilization [47.228417712587934]
We propose a novel real-time selfie video stabilization method.
Our method is completely automatic and runs at 26 fps.
Compared to previous offline selfie video methods, our approach produces comparable quality with a speed improvement of orders of magnitude.
arXiv Detail & Related papers (2020-09-04T04:41:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.