Out-of-boundary View Synthesis Towards Full-Frame Video Stabilization
- URL: http://arxiv.org/abs/2108.09041v1
- Date: Fri, 20 Aug 2021 08:07:47 GMT
- Title: Out-of-boundary View Synthesis Towards Full-Frame Video Stabilization
- Authors: Yufei Xu, Jing Zhang, Dacheng Tao
- Abstract summary: Warping-based video stabilizers smooth the camera trajectory by constraining each pixel's displacement and warping stabilized frames from the unstable ones.
OVS can be integrated into existing warping-based stabilizers as a plug-and-play module to significantly improve the cropping ratio of the stabilized results.
- Score: 82.56853587380168
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Warping-based video stabilizers smooth camera trajectory by constraining each
pixel's displacement and warp stabilized frames from unstable ones accordingly.
However, since the view outside the boundary is not available during warping,
the resulting holes around the boundary of the stabilized frame must be
discarded (i.e., cropping) to maintain visual consistency, and thus leads
to a tradeoff between stability and cropping ratio. In this paper, we make a
first attempt to address this issue by proposing a new Out-of-boundary View
Synthesis (OVS) method. Exploiting the spatial coherence between adjacent
frames and within each frame, OVS extrapolates the out-of-boundary view by
aligning adjacent frames to each reference one. Technically, it first
calculates the optical flow and propagates it to the outer boundary region
according to the affinity, and then warps pixels accordingly. OVS can be
integrated into existing warping-based stabilizers as a plug-and-play module to
significantly improve the cropping ratio of the stabilized results. In
addition, stability is improved because the jitter amplification effect caused
by cropping and resizing is reduced. Experimental results on the NUS benchmark
show that OVS can improve the performance of five representative
state-of-the-art methods in terms of objective metrics and subjective visual
quality. The code is publicly available at
https://github.com/Annbless/OVS_Stabilization.
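
To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch of the out-of-boundary extrapolation idea: compute optical flow between an adjacent frame and the reference frame, propagate the flow past the original boundary, and warp the adjacent frame to fill the missing region. This is not the authors' implementation (see the repository above for that); the function names, the tensor layout, and the naive neighbourhood-averaging stand-in for the learned affinity-based propagation are assumptions made purely for illustration.

```python
# Sketch only: simplified out-of-boundary view synthesis, NOT the OVS paper's code.
import torch
import torch.nn.functional as F

def propagate_flow(flow, valid, iters=50):
    """Crudely extend flow into the invalid (out-of-boundary) region by repeatedly
    averaging flow from valid 3x3 neighbourhoods. In the paper this step is driven
    by learned affinities; plain averaging is used here as a stand-in."""
    flow = flow.clone()                      # (2, H, W)
    valid = valid.clone().float()            # (H, W), 1 inside the original view
    kernel = torch.ones(1, 1, 3, 3)
    for _ in range(iters):
        count = F.conv2d(valid[None, None], kernel, padding=1)[0, 0]
        summed = F.conv2d((flow * valid)[None], kernel.repeat(2, 1, 1, 1),
                          padding=1, groups=2)[0]
        newly = (valid == 0) & (count > 0)   # pixels that gain a flow estimate
        flow[:, newly] = (summed / count.clamp(min=1))[:, newly]
        valid[newly] = 1.0
    return flow

def synthesize_out_of_boundary(ref, adj, flow, valid):
    """Warp the adjacent frame with the propagated flow and paste the warped content
    into the region of the reference frame that lies outside the original boundary."""
    _, H, W = ref.shape                      # ref, adj: (3, H, W)
    flow_full = propagate_flow(flow, valid)
    # Build a sampling grid in normalized [-1, 1] coordinates for grid_sample.
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid_x = (xs + flow_full[0]) / (W - 1) * 2 - 1
    grid_y = (ys + flow_full[1]) / (H - 1) * 2 - 1
    grid = torch.stack([grid_x, grid_y], dim=-1)[None]     # (1, H, W, 2)
    warped = F.grid_sample(adj[None], grid, align_corners=True)[0]
    # Keep original content where it exists; fill the rest from the warped neighbour.
    return torch.where(valid.bool()[None].expand_as(ref), ref, warped)
```

In the actual method the propagation is guided by image affinities and the module is trained jointly with the downstream stabilizer; the sketch only illustrates the data flow of "estimate flow, propagate it beyond the boundary, warp the neighbouring frame, composite with the reference frame".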
Related papers
- Correspondence-Guided SfM-Free 3D Gaussian Splatting for NVS [52.3215552448623]
Novel View Synthesis (NVS) without Structure-from-Motion (SfM) pre-processed camera poses is crucial for promoting rapid response capabilities and enhancing robustness against variable operating conditions.
Recent SfM-free methods have integrated pose optimization, designing end-to-end frameworks for joint camera pose estimation and NVS.
Most existing works rely on per-pixel image loss functions, such as L2 loss.
In this study, we propose a correspondence-guided SfM-free 3D Gaussian splatting for NVS.
arXiv Detail & Related papers (2024-08-16T13:11:22Z) - 3D Multi-frame Fusion for Video Stabilization [32.42910053491574]
We present RStab, a novel framework for video stabilization that integrates 3D multi-frame fusion through volume rendering.
The core of our approach lies in Stabilized Rendering (SR), a volume rendering module, fusing multi-frame information in 3D space.
SR involves warping features and colors from multiple frames by projection, fusing them into descriptors to render the stabilized image.
We further introduce the Adaptive Ray Range (ARR) module to integrate depth priors, adaptively defining the sampling range for the projection process.
arXiv Detail & Related papers (2024-04-19T13:43:14Z) - Fast Full-frame Video Stabilization with Iterative Optimization [21.962533235492625]
We propose an iterative optimization-based learning approach using synthetic datasets for video stabilization.
We develop a two-level (coarse-to-fine) stabilizing algorithm based on the probabilistic flow field.
We take a divide-and-conquer approach and propose a novel multiframe fusion strategy to render full-frame stabilized views.
arXiv Detail & Related papers (2023-07-24T13:24:19Z) - Minimum Latency Deep Online Video Stabilization [77.68990069996939]
We present a novel camera path optimization framework for the task of online video stabilization.
In this work, we adopt recent off-the-shelf high-quality deep motion models for motion estimation to recover the camera trajectory.
Our approach significantly outperforms state-of-the-art online methods both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-12-05T07:37:32Z) - Self-Supervised Real-time Video Stabilization [100.00816752529045]
We propose a novel method of real-time video stabilization.
It transforms a shaky video into a stabilized one in real time, as if it had been stabilized with a gimbal.
arXiv Detail & Related papers (2021-11-10T22:49:56Z) - TimeLens: Event-based Video Frame Interpolation [54.28139783383213]
We introduce Time Lens, a novel method that leverages the advantages of both synthesis-based and flow-based approaches.
We show an up to 5.21 dB improvement in terms of PSNR over state-of-the-art frame-based and event-based methods.
arXiv Detail & Related papers (2021-06-14T10:33:47Z) - Neural Re-rendering for Full-frame Video Stabilization [144.9918806873405]
We present an algorithm for full-frame video stabilization by first estimating dense warp fields.
Full-frame stabilized frames can then be synthesized by fusing warped contents from neighboring frames.
arXiv Detail & Related papers (2021-02-11T18:59:45Z) - A Backbone Replaceable Fine-tuning Framework for Stable Face Alignment [21.696696531924374]
We propose a Jitter loss function that leverages temporal information to suppress inaccurate as well as jittered landmarks.
The proposed framework achieves at least 40% improvement on stability evaluation metrics.
It can swiftly convert a landmark detector for facial images to a better-performing one for videos without retraining the entire model.
arXiv Detail & Related papers (2020-10-19T13:40:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.