Splatting-based Synthesis for Video Frame Interpolation
- URL: http://arxiv.org/abs/2201.10075v1
- Date: Tue, 25 Jan 2022 03:31:15 GMT
- Title: Splatting-based Synthesis for Video Frame Interpolation
- Authors: Simon Niklaus, Ping Hu, Jiawen Chen
- Abstract summary: An effective approach to perform frame warping is based on splatting.
We propose to solely rely on splatting to synthesize the output without any subsequent refinement.
This splatting-based synthesis is much faster than similar approaches, especially for multi-frame synthesis.
- Score: 22.927938232020367
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Frame interpolation is an essential video processing technique that adjusts
the temporal resolution of an image sequence. An effective approach to perform
frame interpolation is based on splatting, also known as forward warping.
Specifically, splatting can be used to warp the input images to an arbitrary
temporal location based on an optical flow estimate. A synthesis network, also
sometimes referred to as refinement network, can then be used to generate the
output frame from the warped images. In doing so, it is common to not only warp
the images but also various feature representations which provide rich
contextual cues to the synthesis network. However, while this approach has been
shown to work well and enables arbitrary-time interpolation due to using
splatting, the involved synthesis network is prohibitively slow. In contrast,
we propose to solely rely on splatting to synthesize the output without any
subsequent refinement. This splatting-based synthesis is much faster than
similar approaches, especially for multi-frame interpolation, while enabling
new state-of-the-art results at high resolutions.
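As a rough illustration of the splatting (forward warping) step described in the abstract, the sketch below scales an optical flow estimate by a target time t to warp an input frame to an arbitrary temporal location, averaging source pixels that land on the same target pixel. This is a minimal NumPy sketch using nearest-neighbor rasterization and simple averaging; the paper itself relies on differentiable splatting applied to both images and feature representations, so the function names and details here are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def splat_forward(image, flow, t):
    """Minimal average-splatting sketch: warp `image` toward time t in [0, 1]
    by scaling the optical flow and accumulating each source pixel at its
    displaced (rounded) target location."""
    h, w, c = image.shape
    accum = np.zeros((h, w, c), dtype=np.float64)   # accumulated colors
    weight = np.zeros((h, w, 1), dtype=np.float64)  # per-pixel hit counts

    for y in range(h):
        for x in range(w):
            # scale the flow by t to reach the intermediate temporal location
            tx = int(round(x + t * flow[y, x, 0]))
            ty = int(round(y + t * flow[y, x, 1]))
            if 0 <= tx < w and 0 <= ty < h:
                accum[ty, tx] += image[y, x]
                weight[ty, tx] += 1.0

    # average overlapping contributions; pixels that receive no splat remain
    # zero, which is where holes and occlusion handling become relevant
    return accum / np.maximum(weight, 1e-8)
```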
Related papers
- Dynamic Frame Interpolation in Wavelet Domain [57.25341639095404]
Video frame interpolation is an important low-level computer vision task that increases the frame rate for a more fluent visual experience.
Existing methods have achieved great success by employing advanced motion models and synthesis networks.
WaveletVFI can reduce computation by up to 40% while maintaining similar accuracy, making it more efficient than other state-of-the-art methods.
arXiv Detail & Related papers (2023-09-07T06:41:15Z) - High-Fidelity Guided Image Synthesis with Latent Diffusion Models [50.39294302741698]
Human user study results show that the proposed approach outperforms the previous state-of-the-art by over 85.32% on the overall user satisfaction scores.
arXiv Detail & Related papers (2022-11-30T15:43:20Z) - Neighbor Correspondence Matching for Flow-based Video Frame Synthesis [90.14161060260012]
We introduce a neighbor correspondence matching (NCM) algorithm for flow-based frame synthesis.
NCM is performed in a current-frame-agnostic fashion to establish multi-scale correspondences in the spatial-temporal neighborhoods of each pixel.
The coarse-scale module is designed to leverage neighbor correspondences to capture large motion, while the fine-scale module speeds up the estimation process more efficiently.
arXiv Detail & Related papers (2022-07-14T09:17:00Z) - IFRNet: Intermediate Feature Refine Network for Efficient Frame Interpolation [44.04110765492441]
We devise an efficient encoder-decoder based network, termed IFRNet, for fast intermediate frame synthesizing.
Experiments on various benchmarks demonstrate the excellent performance and fast inference speed of the proposed approaches.
arXiv Detail & Related papers (2022-05-29T10:18:18Z) - Content-aware Warping for View Synthesis [110.54435867693203]
We propose content-aware warping, which adaptively learns the weights for pixels of a relatively large neighborhood from their contextual information via a lightweight neural network.
Based on this learnable warping module, we propose a new end-to-end learning-based framework for novel view synthesis from two source views.
Experimental results on structured light field datasets with wide baselines and unstructured multi-view datasets show that the proposed method significantly outperforms state-of-the-art methods both quantitatively and visually.
arXiv Detail & Related papers (2022-01-22T11:35:05Z) - TimeLens: Event-based Video Frame Interpolation [54.28139783383213]
We introduce Time Lens, a novel method that leverages the advantages of both synthesis-based and flow-based approaches.
We show an up to 5.21 dB improvement in terms of PSNR over state-of-the-art frame-based and event-based methods.
arXiv Detail & Related papers (2021-06-14T10:33:47Z) - The Invertible U-Net for Optical-Flow-free Video Interframe Generation [31.100044730381047]
In this paper, we try to tackle the video interframe generation problem without using problematic optical flow.
We propose a learning method with a new consistency loss in the latent space to maintain semantic temporal consistency between frames.
The resolution of the generated image is guaranteed to be identical to that of the original images by using an invertible network.
arXiv Detail & Related papers (2021-03-17T11:37:10Z) - Softmax Splatting for Video Frame Interpolation [14.815903726643011]
Differentiable image sampling has seen broad adoption in tasks like depth estimation and optical flow prediction.
We propose softmax splatting to address this paradigm shift and show its effectiveness on the application of frame interpolation.
We show that our synthesis approach, empowered by softmax splatting, achieves new state-of-the-art results for video frame interpolation (a minimal illustrative sketch of the softmax-splatting idea appears after this list).
arXiv Detail & Related papers (2020-03-11T21:38:56Z) - Blurry Video Frame Interpolation [57.77512131536132]
We propose a blurry video frame interpolation method to reduce motion blur and up-convert the frame rate simultaneously.
Specifically, we develop a pyramid module to cyclically synthesize clear intermediate frames.
Our method performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2020-02-27T17:00:26Z)
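As referenced in the Softmax Splatting entry above, the following is a minimal sketch of the softmax-splatting idea: when several source pixels splat onto the same target pixel, their contributions are weighted by exp(importance) and normalized, so pixels with higher importance (e.g. estimated to be closer to the camera) dominate at occlusions. The `importance` map and the nearest-neighbor rasterization are simplifying assumptions for illustration; the published method uses differentiable bilinear splatting.

```python
import numpy as np

def softmax_splat(image, flow, importance, t):
    """Minimal softmax-splatting sketch: source pixels landing on the same
    target pixel are blended with weights exp(importance), so pixels with
    higher importance dominate occluded regions."""
    h, w, c = image.shape
    accum = np.zeros((h, w, c), dtype=np.float64)  # weighted color sums
    norm = np.zeros((h, w, 1), dtype=np.float64)   # sum of softmax weights

    for y in range(h):
        for x in range(w):
            tx = int(round(x + t * flow[y, x, 0]))
            ty = int(round(y + t * flow[y, x, 1]))
            if 0 <= tx < w and 0 <= ty < h:
                wgt = np.exp(importance[y, x])  # hypothetical per-pixel score
                accum[ty, tx] += wgt * image[y, x]
                norm[ty, tx] += wgt

    # normalize so overlapping contributions form a softmax over importance
    return accum / np.maximum(norm, 1e-8)
```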