Softmax Splatting for Video Frame Interpolation
- URL: http://arxiv.org/abs/2003.05534v1
- Date: Wed, 11 Mar 2020 21:38:56 GMT
- Title: Softmax Splatting for Video Frame Interpolation
- Authors: Simon Niklaus, Feng Liu
- Abstract summary: Differentiable image sampling has seen broad adoption in tasks like depth estimation and optical flow prediction.
We propose softmax splatting to address this paradigm shift and show its effectiveness on the application of frame interpolation.
We show that our synthesis approach, empowered by softmax splatting, achieves new state-of-the-art results for video frame interpolation.
- Score: 14.815903726643011
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Differentiable image sampling in the form of backward warping has seen broad
adoption in tasks like depth estimation and optical flow prediction. In
contrast, how to perform forward warping has seen less attention, partly due to
additional challenges such as resolving the conflict of mapping multiple pixels
to the same target location in a differentiable way. We propose softmax
splatting to address this paradigm shift and show its effectiveness on the
application of frame interpolation. Specifically, given two input frames, we
forward-warp the frames and their feature pyramid representations based on an
optical flow estimate using softmax splatting. In doing so, the softmax
splatting seamlessly handles cases where multiple source pixels map to the same
target location. We then use a synthesis network to predict the interpolation
result from the warped representations. Our softmax splatting allows us to not
only interpolate frames at an arbitrary time but also to fine tune the feature
pyramid and the optical flow. We show that our synthesis approach, empowered by
softmax splatting, achieves new state-of-the-art results for video frame
interpolation.
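To make the mechanism concrete, here is a minimal sketch of softmax splatting in plain NumPy: the source frame is forward-warped along the flow with bilinear splatting, and source pixels that collide at the same target location are blended with weights proportional to exp(Z), where Z is a per-pixel importance metric. The function name and the NumPy formulation are illustrative assumptions, not the authors' implementation, which is a differentiable operator that also warps feature pyramid representations.

```python
# Minimal softmax-splatting sketch (NumPy). Illustrative, assumed code --
# not the authors' implementation.
import numpy as np

def softmax_splat(frame, flow, importance):
    """frame: (H, W, C) source image; flow: (H, W, 2) forward flow (dx, dy);
    importance: (H, W) per-pixel metric Z (larger = more trustworthy)."""
    H, W, C = frame.shape
    num = np.zeros((H, W, C))  # accumulates exp(Z) * color
    den = np.zeros((H, W))     # accumulates exp(Z)
    # Subtracting the global max before exponentiating is a stability trick;
    # it cancels in the normalization below, so the result is unchanged.
    w = np.exp(importance - importance.max())
    ys, xs = np.mgrid[0:H, 0:W]
    tx = xs + flow[..., 0]     # continuous target coordinates
    ty = ys + flow[..., 1]
    x0 = np.floor(tx).astype(int)
    y0 = np.floor(ty).astype(int)
    for dy in (0, 1):          # bilinear splat onto the four neighbors
        for dx in (0, 1):
            xi, yi = x0 + dx, y0 + dy
            b = (1.0 - np.abs(tx - xi)) * (1.0 - np.abs(ty - yi))
            m = (xi >= 0) & (xi < W) & (yi >= 0) & (yi < H)
            wb = (w * b)[m]
            np.add.at(den, (yi[m], xi[m]), wb)
            np.add.at(num, (yi[m], xi[m]), wb[:, None] * frame[m])
    # Pixels that no source maps to (den == 0) remain holes; the paper
    # leaves these for the downstream synthesis network to fill in.
    return num / np.maximum(den, 1e-8)[..., None]
```

Because the warp is defined for any flow field, interpolating at an arbitrary time t amounts to scaling the flow before splatting, e.g. softmax_splat(frame0, t * flow_0to1, z), in line with the arbitrary-time interpolation the abstract mentions.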
Related papers
- Video Frame Interpolation with Many-to-many Splatting and Spatial
Selective Refinement [83.60486465697318]
We propose a fully differentiable Many-to-Many (M2M) splatting framework to interpolate frames efficiently.
For each input frame pair, M2M has a minuscule computational overhead when interpolating an arbitrary number of in-between frames.
We extend M2M to an M2M++ framework by introducing a flexible Spatial Selective Refinement component, which allows for trading computational efficiency for quality and vice versa.
arXiv Detail & Related papers (2023-10-29T09:09:32Z)
- Dynamic Frame Interpolation in Wavelet Domain [57.25341639095404]
Video frame interpolation is an important low-level computer vision task that can increase the frame rate for a more fluent visual experience.
Existing methods have achieved great success by employing advanced motion models and synthesis networks.
WaveletVFI can reduce computation by up to 40% while maintaining similar accuracy, making it more efficient than other state-of-the-art methods.
arXiv Detail & Related papers (2023-09-07T06:41:15Z)
- Differentiable Point-Based Radiance Fields for Efficient View Synthesis [57.56579501055479]
We propose a differentiable rendering algorithm for efficient novel view synthesis.
Our method is up to 300x faster than NeRF in both training and inference.
For dynamic scenes, our method trains two orders of magnitude faster than STNeRF and renders at a near-interactive rate.
arXiv Detail & Related papers (2022-05-28T04:36:13Z)
- Many-to-many Splatting for Efficient Video Frame Interpolation [80.10804399840927]
Motion-based video frame interpolation relies on optical flow to warp pixels from the input frames to the desired instant.
We propose a fully differentiable Many-to-Many (M2M) splatting framework to interpolate frames efficiently.
M2M has a minuscule computational overhead when interpolating an arbitrary number of in-between frames.
arXiv Detail & Related papers (2022-04-07T15:29:42Z)
- FILM: Frame Interpolation for Large Motion [20.04001872133824]
We present a frame interpolation algorithm that synthesizes multiple intermediate frames from two input images with large in-between motion.
Our approach outperforms state-of-the-art methods on the Xiph large motion benchmark.
arXiv Detail & Related papers (2022-02-10T08:48:18Z)
- Splatting-based Synthesis for Video Frame Interpolation [22.927938232020367]
An effective way to perform frame warping is splatting.
We propose to solely rely on splatting to synthesize the output without any subsequent refinement.
This splatting-based synthesis is much faster than similar approaches, especially for multi-frame synthesis.
arXiv Detail & Related papers (2022-01-25T03:31:15Z)
- TimeLens: Event-based Video Frame Interpolation [54.28139783383213]
We introduce Time Lens, a novel method that leverages the advantages of both synthesis-based and flow-based approaches.
We show up to a 5.21 dB improvement in PSNR over state-of-the-art frame-based and event-based methods.
arXiv Detail & Related papers (2021-06-14T10:33:47Z)
- ARVo: Learning All-Range Volumetric Correspondence for Video Deblurring [92.40655035360729]
Video deblurring models exploit consecutive frames to remove blur caused by camera shake and object motion.
We propose a novel implicit method to learn spatial correspondence among blurry frames in the feature space.
Our proposed method is evaluated on the widely-adopted DVD dataset, along with a newly collected High-Frame-Rate (1000 fps) dataset for Video Deblurring.
arXiv Detail & Related papers (2021-03-07T04:33:13Z)
- ALANET: Adaptive Latent Attention Network for Joint Video Deblurring and Interpolation [38.52446103418748]
We introduce a novel architecture, Adaptive Latent Attention Network (ALANET), which synthesizes sharp high frame-rate videos.
We employ a combination of self-attention and cross-attention modules between consecutive frames in the latent space to generate an optimized representation for each frame.
Our method performs favorably against various state-of-the-art approaches, even though we tackle a much more difficult problem.
arXiv Detail & Related papers (2020-08-31T21:11:53Z)