Error Compensation Framework for Flow-Guided Video Inpainting
- URL: http://arxiv.org/abs/2207.10391v1
- Date: Thu, 21 Jul 2022 10:02:57 GMT
- Title: Error Compensation Framework for Flow-Guided Video Inpainting
- Authors: Jaeyeon Kang, Seoung Wug Oh, and Seon Joo Kim
- Abstract summary: We propose an Error Compensation Framework for Flow-guided Video Inpainting (ECFVI).
Our approach greatly improves the temporal consistency and the visual quality of the completed videos.
- Score: 36.626793485786095
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The key to video inpainting is to exploit correlation information from
as many reference frames as possible. Existing flow-based propagation methods
split the video synthesis process into multiple steps: flow completion -> pixel
propagation -> synthesis. This design has a significant drawback: errors
introduced at each step accumulate and are amplified in the steps that follow.
To this end, we propose an Error Compensation Framework for Flow-guided Video
Inpainting (ECFVI), which retains the advantages of the flow-based approach
while offsetting its weaknesses. We address these weaknesses with a newly
designed flow completion module and an error compensation network that exploits
an error guidance map. Our approach greatly improves the temporal consistency
and the visual quality of the completed videos. Experimental results show the
superior performance of our proposed method, with a 6x speed-up over
state-of-the-art methods. In addition, we present a new benchmark dataset for
evaluation that addresses the weaknesses of existing test datasets.
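To make the three-step pipeline and the error-compensation idea concrete, here is a minimal PyTorch sketch. The placeholder networks, layer shapes, and the photometric error-guidance heuristic are illustrative assumptions for exposition, not the authors' ECFVI implementation:

```python
# Illustrative sketch of a flow-guided inpainting pipeline with an error
# compensation pass. All modules are toy stand-ins, not the ECFVI networks.
import torch
import torch.nn as nn
import torch.nn.functional as F

def backward_warp(frame, flow):
    """Bilinearly sample `frame` (B,C,H,W) at positions displaced by `flow` (B,2,H,W)."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=frame.device),
                            torch.arange(w, device=frame.device), indexing="ij")
    base = torch.stack((xs, ys)).float()          # (2,H,W), x first
    pos = base.unsqueeze(0) + flow                # absolute sampling coordinates
    gx = 2.0 * pos[:, 0] / (w - 1) - 1.0          # normalize to [-1, 1]
    gy = 2.0 * pos[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)          # (B,H,W,2)
    return F.grid_sample(frame, grid, align_corners=True)

class ECFVISketch(nn.Module):
    """Flow completion -> pixel propagation -> synthesis, then compensation."""
    def __init__(self):
        super().__init__()
        self.flow_completion = nn.Conv2d(3, 2, 3, padding=1)  # flow(2)+mask(1) -> flow
        self.synthesis = nn.Conv2d(4, 3, 3, padding=1)        # rgb(3)+mask(1)  -> rgb
        self.compensation = nn.Conv2d(4, 3, 3, padding=1)     # rgb(3)+error(1) -> rgb

    def forward(self, target, reference, flow, mask):
        # Step 1: flow completion -- fill in flow vectors inside the hole.
        flow_hat = self.flow_completion(torch.cat([flow * (1 - mask), mask], 1))
        # Step 2: pixel propagation -- pull known pixels from the reference frame.
        propagated = backward_warp(reference, flow_hat)
        filled = target * (1 - mask) + propagated * mask
        # Step 3: synthesis of pixels that propagation could not reach.
        coarse = self.synthesis(torch.cat([filled, mask], 1))
        # Error guidance map (assumption): photometric mismatch between the
        # propagated content and the synthesis, flagging accumulated flow errors.
        error_map = (coarse - filled).abs().mean(1, keepdim=True)
        return self.compensation(torch.cat([coarse, error_map], 1))

# Smoke test on random tensors.
net = ECFVISketch()
t = torch.rand(1, 3, 64, 64); r = torch.rand(1, 3, 64, 64)
fl = torch.zeros(1, 2, 64, 64); m = torch.zeros(1, 1, 64, 64); m[..., 20:40, 20:40] = 1
print(net(t, r, fl, m).shape)  # torch.Size([1, 3, 64, 64])
```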
Related papers
- Perception-Oriented Video Frame Interpolation via Asymmetric Blending [20.0024308216849]
Previous methods for Video Frame Interpolation (VFI) have encountered challenges, notably blur and ghosting artifacts.
We propose PerVFI (Perception-oriented Video Frame Interpolation) to mitigate these challenges.
Experimental results validate the superiority of PerVFI, demonstrating significant improvements in perceptual quality compared to existing methods.
arXiv Detail & Related papers (2024-04-10T02:40:17Z) - Motion-Aware Video Frame Interpolation [49.49668436390514]
We introduce a Motion-Aware Video Frame Interpolation (MA-VFI) network, which directly estimates intermediate optical flow from consecutive frames.
It not only extracts global semantic relationships and spatial details from input frames with different receptive fields, but also effectively reduces the required computational cost and complexity.
arXiv Detail & Related papers (2024-02-05T11:00:14Z) - Flow-Guided Diffusion for Video Inpainting [15.478104117672803]
- Flow-Guided Diffusion for Video Inpainting [15.478104117672803]
Video inpainting has been challenged by complex scenarios like large movements and low-light conditions.
Current methods, including emerging diffusion models, face limitations in quality and efficiency.
This paper introduces the Flow-Guided Diffusion model for Video Inpainting (FGDVI), a novel approach that significantly enhances temporal consistency and inpainting quality.
arXiv Detail & Related papers (2023-11-26T17:48:48Z) - Flow Guidance Deformable Compensation Network for Video Frame
Interpolation [33.106776459443275]
We propose a flow guidance deformable compensation network (FGDCN) to overcome the drawbacks of existing motion-based methods.
FGDCN decomposes the frame sampling process into two steps: a flow step and a deformation step.
Experimental results show that the proposed algorithm achieves excellent performance on various datasets with fewer parameters.
arXiv Detail & Related papers (2022-11-22T09:35:14Z) - Towards An End-to-End Framework for Flow-Guided Video Inpainting [68.71844500391023]
- Towards An End-to-End Framework for Flow-Guided Video Inpainting [68.71844500391023]
We propose an End-to-End framework for Flow-Guided Video Inpainting (E$^2$FGVI).
The proposed method outperforms state-of-the-art methods both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-04-06T08:24:47Z) - DeFlow: Learning Complex Image Degradations from Unpaired Data with
Conditional Flows [145.83812019515818]
We propose DeFlow, a method for learning image degradations from unpaired data.
We model the degradation process in the latent space of a shared flow-decoder network.
We validate our DeFlow formulation on the task of joint image restoration and super-resolution.
arXiv Detail & Related papers (2021-01-14T18:58:01Z) - FLAVR: Flow-Agnostic Video Representations for Fast Frame Interpolation [97.99012124785177]
- FLAVR: Flow-Agnostic Video Representations for Fast Frame Interpolation [97.99012124785177]
FLAVR is a flexible and efficient architecture that uses 3D space-time convolutions to enable end-to-end learning and inference for video frame interpolation.
We demonstrate that FLAVR can serve as a useful self-supervised pretext task for action recognition, optical flow estimation, and motion magnification.
arXiv Detail & Related papers (2020-12-15T18:59:30Z) - Flow-edge Guided Video Completion [66.49077223104533]
- Flow-edge Guided Video Completion [66.49077223104533]
Previous flow completion methods are often unable to retain the sharpness of motion boundaries.
Our method first extracts and completes motion edges, and then uses them to guide piecewise-smooth flow completion with sharp edges.
arXiv Detail & Related papers (2020-09-03T17:59:42Z)