Flow-edge Guided Video Completion
- URL: http://arxiv.org/abs/2009.01835v1
- Date: Thu, 3 Sep 2020 17:59:42 GMT
- Title: Flow-edge Guided Video Completion
- Authors: Chen Gao, Ayush Saraf, Jia-Bin Huang, Johannes Kopf
- Abstract summary: Previous flow completion methods are often unable to retain the sharpness of motion boundaries.
Our method first extracts and completes motion edges, and then uses them to guide piecewise-smooth flow completion with sharp edges.
- Score: 66.49077223104533
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new flow-based video completion algorithm. Previous flow
completion methods are often unable to retain the sharpness of motion
boundaries. Our method first extracts and completes motion edges, and then uses
them to guide piecewise-smooth flow completion with sharp edges. Existing
methods propagate colors among local flow connections between adjacent frames.
However, not all missing regions in a video can be reached in this way because
the motion boundaries form impenetrable barriers. Our method alleviates this
problem by introducing non-local flow connections to temporally distant frames,
enabling video content to propagate across motion boundaries. We validate our
approach on the DAVIS dataset. Both visual and quantitative results show that
our method compares favorably against the state-of-the-art algorithms.
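The core idea can be sketched in a few lines: complete the motion edges first, then fill the missing flow with a smoothing process that is forbidden from crossing those edges. The following is a minimal illustrative stand-in (a simple Jacobi-style diffusion in NumPy), not the paper's actual solver; the function name, the iteration scheme, and the mask conventions are all assumptions.

```python
import numpy as np

def complete_flow_piecewise_smooth(flow, hole, edges, n_iters=2000):
    """Illustrative edge-aware diffusion fill (not the paper's solver).

    flow:  (H, W, 2) optical flow, valid outside `hole`
    hole:  (H, W) bool, True where flow is missing
    edges: (H, W) bool, True on the completed motion edges

    Flow values never diffuse across edge pixels, so the filled region
    stays piecewise smooth with sharp motion boundaries.
    """
    f = flow.copy()
    f[hole] = 0.0
    donate = (~edges).astype(f.dtype)   # edge pixels cannot pass values on

    def shift(a, dy, dx):
        # np.roll wraps at the border; acceptable for an interior hole.
        return np.roll(np.roll(a, dy, axis=0), dx, axis=1)

    for _ in range(n_iters):
        num = np.zeros_like(f)
        den = np.zeros(f.shape[:2], dtype=f.dtype)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            w = shift(donate, dy, dx)                 # neighbour passability
            num += shift(f, dy, dx) * w[..., None]
            den += w
        avg = num / np.maximum(den, 1e-6)[..., None]
        update = hole & ~edges        # known flow and edge pixels stay fixed
        f[update] = avg[update]
    # Edge pixels inside the hole are left unfilled here; a real system
    # would fill them in a final pass.
    return f
```

Colors are then propagated along the completed flow; the paper's non-local connections additionally chain flow to temporally distant frames so that content can reach regions sealed off by motion boundaries.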
Related papers
- Motion-Aware Video Frame Interpolation [49.49668436390514]
We introduce a Motion-Aware Video Frame Interpolation (MA-VFI) network, which directly estimates intermediate optical flow from consecutive frames.
It not only extracts global semantic relationships and spatial details from the input frames using different receptive fields, but also reduces the required computational cost and complexity.
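For context, once intermediate flow is estimated, flow-based methods typically synthesize the new frame by warping the source frames with it. Below is a generic backward-warping step in PyTorch; this is a common building block, not MA-VFI's architecture, and the helper name is hypothetical.

```python
import torch
import torch.nn.functional as F

def backward_warp(frame, flow):
    """Generic flow-based warping (not MA-VFI itself): output pixel (x, y)
    bilinearly samples `frame` at (x + u, y + v), with flow = (u, v)."""
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=frame.device, dtype=frame.dtype),
        torch.arange(w, device=frame.device, dtype=frame.dtype),
        indexing="ij")
    grid_x = xs[None] + flow[:, 0]  # horizontal displacement
    grid_y = ys[None] + flow[:, 1]  # vertical displacement
    # grid_sample expects coordinates normalised to [-1, 1].
    grid = torch.stack((2 * grid_x / (w - 1) - 1,
                        2 * grid_y / (h - 1) - 1), dim=-1)
    return F.grid_sample(frame, grid, align_corners=True)
```

For example, `backward_warp(frame0, flow_t0)` pulls frame 0 toward time t, given a flow field from t back to 0.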
arXiv Detail & Related papers (2024-02-05T11:00:14Z)
- Motion-inductive Self-supervised Object Discovery in Videos [99.35664705038728]
We propose a model that processes consecutive RGB frames and infers the optical flow between any pair of frames using a layered representation.
We demonstrate superior performance over previous state-of-the-art methods on three public video segmentation datasets.
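One way to read "layered representation" is as a soft mixture of per-layer flow fields; a composition step in that spirit might look like the sketch below. The mixture form and all names are assumptions, not the paper's exact formulation.

```python
import torch

def compose_layered_flow(layer_flows, layer_alphas):
    """Compose one flow field from per-layer flows and soft masks.

    layer_flows:  (K, 2, H, W) one flow field per layer
    layer_alphas: (K, H, W) soft layer assignments per pixel
    """
    alphas = torch.softmax(layer_alphas, dim=0)        # sums to 1 over layers
    return (layer_flows * alphas[:, None]).sum(dim=0)  # (2, H, W)
```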
arXiv Detail & Related papers (2022-10-01T08:38:28Z)
- Towards An End-to-End Framework for Flow-Guided Video Inpainting [68.71844500391023]
We propose an End-to-End framework for Flow-Guided Video Inpainting (E²FGVI).
The proposed method outperforms state-of-the-art methods both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-04-06T08:24:47Z)
- Progressive Temporal Feature Alignment Network for Video Inpainting [51.26380898255555]
Video inpainting aims to fill spatio-temporal "corrupted regions" with plausible content.
Current methods achieve this goal through attention, flow-based warping, or 3D temporal convolution.
We propose 'Progressive Temporal Feature Alignment Network', which progressively enriches features extracted from the current frame with the warped feature from neighbouring frames.
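Concretely, "enriching the current frame's features with warped neighbour features" could be a flow-guided warp followed by a learned fusion. The module below is a hypothetical sketch of that pattern, not the paper's exact layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeighbourFeatureFusion(nn.Module):
    """Hypothetical flow-guided feature alignment: warp a neighbouring
    frame's feature map to the current frame, then fuse by convolution."""
    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, feat_cur, feat_nbr, flow):
        n, _, h, w = feat_nbr.shape
        ys, xs = torch.meshgrid(
            torch.arange(h, device=flow.device, dtype=flow.dtype),
            torch.arange(w, device=flow.device, dtype=flow.dtype),
            indexing="ij")
        # Normalised sampling grid displaced by the flow.
        grid = torch.stack(
            (2 * (xs[None] + flow[:, 0]) / (w - 1) - 1,
             2 * (ys[None] + flow[:, 1]) / (h - 1) - 1), dim=-1)
        warped = F.grid_sample(feat_nbr, grid, align_corners=True)
        return self.fuse(torch.cat((feat_cur, warped), dim=1))
```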
arXiv Detail & Related papers (2021-04-08T04:50:33Z)
- Video Frame Interpolation via Generalized Deformable Convolution [18.357839820102683]
Video frame interpolation aims at synthesizing intermediate frames from nearby source frames while maintaining spatial and temporal consistencies.
Existing deep learning-based video frame interpolation methods can be divided into two categories: flow-based methods and kernel-based methods.
A novel mechanism named generalized deformable convolution is proposed, which can effectively learn motion in a data-driven manner and freely select sampling points in space-time.
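The mechanism can be illustrated with standard 2D deformable convolution, where a small network predicts per-pixel offsets so the kernel samples off the regular grid; the paper generalizes this free sampling to space-time. The sketch below uses torchvision's stock 2D operator and is not the paper's generalized version.

```python
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class DeformableSampler(nn.Module):
    """Plain 2D deformable convolution as an illustration of the idea."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # Predicts a (dy, dx) offset for each of the k*k kernel taps.
        self.offset_net = nn.Conv2d(in_ch, 2 * k * k, k, padding=k // 2)
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)
        self.k = k

    def forward(self, x):
        offsets = self.offset_net(x)          # (N, 2*k*k, H, W)
        return deform_conv2d(x, offsets, self.weight, padding=self.k // 2)
```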
arXiv Detail & Related papers (2020-08-24T20:00:39Z)
- Multiple Video Frame Interpolation via Enhanced Deformable Separable Convolution [67.83074893311218]
Kernel-based methods predict pixels with a single convolution process that convolves source frames with spatially adaptive local kernels.
We propose enhanced deformable separable convolution (EDSC) to estimate not only adaptive kernels, but also offsets, masks and biases.
We show that our method performs favorably against the state-of-the-art methods across a broad range of datasets.
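As a sketch of the kernel-prediction side: predict one k×k kernel per output pixel and apply it to the corresponding patch of the source frame. EDSC additionally predicts offsets, masks and biases, which this illustrative module omits; all names here are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveKernelSynthesis(nn.Module):
    """Spatially adaptive kernels, the common core of kernel-based methods."""
    def __init__(self, in_ch, k=5):
        super().__init__()
        self.k = k
        self.kernel_net = nn.Conv2d(in_ch, k * k, 3, padding=1)

    def forward(self, feat, frame):
        n, c, h, w = frame.shape
        # One k*k kernel per pixel; softmax keeps the weights positive
        # and summing to one, a common design choice.
        kernels = torch.softmax(self.kernel_net(feat), dim=1)   # (N, k*k, H, W)
        patches = F.unfold(frame, self.k, padding=self.k // 2)  # (N, C*k*k, H*W)
        patches = patches.view(n, c, self.k * self.k, h, w)
        return (patches * kernels.unsqueeze(1)).sum(dim=2)      # (N, C, H, W)
```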
arXiv Detail & Related papers (2020-06-15T01:10:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.