Flow Guidance Deformable Compensation Network for Video Frame
Interpolation
- URL: http://arxiv.org/abs/2211.12117v1
- Date: Tue, 22 Nov 2022 09:35:14 GMT
- Title: Flow Guidance Deformable Compensation Network for Video Frame
Interpolation
- Authors: Pengcheng Lei, Faming Fang and Guixu Zhang
- Abstract summary: We propose a flow guidance deformable compensation network (FGDCN) to overcome the drawbacks of existing motion-based methods.
FGDCN decomposes the frame sampling process into two steps: a flow step and a deformation step.
Experimental results show that the proposed algorithm achieves excellent performance on various datasets with fewer parameters.
- Score: 33.106776459443275
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Motion-based video frame interpolation (VFI) methods have made remarkable
progress with the development of deep convolutional networks over the past
years. However, their performance is often jeopardized by inaccurate flow
map estimation, especially in the case of large motion and occlusion. In this
paper, we propose a flow guidance deformable compensation network (FGDCN) to
overcome the drawbacks of existing motion-based methods. FGDCN decomposes the
frame sampling process into two steps: a flow step and a deformation step.
Specifically, the flow step utilizes a coarse-to-fine flow estimation network
to directly estimate the intermediate flows and synthesizes an anchor frame
simultaneously. To ensure the accuracy of the estimated flow, a distillation
loss and a task-oriented loss are jointly employed in this step. Under the
guidance of the flow priors learned in step one, the deformation step employs a
pyramid deformable compensation network to compensate for the missing details
of the flow step. In addition, a pyramid loss is proposed to supervise the
model in both the image and frequency domains. Experimental results show that
the proposed algorithm achieves excellent performance on various datasets with
fewer parameters.
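The two-step sampling decomposition described in the abstract can be sketched in miniature. In this illustrative, non-official sketch (the real FGDCN uses learned coarse-to-fine flow networks and deformable convolutions, all abstracted away here), the flow step backward-warps both input frames with estimated intermediate flows and blends them into an anchor frame, and the deformation step is reduced to adding a compensation residual; frames are plain nested lists of grayscale values.

```python
# Illustrative sketch of FGDCN's two-step frame sampling (assumption:
# flows, mask, and residual are given; in the paper they are predicted
# by the flow network and the pyramid deformable compensation network).

def bilinear_sample(img, x, y):
    """Sample img at a continuous (x, y) location with bilinear interpolation."""
    h, w = len(img), len(img[0])
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    x0, y0 = max(min(x0, w - 1), 0), max(min(y0, h - 1), 0)
    dx, dy = x - x0, y - y0
    top = img[y0][x0] * (1 - dx) + img[y0][x1] * dx
    bot = img[y1][x0] * (1 - dx) + img[y1][x1] * dx
    return top * (1 - dy) + bot * dy

def backward_warp(frame, flow):
    """Flow step: warp a source frame to the intermediate time using an
    estimated intermediate flow, flow[y][x] = (dx, dy)."""
    h, w = len(frame), len(frame[0])
    return [[bilinear_sample(frame, x + flow[y][x][0], y + flow[y][x][1])
             for x in range(w)] for y in range(h)]

def synthesize(frame0, frame1, flow_t0, flow_t1, mask, residual):
    """Blend the two warped frames into an anchor frame, then add a
    compensation residual standing in for the deformation step."""
    w0 = backward_warp(frame0, flow_t0)
    w1 = backward_warp(frame1, flow_t1)
    h, w = len(frame0), len(frame0[0])
    return [[mask[y][x] * w0[y][x] + (1 - mask[y][x]) * w1[y][x]
             + residual[y][x] for x in range(w)] for y in range(h)]
```

With zero flows, a uniform 0.5 blending mask, and a zero residual, the synthesized frame degenerates to the (identical) inputs, which makes the pipeline easy to sanity-check.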
Related papers
- FlowIE: Efficient Image Enhancement via Rectified Flow [71.6345505427213]
FlowIE is a flow-based framework that estimates straight-line paths from an elementary distribution to high-quality images.
Our contributions are rigorously validated through comprehensive experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2024-06-01T17:29:29Z)
- Rolling Shutter Correction with Intermediate Distortion Flow Estimation [55.59359977619609]
This paper proposes to correct the rolling shutter (RS) distorted images by estimating the distortion flow from the global shutter (GS) to RS directly.
Existing methods usually perform correction using the undistortion flow from the RS to GS.
We introduce a new framework that directly estimates the distortion flow and rectifies the RS image with the backward warping operation.
arXiv Detail & Related papers (2024-04-09T14:40:54Z)
- Motion-Aware Video Frame Interpolation [49.49668436390514]
We introduce a Motion-Aware Video Frame Interpolation (MA-VFI) network, which directly estimates intermediate optical flow from consecutive frames.
It not only extracts global semantic relationships and spatial details from input frames with different receptive fields, but also effectively reduces the required computational cost and complexity.
arXiv Detail & Related papers (2024-02-05T11:00:14Z)
- StreamFlow: Streamlined Multi-Frame Optical Flow Estimation for Video Sequences [31.210626775505407]
Occlusions between consecutive frames have long posed a significant challenge in optical flow estimation.
We present a Streamlined In-batch Multi-frame (SIM) pipeline tailored to video input, attaining a similar level of time efficiency to two-frame networks.
StreamFlow excels on the challenging KITTI and Sintel datasets, with particular improvement in occluded areas.
arXiv Detail & Related papers (2023-11-28T07:53:51Z)
- Enhanced Correlation Matching based Video Frame Interpolation [5.304928339627251]
We propose a novel framework called the Enhanced Correlation Matching based Video Frame Interpolation Network.
The proposed scheme employs the recurrent pyramid architecture that shares the parameters among each pyramid layer for optical flow estimation.
Experiment results demonstrate that the proposed scheme outperforms previous works on 4K video data and low-resolution benchmark datasets in terms of both objective and subjective quality.
arXiv Detail & Related papers (2021-11-17T02:43:45Z)
- FDAN: Flow-guided Deformable Alignment Network for Video Super-Resolution [12.844337773258678]
Flow-guided Deformable Module (FDM) is proposed to integrate optical flow into deformable convolution.
FDAN reaches the state-of-the-art performance on two benchmark datasets.
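The idea of integrating optical flow into deformable convolution, shared by FDM above and FGDCN's deformation step, can be illustrated with a small sketch. All names here are hypothetical (this is not the FDM implementation): each kernel tap's sampling offset starts from the regular grid position, is shifted by the per-pixel flow vector, and is then corrected by a small learned residual.

```python
# Hypothetical sketch of flow-guided offsets for a 3x3 deformable kernel:
# offset per tap = regular grid position + optical flow + learned residual.

KERNEL = [(-1, -1), (-1, 0), (-1, 1),
          (0, -1),  (0, 0),  (0, 1),
          (1, -1),  (1, 0),  (1, 1)]

def flow_guided_offsets(flow_xy, residuals):
    """Combine one pixel's flow vector with per-tap learned residual
    offsets to obtain one sampling offset per kernel tap."""
    fx, fy = flow_xy
    return [(kx + fx + rx, ky + fy + ry)
            for (kx, ky), (rx, ry) in zip(KERNEL, residuals)]
```

Guiding the offsets with flow gives the deformable kernel a sensible initialization under large motion, so the network only needs to learn small corrections rather than the full displacement.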
arXiv Detail & Related papers (2021-05-12T13:18:36Z)
- SCTN: Sparse Convolution-Transformer Network for Scene Flow Estimation [71.2856098776959]
Estimating 3D motions for point clouds is challenging, since a point cloud is unordered and its density is significantly non-uniform.
We propose a novel architecture named Sparse Convolution-Transformer Network (SCTN) that equips the sparse convolution with the transformer.
We show that the learned relation-based contextual information is rich and helpful for matching corresponding points, benefiting scene flow estimation.
arXiv Detail & Related papers (2021-05-10T15:16:14Z)
- FlowStep3D: Model Unrolling for Self-Supervised Scene Flow Estimation [87.74617110803189]
Estimating the 3D motion of points in a scene, known as scene flow, is a core problem in computer vision.
We present a recurrent architecture that learns a single step of an unrolled iterative alignment procedure for refining scene flow predictions.
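The model-unrolling idea in FlowStep3D, learning a single refinement step and applying it iteratively, can be sketched abstractly. In this toy sketch the scene-flow estimate is reduced to a single scalar and the learned step is a hypothetical stand-in function; the real method operates on per-point 3D flow fields with a recurrent network.

```python
# Toy sketch of model unrolling: the same (learned) update step is applied
# repeatedly, each iteration producing a correction to the current estimate.

def unrolled_refine(flow_init, step_fn, n_iters=8):
    """Refine an initial flow estimate by unrolling step_fn n_iters times."""
    flow = flow_init
    for _ in range(n_iters):
        flow = flow + step_fn(flow)  # one learned alignment step (stand-in)
    return flow

# Stand-in "learned" step that nudges the estimate toward a target flow.
def toy_step(flow, target=1.0, rate=0.5):
    return rate * (target - flow)
```

Because only one step is learned and then shared across iterations, the parameter count stays fixed while accuracy improves with more unrolled iterations.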
arXiv Detail & Related papers (2020-11-19T23:23:48Z)
- FPCR-Net: Feature Pyramidal Correlation and Residual Reconstruction for Optical Flow Estimation [72.41370576242116]
We propose a semi-supervised Feature Pyramidal Correlation and Residual Reconstruction Network (FPCR-Net) for optical flow estimation from frame pairs.
It consists of two main modules: pyramid correlation mapping and residual reconstruction.
Experiment results show that the proposed scheme achieves state-of-the-art performance, with improvements of 0.80, 1.15 and 0.10 in average end-point error (AEE) over competing baseline methods.
arXiv Detail & Related papers (2020-01-17T07:13:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.