Pyramid Feature Alignment Network for Video Deblurring
- URL: http://arxiv.org/abs/2203.14556v1
- Date: Mon, 28 Mar 2022 07:54:21 GMT
- Title: Pyramid Feature Alignment Network for Video Deblurring
- Authors: Leitian Tao and Zhenzhong Chen
- Abstract summary: Video deblurring is a challenging task due to various causes of blurring.
Traditional methods have considered how to utilize neighboring frames through single-scale alignment for restoration.
We propose a Pyramid Feature Alignment Network (PFAN) for video deblurring.
- Score: 63.26197177542422
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Video deblurring remains a challenging task due to various causes of
blurring. Traditional methods have considered how to utilize neighboring frames
through single-scale alignment for restoration. However, they typically suffer
from misalignment caused by severe blur. In this work, we aim to better utilize
neighboring frames with highly efficient feature alignment. We propose a Pyramid
Feature Alignment Network (PFAN) for video deblurring. First, multi-scale
features of blurry frames are extracted with a Structure-to-Detail
Downsampling (SDD) strategy before alignment. This downsampling strategy
sharpens edges, which aids alignment. We then align the features at each
scale and reconstruct the image at the corresponding scale. This strategy
effectively supervises the alignment at each scale, overcoming the problem of
errors propagated from preceding scales during the alignment stage. To better
handle the challenges of complex and large motions, instead of aligning
features at each scale separately, lower-scale motion information is used to
guide the higher-scale motion estimation. Accordingly, a Cascade Guided
Deformable Alignment (CGDA) is proposed to integrate coarse motion into
deformable convolution for finer and more accurate alignment. As demonstrated
in extensive experiments, our proposed PFAN achieves superior performance with
competitive speed compared with state-of-the-art methods.
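To make the cascade guidance concrete, below is a minimal PyTorch-style sketch of one pyramid level, assuming the standard deformable convolution from torchvision. The module names, channel sizes, and the exact way coarse offsets guide the finer scale are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of coarse-to-fine guided deformable alignment.
# NOT the authors' code; names, sizes, and guidance details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import DeformConv2d


class GuidedDeformAlign(nn.Module):
    """Aligns a neighboring-frame feature to a reference feature at one
    pyramid scale, guided by (upsampled) offsets from the coarser scale."""

    def __init__(self, channels: int = 64, kernel_size: int = 3):
        super().__init__()
        offset_ch = 2 * kernel_size * kernel_size  # (dx, dy) per kernel tap
        # Predict residual offsets from the two features plus coarse guidance.
        self.offset_conv = nn.Conv2d(2 * channels + offset_ch, offset_ch,
                                     3, padding=1)
        self.deform_conv = DeformConv2d(channels, channels, kernel_size,
                                        padding=kernel_size // 2)

    def forward(self, ref_feat, nbr_feat, coarse_offset=None):
        n, _, h, w = ref_feat.shape
        if coarse_offset is None:  # coarsest scale: no guidance available
            coarse_offset = ref_feat.new_zeros(
                n, self.offset_conv.out_channels, h, w)
        else:
            # Upsample coarser offsets; the 2.0 rescale assumes a x2 ratio
            # between pyramid levels (an assumption for this sketch).
            coarse_offset = 2.0 * F.interpolate(
                coarse_offset, size=(h, w), mode="bilinear",
                align_corners=False)
        offset = coarse_offset + self.offset_conv(
            torch.cat([ref_feat, nbr_feat, coarse_offset], dim=1))
        aligned = self.deform_conv(nbr_feat, offset)
        return aligned, offset  # offsets go on to guide the next finer scale
```

At the coarsest scale the module runs without guidance; every finer scale receives the upsampled, rescaled offsets from the scale below, so coarse motion constrains the finer estimate instead of being re-estimated from scratch.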
Related papers
- Aggregating Long-term Sharp Features via Hybrid Transformers for Video Deblurring [76.54162653678871]
We propose a video deblurring method that leverages both neighboring frames and sharp frames present in the video, using hybrid Transformers for feature aggregation.
Our proposed method outperforms state-of-the-art video deblurring methods as well as event-driven video deblurring methods in terms of quantitative metrics and visual quality.
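As a rough illustration of Transformer-based cross-frame aggregation (a generic sketch, not the paper's hybrid architecture; all names and sizes here are assumptions):

```python
# Generic cross-frame feature aggregation via attention; hypothetical sketch.
import torch
import torch.nn as nn


class CrossFrameAggregator(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, blurry_feat, sharp_feats):
        # blurry_feat: (B, N, C) tokens of the frame to restore
        # sharp_feats: (B, M, C) tokens pooled from detected sharp frames
        agg, _ = self.attn(query=blurry_feat, key=sharp_feats,
                           value=sharp_feats)
        return self.norm(blurry_feat + agg)  # residual aggregation
```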
arXiv Detail & Related papers (2023-09-13T16:12:11Z)
- Pointless Global Bundle Adjustment With Relative Motions Hessians [0.0]
We propose a new bundle adjustment objective which does not rely on image features' reprojection errors.
Our method averages over relative motions while implicitly incorporating the contribution of the structure in the adjustment.
We argue that this approach is an upgraded version of the motion averaging approach and demonstrate its effectiveness on both photogrammetric datasets and computer vision benchmarks.
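As a purely illustrative sketch of what a feature-free, Hessian-weighted motion-averaging objective can look like (assumed notation; not the paper's exact formulation):

```latex
% Illustrative motion-averaging objective with Hessian weighting.
% M_i : absolute pose of image i,  \hat{M}_{ij} : measured relative motion,
% H_{ij} : Hessian of the pairwise adjustment, used as a confidence weight.
\min_{\{M_i\}} \; \sum_{(i,j)\in\mathcal{E}}
  \big\| \operatorname{Log}\!\big( \hat{M}_{ij}^{-1}\, M_j\, M_i^{-1} \big) \big\|_{H_{ij}}^{2}
```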
arXiv Detail & Related papers (2023-04-11T10:20:32Z)
- Towards Nonlinear-Motion-Aware and Occlusion-Robust Rolling Shutter Correction [54.00007868515432]
Existing methods struggle to estimate an accurate correction field due to the uniform velocity assumption.
We propose a geometry-based Quadratic Rolling Shutter (QRS) motion solver, which precisely estimates the high-order correction field of individual pixels.
Our method surpasses the state-of-the-art by +4.98, +0.77, and +4.33 dB of PSNR on the Carla-RS, Fastec-RS, and BS-RSC datasets, respectively.
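A hedged sketch of the modeling difference, in assumed notation rather than the paper's: under the uniform velocity assumption a pixel's correction displacement grows linearly with its exposure time, while a quadratic solver adds an acceleration term.

```latex
% Assumed notation: d(t) is the correction displacement of a pixel
% exposed at time t within the rolling-shutter readout.
\text{uniform velocity:}\quad d(t) = v\,t
\qquad
\text{quadratic (QRS-style):}\quad d(t) = v\,t + \tfrac{1}{2}\,a\,t^{2}
```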
arXiv Detail & Related papers (2023-03-31T15:09:18Z)
- Parallax-Tolerant Unsupervised Deep Image Stitching [57.76737888499145]
We propose UDIS++, a parallax-tolerant unsupervised deep image stitching technique.
First, we propose a robust and flexible warp that models image registration from a global homography to local thin-plate spline motion.
To further eliminate parallax artifacts, we propose to composite the stitched image seamlessly by learning seam-driven composition masks in an unsupervised manner.
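For reference, the standard thin-plate-spline warp that such a local refinement builds on (textbook form with assumed notation; the paper's parameterization may differ) combines a global affine term with radial corrections at K control points:

```latex
% Standard TPS warp: affine/global term plus radial corrections
% at control points c_i (notation assumed for illustration).
f(\mathbf{x}) = A\mathbf{x} + \mathbf{t}
  + \sum_{i=1}^{K} \mathbf{w}_i\, U\!\big(\lVert \mathbf{x}-\mathbf{c}_i \rVert\big),
\qquad U(r) = r^{2}\log r^{2}
```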
arXiv Detail & Related papers (2023-02-16T10:40:55Z)
- Exploring Motion Ambiguity and Alignment for High-Quality Video Frame Interpolation [46.02120172459727]
We propose to relax the requirement that an intermediate frame be reconstructed as close to the ground truth (GT) as possible.
We develop a texture consistency loss (TCL) based on the assumption that the interpolated content should maintain structures similar to their counterparts in the given frames.
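A simplified, hypothetical sketch of such a relaxation: each predicted patch is compared against the co-located patches in both input frames, and only the smaller error is penalized, so the prediction is not forced toward a single fixed target. The paper's actual TCL differs from this sketch.

```python
# Hypothetical texture-consistency-style loss; conveys the relaxation idea
# only, not the paper's formulation.
import torch
import torch.nn.functional as F


def texture_consistency_loss(pred, frame0, frame1, patch: int = 3):
    # Extract overlapping patches: (B, C*patch*patch, H*W)
    unfold = lambda x: F.unfold(x, patch, padding=patch // 2)
    p_pred = unfold(pred)
    best = None
    for ref in (frame0, frame1):
        d = (p_pred - unfold(ref)).abs().mean(dim=1)  # per-location L1
        best = d if best is None else torch.minimum(best, d)
    return best.mean()
```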
arXiv Detail & Related papers (2022-03-19T10:37:06Z)
- Recurrent Video Deblurring with Blur-Invariant Motion Estimation and Pixel Volumes [14.384467317051831]
We propose two novel approaches to deblurring videos by effectively aggregating information from multiple video frames.
First, we present blur-invariant motion estimation learning to improve motion estimation accuracy between blurry frames.
Second, for motion compensation, instead of aligning frames by warping with estimated motions, we use a pixel volume that contains candidate sharp pixels to resolve motion estimation errors.
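A hypothetical sketch of the pixel-volume idea: instead of committing to one warped pixel per location, gather every candidate in a small window around the estimated motion and let later layers choose among them. The window size, flow convention, and sampling mode below are assumptions, not the paper's values.

```python
# Hypothetical pixel-volume construction; conventions are assumptions.
import torch
import torch.nn.functional as F


def pixel_volume(src, flow, radius: int = 2):
    """src: (B, C, H, W); flow: (B, 2, H, W) (x, y) displacement into src.
    Returns (B, C * (2*radius+1)**2, H, W) of candidate sharp pixels."""
    b, c, h, w = src.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=src.device),
                            torch.arange(w, device=src.device),
                            indexing="ij")
    base = torch.stack((xs, ys), dim=0).to(src.dtype)  # (2, H, W), xy order
    vols = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shift = torch.tensor([dx, dy], device=src.device,
                                 dtype=src.dtype).view(1, 2, 1, 1)
            tgt = base.unsqueeze(0) + flow + shift          # (B, 2, H, W)
            gx = 2.0 * tgt[:, 0] / (w - 1) - 1.0            # normalize x
            gy = 2.0 * tgt[:, 1] / (h - 1) - 1.0            # normalize y
            grid = torch.stack((gx, gy), dim=-1)            # (B, H, W, 2)
            vols.append(F.grid_sample(src, grid, align_corners=True))
    return torch.cat(vols, dim=1)
```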
arXiv Detail & Related papers (2021-08-23T07:36:49Z)
- ARVo: Learning All-Range Volumetric Correspondence for Video Deblurring [92.40655035360729]
Video deblurring models exploit consecutive frames to remove blur caused by camera shake and object motion.
We propose a novel implicit method to learn spatial correspondence among blurry frames in the feature space.
Our proposed method is evaluated on the widely-adopted DVD dataset, along with a newly collected High-Frame-Rate (1000 fps) dataset for Video Deblurring.
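The generic building block behind learning correspondence in feature space is an all-pairs correlation volume; a minimal sketch follows (the paper's all-range volumetric design is more involved than this):

```python
# Minimal all-pairs feature correlation, the generic cost-volume primitive.
import torch


def all_pairs_correlation(f1, f2):
    """f1, f2: (B, C, H, W) features of two frames.
    Returns (B, H*W, H*W): similarity of every position pair."""
    b, c, h, w = f1.shape
    a = f1.flatten(2).transpose(1, 2)      # (B, H*W, C)
    bmat = f2.flatten(2)                   # (B, C, H*W)
    return torch.bmm(a, bmat) / c ** 0.5   # scaled dot-product similarity
```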
arXiv Detail & Related papers (2021-03-07T04:33:13Z)
- Adaptively Meshed Video Stabilization [32.68960056325736]
This paper proposes an adaptively meshed method to stabilize a shaky video based on all of its feature trajectories and an adaptive blocking strategy.
We estimate the mesh-based transformations of each frame by solving a two-stage optimization problem.
arXiv Detail & Related papers (2020-06-14T06:51:23Z)
- RANSAC-Flow: generic two-stage image alignment [53.11926395028508]
We show that a simple unsupervised approach performs surprisingly well across a range of tasks.
Despite its simplicity, our method achieves competitive results on many tasks and datasets.
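A hypothetical sketch of the generic two-stage recipe: a coarse parametric alignment found with feature matching plus RANSAC, whose output would feed a dense flow-based refinement (omitted here). The OpenCV calls are standard; the thresholds are illustrative.

```python
# Coarse stage of a generic two-stage alignment; fine flow stage omitted.
import cv2
import numpy as np


def coarse_align(img_src, img_dst):
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img_src, None)
    k2, d2 = sift.detectAndCompute(img_dst, None)
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches
            if m.distance < 0.75 * n.distance]  # Lowe ratio test
    pts1 = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    pts2 = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
    h, w = img_dst.shape[:2]
    return cv2.warpPerspective(img_src, H, (w, h))  # input to refinement
```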
arXiv Detail & Related papers (2020-04-03T12:37:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.