A Constrained Deformable Convolutional Network for Efficient Single
Image Dynamic Scene Blind Deblurring with Spatially-Variant Motion Blur
Kernels Estimation
- URL: http://arxiv.org/abs/2208.10711v1
- Date: Tue, 23 Aug 2022 03:28:21 GMT
- Title: A Constrained Deformable Convolutional Network for Efficient Single
Image Dynamic Scene Blind Deblurring with Spatially-Variant Motion Blur
Kernels Estimation
- Authors: Shu Tang, Yang Wu, Hongxing Qin, Xianzhong Xie, Shuli Yang, Jing Wang
- Abstract summary: We propose a novel constrained deformable convolutional network (CDCN) for efficient single image dynamic scene blind deblurring.
CDCN simultaneously achieves accurate spatially-variant motion blur kernel estimation and high-quality image restoration.
- Score: 12.744989551644744
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most existing deep-learning-based single image dynamic scene blind deblurring
(SIDSBD) methods design deep networks to directly remove the
spatially-variant motion blur from a single input motion-blurred image, without
estimating the blur kernels. In this paper, inspired by the Projective Motion Path
Blur (PMPB) model and deformable convolution, we propose a novel constrained
deformable convolutional network (CDCN) for efficient single image dynamic
scene blind deblurring, which simultaneously achieves accurate
spatially-variant motion blur kernel estimation and high-quality image
restoration from only one observed motion-blurred image. In our proposed CDCN,
we first construct a novel multi-scale multi-level multi-input multi-output
(MSML-MIMO) encoder-decoder architecture for more powerful feature extraction.
Second, unlike deep-learning-based video blind deblurring (DLVBD) methods that use
multiple consecutive frames, we propose a novel constrained deformable
convolution reblurring (CDCR) strategy. Deformable convolution is first applied
to the blurred features of the single input motion-blurred image to learn the
sampling points of the motion blur kernel at each pixel, analogous to estimating
the motion density function of camera shake in the PMPB model. A novel
PMPB-based reblurring loss function then constrains the learned sampling points
to converge, so that they better match the relative motion trajectory of each
pixel and improve the accuracy of the spatially-variant motion blur kernel
estimation.
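The listing ships no code, but the CDCR idea lends itself to a short sketch. In the PMPB view a blurred image is approximately a weighted sum of displaced copies of the sharp image, so a per-pixel blur kernel can be represented as K learned sampling offsets plus K non-negative weights and applied with torchvision's deform_conv2d (the v2 variant with mask support). The module below is our minimal PyTorch reading of that mechanism; the names, the channel width feat_ch, the kernel size K, and the softmax normalization of the kernel weights are all assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import deform_conv2d


class CDCRReblur(nn.Module):
    """Reblur a restored sharp image with per-pixel learned sampling points.

    For each pixel, small heads predict K 2-D offsets (a candidate motion
    trajectory) and K weights (the blur kernel values). Sampling the sharp
    image at those points and weight-averaging re-synthesizes the blur,
    in the spirit of the PMPB model.
    """

    def __init__(self, feat_ch=64, K=15):  # K odd so padding keeps H x W
        super().__init__()
        self.K = K
        self.offset_head = nn.Conv2d(feat_ch, 2 * K, 3, padding=1)
        self.mask_head = nn.Conv2d(feat_ch, K, 3, padding=1)
        # Fixed all-ones 1xK filter, one group per colour channel, so the
        # deformable conv reduces to a weighted sum of the K sampled values.
        self.register_buffer("agg", torch.ones(3, 1, 1, K))

    def forward(self, sharp, feats):
        offsets = self.offset_head(feats)                     # (B, 2K, H, W)
        kernel = torch.softmax(self.mask_head(feats), dim=1)  # sums to 1
        return deform_conv2d(sharp, offsets, self.agg, mask=kernel,
                             padding=(0, self.K // 2))


def reblurring_loss(restored, feats, blurred, reblur):
    """PMPB-style reblurring loss: the restored image, re-blurred with the
    estimated per-pixel kernels, should reproduce the observed input."""
    return F.l1_loss(reblur(restored, feats), blurred)


# Shape check on random tensors.
m = CDCRReblur()
b = torch.rand(2, 3, 64, 64)        # observed blurred image
s = torch.rand(2, 3, 64, 64)        # restored sharp estimate
f = torch.rand(2, 64, 64, 64)       # decoder features
print(reblurring_loss(s, f, b, m))  # scalar
```

Minimizing such a loss alongside a standard restoration loss is what, per the abstract, pushes the learned sampling points toward each pixel's relative motion trajectory.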
Related papers
- Motion-adaptive Separable Collaborative Filters for Blind Motion Deblurring [71.60457491155451]
Eliminating image blur produced by various kinds of motion has been a challenging problem.
We propose a novel real-world deblurring filtering model called the Motion-adaptive Separable Collaborative Filter.
Our method provides an effective solution for real-world motion blur removal and achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-04-19T19:44:24Z)
- Gyroscope-Assisted Motion Deblurring Network [11.404195533660717]
This paper presents a framework to synthesize and restore motion-blurred images using Inertial Measurement Unit (IMU) data.
The framework includes a strategy for generating training triplets and a Gyroscope-Aided Motion Deblurring (GAMD) network for blurred image restoration.
arXiv Detail & Related papers (2024-02-10T01:30:24Z)
- Efficient Multi-scale Network with Learnable Discrete Wavelet Transform for Blind Motion Deblurring [25.36888929483233]
We propose a multi-scale network based on a single input and multiple outputs (SIMO) for motion deblurring.
We combine the characteristics of real-world trajectories with a learnable wavelet transform module to focus on the directional continuity and frequency features of the step-by-step transitions from blurred to sharp images.
arXiv Detail & Related papers (2023-12-29T02:59:40Z)
- Video Frame Interpolation with Many-to-many Splatting and Spatial Selective Refinement [83.60486465697318]
We propose a fully differentiable Many-to-Many (M2M) splatting framework to interpolate frames efficiently.
For each input frame pair, M2M has a minuscule computational overhead when interpolating an arbitrary number of in-between frames.
We extend an M2M++ framework by introducing a flexible Spatial Selective Refinement component, which allows trading computational efficiency for quality and vice versa (a simplified splatting sketch follows).
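The splatting step can be illustrated with a toy nearest-neighbour version: scale the flow to the target instant, move every source pixel, and average collisions. M2M itself uses a fully differentiable splatting, so treat the snippet below as our simplification of the idea rather than the paper's method.

```python
import torch

def forward_splat(frame, flow, t):
    """Nearest-neighbour forward splatting: shift each source pixel by
    t * flow and average pixels landing on the same target location."""
    B, C, H, W = frame.shape
    gy, gx = torch.meshgrid(torch.arange(H, device=frame.device),
                            torch.arange(W, device=frame.device),
                            indexing="ij")
    x = (gx + t * flow[:, 0]).round().long().clamp_(0, W - 1)  # (B, H, W)
    y = (gy + t * flow[:, 1]).round().long().clamp_(0, H - 1)
    idx = (y * W + x).reshape(B, 1, -1).expand(-1, C, -1)      # (B, C, HW)
    out = frame.new_zeros(B, C, H * W)
    cnt = frame.new_zeros(B, 1, H * W)
    out.scatter_add_(2, idx, frame.reshape(B, C, -1))
    cnt.scatter_add_(2, idx[:, :1], frame.new_ones(B, 1, H * W))
    return (out / cnt.clamp(min=1)).view(B, C, H, W)

# Interpolate a frame at t = 0.5 from frame 0 and its forward flow.
f0 = torch.rand(1, 3, 64, 64)
flow01 = torch.rand(1, 2, 64, 64) * 4 - 2
mid = forward_splat(f0, flow01, 0.5)
```

Because the flow is only rescaled, not recomputed, the same source pixels can be splatted to any number of in-between instants, which is where the near-constant per-frame cost comes from.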
arXiv Detail & Related papers (2023-10-29T09:09:32Z)
- Dynamic Frame Interpolation in Wavelet Domain [57.25341639095404]
Video frame interpolation is an important low-level computer vision task that can increase the frame rate for a more fluent visual experience.
Existing methods have achieved great success by employing advanced motion models and synthesis networks.
WaveletVFI can reduce computation by up to 40% while maintaining similar accuracy, making it more efficient than other state-of-the-art methods.
arXiv Detail & Related papers (2023-09-07T06:41:15Z)
- Efficient Video Deblurring Guided by Motion Magnitude [37.25713728458234]
We propose a novel framework that utilizes the motion magnitude prior (MMP) as guidance for efficient deep video deblurring.
The MMP consists of both spatial and temporal blur level information, which can be further integrated into an efficient recurrent neural network (RNN) for video deblurring.
arXiv Detail & Related papers (2022-07-27T08:57:48Z)
- Animation from Blur: Multi-modal Blur Decomposition with Motion Guidance [83.25826307000717]
We study the challenging problem of recovering detailed motion from a single motion-blurred image.
Existing solutions to this problem estimate a single image sequence without considering the motion ambiguity for each region.
In this paper, we explicitly account for such motion ambiguity, allowing us to generate multiple plausible solutions all in sharp detail.
arXiv Detail & Related papers (2022-07-20T18:05:53Z)
- Many-to-many Splatting for Efficient Video Frame Interpolation [80.10804399840927]
Motion-based video frame interpolation relies on optical flow to warp pixels from the input frames to the desired time instant.
We propose a Many-to-Many (M2M) splatting framework to interpolate frames efficiently.
M2M has minuscule computational overhead when interpolating an arbitrary number of in-between frames.
arXiv Detail & Related papers (2022-04-07T15:29:42Z)
- Blind Non-Uniform Motion Deblurring using Atrous Spatial Pyramid Deformable Convolution and Deblurring-Reblurring Consistency [5.994412766684843]
We propose a new architecture that consists of multiple Atrous Spatial Pyramid Deformable Convolution (ASPDC) modules.
Multiple ASPDC modules implicitly learn pixel-specific motion with different dilation rates in the same layer to handle movements of different magnitudes (see the sketch below).
Our experimental results show that the proposed method outperforms state-of-the-art methods on the benchmark datasets.
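A minimal reading of that design: run several 3x3 deformable convolutions in parallel, each with its own offset head and a different dilation rate, then fuse the branches. Channel sizes, initialization, and the 1x1 fusion below are our assumptions, not the authors' exact module.

```python
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d


class ASPDCBlock(nn.Module):
    """Parallel deformable 3x3 convs at several dilation rates; larger
    rates give each pixel a wider reach for larger motions."""

    def __init__(self, ch=64, dilations=(1, 2, 4)):
        super().__init__()
        self.dilations = dilations
        self.offset_heads = nn.ModuleList(
            nn.Conv2d(ch, 2 * 9, 3, padding=d, dilation=d)  # 9 = 3x3 taps
            for d in dilations)
        self.weights = nn.ParameterList(
            nn.Parameter(torch.randn(ch, ch, 3, 3) * 0.02) for _ in dilations)
        self.fuse = nn.Conv2d(ch * len(dilations), ch, 1)

    def forward(self, x):
        branches = [
            deform_conv2d(x, head(x), w, padding=(d, d), dilation=(d, d))
            for d, head, w in zip(self.dilations, self.offset_heads,
                                  self.weights)]
        return self.fuse(torch.cat(branches, dim=1))
```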
arXiv Detail & Related papers (2021-06-27T23:14:52Z)
- Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image is of practical interest for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z)
- Blur-Attention: A boosting mechanism for non-uniform blurred image restoration [27.075713246257596]
We propose a blur-attention module to dynamically capture the spatially varying features of non-uniform blurred images.
By introducing the blur-attention network into a conditional generative adversarial framework, we propose an end-to-end blind motion deblurring method.
Experimental results show that our method achieves outstanding performance in terms of PSNR, SSIM, and subjective visual quality (a generic attention sketch follows this entry).
arXiv Detail & Related papers (2020-08-19T16:07:06Z)
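For reference, a generic spatial-attention gate in the CBAM style captures the flavour of such a blur-attention module: pool the features per pixel, predict a gate in [0, 1], and reweight the features so heavily blurred regions can receive more processing. This is our illustration of the idea, not the authors' module.

```python
import torch
import torch.nn as nn


class BlurAttention(nn.Module):
    """Spatial attention sketch: a per-pixel gate from pooled statistics."""

    def __init__(self):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),  # from avg+max maps
            nn.Sigmoid())

    def forward(self, feats):
        avg = feats.mean(dim=1, keepdim=True)          # (B, 1, H, W)
        mx = feats.amax(dim=1, keepdim=True)           # (B, 1, H, W)
        attn = self.gate(torch.cat([avg, mx], dim=1))  # gate in [0, 1]
        return feats * attn                            # reweighted features
```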
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.