Adaptive Window Pruning for Efficient Local Motion Deblurring
- URL: http://arxiv.org/abs/2306.14268v2
- Date: Sun, 15 Oct 2023 13:47:34 GMT
- Title: Adaptive Window Pruning for Efficient Local Motion Deblurring
- Authors: Haoying Li, Jixin Zhao, Shangchen Zhou, Huajun Feng, Chongyi Li, Chen Change Loy
- Abstract summary: Local motion blur commonly occurs in real-world photography due to the mixing between moving objects and stationary backgrounds during exposure.
Existing image deblurring methods predominantly focus on global deblurring.
This paper aims to adaptively and efficiently restore high-resolution locally blurred images.
- Score: 81.35217764881048
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Local motion blur commonly occurs in real-world photography due to the mixing
between moving objects and stationary backgrounds during exposure. Existing
image deblurring methods predominantly focus on global deblurring,
inadvertently affecting the sharpness of backgrounds in locally blurred images
and wasting unnecessary computation on sharp pixels, especially for
high-resolution images. This paper aims to adaptively and efficiently restore
high-resolution locally blurred images. We propose a local motion deblurring
vision Transformer (LMD-ViT) built on adaptive window pruning Transformer
blocks (AdaWPT). To focus deblurring on local regions and reduce computation,
AdaWPT prunes unnecessary windows, only allowing the active windows to be
involved in the deblurring processes. The pruning operation relies on the
blurriness confidence predicted by a confidence predictor that is trained
end-to-end using a reconstruction loss with Gumbel-Softmax re-parameterization
and a pruning loss guided by annotated blur masks. Our method removes local
motion blur effectively without distorting sharp regions, demonstrated by its
exceptional perceptual and quantitative improvements compared to
state-of-the-art methods. In addition, our approach substantially reduces FLOPs
by 66% and achieves more than a twofold increase in inference speed compared to
Transformer-based deblurring methods. We will make our code and annotated blur
masks publicly available.
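The abstract describes a pruning mechanism in which a confidence predictor scores each window's blurriness and a Gumbel-Softmax relaxation makes the hard keep/drop decision trainable end-to-end. A minimal sketch of that idea is below; the `deblur_stub` placeholder, the window shapes, and all parameter values are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    # Sample Gumbel(0, 1) noise and apply the softmax relaxation,
    # giving a differentiable approximation of a categorical sample.
    rng = rng or np.random.default_rng(0)
    g = -np.log(-np.log(rng.uniform(1e-9, 1.0, logits.shape)))
    y = np.exp((logits + g) / tau)
    return y / y.sum(axis=-1, keepdims=True)

def deblur_stub(windows):
    # Placeholder for the Transformer block applied to active windows.
    return windows * 0.9

def prune_windows(windows, blur_logits, tau=0.5, rng=None):
    """Process only windows whose 'blurry' class wins; sharp windows bypass.

    blur_logits: (n_windows, 2) scores from a hypothetical confidence
    predictor, column 0 = sharp, column 1 = blurry.
    """
    probs = gumbel_softmax(blur_logits, tau, rng)   # (n_windows, 2)
    keep = probs[:, 1] > probs[:, 0]                # hard decision (straight-through at train time)
    out = windows.copy()
    out[keep] = deblur_stub(windows[keep])          # compute spent only on kept windows
    return out, keep
```

In the actual method the hard decision would use a straight-through estimator so gradients flow through the soft probabilities; the sketch only shows the forward selection that lets sharp windows skip the deblurring computation entirely.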
Related papers
- Motion-adaptive Separable Collaborative Filters for Blind Motion Deblurring [71.60457491155451]
Eliminating image blur produced by various kinds of motion has been a challenging problem.
We propose a novel real-world deblurring filtering model called the Motion-adaptive Separable Collaborative Filter.
Our method provides an effective solution for real-world motion blur removal and achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-04-19T19:44:24Z)
- Real-World Efficient Blind Motion Deblurring via Blur Pixel Discretization [45.20189929583484]
We decompose the deblurring (regression) task into blur pixel discretization and discrete-to-continuous conversion tasks.
Specifically, we generate the discretized image residual errors by identifying the blur pixels and then transform them to a continuous form.
arXiv Detail & Related papers (2024-04-18T13:22:56Z)
- Efficient Diffusion Model for Image Restoration by Residual Shifting [63.02725947015132]
This study proposes a novel and efficient diffusion model for image restoration.
Our method requires no post-acceleration during inference, thereby sidestepping the associated performance deterioration.
Our method achieves superior or comparable performance to current state-of-the-art methods on three classical IR tasks.
arXiv Detail & Related papers (2024-03-12T05:06:07Z)
- ExBluRF: Efficient Radiance Fields for Extreme Motion Blurred Images [58.24910105459957]
We present ExBluRF, a novel view synthesis method for extreme motion blurred images.
Our approach consists of two main components: 6-DOF camera trajectory-based motion blur formulation and voxel-based radiance fields.
Compared with the existing works, our approach restores much sharper 3D scenes with the order of 10 times less training time and GPU memory consumption.
arXiv Detail & Related papers (2023-09-16T11:17:25Z)
- CNN Injected Transformer for Image Exposure Correction [20.282217209520006]
Previous exposure correction methods based on convolutions often produce exposure deviation in images.
We propose a CNN Injected Transformer (CIT) to harness the individual strengths of CNN and Transformer simultaneously.
In addition to the hybrid architecture design for exposure correction, we apply a set of carefully formulated loss functions to improve the spatial coherence and rectify potential color deviations.
arXiv Detail & Related papers (2023-09-08T14:53:00Z)
- Real-World Deep Local Motion Deblurring [14.722910597305546]
We establish the first real local motion blur dataset (ReLoBlur).
We propose a Local Blur-Aware Gated network (LBAG) and several local blur-aware techniques to bridge the gap between global and local deblurring.
arXiv Detail & Related papers (2022-04-18T06:24:02Z)
- Clean Images are Hard to Reblur: A New Clue for Deblurring [56.28655168605079]
We propose a novel low-level perceptual loss to make images sharper.
To better focus on image blurriness, we train a reblurring module amplifying the unremoved motion blur.
The supervised reblurring loss at training stage compares the amplified blur between the deblurred image and the reference sharp image.
The self-blurring loss at inference stage inspects if the deblurred image still contains noticeable blur to be amplified.
arXiv Detail & Related papers (2021-04-26T15:49:21Z)
- Image Inpainting with Edge-guided Learnable Bidirectional Attention Maps [85.67745220834718]
We present an edge-guided learnable bidirectional attention map (Edge-LBAM) for improving image inpainting of irregular holes.
Our Edge-LBAM method contains dual procedures, including structure-aware mask-updating guided by predicted edges.
Extensive experiments show that our Edge-LBAM is effective in generating coherent image structures and preventing color discrepancy and blurriness.
arXiv Detail & Related papers (2021-04-25T07:25:16Z)
- Single Image Non-uniform Blur Kernel Estimation via Adaptive Basis Decomposition [1.854931308524932]
We propose a general, non-parametric model for dense non-uniform motion blur estimation.
We show that our method overcomes the limitations of existing non-uniform motion blur estimation.
arXiv Detail & Related papers (2021-02-01T18:02:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.