Blur-Attention: A boosting mechanism for non-uniform blurred image
restoration
- URL: http://arxiv.org/abs/2008.08526v1
- Date: Wed, 19 Aug 2020 16:07:06 GMT
- Title: Blur-Attention: A boosting mechanism for non-uniform blurred image
restoration
- Authors: Xiaoguang Li, Feifan Yang, Kin Man Lam, Li Zhuo, Jiafeng Li
- Abstract summary: We propose a blur-attention module to dynamically capture the spatially varying features of non-uniform blurred images.
By introducing the blur-attention network into a conditional generative adversarial framework, we propose an end-to-end blind motion deblurring method.
Experimental results show that our method achieves outstanding performance in terms of PSNR, SSIM, and subjective visual quality.
- Score: 27.075713246257596
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic scene deblurring is a challenging problem in computer vision. It is
difficult to accurately estimate the spatially varying blur kernel by
traditional methods. Data-driven methods usually employ kernel-free
end-to-end mapping schemes, which tend to overlook kernel estimation. To
address this issue, we propose a blur-attention module to dynamically capture
the spatially varying features of non-uniform blurred images. The module
consists of a DenseBlock unit and a spatial attention unit with multi-pooling
feature fusion, which can effectively extract complex spatially varying blur
features. We design a multi-level residual connection structure to connect
multiple blur-attention modules to form a blur-attention network. By
introducing the blur-attention network into a conditional generative
adversarial framework, we propose an end-to-end blind motion deblurring method,
namely Blur-Attention-GAN (BAG), for a single image. Our method can adaptively
select the weights of the extracted features according to the spatially varying
blur features, and dynamically restore the images. Experimental results show
that our method achieves outstanding objective performance in terms of PSNR
and SSIM, as well as excellent subjective visual quality. Furthermore,
by visualizing the features extracted by the blur-attention module,
comprehensive discussions are provided on its effectiveness.
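The spatial attention unit with multi-pooling feature fusion described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's exact architecture: the choice of channel-wise average and max pooling, the fixed 1x1 fusion weights, and all names here are assumptions for illustration (in the real network the fusion weights would be learned by convolution).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(feat, fusion_w=None):
    """Sketch of a spatial attention unit with multi-pooling fusion.

    feat: feature map of shape (C, H, W).
    Returns an attention-weighted feature map of the same shape.
    """
    # Pool across the channel axis to obtain two spatial descriptors.
    avg_pool = feat.mean(axis=0, keepdims=True)            # (1, H, W)
    max_pool = feat.max(axis=0, keepdims=True)             # (1, H, W)
    pooled = np.concatenate([avg_pool, max_pool], axis=0)  # (2, H, W)

    # Fuse the pooled maps with a 1x1 "convolution" (a learned
    # 2-vector in a real network; fixed here for illustration).
    if fusion_w is None:
        fusion_w = np.array([0.5, 0.5])
    fused = np.tensordot(fusion_w, pooled, axes=([0], [0]))  # (H, W)

    # Sigmoid gate yields per-pixel weights in (0, 1), which
    # re-weight every channel of the input feature map.
    attn = sigmoid(fused)
    return feat * attn[None, :, :]
```

In this sketch the attention map is purely spatial: every channel at a given pixel is scaled by the same weight, which is how a spatially varying blur level can modulate feature selection.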
Related papers
- ZePo: Zero-Shot Portrait Stylization with Faster Sampling [61.14140480095604]
This paper presents an inversion-free portrait stylization framework based on diffusion models that accomplishes content and style feature fusion in merely four sampling steps.
We propose a feature merging strategy to amalgamate redundant features in Consistency Features, thereby reducing the computational load of attention control.
arXiv Detail & Related papers (2024-08-10T08:53:41Z) - DiffUHaul: A Training-Free Method for Object Dragging in Images [78.93531472479202]
We propose a training-free method, dubbed DiffUHaul, for the object dragging task.
We first apply attention masking in each denoising step to make the generation more disentangled across different objects.
In the early denoising steps, we interpolate the attention features between source and target images to smoothly fuse new layouts with the original appearance.
arXiv Detail & Related papers (2024-06-03T17:59:53Z) - DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing [94.24479528298252]
DragGAN is an interactive point-based image editing framework that achieves impressive editing results with pixel-level precision.
By harnessing large-scale pretrained diffusion models, we greatly enhance the applicability of interactive point-based editing on both real and diffusion-generated images.
We present a challenging benchmark dataset called DragBench to evaluate the performance of interactive point-based image editing methods.
arXiv Detail & Related papers (2023-06-26T06:04:09Z) - Adaptive Graph Convolution Module for Salient Object Detection [7.278033100480174]
We propose an adaptive graph convolution module (AGCM) to deal with complex scenes.
Prototype features are extracted from the input image using a learnable region generation layer.
The proposed AGCM dramatically improves the SOD performance both quantitatively and qualitatively.
arXiv Detail & Related papers (2023-03-17T07:07:17Z) - Multi-Projection Fusion and Refinement Network for Salient Object
Detection in 360° Omnidirectional Image [141.10227079090419]
We propose a Multi-Projection Fusion and Refinement Network (MPFR-Net) to detect the salient objects in 360° omnidirectional images.
MPFR-Net uses the equirectangular projection image and four corresponding cube-unfolding images as inputs.
Experimental results on two omnidirectional datasets demonstrate that the proposed approach outperforms the state-of-the-art methods both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-12-23T14:50:40Z) - A Constrained Deformable Convolutional Network for Efficient Single
Image Dynamic Scene Blind Deblurring with Spatially-Variant Motion Blur
Kernels Estimation [12.744989551644744]
We propose a novel constrained deformable convolutional network (CDCN) for efficient single image dynamic scene blind deblurring.
CDCN simultaneously achieves accurate spatially-variant motion blur kernel estimation and high-quality image restoration.
arXiv Detail & Related papers (2022-08-23T03:28:21Z) - SVBR-NET: A Non-Blind Spatially Varying Defocus Blur Removal Network [2.4975981795360847]
We propose a non-blind approach for image deblurring that can deal with spatially-varying kernels.
We introduce two encoder-decoder sub-networks that are fed with the blurry image and the estimated blur map.
The network is trained with synthetic blur kernels that are augmented to emulate blur maps produced by existing blur estimation methods.
arXiv Detail & Related papers (2022-06-26T17:21:12Z) - Adaptive Single Image Deblurring [43.02281823557039]
We propose an efficient pixel adaptive and feature attentive design for handling large blur variations within and across different images.
We also propose an effective content-aware global-local filtering module that significantly improves the performance.
arXiv Detail & Related papers (2022-01-01T10:10:19Z) - Attention-Guided Progressive Neural Texture Fusion for High Dynamic
Range Image Restoration [48.02238732099032]
In this work, we propose an Attention-guided Progressive Neural Texture Fusion (APNT-Fusion) HDR restoration model.
An efficient two-stream structure is proposed which separately focuses on texture feature transfer over saturated regions and multi-exposure tonal and texture feature fusion.
A progressive texture blending module is designed to blend the encoded two-stream features in a multi-scale and progressive manner.
arXiv Detail & Related papers (2021-07-13T16:07:00Z) - DWDN: Deep Wiener Deconvolution Network for Non-Blind Image Deblurring [66.91879314310842]
We propose an explicit deconvolution process in a feature space by integrating a classical Wiener deconvolution framework with learned deep features.
A multi-scale cascaded feature refinement module then predicts the deblurred image from the deconvolved deep features.
We show that the proposed deep Wiener deconvolution network facilitates deblurred results with visibly fewer artifacts and quantitatively outperforms state-of-the-art non-blind image deblurring methods by a wide margin.
arXiv Detail & Related papers (2021-03-18T00:38:11Z)
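The classical Wiener deconvolution that DWDN builds on can be sketched in plain NumPy. This is the textbook frequency-domain form applied directly to pixels; DWDN's contribution, applying it to learned deep features, is not reproduced here, and the `snr` regularizer value is an illustrative assumption.

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, snr=1e-3):
    """Classical frequency-domain Wiener deconvolution.

    blurred: 2-D grayscale image.
    kernel:  2-D blur kernel (zero-padded to the image size).
    snr:     inverse signal-to-noise regularizer; larger values
             suppress noise amplification at the cost of sharpness.
    """
    H = np.fft.fft2(kernel, s=blurred.shape)  # kernel spectrum
    G = np.fft.fft2(blurred)                  # blurred-image spectrum
    # Wiener filter: conj(H) / (|H|^2 + 1/SNR-style regularizer).
    W = np.conj(H) / (np.abs(H) ** 2 + snr)
    return np.real(np.fft.ifft2(W * G))
```

For a known kernel and low noise this recovers the latent image almost exactly; when the kernel spectrum has near-zero entries, the regularizer prevents division blow-up, which is the artifact behavior DWDN improves on by working in feature space.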
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.