Spatially-Attentive Patch-Hierarchical Network with Adaptive Sampling
for Motion Deblurring
- URL: http://arxiv.org/abs/2402.06117v1
- Date: Fri, 9 Feb 2024 01:00:09 GMT
- Title: Spatially-Attentive Patch-Hierarchical Network with Adaptive Sampling
for Motion Deblurring
- Authors: Maitreya Suin, Kuldeep Purohit, A. N. Rajagopalan
- Abstract summary: We propose a pixel adaptive and feature attentive design for handling large blur variations across different spatial locations.
We show that our approach performs favorably against the state-of-the-art deblurring algorithms.
- Score: 34.751361664891235
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper tackles the problem of motion deblurring of dynamic scenes.
Although end-to-end fully convolutional designs have recently advanced the
state-of-the-art in non-uniform motion deblurring, their performance-complexity
trade-off is still sub-optimal. Most existing approaches achieve a large
receptive field by increasing the number of generic convolution layers and
kernel size. In this work, we propose a pixel adaptive and feature attentive
design for handling large blur variations across different spatial locations
and process each test image adaptively. We design a content-aware global-local
filtering module that significantly improves performance by considering not
only global dependencies but also dynamically exploiting neighboring pixel
information. We further introduce a pixel-adaptive non-uniform sampling
strategy that implicitly discovers the difficult-to-restore regions present in
the image and, in turn, performs fine-grained refinement in a progressive
manner. Extensive qualitative and quantitative comparisons with prior art on
deblurring benchmarks demonstrate that our approach performs favorably against
the state-of-the-art deblurring algorithms.
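
The abstract outlines two mechanisms: a content-aware global-local filtering module (global dependencies plus dynamically exploited neighboring-pixel information) and pixel-adaptive processing. Below is a minimal PyTorch sketch of how such a block could look, pairing a pooled global-context gate with per-pixel dynamic local filters; the class name GlobalLocalFilter, the channel-gating global branch, and the kernel-prediction head are illustrative assumptions, not the authors' implementation.

# Minimal, illustrative sketch (not the authors' code) of a content-aware
# global-local filtering block: a pooled global-context gate combined with
# pixel-adaptive local filters predicted from the features themselves.
# All names and design choices here are assumptions made for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalLocalFilter(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.kernel_size = kernel_size
        # Global branch: pool over all spatial positions and re-weight channels,
        # a cheap stand-in for modeling long-range (global) dependencies.
        self.global_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Local branch: predict a k*k filter for every pixel from the content.
        self.kernel_pred = nn.Conv2d(channels, kernel_size * kernel_size, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        k = self.kernel_size

        # Global dependencies: channel re-weighting from pooled context.
        x = x * self.global_gate(x)

        # Pixel-adaptive local filtering: every location applies its own
        # softmax-normalized kernel to its k*k neighborhood (shared across
        # channels to keep this sketch small).
        kernels = torch.softmax(self.kernel_pred(x), dim=1)      # (b, k*k, h, w)
        patches = F.unfold(x, k, padding=k // 2)                 # (b, c*k*k, h*w)
        patches = patches.view(b, c, k * k, h * w)
        kernels = kernels.view(b, 1, k * k, h * w)
        out = (patches * kernels).sum(dim=2)                     # (b, c, h*w)
        return out.view(b, c, h, w)


if __name__ == "__main__":
    feats = torch.randn(1, 32, 64, 64)
    print(GlobalLocalFilter(32)(feats).shape)  # torch.Size([1, 32, 64, 64])

In the paper's design, the analogous module also drives the non-uniform sampling of difficult-to-restore regions; the sketch above illustrates only the content-adaptive filtering idea.
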
Related papers
- A Spitting Image: Modular Superpixel Tokenization in Vision Transformers [0.0]
Vision Transformer (ViT) architectures traditionally employ a grid-based approach to tokenization independent of the semantic content of an image.
We propose a modular superpixel tokenization strategy which decouples tokenization and feature extraction.
arXiv Detail & Related papers (2024-08-14T17:28:58Z)
- Pixel-Inconsistency Modeling for Image Manipulation Localization [63.54342601757723]
Digital image forensics plays a crucial role in image authentication and manipulation localization.
This paper presents a generalized and robust manipulation localization model through the analysis of pixel inconsistency artifacts.
Experiments show that our method successfully extracts inherent pixel-inconsistency forgery fingerprints.
arXiv Detail & Related papers (2023-09-30T02:54:51Z)
- Motion Estimation for Large Displacements and Deformations [7.99536002595393]
Variational optical flow techniques based on a coarse-to-fine scheme interpolate sparse matches and locally optimize an energy model conditioned on colour, gradient and smoothness.
This paper addresses the limitations of such schemes and presents HybridFlow, a variational motion estimation framework for large displacements and deformations.
arXiv Detail & Related papers (2022-06-24T18:53:22Z)
- Deep Model-Based Super-Resolution with Non-uniform Blur [1.7188280334580197]
We propose a state-of-the-art method for super-resolution with non-uniform blur.
We first propose a fast deep plug-and-play algorithm, based on linearized ADMM splitting techniques.
We unfold our iterative algorithm into a single network and train it end-to-end.
arXiv Detail & Related papers (2022-04-21T13:57:21Z)
- Adaptive Single Image Deblurring [43.02281823557039]
We propose an efficient pixel adaptive and feature attentive design for handling large blur variations within and across different images.
We also propose an effective content-aware global-local filtering module that significantly improves the performance.
arXiv Detail & Related papers (2022-01-01T10:10:19Z)
- Learning Spatial and Spatio-Temporal Pixel Aggregations for Image and Video Denoising [104.59305271099967]
We present a pixel aggregation network and learn the pixel sampling and averaging strategies for image denoising.
We develop a pixel aggregation network for video denoising to sample pixels across the spatial-temporal space.
Our method is able to solve the misalignment issues caused by large motion in dynamic scenes.
arXiv Detail & Related papers (2021-01-26T13:00:46Z)
- Style Intervention: How to Achieve Spatial Disentanglement with Style-based Generators? [100.60938767993088]
We propose a lightweight optimization-based algorithm which could adapt to arbitrary input images and render natural translation effects under flexible objectives.
We verify the performance of the proposed framework in facial attribute editing on high-resolution images, where both photo-realism and consistency are required.
arXiv Detail & Related papers (2020-11-19T07:37:31Z)
- Spatially-Attentive Patch-Hierarchical Network for Adaptive Motion Deblurring [39.92889091819711]
We propose an efficient pixel adaptive and feature attentive design for handling large blur variations across different spatial locations.
We use a patch-hierarchical attentive architecture composed of the above module that implicitly discovers the spatial variations in the blur present in the input image.
Our design offers significant improvements over the state-of-the-art in accuracy as well as speed.
arXiv Detail & Related papers (2020-04-11T09:24:00Z)
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, in addition to the frequently used VGG feature matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global contents consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.