Take a Prior from Other Tasks for Severe Blur Removal
- URL: http://arxiv.org/abs/2302.06898v1
- Date: Tue, 14 Feb 2023 08:30:51 GMT
- Title: Take a Prior from Other Tasks for Severe Blur Removal
- Authors: Pei Wang, Danna Xue, Yu Zhu, Jinqiu Sun, Qingsen Yan, Sung-eui Yoon,
Yanning Zhang
- Abstract summary: We propose a cross-level feature learning strategy based on knowledge distillation to learn priors for severe blur removal.
We also propose a semantic prior embedding layer with multi-level aggregation and semantic attention transformation to integrate the priors effectively.
Experiments on natural image deblurring benchmarks and real-world images, such as the GoPro and RealBlur datasets, demonstrate our method's effectiveness and generalization ability.
- Score: 52.380201909782684
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recovering clear structures from severely blurry inputs is a challenging
problem due to the large movements between the camera and the scene. Although
some works apply segmentation maps on human face images for deblurring, they
cannot handle natural scenes because objects and degradation are more complex,
and inaccurate segmentation maps lead to a loss of details. For general scene
deblurring, the feature spaces of a blurry image and its corresponding sharp
image are closer under a high-level vision task, which inspires us to rely on
such tasks (e.g., classification) to learn a comprehensive prior for severe
blur removal. We propose a cross-level feature learning strategy based on
knowledge distillation to learn the priors, which include global contexts and
sharp local structures for recovering potential details. In addition, we
propose a semantic prior embedding layer with multi-level aggregation and
semantic attention transformation to integrate the priors effectively. We
introduce the proposed priors to various models, including the UNet and other
mainstream deblurring baselines, leading to better performance on severe blur
removal. Extensive experiments on natural image deblurring benchmarks and
real-world images, such as GoPro and RealBlur datasets, demonstrate our
method's effectiveness and generalization ability.
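The abstract names two components without specifying their exact form: a cross-level feature-distillation loss that transfers a semantic prior from a high-level task, and a semantic prior embedding layer with an attention-based transformation. The PyTorch sketch below is one plausible reading under explicit assumptions: a frozen ResNet-18 classification backbone stands in for the teacher, plain MSE is used as the distillation loss, and the "semantic attention transformation" is approximated by an SFT-style scale/shift with a sigmoid attention gate. None of these specifics come from the paper; the sketch only illustrates the general technique.

```python
# Illustrative sketch only: layer sizes, the MSE loss, and the attention form
# are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18


class PriorDistillation(nn.Module):
    """Align deblurring-encoder features of the blurry input with
    classification features of the sharp ground truth (cross-level prior)."""

    def __init__(self, student_channels=64, teacher_channels=128):
        super().__init__()
        teacher = resnet18(weights=None)  # pretrained classification weights would be loaded in practice
        # Use an intermediate stage of the classifier as the "high-level" feature extractor.
        self.teacher = nn.Sequential(teacher.conv1, teacher.bn1, teacher.relu,
                                     teacher.maxpool, teacher.layer1, teacher.layer2)
        for p in self.teacher.parameters():
            p.requires_grad_(False)  # the teacher is frozen
        # 1x1 projection so student features match the teacher's channel count.
        self.proj = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

    def forward(self, student_feat, sharp_img):
        with torch.no_grad():
            prior = self.teacher(sharp_img)  # semantic prior from the sharp image
        student = self.proj(student_feat)
        student = F.interpolate(student, size=prior.shape[-2:], mode="bilinear",
                                align_corners=False)
        return F.mse_loss(student, prior)  # feature-level distillation loss


class SemanticAttentionTransform(nn.Module):
    """SFT-style embedding of a semantic prior: the prior predicts a spatial
    attention map plus scale/shift terms that modulate the deblurring features."""

    def __init__(self, feat_channels=64, prior_channels=128):
        super().__init__()
        self.attn = nn.Sequential(nn.Conv2d(prior_channels, feat_channels, 1), nn.Sigmoid())
        self.gamma = nn.Conv2d(prior_channels, feat_channels, 1)
        self.beta = nn.Conv2d(prior_channels, feat_channels, 1)

    def forward(self, feat, prior):
        prior = F.interpolate(prior, size=feat.shape[-2:], mode="bilinear",
                              align_corners=False)
        modulated = feat * self.gamma(prior) + self.beta(prior)
        return feat + self.attn(prior) * modulated  # residual, attention-gated fusion
```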
Related papers
- PrimeComposer: Faster Progressively Combined Diffusion for Image Composition with Attention Steering [13.785484396436367]
We formulate image composition as a subject-based local editing task, solely focusing on foreground generation.
We propose PrimeComposer, a faster training-free diffuser that composites the images by well-designed attention steering across different noise levels.
Our method exhibits the fastest inference efficiency and extensive experiments demonstrate our superiority both qualitatively and quantitatively.
arXiv Detail & Related papers (2024-03-08T04:58:49Z)
- Exposure Bracketing is All You Need for Unifying Image Restoration and Enhancement Tasks [50.822601495422916]
We propose to utilize exposure bracketing photography to unify image restoration and enhancement tasks.
Due to the difficulty in collecting real-world pairs, we suggest a solution that first pre-trains the model with synthetic paired data.
In particular, a temporally modulated recurrent network (TMRNet) and self-supervised adaptation method are proposed.
arXiv Detail & Related papers (2024-01-01T14:14:35Z)
- A Multi-scale Generalized Shrinkage Threshold Network for Image Blind Deblurring in Remote Sensing [5.957520165711732]
We propose a new blind deblurring learning framework that utilizes alternating iterations of shrinkage thresholds.
We also propose a deep proximal mapping module in the image domain, which combines a generalized shrinkage threshold with a multi-scale prior feature extraction block (a minimal soft-thresholding sketch appears after this list).
arXiv Detail & Related papers (2023-09-14T08:48:44Z)
- Towards High-Quality Specular Highlight Removal by Leveraging Large-Scale Synthetic Data [45.30068102110486]
This paper aims to remove specular highlights from a single object-level image.
We propose a three-stage network to address them.
We present a large-scale synthetic dataset of object-level images.
arXiv Detail & Related papers (2023-09-12T15:10:23Z)
- A Contrastive Distillation Approach for Incremental Semantic Segmentation in Aerial Images [15.75291664088815]
A major issue concerning current deep neural architectures is known as catastrophic forgetting.
We propose a contrastive regularization, where any given input is compared with its augmented version.
We show the effectiveness of our solution on the Potsdam dataset, outperforming the incremental baseline in every test.
arXiv Detail & Related papers (2021-12-07T16:44:45Z)
- Region-level Active Learning for Cluttered Scenes [60.93811392293329]
We introduce a new strategy that subsumes previous Image-level and Object-level approaches into a generalized, Region-level approach.
We show that this approach significantly decreases labeling effort and improves rare object search on realistic data with inherent class-imbalance and cluttered scenes.
arXiv Detail & Related papers (2021-08-20T14:02:38Z)
- Exploiting Raw Images for Real-Scene Super-Resolution [105.18021110372133]
We study the problem of real-scene single image super-resolution to bridge the gap between synthetic data and real captured images.
We propose a method to generate more realistic training data by mimicking the imaging process of digital cameras.
We also develop a two-branch convolutional neural network to exploit the radiance information originally-recorded in raw images.
arXiv Detail & Related papers (2021-02-02T16:10:15Z)
- Bridging Composite and Real: Towards End-to-end Deep Image Matting [88.79857806542006]
We study the roles of semantics and details for image matting.
We propose a novel Glance and Focus Matting network (GFM), which employs a shared encoder and two separate decoders.
Comprehensive empirical studies have demonstrated that GFM outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-10-30T10:57:13Z)
- Rethinking of the Image Salient Object Detection: Object-level Semantic Saliency Re-ranking First, Pixel-wise Saliency Refinement Latter [62.26677215668959]
We propose a lightweight, weakly supervised deep network to coarsely locate semantically salient regions.
We then fuse multiple off-the-shelf deep models on these semantically salient regions as the pixel-wise saliency refinement.
Our method is simple yet effective; it is the first attempt to treat salient object detection mainly as an object-level semantic re-ranking problem.
arXiv Detail & Related papers (2020-08-10T07:12:43Z)
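As referenced in the Multi-scale Generalized Shrinkage Threshold entry above, here is a minimal sketch of the classic soft-thresholding (proximal) operator that generalized shrinkage thresholds build on. The learned, multi-scale variant in that paper is more elaborate; this only shows the underlying operation.

```python
import torch


def soft_threshold(x: torch.Tensor, lam: float) -> torch.Tensor:
    """Classic soft-thresholding, the proximal operator of lam * ||x||_1:
    prox(x) = sign(x) * max(|x| - lam, 0). Generalized/learned shrinkage
    modules typically replace the scalar lam with a predicted per-channel
    or per-pixel threshold; that extension is not shown here."""
    return torch.sign(x) * torch.clamp(torch.abs(x) - lam, min=0.0)


# Example: shrink small (likely noise-dominated) coefficients toward zero.
coeffs = torch.tensor([-0.8, -0.05, 0.0, 0.3, 1.2])
print(soft_threshold(coeffs, lam=0.1))  # tensor([-0.7000, 0.0000, 0.0000, 0.2000, 1.1000])
```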