Exploiting Non-Local Priors via Self-Convolution For Highly-Efficient
Image Restoration
- URL: http://arxiv.org/abs/2006.13714v2
- Date: Mon, 24 May 2021 06:11:12 GMT
- Title: Exploiting Non-Local Priors via Self-Convolution For Highly-Efficient
Image Restoration
- Authors: Lanqing Guo, Zhiyuan Zha, Saiprasad Ravishankar and Bihan Wen
- Abstract summary: We propose a novel Self-Convolution operator to exploit image non-local similarity in a self-supervised way.
The proposed Self-Convolution can generalize the commonly-used block matching step and produce equivalent results with much cheaper computation.
Experimental results demonstrate that Self-Convolution can significantly speed up most of the popular non-local image restoration algorithms.
- Score: 36.22821902478044
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Constructing effective image priors is critical to solving ill-posed inverse
problems in image processing and imaging. Recent works proposed to exploit
image non-local similarity for inverse problems by grouping similar patches and
demonstrated state-of-the-art results in many applications. However, compared
to classic methods based on filtering or sparsity, most of the non-local
algorithms are time-consuming, mainly due to the highly inefficient and
redundant block matching step, where the distance between each pair of
overlapping patches needs to be computed. In this work, we propose a novel
Self-Convolution operator to exploit image non-local similarity in a
self-supervised way. The proposed Self-Convolution can generalize the
commonly-used block matching step and produce equivalent results with much
cheaper computation. Furthermore, by applying Self-Convolution, we propose an
effective multi-modality image restoration scheme, which is much more efficient
than conventional block matching for non-local modeling. Experimental results
demonstrate that (1) Self-Convolution can significantly speed up most of the
popular non-local image restoration algorithms, with two-fold to nine-fold
faster block matching, and (2) the proposed multi-modality image restoration
scheme achieves superior denoising results in both efficiency and effectiveness
on RGB-NIR images. The code is publicly available at
https://github.com/GuoLanqing/Self-Convolution.
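The identity behind replacing block matching with convolution can be sketched directly: the squared distance between a reference patch r and every overlapping patch x decomposes as ||x||^2 - 2<x, r> + ||r||^2, and both sliding terms are cross-correlations of the image, computable with FFTs. A minimal NumPy sketch of this idea (our illustration, not the authors' released code; all function names are ours):

```python
import numpy as np
from numpy.fft import fft2, ifft2

def fft_corr_valid(img, kernel):
    """'Valid' cross-correlation of img with kernel, computed via FFTs."""
    H, W = img.shape
    p, q = kernel.shape
    # Convolving with the flipped kernel equals correlating with the kernel.
    spec = fft2(img) * fft2(kernel[::-1, ::-1], s=(H, W))
    full = np.real(ifft2(spec))
    return full[p - 1:H, q - 1:W]  # region free of circular wrap-around

def distances_bruteforce(img, top, left, p):
    """Squared L2 distance from the p x p reference patch to every patch."""
    H, W = img.shape
    ref = img[top:top + p, left:left + p]
    out = np.empty((H - p + 1, W - p + 1))
    for i in range(H - p + 1):
        for j in range(W - p + 1):
            out[i, j] = np.sum((img[i:i + p, j:j + p] - ref) ** 2)
    return out

def distances_selfconv(img, top, left, p):
    """Same distances via ||x||^2 - 2<x, r> + ||r||^2, using two FFT correlations."""
    ref = img[top:top + p, left:left + p]
    energies = fft_corr_valid(img ** 2, np.ones((p, p)))  # sliding ||x||^2
    cross = fft_corr_valid(img, ref)                      # sliding <x, r>
    return energies - 2.0 * cross + np.sum(ref ** 2)
```

The brute-force loop costs O(N p^2) per reference patch, while the FFT route is independent of patch size, consistent with the multi-fold block-matching speedups the abstract reports.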
Related papers
- Optimizing Tensor Computation Graphs with Equality Saturation and Monte Carlo Tree Search [0.0]
We present a tensor graph rewriting approach that uses Monte Carlo tree search to build superior representations.
Our approach improves the inference speedup of neural networks by up to 11% compared to existing methods.
arXiv Detail & Related papers (2024-10-07T22:22:02Z)
- Large-scale Global Low-rank Optimization for Computational Compressed Imaging [8.594666859332124]
We present the global low-rank (GLR) optimization technique, realizing highly-efficient large-scale reconstruction with global self-similarity.
Inspired by the self-attention mechanism in deep learning, GLR extracts image patches by feature detection instead of conventional uniform selection.
We experimentally demonstrate GLR's effectiveness on temporal, frequency, and spectral dimensions.
arXiv Detail & Related papers (2023-01-08T14:12:51Z)
- Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model these terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z)
- Learned Block Iterative Shrinkage Thresholding Algorithm for Photothermal Super Resolution Imaging [52.42007686600479]
We propose a learned block-sparse optimization approach using an iterative algorithm unfolded into a deep neural network.
We show the benefits of using a learned block iterative shrinkage thresholding algorithm that is able to learn the choice of regularization parameters.
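The classical iterative shrinkage-thresholding step that such networks unfold alternates a gradient step on the data term with a soft threshold. A plain, unlearned ISTA sketch for reference (our generic illustration, not the paper's learned block-sparse variant):

```python
import numpy as np

def ista(A, b, lam, step, iters=200):
    """Plain ISTA for min_x 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * A.T @ (A @ x - b)  # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
    return x
```

In the unfolded network, the step size and threshold (here fixed hyperparameters) become learnable per-layer parameters, which is the regularization-parameter learning the summary refers to.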
arXiv Detail & Related papers (2020-12-07T09:27:16Z)
- Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method aiming to integrate the advantages of both.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over current state-of-the-art methods.
arXiv Detail & Related papers (2020-08-25T03:30:53Z)
- End-to-end Interpretable Learning of Non-blind Image Deblurring [102.75982704671029]
Non-blind image deblurring is typically formulated as a linear least-squares problem regularized by natural priors on the corresponding sharp picture's gradients.
We propose to precondition the Richardson solver using approximate inverse filters of the (known) blur and natural image prior kernels.
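A preconditioned Richardson solver updates x by applying an approximate inverse P of the system operator to the current residual. A minimal sketch with a simple diagonal preconditioner (our illustration under generic assumptions, not the paper's learned inverse filters):

```python
import numpy as np

def richardson(A, b, P, iters=200):
    """Preconditioned Richardson iteration: x <- x + P (b - A x).
    Converges when the spectral radius of (I - P A) is below 1."""
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + P @ (b - A @ x)
    return x
```

The better P approximates A's inverse, the faster the residual contracts, which is why approximate inverse filters of the known blur make effective preconditioners.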
arXiv Detail & Related papers (2020-07-03T15:45:01Z)
- A General-Purpose Dehazing Algorithm based on Local Contrast Enhancement Approaches [2.383083450554816]
Dehazing is the task of enhancing the image taken in foggy conditions.
We present a dehazing method suitable for several local contrast adjustment algorithms.
arXiv Detail & Related papers (2020-05-31T17:25:22Z)
- The Power of Triply Complementary Priors for Image Compressive Sensing [89.14144796591685]
We propose a joint low-rank deep (LRD) image model, which contains a pair of triply complementary priors.
We then propose a novel hybrid plug-and-play framework based on the LRD model for image CS.
To make the optimization tractable, a simple yet effective algorithm is introduced to solve the H-based image CS problem.
arXiv Detail & Related papers (2020-05-16T08:17:44Z)
- Image Denoising Using Sparsifying Transform Learning and Weighted Singular Values Minimization [7.472473280743767]
In image denoising (IDN) processing, the low-rank property is usually considered as an important image prior.
As a convex relaxation approximation of low rank, nuclear norm based algorithms and their variants have attracted significant attention.
By taking advantage of both image-domain and transform-domain minimization in a general framework, we propose a sparsifying transform learning method.
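The nuclear-norm proximal step underlying such low-rank denoisers is singular value thresholding: shrink each singular value toward zero and rebuild the matrix. A minimal unweighted sketch (our illustration; the paper uses a weighted variant):

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the prox of tau * (nuclear norm) at X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

Weighted variants replace the uniform threshold tau with per-singular-value weights, shrinking large (signal-dominant) singular values less than small ones.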
arXiv Detail & Related papers (2020-04-02T00:30:29Z)
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, in addition to the frequently used VGG feature-matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global contents consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.