Interpreting Super-Resolution Networks with Local Attribution Maps
- URL: http://arxiv.org/abs/2011.11036v2
- Date: Sun, 22 Aug 2021 14:52:10 GMT
- Title: Interpreting Super-Resolution Networks with Local Attribution Maps
- Authors: Jinjin Gu, Chao Dong
- Abstract summary: Image super-resolution (SR) techniques have been developing rapidly, benefiting from the invention of deep networks and their successive breakthroughs.
It is acknowledged that deep learning and deep neural networks are difficult to interpret.
In this paper, we perform attribution analysis of SR networks, which aims at finding the input pixels that strongly influence the SR results.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image super-resolution (SR) techniques have been developing rapidly,
benefiting from the invention of deep networks and their successive
breakthroughs. However, it is acknowledged that deep learning and deep neural
networks are difficult to interpret. SR networks inherit this mysterious
nature, and few works attempt to understand them. In this paper, we perform
attribution analysis of SR networks, which aims at finding the input pixels
that strongly influence the SR results. We propose a novel attribution approach
called local attribution map (LAM), which inherits the integrated gradient
method yet with two unique features. One is to use the blurred image as the
baseline input, and the other is to adopt the progressive blurring function as
the path function. Based on LAM, we show that: (1) SR networks with a wider
range of involved input pixels can achieve better performance. (2) Attention
networks and non-local networks extract features from a wider range of input
pixels. (3) Compared with the range that actually contributes, the receptive
field is large enough for most deep networks. (4) For SR networks, textures
with regular stripes or grids are more likely to be noticed, while complex
semantics are difficult to utilize. Our work opens new directions for designing
SR networks and interpreting low-level vision deep models.
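The LAM construction described in the abstract, integrated gradients computed along a progressive-blur path from a blurred baseline to the input, can be sketched roughly as follows. This is a minimal illustration under simplifying assumptions, not the authors' implementation: the `gaussian_blur` helper, the finite-difference gradient, and the scalar `model` interface are all stand-ins (in practice the model is an SR network and the gradient of a target output patch is obtained via autograd).

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with edge padding; sigma <= 0 returns the input."""
    if sigma <= 0:
        return img.astype(float).copy()
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    def conv1d(a):
        # pad so 'valid' convolution preserves the original length
        padded = np.pad(a, radius, mode="edge")
        return np.convolve(padded, k, mode="valid")
    out = np.apply_along_axis(conv1d, 0, img.astype(float))
    return np.apply_along_axis(conv1d, 1, out)

def lam(model, img, sigma_max=2.0, steps=16, eps=1e-3):
    """Local attribution map sketch: path integral of gradients along a
    progressive-blur path from the blurred baseline to the sharp input.
    `model` maps an image to a scalar (e.g. intensity of a target SR patch);
    gradients are estimated here by finite differences for self-containment."""
    attribution = np.zeros(img.shape, dtype=float)
    # path[0] is the blurred baseline, path[-1] is the original input
    path = [gaussian_blur(img, sigma_max * (1 - t / steps)) for t in range(steps + 1)]
    for i in range(steps):
        x, x_next = path[i], path[i + 1]
        grad = np.zeros_like(x)
        for idx in np.ndindex(x.shape):
            xp = x.copy()
            xp[idx] += eps
            grad[idx] = (model(xp) - model(x)) / eps
        attribution += grad * (x_next - x)  # Riemann sum along the path
    return attribution
```

As with standard integrated gradients, the attributions satisfy a completeness property: they sum (up to discretization error) to the difference between the model output at the input and at the blurred baseline.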
Related papers
- RDRN: Recursively Defined Residual Network for Image Super-Resolution
Deep convolutional neural networks (CNNs) have obtained remarkable performance in single image super-resolution.
We propose a novel network architecture which utilizes attention blocks efficiently.
arXiv Detail & Related papers (2022-11-17T11:06:29Z)
- SAR Despeckling Using Overcomplete Convolutional Networks
Despeckling is an important problem in remote sensing because speckle degrades SAR images.
Recent studies show that convolutional neural networks (CNNs) outperform classical despeckling methods.
This study employs an overcomplete CNN architecture to focus on learning low-level features by restricting the receptive field.
We show that the proposed network improves despeckling performance compared to recent despeckling methods on synthetic and real SAR images.
arXiv Detail & Related papers (2022-05-31T15:55:37Z)
- Discovering "Semantics" in Super-Resolution Networks
Super-resolution (SR) is a fundamental and representative task of low-level vision area.
It is generally thought that the features extracted from the SR network have no specific semantic information.
Can we find any "semantics" in SR networks?
arXiv Detail & Related papers (2021-08-01T09:12:44Z)
- Over-and-Under Complete Convolutional RNN for MRI Reconstruction
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete Convolutional Recurrent Neural Network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods with fewer trainable parameters.
arXiv Detail & Related papers (2021-06-16T15:56:34Z)
- A Two-Stage Attentive Network for Single Image Super-Resolution
We propose a two-stage attentive network (TSAN) for accurate SISR in a coarse-to-fine manner.
Specifically, we design a novel multi-context attentive block (MCAB) to make the network focus on more informative contextual features.
We present an essential refined attention block (RAB) that explores useful cues in HR space for reconstructing a fine-detailed HR image.
arXiv Detail & Related papers (2021-04-21T12:20:24Z)
- Adaptive Weighted Attention Network with Camera Spectral Sensitivity Prior for Spectral Reconstruction from RGB Images
We propose a novel adaptive weighted attention network (AWAN) for spectral reconstruction.
AWCA and PSNL modules are developed to reallocate channel-wise feature responses.
In the NTIRE 2020 Spectral Reconstruction Challenge, our entries obtain the 1st ranking on the Clean track and the 3rd place on the Real World track.
arXiv Detail & Related papers (2020-05-19T09:21:01Z)
- Deep Adaptive Inference Networks for Single Image Super-Resolution
Single image super-resolution (SISR) has witnessed tremendous progress in recent years owing to the deployment of deep convolutional neural networks (CNNs).
In this paper, we take a step forward by leveraging adaptive inference networks for deep SISR (AdaDSR).
Our AdaDSR involves an SISR model as the backbone and a lightweight adapter module that takes image features and a resource constraint as input and predicts a map of local network depth.
arXiv Detail & Related papers (2020-04-08T10:08:20Z)
- Resolution Adaptive Networks for Efficient Inference
We propose a novel Resolution Adaptive Network (RANet), which is inspired by the intuition that low-resolution representations are sufficient for classifying "easy" inputs.
In RANet, the input images are first routed to a lightweight sub-network that efficiently extracts low-resolution representations.
High-resolution paths in the network maintain the capability to recognize the "hard" samples.
arXiv Detail & Related papers (2020-03-16T16:54:36Z)
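The RANet routing idea above, try a cheap low-resolution sub-network first and reserve high-resolution paths for "hard" samples, amounts to an early-exit cascade. The sketch below is a hypothetical illustration, not the paper's architecture: the sub-networks are stand-in callables returning class-probability lists, and the confidence threshold is an assumed exit criterion.

```python
from typing import Callable, Sequence, Tuple

def adaptive_infer(
    x,
    subnets: Sequence[Callable],  # ordered cheapest (low-res) to costliest (high-res)
    threshold: float = 0.9,
) -> Tuple[int, int]:
    """Early-exit cascade: run sub-networks in order of cost and stop at the
    first one whose top class probability clears the confidence threshold.
    Returns (index of sub-network used, predicted class index)."""
    probs = [1.0]  # placeholder in case subnets is empty
    for i, net in enumerate(subnets):
        probs = net(x)                    # class-probability list
        if max(probs) >= threshold:       # "easy" input -> exit early
            return i, probs.index(max(probs))
    # "hard" input: fall through to the last (highest-resolution) prediction
    return len(subnets) - 1, probs.index(max(probs))
```

Easy inputs thus pay only the low-resolution cost, while hard inputs traverse the full cascade.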
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.