Learning with Privileged Information for Efficient Image
Super-Resolution
- URL: http://arxiv.org/abs/2007.07524v1
- Date: Wed, 15 Jul 2020 07:44:18 GMT
- Title: Learning with Privileged Information for Efficient Image
Super-Resolution
- Authors: Wonkyung Lee, Junghyup Lee, Dohyung Kim, Bumsub Ham
- Abstract summary: We introduce in this paper a novel distillation framework, consisting of teacher and student networks, that drastically boosts the performance of FSRCNN.
The encoder in the teacher learns the degradation process, subsampling of HR images, using an imitation loss.
The student and the decoder in the teacher, having the same network architecture as FSRCNN, try to reconstruct HR images.
- Score: 35.599731963795875
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional neural networks (CNNs) have allowed remarkable advances in
single image super-resolution (SISR) over the last decade. Most SR methods
based on CNNs have focused on achieving performance gains in terms of quality
metrics, such as PSNR and SSIM, over classical approaches. They typically
require a large amount of memory and computational units. FSRCNN, which consists
of only a few convolutional layers, has shown promising results while using an
extremely small number of network parameters. In this paper, we introduce a
novel distillation framework, consisting of teacher and student networks, that
drastically boosts the performance of FSRCNN. To this end, we propose
to use ground-truth high-resolution (HR) images as privileged information. The
encoder in the teacher learns the degradation process, subsampling of HR
images, using an imitation loss. The student and the decoder in the teacher,
having the same network architecture as FSRCNN, try to reconstruct HR images.
Intermediate features in the decoder, affordable for the student to learn, are
transferred to the student through feature distillation. Experimental results
on standard benchmarks demonstrate the effectiveness and the generalization
ability of our framework, which significantly boosts the performance of FSRCNN
as well as other SR methods. Our code and model are available online:
https://cvlab.yonsei.ac.kr/projects/PISR.
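To make the framework concrete, below is a minimal PyTorch sketch of the training losses implied by the abstract. All module names, layer shapes, and loss weights are invented placeholders (the authors' actual implementation is at the project page above), and the real training schedule may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TeacherEncoder(nn.Module):
    """Hypothetical encoder: maps an HR image to an LR-sized tensor so it can
    learn the degradation (subsampling) process via an imitation loss."""
    def __init__(self, scale=2, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, stride=scale, padding=1), nn.PReLU(),
            nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, hr):
        return self.net(hr)

class SRNet(nn.Module):
    """FSRCNN-like stand-in; the student and the teacher's decoder share this shape."""
    def __init__(self, scale=2, ch=16):
        super().__init__()
        self.feat = nn.Sequential(
            nn.Conv2d(1, ch, 5, padding=2), nn.PReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.PReLU())
        self.up = nn.ConvTranspose2d(ch, 1, 9, stride=scale, padding=4,
                                     output_padding=scale - 1)

    def forward(self, x):
        f = self.feat(x)
        return self.up(f), f

def pisr_style_losses(encoder, decoder, student, hr, lr):
    lr_hat = encoder(hr)                          # teacher sees privileged HR input
    imitation = F.l1_loss(lr_hat, lr)             # encoder imitates the true LR image
    t_sr, t_feat = decoder(lr_hat)                # teacher decoder reconstructs HR
    s_sr, s_feat = student(lr)                    # student sees only the LR input
    distill = F.l1_loss(s_feat, t_feat.detach())  # feature distillation, teacher frozen
    return F.l1_loss(t_sr, hr) + F.l1_loss(s_sr, hr) + imitation + distill
```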
Related papers
- Feature-domain Adaptive Contrastive Distillation for Efficient Single
Image Super-Resolution [3.2453621806729234]
CNN-based SISR networks require numerous parameters and high computational cost to achieve better performance.
Knowledge distillation (KD) transfers a teacher's useful knowledge to a student.
We propose a feature-domain adaptive contrastive distillation (FACD) method for efficiently training lightweight student SISR networks.
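As a rough, generic illustration of contrastive distillation in the feature domain (not FACD's exact adaptive formulation), here is a sketch that uses the other samples in a batch as negatives:

```python
import torch
import torch.nn.functional as F

def contrastive_kd_loss(s_feat, t_feat, tau=0.1):
    # s_feat, t_feat: (B, C, H, W) student/teacher feature maps; assumes
    # matching shapes (otherwise add a 1x1 projection on the student side).
    B = s_feat.size(0)
    s = F.normalize(s_feat.flatten(1), dim=1)           # (B, C*H*W)
    t = F.normalize(t_feat.detach().flatten(1), dim=1)  # teacher is frozen
    logits = s @ t.t() / tau                            # (B, B) similarity matrix
    labels = torch.arange(B, device=s.device)           # positives on the diagonal
    return F.cross_entropy(logits, labels)
```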
arXiv Detail & Related papers (2022-11-29T06:24:14Z) - Pushing the Efficiency Limit Using Structured Sparse Convolutions [82.31130122200578]
We propose Structured Sparse Convolution (SSC), which leverages the inherent structure in images to reduce the parameters in the convolutional filter.
We show that SSC is a generalization of commonly used layers (depthwise, groupwise and pointwise convolution) in efficient architectures.
Architectures based on SSC achieve state-of-the-art performance compared to baselines on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet classification benchmarks.
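SSC itself is defined in the paper; the claim that common efficient layers are sparse special cases of a dense convolution can be checked from parameter counts alone, for example:

```python
import torch.nn as nn

cin, cout, k = 64, 64, 3
dense     = nn.Conv2d(cin, cout, k, padding=1)              # 64*64*9 = 36864 weights
grouped   = nn.Conv2d(cin, cout, k, padding=1, groups=8)    # 36864 / 8 = 4608
depthwise = nn.Conv2d(cin, cin,  k, padding=1, groups=cin)  # 64*9 = 576
pointwise = nn.Conv2d(cin, cout, 1)                         # 64*64 = 4096

for name, m in [("dense", dense), ("grouped", grouped),
                ("depthwise", depthwise), ("pointwise", pointwise)]:
    # count only the convolution weights, skipping biases
    print(name, sum(p.numel() for p in m.parameters() if p.dim() > 1))
```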
arXiv Detail & Related papers (2022-10-23T18:37:22Z) - Image Super-resolution with An Enhanced Group Convolutional Neural
Network [102.2483249598621]
CNNs with strong learning abilities are widely chosen to resolve the super-resolution problem.
We present an enhanced super-resolution group CNN (ESRGCNN) with a shallow architecture.
Experiments report that our ESRGCNN surpasses the state of the art in SISR performance, complexity, execution speed, image quality evaluation, and visual effect.
arXiv Detail & Related papers (2022-05-29T00:34:25Z) - Self-Denoising Neural Networks for Few Shot Learning [66.38505903102373]
We present a new training scheme that adds noise at multiple stages of an existing neural architecture while simultaneously learning to be robust to this added noise.
This architecture, which we call a Self-Denoising Neural Network (SDNN), can be applied easily to most modern convolutional neural architectures.
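A minimal sketch of the noise-injection idea, assuming a simple wrapper module (the paper's SDNN additionally learns explicit denoising; all names here are hypothetical):

```python
import torch
import torch.nn as nn

class NoisyStage(nn.Module):
    """Wraps an existing block and adds Gaussian noise to its output at train time."""
    def __init__(self, block, sigma=0.1):
        super().__init__()
        self.block, self.sigma = block, sigma

    def forward(self, x):
        y = self.block(x)
        if self.training:  # inject noise only during training
            y = y + self.sigma * torch.randn_like(y)
        return y

# Noise injected at multiple stages of a small convolutional backbone:
backbone = nn.Sequential(
    NoisyStage(nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())),
    NoisyStage(nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())),
)
```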
arXiv Detail & Related papers (2021-10-26T03:28:36Z) - Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete convolutional recurrent neural network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods with fewer trainable parameters.
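A toy sketch of pairing an overcomplete branch (which increases spatial size, keeping receptive fields local) with an undercomplete one; OUCR's recurrent cells and data-consistency steps are omitted, and all names are placeholders:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OverUnderBlock(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.over  = nn.Conv2d(1, ch, 3, padding=1)  # applied at 2x resolution
        self.under = nn.Conv2d(1, ch, 3, padding=1)  # applied at 0.5x resolution
        self.fuse  = nn.Conv2d(2 * ch, 1, 3, padding=1)

    def forward(self, x):
        # x: (B, 1, H, W) single-channel image, H and W assumed even
        h, w = x.shape[-2:]
        o = F.interpolate(self.over(F.interpolate(x, scale_factor=2)), size=(h, w))
        u = F.interpolate(self.under(F.interpolate(x, scale_factor=0.5)), size=(h, w))
        return self.fuse(torch.cat([o, u], dim=1))
```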
arXiv Detail & Related papers (2021-06-16T15:56:34Z) - (ASNA) An Attention-based Siamese-Difference Neural Network with
Surrogate Ranking Loss function for Perceptual Image Quality Assessment [0.0]
Deep convolutional neural networks (DCNN) that leverage the adversarial training framework for image restoration and enhancement have significantly improved the processed images' sharpness.
It is necessary to develop a quantitative metric that reflects their performance and is well aligned with the perceived quality of an image.
This paper proposes a convolutional neural network that extends the traditional Siamese architecture.
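As a generic sketch of a Siamese-difference scorer (not ASNA's attention-based architecture or its surrogate ranking loss): both images pass through the same encoder, and a quality score is regressed from the feature difference.

```python
import torch
import torch.nn as nn

class SiameseDiffIQA(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.enc = nn.Sequential(            # shared encoder for both inputs
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(ch, 1)         # regress a scalar quality score

    def forward(self, restored, reference):
        return self.head(self.enc(restored) - self.enc(reference))
```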
arXiv Detail & Related papers (2021-05-06T09:04:21Z) - Knowledge Distillation By Sparse Representation Matching [107.87219371697063]
We propose Sparse Representation Matching (SRM) to transfer intermediate knowledge from one convolutional neural network (CNN) to another by utilizing sparse representation.
We formulate SRM as a neural processing block, which can be efficiently optimized using gradient descent and integrated into any CNN in a plug-and-play manner.
Our experiments demonstrate that SRM is robust to architectural differences between the teacher and student networks and outperforms other KD techniques across several datasets.
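For intuition only, here is a crude stand-in where a fixed dictionary and hard top-k sparsity replace SRM's learned, end-to-end block:

```python
import torch
import torch.nn.functional as F

def sparse_code(feat, dictionary, k=8):
    # feat: (B, C, H, W); dictionary: (num_atoms, C), e.g. torch.randn(64, C),
    # with num_atoms >= k. Returns top-k sparse codes per spatial position.
    x = feat.flatten(2).transpose(1, 2)   # (B, H*W, C)
    z = x @ dictionary.t()                # (B, H*W, num_atoms) dense codes
    topk = z.topk(k, dim=-1)              # keep only the k strongest atoms
    return torch.zeros_like(z).scatter_(-1, topk.indices, topk.values)

def srm_style_loss(s_feat, t_feat, dictionary):
    # Match the student's sparse codes to the (frozen) teacher's.
    return F.mse_loss(sparse_code(s_feat, dictionary),
                      sparse_code(t_feat.detach(), dictionary))
```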
arXiv Detail & Related papers (2021-03-31T11:47:47Z) - ClassSR: A General Framework to Accelerate Super-Resolution Networks by
Data Characteristic [35.02837100573671]
We aim at accelerating super-resolution (SR) networks on large images (2K-8K).
We find that different image regions have different restoration difficulties and can be processed by networks with different capacities.
We propose a new solution pipeline -- ClassSR that combines classification and SR in a unified framework.
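The pipeline in miniature, assuming a patch classifier and a list of SR branches of increasing capacity (both placeholders; branches must produce same-sized outputs):

```python
import torch
import torch.nn as nn

class ClassSRSketch(nn.Module):
    def __init__(self, branches, classifier):
        super().__init__()
        self.branches = nn.ModuleList(branches)  # cheap -> expensive SR nets
        self.classifier = classifier             # maps a patch to branch logits

    def forward(self, patches):
        # patches: (N, C, h, w) batch of LR sub-images cut from a large image
        idx = self.classifier(patches).argmax(dim=1)  # (N,) difficulty class
        out = []
        for i, p in enumerate(patches):
            # route each patch to the branch matching its difficulty
            out.append(self.branches[int(idx[i])](p.unsqueeze(0)))
        return torch.cat(out, dim=0)
```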
arXiv Detail & Related papers (2021-03-06T06:00:31Z) - Cascade Convolutional Neural Network for Image Super-Resolution [15.650515790147189]
We propose a cascaded convolutional neural network for image super-resolution (CSRCNN).
Images at different scales can be trained simultaneously, and the learned network can make full use of the information residing in images at different scales.
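One plausible reading of simultaneous multi-scale training, sketched with a single x2 network applied twice and supervised at both scales (the actual CSRCNN cascade differs in detail):

```python
import torch.nn as nn
import torch.nn.functional as F

class Cascade2x(nn.Module):
    """A tiny x2 upscaler reused at every stage of the cascade."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1))

    def forward(self, x):
        return self.net(x)

def cascade_loss(model, lr, hr2x, hr4x):
    sr2x = model(lr)     # x2 stage
    sr4x = model(sr2x)   # reuse the same net for the x4 stage
    return F.l1_loss(sr2x, hr2x) + F.l1_loss(sr4x, hr4x)
```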
arXiv Detail & Related papers (2020-08-24T11:34:03Z) - Lightweight image super-resolution with enhanced CNN [82.36883027158308]
Deep convolutional neural networks (CNNs) with strong expressive ability have achieved impressive performance on single image super-resolution (SISR).
We propose a lightweight enhanced SR CNN (LESRCNN) with three successive sub-blocks: an information extraction and enhancement block (IEEB), a reconstruction block (RB), and an information refinement block (IRB).
The IEEB extracts hierarchical low-resolution (LR) features and aggregates them step by step to increase the memory ability of the shallow layers on deep layers for SISR.
The RB converts low-frequency features into high-frequency features by fusing global and local features.
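A skeleton of the three-sub-block layout just described; layer counts and widths are placeholders, not LESRCNN's actual configuration:

```python
import torch.nn as nn

class LESRCNNSketch(nn.Module):
    def __init__(self, scale=2, ch=32):
        super().__init__()
        self.ieeb = nn.Sequential(  # extract and aggregate LR features
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.rb = nn.Sequential(    # upsample: LR features -> HR features
            nn.ConvTranspose2d(ch, ch, 4, stride=scale, padding=1), nn.ReLU())
        self.irb = nn.Conv2d(ch, 1, 3, padding=1)  # refine into the SR image

    def forward(self, x):
        return self.irb(self.rb(self.ieeb(x)))
```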
arXiv Detail & Related papers (2020-07-08T18:03:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.