SamplingAug: On the Importance of Patch Sampling Augmentation for Single
Image Super-Resolution
- URL: http://arxiv.org/abs/2111.15185v1
- Date: Tue, 30 Nov 2021 07:49:28 GMT
- Title: SamplingAug: On the Importance of Patch Sampling Augmentation for Single
Image Super-Resolution
- Authors: Shizun Wang, Ming Lu, Kaixin Chen, Jiaming Liu, Xiaoqi Li, Chuang
Zhang, Ming Wu
- Abstract summary: We present a simple yet effective data augmentation method for training SISR networks.
We first devise a heuristic metric to evaluate the informative importance of each patch pair.
To reduce the computational cost over all patch pairs, we propose to compute the metric with an integral image.
- Score: 28.089781316522284
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the development of Deep Neural Networks (DNNs), plenty of methods based
on DNNs have been proposed for Single Image Super-Resolution (SISR). However,
existing methods mostly train the DNNs on uniformly sampled LR-HR patch pairs,
which makes them fail to fully exploit informative patches within the image. In
this paper, we present a simple yet effective data augmentation method. We
first devise a heuristic metric to evaluate the informative importance of each
patch pair. In order to reduce the computational cost over all patch pairs, we
further propose to compute our metric with an integral image, achieving a
speedup of about two orders of magnitude. Our method then samples training
patch pairs according to their informative importance. Extensive
experiments show our sampling augmentation can consistently improve the
convergence and boost the performance of various SISR architectures, including
EDSR, RCAN, RDN, SRCNN and ESPCN across different scaling factors (x2, x3, x4).
Code is available at https://github.com/littlepure2333/SamplingAug
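To make the integral-image optimization concrete, here is a minimal sketch, not the authors' implementation: it assumes the importance metric is the per-patch MSE between an upsampled LR image and the HR ground truth (one plausible heuristic), scores every patch position with a summed-area table, and samples a patch with probability proportional to its score (the paper's exact metric and sampling policy may differ).

```python
import numpy as np

def patch_scores(hr, lr_up, patch):
    """Per-patch MSE between HR and upsampled-LR, for every patch position.

    A summed-area table (integral image) turns each patch sum into four
    lookups, so scoring all patches costs O(HW) instead of O(HW * patch^2).
    """
    err = (hr.astype(np.float64) - lr_up.astype(np.float64)) ** 2
    if err.ndim == 3:                      # average the channel dimension
        err = err.mean(axis=2)
    sat = np.zeros((err.shape[0] + 1, err.shape[1] + 1))
    sat[1:, 1:] = err.cumsum(axis=0).cumsum(axis=1)
    p = patch
    # Patch sum = A - B - C + D on the integral image, vectorized over
    # all (H - p + 1) x (W - p + 1) top-left corners at once.
    sums = sat[p:, p:] - sat[:-p, p:] - sat[p:, :-p] + sat[:-p, :-p]
    return sums / (p * p)                  # per-patch MSE

def sample_patch(hr, lr_up, patch, rng=None):
    """Draw one top-left corner with probability proportional to its score."""
    rng = np.random.default_rng() if rng is None else rng
    scores = patch_scores(hr, lr_up, patch).ravel() + 1e-12  # avoid all-zero
    idx = rng.choice(scores.size, p=scores / scores.sum())
    return divmod(idx, hr.shape[1] - patch + 1)              # (y, x)
```

Because the error map and its summed-area table are built once per image, each patch score afterwards costs only four lookups, which is where a speedup of the reported magnitude comes from.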
Related papers
- Accelerating Image Super-Resolution Networks with Pixel-Level Classification [29.010136088811137]
Pixel-level Classification for Single Image Super-Resolution is a novel method designed to distribute computational resources adaptively at the pixel level.
The method allows the balance between performance and computational cost to be adjusted during inference without re-training.
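As a hedged illustration of the idea (the class name and routing rule below are hypothetical, not the paper's API), per-pixel adaptive computation can be sketched as a lightweight router that decides which pixels receive the expensive branch:

```python
import torch
import torch.nn as nn

class PixelRoutedSR(nn.Module):
    """Toy per-pixel routing: a 1-channel difficulty map decides which
    pixels get the expensive branch. Inference-time sketch only."""

    def __init__(self, channels=64):
        super().__init__()
        self.cheap = nn.Conv2d(3, 3, 3, padding=1)      # low-cost branch
        self.heavy = nn.Sequential(                     # high-cost branch
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3, 3, padding=1),
        )
        self.router = nn.Conv2d(3, 1, 3, padding=1)     # per-pixel score

    def forward(self, x, threshold=0.5):
        difficulty = torch.sigmoid(self.router(x))      # in [0, 1]
        hard = (difficulty > threshold).float()
        # NOTE: for clarity both branches run everywhere; a real
        # implementation evaluates the heavy branch only where hard == 1.
        return hard * self.heavy(x) + (1.0 - hard) * self.cheap(x)
```

Raising or lowering `threshold` trades quality against compute at inference time, which is the re-training-free balance the summary refers to.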
arXiv Detail & Related papers (2024-07-31T08:53:10Z)
- LeRF: Learning Resampling Function for Adaptive and Efficient Image Interpolation [64.34935748707673]
Recent deep neural networks (DNNs) have made impressive progress in performance by introducing learned data priors.
We propose Learning Resampling Function (termed LeRF), which takes advantage of both the structural priors learned by DNNs and the locally continuous assumption of interpolation methods.
LeRF assigns spatially varying resampling functions to input image pixels and learns to predict the shapes of these resampling functions with a neural network.
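A toy sketch of that idea, under the assumption of an isotropic Gaussian as the parametric resampling function (the actual LeRF uses richer kernel families and an efficient inference scheme); all names here are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyLeRF(nn.Module):
    """Toy LeRF-style model: predict one kernel-shape parameter per pixel,
    then resample with a spatially varying Gaussian kernel."""

    def __init__(self):
        super().__init__()
        self.shape_net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Softplus())  # sigma > 0

    def forward(self, img, scale=2):
        sigma = self.shape_net(img) + 1e-3                  # per-pixel width
        up = F.interpolate(img, scale_factor=scale, mode="nearest")
        s_up = F.interpolate(sigma, scale_factor=scale, mode="nearest")
        # Blend each pixel with a 3x3 Gaussian whose width varies spatially:
        # wider sigma -> smoother output, narrower sigma -> sharper output.
        pad = F.pad(up, (1, 1, 1, 1), mode="replicate")
        out = torch.zeros_like(up)
        norm = torch.zeros_like(up)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                w = torch.exp(-(dx * dx + dy * dy) / (2 * s_up ** 2))
                out += w * pad[:, :, 1 + dy:1 + dy + up.shape[2],
                               1 + dx:1 + dx + up.shape[3]]
                norm += w
        return out / norm
```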
arXiv Detail & Related papers (2024-07-13T16:09:45Z)
- Deep Multi-Threshold Spiking-UNet for Image Processing [51.88730892920031]
This paper introduces the novel concept of Spiking-UNet for image processing, which combines the power of Spiking Neural Networks (SNNs) with the U-Net architecture.
To achieve an efficient Spiking-UNet, we face two primary challenges: ensuring high-fidelity information propagation through the network via spikes and formulating an effective training strategy.
Experimental results show that, on image segmentation and denoising, our Spiking-UNet achieves comparable performance to its non-spiking counterpart.
arXiv Detail & Related papers (2023-07-20T16:00:19Z)
- Feature transforms for image data augmentation [74.12025519234153]
In image classification, many augmentation approaches utilize simple image manipulation algorithms.
In this work, we build ensembles at the data level, enlarging training sets with images generated by fourteen augmentation approaches.
Pretrained ResNet50 networks are finetuned on training sets that include images derived from each augmentation method.
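A schematic of this data-level ensemble, with a hypothetical subset of augmentations standing in for the fourteen used in the paper:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Hypothetical subset of the fourteen augmentations described in the paper.
AUGMENTATIONS = [
    T.ColorJitter(brightness=0.4),
    T.RandomHorizontalFlip(p=1.0),
    T.GaussianBlur(kernel_size=3),
]

def build_ensemble(num_classes):
    """One pretrained ResNet50 per augmentation (data-level ensemble);
    each net is then fine-tuned on its own augmented training set."""
    ensemble = []
    for aug in AUGMENTATIONS:
        net = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        net.fc = torch.nn.Linear(net.fc.in_features, num_classes)
        ensemble.append((aug, net))
    return ensemble

@torch.no_grad()
def predict(ensemble, batch):
    """Average softmax outputs across all ensemble members."""
    probs = []
    for _, net in ensemble:
        net.eval()
        probs.append(net(batch).softmax(dim=1))
    return torch.stack(probs).mean(dim=0)
```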
arXiv Detail & Related papers (2022-01-24T14:12:29Z)
- AdaPool: Exponential Adaptive Pooling for Information-Retaining Downsampling [82.08631594071656]
Pooling layers are essential building blocks of Convolutional Neural Networks (CNNs).
We propose an adaptive and exponentially weighted pooling method named adaPool.
We demonstrate how adaPool improves the preservation of detail through a range of tasks including image and video classification and object detection.
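A minimal sketch of the exponentially weighted component, softmax-style pooling over each window; the full adaPool additionally mixes in a similarity-based term with learnable weighting, which is omitted here:

```python
import torch
import torch.nn.functional as F

def exp_weighted_pool2d(x, kernel=2):
    """Softmax-weighted 2D pooling: each activation in a window is weighted
    by exp(activation), so strong responses dominate while detail from the
    rest of the window is retained (unlike hard max pooling).

    Assumes spatial dims are divisible by `kernel`."""
    b, c, h, w = x.shape
    cols = F.unfold(x, kernel, stride=kernel)        # (B, C*k*k, L) windows
    cols = cols.view(b, c, kernel * kernel, -1)
    weights = torch.softmax(cols, dim=2)             # weights within window
    pooled = (weights * cols).sum(dim=2)             # (B, C, L)
    return pooled.view(b, c, h // kernel, w // kernel)
```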
arXiv Detail & Related papers (2021-11-01T08:50:37Z)
- Cross-Scale Internal Graph Neural Network for Image Super-Resolution [147.77050877373674]
Non-local self-similarity in natural images has been well studied as an effective prior in image restoration.
For single image super-resolution (SISR), most existing deep non-local methods only exploit similar patches within the same scale of the low-resolution (LR) input image.
A novel cross-scale internal graph neural network (IGNN) is proposed to exploit similar patches across different scales as well.
arXiv Detail & Related papers (2020-06-30T10:48:40Z)
- Efficient Integer-Arithmetic-Only Convolutional Neural Networks [87.01739569518513]
We replace the conventional ReLU with a Bounded ReLU, having found that the accuracy drop of integer-only networks is mainly caused by activation quantization.
Our integer networks match the performance of the corresponding floating-point networks (FPNs), with only 1/4 the memory cost, and run 2x faster on modern GPUs.
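The Bounded ReLU idea fits in a few lines; the bound value below is illustrative, and fixing the activation range is what lets activations map onto a uniform integer grid:

```python
import torch

def bounded_relu(x, bound=6.0):
    """ReLU clipped to [0, bound]; a fixed range quantizes cleanly."""
    return x.clamp(min=0.0, max=bound)

def quantize_activation(x, bound=6.0, bits=8):
    """Uniformly quantize a bounded activation to a `bits`-bit integer grid."""
    levels = 2 ** bits - 1
    q = torch.round(bounded_relu(x, bound) / bound * levels)  # integer grid
    return q * bound / levels                                 # dequantized
```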
arXiv Detail & Related papers (2020-06-21T08:23:03Z)