Overparametrization of HyperNetworks at Fixed FLOP-Count Enables Fast
Neural Image Enhancement
- URL: http://arxiv.org/abs/2105.08470v1
- Date: Tue, 18 May 2021 12:27:05 GMT
- Title: Overparametrization of HyperNetworks at Fixed FLOP-Count Enables Fast
Neural Image Enhancement
- Authors: Lorenz K. Muller
- Abstract summary: Deep convolutional neural networks can enhance images taken with small mobile camera sensors and excel at tasks like demosaicing, denoising and super-resolution.
For practical use on mobile devices these networks often require too many FLOPs, and reducing the FLOPs of a convolution layer also reduces its parameter count.
In this paper we propose to use HyperNetworks to break the fixed ratio of FLOPs to parameters of standard convolutions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep convolutional neural networks can enhance images taken with small mobile
camera sensors and excel at tasks like demosaicing, denoising and
super-resolution. However, for practical use on mobile devices these networks
often require too many FLOPs, and reducing the FLOPs of a convolution layer
also reduces its parameter count. This is problematic in view of the recent
finding that heavily over-parameterized neural networks are often the ones that
generalize best. In this paper we propose to use HyperNetworks to break the
fixed ratio of FLOPs to parameters of standard convolutions. This allows us to
exceed previous state-of-the-art architectures in SSIM and MS-SSIM on the
Zurich RAW-to-DSLR (ZRR) dataset at > 10x reduced FLOP-count. On ZRR we
further observe generalization curves consistent with 'double-descent' behavior
at fixed FLOP-count, in the large image limit. Finally we demonstrate the same
technique can be applied to an existing network (VDN) to reduce its
computational cost while maintaining fidelity on the Smartphone Image Denoising
Dataset (SIDD). Code for key functions is given in the appendix.
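The paper states that code for key functions is given in its appendix; what follows is only a hedged illustration of the core idea, not the authors' implementation. A hypernetwork is a small network whose output is the weights of another layer, so its parameter count can grow (the overparametrization in the title) while the FLOPs of the generated convolution stay fixed. A minimal PyTorch sketch, with all names and sizes (`HyperConv2d`, `z_dim`, `hidden`) chosen for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, k=3, z_dim=16, hidden=256):
        super().__init__()
        self.shape = (out_ch, in_ch, k, k)
        self.z = nn.Parameter(torch.randn(z_dim))   # learned layer embedding
        # Overparametrization knob: 'hidden' adds trainable parameters
        # without changing the FLOPs of the generated convolution below.
        self.hyper = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, out_ch * in_ch * k * k),
        )

    def forward(self, x):
        w = self.hyper(self.z).view(self.shape)     # generate the kernel
        return F.conv2d(x, w, padding=self.shape[-1] // 2)

x = torch.randn(1, 8, 32, 32)
y = HyperConv2d(8, 8, hidden=1024)(x)   # same conv FLOPs, 4x hyper params
print(y.shape)                          # torch.Size([1, 8, 32, 32])
```

Note that the weight-generation cost is independent of the input's spatial size, so it becomes negligible relative to the convolution itself in the large image limit, consistent with the abstract's framing.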
Related papers
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights on-the-fly by a small amount proportional to their magnitude.
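A hedged sketch of what such a soft shrinkage step could look like (my reading of the summary, not the authors' code): weights below a magnitude percentile are not zeroed outright but decayed on-the-fly by a small amount proportional to their own magnitude.

```python
import torch

def soft_shrink_(weight: torch.Tensor, sparsity: float = 0.5, rate: float = 0.1):
    """In-place soft shrinkage of the least-important weights."""
    flat = weight.abs().flatten()
    k = int(sparsity * flat.numel())
    if k == 0:
        return weight
    threshold = flat.kthvalue(k).values        # magnitude percentile
    unimportant = weight.abs() <= threshold
    # Shrink by an amount proportional to each weight's own magnitude.
    weight[unimportant] *= (1.0 - rate)
    return weight

w = torch.randn(64, 64, 3, 3)
for _ in range(100):                           # applied every training iteration
    soft_shrink_(w)                            # small weights decay toward zero
print((w.abs() < 1e-3).float().mean())         # growing effective sparsity
```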
arXiv Detail & Related papers (2023-03-16T21:06:13Z)
- Real-Time Image Demoireing on Mobile Devices [59.59997851375429]
We propose a dynamic demoireing acceleration method (DDA) toward real-time deployment on mobile devices.
Our motivation stems from the simple yet universal fact that moire patterns are often unevenly distributed across an image.
Our method can drastically reduce the inference time, leading to a real-time image demoireing on mobile devices.
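The summary gives only the intuition, so the following is a generic sketch of spatially dynamic compute in that spirit (not DDA itself): each patch is routed through a heavy or a cheap branch depending on a per-patch moire score; `DynamicPatchNet` and the score function are hypothetical.

```python
import torch
import torch.nn as nn

class DynamicPatchNet(nn.Module):
    def __init__(self, ch=3, patch=32):
        super().__init__()
        self.patch = patch
        self.heavy = nn.Sequential(nn.Conv2d(ch, 64, 3, padding=1), nn.ReLU(),
                                   nn.Conv2d(64, ch, 3, padding=1))
        self.cheap = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x, score_fn):
        p = self.patch
        out = x.clone()
        for i in range(0, x.shape[2], p):
            for j in range(0, x.shape[3], p):
                tile = x[:, :, i:i+p, j:j+p]
                # Spend compute only where the moire score is high.
                branch = self.heavy if score_fn(tile) > 0.5 else self.cheap
                out[:, :, i:i+p, j:j+p] = branch(tile)
        return out

# Hypothetical score: mean deviation as a crude high-frequency proxy.
score = lambda t: (t - t.mean()).abs().mean()
y = DynamicPatchNet()(torch.randn(1, 3, 128, 128), score)
```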
arXiv Detail & Related papers (2023-02-04T15:42:42Z)
- Hybrid Pixel-Unshuffled Network for Lightweight Image Super-Resolution [64.54162195322246]
Convolutional neural networks (CNNs) have achieved great success on image super-resolution (SR).
Most deep CNN-based SR models require massive computation to obtain high performance.
We propose a novel Hybrid Pixel-Unshuffled Network (HPUN) by introducing an efficient and effective downsampling module into the SR task.
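As a small sketch of the downsampling primitive the summary names (HPUN's exact module is in the paper and not reproduced here): pixel-unshuffle losslessly trades spatial resolution for channels, so a following convolution runs on a smaller feature map.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 16, 64, 64)
d = F.pixel_unshuffle(x, downscale_factor=2)    # -> (1, 64, 32, 32), no information lost
conv = nn.Conv2d(64, 64, 3, padding=1)          # runs at 1/4 the spatial cost
y = F.pixel_shuffle(conv(d), upscale_factor=2)  # back to (1, 16, 64, 64)
print(d.shape, y.shape)
```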
arXiv Detail & Related papers (2022-03-16T20:10:41Z)
- Image Superresolution using Scale-Recurrent Dense Network [30.75380029218373]
Recent advances in the design of convolutional neural networks (CNNs) have yielded significant improvements in the performance of image super-resolution (SR).
We propose a scale-recurrent SR architecture built upon units containing a series of dense connections within a residual block (Residual Dense Blocks, RDBs).
Our scale-recurrent design delivers competitive performance for higher scale factors while being parametrically more efficient compared to current state-of-the-art approaches.
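A minimal Residual Dense Block sketch, assuming the standard RDB design of dense intra-block connections plus a local residual; the paper's exact growth rate and depth may differ.

```python
import torch
import torch.nn as nn

class RDB(nn.Module):
    def __init__(self, ch=64, growth=32, layers=4):
        super().__init__()
        # Each conv sees the concatenation of all earlier features.
        self.convs = nn.ModuleList(
            nn.Conv2d(ch + i * growth, growth, 3, padding=1) for i in range(layers))
        self.fuse = nn.Conv2d(ch + layers * growth, ch, 1)   # local feature fusion

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return x + self.fuse(torch.cat(feats, dim=1))        # local residual

y = RDB()(torch.randn(1, 64, 48, 48))   # shape preserved: (1, 64, 48, 48)
```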
arXiv Detail & Related papers (2022-01-28T09:18:43Z)
- Asymmetric CNN for image super-resolution [102.96131810686231]
Deep convolutional neural networks (CNNs) have been widely applied for low-level vision over the past five years.
We propose an asymmetric CNN (ACNet) comprising an asymmetric block (AB), a memory enhancement block (MEB) and a high-frequency feature enhancement block (HFFEB) for image super-resolution.
Our ACNet can effectively address single image super-resolution (SISR), blind SISR, and blind SISR with unknown noise.
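A hedged sketch of the asymmetric-convolution idea behind the AB (the full block and the MEB/HFFEB are not reproduced here): a square k x k kernel is replaced by cheaper 1 x k and k x 1 kernels.

```python
import torch
import torch.nn as nn

class AsymmetricConv(nn.Module):
    def __init__(self, ch=64, k=3):
        super().__init__()
        self.horizontal = nn.Conv2d(ch, ch, (1, k), padding=(0, k // 2))
        self.vertical = nn.Conv2d(ch, ch, (k, 1), padding=(k // 2, 0))

    def forward(self, x):
        # Two 1-D convolutions: roughly 2/k the parameters and FLOPs
        # of a single k x k convolution at the same channel width.
        return self.vertical(self.horizontal(x))

y = AsymmetricConv()(torch.randn(1, 64, 32, 32))   # (1, 64, 32, 32)
```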
arXiv Detail & Related papers (2021-03-25T07:10:46Z)
- Learning Frequency-aware Dynamic Network for Efficient Super-Resolution [56.98668484450857]
This paper explores a novel frequency-aware dynamic network for dividing the input into multiple parts according to its coefficients in the discrete cosine transform (DCT) domain.
In practice, the high-frequency part is processed with expensive operations and the lower-frequency part is assigned cheap operations to relieve the computational burden.
Experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed for various SISR neural architectures.
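A rough sketch of the frequency split as I read the summary (not the paper's code): a DCT mask separates low- and high-frequency components so that only the high-frequency part needs the expensive branch.

```python
import numpy as np
from scipy.fft import dctn, idctn

def frequency_split(img: np.ndarray, cutoff: int = 8):
    """Split a (H, W) image into low- and high-frequency parts via DCT."""
    coeffs = dctn(img, norm="ortho")
    mask = np.zeros_like(coeffs)
    mask[:cutoff, :cutoff] = 1.0            # keep only low DCT frequencies
    low = idctn(coeffs * mask, norm="ortho")
    high = img - low                        # residual high frequencies
    return low, high                        # low -> cheap ops, high -> expensive ops

img = np.random.rand(64, 64)
low, high = frequency_split(img)
assert np.allclose(low + high, img)         # the split is exact by construction
```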
arXiv Detail & Related papers (2021-03-15T12:54:26Z)
- A Deeper Look into Convolutions via Pruning [9.89901717499058]
Modern architectures contain a very small number of fully-connected layers, often at the end, after multiple layers of convolutions.
Although this strategy already reduces the number of parameters, most of the convolutions can be eliminated as well, without suffering any loss in recognition performance.
In this work, we use the matrix characteristics based on eigenvalues in addition to the classical weight-based importance assignment approach for pruning to shed light on the internal mechanisms of a widely used family of CNNs.
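A hedged illustration of an eigenvalue-based importance signal (the paper's exact criterion may differ): each layer's filters are flattened into a matrix and the spectrum of its Gram matrix indicates how much redundancy pruning could exploit.

```python
import numpy as np

def spectral_importance(conv_weight: np.ndarray) -> np.ndarray:
    """conv_weight: (out_ch, in_ch, k, k) -> eigenvalue spectrum, largest first."""
    W = conv_weight.reshape(conv_weight.shape[0], -1)   # (out_ch, in_ch*k*k)
    gram = W @ W.T                                      # symmetric PSD matrix
    eigvals = np.linalg.eigvalsh(gram)                  # real, ascending order
    return eigvals[::-1]

w = np.random.randn(64, 32, 3, 3)
spectrum = spectral_importance(w)
# A fast-decaying spectrum suggests many filters are redundant and could be
# pruned with little loss; a flat spectrum suggests the opposite.
print(spectrum[:5] / spectrum.sum())
```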
arXiv Detail & Related papers (2021-02-04T18:55:03Z)
- Efficient Integer-Arithmetic-Only Convolutional Neural Networks [87.01739569518513]
We find that the accuracy decline of quantized networks stems from activation quantization, and accordingly replace the conventional ReLU with a Bounded ReLU.
Our integer networks achieve performance equivalent to the corresponding floating-point networks (FPNs), but have only 1/4 the memory cost and run 2x faster on modern GPUs.
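The Bounded ReLU named in the summary, sketched in PyTorch: clipping the activation to a fixed range keeps its dynamic range known in advance, which is what makes integer activation quantization well behaved. With `bound = 6` this coincides with PyTorch's built-in `nn.ReLU6`.

```python
import torch

def bounded_relu(x: torch.Tensor, bound: float = 6.0) -> torch.Tensor:
    # Clip to [0, bound] so the activation maps onto a fixed integer grid.
    return torch.clamp(x, min=0.0, max=bound)

x = torch.randn(4) * 10
print(bounded_relu(x))   # values limited to [0, 6], ready for uint8 scaling
```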
arXiv Detail & Related papers (2020-06-21T08:23:03Z)
- A Light-Weighted Convolutional Neural Network for Bitemporal SAR Image Change Detection [40.58864817923371]
We propose a lightweight neural network to reduce the computational and spatial complexity.
In the proposed network, we replace normal convolutional layers with bottleneck layers that keep the same number of channels between input and output.
We verify our lightweight neural network on four sets of bitemporal SAR images.
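A minimal bottleneck-layer sketch matching the summary's description (channel counts and the squeeze ratio are illustrative): the input and output channel counts match, but the 3x3 convolution runs on a reduced number of channels, cutting parameters and FLOPs.

```python
import torch
import torch.nn as nn

def bottleneck(ch: int, squeeze: int = 4) -> nn.Sequential:
    mid = ch // squeeze
    return nn.Sequential(
        nn.Conv2d(ch, mid, 1), nn.ReLU(),             # squeeze channels
        nn.Conv2d(mid, mid, 3, padding=1), nn.ReLU(), # cheap spatial conv
        nn.Conv2d(mid, ch, 1),                        # restore channel count
    )

y = bottleneck(64)(torch.randn(1, 64, 32, 32))        # (1, 64, 32, 32)
```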
arXiv Detail & Related papers (2020-05-29T04:01:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.