HINet: Half Instance Normalization Network for Image Restoration
- URL: http://arxiv.org/abs/2105.06086v1
- Date: Thu, 13 May 2021 05:25:01 GMT
- Title: HINet: Half Instance Normalization Network for Image Restoration
- Authors: Liangyu Chen, Xin Lu, Jie Zhang, Xiaojie Chu, Chengpeng Chen
- Abstract summary: We present a novel block: Half Instance Normalization Block (HIN Block), to boost the performance of image restoration networks.
Based on HIN Block, we design a simple and powerful multi-stage network named HINet, which consists of two subnetworks.
With the help of HIN Block, HINet surpasses the state-of-the-art (SOTA) on various image restoration tasks.
- Score: 11.788159823037601
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we explore the role of Instance Normalization in low-level
vision tasks. Specifically, we present a novel block: Half Instance
Normalization Block (HIN Block), to boost the performance of image restoration
networks. Based on HIN Block, we design a simple and powerful multi-stage
network named HINet, which consists of two subnetworks. With the help of HIN
Block, HINet surpasses the state-of-the-art (SOTA) on various image restoration
tasks. For image denoising, we exceed it by 0.11 dB and 0.28 dB in PSNR on the SIDD
dataset, with only 7.5% and 30% of its multiplier-accumulator operations
(MACs), and 6.8 times and 2.9 times speedup, respectively. For image deblurring, we
achieve comparable performance with 22.5% of its MACs and a 3.3 times speedup on the REDS
and GoPro datasets. For image deraining, we exceed it by 0.3 dB in PSNR on the
average result of multiple datasets with a 1.4 times speedup. With HINet, we won
1st place in the NTIRE 2021 Image Deblurring Challenge - Track 2: JPEG
Artifacts, with a PSNR of 29.70. The code is available at
https://github.com/megvii-model/HINet.
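The core idea of the HIN Block is to apply instance normalization to only half of the feature channels and leave the other half untouched, keeping content information while still benefiting from normalization. A minimal numpy sketch of that normalization split is below; the actual HIN Block in the paper additionally wraps this in convolutions, learnable affine parameters, and a residual connection, which this sketch omits.

```python
import numpy as np

def half_instance_norm(x, eps=1e-5):
    """Normalize the first half of the channels with instance normalization
    (per-sample, per-channel statistics over the spatial dimensions) and pass
    the second half through unchanged.

    x: feature map of shape (N, C, H, W).
    """
    c = x.shape[1] // 2
    first, second = x[:, :c], x[:, c:]
    # Instance norm: mean/variance over H and W, independently per sample and channel.
    mean = first.mean(axis=(2, 3), keepdims=True)
    var = first.var(axis=(2, 3), keepdims=True)
    normalized = (first - mean) / np.sqrt(var + eps)
    # Concatenate normalized and identity halves back along the channel axis.
    return np.concatenate([normalized, second], axis=1)
```

Because only half of the channels are normalized, the second half preserves the original per-image statistics (e.g. brightness and contrast), which low-level vision tasks are sensitive to.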
Related papers
- GAMA-IR: Global Additive Multidimensional Averaging for Fast Image Restoration [22.53813258871828]
We introduce an image restoration network that is both fast and yields excellent image quality.
The network is designed to minimize the latency and memory consumption when executed on a standard GPU.
We exceed the state-of-the-art result on real-world SIDD denoising by 0.11 dB, while being 2 to 10 times faster.
arXiv Detail & Related papers (2024-03-31T21:43:08Z)
- Rapid-INR: Storage Efficient CPU-free DNN Training Using Implicit Neural Representation [7.539498729072623]
Implicit Neural Representation (INR) is an innovative approach for representing complex shapes or objects without explicitly defining their geometry or surface structure.
Previous research has demonstrated the effectiveness of using neural networks as INR for image compression, showcasing comparable performance to traditional methods such as JPEG.
This paper introduces Rapid-INR, a novel approach that utilizes INR for encoding and compressing images, thereby accelerating neural network training in computer vision tasks.
arXiv Detail & Related papers (2023-06-29T05:49:07Z)
- RedBit: An End-to-End Flexible Framework for Evaluating the Accuracy of Quantized CNNs [9.807687918954763]
Convolutional Neural Networks (CNNs) have become the standard class of deep neural network for image processing, classification and segmentation tasks.
RedBit is an open-source framework that provides a transparent, easy-to-use interface to evaluate the effectiveness of different algorithms on network accuracy.
arXiv Detail & Related papers (2023-01-15T21:27:35Z)
- N2V2 -- Fixing Noise2Void Checkerboard Artifacts with Modified Sampling Strategies and a Tweaked Network Architecture [66.03918859810022]
We present two modifications to the vanilla N2V setup that both help to reduce the unwanted artifacts considerably.
We validate our modifications on a range of microscopy and natural image data.
arXiv Detail & Related papers (2022-11-15T21:12:09Z)
- Simple Baselines for Image Restoration [79.48718779396971]
We propose a simple baseline that exceeds the state-of-the-art (SOTA) methods and is computationally efficient.
We derive a Nonlinear Activation Free Network, namely NAFNet, from the baseline.
SOTA results are achieved on various challenging benchmarks, e.g. 33.69 dB PSNR on GoPro (for image deblurring), exceeding the previous SOTA 0.38 dB with only 8.4% of its computational costs; 40.30 dB PSNR on SIDD (for image denoising), exceeding the previous SOTA 0.28 dB with less than half of its computational costs.
arXiv Detail & Related papers (2022-04-10T12:48:38Z)
- NanoBatch DPSGD: Exploring Differentially Private learning on ImageNet with low batch sizes on the IPU [56.74644007407562]
We show that low batch sizes using group normalization on ResNet-50 can yield high accuracy and privacy on Graphcore IPUs.
This enables DPSGD training on ResNet-50 on ImageNet in just 6 hours on an IPU-POD16 system.
arXiv Detail & Related papers (2021-09-24T20:59:04Z)
- Efficient Integer-Arithmetic-Only Convolutional Neural Networks [87.01739569518513]
We replace the conventional ReLU with Bounded ReLU and find that the accuracy decline is due to activation quantization.
Our integer networks achieve performance equivalent to the corresponding full-precision networks (FPN), with only 1/4 of the memory cost and a 2x speedup on modern GPUs.
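Bounded ReLU simply clamps activations to a fixed interval so their range is known ahead of quantization; a minimal sketch (the bound of 6.0 here is an illustrative choice, not the paper's setting):

```python
import numpy as np

def bounded_relu(x, bound=6.0):
    """Clamp activations to [0, bound].

    Unlike plain ReLU, the output range is fixed, so an integer quantizer
    can map it to a known scale without observing the data first.
    """
    return np.clip(x, 0.0, bound)
```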
arXiv Detail & Related papers (2020-06-21T08:23:03Z)
- Improved Residual Networks for Image and Video Recognition [98.10703825716142]
Residual networks (ResNets) represent a powerful type of convolutional neural network (CNN) architecture.
We show consistent improvements in accuracy and learning convergence over the baseline.
Our proposed approach allows us to train extremely deep networks, while the baseline shows severe optimization issues.
arXiv Detail & Related papers (2020-04-10T11:09:50Z)
- Fixing the train-test resolution discrepancy: FixEfficientNet [98.64315617109344]
This paper provides an analysis of the performance of the EfficientNet image classifiers with several recent training procedures.
The resulting network, called FixEfficientNet, significantly outperforms the initial architecture with the same number of parameters.
arXiv Detail & Related papers (2020-03-18T14:22:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.