Neural Sparse Representation for Image Restoration
- URL: http://arxiv.org/abs/2006.04357v1
- Date: Mon, 8 Jun 2020 05:15:17 GMT
- Title: Neural Sparse Representation for Image Restoration
- Authors: Yuchen Fan, Jiahui Yu, Yiqun Mei, Yulun Zhang, Yun Fu, Ding Liu,
Thomas S. Huang
- Abstract summary: Inspired by the robustness and efficiency of sparse coding based image restoration models, we investigate the sparsity of neurons in deep networks.
Our method structurally enforces sparsity constraints upon hidden neurons.
Experiments show that sparse representation is crucial in deep neural networks for multiple image restoration tasks.
- Score: 116.72107034624344
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inspired by the robustness and efficiency of sparse representation in sparse
coding based image restoration models, we investigate the sparsity of neurons
in deep networks. Our method structurally enforces sparsity constraints upon
hidden neurons. The sparsity constraints are favorable for gradient-based
learning algorithms and attachable to convolution layers in various networks.
Sparsity in neurons enables computation saving by only operating on non-zero
components without hurting accuracy. Meanwhile, our method can magnify
representation dimensionality and model capacity with negligible additional
computation cost. Experiments show that sparse representation is crucial in
deep neural networks for multiple image restoration tasks, including image
super-resolution, image denoising, and image compression artifacts removal.
Code is available at https://github.com/ychfan/nsr
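The abstract's core idea, structurally enforcing sparsity on hidden neurons so that computation only touches non-zero components, can be sketched as channel-group sparsity: split a layer's channels into groups and keep only the group with the largest response, zeroing the rest. This is a minimal hypothetical illustration of that kind of structural constraint, not the authors' exact formulation (see the linked repository for the real implementation); the function name and grouping scheme are assumptions.

```python
import numpy as np

def group_sparse_activation(x, num_groups):
    """Structurally sparsify hidden activations: split the channel
    axis into groups and keep only the group with the largest L2
    norm, zeroing all others. Hypothetical sketch of channel-group
    sparsity, not the paper's exact method."""
    c = x.shape[0]
    assert c % num_groups == 0, "channels must divide evenly into groups"
    # Reshape (C, H, W) -> (G, C/G, H, W)
    groups = x.reshape(num_groups, c // num_groups, *x.shape[1:])
    # L2 norm of each group over its channel and spatial axes
    norms = np.sqrt((groups ** 2).sum(axis=tuple(range(1, groups.ndim))))
    # Hard selection mask: 1 for the winning group, 0 elsewhere
    mask = np.zeros(num_groups)
    mask[np.argmax(norms)] = 1.0
    sparse = groups * mask.reshape(-1, *([1] * (groups.ndim - 1)))
    return sparse.reshape(x.shape)

# Example: 4 channels split into 2 groups; only one group survives,
# so downstream layers can skip the zeroed channels entirely.
x = np.arange(24, dtype=float).reshape(4, 2, 3)
y = group_sparse_activation(x, num_groups=2)
```

The hard argmax here is only for illustration; a trainable version would use a differentiable (e.g. softmax-based) selection so the constraint stays favorable for gradient-based learning, as the abstract notes.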
Related papers
- Extreme Compression of Adaptive Neural Images [6.646501936980895]
Implicit Neural Representations (INRs) and Neural Fields are a novel paradigm for signal representation, from images and audio to 3D scenes and videos.
We present a novel analysis on compressing neural fields, with the focus on images.
We also introduce Adaptive Neural Images (ANI), an efficient neural representation that enables adaptation to different inference or transmission requirements.
arXiv Detail & Related papers (2024-05-27T03:54:09Z)
- A Deep Learning-based Compression and Classification Technique for Whole Slide Histopathology Images [0.31498833540989407]
We build an ensemble of neural networks that enables a compressive autoencoder in a supervised fashion to retain a denser and more meaningful representation of the input histology images.
We test the compressed images using transfer learning-based classifiers and show that they provide promising accuracy and classification performance.
arXiv Detail & Related papers (2023-05-11T22:20:05Z)
- The Brain-Inspired Decoder for Natural Visual Image Reconstruction [4.433315630787158]
We propose a deep learning neural network architecture with biological properties to reconstruct visual image from spike trains.
Our model is an end-to-end decoder from neural spike trains to images.
Our results show that our method can effectively combine receptive field features to reconstruct images.
arXiv Detail & Related papers (2022-07-18T13:31:26Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z)
- Restormer: Efficient Transformer for High-Resolution Image Restoration [118.9617735769827]
Convolutional neural networks (CNNs) perform well at learning generalizable image priors from large-scale data, while Transformers have shown significant performance gains on natural language and high-level vision tasks.
Our model, named Restoration Transformer (Restormer), achieves state-of-the-art results on several image restoration tasks.
arXiv Detail & Related papers (2021-11-18T18:59:10Z)
- NAS-DIP: Learning Deep Image Prior with Neural Architecture Search [65.79109790446257]
Recent work has shown that the structure of deep convolutional neural networks can be used as a structured image prior.
We propose to search for neural architectures that capture stronger image priors.
We search for an improved network by leveraging an existing neural architecture search algorithm.
arXiv Detail & Related papers (2020-08-26T17:59:36Z)
- Pyramid Attention Networks for Image Restoration [124.34970277136061]
Self-similarity is an image prior widely used in image restoration algorithms.
Recent advanced deep convolutional neural network based methods for image restoration do not take full advantage of self-similarities.
We present a novel Pyramid Attention module for image restoration, which captures long-range feature correspondences from a multi-scale feature pyramid.
arXiv Detail & Related papers (2020-04-28T21:12:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.