A Lightweight Recurrent Learning Network for Sustainable Compressed
Sensing
- URL: http://arxiv.org/abs/2304.11674v1
- Date: Sun, 23 Apr 2023 14:54:15 GMT
- Title: A Lightweight Recurrent Learning Network for Sustainable Compressed
Sensing
- Authors: Yu Zhou, Yu Chen, Xiao Zhang, Pan Lai, Lei Huang, Jianmin Jiang
- Abstract summary: We propose a lightweight but effective deep neural network based on recurrent learning to achieve a sustainable CS system.
Our proposed model can achieve a better reconstruction quality than existing state-of-the-art CS algorithms.
- Score: 27.964167481909588
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recently, deep learning-based compressed sensing (CS) has achieved great
success in reducing the sampling and computational cost of sensing systems and
improving the reconstruction quality. These approaches, however, largely
overlook the issue of model complexity; they rely on complex structures
and task-specific operator designs, resulting in extensive storage and high
energy consumption in CS imaging systems. In this paper, we propose a
lightweight but effective deep neural network based on recurrent learning to
achieve a sustainable CS system; it requires a smaller number of parameters but
obtains high-quality reconstructions. Specifically, our proposed network
consists of an initial reconstruction sub-network and a residual reconstruction
sub-network. While the initial reconstruction sub-network has a hierarchical
structure to progressively recover the image, reducing the number of
parameters, the residual reconstruction sub-network facilitates recurrent
residual feature extraction via recurrent learning to perform both feature
fusion and deep reconstructions across different scales. In addition, we
demonstrate that, after the initial reconstruction, feature maps with reduced
sizes are sufficient to recover the residual information, and thus we achieve
a significant reduction in the amount of memory required. Extensive experiments
illustrate that our proposed model can achieve a better reconstruction quality
than existing state-of-the-art CS algorithms, and it also has a smaller number
of network parameters than these algorithms. Our source code is available at:
https://github.com/C66YU/CSRN.
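The two-stage design described in the abstract (a cheap initial reconstruction followed by recurrent, weight-shared residual refinement) can be illustrated with a toy NumPy sketch. The learned sub-networks are replaced here by hypothetical stand-ins: a transpose-based initial guess and a shared gradient step on the measurement residual; this is not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 16                                   # signal size, measurements (25% sampling rate)
phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random sampling matrix

x = rng.standard_normal(n)                      # toy "image" signal
y = phi @ x                                     # compressed measurements

# Initial reconstruction: a simple transpose mapping stands in for the
# paper's learned hierarchical initial-reconstruction sub-network.
x0 = phi.T @ y

# Recurrent residual refinement: the SAME update (same phi, same step) is
# applied at every iteration, mimicking the parameter reuse of recurrent
# learning. Each step corrects the estimate using the measurement residual
# (a gradient step on ||y - phi x||^2).
step = 0.1
xk = x0
for _ in range(20):
    residual = y - phi @ xk
    xk = xk + step * (phi.T @ residual)         # shared "weights" every iteration
```

The point of the sketch is parameter sharing: one small update rule reused across iterations replaces a deep stack of distinct layers, which is where the parameter savings of recurrent learning come from.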
Related papers
- MsDC-DEQ-Net: Deep Equilibrium Model (DEQ) with Multi-scale Dilated
Convolution for Image Compressive Sensing (CS) [0.0]
Compressive sensing (CS) is a technique that enables the recovery of sparse signals using fewer measurements than traditional sampling methods.
We develop an interpretable and concise neural network model for reconstructing natural images using CS.
The model, called MsDC-DEQ-Net, exhibits competitive performance compared to state-of-the-art network-based methods.
arXiv Detail & Related papers (2024-01-05T16:25:58Z) - SST-ReversibleNet: Reversible-prior-based Spectral-Spatial Transformer
for Efficient Hyperspectral Image Reconstruction [15.233185887461826]
A novel framework called the reversible-prior-based method is proposed.
ReversibleNet significantly outperforms state-of-the-art methods on simulated and real HSI datasets.
arXiv Detail & Related papers (2023-05-06T14:01:02Z) - Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights on-the-fly by a small amount proportional to their magnitude.
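The soft-shrinkage idea can be sketched in a few lines of NumPy; the percentile threshold and shrinkage factor below are hypothetical illustration values, and the "network" is reduced to a single weight vector rather than an actual super-resolution model.

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.standard_normal(100)      # weights of one randomly initialized layer (toy stand-in)
w0 = w.copy()                     # keep the initial weights for comparison

sparsity = 0.5                    # fraction of weights treated as unimportant
shrink = 0.1                      # per-iteration soft-shrinkage amount

for _ in range(30):               # one shrinkage pass per training iteration
    k = int(sparsity * w.size)
    thresh = np.sort(np.abs(w))[k]  # magnitude-percentile threshold
    small = np.abs(w) < thresh      # currently unimportant weights
    w[small] *= 1.0 - shrink        # shrink them proportionally to their magnitude
```

Unlike hard pruning, weights below the percentile decay gradually instead of being zeroed outright, so a weight can regain importance in a later iteration; that is the "soft" part of soft shrinkage.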
arXiv Detail & Related papers (2023-03-16T21:06:13Z) - JSRNN: Joint Sampling and Reconstruction Neural Networks for High
Quality Image Compressed Sensing [8.902545322578925]
The proposed framework comprises two sub-networks: a sampling sub-network and a reconstruction sub-network.
In the reconstruction sub-network, a cascade network combining stacked denoising autoencoder (SDA) and convolutional neural network (CNN) is designed to reconstruct signals.
This framework outperforms many other state-of-the-art methods, especially at low sampling rates.
arXiv Detail & Related papers (2022-11-11T02:20:30Z) - Structured Sparsity Learning for Efficient Video Super-Resolution [99.1632164448236]
We develop a structured pruning scheme called Structured Sparsity Learning (SSL) according to the properties of video super-resolution (VSR) models.
In SSL, we design pruning schemes for several key components in VSR models, including residual blocks, recurrent networks, and upsampling networks.
arXiv Detail & Related papers (2022-06-15T17:36:04Z) - Sparse-View CT Reconstruction using Recurrent Stacked Back Projection [3.91278924473622]
We introduce a direct-reconstruction method called Recurrent Stacked Back Projection (RSBP).
RSBP uses sequentially-acquired backprojections of individual views as input to a recurrent convolutional LSTM network.
We demonstrate that RSBP outperforms both post-processing of FBP images and basic MBIR, with a lower computational cost than MBIR.
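The sequential-input idea behind RSBP can be caricatured with a minimal recurrent update in NumPy; the convolutional LSTM and the real CT geometry are replaced by hypothetical toy stand-ins (random per-view "backprojections" and fixed scalar weights), so this shows only the data flow, not the actual network.

```python
import numpy as np

rng = np.random.default_rng(1)
n_views, img = 8, 16          # number of projection views, image side (toy sizes)

# Toy per-view backprojections: in RSBP these come from individual CT views
# as they are acquired.
backprojections = rng.standard_normal((n_views, img, img))

# A minimal recurrent update with weights shared across views, standing in
# for the paper's convolutional LSTM: the hidden state accumulates view
# information as projections arrive one at a time.
w_h, w_x = 0.9, 0.5           # hypothetical shared scalar "weights"
h = np.zeros((img, img))
for bp in backprojections:    # one recurrent step per acquired view
    h = np.tanh(w_h * h + w_x * bp)

recon = h                     # final hidden state serves as the reconstruction
```

Because the same weights process every view, the model size is independent of the number of views, which is what makes the recurrent formulation cheaper than unrolled per-view processing.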
arXiv Detail & Related papers (2021-12-09T15:44:35Z) - Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z) - Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete Convolutional Recurrent Neural Network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods with a smaller number of trainable parameters.
arXiv Detail & Related papers (2021-06-16T15:56:34Z) - Sub-Pixel Back-Projection Network For Lightweight Single Image
Super-Resolution [17.751425965791718]
We study reducing the number of parameters and computational cost of CNN-based SISR methods.
We introduce a novel network architecture for SISR, which strikes a good trade-off between reconstruction quality and low computational complexity.
arXiv Detail & Related papers (2020-08-03T18:15:16Z) - Attention Based Real Image Restoration [48.933507352496726]
Deep convolutional neural networks perform better on images containing synthetic degradations than on real degraded images.
This paper proposes a novel single-stage blind real image restoration network (R^2Net).
arXiv Detail & Related papers (2020-04-26T04:21:49Z) - Large-Scale Gradient-Free Deep Learning with Recursive Local
Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.