LR-CSNet: Low-Rank Deep Unfolding Network for Image Compressive Sensing
- URL: http://arxiv.org/abs/2212.09088v1
- Date: Sun, 18 Dec 2022 13:54:11 GMT
- Title: LR-CSNet: Low-Rank Deep Unfolding Network for Image Compressive Sensing
- Authors: Tianfang Zhang, Lei Li, Christian Igel, Stefan Oehmcke, Fabian
Gieseke, Zhenming Peng
- Abstract summary: Deep unfolding networks (DUNs) have proven to be a viable approach to compressive sensing (CS).
In this work, we propose a DUN called low-rank CS network (LR-CSNet) for natural image CS.
Our experiments on three widely considered datasets demonstrate the promising performance of LR-CSNet.
- Score: 19.74767410530179
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep unfolding networks (DUNs) have proven to be a viable approach to
compressive sensing (CS). In this work, we propose a DUN called low-rank CS
network (LR-CSNet) for natural image CS. Real-world image patches are often
well-represented by low-rank approximations. LR-CSNet exploits this property by
adding a low-rank prior to the CS optimization task. We derive a corresponding
iterative optimization procedure using variable splitting, which is then
translated to a new DUN architecture. The architecture uses low-rank generation
modules (LRGMs), which learn low-rank matrix factorizations, as well as
gradient descent and proximal mappings (GDPMs), which are proposed to extract
high-frequency features to refine image details. In addition, the deep features
generated at each reconstruction stage in the DUN are transferred between
stages to boost the performance. Our extensive experiments on three widely
considered datasets demonstrate the promising performance of LR-CSNet compared
to state-of-the-art methods in natural image CS.
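The abstract describes the architecture only at a high level; as a rough illustration, the PyTorch sketch below shows what one unfolded reconstruction stage of this kind can look like: a gradient-descent step on the measurement-fidelity term, a low-rank generation module that predicts a rank-constrained factorization U V^T of the patch, and a small learned proximal refinement. All module names, layer sizes, the rank, and the exact update rule are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of one deep-unfolding stage in the spirit of LR-CSNet.
# Names (LowRankGen, UnfoldingStage), the rank, and all layer sizes are assumptions;
# see the paper / official code for the actual architecture.
import torch
import torch.nn as nn

class LowRankGen(nn.Module):
    """Predicts a low-rank approximation L = U @ V^T of an n x n image patch."""
    def __init__(self, n=33, rank=4):
        super().__init__()
        self.n, self.rank = n, rank
        self.to_u = nn.Linear(n * n, n * rank)
        self.to_v = nn.Linear(n * n, n * rank)

    def forward(self, x):                      # x: (B, n*n) flattened patches
        B = x.shape[0]
        U = self.to_u(x).view(B, self.n, self.rank)
        V = self.to_v(x).view(B, self.n, self.rank)
        return torch.bmm(U, V.transpose(1, 2)).view(B, -1)   # rank <= r by construction

class UnfoldingStage(nn.Module):
    """One stage: gradient step on ||Phi x - y||^2, then low-rank and proximal updates."""
    def __init__(self, n=33, rank=4):
        super().__init__()
        self.rho = nn.Parameter(torch.tensor(0.5))   # learned step size
        self.mu = nn.Parameter(torch.tensor(0.1))    # learned low-rank weight
        self.lrgm = LowRankGen(n, rank)
        self.prox = nn.Sequential(                   # small learned proximal mapping
            nn.Linear(n * n, n * n), nn.ReLU(), nn.Linear(n * n, n * n))

    def forward(self, x, y, Phi):                    # Phi: (m, n*n), y: (B, m)
        grad = (x @ Phi.t() - y) @ Phi               # gradient of 0.5*||Phi x - y||^2
        x = x - self.rho * grad                      # gradient descent step
        L = self.lrgm(x)                             # low-rank estimate of the patch
        x = x + self.mu * (L - x)                    # pull x toward its low-rank part
        return x + self.prox(x)                      # residual proximal refinement

# Usage: m = 109 measurements of 33x33 patches (CS ratio ~10%)
Phi = torch.randn(109, 33 * 33) / 33
x_true = torch.rand(8, 33 * 33)
y = x_true @ Phi.t()
x = y @ Phi                                          # simple initialization x0 = Phi^T y
for stage in [UnfoldingStage() for _ in range(3)]:   # a 3-stage unfolded network
    x = stage(x, y, Phi)
```

Because L is formed as the product of two n x r factors, its rank is at most r by construction, which is how an LRGM-style module can impose a low-rank prior without an explicit SVD.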
Related papers
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- Efficient Model Agnostic Approach for Implicit Neural Representation Based Arbitrary-Scale Image Super-Resolution [5.704360536038803]
Single image super-resolution (SISR) has experienced significant advancements, primarily driven by deep convolutional networks.
Traditional networks are limited to upscaling images by a fixed factor, which has motivated the use of implicit neural functions for generating arbitrarily scaled images.
We introduce a novel and efficient framework, the Mixture of Experts Implicit Super-Resolution (MoEISR), which enables super-resolution at arbitrary scales.
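As a generic illustration of the implicit-neural-function idea mentioned above (not of MoEISR's mixture-of-experts design), the sketch below queries encoder features at continuous coordinates with a small MLP, so the same model can render any output resolution; all names and sizes are assumptions.

```python
# Generic arbitrary-scale SR with an implicit neural function: a decoder MLP maps
# (interpolated feature, continuous coordinate) -> RGB, so any output resolution
# can be queried. This is not the MoEISR architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitDecoder(nn.Module):
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3))

    def forward(self, feat, coords):
        # feat: (B, C, h, w) encoder features; coords: (B, N, 2) in [-1, 1]
        sampled = F.grid_sample(feat, coords.unsqueeze(1), align_corners=False)
        sampled = sampled.squeeze(2).permute(0, 2, 1)          # (B, N, C)
        return self.mlp(torch.cat([sampled, coords], dim=-1))  # (B, N, 3) RGB

# Query a 4.7x upscaling (any non-integer scale works the same way).
B, C, h, w = 1, 64, 16, 16
feat = torch.randn(B, C, h, w)                 # features from any LR encoder
H, W = int(h * 4.7), int(w * 4.7)
ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).view(1, -1, 2)          # (1, H*W, 2)
rgb = ImplicitDecoder()(feat, coords).view(1, H, W, 3)
```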
arXiv Detail & Related papers (2023-11-20T05:34:36Z)
- Binarized Spectral Compressive Imaging [59.18636040850608]
Existing deep learning models for hyperspectral image (HSI) reconstruction achieve good performance but require powerful hardware with enormous memory and computational resources.
We propose a novel method, the Binarized Spectral-Redistribution Network (BiSRNet).
BiSRNet is derived by using the proposed techniques to binarize the base model.
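The summary does not detail the binarization itself; the sketch below shows the standard ingredient such models build on, 1-bit weight binarization with a per-filter scale and a straight-through estimator, and is not BiSRNet's specific Binarized Spectral-Redistribution design.

```python
# Generic 1-bit weight binarization with a straight-through estimator (STE).
# Illustrates what "binarizing the base model" typically means for a conv layer;
# not the specific BiSRNet design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryConv2d(nn.Conv2d):
    def forward(self, x):
        w = self.weight
        scale = w.abs().mean(dim=(1, 2, 3), keepdim=True)   # per-filter scaling factor
        w_bin = scale * torch.sign(w)                       # weights constrained to {-s, +s}
        # Straight-through estimator: binarized weights in the forward pass,
        # gradients still flow to the real-valued weights.
        w_ste = w + (w_bin - w).detach()
        return F.conv2d(x, w_ste, self.bias, self.stride, self.padding)

x = torch.randn(1, 32, 64, 64)
y = BinaryConv2d(32, 32, 3, padding=1)(x)     # drop-in replacement for nn.Conv2d
```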
arXiv Detail & Related papers (2023-05-17T15:36:08Z)
- DCS-RISR: Dynamic Channel Splitting for Efficient Real-world Image Super-Resolution [15.694407977871341]
Real-world image super-resolution (RISR) has received increasing attention for improving the quality of SR images under unknown, complex degradations.
Existing methods rely on heavy SR models to enhance low-resolution (LR) images across different degradation levels.
We propose a novel Dynamic Channel Splitting scheme for efficient Real-world Image Super-Resolution, termed DCS-RISR.
arXiv Detail & Related papers (2022-12-15T04:34:57Z)
- Learning Detail-Structure Alternative Optimization for Blind Super-Resolution [69.11604249813304]
We propose an effective and kernel-free network, DSSR, which enables recurrent detail-structure alternative optimization for blind SR without incorporating a blur-kernel prior.
In our DSSR, a detail-structure modulation module (DSMM) is built to exploit the interaction and collaboration of image details and structures.
Our method achieves state-of-the-art performance compared to existing methods.
arXiv Detail & Related papers (2022-12-03T14:44:17Z)
- Pushing the Efficiency Limit Using Structured Sparse Convolutions [82.31130122200578]
We propose Structured Sparse Convolution (SSC), which leverages the inherent structure in images to reduce the parameters in the convolutional filter.
We show that SSC is a generalization of commonly used layers (depthwise, groupwise and pointwise convolution) in efficient architectures.
Architectures based on SSC achieve state-of-the-art performance compared to baselines on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet classification benchmarks.
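For reference, the layers named above are themselves special cases of a single grouped convolution, which the following sketch makes explicit; it illustrates only those standard layers, not SSC's structured sparsity pattern.

```python
# Depthwise, groupwise and pointwise convolutions are all special cases of a
# grouped convolution; SSC is described as a further generalization of these.
# (Sketch of the standard layers only, not of SSC itself.)
import torch
import torch.nn as nn

c_in, c_out = 32, 64
pointwise = nn.Conv2d(c_in, c_out, kernel_size=1)                      # 1x1, groups=1
groupwise = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1, groups=4) # 4 channel groups
depthwise = nn.Conv2d(c_in, c_in, kernel_size=3, padding=1, groups=c_in)

x = torch.randn(1, c_in, 56, 56)
for layer in (pointwise, groupwise, depthwise):
    print(layer(x).shape, sum(p.numel() for p in layer.parameters()))
```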
arXiv Detail & Related papers (2022-10-23T18:37:22Z)
- Effective Invertible Arbitrary Image Rescaling [77.46732646918936]
Invertible Neural Networks (INN) are able to increase upscaling accuracy significantly by optimizing the downscaling and upscaling cycle jointly.
In this work, a simple and effective invertible arbitrary rescaling network (IARN) is proposed to achieve arbitrary image rescaling by training only one model.
It is shown to achieve a state-of-the-art (SOTA) performance in bidirectional arbitrary rescaling without compromising perceptual quality in LR outputs.
arXiv Detail & Related papers (2022-09-26T22:22:30Z)
- Robust Deep Compressive Sensing with Recurrent-Residual Structural Constraints [0.0]
Existing deep compressive sensing (CS) methods either ignore adaptive online optimization or depend on costly iterative reconstruction.
This work explores a novel image CS framework with recurrent-residual structural constraint, termed as R$2$CS-NET.
As the first deep CS framework efficiently bridging adaptive online optimization, the R$2$CS-NET integrates the robustness of online optimization with the efficiency and nonlinear capacity of deep learning methods.
arXiv Detail & Related papers (2022-07-15T05:56:13Z)
- HerosNet: Hyperspectral Explicable Reconstruction and Optimal Sampling Deep Network for Snapshot Compressive Imaging [41.91463343106411]
Hyperspectral imaging is an essential imaging modality for a wide range of applications, especially in remote sensing, agriculture, and medicine.
Since existing hyperspectral cameras are either slow, expensive, or bulky, reconstructing hyperspectral images (HSIs) from a low-budget snapshot measurement has drawn wide attention.
Recent deep unfolding networks (DUNs) for spectral snapshot compressive imaging (SCI) have achieved remarkable success.
In this paper, we propose a novel Hyperspectral Explicable Reconstruction and Optimal Sampling deep Network for SCI, dubbed HerosNet, which includes several phases under the ISTA-unfolding framework.
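For readers unfamiliar with ISTA unfolding, the sketch below shows one classical ISTA iteration for a sparsity-regularized CS objective; in a DUN such as HerosNet each phase replaces the fixed step size, threshold, and sparsifying transform with learned modules. The objective and all variable names here are generic assumptions, not HerosNet's exact formulation.

```python
# One classical ISTA iteration for min_x 0.5*||Phi x - y||^2 + lam*||x||_1.
# ISTA-unfolding networks turn a fixed number of such iterations into network
# phases with learned parameters; this is only the classical step.
import numpy as np

def ista_step(x, y, Phi, rho=0.1, lam=0.01):
    grad = Phi.T @ (Phi @ x - y)            # gradient of the data-fidelity term
    z = x - rho * grad                      # gradient descent step
    return np.sign(z) * np.maximum(np.abs(z) - rho * lam, 0.0)   # soft-threshold

rng = np.random.default_rng(0)
Phi = rng.normal(size=(64, 256)) / np.sqrt(64)
x_true = np.zeros(256)
x_true[rng.choice(256, 10, replace=False)] = 1.0   # sparse ground truth
y = Phi @ x_true
x = Phi.T @ y                               # initialization
for _ in range(100):
    x = ista_step(x, y, Phi)
```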
arXiv Detail & Related papers (2021-12-12T13:42:49Z)
- Conditional Sequential Modulation for Efficient Global Image Retouching [45.99310982782054]
Photo retouching aims at enhancing the aesthetic visual quality of images that suffer from photographic defects such as over/under-exposure, poor contrast, and inharmonious saturation.
In this paper, we investigate some commonly used retouching operations and mathematically find that these pixel-independent operations can be approximated or formulated by multi-layer perceptrons (MLPs).
We propose an extremely lightweight framework, the Conditional Sequential Retouching Network (CSRNet), for efficient global image retouching.
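The pixel-independent observation above can be made concrete with a per-pixel MLP implemented as 1x1 convolutions, as in the generic sketch below; the layer widths are assumptions, and this is not the exact CSRNet design with its conditional modulation.

```python
# A per-pixel MLP implemented with 1x1 convolutions: every pixel is mapped by the
# same small network, which is what makes pixel-independent global retouching
# (exposure, contrast, saturation curves) so cheap. Generic sketch, not CSRNet.
import torch
import torch.nn as nn

retoucher = nn.Sequential(          # operates on each RGB pixel independently
    nn.Conv2d(3, 32, kernel_size=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=1), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=1), nn.Sigmoid())

img = torch.rand(1, 3, 256, 384)    # any resolution works; cost scales with pixel count
out = retoucher(img)
```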
arXiv Detail & Related papers (2020-09-22T08:32:04Z)
- Real Image Super Resolution Via Heterogeneous Model Ensemble using GP-NAS [63.48801313087118]
We propose a new method for image super-resolution using a deep residual network with dense skip connections.
The proposed method won the first place in all three tracks of the AIM 2020 Real Image Super-Resolution Challenge.
arXiv Detail & Related papers (2020-09-02T22:33:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.