Content-aware Scalable Deep Compressed Sensing
- URL: http://arxiv.org/abs/2207.09313v1
- Date: Tue, 19 Jul 2022 14:59:14 GMT
- Title: Content-aware Scalable Deep Compressed Sensing
- Authors: Bin Chen and Jian Zhang
- Abstract summary: We present a novel content-aware scalable network dubbed CASNet to address image compressed sensing problems.
We first adopt a data-driven saliency detector to evaluate the importance of different image regions and propose a saliency-based block ratio aggregation (BRA) strategy for sampling rate allocation.
To accelerate training convergence and improve network robustness, we propose an SVD-based initialization scheme and a random transformation enhancement (RTE) strategy.
- Score: 8.865549833627794
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To more efficiently address image compressed sensing (CS) problems, we
present a novel content-aware scalable network dubbed CASNet which collectively
achieves adaptive sampling rate allocation, fine granular scalability and
high-quality reconstruction. We first adopt a data-driven saliency detector to
evaluate the importance of different image regions and propose a
saliency-based block ratio aggregation (BRA) strategy for sampling rate
allocation. A unified learnable generating matrix is then developed to produce a
sampling matrix of any CS ratio with an ordered structure. Being equipped with
the optimization-inspired recovery subnet guided by saliency information and a
multi-block training scheme preventing blocking artifacts, CASNet jointly
reconstructs the image blocks sampled at various sampling rates with one single
model. To accelerate training convergence and improve network robustness, we
propose an SVD-based initialization scheme and a random transformation
enhancement (RTE) strategy, which are extensible without introducing extra
parameters. All the CASNet components can be combined and learned end-to-end.
We further provide a four-stage implementation for evaluation and practical
deployments. Experiments demonstrate that CASNet outperforms other CS networks
by a large margin, validating the collaboration and mutual supports among its
components and strategies. Codes are available at
https://github.com/Guaishou74851/CASNet.
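For readers who want a concrete picture of the two mechanisms the abstract highlights, the hypothetical PyTorch sketch below illustrates (i) a saliency-based block ratio aggregation (BRA) step that turns per-block saliency scores into per-block measurement counts under a fixed average sampling rate, and (ii) a single learnable generating matrix whose leading rows are reused for every CS ratio, which is what gives the sampling its ordered, scalable structure. The names, shapes, allocation rule, and SVD-based initialization shown here are illustrative assumptions, not the authors' released implementation (see the GitHub link above for that).

```python
# Hypothetical sketch of BRA-style rate allocation and ordered, scalable sampling.
# Shapes, names, and the allocation rule are assumptions; the released CASNet code differs.
import torch
import torch.nn as nn

B = 32          # block size in pixels (assumed)
N = B * B       # dimension of one vectorized block


class ScalableSampler(nn.Module):
    """One learnable generating matrix; a CS ratio r uses only its first round(r*N) rows."""

    def __init__(self, n=N):
        super().__init__()
        # SVD-based initialization (in the spirit of the abstract): take the orthonormal
        # factor of a random Gaussian matrix so the leading rows are well conditioned.
        u, _, vh = torch.linalg.svd(torch.randn(n, n), full_matrices=False)
        self.A = nn.Parameter(u @ vh)                 # n x n generating matrix

    def forward(self, block_vec, n_meas):
        # block_vec: (n,) vectorized image block; n_meas: measurements for this block
        return self.A[:n_meas] @ block_vec            # y = A[:m] x


def bra_allocate(saliency, target_ratio, n=N):
    """Saliency-based block ratio aggregation (sketch): split the total measurement
    budget over blocks in proportion to per-block saliency. Exact budget matching,
    minimum/maximum ratios, and block reassembly are omitted for brevity."""
    budget = int(round(target_ratio * n * saliency.numel()))    # total measurements
    weights = saliency / saliency.sum().clamp_min(1e-8)
    return torch.round(weights * budget).long().clamp(1, n)     # per-block counts


# Toy usage: sample a 96x96 image split into 3x3 blocks at an average ratio of 10%.
img = torch.rand(96, 96)
blocks = img.unfold(0, B, B).unfold(1, B, B).reshape(-1, N)     # (9, N) vectorized blocks
saliency = blocks.var(dim=1) + 1e-6          # crude stand-in for a learned saliency detector
sampler = ScalableSampler()
n_meas = bra_allocate(saliency, target_ratio=0.10)
measurements = [sampler(x, int(m)) for x, m in zip(blocks, n_meas)]
```

Because every ratio reuses the leading rows of the same matrix, a single trained sampler covers all sampling rates. A generic sketch of the soft-thresholding step that optimization-inspired recovery subnets of this kind typically unroll follows the related-papers list below.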
Related papers
- Deep Network for Image Compressed Sensing Coding Using Local Structural Sampling [37.10939114542612]
We propose a new CNN-based image CS coding framework using local structural sampling (dubbed CSCNet).
In the proposed framework, a new local structural sampling matrix is first developed in place of the Gaussian random matrix (GRM).
Measurements with high correlations are produced, which are then coded into the final bitstreams by a third-party image codec.
arXiv Detail & Related papers (2024-02-29T12:43:28Z)
- Early Fusion of Features for Semantic Segmentation [10.362589129094975]
This paper introduces a novel segmentation framework that integrates a classifier network with a reverse HRNet architecture for efficient image segmentation.
Our methodology is rigorously tested across several benchmark datasets including Mapillary Vistas, Cityscapes, CamVid, COCO, and PASCAL-VOC2012.
The results demonstrate the effectiveness of our proposed model in achieving high segmentation accuracy, indicating its potential for various applications in image analysis.
arXiv Detail & Related papers (2024-02-08T22:58:06Z)
- MB-RACS: Measurement-Bounds-based Rate-Adaptive Image Compressed Sensing Network [65.1004435124796]
We propose a Measurement-Bounds-based Rate-Adaptive Image Compressed Sensing Network (MB-RACS) framework.
Our experiments demonstrate that the proposed MB-RACS method surpasses current leading methods.
arXiv Detail & Related papers (2024-01-19T04:40:20Z)
- MsDC-DEQ-Net: Deep Equilibrium Model (DEQ) with Multi-scale Dilated Convolution for Image Compressive Sensing (CS) [0.0]
Compressive sensing (CS) is a technique that enables the recovery of sparse signals using fewer measurements than traditional sampling methods.
We develop an interpretable and concise neural network model for reconstructing natural images using CS.
The model, called MsDC-DEQ-Net, exhibits competitive performance compared to state-of-the-art network-based methods.
arXiv Detail & Related papers (2024-01-05T16:25:58Z)
- Binarized Spectral Compressive Imaging [59.18636040850608]
Existing deep learning models for hyperspectral image (HSI) reconstruction achieve good performance but require powerful hardware with enormous memory and computational resources.
We propose a novel method, Binarized Spectral-Redistribution Network (BiSRNet).
BiSRNet is derived by using the proposed techniques to binarize the base model.
arXiv Detail & Related papers (2023-05-17T15:36:08Z)
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights on-the-fly by a small amount proportional to the magnitude scale.
arXiv Detail & Related papers (2023-03-16T21:06:13Z)
- Pushing the Efficiency Limit Using Structured Sparse Convolutions [82.31130122200578]
We propose Structured Sparse Convolution (SSC), which leverages the inherent structure in images to reduce the parameters in the convolutional filter.
We show that SSC is a generalization of commonly used layers (depthwise, groupwise and pointwise convolution) in efficient architectures.
Architectures based on SSC achieve state-of-the-art performance compared to baselines on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet classification benchmarks.
arXiv Detail & Related papers (2022-10-23T18:37:22Z)
- A new perspective on probabilistic image modeling [92.89846887298852]
We present a new probabilistic approach for image modeling capable of density estimation, sampling and tractable inference.
The resulting Deep Convolutional Gaussian Mixture Models (DCGMMs) can be trained end-to-end by SGD from random initial conditions, much like CNNs.
We show that DCGMMs compare favorably to several recent PC and SPN models in terms of inference, classification and sampling.
arXiv Detail & Related papers (2022-03-21T14:53:57Z)
- i-SpaSP: Structured Neural Pruning via Sparse Signal Recovery [11.119895959906085]
We propose a novel structured pruning algorithm for neural networks: iterative Sparse Structured Pruning, dubbed i-SpaSP.
i-SpaSP operates by identifying a larger set of important parameter groups within a network that contribute most to the residual between pruned and dense network output.
It is shown to discover high-performing sub-networks and improve upon the pruning efficiency of provable baseline methodologies by several orders of magnitude.
arXiv Detail & Related papers (2021-12-07T05:26:45Z)
- Adversarial Feature Augmentation and Normalization for Visual Recognition [109.6834687220478]
Recent advances in computer vision take advantage of adversarial data augmentation to ameliorate the generalization ability of classification models.
Here, we present an effective and efficient alternative that advocates adversarial augmentation on intermediate feature embeddings.
We validate the proposed approach across diverse visual recognition tasks with representative backbone networks.
arXiv Detail & Related papers (2021-03-22T20:36:34Z)
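The "optimization-inspired recovery subnet" in the main abstract, like most unrolled CS reconstruction networks, is built around a classical proximal-gradient iteration for sparse recovery, and the same soft-shrinkage operator also appears in several entries above (e.g., the iterative soft shrinkage pruning method). The snippet below is a generic textbook sketch of plain ISTA on a toy problem, not code from CASNet or from any work listed above; the step size, regularization weight, and problem sizes are illustrative assumptions.

```python
# Generic ISTA (iterative soft-thresholding) for sparse recovery from few measurements.
# Textbook sketch with assumed toy dimensions; not taken from any paper listed above.
import numpy as np


def soft_threshold(v, t):
    """Proximal operator of the l1 norm: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)


def ista(A, y, lam=0.05, n_iter=200):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by proximal gradient descent."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - y), lam * step)
    return x


# Toy demo: approximately recover a 20-sparse signal of length 256 from 96 measurements.
rng = np.random.default_rng(0)
n, m, k = 256, 96, 20
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)     # random Gaussian sampling matrix
y = A @ x_true
x_hat = ista(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Unrolled CS networks such as the recovery subnets discussed above typically keep this gradient-plus-proximal structure but replace the fixed soft-threshold with learned, feature-conditioned modules.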