Scalable Deep Compressive Sensing
- URL: http://arxiv.org/abs/2101.08024v2
- Date: Fri, 22 Jan 2021 02:53:03 GMT
- Title: Scalable Deep Compressive Sensing
- Authors: Zhonghao Zhang and Yipeng Liu and Xingyu Cao and Fei Wen and Ce Zhu
- Abstract summary: Most existing deep learning methods train different models for different subsampling ratios, which brings additional hardware burden.
We develop a general framework named scalable deep compressive sensing (SDCS) for the scalable sampling and reconstruction (SSR) of all existing end-to-end-trained models.
Experimental results show that models with SDCS can achieve SSR without changing their structure while maintaining good performance, and SDCS outperforms other SSR methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has been applied to image compressive sensing (CS) for
enhanced reconstruction performance. However, most existing deep learning methods
train different models for different subsampling ratios, which brings an
additional hardware burden. In this paper, we develop a general framework named
scalable deep compressive sensing (SDCS) for the scalable sampling and
reconstruction (SSR) of all existing end-to-end-trained models. In the proposed
framework, images are measured and initialized linearly. Two sampling masks are introduced to
flexibly control the subsampling ratios used in sampling and reconstruction,
respectively. To make the reconstruction model adapt to any subsampling ratio,
a training strategy dubbed scalable training is developed. In scalable
training, the model is trained with the sampling matrix and the initialization
matrix at various subsampling ratios by integrating different sampling matrix
masks. Experimental results show that models with SDCS can achieve SSR without
changing their structure while maintaining good performance, and SDCS
outperforms other SSR methods.
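The masked sampling and linear initialization described in the abstract can be sketched as follows. This is an illustrative NumPy sketch under assumed details, not the authors' implementation: random matrices stand in for the learned sampling matrix `Phi` and initialization matrix `Q`, and the mask is assumed to select the first `m` rows to realize a given subsampling ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64        # signal (image block) dimension
m_max = 32    # rows of the full sampling matrix (maximum ratio 0.5)

# In SDCS these are trained end-to-end; random stand-ins are used here.
Phi = rng.standard_normal((m_max, n)) / np.sqrt(n)    # sampling matrix
Q = rng.standard_normal((n, m_max)) / np.sqrt(m_max)  # initialization matrix

def sample_and_init(x, ratio):
    """Measure x at the given subsampling ratio and form a linear initialization.

    A binary mask zeroes out all but the first m rows of Phi, so one trained
    model can serve any ratio up to m_max / n without structural changes.
    """
    m = int(round(ratio * n))
    mask = np.zeros(m_max)
    mask[:m] = 1.0
    y = (mask[:, None] * Phi) @ x   # masked measurement
    x0 = Q @ y                      # linear initialization for the reconstruction net
    return y, x0

x = rng.standard_normal(n)
y, x0 = sample_and_init(x, ratio=0.25)
```

Scalable training would repeat this with masks drawn at various ratios, so the reconstruction model sees all subsampling levels during training.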
Related papers
- Fine Structure-Aware Sampling: A New Sampling Training Scheme for Pixel-Aligned Implicit Models in Single-View Human Reconstruction [98.30014795224432]
We introduce Fine Structure-Aware Sampling (FSS) to train pixel-aligned implicit models for single-view human reconstruction.
FSS proactively adapts to the thickness and complexity of surfaces.
It also proposes a mesh thickness loss signal for pixel-aligned implicit models.
arXiv Detail & Related papers (2024-02-29T14:26:46Z) - SAM-DiffSR: Structure-Modulated Diffusion Model for Image Super-Resolution [49.205865715776106]
We propose the SAM-DiffSR model, which can utilize the fine-grained structure information from SAM in the process of sampling noise to improve the image quality without additional computational cost during inference.
Experimental results demonstrate the effectiveness of our proposed method, showcasing superior performance in suppressing artifacts and surpassing existing diffusion-based methods by up to 0.74 dB in PSNR on the DIV2K dataset.
arXiv Detail & Related papers (2024-02-27T01:57:02Z) - MsDC-DEQ-Net: Deep Equilibrium Model (DEQ) with Multi-scale Dilated
Convolution for Image Compressive Sensing (CS) [0.0]
Compressive sensing (CS) is a technique that enables the recovery of sparse signals using fewer measurements than traditional sampling methods.
We develop an interpretable and concise neural network model for reconstructing natural images using CS.
The model, called MsDC-DEQ-Net, exhibits competitive performance compared to state-of-the-art network-based methods.
arXiv Detail & Related papers (2024-01-05T16:25:58Z) - Improving the Stability and Efficiency of Diffusion Models for Content Consistent Super-Resolution [18.71638301931374]
The generative priors of pre-trained latent diffusion models (DMs) have demonstrated great potential to enhance the visual quality of image super-resolution (SR) results.
We propose to partition the generative SR process into two stages, where the DM is employed for reconstructing image structures and the GAN is employed for improving fine-grained details.
Once trained, our proposed method, namely content-consistent super-resolution (CCSR), allows flexible use of different diffusion steps in the inference stage without re-training.
arXiv Detail & Related papers (2023-12-30T10:22:59Z) - Optimizing Sampling Patterns for Compressed Sensing MRI with Diffusion Generative Models [75.52575380824051]
We present a learning method to optimize sub-sampling patterns for compressed sensing multi-coil MRI.
We use a single-step reconstruction based on the posterior mean estimate given by the diffusion model and the MRI measurement process.
Our method requires as few as five training images to learn effective sampling patterns.
arXiv Detail & Related papers (2023-06-05T22:09:06Z) - Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights by a small amount proportional to the magnitude scale on-the-fly.
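The core idea of soft shrinkage, as opposed to hard pruning, can be sketched as below. This is a hedged illustration under assumed details (the percentile threshold and shrink factor are hypothetical parameters, and a flat weight vector stands in for a network layer), not the ISS-P algorithm as published.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal(1000)  # stand-in for one layer's weights

def soft_shrink_step(w, percentile=20.0, shrink=0.9):
    """Scale the smallest-magnitude weights toward zero instead of hard-pruning.

    Shrunk weights stay in the network and remain trainable, so a weight
    that regains importance in later iterations can recover its magnitude.
    """
    thresh = np.percentile(np.abs(w), percentile)
    small = np.abs(w) < thresh
    out = w.copy()
    out[small] *= shrink
    return out

W1 = soft_shrink_step(W)
```

Repeating such a step over training iterations gradually drives unimportant weights toward zero while leaving the rest of the network untouched.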
arXiv Detail & Related papers (2023-03-16T21:06:13Z) - Effective Invertible Arbitrary Image Rescaling [77.46732646918936]
Invertible Neural Networks (INN) are able to increase upscaling accuracy significantly by optimizing the downscaling and upscaling cycle jointly.
In this work, a simple and effective invertible arbitrary rescaling network (IARN) is proposed to achieve arbitrary image rescaling by training only one model.
It is shown to achieve a state-of-the-art (SOTA) performance in bidirectional arbitrary rescaling without compromising perceptual quality in LR outputs.
arXiv Detail & Related papers (2022-09-26T22:22:30Z) - A theoretical framework for self-supervised MR image reconstruction using sub-sampling via variable density Noisier2Noise [0.0]
We use the Noisier2Noise framework to analytically explain the performance of Self-Supervised Learning via Data Undersampling (SSDU).
We propose partitioning the sampling set so that the subsets have the same type of distribution as the original sampling mask.
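The proposed partitioning, splitting the sampling set into subsets that follow the same type of distribution as the original mask, can be sketched as follows. This is an illustrative NumPy sketch under assumed details (a hypothetical 1D Gaussian variable-density mask and a second independent draw from the same density to split it), not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 1D variable-density sampling mask: denser near the k-space center.
n = 256
freq = np.arange(n) - n // 2
p = np.exp(-(freq / 40.0) ** 2) * 0.9 + 0.05  # per-location sampling probability
omega = rng.random(n) < p                     # original sampling set Omega

# Partition Omega by re-applying the *same* density, so each subset
# follows the same type of distribution as the full mask.
keep = rng.random(n) < p
theta = omega & keep      # subset used as network input
lam = omega & ~keep       # held-out subset used as the training target
```

Training then reconstructs the held-out locations `lam` from the input subset `theta`, with no fully sampled reference needed.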
arXiv Detail & Related papers (2022-05-20T16:19:23Z) - A Unifying Multi-sampling-ratio CS-MRI Framework With Two-grid-cycle Correction and Geometric Prior Distillation [7.643154460109723]
We propose a unifying deep unfolding multi-sampling-ratio CS-MRI framework, by merging advantages of model-based and deep learning-based methods.
Inspired by the multigrid algorithm, we first embed the CS-MRI optimization algorithm into a correction-distillation scheme.
We employ a condition module to adaptively learn the step length and noise level from the compressive sampling ratio at every stage.
arXiv Detail & Related papers (2022-05-14T13:36:27Z) - Flexible Style Image Super-Resolution using Conditional Objective [11.830754741007029]
We present a more efficient method to train a single adjustable SR model on various combinations of losses by taking advantage of multi-task learning.
Specifically, we optimize an SR model with a conditional objective during training, where the objective is a weighted sum of multiple perceptual losses at different feature levels.
At the inference phase, our trained model can generate locally different outputs conditioned on the style control map.
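The conditional objective, a weighted sum of perceptual losses at different feature levels, can be sketched as below. This is a hedged illustration: the function name, the weights, and the use of a plain MSE on synthetic feature arrays are assumptions standing in for real perceptual features from a trained network.

```python
import numpy as np

def conditional_loss(feats_sr, feats_hr, weights):
    """Weighted sum of per-level feature losses.

    The weight vector acts as the condition: changing it re-balances
    fidelity against perceptual quality without retraining a new model.
    """
    assert len(feats_sr) == len(feats_hr) == len(weights)
    return sum(w * np.mean((a - b) ** 2)
               for w, a, b in zip(weights, feats_sr, feats_hr))

# Two synthetic "feature levels" standing in for network activations.
feats_sr = [np.zeros(4), np.zeros(4)]
feats_hr = [np.ones(4), 2 * np.ones(4)]
loss = conditional_loss(feats_sr, feats_hr, weights=[0.5, 0.5])
```

At inference, the same conditioning mechanism lets the trained model trade off styles locally via a control map rather than a single global weight vector.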
arXiv Detail & Related papers (2022-01-13T11:39:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.