Sampling-Priors-Augmented Deep Unfolding Network for Robust Video
Compressive Sensing
- URL: http://arxiv.org/abs/2307.07291v1
- Date: Fri, 14 Jul 2023 12:05:14 GMT
- Title: Sampling-Priors-Augmented Deep Unfolding Network for Robust Video
Compressive Sensing
- Authors: Yuhao Huang, Gangrong Qu and Youran Ge
- Abstract summary: We propose a Sampling-Priors-Augmented Deep Unfolding Network (SPA-DUN) for efficient and robust VCS reconstruction.
Under the optimization-inspired deep unfolding framework, a lightweight and efficient U-net is exploited to downsize the model.
Experiments on both simulation and real datasets demonstrate that SPA-DUN is not only applicable for various sampling settings with one single model but also achieves SOTA performance with incredible efficiency.
- Score: 1.7372440481022124
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Video Compressed Sensing (VCS) aims to reconstruct multiple frames from one
single captured measurement, thus achieving high-speed scene recording with a
low-frame-rate sensor. Although there have been impressive advances in VCS
recently, those state-of-the-art (SOTA) methods also significantly increase
model complexity and suffer from poor generality and robustness, which means
that those networks need to be retrained to accommodate the new system. Such
limitations hinder the real-time imaging and practical deployment of models. In
this work, we propose a Sampling-Priors-Augmented Deep Unfolding Network
(SPA-DUN) for efficient and robust VCS reconstruction. Under the
optimization-inspired deep unfolding framework, a lightweight and efficient
U-net is exploited to downsize the model while improving overall performance.
Moreover, the prior knowledge from the sampling model is utilized to
dynamically modulate the network features to enable single SPA-DUN to handle
arbitrary sampling settings, augmenting interpretability and generality.
Extensive experiments on both simulation and real datasets demonstrate that
SPA-DUN is not only applicable for various sampling settings with one single
model but also achieves SOTA performance with incredible efficiency.
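The sampling model described in the abstract (many frames compressed into a single measurement via per-frame masks, then recovered by an optimization-inspired unfolding of data-fidelity gradient steps interleaved with a learned prior) can be sketched as follows. The mask shapes, step size, and the toy blur standing in for the paper's lightweight U-net prior are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Hypothetical sizes: T frames of H x W compressed into one 2-D measurement.
T, H, W = 8, 32, 32
rng = np.random.default_rng(0)

# Binary sampling masks, one per frame, as in coded-exposure VCS.
masks = rng.integers(0, 2, size=(T, H, W)).astype(np.float64)

# Ground-truth video block.
x = rng.random((T, H, W))

# Forward model: a single 2-D measurement y = sum_t mask_t * x_t.
y = np.sum(masks * x, axis=0)

def gradient_step(x_k, y, masks, step=0.1):
    """Gradient descent on the data-fidelity term ||sum_t m_t * x_t - y||^2."""
    residual = np.sum(masks * x_k, axis=0) - y  # shape (H, W)
    return x_k - step * masks * residual        # broadcast back over T frames

def toy_denoiser(x_k):
    """Stand-in for the learned U-net prior; here just a simple blur."""
    return 0.5 * x_k + 0.5 * np.roll(x_k, 1, axis=1)

# Initial estimate: measurement spread back through the masks, normalized.
x_k = masks * (y / np.maximum(np.sum(masks, axis=0), 1.0))

# A few unfolded iterations: gradient step, then prior step.
for _ in range(3):
    x_k = toy_denoiser(gradient_step(x_k, y, masks))
```

Each unfolded stage in the actual network would carry its own learned parameters; the loop above only shows the alternating structure.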
Related papers
- A-SDM: Accelerating Stable Diffusion through Model Assembly and Feature Inheritance Strategies [51.7643024367548]
The Stable Diffusion Model (SDM) is a prevalent and effective model for text-to-image (T2I) and image-to-image (I2I) generation.
This study focuses on reducing redundant computation in the SDM and optimizing the model through both tuning and tuning-free methods.
arXiv Detail & Related papers (2024-05-31T21:47:05Z)
- Real-Time Compressed Sensing for Joint Hyperspectral Image Transmission and Restoration for CubeSat [9.981107535103687]
We propose a Real-Time Compressed Sensing (RTCS) network designed to be lightweight and to require relatively few training samples.
The RTCS network features a simplified architecture that reduces the number of required training samples and allows for easy implementation on integer-8-based encoders.
Our encoder employs an integer-8-compatible linear projection for stripe-like HSI data transmission, ensuring real-time compressed sensing.
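As a rough illustration of such an integer-8-compatible linear projection (the matrix size, value ranges, and accumulation dtype below are assumptions for the sketch, not the RTCS design):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical int8 sensing matrix compressing a 256-pixel stripe
# down to 64 measurements.
A = rng.integers(-128, 128, size=(64, 256), dtype=np.int8)

# One stripe of sensor data (e.g. 8-bit pixel values).
stripe = rng.integers(0, 256, size=256).astype(np.int32)

# Integer-only linear projection; accumulate in int32 so the
# int8 products cannot overflow.
y = A.astype(np.int32) @ stripe
```

Keeping the projection to integer arithmetic is what makes this kind of encoder easy to run on low-power fixed-point hardware.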
arXiv Detail & Related papers (2024-04-24T10:03:37Z)
- A-SDM: Accelerating Stable Diffusion through Redundancy Removal and Performance Optimization [54.113083217869516]
In this work, we first explore the computational redundancy part of the network.
We then prune the redundancy blocks of the model and maintain the network performance.
Thirdly, we propose a global-regional interactive (GRI) attention mechanism to speed up the computationally intensive attention part.
arXiv Detail & Related papers (2023-12-24T15:37:47Z)
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights on-the-fly by a small amount proportional to their magnitude.
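A generic magnitude-based soft-shrinkage step (not the exact ISS-P schedule; the sparsity target and shrink factor below are illustrative) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))  # a hypothetical layer's weight matrix
w0 = w.copy()                  # kept for comparison

def soft_shrink_step(w, sparsity=0.5, shrink=0.1):
    """Shrink the smallest-magnitude fraction of weights toward zero
    by an amount proportional to their own magnitude (soft shrinkage
    rather than hard pruning, so weights can still recover later)."""
    k = int(sparsity * w.size)
    threshold = np.partition(np.abs(w).ravel(), k)[k]
    unimportant = np.abs(w) < threshold
    w = w.copy()
    w[unimportant] *= 1.0 - shrink
    return w

# Repeatedly shrinking drives unimportant weights toward zero
# without ever hard-zeroing them in a single step.
for _ in range(10):
    w = soft_shrink_step(w)
```

Because shrunk weights are scaled rather than zeroed, a weight that later becomes important can grow back during training, which is the appeal of soft over hard shrinkage.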
arXiv Detail & Related papers (2023-03-16T21:06:13Z)
- Effective Invertible Arbitrary Image Rescaling [77.46732646918936]
Invertible Neural Networks (INNs) can significantly increase upscaling accuracy by jointly optimizing the downscaling and upscaling cycle.
In this work, a simple and effective invertible arbitrary rescaling network (IARN) is proposed to achieve arbitrary image rescaling by training only one model.
It achieves state-of-the-art (SOTA) performance in bidirectional arbitrary rescaling without compromising perceptual quality in LR outputs.
arXiv Detail & Related papers (2022-09-26T22:22:30Z)
- Towards a Unified Approach to Single Image Deraining and Dehazing [16.383099109400156]
We develop a new physical model for the rain effect and show that the well-known atmosphere scattering model (ASM) for the haze effect naturally emerges as its homogeneous continuous limit.
We also propose a Densely Scale-Connected Attentive Network (DSCAN) that is suitable for both deraining and dehazing tasks.
arXiv Detail & Related papers (2021-03-26T01:35:43Z)
- Journey Towards Tiny Perceptual Super-Resolution [23.30464519074935]
We propose a neural architecture search (NAS) approach that integrates NAS and generative adversarial networks (GANs) with recent advances in perceptual SR.
Our tiny perceptual SR (TPSR) models outperform SRGAN and EnhanceNet on both the full-reference perceptual metric (LPIPS) and the distortion metric (PSNR).
arXiv Detail & Related papers (2020-07-08T18:24:40Z)
- Dynamic Model Pruning with Feedback [64.019079257231]
We propose a novel model compression method that generates a sparse trained model without additional overhead.
We evaluate our method on CIFAR-10 and ImageNet, and show that the obtained sparse models can reach the state-of-the-art performance of dense models.
arXiv Detail & Related papers (2020-06-12T15:07:08Z)
- A Generative Learning Approach for Spatio-temporal Modeling in Connected Vehicular Network [55.852401381113786]
This paper proposes LaMI (Latency Model Inpainting), a novel approach to generating a comprehensive spatio-temporal quality framework for the wireless access latency of connected vehicles.
LaMI adopts ideas from image inpainting and synthesis and can reconstruct the missing latency samples in a two-step procedure.
In particular, it first discovers the spatial correlation between samples collected in various regions using a patching-based approach and then feeds the original and highly correlated samples into a Variational Autoencoder (VAE).
arXiv Detail & Related papers (2020-03-16T03:43:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.