Compressed Sensing of Generative Sparse-latent (GSL) Signals
- URL: http://arxiv.org/abs/2310.15119v1
- Date: Mon, 16 Oct 2023 12:49:33 GMT
- Title: Compressed Sensing of Generative Sparse-latent (GSL) Signals
- Authors: Antoine Honoré, Anubhab Ghosh, Saikat Chatterjee
- Abstract summary: We consider reconstruction of an ambient signal in a compressed sensing (CS) setup where the ambient signal has a neural network based generative model.
The generative model takes a sparse-latent input, and the generated ambient signal is referred to as a generative sparse-latent (GSL) signal.
- Score: 9.00058212634219
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider reconstruction of an ambient signal in a compressed sensing (CS)
setup where the ambient signal has a neural network based generative model. The
generative model has a sparse-latent input and we refer to the generated
ambient signal as generative sparse-latent signal (GSL). The proposed sparsity
inducing reconstruction algorithm is inherently non-convex, and we show that a
gradient based search provides a good reconstruction performance. We evaluate
our proposed algorithm using simulated data.
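To make the setup concrete, here is a minimal sketch of sparse-latent reconstruction by gradient search, with several assumptions not taken from the paper: the generator is an untrained two-layer MLP standing in for a trained model, measurements are Gaussian, and the ℓ1 penalty is handled with a proximal (soft-threshold) step; all sizes and hyperparameters are illustrative.
```python
# Sketch: gradient-based recovery of a generative sparse-latent (GSL) signal.
# Assumptions (not from the paper): G is an untrained two-layer MLP standing
# in for a trained generator; sizes, lambda and step size are illustrative.
import torch

torch.manual_seed(0)
k, n, m = 20, 100, 40                      # latent dim, ambient dim, measurements

G = torch.nn.Sequential(                   # stand-in generative model x = G(z)
    torch.nn.Linear(k, 64), torch.nn.ReLU(), torch.nn.Linear(64, n))
for p in G.parameters():
    p.requires_grad_(False)

z_true = torch.zeros(k)
z_true[torch.randperm(k)[:3]] = torch.randn(3)   # sparse latent input
x_true = G(z_true)                               # ambient GSL signal

A = torch.randn(m, n) / m ** 0.5           # Gaussian measurement matrix
y = A @ x_true                             # compressed measurements

# Proximal gradient search on z: least-squares data term + l1 sparsity penalty.
z = torch.zeros(k, requires_grad=True)
lam, lr = 1e-2, 1e-2
for _ in range(3000):
    loss = ((A @ G(z) - y) ** 2).sum()
    loss.backward()
    with torch.no_grad():
        z -= lr * z.grad                   # gradient step on the data term
        z.copy_(torch.sign(z) * torch.clamp(z.abs() - lr * lam, min=0.0))  # soft threshold
    z.grad.zero_()

print("relative error:", ((G(z) - x_true).norm() / x_true.norm()).item())
```
The soft-threshold step is one standard way to handle the non-smooth ℓ1 term; a plain (sub)gradient step on the penalized objective would equally match the paper's description of a gradient-based search.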
Related papers
- One-bit Compressed Sensing using Generative Models [20.819739287436317]
This paper addresses the classical problem of one-bit compressed sensing using a deep learning-based reconstruction algorithm.
A pre-trained neural network learns to map from a low-dimensional latent space to a higher-dimensional set of sparse vectors.
The presented algorithm achieves excellent reconstruction performance because the generative model can learn additional structural information about the signal beyond sparsity.
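As a toy illustration of the one-bit setting (not the paper's algorithm), the sketch below takes sign-only measurements of a generator output and recovers the latent with a hinge-style surrogate loss; the untrained decoder, loss margin, and sizes are all assumptions.
```python
# Sketch: one-bit compressed sensing with a generative prior.
# y = sign(A G(z)); recover z by encouraging sign consistency via a hinge
# surrogate. The decoder below is untrained and purely illustrative.
import torch

torch.manual_seed(0)
k, n, m = 10, 100, 200
G = torch.nn.Sequential(torch.nn.Linear(k, 64), torch.nn.ReLU(),
                        torch.nn.Linear(64, n))
for p in G.parameters():
    p.requires_grad_(False)

A = torch.randn(m, n) / m ** 0.5
x_true = G(torch.randn(k))
y = torch.sign(A @ x_true)            # one-bit (sign-only) measurements

z = torch.zeros(k, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(1000):
    opt.zero_grad()
    margins = y * (A @ G(z))          # positive where signs agree
    loss = torch.relu(0.1 - margins).mean()   # hinge surrogate on consistency
    loss.backward()
    opt.step()

# Sign measurements lose amplitude information, so compare directions only.
x_hat = G(z).detach()
print("cosine similarity:",
      torch.nn.functional.cosine_similarity(x_hat, x_true, dim=0).item())
```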
arXiv Detail & Related papers (2025-02-18T11:28:35Z)
- Graph Signal Sampling for Inductive One-Bit Matrix Completion: a Closed-form Solution [112.3443939502313]
We propose a unified graph signal sampling framework which enjoys the benefits of graph signal analysis and processing.
The key idea is to transform each user's ratings on the items to a function (signal) on the vertices of an item-item graph.
For the online setting, we develop a Bayesian extension, i.e., BGS-IMC which considers continuous random Gaussian noise in the graph Fourier domain.
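The graph-signal view can be sketched in a few lines: build an item-item graph, take the Laplacian eigenvectors as the graph Fourier basis, and treat a user's ratings as a signal on the vertices. The toy graph and the low-pass step below are illustrative, not the paper's sampling scheme.
```python
# Sketch: ratings as a graph signal and its graph Fourier transform (GFT).
# The item-item affinities and the user's ratings are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n = 6
W = rng.random((n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)  # item-item graph
L = np.diag(W.sum(axis=1)) - W          # combinatorial graph Laplacian
eigvals, U = np.linalg.eigh(L)          # GFT basis: Laplacian eigenvectors

ratings = rng.random(n)                 # one user's ratings as a vertex signal
spectrum = U.T @ ratings                # forward GFT (graph Fourier domain)
smooth = U[:, :3] @ spectrum[:3]        # keep the 3 lowest graph frequencies
print("low-pass reconstruction error:", np.linalg.norm(smooth - ratings))
```
The graph Fourier domain in the last two lines is where the Bayesian extension (BGS-IMC) places its continuous Gaussian noise model.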
arXiv Detail & Related papers (2023-02-08T08:17:43Z)
- JSRNN: Joint Sampling and Reconstruction Neural Networks for High Quality Image Compressed Sensing [8.902545322578925]
Two sub-networks, which are the sampling sub-network and the reconstruction sub-network, are included in the proposed framework.
In the reconstruction sub-network, a cascade network combining stacked denoising autoencoder (SDA) and convolutional neural network (CNN) is designed to reconstruct signals.
This framework outperforms many other state-of-the-art methods, especially at low sampling rates.
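A rough sketch of the two-sub-network layout follows, with the paper's SDA-plus-CNN cascade collapsed into a small dense-then-convolutional decoder; the architecture details and sizes are assumptions.
```python
# Sketch: joint sampling + reconstruction, trained end to end.
# The sampling sub-network is a learned linear measurement operator; the
# reconstruction sub-network is simplified to dense init + conv refinement.
import torch
import torch.nn as nn

class JointSamplingRecon(nn.Module):
    def __init__(self, n=1024, m=102):           # ~10% sampling rate
        super().__init__()
        self.sample = nn.Linear(n, m, bias=False)  # learned measurement matrix
        self.init = nn.Linear(m, n)                # coarse inverse mapping
        self.refine = nn.Sequential(               # CNN-style refinement stage
            nn.Conv1d(1, 16, 9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 1, 9, padding=4))

    def forward(self, x):                  # x: (batch, n) flattened signal
        y = self.sample(x)                 # compressed measurements
        x0 = self.init(y)                  # initial reconstruction
        return x0 + self.refine(x0.unsqueeze(1)).squeeze(1)  # residual refinement

model = JointSamplingRecon()
x = torch.randn(4, 1024)
loss = nn.functional.mse_loss(model(x), x)  # reconstruction objective
loss.backward()                             # gradients reach the sampler too
```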
arXiv Detail & Related papers (2022-11-11T02:20:30Z)
- Semi-signed neural fitting for surface reconstruction from unoriented point clouds [53.379712818791894]
We propose SSN-Fitting to reconstruct a more accurate signed distance field.
SSN-Fitting consists of a semi-signed supervision and a loss-based region sampling strategy.
We conduct experiments to demonstrate that SSN-Fitting achieves state-of-the-art performance under different settings.
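A hedged reading of the semi-signed idea: supervise the sign of the distance field only where it is reliable (e.g., regions known to lie outside the shape) and supervise magnitude elsewhere. The loss below is a sketch of that reading, not the paper's exact formulation.
```python
# Sketch: semi-signed supervision of an SDF network f: R^3 -> R.
# Sign constraints are applied only where the sign is known; on the surface
# only the magnitude is supervised. All sampling here is a toy stand-in.
import torch

f = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.ReLU(),
                        torch.nn.Linear(64, 1))

surface_pts = torch.randn(128, 3)        # points sampled from the scan
outside_pts = torch.randn(128, 3) * 3.0  # points known to be outside the shape

loss = (f(surface_pts).abs().mean()            # f should vanish on the surface
        + torch.relu(-f(outside_pts)).mean())  # outside: sign must be positive
loss.backward()
```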
arXiv Detail & Related papers (2022-06-14T09:40:17Z)
- Solving Inverse Problems with Conditional-GAN Prior via Fast Network-Projected Gradient Descent [11.247580943940918]
In this work we investigate a network-based projected gradient descent (NPGD) algorithm for measurement-conditional generative models.
We show that combining a measurement-conditional model with NPGD recovers the compressed signal well, with similar or, in some cases, better performance and much faster reconstruction.
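The NPGD iteration itself is compact: a gradient step on the measurement loss in ambient space, followed by projection back onto the generator's range via an encoder, x ← G(E(x)). In the sketch below, G and E are untrained stand-ins and the step size is illustrative.
```python
# Sketch: network-projected gradient descent (NPGD) for y = A x.
# G (generator) and E (encoder) are untrained placeholders; in the paper
# they would be trained, with E mapping signals back to the latent space.
import torch

torch.manual_seed(0)
n, m, k = 100, 40, 10
G = torch.nn.Sequential(torch.nn.Linear(k, 64), torch.nn.ReLU(),
                        torch.nn.Linear(64, n))
E = torch.nn.Sequential(torch.nn.Linear(n, 64), torch.nn.ReLU(),
                        torch.nn.Linear(64, k))
A = torch.randn(m, n) / m ** 0.5
y = A @ G(torch.randn(k))                # measurements of an in-range signal

with torch.no_grad():
    x = G(E(torch.zeros(n)))             # start inside the generator's range
    eta = 0.5
    for _ in range(50):
        grad = A.T @ (A @ x - y)         # gradient of 0.5 * ||A x - y||^2
        x = G(E(x - eta * grad))         # gradient step, then network projection
    print("measurement residual:", (A @ x - y).norm().item())
```
The speed advantage comes from replacing an inner latent-space optimization with a single encoder forward pass per iteration.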
arXiv Detail & Related papers (2021-09-02T17:28:05Z)
- Orthogonal Features Based EEG Signals Denoising Using Fractional and Compressed One-Dimensional CNN AutoEncoder [3.8580784887142774]
This paper presents a fractional one-dimensional convolutional neural network (CNN) autoencoder for denoising the Electroencephalogram (EEG) signals.
EEG signals often become contaminated with noise during recording, mostly due to muscle artifacts (MA).
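As a simplified sketch of the denoising architecture (omitting the fractional-calculus and compression components the title refers to), a plain 1-D convolutional autoencoder looks as follows; channel counts and kernel sizes are assumptions.
```python
# Sketch: a 1-D convolutional autoencoder for EEG denoising.
# Strided convs compress the signal; transposed convs restore its length.
import torch.nn as nn

denoiser = nn.Sequential(              # input: (batch, 1, samples), samples even
    nn.Conv1d(1, 16, 7, stride=2, padding=3), nn.ReLU(),   # encoder
    nn.Conv1d(16, 32, 7, stride=2, padding=3), nn.ReLU(),
    nn.ConvTranspose1d(32, 16, 7, stride=2, padding=3, output_padding=1),
    nn.ReLU(),                                              # decoder
    nn.ConvTranspose1d(16, 1, 7, stride=2, padding=3, output_padding=1))
# Train on pairs (noisy EEG, clean EEG) with an MSE reconstruction loss.
```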
arXiv Detail & Related papers (2021-04-16T13:58:05Z)
- Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
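The AMP skeleton the paper builds on can be sketched as below; a fixed soft-threshold denoiser stands in for the learned Gaussian-mixture denoiser so the loop is self-contained, and the threshold scaling and iteration count are illustrative.
```python
# Sketch: AMP for y = A x with a plug-in denoiser.
# The soft threshold below is a placeholder for the learned GM denoiser;
# its derivative count gives the Onsager correction term.
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 400, 160, 15
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

def denoise(u, tau, alpha=1.5):          # stand-in for the learned GM denoiser
    return np.sign(u) * np.maximum(np.abs(u) - alpha * tau, 0.0)

x, z = np.zeros(n), y.copy()
for _ in range(30):
    tau = np.linalg.norm(z) / np.sqrt(m)       # effective noise level estimate
    x_new = denoise(x + A.T @ z, tau)
    onsager = z * np.count_nonzero(x_new) / m  # Onsager correction term
    z = y - A @ x_new + onsager                # corrected residual
    x = x_new

print("NMSE:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```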
arXiv Detail & Related papers (2020-11-18T16:40:45Z)
- Conditioning Trick for Training Stable GANs [70.15099665710336]
We propose a conditioning trick, called difference departure from normality, applied on the generator network in response to instability issues during GAN training.
We force the generator to approach the departure from normality of real samples, computed in the spectral domain of the Schur decomposition.
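The underlying metric is presumably the Henrici-style departure from normality: in the Schur decomposition A = QTQ^H, it is the Frobenius norm of the strictly upper-triangular part of T, which vanishes exactly when A is normal. A small sketch of computing it:
```python
# Sketch: departure from normality via the Schur decomposition A = Q T Q^H.
# The strictly upper-triangular part of T measures how far A is from normal.
import numpy as np
from scipy.linalg import schur

def departure_from_normality(A):
    T, _ = schur(A, output="complex")   # T upper triangular, diag = eigenvalues
    N = np.triu(T, k=1)                 # strictly upper-triangular part
    return np.linalg.norm(N, "fro")

A = np.random.default_rng(0).standard_normal((8, 8))
print(departure_from_normality(A))      # 0 for normal (e.g. symmetric) matrices
```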
arXiv Detail & Related papers (2020-10-12T16:50:22Z)
- Improving Stability of LS-GANs for Audio and Speech Signals [70.15099665710336]
We show that encoding the departure from normality, computed in the spectral domain of the Schur decomposition, into the generator's optimization formulation helps to craft more comprehensive spectrograms.
We demonstrate the effectiveness of binding this metric for enhancing stability in training with less mode collapse compared to baseline GANs.
arXiv Detail & Related papers (2020-08-12T17:41:25Z)
- When and How Can Deep Generative Models be Inverted? [28.83334026125828]
Deep generative models (GANs and VAEs) have been developed quite extensively in recent years.
We define conditions that are applicable to any inversion algorithm (gradient descent, deep encoder, etc.) under which such generative models are invertible.
We show that our method outperforms gradient descent when inverting such generators, both for clean and corrupted signals.
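The gradient-descent baseline mentioned here is easy to sketch: minimize ||G(z) − x||² over the latent z. The untrained generator and optimizer settings below are illustrative; the paper's own inversion method and invertibility conditions are not reproduced.
```python
# Sketch: generator inversion by gradient descent, min_z ||G(z) - x||^2.
# G is an untrained placeholder; x is chosen to lie in G's range so that
# an exact inverse exists.
import torch

torch.manual_seed(0)
k, n = 10, 100
G = torch.nn.Sequential(torch.nn.Linear(k, 64), torch.nn.ReLU(),
                        torch.nn.Linear(64, n))
for p in G.parameters():
    p.requires_grad_(False)
x = G(torch.randn(k))                   # a signal in the generator's range

z = torch.zeros(k, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = ((G(z) - x) ** 2).sum()
    loss.backward()
    opt.step()
print("inversion error:", loss.item())
```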
arXiv Detail & Related papers (2020-06-28T09:37:52Z)
- Sample Complexity Bounds for 1-bit Compressive Sensing and Binary Stable Embeddings with Generative Priors [52.06292503723978]
Motivated by advances in compressive sensing with generative models, we study the problem of 1-bit compressive sensing with generative models.
We first consider noiseless 1-bit measurements, and provide sample complexity bounds for approximate recovery under i.i.d. Gaussian measurements.
We demonstrate that the Binary $\epsilon$-Stable Embedding property, which characterizes the robustness of the reconstruction to measurement errors and noise, also holds for 1-bit compressive sensing with Lipschitz continuous generative models.
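For reference, the Binary ε-Stable Embedding property is commonly stated as follows (a standard formulation from the 1-bit compressive sensing literature; the signal set K and notation here are generic):
```latex
% Binary epsilon-stable embedding (BeSE), generic statement: the normalized
% Hamming distance between sign measurements approximates the angular
% distance between unit-norm signals, uniformly over the set K.
\[
  \left| d_H\!\big(\operatorname{sign}(Ax), \operatorname{sign}(Ax')\big)
         - d_S(x, x') \right| \le \epsilon
  \quad \text{for all } x, x' \in K,
\]
\[
  \text{where } d_S(x, x') = \tfrac{1}{\pi}\arccos\langle x, x'\rangle
  \text{ and } d_H \text{ is the normalized Hamming distance.}
\]
```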
arXiv Detail & Related papers (2020-02-05T09:44:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.