Learning Generative Prior with Latent Space Sparsity Constraints
- URL: http://arxiv.org/abs/2105.11956v1
- Date: Tue, 25 May 2021 14:12:04 GMT
- Title: Learning Generative Prior with Latent Space Sparsity Constraints
- Authors: Vinayak Killedar, Praveen Kumar Pokala, Chandra Sekhar Seelamantula
- Abstract summary: It has been argued that the distribution of natural images does not lie on a single manifold but rather on a union of several submanifolds.
We propose a sparsity-driven latent space sampling (SDLSS) framework and develop a proximal meta-learning (PML) algorithm to enforce sparsity in the latent space.
The results demonstrate that, at higher degrees of compression, the SDLSS method is more efficient than the state-of-the-art method.
- Score: 25.213673771175692
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We address the problem of compressed sensing using a deep generative prior
model and consider both linear and learned nonlinear sensing mechanisms, where
the nonlinear one involves either a fully connected neural network or a
convolutional neural network. Recently, it has been argued that the
distribution of natural images does not lie on a single manifold but rather on
a union of several submanifolds. We propose a sparsity-driven latent space
sampling (SDLSS) framework and develop a proximal meta-learning (PML) algorithm
to enforce sparsity in the latent space. SDLSS allows the range-space of the
generator to be considered as a union-of-submanifolds. We also derive the
sample complexity bounds within the SDLSS framework for the linear measurement
model. The results demonstrate that, at higher degrees of compression, the
SDLSS method is more efficient than the state-of-the-art method. We first
consider a comparison between linear and nonlinear sensing mechanisms on
the Fashion-MNIST dataset and show that the learned nonlinear version is superior
to the linear one. Subsequent comparisons with the deep compressive sensing
(DCS) framework proposed in the literature are reported. We also consider the
effect of the dimension of the latent space and the sparsity factor in
validating the SDLSS framework. Performance quantification is carried out by
employing three objective metrics: peak signal-to-noise ratio (PSNR),
structural similarity index metric (SSIM), and reconstruction error (RE).
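To make the latent-space sparsity idea concrete, here is a minimal sketch of the recovery principle: proximal gradient descent over the generator's latent code, where the proximal step for the sparsity constraint is hard thresholding, so iterates stay on a union of s-sparse latent subspaces. A toy linear generator and a plain gradient step stand in for the trained network and the meta-learned PML update; all sizes and names below are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def hard_threshold(z, s):
    """Proximal step for the l0 constraint: zero all but the s largest-magnitude entries."""
    keep = np.argsort(np.abs(z))[-s:]
    out = np.zeros_like(z)
    out[keep] = z[keep]
    return out

# Illustrative dimensions: n-dim signal, m measurements, k-dim latent code, s-sparse.
n, m, k, s = 256, 32, 64, 5
W = rng.standard_normal((n, k)) / np.sqrt(k)   # toy linear "generator" standing in for a trained network
G = lambda z: W @ z
A = rng.standard_normal((m, n)) / np.sqrt(m)   # linear sensing matrix

z_true = hard_threshold(rng.standard_normal(k), s)
y = A @ G(z_true)                              # compressed measurements

# Proximal gradient descent on the latent code: gradient step on the data-fit
# term, then hard thresholding as the prox of the sparsity constraint.
L = np.linalg.norm(A @ W, 2) ** 2              # Lipschitz constant of the gradient
z = np.zeros(k)
for _ in range(500):
    grad = W.T @ (A.T @ (A @ (W @ z) - y))     # gradient of 0.5 * ||A G(z) - y||^2
    z = hard_threshold(z - grad / L, s)

print("relative error:", np.linalg.norm(G(z) - G(z_true)) / np.linalg.norm(G(z_true)))
```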
Related papers
- Subspace Representation Learning for Sparse Linear Arrays to Localize More Sources than Sensors: A Deep Learning Methodology [19.100476521802243]
We develop a novel methodology that estimates the co-array subspaces from a sample covariance for sparse linear arrays (SLAs).
To learn such representations, we propose loss functions that gauge the separation between the desired and the estimated subspaces (a generic separation measure is sketched below).
The computation of learning subspaces of different dimensions is accelerated by a new batch sampling strategy.
arXiv Detail & Related papers (2024-08-29T15:14:52Z) - Efficient Interpretable Nonlinear Modeling for Multiple Time Series [5.448070998907116]
- Efficient Interpretable Nonlinear Modeling for Multiple Time Series [5.448070998907116]
This paper proposes an efficient nonlinear modeling approach for multiple time series.
It incorporates nonlinear interactions among different time-series variables.
Experimental results show that the proposed algorithm improves the identification of the support of the VAR coefficients in a parsimonious manner (a linear version of this idea is sketched below).
arXiv Detail & Related papers (2023-09-29T11:42:59Z) - Decomposed Diffusion Sampler for Accelerating Large-Scale Inverse
- Decomposed Diffusion Sampler for Accelerating Large-Scale Inverse Problems [64.29491112653905]
We propose a novel and efficient diffusion sampling strategy that synergistically combines diffusion sampling and Krylov subspace methods.
Specifically, we prove that if the tangent space at a sample denoised via Tweedie's formula forms a Krylov subspace, then conjugate gradient (CG) iterations initialized with the denoised data keep the data-consistency update within that tangent space (a generic CG step is sketched below).
Our proposed method achieves an inference time more than 80 times faster than that of the previous state-of-the-art method.
arXiv Detail & Related papers (2023-03-10T07:42:49Z) - GeONet: a neural operator for learning the Wasserstein geodesic [13.468026138183623]
- GeONet: a neural operator for learning the Wasserstein geodesic [13.468026138183623]
We present GeONet, a mesh-invariant deep neural operator network that learns the non-linear mapping from the input pair of initial and terminal distributions to the Wasserstein geodesic connecting the two endpoint distributions.
We demonstrate that GeONet achieves comparable testing accuracy to the standard OT solvers on simulation examples and the MNIST dataset with considerably reduced inference-stage computational cost by orders of magnitude.
arXiv Detail & Related papers (2022-09-28T21:55:40Z) - Orthogonal Matrix Retrieval with Spatial Consensus for 3D Unknown-View
Tomography [58.60249163402822]
Unknown-view tomography (UVT) reconstructs a 3D density map from its 2D projections at unknown, random orientations.
The proposed OMR is more robust and performs significantly better than the previous state-of-the-art OMR approach.
arXiv Detail & Related papers (2022-07-06T21:40:59Z) - Accurate Discharge Coefficient Prediction of Streamlined Weirs by
Coupling Linear Regression and Deep Convolutional Gated Recurrent Unit [2.4475596711637433]
The present study proposes data-driven modeling techniques, as an alternative to CFD simulation, to predict the discharge coefficient based on an experimental dataset.
It is found that the proposed three-layer hierarchical DL algorithm, consisting of a convolutional layer coupled with two subsequent GRU levels and hybridized with the LR method, leads to lower error metrics (an illustrative version of this layer stack is sketched below).
arXiv Detail & Related papers (2022-04-12T01:59:36Z) - Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via
- Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperforms pure model-based graph classifiers and achieves performance comparable to pure data-driven networks while using far fewer parameters (the general unfolding pattern is sketched below).
arXiv Detail & Related papers (2021-09-10T07:01:15Z) - Joint Characterization of Spatiotemporal Data Manifolds [0.0]
- Joint Characterization of Spatiotemporal Data Manifolds [0.0]
Dimensionality reduction (DR) is a type of characterization designed to mitigate the "curse of dimensionality" on high-D signals.
Recent years have seen the additional development of a suite of nonlinear DR algorithms, frequently categorized as "manifold learning."
Here, we show that these three DR approaches can yield complementary information about spatiotemporal (ST) manifold topology.
arXiv Detail & Related papers (2021-08-21T16:42:22Z) - GELATO: Geometrically Enriched Latent Model for Offline Reinforcement
Learning [54.291331971813364]
Offline reinforcement learning approaches can be divided into proximal and uncertainty-aware methods.
In this work, we demonstrate the benefit of combining the two in a latent variational model.
Our proposed metrics measure both the quality of out-of-distribution samples and the discrepancy of examples in the data.
arXiv Detail & Related papers (2021-02-22T19:42:40Z) - Joint and Progressive Subspace Analysis (JPSA) with Spatial-Spectral
Manifold Alignment for Semi-Supervised Hyperspectral Dimensionality Reduction [48.73525876467408]
We propose a novel technique for hyperspectral subspace analysis, called joint and progressive subspace analysis (JPSA).
Experiments are conducted to demonstrate the superiority and effectiveness of the proposed JPSA on two widely-used hyperspectral datasets.
arXiv Detail & Related papers (2020-09-21T16:29:59Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent (a generic min-max training loop is sketched below).
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)