Model-adapted Fourier sampling for generative compressed sensing
- URL: http://arxiv.org/abs/2310.04984v2
- Date: Sat, 18 Nov 2023 01:06:57 GMT
- Title: Model-adapted Fourier sampling for generative compressed sensing
- Authors: Aaron Berk, Simone Brugiapaglia, Yaniv Plan, Matthew Scott, Xia Sheng,
Ozgur Yilmaz
- Abstract summary: We study generative compressed sensing when the measurement matrix is randomly subsampled from a unitary matrix.
We construct a model-adapted sampling strategy with an improved sample complexity of $\textit{O}(kd\|\boldsymbol{\alpha}\|_{2}^{2})$ measurements.
- Score: 7.130302992490975
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study generative compressed sensing when the measurement matrix is
randomly subsampled from a unitary matrix (with the DFT as an important special
case). It was recently shown that $\textit{O}(kdn\|
\boldsymbol{\alpha}\|_{\infty}^{2})$ uniformly random Fourier measurements are
sufficient to recover signals in the range of a neural network $G:\mathbb{R}^k
\to \mathbb{R}^n$ of depth $d$, where each component of the so-called local
coherence vector $\boldsymbol{\alpha}$ quantifies the alignment of a
corresponding Fourier vector with the range of $G$. We construct a
model-adapted sampling strategy with an improved sample complexity of
$\textit{O}(kd\| \boldsymbol{\alpha}\|_{2}^{2})$ measurements. This is enabled
by: (1) new theoretical recovery guarantees that we develop for nonuniformly
random sampling distributions and then (2) optimizing the sampling distribution
to minimize the number of measurements needed for these guarantees. This
development offers a sample complexity applicable to natural signal classes,
which are often almost maximally coherent with low Fourier frequencies.
Finally, we consider a surrogate sampling scheme, and validate its performance
in recovery experiments using the CelebA dataset.
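The improvement comes from the optimization in step (2): the abstract's two rates are consistent with a nonuniform guarantee of the form $\textit{O}(kd \max_i \alpha_i^2/p_i)$ when frequency $i$ is sampled with probability $p_i$; taking $p_i = 1/n$ recovers the uniform rate $\textit{O}(kdn\|\boldsymbol{\alpha}\|_{\infty}^{2})$, while the minimizing choice $p_i = \alpha_i^2/\|\boldsymbol{\alpha}\|_{2}^{2}$ gives $\textit{O}(kd\|\boldsymbol{\alpha}\|_{2}^{2})$. Below is a minimal NumPy sketch of the resulting measurement construction; the function name, the i.i.d. with-replacement sampling, and the $1/\sqrt{mp_i}$ rescaling are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (assumptions, not the authors' code): build an m x n measurement
# matrix by subsampling rows of the unitary DFT according to the model-adapted
# distribution p_i proportional to alpha_i^2 suggested by the abstract.
import numpy as np

def model_adapted_dft_measurements(alpha, m, rng=None):
    """alpha: length-n local coherence vector; m: number of measurements."""
    rng = np.random.default_rng(rng)
    n = alpha.shape[0]
    p = alpha**2 / np.sum(alpha**2)            # model-adapted sampling distribution
    idx = rng.choice(n, size=m, p=p)           # i.i.d. frequency draws from p
    F = np.fft.fft(np.eye(n)) / np.sqrt(n)     # unitary n x n DFT matrix
    scale = 1.0 / np.sqrt(m * p[idx])          # rescaling so that E[A^H A] = I
    return scale[:, None] * F[idx, :]          # m x n measurement matrix

# Usage: with m on the order of k * d * ||alpha||_2^2 rows, A @ G(z) gives the
# subsampled Fourier measurements of a signal in the range of the generator G.
```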
Related papers
- Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs [56.237917407785545]
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
Key to our solution is a novel projection technique based on ideas from harmonic analysis.
Our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
arXiv Detail & Related papers (2024-05-10T09:58:47Z)
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z)
- Sample Complexity Bounds for Learning High-dimensional Simplices in Noisy Regimes [5.526935605535376]
We find a sample complexity bound for learning a simplex from noisy samples.
We show that as long as $\mathrm{SNR} \geq \Omega\left(K^{1/2}\right)$, the sample complexity of the noisy regime has the same order as that of the noiseless case.
arXiv Detail & Related papers (2022-09-09T23:35:25Z)
- Approximate Function Evaluation via Multi-Armed Bandits [51.146684847667125]
We study the problem of estimating the value of a known smooth function $f$ at an unknown point $\boldsymbol{\mu} \in \mathbb{R}^n$, where each component $\mu_i$ can be sampled via a noisy oracle.
We design an instance-adaptive algorithm that learns to sample according to the importance of each coordinate, and with probability at least $1-\delta$ returns an $\epsilon$-accurate estimate of $f(\boldsymbol{\mu})$.
arXiv Detail & Related papers (2022-03-18T18:50:52Z)
- Sharp Analysis of Random Fourier Features in Classification [9.383533125404755]
We show for the first time that random Fourier features classification can achieve an $O(1/\sqrt{n})$ learning rate with only $\Omega(\sqrt{n} \log n)$ features.
arXiv Detail & Related papers (2021-09-22T09:49:27Z)
- Towards Sample-Optimal Compressive Phase Retrieval with Sparse and Generative Priors [59.33977545294148]
We show that $O(k \log L)$ samples suffice to guarantee that the signal is close to any vector that minimizes an amplitude-based empirical loss function.
We adapt this result to sparse phase retrieval, and show that $O(s \log n)$ samples are sufficient for a similar guarantee when the underlying signal is $s$-sparse and $n$-dimensional.
arXiv Detail & Related papers (2021-06-29T12:49:54Z)
- Linear-Sample Learning of Low-Rank Distributions [56.59844655107251]
We show that learning $k \times k$, rank-$r$ matrices to normalized $L_1$ distance requires $\Omega\left(\frac{kr}{\epsilon^2}\right)$ samples.
We propose an algorithm that uses $\mathcal{O}\left(\frac{kr}{\epsilon^2}\log^2\frac{r}{\epsilon}\right)$ samples, a number linear in the high dimension and nearly linear in the matrices' typically low rank.
arXiv Detail & Related papers (2020-09-30T19:10:32Z)
- Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression, where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$.
We propose estimators for this problem under two settings: (i) $X$ is $L_4$-$L_2$ hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number, and $\epsilon$ has bounded variance, and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is sub-Gaussian.
arXiv Detail & Related papers (2020-07-16T06:44:44Z)