On the sampling complexity of coherent superpositions
- URL: http://arxiv.org/abs/2501.17071v1
- Date: Tue, 28 Jan 2025 16:56:49 GMT
- Title: On the sampling complexity of coherent superpositions
- Authors: Beatriz Dias, Robert Koenig
- Abstract summary: We consider the problem of sampling from the distribution of measurement outcomes when applying a POVM to a superposition.
We give an algorithm which, given $O(\chi \|c\|_2^2 \log 1/\delta)$ such samples and calls to oracles evaluating the involved probability density functions, outputs a sample from the target distribution except with probability at most $\delta$.
- Score: 0.4972323953932129
- License:
- Abstract: We consider the problem of sampling from the distribution of measurement outcomes when applying a POVM to a superposition $|\Psi\rangle = \sum_{j=0}^{\chi-1} c_j |\psi_j\rangle$ of $\chi$ pure states. We relate this problem to that of drawing samples from the outcome distribution when measuring a single state $|\psi_j\rangle$ in the superposition. Here $j$ is drawn from the distribution $p(j)=|c_j|^2/\|c\|^2_2$ of normalized amplitudes. We give an algorithm which, given $O(\chi \|c\|_2^2 \log 1/\delta)$ such samples and calls to oracles evaluating the involved probability density functions, outputs a sample from the target distribution except with probability at most $\delta$. In many cases of interest, the POVM and individual states in the superposition have efficient classical descriptions allowing one to evaluate matrix elements of POVM elements and to draw samples from outcome distributions. In such a scenario, our algorithm gives a reduction from strong classical simulation (i.e., the problem of computing outcome probabilities) to weak simulation (i.e., the problem of sampling). In contrast to prior work focusing on finite-outcome POVMs, this reduction also applies to continuous-outcome POVMs. An example is homodyne or heterodyne measurements applied to a superposition of Gaussian states. Here we obtain a sampling algorithm with time complexity $O(N^3 \chi^3 \|c\|_2^2 \log 1/\delta)$ for a state of $N$ bosonic modes.
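The reduction described in the abstract can be sketched as a rejection sampler: propose $j \sim |c_j|^2/\|c\|_2^2$, draw an outcome from the single-state distribution of $|\psi_j\rangle$, and accept with a ratio that Cauchy-Schwarz bounds by 1. The following minimal sketch illustrates this for the toy case of $\chi = 2$ real Gaussian wavefunctions (a "cat" state) measured in position; all names and parameter values here are our own choices for illustration, not taken from the paper.

```python
import numpy as np

# Toy setup (our own assumption, not the paper's): chi = 2 Gaussian
# wavefunctions displaced to +/- a, measured in position (homodyne).
a = 2.0
centers = np.array([+a, -a])
c = np.array([1.0, 1.0]) / np.sqrt(2.0)   # amplitudes c_j; |Psi> need not be
                                          # normalized, rejection sampling copes

def psi(j, x):
    """Wavefunction of the j-th pure Gaussian component at point x."""
    return np.pi ** -0.25 * np.exp(-0.5 * (x - centers[j]) ** 2)

def sample_superposition(n, rng):
    """Draw n samples from |<x|Psi>|^2 via rejection sampling.

    Proposal: j ~ |c_j|^2 / ||c||_2^2, then x ~ |psi_j(x)|^2.
    The acceptance ratio
        |sum_j c_j psi_j(x)|^2 / (chi * sum_j |c_j|^2 |psi_j(x)|^2)
    is <= 1 by Cauchy-Schwarz, so the expected number of proposals per
    accepted sample is O(chi ||c||_2^2), matching the abstract's scaling.
    """
    chi = len(c)
    p_j = np.abs(c) ** 2 / np.sum(np.abs(c) ** 2)
    out = []
    while len(out) < n:
        j = rng.choice(chi, p=p_j)
        x = rng.normal(centers[j], np.sqrt(0.5))  # |psi_j(x)|^2 = N(center_j, 1/2)
        target = abs(sum(c[k] * psi(k, x) for k in range(chi))) ** 2
        envelope = chi * sum(abs(c[k]) ** 2 * psi(k, x) ** 2 for k in range(chi))
        if rng.uniform() < target / envelope:
            out.append(x)
    return np.array(out)

samples = sample_superposition(2000, np.random.default_rng(0))
print(samples.mean())  # symmetric cat state: sample mean should be near 0
```

Note that the sampler only needs sample access to the single-state outcome distributions plus pointwise density evaluations, which is exactly the weak-plus-oracle access model the abstract describes.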
Related papers
- On the query complexity of sampling from non-log-concave distributions [2.4253233571593547]
We study the problem of sampling from a $d$-dimensional distribution with density $p(x) \propto e^{-f(x)}$, which does not necessarily satisfy good isoperimetric conditions.
We show that for a wide range of parameters, sampling is strictly easier than optimization by a super-exponential factor in the dimension $d$.
arXiv Detail & Related papers (2025-02-10T06:54:16Z) - Polynomial time sampling from log-smooth distributions in fixed dimension under semi-log-concavity of the forward diffusion with application to strongly dissipative distributions [9.48556659249574]
We provide a sampling algorithm with polynomial complexity in fixed dimension.
We prove that our algorithm achieves an expected $\epsilon$ error in KL divergence.
As an application, we derive an exponential complexity improvement for the problem of sampling from an $L$-log-smooth distribution.
arXiv Detail & Related papers (2024-12-31T17:51:39Z) - Near-Optimal Bounds for Learning Gaussian Halfspaces with Random
Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$.
Our lower bound suggests that this quadratic dependence on $1/\epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z) - Replicable Clustering [57.19013971737493]
We propose algorithms for the statistical $k$-medians, statistical $k$-means, and statistical $k$-centers problems by utilizing approximation routines for their counterparts in a black-box manner.
We also provide experiments on synthetic distributions in 2D using the $k$-means++ implementation from sklearn as a black-box that validate our theoretical results.
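As a toy illustration of the black-box usage mentioned above (our own minimal example with synthetic data, not the paper's experimental setup), sklearn's k-means++ implementation can be invoked as follows:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two well-separated synthetic 2D blobs (our own toy data).
X = np.vstack([rng.normal(0.0, 0.3, size=(100, 2)),
               rng.normal(3.0, 0.3, size=(100, 2))])

# sklearn's k-means++ used purely as a black-box clustering routine,
# in the spirit of the paper's black-box use of approximation algorithms.
km = KMeans(n_clusters=2, init="k-means++", n_init=10, random_state=0).fit(X)
print(len(set(km.labels_)))  # number of clusters found
```

Fixing `random_state` makes the black-box call deterministic, which is the natural starting point when the goal is replicability of the returned clustering.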
arXiv Detail & Related papers (2023-02-20T23:29:43Z) - Optimal Sublinear Sampling of Spanning Trees and Determinantal Point
Processes via Average-Case Entropic Independence [3.9586758145580014]
We design fast algorithms for repeatedly sampling from strongly Rayleigh distributions.
For a graph $G=(V, E)$, we show how to approximately sample uniformly random spanning trees from $G$ in $\widetilde{O}(\lvert V \rvert)$ time per sample.
For a determinantal point process on subsets of size $k$ of a ground set of $n$ elements, we show how to approximately sample in $\widetilde{O}(k^{\omega})$ time after an initial $\widetilde{O}(nk
arXiv Detail & Related papers (2022-04-06T04:11:26Z) - Random quantum circuits transform local noise into global white noise [118.18170052022323]
We study the distribution over measurement outcomes of noisy random quantum circuits in the low-fidelity regime.
For local noise that is sufficiently weak and unital, correlations (measured by the linear cross-entropy benchmark) between the output distribution $p_{\text{noisy}}$ of a generic noisy circuit instance and the corresponding noiseless output distribution shrink exponentially.
If the noise is incoherent, the output distribution approaches the uniform distribution $p_{\text{unif}}$ at precisely the same rate.
arXiv Detail & Related papers (2021-11-29T19:26:28Z) - The Sample Complexity of Robust Covariance Testing [56.98280399449707]
We are given i.i.d. samples from a distribution of the form $Z = (1-\epsilon) X + \epsilon B$, where $X$ is a zero-mean Gaussian $\mathcal{N}(0, \Sigma)$ with unknown covariance $\Sigma$.
In the absence of contamination, prior work gave a simple tester for this hypothesis testing task that uses $O(d)$ samples.
We prove a sample complexity lower bound of $\Omega(d^2)$ for $\epsilon$ an arbitrarily small constant and $\gamma
arXiv Detail & Related papers (2020-12-31T18:24:41Z) - Efficient sampling from the Bingham distribution [38.50073658077009]
We give an exact sampling algorithm from the Bingham distribution $p(x) \propto \exp(x^\top A x)$ on the sphere $\mathcal{S}^{d-1}$ with expected runtime of $\operatorname{poly}(d, \lambda_{\max}(A)-\lambda_{\min}(A))$.
As a direct application, we use this to sample from the posterior distribution of a rank-1 matrix inference problem in polynomial time.
arXiv Detail & Related papers (2020-09-30T22:48:03Z) - Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and
Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP)
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
arXiv Detail & Related papers (2020-06-04T17:51:00Z) - Locally Private Hypothesis Selection [96.06118559817057]
We output a distribution from $\mathcal{Q}$ whose total variation distance to $p$ is comparable to the best such distribution.
We show that the constraint of local differential privacy incurs an exponential increase in cost.
Our algorithms result in exponential improvements on the round complexity of previous methods.
arXiv Detail & Related papers (2020-02-21T18:30:48Z) - Learning and Sampling of Atomic Interventions from Observations [11.522442415989818]
We study the problem of efficiently estimating the effect of an intervention on a single variable (atomic interventions) using observational samples in a causal Bayesian network.
Our goal is to give algorithms that are efficient in both time and sample complexity in a non-parametric setting.
arXiv Detail & Related papers (2020-02-11T07:15:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.