Efficient approximation of experimental Gaussian boson sampling
- URL: http://arxiv.org/abs/2109.11525v2
- Date: Wed, 2 Feb 2022 01:45:40 GMT
- Title: Efficient approximation of experimental Gaussian boson sampling
- Authors: Benjamin Villalonga, Murphy Yuezhen Niu, Li Li, Hartmut Neven, John C.
Platt, Vadim N. Smelyanskiy, and Sergio Boixo
- Abstract summary: Two recent landmark experiments have performed Gaussian boson sampling (GBS) with a non-programmable linear interferometer and threshold detectors on up to 144 output modes.
Here we give classical sampling algorithms with better total variation distance and Kullback-Leibler divergence than these experiments.
- Score: 2.805766654291013
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Two recent landmark experiments have performed Gaussian boson sampling (GBS)
with a non-programmable linear interferometer and threshold detectors on up to
144 output modes (see Refs.~\onlinecite{zhong_quantum_2020,zhong2021phase}).
Here we give classical sampling algorithms with better total variation distance
and Kullback-Leibler divergence than these experiments and a computational cost
quadratic in the number of modes. Our method samples from a distribution that
approximates the single-mode and two-mode ideal marginals of the given Gaussian
boson sampler, which are calculated efficiently. One implementation sets the
parameters of a Boltzmann machine from the calculated marginals using a mean
field solution. This is a second order approximation, with the uniform and thermal
approximations corresponding to the zeroth and first order, respectively. The $k$th
order approximation reproduces Ursell functions (also known as connected
correlations) up to order $k$ with a cost exponential in $k$ and high
precision, while the experiment exhibits higher order Ursell functions with
lower precision. This methodology, like other polynomial approximations
introduced previously, does not apply to random circuit sampling because the
$k$th order approximation would simply result in the uniform distribution, in
contrast to GBS.
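To make the second-order construction concrete, the following is a minimal illustrative Python sketch (not the authors' code) of one way to set pairwise Boltzmann machine parameters from given single-mode and two-mode click marginals and then draw click patterns with Gibbs updates. The function names, the log-odds-ratio coupling formula, and the mean-field correction to the local fields are assumptions made for illustration; the efficient computation of the ideal marginals from the Gaussian state's covariance matrix is omitted.

```python
import numpy as np


def fit_boltzmann_machine(p1, p2):
    """Fit a pairwise Boltzmann machine over binary click variables,
    p(x) ~ exp(sum_i h_i x_i + sum_{i<j} J_ij x_i x_j) with x_i in {0, 1},
    to single-mode click probabilities p1[i] and two-mode joint click
    probabilities p2[i, j] (the "ideal marginals" of the abstract).
    The coupling formula and the mean-field field correction below are
    illustrative choices, not necessarily the paper's exact equations."""
    n = len(p1)
    J = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            p11 = p2[i, j]
            p10 = p1[i] - p11
            p01 = p1[j] - p11
            p00 = 1.0 - p1[i] - p1[j] + p11
            # Log odds-ratio: reproduces the two-mode marginal exactly
            # for an isolated pair of modes.
            J[i, j] = J[j, i] = np.log(p11 * p00 / (p10 * p01))
    # Mean-field local fields: match the single-mode click probabilities
    # after subtracting the average effect of the couplings.
    h = np.log(p1 / (1.0 - p1)) - J @ p1
    return h, J


def gibbs_sample(h, J, n_samples=1000, burn_in=50, seed=0):
    """Draw click patterns by single-site Gibbs sampling (one possible
    sampler for the fitted model)."""
    rng = np.random.default_rng(seed)
    n = len(h)
    x = (rng.random(n) < 0.5).astype(float)
    samples = []
    for sweep in range(burn_in + n_samples):
        for i in range(n):
            logit = h[i] + J[i] @ x  # J[i, i] == 0, so no self term
            x[i] = float(rng.random() < 1.0 / (1.0 + np.exp(-logit)))
        if sweep >= burn_in:
            samples.append(x.copy())
    return np.array(samples)
```

With the single- and two-mode marginals precomputed, the fit and each Gibbs sweep cost $O(n^2)$ in the number of modes $n$, consistent with the quadratic scaling stated in the abstract.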
Related papers
- Faster Diffusion Sampling with Randomized Midpoints: Sequential and Parallel [10.840582511203024]
We show that our algorithm can be parallelized to run in only $\widetilde{O}(\log^2 d)$ parallel rounds.
arXiv Detail & Related papers (2024-06-03T01:34:34Z) - Double Variance Reduction: A Smoothing Trick for Composite Optimization Problems without First-Order Gradient [40.22217106270146]
Variance reduction techniques are designed to decrease the sampling variance, thereby accelerating convergence rates of first-order (FO) and zeroth-order (ZO) optimization methods.
In composite optimization problems, ZO methods encounter an additional variance, the coordinate-wise variance, which stems from the random gradient estimation.
This paper proposes the Zeroth-order Proximal Double Variance Reduction (ZPDVR) method, which utilizes the averaging trick to reduce both sampling and coordinate-wise variances.
arXiv Detail & Related papers (2024-05-28T02:27:53Z) - Faster Sampling via Stochastic Gradient Proximal Sampler [28.422547264326468]
Stochastic Proximal Samplers (SPS) for sampling from non-log-concave distributions are studied.
We show that the convergence to the target distribution can be guaranteed as long as the algorithm trajectory is bounded.
We provide two implementable variants based on stochastic gradient Langevin dynamics (SGLD) and the Metropolis-adjusted Langevin algorithm (MALA), giving rise to SPS-SGLD and SPS-MALA.
arXiv Detail & Related papers (2024-05-27T00:53:18Z) - Closed-form Filtering for Non-linear Systems [83.91296397912218]
We propose a new class of filters based on Gaussian PSD Models, which offer several advantages in terms of density approximation and computational efficiency.
We show that filtering can be efficiently performed in closed form when transitions and observations are Gaussian PSD Models.
Our proposed estimator enjoys strong theoretical guarantees, with estimation error that depends on the quality of the approximation and is adaptive to the regularity of the transition probabilities.
arXiv Detail & Related papers (2024-02-15T08:51:49Z) - Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics (ARC) have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously accommodate inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - Neural Inference of Gaussian Processes for Time Series Data of Quasars [72.79083473275742]
We introduce a new model that enables a complete description of quasar spectra.
We also introduce a new method of inference of Gaussian process parameters, which we call \textit{Neural Inference}.
The combination of both the CDRW model and Neural Inference significantly outperforms the baseline DRW and MLE.
arXiv Detail & Related papers (2022-11-17T13:01:26Z) - Maximizing the Validity of the Gaussian Approximation for the biphoton
State from Parametric Downconversion [0.0]
We present a choice of $\alpha$ which maximizes the validity of the Gaussian approximation.
We also discuss the so-called \textit{super-Gaussian} and \textit{cosine-Gaussian} approximations as practical alternatives.
arXiv Detail & Related papers (2022-10-05T15:47:04Z) - Beyond EM Algorithm on Over-specified Two-Component Location-Scale
Gaussian Mixtures [29.26015093627193]
We develop the Exponential Location Update (ELU) algorithm to efficiently explore the curvature of the negative log-likelihood functions.
We demonstrate that the ELU algorithm converges to the final statistical radius of the models after a logarithmic number of iterations.
arXiv Detail & Related papers (2022-05-23T06:49:55Z) - Reducing the Variance of Gaussian Process Hyperparameter Optimization
with Preconditioning [54.01682318834995]
Preconditioning is a highly effective step for any iterative method involving matrix-vector multiplication.
We prove that preconditioning has an additional, previously unexplored benefit: it can simultaneously reduce variance at essentially negligible cost.
arXiv Detail & Related papers (2021-07-01T06:43:11Z) - Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and
Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP).
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
arXiv Detail & Related papers (2020-06-04T17:51:00Z) - Non-Adaptive Adaptive Sampling on Turnstile Streams [57.619901304728366]
We give the first relative-error algorithms for column subset selection, subspace approximation, projective clustering, and volume on turnstile streams that use space sublinear in $n$.
Our adaptive sampling procedure has a number of applications to various data summarization problems that either improve state-of-the-art or have only been previously studied in the more relaxed row-arrival model.
arXiv Detail & Related papers (2020-04-23T05:00:21Z)