On classical simulation algorithms for noisy Boson Sampling
- URL: http://arxiv.org/abs/2301.11532v1
- Date: Fri, 27 Jan 2023 04:42:59 GMT
- Title: On classical simulation algorithms for noisy Boson Sampling
- Authors: Changhun Oh, Liang Jiang, Bill Fefferman
- Abstract summary: We present a classical algorithm that approximately samples from the output distribution of certain noisy Boson Sampling experiments.
This is inspired by a recent result of Aharonov, Gao, Landau, Liu and Vazirani and makes use of an observation originally due to Kalai and Kindler.
- Score: 2.8655318786364408
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a classical algorithm that approximately samples from the output
distribution of certain noisy Boson Sampling experiments. This algorithm is
inspired by a recent result of Aharonov, Gao, Landau, Liu and Vazirani and
makes use of an observation originally due to Kalai and Kindler that the output
probability of Boson Sampling experiments with a Gaussian noise model can be
approximated by sparse low-degree polynomials. This observation alone does not
suffice for classical sampling, because the corresponding marginal probabilities might not themselves be approximated by sparse low-degree polynomials, and furthermore, the
approximated probabilities might be negative. We solve this problem by
employing the first quantization representation to give an algorithm for
computing the marginal probabilities of these experiments. We prove that when
the overall noise rate is constant, the algorithm runs in time quasi-polynomial
in the number of input photons $N$ and accuracy. When the overall noise rate
scales as $1-x_1^\gamma$ for constant $x_1$ and $\gamma=\Omega(\log N)$, the
running time becomes polynomial. Furthermore, we study noisy Boson Sampling
with practically relevant noise models such as partial distinguishability and
photon loss. We show that the same technique does not immediately apply in
these settings, leaving open the possibility of a scalable demonstration of
noisy quantum advantage for these noise models in certain parameter regimes.
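To make the sampling step concrete: once marginal probabilities can be approximated, a sample is drawn mode by mode via the chain rule. The sketch below is a generic version of that scheme, not the paper's algorithm; `approx_marginal` is a hypothetical stand-in for the first-quantization subroutine, and clipping negative estimates to zero is one naive way around the sign issue noted in the abstract.

```python
import numpy as np

def chain_rule_sampler(num_modes, num_photons, approx_marginal, rng=None):
    """Sample photon counts mode by mode from approximate marginals.

    approx_marginal(counts) -> approximate probability that the first
    len(counts) modes hold exactly those photon numbers.  The estimate
    may be slightly negative, reflecting the sign issue noted above.
    """
    rng = np.random.default_rng() if rng is None else rng
    counts, remaining = [], num_photons
    for mode in range(num_modes):
        if mode == num_modes - 1:
            counts.append(remaining)  # photon number is conserved
            break
        # Unnormalized weights for putting n photons in this mode; the
        # common factor approx_marginal(counts) cancels on normalization.
        w = np.array([approx_marginal(counts + [n]) for n in range(remaining + 1)])
        w = np.clip(w, 0.0, None)  # crude fix for negative estimates
        if w.sum() == 0.0:
            w = np.ones(remaining + 1)  # fall back to uniform
        n = int(rng.choice(remaining + 1, p=w / w.sum()))
        counts.append(n)
        remaining -= n
    return counts
```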
Related papers
- Already Moderate Population Sizes Provably Yield Strong Robustness to Noise [53.27802701790209]
We show that two evolutionary algorithms can tolerate constant noise probabilities without increasing the runtime on the OneMax benchmark.
Our results are based on a novel proof argument that the noiseless offspring can be seen as a biased uniform crossover between the parent and the noisy offspring.
arXiv Detail & Related papers (2024-04-02T16:35:52Z)
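As a toy illustration of the noisy-optimization setting in the entry above (not the population-based algorithms it analyzes), here is a (1+1) evolutionary algorithm on OneMax where each fitness evaluation flips one bit with constant probability before scoring; all names and parameters are illustrative.

```python
import random

def onemax(x):
    return sum(x)

def noisy_fitness(x, p_noise):
    """OneMax with prior noise: with probability p_noise, one uniformly
    chosen bit is flipped before evaluation (evaluation only; x is unchanged)."""
    if random.random() < p_noise:
        i = random.randrange(len(x))
        x = x[:i] + [1 - x[i]] + x[i + 1:]
    return onemax(x)

def one_plus_one_ea(n=50, p_noise=0.05, max_gens=200_000):
    """Minimal (1+1) EA with standard bit mutation under noisy evaluation.
    Illustrative only; the paper analyzes population-based EAs."""
    x = [random.randint(0, 1) for _ in range(n)]
    for gen in range(max_gens):
        y = [1 - b if random.random() < 1.0 / n else b for b in x]
        if noisy_fitness(y, p_noise) >= noisy_fitness(x, p_noise):
            x = y
        if onemax(x) == n:
            return gen
    return None  # did not reach the optimum within the budget

print(one_plus_one_ea())
```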
- Gaussian Boson Sampling to Accelerate NP-Complete Vertex-Minor Graph Classification [0.9935277311162707]
We propose a hybrid quantum-classical algorithm for the NP-complete problem of determining whether one graph is a vertex minor of another.
We find a graph embedding that allows trading off one-shot classification accuracy against the amount of input squeezing.
We introduce a new classical algorithm based on graph spectra, which we show outperforms various well-known graph-similarity algorithms.
arXiv Detail & Related papers (2024-02-05T21:24:11Z) - Classical algorithm for simulating experimental Gaussian boson sampling [2.1684911254068906]
- Classical algorithm for simulating experimental Gaussian boson sampling [2.1684911254068906]
Gaussian boson sampling is a promising candidate for showing experimental quantum advantage.
Despite a high photon loss rate and the presence of noise, these experiments are currently claimed to be hard to simulate classically with the best-known classical algorithm.
We present a classical tensor-network algorithm that simulates Gaussian boson sampling and whose complexity can be significantly reduced when the photon loss rate is high.
arXiv Detail & Related papers (2023-06-06T14:19:48Z)
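A worked illustration of why high photon loss eases classical simulation (the textbook pure-loss channel, not the paper's tensor-network algorithm): loss with transmissivity eta damps a photon-number distribution binomially, concentrating it on low photon numbers.

```python
import numpy as np
from math import comb

def apply_loss(p, eta):
    """Pure-loss channel on a photon-number distribution p[n]:
    each photon survives independently with probability eta, so
    q[m] = sum_n p[n] * C(n, m) * eta**m * (1 - eta)**(n - m)."""
    N = len(p) - 1
    q = np.zeros(N + 1)
    for n in range(N + 1):
        for m in range(n + 1):
            q[m] += p[n] * comb(n, m) * eta**m * (1 - eta) ** (n - m)
    return q

# Toy usage: a 10-photon Fock state through 70% loss.
p = np.zeros(11); p[10] = 1.0
q = apply_loss(p, eta=0.3)
print(np.round(q, 4), "mean photons:", round(sum(m * q[m] for m in range(11)), 3))
```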
- Optimizing the Noise in Self-Supervised Learning: from Importance Sampling to Noise-Contrastive Estimation [80.07065346699005]
It is widely assumed that the optimal noise distribution should be made equal to the data distribution, as in Generative Adversarial Networks (GANs).
We turn to Noise-Contrastive Estimation, which grounds this self-supervised task as an estimation problem of an energy-based model of the data.
We soberly conclude that the optimal noise may be hard to sample from, and the gain in efficiency can be modest compared to choosing the noise distribution equal to the data's.
arXiv Detail & Related papers (2023-01-23T19:57:58Z)
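A minimal sketch of noise-contrastive estimation itself, the general technique the entry above builds on: fit an unnormalized (energy-based) model by logistic discrimination of data from noise samples. The 1-D Gaussian model and all parameter names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def nce_fit_gaussian(data, n_noise_per_data=5, lr=0.05, steps=2000):
    """NCE for the unnormalized model log p(x) = -(x - mu)^2 / 2 + c,
    with a standard-normal noise distribution.  Learns both mu and the
    log-normalizer c by discriminating data from noise samples."""
    nu = n_noise_per_data
    noise = rng.standard_normal(nu * len(data))
    mu, c = 0.0, 0.0

    def G(x):  # log-model minus log-noise, minus log of the noise/data ratio
        log_model = -0.5 * (x - mu) ** 2 + c
        log_noise = -0.5 * x**2 - 0.5 * np.log(2 * np.pi)
        return log_model - log_noise - np.log(nu)

    for _ in range(steps):
        sd = 1 / (1 + np.exp(-G(data)))
        sn = 1 / (1 + np.exp(-G(noise)))
        # Gradients of the NCE log-likelihood w.r.t. mu and c.
        g_mu = np.sum((1 - sd) * (data - mu)) - np.sum(sn * (noise - mu))
        g_c = np.sum(1 - sd) - np.sum(sn)
        mu += lr * g_mu / len(data)
        c += lr * g_c / len(data)
    return mu, c

data = rng.normal(1.5, 1.0, size=2000)
mu, c = nce_fit_gaussian(data)
print("mu ~", round(mu, 3), " c ~", round(c, 3))  # c should approach -0.5*log(2*pi)
```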
- PoGaIN: Poisson-Gaussian Image Noise Modeling from Paired Samples [9.22047303381213]
We derive a novel, cumulant-based approach for Poisson-Gaussian noise modeling from paired image samples.
We show its improved performance over different baselines with special emphasis on MSE, effect of outliers, image dependence and bias.
arXiv Detail & Related papers (2022-10-10T17:34:49Z)
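For the Poisson-Gaussian entry above, here is a simpler moment-based cousin of the cumulant approach, under an assumed (hypothetical) parameterization y = alpha * Poisson(x / alpha) + sigma * N(0, 1), for which E[y|x] = x and Var[y|x] = alpha * x + sigma^2:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_pg(x, alpha, sigma):
    """Hypothetical Poisson-Gaussian model: y = alpha * Poisson(x/alpha)
    + sigma * N(0,1), so E[y|x] = x and Var[y|x] = alpha*x + sigma**2."""
    return alpha * rng.poisson(x / alpha) + sigma * rng.standard_normal(x.shape)

def fit_pg_moments(x, y):
    """Method-of-moments fit from paired samples: regress the squared
    residual (y - x)^2 on x; slope -> alpha, intercept -> sigma^2.
    (A simpler cousin of the paper's cumulant-based estimator.)"""
    r2 = (y - x) ** 2
    slope, intercept = np.polyfit(x, r2, deg=1)
    return slope, np.sqrt(max(intercept, 0.0))

x = rng.uniform(10.0, 200.0, size=200_000)   # clean "image" intensities
y = simulate_pg(x, alpha=2.0, sigma=5.0)     # noisy observations
alpha_hat, sigma_hat = fit_pg_moments(x, y)
print(f"alpha ~ {alpha_hat:.2f}, sigma ~ {sigma_hat:.2f}")
```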
- The Optimal Noise in Noise-Contrastive Learning Is Not What You Think [80.07065346699005]
We show that deviating from the common assumption that the noise distribution should equal the data distribution can actually lead to better statistical estimators.
In particular, the optimal noise distribution is different from the data's and even from a different family.
arXiv Detail & Related papers (2022-03-02T13:59:20Z)
- SNIPS: Solving Noisy Inverse Problems Stochastically [25.567566997688044]
We introduce a novel algorithm dubbed SNIPS, which draws samples from the posterior distribution of any linear inverse problem.
Our solution incorporates ideas from Langevin dynamics and Newton's method, and exploits a pre-trained minimum mean squared error (MMSE) Gaussian denoiser.
We show that the samples produced are sharp, detailed and consistent with the given measurements, and their diversity exposes the inherent uncertainty in the inverse problem being solved.
arXiv Detail & Related papers (2021-05-31T13:33:21Z)
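A bare-bones stand-in for the flavor of posterior sampling in SNIPS, using a standard Gaussian prior so the log-posterior gradient is exact; the actual method instead drives annealed Langevin dynamics with a pre-trained MMSE Gaussian denoiser.

```python
import numpy as np

rng = np.random.default_rng(2)

def langevin_posterior_samples(A, y, sigma, step=5e-4, n_steps=20_000, burn=10_000):
    """Unadjusted Langevin sampling of p(x | y) for y = A x + sigma * noise
    with a standard Gaussian prior on x.  A toy stand-in for SNIPS-style
    posterior sampling; no annealing or denoiser is used here."""
    n = A.shape[1]
    x = np.zeros(n)
    samples = []
    for t in range(n_steps):
        grad = A.T @ (y - A @ x) / sigma**2 - x   # exact log-posterior gradient
        x = x + 0.5 * step * grad + np.sqrt(step) * rng.standard_normal(n)
        if t >= burn:
            samples.append(x.copy())
    return np.array(samples)

# Toy usage: recover a 5-dim signal from 3 noisy linear measurements.
A = rng.standard_normal((3, 5))
x_true = rng.standard_normal(5)
y = A @ x_true + 0.1 * rng.standard_normal(3)
S = langevin_posterior_samples(A, y, sigma=0.1)
print("posterior mean:", np.round(S.mean(axis=0), 2))
print("posterior std :", np.round(S.std(axis=0), 2))  # spread = uncertainty
```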
- Quasiprobability decompositions with reduced sampling overhead [4.38301148531795]
Quantum error mitigation techniques can reduce noise on current quantum hardware without the need for fault-tolerant quantum error correction.
We present a new algorithm based on mathematical optimization that aims to choose the quasiprobability decomposition in a noise-aware manner.
arXiv Detail & Related papers (2021-01-22T19:00:06Z)
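The sampling-overhead mechanics behind quasiprobability decompositions, as a generic sketch (the paper's contribution is choosing the decomposition, which is not shown here): write the target as sum_i q_i * E[term_i] with possibly negative q_i, sample i with probability |q_i|/gamma, and reweight by gamma * sign(q_i), where gamma = sum_i |q_i| governs the variance blow-up.

```python
import numpy as np

rng = np.random.default_rng(3)

def quasiprob_estimate(q, sample_term, n_shots):
    """Monte-Carlo estimator for sum_i q[i] * E[term_i] where some q[i]
    are negative.  Draw i with probability |q_i|/gamma and multiply the
    outcome by gamma * sign(q_i); the cost is a variance blow-up of order
    gamma**2.  Generic scheme; the paper optimizes the decomposition."""
    q = np.asarray(q, dtype=float)
    gamma = np.abs(q).sum()                 # sampling overhead
    idx = rng.choice(len(q), size=n_shots, p=np.abs(q) / gamma)
    outcomes = np.array([sample_term(i) for i in idx])
    return gamma * np.mean(np.sign(q[idx]) * outcomes), gamma

# Toy usage: target 0.81 written as 1.3*E[term_0] - 0.4*E[term_1],
# where each term yields +/-1 shots with mean 0.9.
means = [0.9, 0.9]   # 1.3*0.9 - 0.4*0.9 = 0.81
def sample_term(i):
    return 1 if rng.random() < (1 + means[i]) / 2 else -1

est, gamma = quasiprob_estimate([1.3, -0.4], sample_term, n_shots=200_000)
print(f"estimate ~ {est:.3f} (target 0.81), overhead gamma = {gamma}")
```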
- Learning based signal detection for MIMO systems with unknown noise statistics [84.02122699723536]
This paper aims to devise a generalized maximum likelihood (ML) estimator to robustly detect signals with unknown noise statistics.
In practice, there is little or no statistical knowledge of the system noise, which in many cases is non-Gaussian, impulsive, and not analytically tractable.
Our framework is driven by an unsupervised learning approach, where only the noise samples are required.
arXiv Detail & Related papers (2021-01-21T04:48:15Z)
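A toy version of generalized ML detection with unknown noise statistics, roughly in the spirit of the entry above: learn the noise density from noise-only samples (here with a kernel density estimate, one possible choice) and brute-force the symbol vector maximizing the resulting log-likelihood.

```python
import numpy as np
from itertools import product
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)

def detect_ml(y, H, constellation, noise_logpdf):
    """Generalized ML detection: among all symbol vectors s, pick the one
    maximizing sum_k log f(y_k - (H s)_k), where f is a noise density
    learned from noise-only samples.  Brute force over a tiny constellation;
    a toy stand-in for the paper's unsupervised learning framework."""
    best, best_ll = None, -np.inf
    for s in product(constellation, repeat=H.shape[1]):
        ll = noise_logpdf(y - H @ np.array(s)).sum()
        if ll > best_ll:
            best, best_ll = np.array(s), ll
    return best

# Non-Gaussian, impulsive noise: Gaussian mixture with heavy outliers.
def draw_noise(size):
    out = rng.random(size) < 0.1
    return np.where(out, 5.0 * rng.standard_normal(size), 0.3 * rng.standard_normal(size))

kde = gaussian_kde(draw_noise(5000))        # density learned from noise samples only
noise_logpdf = lambda r: np.log(kde(r) + 1e-12)

H = rng.standard_normal((4, 2))
s_true = np.array([1.0, -1.0])              # BPSK symbols
y = H @ s_true + draw_noise(4)
print("detected:", detect_ml(y, H, [-1.0, 1.0], noise_logpdf))
```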
- Analysis and Design of Thompson Sampling for Stochastic Partial Monitoring [91.22679787578438]
We present a novel Thompson-sampling-based algorithm for partial monitoring.
We prove that the new algorithm achieves the logarithmic problem-dependent expected pseudo-regret $\mathrm{O}(\log T)$ for a linearized variant of the problem with local observability.
arXiv Detail & Related papers (2020-06-17T05:48:33Z)
- On Thompson Sampling with Langevin Algorithms [106.78254564840844]
Thompson sampling for multi-armed bandit problems enjoys favorable performance in both theory and practice.
Computationally, however, it suffers from a significant limitation: the need to draw samples from posterior distributions at every iteration.
We propose two Markov Chain Monte Carlo (MCMC) methods tailored to Thompson sampling to address this issue.
arXiv Detail & Related papers (2020-02-23T22:35:29Z)
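A minimal sketch of the MCMC-inside-Thompson-sampling idea from the last entry: replace the exact posterior draw in a Gaussian bandit with a short run of unadjusted Langevin dynamics. Step sizes, warm starts, and iteration counts here are ad hoc; the paper supplies principled schedules with guarantees.

```python
import numpy as np

rng = np.random.default_rng(5)

def langevin_posterior_sample(rewards, prior_var=1.0, noise_var=1.0,
                              base_step=0.01, n_steps=200):
    """Approximate posterior sample for a Gaussian arm mean via unadjusted
    Langevin dynamics in place of the exact conjugate draw."""
    n = len(rewards)
    step = base_step / (1 + n)                      # keep the dynamics stable
    theta = float(np.mean(rewards)) if n else 0.0   # warm start at the data mean
    s = float(np.sum(rewards))
    for _ in range(n_steps):
        grad = -theta / prior_var + (s - n * theta) / noise_var
        theta += 0.5 * step * grad + np.sqrt(step) * rng.standard_normal()
    return theta

def thompson_bandit(true_means, horizon=500):
    """Gaussian bandit where every posterior draw comes from the Langevin
    routine above rather than a closed-form sampler."""
    rewards = [[] for _ in true_means]
    for _ in range(horizon):
        a = int(np.argmax([langevin_posterior_sample(r) for r in rewards]))
        rewards[a].append(true_means[a] + rng.standard_normal())
    return [len(r) for r in rewards]

print("pulls per arm:", thompson_bandit([0.2, 0.5, 1.0]))
```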
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.