Classical models may be a better explanation of the Jiuzhang 1.0
Gaussian Boson Sampler than its targeted squeezed light model
- URL: http://arxiv.org/abs/2207.10058v6
- Date: Sun, 30 Jul 2023 03:44:08 GMT
- Title: Classical models may be a better explanation of the Jiuzhang 1.0
Gaussian Boson Sampler than its targeted squeezed light model
- Authors: Javier Martínez-Cifuentes, K. M. Fonseca-Romero, Nicolás Quesada
- Abstract summary: We propose an alternative classical hypothesis for the validation of the Jiuzhang 1.0 and Jiuzhang 2.0 experiments.
Our results provide a new hypothesis that should be considered in the validation of future GBS experiments.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, Zhong et al. performed landmark Gaussian boson sampling experiments
with up to 144 modes using threshold detectors. The authors claim to have
achieved quantum computational advantage with the implementation of these
experiments, named Jiuzhang 1.0 and Jiuzhang 2.0. Their experimental results
are validated against several classical hypotheses and adversaries using tests
such as the comparison of statistical correlations between modes, Bayesian
hypothesis testing and the Heavy Output Generation (HOG) test. We propose an
alternative classical hypothesis for the validation of these experiments using
the probability distribution of mixtures of coherent states sent into a lossy
interferometer; these input mixed states, which we term squashed states, have
vacuum fluctuations in one quadrature and excess fluctuations in the other. We
find that for configurations in the high photon number density regime, the
comparison of statistical correlations does not tell apart the ground truth of
the experiment (two-mode squeezed states sent into an interferometer) from our
alternative hypothesis. The Bayesian test indicates that, for all
configurations excepting Jiuzhang 1.0, the ground truth is a more likely
explanation of the experimental data than our alternative hypothesis. A similar
result is obtained for the HOG test: for all configurations of Jiuzhang 2.0,
the test indicates that the experimental samples have higher ground truth
probability than the samples obtained from our alternative distribution; for
Jiuzhang 1.0 the test is inconclusive. Our results provide a new hypothesis
that should be considered in the validation of future GBS experiments, and shed
light on the need to identify proper metrics to verify quantum advantage in
the context of GBS. They also indicate that a classical explanation of the
Jiuzhang 1.0 experiment, lacking any quantum features, has not been ruled out.
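The Bayesian test described in the abstract compares how likely the observed samples are under the ground truth (squeezed light into an interferometer) versus the alternative squashed-state hypothesis. A minimal sketch of this idea, assuming one already has per-sample probabilities under each hypothesis (the numeric values below are purely illustrative, not taken from the experiments):

```python
import math

def log_bayes_factor(p_ground_truth, p_alternative):
    """Sum of per-sample log-probability ratios.

    A positive total favors the ground-truth hypothesis; a negative
    total favors the alternative. This is the standard log Bayes
    factor for i.i.d. samples under equal prior odds.
    """
    return sum(math.log(p_gt / p_alt)
               for p_gt, p_alt in zip(p_ground_truth, p_alternative))

# Illustrative usage with made-up probabilities for three samples:
p_gt = [0.010, 0.008, 0.020]   # probability of each sample under ground truth
p_alt = [0.005, 0.008, 0.010]  # probability under the alternative hypothesis
lbf = log_bayes_factor(p_gt, p_alt)
# lbf > 0 here, i.e. these (fabricated) samples would favor the ground truth
```

In practice the hard part is computing the sample probabilities themselves, which for GBS with threshold detectors involves Torontonian-type quantities; this sketch only shows how the per-sample probabilities are combined into a verdict.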
Related papers
- Validation tests of Gaussian boson samplers with photon-number resolving detectors [44.99833362998488]
We apply phase-space simulation methods to partially verify recent experiments on Gaussian boson sampling (GBS) implementing photon-number resolving (PNR) detectors.
We show that the data as a whole shows discrepancies with theoretical predictions for perfect squeezing.
We suggest that such validation tests could form the basis of feedback methods to improve GBS quantum computer experiments.
arXiv Detail & Related papers (2024-11-18T01:41:22Z) - Gaussian boson sampling validation via detector binning [0.0]
We propose binned-detector probability distributions as a suitable quantity to statistically validate GBS experiments.
We show how to compute such distributions by leveraging their connection with their respective characteristic function.
We also illustrate how binned-detector probability distributions behave when Haar-averaged over all possible interferometric networks.
arXiv Detail & Related papers (2023-10-27T12:55:52Z) - Testing the postulates of quantum mechanics with coherent states of light and homodyne detection [0.4221619479687067]
We perform the first test using coherent states of light in a three-arm interferometer combined with homodyne detection.
For testing Born's rule, we find that the third order interference is bounded to be $\kappa = 0.002 \pm 0.004$.
We also use our experiment to test Glauber's theory of optical coherence.
arXiv Detail & Related papers (2023-08-07T10:07:33Z) - Simulating Gaussian boson sampling quantum computers [68.8204255655161]
We briefly review recent theoretical methods to simulate experimental Gaussian boson sampling networks.
We focus mostly on methods that use phase-space representations of quantum mechanics.
A brief overview of the theory of GBS, recent experiments and other types of methods are also presented.
arXiv Detail & Related papers (2023-08-02T02:03:31Z) - Goodness of fit by Neyman-Pearson testing [1.000352363234953]
Neyman-Pearson strategy for hypothesis testing can be employed for goodness of fit if the alternative hypothesis is selected from data.
New Physics Learning Machine (NPLM) methodology has been developed to target the detection of new physical effects in the context of high energy physics collider experiments.
NPLM emerges as the more sensitive test to small departures of the data from the expected distribution and not biased towards detecting specific types of anomalies.
arXiv Detail & Related papers (2023-05-23T15:01:45Z) - Validation tests of GBS quantum computers give evidence for quantum advantage with a decoherent target [62.997667081978825]
We use positive-P phase-space simulations of grouped count probabilities as a fingerprint for verifying multi-mode data.
We show how one can disprove faked data, and apply this to a classical count algorithm.
arXiv Detail & Related papers (2022-11-07T12:00:45Z) - With Little Power Comes Great Responsibility [54.96675741328462]
Underpowered experiments make it more difficult to discern the difference between statistical noise and meaningful model improvements.
Small test sets mean that most attempted comparisons to state of the art models will not be adequately powered.
For machine translation, we find that typical test sets of 2000 sentences have approximately 75% power to detect differences of 1 BLEU point.
arXiv Detail & Related papers (2020-10-13T18:00:02Z) - Tracking disease outbreaks from sparse data with Bayesian inference [55.82986443159948]
The COVID-19 pandemic provides new motivation for estimating the empirical rate of transmission during an outbreak.
Standard methods struggle to accommodate the partial observability and sparse data common at finer scales.
We propose a Bayesian framework which accommodates partial observability in a principled manner.
arXiv Detail & Related papers (2020-09-12T20:37:33Z) - Estimating the number and effect sizes of non-null hypotheses [14.34147140416535]
Knowing the distribution of effect sizes allows us to calculate the power (type II error) of any experimental design.
Our estimator can be used to guarantee the number of discoveries that will be made using a given experimental design in a future experiment.
arXiv Detail & Related papers (2020-02-17T23:20:21Z) - On the Replicability of Combining Word Embeddings and Retrieval Models [71.18271398274513]
We replicate recent experiments attempting to demonstrate an attractive hypothesis about the use of the Fisher kernel framework.
Specifically, the hypothesis was that the use of a mixture model of von Mises-Fisher (VMF) distributions would be beneficial because of the focus on cosine distances of both VMF and the vector space model.
arXiv Detail & Related papers (2020-01-13T19:01:07Z) - Using Randomness to decide among Locality, Realism and Ergodicity [91.3755431537592]
An experiment is proposed to find out, or at least to get an indication about, which one is false.
The results of such experiment would be important not only to the foundations of Quantum Mechanics.
arXiv Detail & Related papers (2020-01-06T19:26:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.