Estimating the Percentage of GBS Advantage in Gaussian Expectation Problems
- URL: http://arxiv.org/abs/2502.19362v2
- Date: Thu, 27 Feb 2025 08:15:47 GMT
- Title: Estimating the Percentage of GBS Advantage in Gaussian Expectation Problems
- Authors: Jørgen Ellegaard Andersen, Shan Shan
- Abstract summary: We introduce algorithms that use GBS samples to approximate Gaussian expectation problems. We find a non-empty subset of the problem space where these algorithms achieve exponential speedup over the standard Monte Carlo (MC) method. For certain special cases, our methods consistently outperform MC across nearly 100% of the problem space.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gaussian Boson Sampling (GBS), which can be realized with a photonic quantum computing model, performs a special kind of sampling task. In [4], we introduced algorithms that use GBS samples to approximate Gaussian expectation problems. We found a non-empty open subset of the problem space where these algorithms achieve exponential speedup over the standard Monte Carlo (MC) method. This speedup is defined in terms of the guaranteed sample size to reach the same accuracy $\epsilon$ and success probability $\delta$ under the $(\epsilon, \delta)$ multiplicative error approximation scheme. In this paper, we enhance our original approach by optimizing the average photon number in the GBS distribution to match the specific Gaussian expectation problem. We provide updated estimates of the guaranteed sample size for these improved algorithms and quantify the proportion of problem space where they outperform MC. Numerical results indicate that the proportion of the problem space where our improved algorithms have an advantage is substantial, and the advantage gained is significant. Notably, for certain special cases, our methods consistently outperform MC across nearly 100% of the problem space.
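To make the classical baseline concrete, the sketch below shows a plain Monte Carlo estimator for a Gaussian expectation together with a simple Chebyshev-style guaranteed sample size for the $(\epsilon, \delta)$ multiplicative error criterion. The integrand, dimensions, and the bound itself are illustrative assumptions; the paper's GBS-based estimators and its sharper sample-size bounds are not reproduced here.

```python
import numpy as np

def mc_gaussian_expectation(f, mean, cov, n_samples, rng=None):
    """Plain Monte Carlo estimate of E[f(X)] for X ~ N(mean, cov)."""
    rng = np.random.default_rng(rng)
    x = rng.multivariate_normal(mean, cov, size=n_samples)
    return float(np.mean([f(xi) for xi in x]))

def chebyshev_sample_size(rel_variance, eps, delta):
    """Sample size n with P(|estimate - I| <= eps*|I|) >= 1 - delta by Chebyshev's
    inequality, given rel_variance = Var[f(X)] / E[f(X)]^2 (pilot-estimated below)."""
    return int(np.ceil(rel_variance / (eps ** 2 * delta)))

# Placeholder integrand: an exponential of a quadratic form, the kind of Gaussian
# expectation considered in this line of work (coefficients chosen arbitrarily).
A = np.array([[0.10, 0.02],
              [0.02, 0.05]])
f = lambda x: np.exp(x @ A @ x)
mean, cov = np.zeros(2), np.eye(2)
eps, delta = 0.1, 0.05

# Pilot run to estimate the relative variance, which is unknown in general.
pilot = np.random.default_rng(0).multivariate_normal(mean, cov, size=2000)
vals = np.exp(np.einsum("ni,ij,nj->n", pilot, A, pilot))
rel_var = vals.var() / vals.mean() ** 2

n_guaranteed = chebyshev_sample_size(rel_var, eps, delta)
estimate = mc_gaussian_expectation(f, mean, cov, n_guaranteed, rng=1)
print(f"Chebyshev-guaranteed n = {n_guaranteed}, MC estimate = {estimate:.4f}")
```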
Related papers
- Using Gaussian Boson Samplers to Approximate Gaussian Expectation Problems [0.0]
We show that two estimators using GBS samples can bring an exponential speedup over the plain Monte Carlo (MC) estimator. Precisely speaking, the exponential speedup is defined in terms of the guaranteed sample size for these estimators to reach the same level of accuracy.
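As a rough formalization (notation introduced here, not quoted from the paper), write $\hat{I}$ for an estimator of $I = \mathbb{E}[f(X)]$ and $N_{\mathrm{MC}}(\epsilon,\delta)$, $N_{\mathrm{GBS}}(\epsilon,\delta)$ for the guaranteed sample sizes of the MC and GBS-based estimators; then

```latex
\Pr\bigl(\,|\hat{I} - I| \le \epsilon\,|I|\,\bigr) \ge 1 - \delta,
\qquad
\mathrm{speedup}(\epsilon,\delta) = \frac{N_{\mathrm{MC}}(\epsilon,\delta)}{N_{\mathrm{GBS}}(\epsilon,\delta)},
```

and an exponential speedup on a problem instance means this ratio grows exponentially with the problem size.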
arXiv Detail & Related papers (2025-02-26T17:30:49Z) - Gaussian Processes Sampling with Sparse Grids under Additive Schwarz Preconditioner [6.408773096179187]
We propose a scalable algorithm for sampling random realizations of the prior and posterior of GP models.
The proposed algorithm leverages inducing points approximation with sparse grids, as well as additive Schwarz preconditioners.
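As a rough illustration of the inducing-point idea only (without the sparse grids or the additive Schwarz preconditioner the paper actually relies on), the following numpy sketch draws approximate GP prior samples through a Nyström-style map; the kernel, lengthscale, and inducing locations are placeholder choices.

```python
import numpy as np

def rbf_kernel(x, y, lengthscale=0.5):
    """Squared-exponential kernel on 1-D inputs."""
    d = x[:, None] - y[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_gp_prior_inducing(x, z, n_draws=3, jitter=1e-8, rng=0):
    """Approximate draws from a GP prior at points x using inducing points z:
    f(x) ~= K_xz K_zz^{-1} u with u ~ N(0, K_zz) (a Nystrom-style approximation)."""
    rng = np.random.default_rng(rng)
    K_zz = rbf_kernel(z, z) + jitter * np.eye(len(z))
    K_xz = rbf_kernel(x, z)
    L = np.linalg.cholesky(K_zz)
    u = L @ rng.standard_normal((len(z), n_draws))   # u ~ N(0, K_zz)
    return K_xz @ np.linalg.solve(K_zz, u)           # conditional mean of f(x) given u

x = np.linspace(0, 1, 200)
z = np.linspace(0, 1, 20)   # inducing points (a sparse grid would go here instead)
draws = sample_gp_prior_inducing(x, z)
print(draws.shape)  # (200, 3)
```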
arXiv Detail & Related papers (2024-08-01T00:19:36Z) - Enhancing Gaussian Process Surrogates for Optimization and Posterior Approximation via Random Exploration [2.984929040246293]
We propose novel noise-free Bayesian optimization strategies that rely on a random exploration step to enhance the accuracy of Gaussian process surrogate models.
The new algorithms retain the ease of implementation of the classical GP-UCB, but an additional exploration step facilitates their convergence.
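A minimal sketch of GP-UCB with an added random exploration step is given below; the coin-flip exploration schedule, kernel, and $\beta$ value are assumptions made for illustration and do not reproduce the paper's algorithms or their convergence guarantees.

```python
import numpy as np

def rbf(x, y, ls=0.2):
    d = x[:, None] - y[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """Standard GP regression posterior mean and standard deviation (RBF kernel)."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    K_star = rbf(x_test, x_train)
    mu = K_star @ np.linalg.solve(K, y_train)
    v = np.linalg.solve(K, K_star.T)
    var = 1.0 - np.sum(K_star * v.T, axis=1)   # prior variance of the RBF kernel is 1
    return mu, np.sqrt(np.clip(var, 0.0, None))

def gp_ucb_with_random_exploration(f, grid, n_iter=30, beta=2.0, p_explore=0.1, rng=0):
    """GP-UCB on a finite grid, with an occasional uniformly random query
    (the 'random exploration step'; here scheduled by a simple coin flip)."""
    rng = np.random.default_rng(rng)
    x_train = np.array([grid[rng.integers(len(grid))]])
    y_train = np.array([f(x_train[0])])
    for _ in range(n_iter):
        if rng.random() < p_explore:                        # random exploration
            x_next = grid[rng.integers(len(grid))]
        else:                                               # usual UCB acquisition
            mu, sd = gp_posterior(x_train, y_train, grid)
            x_next = grid[np.argmax(mu + np.sqrt(beta) * sd)]
        x_train = np.append(x_train, x_next)
        y_train = np.append(y_train, f(x_next))
    return x_train[np.argmax(y_train)]

grid = np.linspace(0, 1, 200)
print(gp_ucb_with_random_exploration(lambda x: -(x - 0.3) ** 2, grid))
```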
arXiv Detail & Related papers (2024-01-30T14:16:06Z) - Best Arm Identification with Fixed Budget: A Large Deviation Perspective [54.305323903582845]
We present sred, a truly adaptive algorithm that can reject arms in any round based on the observed empirical gaps between the rewards of various arms.
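The sketch below shows a generic fixed-budget elimination loop in this spirit: arms are pulled in phases and the arm whose empirical mean has the largest gap to the current leader is rejected. The phase lengths and the reward model are illustrative assumptions; this is not the paper's exact sred procedure.

```python
import numpy as np

def fixed_budget_elimination(pull, n_arms, budget, rng=0):
    """Fixed-budget best-arm identification in the spirit of successive rejects:
    pull every surviving arm in phases, then reject the arm whose empirical mean
    has the largest gap to the current leader. (Schematic; not the exact sred rule.)"""
    rng = np.random.default_rng(rng)
    active = list(range(n_arms))
    counts, sums = np.zeros(n_arms), np.zeros(n_arms)
    pulls_per_phase = max(1, budget // (n_arms * n_arms))   # crude, even phase budget
    spent = 0
    while len(active) > 1 and spent < budget:
        for a in active:
            for _ in range(pulls_per_phase):
                if spent >= budget:
                    break
                sums[a] += pull(a)
                counts[a] += 1
                spent += 1
        means = sums[active] / np.maximum(counts[active], 1)
        gaps = means.max() - means            # empirical gaps to the leader
        active.pop(int(np.argmax(gaps)))      # reject the arm with the largest gap
    return active[0]

# Bernoulli arms with hypothetical means; arm 3 is best.
arm_means = [0.30, 0.50, 0.45, 0.70]
reward_rng = np.random.default_rng(1)
best_arm = fixed_budget_elimination(lambda a: float(reward_rng.random() < arm_means[a]),
                                    len(arm_means), budget=4000)
print("identified best arm:", best_arm)
```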
arXiv Detail & Related papers (2023-12-19T13:17:43Z) - Beyond EM Algorithm on Over-specified Two-Component Location-Scale Gaussian Mixtures [29.26015093627193]
We develop the Exponential Location Update (ELU) algorithm to efficiently explore the curvature of the negative log-likelihood functions.
We demonstrate that the ELU algorithm converges to the final statistical radius of the models after a logarithmic number of iterations.
arXiv Detail & Related papers (2022-05-23T06:49:55Z) - Misspecified Gaussian Process Bandit Optimization [59.30399661155574]
Kernelized bandit algorithms have shown strong empirical and theoretical performance for this problem.
We introduce a misspecified kernelized bandit setting where the unknown function can be $\epsilon$-uniformly approximated by a function with a bounded norm in some Reproducing Kernel Hilbert Space (RKHS).
We show that our algorithm achieves optimal dependence on $\epsilon$ with no prior knowledge of misspecification.
arXiv Detail & Related papers (2021-11-09T09:00:02Z) - Quantum Sub-Gaussian Mean Estimator [0.0]
We present a new quantum algorithm for estimating the mean of a real-valued random variable.
Our estimator achieves a nearly-optimal quadratic speedup over the number of classical i.i.d. samples.
arXiv Detail & Related papers (2021-08-27T08:34:26Z) - Stochastic Saddle-Point Optimization for Wasserstein Barycenters [69.68068088508505]
We consider the population barycenter problem for random probability measures supported on a finite set of points and generated by an online stream of data.
We employ the structure of the problem and obtain a convex-concave saddle-point reformulation of this problem.
In the setting when the distribution of random probability measures is discrete, we propose an optimization algorithm and estimate its complexity.
arXiv Detail & Related papers (2020-06-11T19:40:38Z) - Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z) - On Thompson Sampling with Langevin Algorithms [106.78254564840844]
Thompson sampling for multi-armed bandit problems enjoys favorable performance in both theory and practice.
It suffers from a significant limitation computationally, arising from the need for samples from posterior distributions at every iteration.
We propose two Markov Chain Monte Carlo (MCMC) methods tailored to Thompson sampling to address this issue.
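As a toy illustration of the idea (not the specific MCMC schemes proposed in the paper), the sketch below runs Thompson sampling on a Gaussian bandit where each round's posterior draw comes from a short unadjusted Langevin run; the priors, step-size rule, and number of Langevin steps are placeholder choices.

```python
import numpy as np

def langevin_posterior_sample(obs, prior_var=1.0, noise_var=1.0, n_steps=50, rng=None):
    """Approximate draw from the posterior over an arm's mean reward (Gaussian prior,
    Gaussian likelihood) via unadjusted Langevin dynamics:
        theta <- theta + step * grad_log_post(theta) + sqrt(2 * step) * xi.
    The step size is tied to the posterior precision so the chain stays stable."""
    rng = np.random.default_rng(rng)
    n, s = len(obs), float(np.sum(obs))
    precision = 1.0 / prior_var + n / noise_var
    step = 0.5 / precision                      # step * precision = 0.5 < 2 (stable)
    theta = 0.0
    for _ in range(n_steps):
        grad = -theta / prior_var + (s - n * theta) / noise_var   # d/dtheta log posterior
        theta += step * grad + np.sqrt(2.0 * step) * rng.standard_normal()
    return theta

def thompson_with_langevin(true_means, horizon=500, rng=0):
    """Thompson sampling where each round's posterior sample comes from a short Langevin run."""
    rng = np.random.default_rng(rng)
    obs = [[] for _ in true_means]              # rewards observed per arm
    total = 0.0
    for _ in range(horizon):
        samples = [langevin_posterior_sample(np.asarray(o, dtype=float), rng=rng) for o in obs]
        a = int(np.argmax(samples))
        r = true_means[a] + rng.standard_normal()   # Gaussian reward
        obs[a].append(r)
        total += r
    return total / horizon

print("average reward:", thompson_with_langevin([0.0, 0.5, 1.0]))
```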
arXiv Detail & Related papers (2020-02-23T22:35:29Z) - Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
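A minimal sketch of the sketch-and-solve idea with a Gaussian embedding is shown below; the problem sizes are hypothetical, and the paper's iterative first-order methods and subsampled Hadamard transforms are not reproduced.

```python
import numpy as np

def gaussian_sketch_solve(A, b, sketch_rows, rng=0):
    """Sketch-and-solve least squares: compress (A, b) with a Gaussian random
    embedding S and solve the much smaller problem min ||S A x - S b||_2."""
    rng = np.random.default_rng(rng)
    m, _ = A.shape
    S = rng.standard_normal((sketch_rows, m)) / np.sqrt(sketch_rows)
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x

# Tall least-squares problem with hypothetical sizes.
rng = np.random.default_rng(42)
A = rng.standard_normal((20000, 50))
x_true = rng.standard_normal(50)
b = A @ x_true + 0.01 * rng.standard_normal(20000)

x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
x_sketch = gaussian_sketch_solve(A, b, sketch_rows=500)
print("relative error vs exact LS solution:",
      np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))
```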
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.