Using Gaussian Boson Samplers to Approximate Gaussian Expectation Problems
- URL: http://arxiv.org/abs/2502.19336v2
- Date: Thu, 27 Feb 2025 08:11:57 GMT
- Title: Using Gaussian Boson Samplers to Approximate Gaussian Expectation Problems
- Authors: Jørgen Ellegaard Andersen, Shan Shan
- Abstract summary: We show that two estimators using GBS samples can bring an exponential speedup over the plain Monte Carlo (MC) estimator. More precisely, the exponential speedup is defined in terms of the guaranteed sample size for these estimators to reach the same level of accuracy.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gaussian Boson Sampling (GBS) has shown advantages over classical methods for performing certain sampling tasks. To fully harness the computational power of GBS, there has been great interest in identifying its practical applications. In this study, we explore the use of GBS samples for computing a numerical approximation to the Gaussian expectation problem, that is, integrating a multivariate function against a Gaussian distribution. We propose two estimators using GBS samples and show that both can bring an exponential speedup over the plain Monte Carlo (MC) estimator. More precisely, the exponential speedup is defined in terms of the guaranteed sample size for these estimators to reach the same level of accuracy $\epsilon$ and the same success probability $\delta$ in the $(\epsilon, \delta)$ multiplicative error approximation scheme. We prove that there is an open and nonempty subset of the Gaussian expectation problem space on which such a computational advantage holds.
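For concreteness, the baseline in this comparison, the plain MC estimator, can be sketched in a few lines. The integrand `f`, the problem instance, and the Chebyshev-based sample-size bound below are illustrative assumptions, not the paper's GBS estimators.

```python
import numpy as np

def mc_gaussian_expectation(f, mu, cov, n, rng=None):
    """Plain Monte Carlo estimate of E[f(X)] for X ~ N(mu, cov)."""
    rng = np.random.default_rng(rng)
    x = rng.multivariate_normal(mu, cov, size=n)   # n i.i.d. Gaussian samples
    return f(x).mean()

# Chebyshev bound (illustrative): n >= Var[f] / (delta * (eps * E[f])^2) samples
# suffice for |estimate - E[f]| <= eps * |E[f]| with probability >= 1 - delta,
# which is the (eps, delta) multiplicative error scheme referenced above.
f = lambda x: np.exp(-np.sum(x**2, axis=-1))       # example integrand
mu, cov = np.zeros(3), np.eye(3)
print(mc_gaussian_expectation(f, mu, cov, n=100_000, rng=0))
```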
Related papers
- Estimating the Percentage of GBS Advantage in Gaussian Expectation Problems [0.0]
We introduce algorithms that use GBS samples to approximate Gaussian expectation problems. We find a non-empty subset of the problem space where these algorithms achieve exponential speedup over the standard Monte Carlo (MC) method. For certain special cases, our methods consistently outperform MC across nearly 100% of the problem space.
arXiv Detail & Related papers (2025-02-26T18:00:44Z) - Rethinking Approximate Gaussian Inference in Classification [25.021782278452005]
In classification tasks, softmax functions are ubiquitously used to produce predictive probabilities. We propose a simple change in the learning objective which allows the exact computation of predictive distributions. We evaluate our approach combined with several approximate Gaussian inference methods on large- and small-scale datasets.
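For context, the quantity at issue is a softmax (or sigmoid) averaged over a Gaussian, which generally has no closed form. A minimal sketch of the binary case, comparing Monte Carlo with the classic probit-style approximation; this is standard background, not the paper's proposed objective.

```python
import numpy as np

def mc_expected_sigmoid(mu, var, n=100_000, rng=0):
    """Monte Carlo estimate of E[sigmoid(a)] for a ~ N(mu, var)."""
    a = np.random.default_rng(rng).normal(mu, np.sqrt(var), size=n)
    return (1.0 / (1.0 + np.exp(-a))).mean()

def probit_approx_expected_sigmoid(mu, var):
    """MacKay's approximation: E[sigmoid(a)] ~ sigmoid(mu / sqrt(1 + pi*var/8))."""
    return 1.0 / (1.0 + np.exp(-mu / np.sqrt(1.0 + np.pi * var / 8.0)))

print(mc_expected_sigmoid(1.0, 4.0))            # both approximately 0.65
print(probit_approx_expected_sigmoid(1.0, 4.0))
```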
arXiv Detail & Related papers (2025-02-05T17:03:49Z) - Bayesian Circular Regression with von Mises Quasi-Processes [57.88921637944379]
In this work we explore a family of expressive and interpretable distributions over circle-valued random functions.
For posterior inference, we introduce a new Stratonovich-like augmentation that lends itself to fast Gibbs sampling.
We present experiments applying this model to the prediction of wind directions and the percentage of the running gait cycle as a function of joint angles.
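As background for this entry, the von Mises density on the circle and sampling from it are standard. A minimal numpy sketch follows; the paper's quasi-process model and Gibbs sampler are not reproduced here.

```python
import numpy as np

def vonmises_pdf(theta, mu, kappa):
    """von Mises density on the circle: exp(kappa*cos(theta - mu)) / (2*pi*I0(kappa))."""
    return np.exp(kappa * np.cos(theta - mu)) / (2.0 * np.pi * np.i0(kappa))

rng = np.random.default_rng(0)
samples = rng.vonmises(mu=0.5, kappa=4.0, size=10_000)   # circular samples
# Circular mean via the resultant vector, the natural estimator for angles.
circ_mean = np.arctan2(np.sin(samples).mean(), np.cos(samples).mean())
print(circ_mean)   # ~0.5
```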
arXiv Detail & Related papers (2024-06-19T01:57:21Z) - Closed-form Filtering for Non-linear Systems [83.91296397912218]
We propose a new class of filters based on Gaussian PSD Models, which offer several advantages in terms of density approximation and computational efficiency.
We show that filtering can be efficiently performed in closed form when transitions and observations are Gaussian PSD Models.
Our proposed estimator enjoys strong theoretical guarantees, with estimation error that depends on the quality of the approximation and is adaptive to the regularity of the transition probabilities.
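The classical point of comparison is the linear-Gaussian case, where filtering is already closed-form via the Kalman recursion. A minimal sketch of one predict/update step; the Gaussian PSD model filters from the paper are not shown.

```python
import numpy as np

def kalman_step(m, P, A, Q, H, R, y):
    """One predict/update step for the linear-Gaussian model
    x_t = A x_{t-1} + N(0, Q),  y_t = H x_t + N(0, R)."""
    # Predict: propagate the Gaussian belief through the dynamics.
    m_pred, P_pred = A @ m, A @ P @ A.T + Q
    # Update: condition in closed form on the new observation y.
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = (np.eye(len(m)) - K @ H) @ P_pred
    return m_new, P_new

# One step of tracking a scalar random walk from a noisy measurement.
m, P = np.zeros(1), np.eye(1)
A = Q = H = R = np.eye(1)
print(kalman_step(m, P, A, Q, H, R, y=np.array([1.2])))
```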
arXiv Detail & Related papers (2024-02-15T08:51:49Z) - Langevin Monte Carlo for Contextual Bandits [72.00524614312002]
Langevin Monte Carlo Thompson Sampling (LMC-TS) is proposed to directly sample from the posterior distribution in contextual bandits.
We prove that the proposed algorithm achieves the same sublinear regret bound as the best Thompson sampling algorithms for a special case of contextual bandits.
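The core primitive here is the unadjusted Langevin step, which replaces exact posterior sampling inside Thompson sampling. A minimal sketch for a Gaussian linear-reward posterior; the step size, prior, and toy data are illustrative, not the paper's tuned setup.

```python
import numpy as np

def langevin_sample(grad_log_post, theta0, step=1e-3, n_steps=500, rng=0):
    """Unadjusted Langevin dynamics:
    theta <- theta + step * grad log p(theta|D) + sqrt(2*step) * N(0, I)."""
    rng = np.random.default_rng(rng)
    theta = theta0.copy()
    for _ in range(n_steps):
        noise = rng.standard_normal(theta.shape)
        theta = theta + step * grad_log_post(theta) + np.sqrt(2.0 * step) * noise
    return theta

# Toy linear-reward posterior: r = X @ theta + N(0, 1), standard normal prior.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 5))
theta_true = np.arange(5, dtype=float)
r = X @ theta_true + rng.standard_normal(50)
grad_log_post = lambda th: X.T @ (r - X @ th) - th   # Gaussian log-posterior gradient
theta_sample = langevin_sample(grad_log_post, np.zeros(5))
# In LMC-TS, the arm maximizing x_arm @ theta_sample would be pulled next.
```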
arXiv Detail & Related papers (2022-06-22T17:58:23Z) - Finite-Time Regret of Thompson Sampling Algorithms for Exponential Family Multi-Armed Bandits [88.21288104408556]
We study the regret of Thompson sampling (TS) algorithms for exponential family bandits, where the reward distribution is from a one-dimensional exponential family.
We propose a Thompson sampling algorithm, termed Expulli, which uses a novel sampling distribution to avoid under-estimation of the optimal arm.
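For reference, vanilla Thompson sampling for the Bernoulli member of the exponential family reads as below. This is a textbook sketch; Expulli's modified sampling distribution is not reproduced.

```python
import numpy as np

def thompson_bernoulli(true_means, horizon=10_000, rng=0):
    """Beta-Bernoulli Thompson sampling: sample a mean per arm, pull the argmax."""
    rng = np.random.default_rng(rng)
    k = len(true_means)
    wins, losses = np.ones(k), np.ones(k)        # Beta(1, 1) priors
    for _ in range(horizon):
        arm = np.argmax(rng.beta(wins, losses))  # one posterior sample per arm
        reward = rng.random() < true_means[arm]
        wins[arm] += reward
        losses[arm] += 1 - reward
    return wins + losses - 2                     # pull counts per arm

print(thompson_bernoulli([0.3, 0.5, 0.7]))       # most pulls on the 0.7 arm
```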
arXiv Detail & Related papers (2022-06-07T18:08:21Z) - Probability Distribution of Hypervolume Improvement in Bi-objective Bayesian Optimization [5.586361810914231]
Hypervolume improvement (HVI) is commonly employed in multi-objective Bayesian optimization algorithms.
We provide the exact expression of HVI's probability distribution for bi-objective problems.
We propose a novel acquisition function, $\varepsilon$-PoHVI.
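In two dimensions the hypervolume and its improvement can be computed exactly, which is the setting of this paper. A minimal sketch, assuming a minimization convention and a fixed reference point; both are illustrative choices.

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Dominated hypervolume of a 2-D point set (minimization) w.r.t. a reference point."""
    pts = sorted({tuple(p) for p in points})          # sort by first objective
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:                                # keep only non-dominated steps
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

def hvi(new_point, pareto_set, ref):
    """Hypervolume improvement of adding new_point to the current Pareto set."""
    return (hypervolume_2d(list(pareto_set) + [new_point], ref)
            - hypervolume_2d(pareto_set, ref))

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
print(hvi((1.5, 1.5), front, ref=(5.0, 5.0)))   # 2.25
```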
arXiv Detail & Related papers (2022-05-11T13:59:35Z) - Efficient approximation of experimental Gaussian boson sampling [2.805766654291013]
Two recent landmark experiments have performed Gaussian boson sampling (GBS) with a non-programmable linear interferometer and threshold detectors on up to 144 output modes.
Here we give classical sampling algorithms with better total variation distance and Kullback-Leibler divergence than these experiments.
arXiv Detail & Related papers (2021-09-23T17:47:06Z) - Revisiting the Sample Complexity of Sparse Spectrum Approximation of Gaussian Processes [60.479499225746295]
We introduce a new scalable approximation for Gaussian processes with provable guarantees which hold simultaneously over its entire parameter space.
Our approximation is obtained from an improved sample complexity analysis for sparse spectrum Gaussian processes (SSGPs).
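The sparse spectrum construction approximates a stationary kernel with random Fourier features. A minimal sketch for the squared-exponential kernel; the feature count and lengthscale are illustrative.

```python
import numpy as np

def rff_features(X, n_features=500, lengthscale=1.0, rng=0):
    """Random Fourier features: phi(x) @ phi(y) ~ exp(-|x - y|^2 / (2 * l^2))."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    W = rng.standard_normal((d, n_features)) / lengthscale   # spectral frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)            # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.random.default_rng(1).standard_normal((5, 3))
phi = rff_features(X)
print(phi @ phi.T)   # approximates the exact RBF Gram matrix of X
```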
arXiv Detail & Related papers (2020-11-17T05:41:50Z) - Pathwise Conditioning of Gaussian Processes [72.61885354624604]
Conventional approaches for simulating Gaussian process posteriors view samples as draws from marginal distributions of process values at finite sets of input locations.
This distribution-centric characterization leads to generative strategies that scale cubically in the size of the desired random vector.
We show how a pathwise interpretation of conditioning gives rise to a general family of approximations that lend themselves to efficiently sampling Gaussian process posteriors.
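The pathwise view rests on Matheron's rule: a posterior draw is a prior draw plus a data-dependent correction. A minimal sketch with an RBF kernel; the kernel choice and noise level are illustrative assumptions.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def pathwise_posterior_draw(X, y, Xs, noise=1e-2, rng=0):
    """Matheron's rule:
    f_post(x) = f_prior(x) + K(x,X) (K(X,X) + s^2 I)^{-1} (y - f_prior(X) - eps)."""
    rng = np.random.default_rng(rng)
    n, m = len(X), len(Xs)
    K = rbf(np.vstack([X, Xs]), np.vstack([X, Xs]))
    f = rng.multivariate_normal(np.zeros(n + m), K + 1e-10 * np.eye(n + m))  # joint prior draw
    eps = np.sqrt(noise) * rng.standard_normal(n)                            # simulated noise
    Kxx = K[:n, :n] + noise * np.eye(n)
    correction = K[n:, :n] @ np.linalg.solve(Kxx, y - f[:n] - eps)
    return f[n:] + correction                                                # posterior sample at Xs

X = np.linspace(0, 1, 5)[:, None]
y = np.sin(6 * X[:, 0])
Xs = np.linspace(0, 1, 50)[:, None]
sample = pathwise_posterior_draw(X, y, Xs)
```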
arXiv Detail & Related papers (2020-11-08T17:09:37Z) - Faster Wasserstein Distance Estimation with the Sinkhorn Divergence [0.0]
The squared Wasserstein distance is a quantity used to compare probability distributions in a non-parametric setting.
In this work, we propose instead to estimate it with the Sinkhorn divergence.
We show that, for smooth densities, this estimator has a comparable sample complexity but allows higher regularization levels.
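The Sinkhorn divergence is built from entropic optimal transport, computed by alternating scaling iterations. A minimal sketch for discrete measures; the regularization strength and iteration count are illustrative, and small regularization would require log-domain stabilization.

```python
import numpy as np

def sinkhorn_cost(a, b, C, eps=1.0, n_iter=200):
    """Entropic OT transport cost between weight vectors a, b for cost matrix C."""
    K = np.exp(-C / eps)                     # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                    # alternate marginal scalings
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]          # transport plan
    return (P * C).sum()

def sinkhorn_divergence(a, b, C_ab, C_aa, C_bb, eps=1.0):
    """One common debiased form: S(a,b) = OT(a,b) - 0.5*OT(a,a) - 0.5*OT(b,b)."""
    return (sinkhorn_cost(a, b, C_ab, eps)
            - 0.5 * sinkhorn_cost(a, a, C_aa, eps)
            - 0.5 * sinkhorn_cost(b, b, C_bb, eps))

rng = np.random.default_rng(0)
x, y = rng.standard_normal((20, 2)), rng.standard_normal((30, 2)) + 1.0
cost = lambda A, B: ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
a, b = np.full(20, 1 / 20), np.full(30, 1 / 30)
print(sinkhorn_divergence(a, b, cost(x, y), cost(x, x), cost(y, y)))
```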
arXiv Detail & Related papers (2020-06-15T06:58:16Z) - Efficiently Sampling Functions from Gaussian Process Posteriors [76.94808614373609]
We propose an easy-to-use and general-purpose approach for fast posterior sampling.
We demonstrate how decoupled sample paths accurately represent Gaussian process posteriors at a fraction of the usual cost.
arXiv Detail & Related papers (2020-02-21T14:03:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.