Efficient Bayesian phase estimation using mixed priors
- URL: http://arxiv.org/abs/2007.11629v2
- Date: Sun, 30 May 2021 19:56:32 GMT
- Title: Efficient Bayesian phase estimation using mixed priors
- Authors: Ewout van den Berg
- Abstract summary: We describe an efficient implementation of Bayesian quantum phase estimation in the presence of noise and multiple eigenstates.
The main contribution of this work is the dynamic switching between different representations of the phase distributions.
- Score: 1.0587959762260986
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We describe an efficient implementation of Bayesian quantum phase estimation
in the presence of noise and multiple eigenstates. The main contribution of
this work is the dynamic switching between different representations of the
phase distributions, namely truncated Fourier series and normal distributions.
The Fourier-series representation has the advantage of being exact in many
cases, but suffers from increasing complexity with each update of the prior.
This necessitates truncation of the series, which eventually causes the
distribution to become unstable. We derive bounds on the error in representing
normal distributions with a truncated Fourier series, and use these to decide
when to switch to the normal-distribution representation. This representation
is much simpler, and was proposed in conjunction with rejection filtering for
approximate Bayesian updates. We show that, in many cases, the update can be
done exactly using analytic expressions, thereby greatly reducing the time
complexity of the updates. Finally, when dealing with a superposition of
several eigenstates, we need to estimate the relative weights. This can be
formulated as a convex optimization problem, which we solve using a
gradient-projection algorithm. By updating the weights at exponentially scaled
iterations we greatly reduce the computational complexity without affecting the
overall accuracy.
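To make the three computational ingredients above concrete (Fourier-series updates, analytic normal updates, and weight estimation on the simplex), here is a minimal Python/NumPy sketch. It is our illustration, not the author's implementation: we assume the standard phase-estimation likelihood P(d | phi; k, beta) = (1 + (-1)^d cos(k*phi + beta))/2, and the switching rule is a simple coefficient-decay heuristic standing in for the paper's rigorous error bounds.

```python
import numpy as np

def fourier_update(c, k, beta, d, max_degree):
    """Bayesian update of coefficients c[j] (degrees -J..J, stored with
    offset J) for p(phi) = sum_j c[j] exp(i*j*phi). Multiplying by the
    assumed likelihood shifts coefficients by +/-k, so the degree grows
    by k per update and the series must be truncated."""
    J = (len(c) - 1) // 2
    w = (-1) ** d * np.exp(1j * beta) / 4.0
    new = np.zeros(2 * (J + k) + 1, dtype=complex)
    new[k:k + 2 * J + 1] += 0.5 * c           # constant likelihood term
    new[2 * k:2 * k + 2 * J + 1] += w * c     # e^{+ik phi} term: shift +k
    new[:2 * J + 1] += np.conj(w) * c         # e^{-ik phi} term: shift -k
    mid = J + k
    if mid > max_degree:                      # truncate the growing series
        new = new[mid - max_degree: mid + max_degree + 1]
        mid = max_degree
    return new / (2 * np.pi * new[mid].real)  # renormalize: c_0 = 1/(2*pi)

def circular_moments(c):
    mid = (len(c) - 1) // 2
    z = 2 * np.pi * c[mid - 1]                # E[e^{i phi}] = 2*pi*c_{-1}
    R = np.clip(abs(z), 1e-300, 1.0 - 1e-15)
    return np.angle(z), -2.0 * np.log(R)      # wrapped-normal variance proxy

def should_switch(var, max_degree, tol=1e-10):
    """Heuristic stand-in for the paper's bounds: wrapped-normal Fourier
    coefficients decay like exp(-j^2*var/2), so once the first truncated
    coefficient is non-negligible the truncated series can no longer
    represent the (near-normal) posterior."""
    return np.exp(-0.5 * max_degree**2 * var) > tol

def normal_update(mu, var, k, beta, d):
    """Closed-form posterior moments for prior N(mu, var), in the spirit of
    the analytic updates discussed in the abstract; O(1) per measurement."""
    s = (-1) ** d
    r = s * np.exp(-0.5 * k**2 * var)         # damped likelihood amplitude
    delta = k * mu + beta
    denom = 1.0 + r * np.cos(delta)
    mu_new = mu - r * k * var * np.sin(delta) / denom
    m2 = ((mu**2 + var) + r * ((var + mu**2 - (k * var)**2) * np.cos(delta)
                               - 2.0 * mu * k * var * np.sin(delta))) / denom
    return mu_new, m2 - mu_new**2

def project_simplex(v):
    """Euclidean projection onto the probability simplex (standard
    sort-based method): the projection step a gradient-projection solver
    for the eigenstate weights would use."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

# Demo: refine a flat prior with simulated measurements of a true phase,
# switching representations once the Fourier series becomes too short.
rng = np.random.default_rng(0)
phi_true, J = 2.2, 64
c = np.zeros(2 * J + 1, dtype=complex)
c[J] = 1.0 / (2 * np.pi)                      # uniform prior on [0, 2*pi)
use_normal, mu, var = False, 0.0, 0.0
for t in range(40):
    k, beta = t + 1, rng.uniform(0, 2 * np.pi)
    p0 = 0.5 * (1 + np.cos(k * phi_true + beta))   # P(d=0 | phi_true)
    d = int(rng.random() > p0)
    if not use_normal:
        c = fourier_update(c, k, beta, d, J)
        mu, var = circular_moments(c)
        use_normal = should_switch(var, J)
    else:
        mu, var = normal_update(mu, var, k, beta, d)
```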
Related papers
- Convergence Rate of the Last Iterate of Stochastic Proximal Algorithms [8.636513507553504]
We analyze two classical algorithms for solving additively composite convex optimization problems. We focus on the bounded variance assumption that is common, yet stringent, for obtaining last-iterate convergence rates. Our results apply directly to graph-guided regularizers that arise in multi-task and federated learning, where the regularizer decomposes as a sum over the edges of a collaboration graph.
arXiv Detail & Related papers (2026-02-05T09:50:06Z) - Fast Bayesian Updates via Harmonic Representations [8.201374511929538]
This paper introduces a novel, unifying framework for fast Bayesian updates by leveraging harmonic analysis. We demonstrate that representing the prior and likelihood in a suitable basis transforms the Bayesian update rule into a spectral convolution. To achieve computational feasibility, we introduce a spectral truncation scheme, which, for smooth functions, yields an exceptionally accurate finite-dimensional approximation.
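To fix ideas, here is a hedged sketch (our own naming, not the paper's API) of the central observation: for periodic densities written in a Fourier basis, the pointwise product of prior and likelihood in Bayes' rule becomes a discrete convolution of their coefficient sequences, followed by spectral truncation.

```python
import numpy as np

def spectral_bayes_update(prior_c, lik_c, max_degree):
    """prior_c, lik_c: complex Fourier coefficients for degrees -J..J
    (stored with offset J). Pointwise multiplication in x is convolution
    of the coefficient sequences."""
    post = np.convolve(prior_c, lik_c)               # the spectral convolution
    mid = (len(post) - 1) // 2
    keep = min(mid, max_degree)
    post = post[mid - keep: mid + keep + 1]          # spectral truncation
    return post / (2 * np.pi * post[keep].real)      # enforce unit integral
```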
arXiv Detail & Related papers (2025-11-10T11:28:33Z) - Relaxed Quantile Regression: Prediction Intervals for Asymmetric Noise [51.87307904567702]
Quantile regression is a leading approach for obtaining such prediction intervals via the empirical estimation of quantiles in the distribution of outputs.
We propose Relaxed Quantile Regression (RQR), a direct alternative to quantile-regression-based interval construction that removes this arbitrary constraint.
We demonstrate that this added flexibility results in intervals with improved desirable qualities.
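For context, the quantile-regression baseline that RQR relaxes fits each interval endpoint with the pinball loss; the RQR construction itself is not reproduced here. A minimal sketch:

```python
import numpy as np

def pinball_loss(y, y_pred, tau):
    """Standard pinball (quantile) loss for the tau-th conditional quantile."""
    diff = y - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

# e.g., a 90% interval from two separately fitted quantile regressors:
# minimize pinball_loss(y, f_low(x), 0.05) + pinball_loss(y, f_high(x), 0.95)
```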
arXiv Detail & Related papers (2024-06-05T13:36:38Z) - Stochastic Quantum Sampling for Non-Logconcave Distributions and Estimating Partition Functions [13.16814860487575]
We present quantum algorithms for sampling from non-logconcave probability distributions, where the potential $f$ can be written as a finite sum $f(x) := \frac{1}{N}\sum_{k=1}^{N} f_k(x)$.
arXiv Detail & Related papers (2023-10-17T17:55:32Z) - Distributed Extra-gradient with Optimal Complexity and Communication Guarantees [60.571030754252824]
We consider monotone variational inequality (VI) problems in multi-GPU settings where multiple processors/workers/clients have access to local dual vectors.
Extra-gradient, which is a de facto algorithm for monotone VI problems, has not been designed to be communication-efficient.
We propose a quantized generalized extra-gradient (Q-GenX), which is an unbiased and adaptive compression method tailored to solve VIs.
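The extra-gradient template that Q-GenX builds on is easy to state; the sketch below shows the uncompressed, single-worker step (the quantized communication that is the paper's contribution is omitted, and the step size and example problem are ours):

```python
import numpy as np

def extragradient_step(x, F, project, gamma):
    x_half = project(x - gamma * F(x))       # extrapolation (look-ahead) step
    return project(x - gamma * F(x_half))    # update using the look-ahead field

# Example: the bilinear saddle operator F(x, y) = (y, -x), whose unique
# solution is the origin; plain gradient steps spiral outward here.
F = lambda z: np.array([z[1], -z[0]])
z = np.array([1.0, 1.0])
for _ in range(200):
    z = extragradient_step(z, F, lambda v: v, 0.2)
print(z)  # close to [0, 0]
```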
arXiv Detail & Related papers (2023-08-17T21:15:04Z) - Fit Like You Sample: Sample-Efficient Generalized Score Matching from Fast Mixing Diffusions [29.488555741982015]
We show a close connection between the mixing time of a broad class of Markov processes with generator $\mathcal{L}$ and an appropriately chosen generalized score-matching loss.
We adapt techniques to speed up Markov chains to construct better score-matching losses.
In particular, "preconditioning" the diffusion can be translated to an appropriate "preconditioning" of the score loss.
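As a point of reference, the plain denoising score-matching loss that such preconditioned losses modify can be sketched as follows; a rescaling matrix applied to the residual would play the role of the preconditioner. This is a standard baseline (Vincent, 2011), not the paper's generalized loss:

```python
import numpy as np

def denoising_score_matching_loss(score_fn, x, sigma, rng):
    """Perturb data with Gaussian noise and regress the model score onto
    the score of the Gaussian smoothing kernel."""
    noise = rng.standard_normal(x.shape)
    x_tilde = x + sigma * noise
    target = -noise / sigma                  # grad log q(x_tilde | x)
    diff = score_fn(x_tilde) - target
    return 0.5 * np.mean(np.sum(diff**2, axis=-1))
```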
arXiv Detail & Related papers (2023-06-15T17:58:42Z) - Stochastic optimal transport in Banach Spaces for regularized estimation of multivariate quantiles [0.0]
We introduce a new algorithm for solving entropic optimal transport (EOT) between two absolutely continuous probability measures $\mu$ and $\nu$.
We study the almost sure convergence of our algorithm that takes its values in an infinite-dimensional Banach space.
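The paper's algorithm operates stochastically in an infinite-dimensional Banach space; to fix ideas about EOT itself, here is the standard discrete Sinkhorn iteration for finitely supported measures (a textbook method, not the paper's algorithm):

```python
import numpy as np

def sinkhorn(mu, nu, C, eps, iters=500):
    """Entropic OT between histograms mu (n,) and nu (m,) with cost C (n, m)
    and regularization eps, via alternating Sinkhorn scaling."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)               # match column marginals
        u = mu / (K @ v)                 # match row marginals
    return u[:, None] * K * v[None, :]   # entropic transport plan
```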
arXiv Detail & Related papers (2023-02-02T10:02:01Z) - Projective Integral Updates for High-Dimensional Variational Inference [0.0]
Variational inference seeks to improve the quantification of uncertainty in predictions by optimizing a simplified distribution over parameters to stand in for the full posterior.
This work introduces a fixed-point optimization for variational inference that is applicable when every feasible log density can be expressed as a linear combination of functions from a given basis.
A PyTorch implementation of quasi-Newton variational Bayes (QNVB) allows for better control over model uncertainty during training than competing methods.
arXiv Detail & Related papers (2023-01-20T00:38:15Z) - Hessian Averaging in Stochastic Newton Methods Achieves Superlinear
Convergence [69.65563161962245]
We consider minimizing a smooth and strongly convex objective function using a stochastic Newton method.
We show that there exists a universal weighted averaging scheme that transitions to local convergence at an optimal stage.
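The Hessian-averaging template is simple to sketch; the uniform weights below are an illustrative choice, not the paper's universal weighting scheme:

```python
import numpy as np

def averaged_newton(x, grad_fn, hess_sample_fn, steps):
    """Average noisy Hessian estimates across iterations and take Newton
    steps with the averaged matrix."""
    H_bar = None
    for t in range(1, steps + 1):
        H_t = hess_sample_fn(x)                      # noisy Hessian estimate
        w = 1.0 / t                                  # uniform averaging weight
        H_bar = H_t if H_bar is None else (1 - w) * H_bar + w * H_t
        x = x - np.linalg.solve(H_bar, grad_fn(x))   # Newton step
    return x
```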
arXiv Detail & Related papers (2022-04-20T07:14:21Z) - Unified Multivariate Gaussian Mixture for Efficient Neural Image
Compression [151.3826781154146]
Modeling latent variables with priors and hyperpriors is an essential problem in variational image compression.
We find that inter-correlations and intra-correlations exist when observing latent variables from a vectorized perspective.
Our model has better rate-distortion performance and an impressive $3.18\times$ compression speed-up.
arXiv Detail & Related papers (2022-03-21T11:44:17Z) - Distributed stochastic optimization with large delays [59.95552973784946]
One of the most widely used methods for solving large-scale optimization problems is distributed asynchronous stochastic gradient descent (DASGD).
We show that DASGD converges to a global optimum under mild assumptions on the delays.
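A toy serial simulation makes the delayed-gradient mechanism concrete; the random bounded delays and the 1/sqrt(t) step size are illustrative assumptions, not the paper's setting (which allows delays to grow large):

```python
import numpy as np

def dasgd_simulation(grad, x0, steps, max_delay, lr, rng):
    """Apply gradients computed at stale iterates, DASGD's key feature."""
    history = [np.array(x0, dtype=float)]
    x = history[0].copy()
    for t in range(steps):
        delay = rng.integers(0, min(max_delay, t) + 1)
        stale_x = history[t - delay]          # gradient evaluated at old iterate
        x = x - lr / np.sqrt(t + 1) * grad(stale_x)
        history.append(x.copy())
    return x
```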
arXiv Detail & Related papers (2021-07-06T21:59:49Z) - Efficiently Sampling Functions from Gaussian Process Posteriors [76.94808614373609]
We propose an easy-to-use and general-purpose approach for fast posterior sampling.
We demonstrate how decoupled sample paths accurately represent Gaussian process posteriors at a fraction of the usual cost.
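A hedged sketch of pathwise posterior sampling via Matheron's rule, the identity underlying decoupled sample paths: draw a joint prior sample and correct it with the kernel-weighted data residual. (The paper pairs an approximate prior sample, e.g. random Fourier features, with this exact update; here both parts use exact kernel math for brevity.)

```python
import numpy as np

def sample_gp_posterior(X, y, Xs, kernel, noise_var, rng):
    """X (n, d), Xs (m, d): training and test inputs; kernel(A, B) returns
    the cross-covariance matrix. Returns one posterior sample at Xs."""
    n, m = len(X), len(Xs)
    Kxx = kernel(X, X) + noise_var * np.eye(n)
    Kzx = kernel(Xs, X)
    # joint prior sample over training and test points
    Z = np.vstack([X, Xs])
    f = rng.multivariate_normal(np.zeros(n + m),
                                kernel(Z, Z) + 1e-10 * np.eye(n + m))
    f_train, f_test = f[:n], f[n:]
    eps = rng.normal(0.0, np.sqrt(noise_var), n)
    # Matheron update: prior sample + kernel-weighted data residual
    return f_test + Kzx @ np.linalg.solve(Kxx, y - f_train - eps)
```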
arXiv Detail & Related papers (2020-02-21T14:03:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.