Bayesian quantum phase estimation with fixed photon states
- URL: http://arxiv.org/abs/2308.01293v1
- Date: Wed, 2 Aug 2023 17:26:10 GMT
- Title: Bayesian quantum phase estimation with fixed photon states
- Authors: Boyu Zhou, Saikat Guha, Christos N. Gagatsos
- Abstract summary: We consider the generic form of a two-mode bosonic state $|\Psi_n\rangle$ with finite Fock expansion and fixed mean photon number.
We study the form of the optimal input state, i.e., the form of the state's Fock coefficients.
- Score: 4.928739385940871
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the generic form of a two-mode bosonic state $|\Psi_n\rangle$
with finite Fock expansion and mean photon number fixed to an integer $n\geq1$.
The upper and lower modes of the input state $|\Psi_n\rangle$ pick up a phase
$\phi$ and $-\phi$ respectively and we study the form of the optimal input
state, i.e., the form of the state's Fock coefficients, such that the mean
square error (MSE) for estimating $\phi$ is minimized while the MSE is always
attainable by a measurement. Our setting is Bayesian, meaning that we consider
$\phi$ as a random variable that follows a prior probability distribution
function (PDF). For the celebrated NOON state (equal superposition of
$|n0\rangle$ and $|0n\rangle$), which is a special case of the input state we
consider, and for a flat prior PDF we find that the Heisenberg scaling is lost
and the attainable minimum mean square error (MMSE) is found to be
$\pi^2/3-1/(4n^2)$, which is a manifestation of the fundamental difference
between the Fisherian and Bayesian approaches. Then, our numerical analysis
provides the optimal form of the generic input state for fixed values of $n$
and we provide evidence that a state $|\Psi_{\tau}\rangle$ produced by mixing a
Fock state with vacuum on a beam splitter of transmissivity $\tau$ (i.e., a
special case of the state $|\Psi_n\rangle$) must correspond to $\tau=0.5$.
Finally, we consider an example of an adaptive technique: We consider a state
of the form of $|\Psi_n\rangle$ for $n=1$. We start with a flat prior PDF, and
for each subsequent step we use as prior PDF the posterior probability of the
previous step, while for each step we update the optimal state and optimal
measurement. We show our analysis for up to five steps, but one can allow the
algorithm to run further. We close by conjecturing the form of the prior PDF
and the optimal state in the infinite-step limit, and we calculate the
corresponding MMSE.
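As a consistency check on the NOON-state claim, the following is a minimal numerical sketch (our own illustration, not the authors' code) assuming the standard Personick formulation of Bayesian estimation: the optimal operator $B$ solves $B\rho + \rho B = 2\eta$, where $\rho$ is the prior-averaged state and $\eta$ its first phase moment, and the attainable MMSE is $\langle\phi^2\rangle - \mathrm{tr}(\rho B^2)$. Restricted to the span of $|n0\rangle$ and $|0n\rangle$ with a flat prior on $[-\pi,\pi)$, it reproduces $\pi^2/3-1/(4n^2)$:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.linalg import solve_sylvester

def noon_mmse(n, num=20001):
    """Attainable Bayesian MMSE for the NOON-state phase under a flat prior
    on [-pi, pi), via Personick's condition B @ rho + rho @ B = 2 * eta.
    The +phi / -phi phases on the two modes give a relative phase 2*n*phi,
    so we can work in the 2D span of {|n,0>, |0,n>}."""
    phi = np.linspace(-np.pi, np.pi, num)
    p = np.full(num, 1.0 / (2.0 * np.pi))        # flat prior PDF
    avg = lambda f: trapezoid(p * f, phi)        # prior expectation
    off = 0.5 * np.exp(-2j * n * phi)            # (0, 1) entry of rho_phi
    rho = np.array([[0.5, avg(off)],
                    [avg(off.conj()), 0.5]])     # prior-averaged state
    eta = np.array([[avg(0.5 * phi), avg(phi * off)],
                    [avg(phi * off.conj()), avg(0.5 * phi)]])
    B = solve_sylvester(rho, rho, 2.0 * eta)     # optimal Personick operator
    return avg(phi ** 2) - np.trace(rho @ B @ B).real

for n in (1, 2, 5):
    print(n, noon_mmse(n), np.pi ** 2 / 3 - 1.0 / (4 * n ** 2))
```

The printed pairs agree, and the $n$-dependence makes the loss of Heisenberg scaling explicit: the measurement only shaves $1/(4n^2)$ off the prior variance $\pi^2/3$.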
Related papers
- Fast Convergence for High-Order ODE Solvers in Diffusion Probabilistic Models [5.939858158928473]
Diffusion probabilistic models generate samples by learning to reverse a noise-injection process that transforms data into noise. Reformulating this reverse process as a deterministic probability flow ordinary differential equation (ODE) enables efficient sampling using high-order solvers. Since the score function is typically approximated by a neural network, analyzing the interaction between its regularity, approximation error, and numerical integration error is key to understanding the overall sampling accuracy.
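As a toy illustration of that reformulation (our own sketch, not the paper's setting), one-dimensional Gaussian data make the score of the variance-preserving forward process available in closed form, so the probability flow ODE can be integrated backward with a classical high-order solver such as RK4:

```python
import numpy as np

rng = np.random.default_rng(0)
beta, sigma0 = 1.0, 0.5                   # constant noise rate; data std

def var_t(t):                             # marginal variance of the VP forward process
    a = np.exp(-beta * t)
    return a * sigma0 ** 2 + (1.0 - a)

def f(t, x):                              # probability flow ODE right-hand side
    score = -x / var_t(t)                 # exact score for N(0, sigma0^2) data
    return -0.5 * beta * x - 0.5 * beta * score

x = rng.standard_normal(100_000) * np.sqrt(var_t(1.0))   # start at t = 1
t, h = 1.0, -0.01
for _ in range(100):                      # classical RK4, integrated backward
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += h
print(x.std())                            # should be close to sigma0 = 0.5
```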
arXiv Detail & Related papers (2025-06-16T03:09:25Z)
- Adaptive stable distribution and Hurst exponent by method of moments moving estimator for nonstationary time series [0.49728186750345144]
We focus on a novel, more agnostic approach: the moving estimator, which estimates the parameters separately at every time step. We show its application to the $\alpha$-stable distribution, whose parameters also influence the Hurst exponent, so the method can be used for adaptive estimation of the latter.
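A rough sketch of the moving-estimator idea in its simplest instance (our own illustration: an exponentially weighted method of moments for mean and variance; the paper's actual targets, the $\alpha$-stable parameters and the Hurst exponent, are not reproduced here):

```python
import numpy as np

def moving_moments(x, lam=0.05):
    """Exponentially weighted method of moments: mean and variance are
    re-estimated at every time step, so they can drift (nonstationarity)."""
    mu = np.empty(len(x))
    var = np.empty(len(x))
    m, v = float(x[0]), 1.0
    for t, xt in enumerate(x):
        m = (1 - lam) * m + lam * xt             # moving first moment
        v = (1 - lam) * v + lam * (xt - m) ** 2  # moving central second moment
        mu[t], var[t] = m, v
    return mu, var

# Example: a variance that switches mid-stream is tracked adaptively
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 2000), rng.normal(0, 3, 2000)])
mu, var = moving_moments(x)
print(var[1900], var[3900])   # roughly 1 and 9
```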
arXiv Detail & Related papers (2025-05-20T08:36:49Z)
- From Continual Learning to SGD and Back: Better Rates for Continual Linear Models [50.11453013647086]
We analyze forgetting, i.e., the loss on previously seen tasks, after $k$ iterations. We develop novel last-iterate upper bounds in the realizable least-squares setup. We prove for the first time that randomization alone, with no task repetition, can prevent catastrophic forgetting in sufficiently long task sequences.
arXiv Detail & Related papers (2025-04-06T18:39:45Z)
- Active Subsampling for Measurement-Constrained M-Estimation of Individualized Thresholds with High-Dimensional Data [3.1138411427556445]
In measurement-constrained problems, despite the availability of a large dataset, we may only be able to afford observing the labels of a small portion of it.
This raises a critical question: which data points are most beneficial to label given a budget constraint?
In this paper, we focus on the estimation of the optimal individualized threshold in a measurement-constrained M-estimation framework.
arXiv Detail & Related papers (2024-11-21T00:21:17Z)
- Variance Reduction for the Independent Metropolis Sampler [11.074080383657453]
We prove that if $\pi$ is close enough under KL divergence to another density $q$, an independent Metropolis sampler with proposal $q$ that targets $\pi$ achieves smaller variance than i.i.d. sampling from $\pi$.
We propose an adaptive independent Metropolis algorithm that adapts the proposal density so that its KL divergence from the target is progressively reduced.
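For reference, a minimal non-adaptive independent Metropolis kernel is sketched below (our own illustration); the paper's adaptive variant would additionally update the proposal $q$ to shrink its KL divergence from the target:

```python
import numpy as np

rng = np.random.default_rng(0)

def independent_metropolis(log_pi, log_q, sample_q, x0, n_steps):
    """Independent Metropolis: proposals are i.i.d. draws from q, accepted
    with probability min(1, w(y)/w(x)) where w = pi / q."""
    x = x0
    lw = log_pi(x) - log_q(x)
    out = np.empty(n_steps)
    for i in range(n_steps):
        y = sample_q()
        lw_y = log_pi(y) - log_q(y)
        if np.log(rng.uniform()) < lw_y - lw:
            x, lw = y, lw_y
        out[i] = x
    return out

# Example: standard normal target, slightly over-dispersed normal proposal
log_pi = lambda x: -0.5 * x ** 2            # unnormalized; constants cancel
log_q = lambda x: -0.5 * (x / 1.5) ** 2
sample_q = lambda: 1.5 * rng.standard_normal()
chain = independent_metropolis(log_pi, log_q, sample_q, 0.0, 50_000)
print(chain.mean(), chain.var())            # close to 0 and 1
```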
arXiv Detail & Related papers (2024-06-25T16:38:53Z)
- Minimax Optimality of Score-based Diffusion Models: Beyond the Density Lower Bound Assumptions [11.222970035173372]
We show that a kernel-based score estimator achieves an optimal mean square error of $\widetilde{O}\left(n^{-1} t^{-\frac{d+2}{2}} \left(t^{\frac{d}{2}} \vee 1\right)\right)$.
This yields an $\widetilde{O}\left(n^{-1/2} t^{-\frac{d}{4}}\right)$ upper bound for the total variation error of the distribution of the sample generated by the diffusion model, under a mere sub-Gaussian assumption.
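For intuition, the canonical kernel-based score estimator is the exact score of the Gaussian-smoothed empirical measure; a minimal sketch (our own illustration with isotropic noise variance $t$, not necessarily the exact estimator analyzed in the paper):

```python
import numpy as np

def kernel_score(x, data, t):
    """grad log p_t(x) for p_t(x) = (1/n) sum_i N(x; x_i, t I), i.e. the
    exact score of the empirical distribution smoothed at noise level t."""
    diffs = data - x                             # (n, d) displacements x_i - x
    logw = -np.sum(diffs ** 2, axis=1) / (2 * t)
    w = np.exp(logw - logw.max())
    w /= w.sum()                                 # softmax responsibilities
    return (w[:, None] * diffs).sum(axis=0) / t

# Example in d = 2
rng = np.random.default_rng(0)
data = rng.standard_normal((500, 2))
print(kernel_score(np.array([1.0, -1.0]), data, t=0.5))
```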
arXiv Detail & Related papers (2024-02-23T20:51:31Z)
- Towards Optimal Statistical Watermarking [95.46650092476372]
We study statistical watermarking by formulating it as a hypothesis testing problem.
Key to our formulation is a coupling of the output tokens and the rejection region.
We characterize the Uniformly Most Powerful (UMP) watermark in the general hypothesis testing setting.
arXiv Detail & Related papers (2023-12-13T06:57:00Z)
- A Specialized Semismooth Newton Method for Kernel-Based Optimal Transport [92.96250725599958]
Kernel-based optimal transport (OT) estimators offer an alternative, functional estimation procedure to address OT problems from samples.
We show that our SSN method achieves a global convergence rate of $O(1/\sqrt{k})$, and a local quadratic convergence rate under standard regularity conditions.
arXiv Detail & Related papers (2023-10-21T18:48:45Z)
- Bayesian minimum mean square error for transmissivity sensing [5.348876409230946]
We address the problem of estimating the transmissivity of the pure-loss channel from the Bayesian point of view.
We employ methods to compute the Bayesian minimum mean square error (MMSE).
We study the performance of photon counting, which is a sub-optimal yet practical measurement.
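This entry is methodologically close to the main paper. As an illustration (our own sketch, not the authors' computation), the flat-prior Bayesian MMSE attainable with photon counting follows from the binomial photon statistics of an $m$-photon Fock probe sent through a pure-loss channel of transmissivity $\eta$:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import binom

def photon_counting_mmse(m, grid=4001):
    """Flat-prior Bayesian MMSE for the transmissivity eta of a pure-loss
    channel probed with an m-photon Fock state: the detected count obeys
    k ~ Binomial(m, eta)."""
    eta = np.linspace(0.0, 1.0, grid)
    prior = np.ones_like(eta)                    # flat prior on [0, 1]
    mmse = 0.0
    for k in range(m + 1):
        like = binom.pmf(k, m, eta)              # P(k | eta)
        pk = trapezoid(like * prior, eta)        # marginal P(k)
        post = like * prior / pk                 # posterior density
        mean = trapezoid(eta * post, eta)        # Bayes (posterior-mean) estimator
        mmse += pk * trapezoid((eta - mean) ** 2 * post, eta)
    return mmse

for m in (1, 2, 5, 10):
    print(m, photon_counting_mmse(m))            # decreases below the prior variance 1/12
```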
arXiv Detail & Related papers (2023-04-12T00:01:28Z)
- Estimating the minimizer and the minimum value of a regression function under passive design [72.85024381807466]
We propose a new method for estimating the minimizer $\boldsymbol{x}^*$ and the minimum value $f^*$ of a smooth and strongly convex regression function $f$.
We derive non-asymptotic upper bounds for the quadratic risk and optimization error of $\boldsymbol{z}_n$, and for the risk of estimating $f^*$.
arXiv Detail & Related papers (2022-11-29T18:38:40Z)
- (Nearly) Optimal Private Linear Regression via Adaptive Clipping [22.639650869444395]
We study the problem of differentially private linear regression where each data point is sampled from a fixed sub-Gaussian style distribution.
We propose and analyze a one-pass mini-batch gradient descent method (DP-AMBSSGD) where points in each iteration are sampled without replacement.
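For orientation, a generic one-pass DP mini-batch SGD with per-example clipping and Gaussian noise is sketched below (our own illustration); the paper's DP-AMBSSGD additionally adapts the clipping threshold and noise, which is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_linreg(X, y, clip=1.0, noise_mult=1.0, batch=64, lr=0.5):
    """Generic one-pass DP mini-batch SGD for linear regression: per-example
    gradient clipping plus Gaussian noise on the summed gradient."""
    n, d = X.shape
    w = np.zeros(d)
    perm = rng.permutation(n)                    # sample without replacement
    for start in range(0, n, batch):
        idx = perm[start:start + batch]
        grads = (X[idx] @ w - y[idx])[:, None] * X[idx]   # per-example gradients
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads *= np.minimum(1.0, clip / np.maximum(norms, 1e-12))  # clip
        noisy_sum = grads.sum(axis=0) + noise_mult * clip * rng.standard_normal(d)
        w -= lr * noisy_sum / len(idx)
    return w

X = rng.standard_normal((4096, 5))
w_true = np.arange(1.0, 6.0)
y = X @ w_true + 0.1 * rng.standard_normal(4096)
print(dp_sgd_linreg(X, y))   # roughly recovers w_true, up to clipping bias and DP noise
```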
arXiv Detail & Related papers (2022-07-11T08:04:46Z)
- Conditionally Calibrated Predictive Distributions by Probability-Probability Map: Application to Galaxy Redshift Estimation and Probabilistic Forecasting [4.186140302617659]
Uncertainty is crucial for assessing the predictive ability of AI algorithms.
We propose Cal-PIT, a method that addresses both PD diagnostics and recalibration.
We benchmark our corrected prediction bands against oracle bands and state-of-the-art predictive inference algorithms.
arXiv Detail & Related papers (2022-05-29T03:52:44Z)
- A Momentum-Assisted Single-Timescale Stochastic Approximation Algorithm for Bilevel Optimization [112.59170319105971]
We propose a new algorithm, the Momentum-assisted Single-Timescale Stochastic Approximation (MSTSA), for tackling bilevel optimization problems.
MSTSA allows us to control the error in the iterations caused by inaccurate solutions of the lower-level subproblem.
arXiv Detail & Related papers (2021-02-15T07:10:33Z)
- Debiasing Distributed Second Order Optimization with Surrogate Sketching and Scaled Regularization [101.5159744660701]
In distributed second order optimization, a standard strategy is to average many local estimates, each of which is based on a small sketch or batch of the data.
Here, we introduce a new technique for debiasing the local estimates, which leads to both theoretical and empirical improvements in the convergence rate of distributed second order methods.
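A minimal sketch of the standard averaging strategy described above (our own illustration for least squares with Gaussian sketches); the residual error it prints is the bias that the paper's surrogate sketching and scaled regularization are designed to remove:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, workers, m = 2000, 20, 50, 100      # data size, dim, #workers, sketch size

A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
x_star = np.linalg.lstsq(A, b, rcond=None)[0]     # exact least-squares solution

# Standard strategy: each worker solves the normal equations with a sketched
# Hessian, and the coordinator averages the local estimates.
estimates = []
for _ in range(workers):
    S = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian sketch
    H_sketch = (S @ A).T @ (S @ A)                # local sketched Hessian
    estimates.append(np.linalg.solve(H_sketch, A.T @ b))
x_avg = np.mean(estimates, axis=0)

# Averaging kills the variance but not the bias of E[H_sketch^{-1}].
print(np.linalg.norm(x_avg - x_star))
```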
arXiv Detail & Related papers (2020-07-02T18:08:14Z)
- Breaking the Sample Size Barrier in Model-Based Reinforcement Learning with a Generative Model [50.38446482252857]
This paper is concerned with the sample efficiency of reinforcement learning, assuming access to a generative model (or simulator).
We first consider $\gamma$-discounted infinite-horizon Markov decision processes (MDPs) with state space $\mathcal{S}$ and action space $\mathcal{A}$.
We prove that a plain model-based planning algorithm suffices to achieve minimax-optimal sample complexity given any target accuracy level.
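A minimal sketch of such a plain model-based planning algorithm (our own illustration): estimate the transition kernel from $N$ generative-model calls per state-action pair, then run value iteration on the empirical MDP:

```python
import numpy as np

rng = np.random.default_rng(0)

def plan_with_generative_model(sample_next, R, S, A, gamma=0.9, N=200, iters=500):
    """Plain model-based planning: build an empirical MDP from N generative-model
    calls per (s, a), then run value iteration on it."""
    P_hat = np.zeros((S, A, S))
    for s in range(S):
        for a in range(A):
            for _ in range(N):
                P_hat[s, a, sample_next(s, a)] += 1.0 / N
    Q = np.zeros((S, A))
    for _ in range(iters):                       # gamma-discounted value iteration
        Q = R + gamma * P_hat @ Q.max(axis=1)
    return Q.argmax(axis=1)                      # greedy policy for the empirical MDP

# Toy simulator: a random MDP with 5 states and 2 actions
S, A = 5, 2
P_true = rng.dirichlet(np.ones(S), size=(S, A))
R = rng.random((S, A))
sample_next = lambda s, a: rng.choice(S, p=P_true[s, a])
print(plan_with_generative_model(sample_next, R, S, A))
```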
arXiv Detail & Related papers (2020-05-26T17:53:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.