Minimization of the estimation error for entanglement distribution
networks with arbitrary noise
- URL: http://arxiv.org/abs/2203.09921v2
- Date: Fri, 7 Oct 2022 00:22:31 GMT
- Title: Minimization of the estimation error for entanglement distribution
networks with arbitrary noise
- Authors: Liangzhong Ruan
- Abstract summary: We consider a setup in which nodes randomly sample a subset of the entangled qubit pairs to measure and then estimate the average fidelity of the unsampled pairs conditioned on the measurement outcome.
The proposed estimation protocol achieves the lowest mean squared estimation error in a difficult scenario with arbitrary noise and no prior information.
- Score: 1.3198689566654105
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fidelity estimation is essential for the quality control of entanglement
distribution networks. Because measurements collapse quantum states, we
consider a setup in which nodes randomly sample a subset of the entangled qubit
pairs to measure and then estimate the average fidelity of the unsampled pairs
conditioned on the measurement outcome. The proposed estimation protocol
achieves the lowest mean squared estimation error in a difficult scenario with
arbitrary noise and no prior information. Moreover, this protocol is
implementation friendly because it only performs local Pauli operators
according to a predefined sequence. Numerical studies show that, compared to
existing fidelity estimation protocols, the proposed protocol reduces the
estimation error in scenarios with both i.i.d. noise and correlated noise.
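The sample-and-estimate setup in the abstract can be illustrated with a toy classical simulation (not the paper's actual protocol): measurement outcomes are replaced by precomputed pass/fail flags, a random subset of pairs is "measured", and the sample pass rate serves as the estimate for the unmeasured pairs.

```python
import random

def estimate_unsampled_fidelity(pass_flags, k, rng):
    """Sample k pairs at random, 'measure' them, and use their pass rate
    as the estimate for the average fidelity of the unsampled pairs.

    pass_flags: precomputed 0/1 outcomes a local Pauli-basis test would
    give on each pair -- a classical stand-in for the quantum measurement.
    """
    n = len(pass_flags)
    sampled = set(rng.sample(range(n), k))
    est = sum(pass_flags[i] for i in sampled) / k
    true_rest = sum(pass_flags[i] for i in range(n) if i not in sampled) / (n - k)
    return est, true_rest

rng = random.Random(0)
# 200 pairs; roughly 90% of them would pass the test.
flags = [1 if rng.random() < 0.9 else 0 for _ in range(200)]
sq_errs = []
for _ in range(1000):
    est, rest = estimate_unsampled_fidelity(flags, 40, rng)
    sq_errs.append((est - rest) ** 2)
mse = sum(sq_errs) / len(sq_errs)
print(f"empirical mean squared error of the estimator: {mse:.5f}")
```

The squared error between the sampled pass rate and the unsampled pass rate shrinks as the sample size k grows, which is the quantity the paper's protocol is designed to minimize under arbitrary noise.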
Related papers
- Symmetric Q-learning: Reducing Skewness of Bellman Error in Online Reinforcement Learning [55.75959755058356]
In deep reinforcement learning, estimating the value function is essential to evaluate the quality of states and actions.
A recent study suggested that the error distribution for training the value function is often skewed because of the properties of the Bellman operator.
We propose a method called Symmetric Q-learning, in which synthetic noise drawn from a zero-mean distribution is added to the target values so that the error distribution becomes Gaussian.
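A minimal illustration of the zero-mean-noise idea, using a mirrored-exponential noise distribution as a stand-in for the paper's construction: adding independent noise drawn from the negated error distribution symmetrizes a right-skewed error (the sum of the two is Laplace-distributed, hence symmetric).

```python
import random

def skewness(xs):
    """Sample skewness: third central moment over variance**1.5."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    return sum((x - m) ** 3 for x in xs) / n / var ** 1.5

rng = random.Random(0)
# Right-skewed, zero-mean stand-in for Bellman errors: Exp(1) - 1.
errors = [rng.expovariate(1.0) - 1.0 for _ in range(20000)]
# Zero-mean synthetic noise from the mirrored distribution; the sum
# error + noise is Laplace-distributed and therefore symmetric.
noise = [1.0 - rng.expovariate(1.0) for _ in range(20000)]
symmetrized = [e + n for e, n in zip(errors, noise)]
print(f"skew before: {skewness(errors):.2f}, after: {skewness(symmetrized):.2f}")
```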
arXiv Detail & Related papers (2024-03-12T14:49:19Z)
- Robust Non-parametric Knowledge-based Diffusion Least Mean Squares over Adaptive Networks [12.266804067030455]
The proposed algorithm leads to a robust estimation of an unknown parameter vector in a group of cooperative estimators.
Results show the robustness of the proposed algorithm in the presence of different noise types.
arXiv Detail & Related papers (2023-12-03T06:18:59Z)
- Direct Unsupervised Denoising [60.71146161035649]
Unsupervised denoisers do not directly produce a single prediction, such as the MMSE estimate.
We present an alternative approach that trains a deterministic network alongside the VAE to directly predict a central tendency.
arXiv Detail & Related papers (2023-10-27T13:02:12Z)
- A Tale of Sampling and Estimation in Discounted Reinforcement Learning [50.43256303670011]
We present a minimax lower bound on the discounted mean estimation problem.
We show that estimating the mean by directly sampling from the discounted kernel of the Markov process brings compelling statistical properties.
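One standard way to sample from the discounted kernel, sketched here on a hypothetical two-state chain: draw a geometric horizon T with P(T = t) = (1 - γ)γ^t, run the chain for T + 1 steps, and rescale the final reward by 1/(1 - γ). The resulting single-reward estimate is unbiased for the discounted return.

```python
import random

GAMMA = 0.9

def step(state, rng):
    """Hypothetical two-state chain: flip the state w.p. 0.3; reward = state."""
    if rng.random() < 0.3:
        state = 1 - state
    return state, float(state)

def geometric_horizon(gamma, rng):
    """Sample T with P(T = t) = (1 - gamma) * gamma**t for t = 0, 1, 2, ..."""
    t = 0
    while rng.random() < gamma:
        t += 1
    return t

def discounted_mean_estimate(n_samples, rng):
    """Average of single rewards read off at a geometric stopping time,
    rescaled by 1/(1 - gamma): an unbiased estimate of E[sum_t gamma^t r_t]."""
    total = 0.0
    for _ in range(n_samples):
        horizon = geometric_horizon(GAMMA, rng)
        state, reward = 0, 0.0
        for _ in range(horizon + 1):
            state, reward = step(state, rng)
        total += reward / (1.0 - GAMMA)
    return total / n_samples

print(discounted_mean_estimate(20000, random.Random(0)))
```

For this particular chain the true discounted return from state 0 works out to 4.6875, and the sampled estimate concentrates around that value as n_samples grows.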
arXiv Detail & Related papers (2023-04-11T09:13:17Z)
- Noise-resilient phase estimation with randomized compiling [0.0]
We develop an error mitigation method for control-free phase estimation.
We prove a theorem that, under the first-order correction, noise channels with only Hermitian Kraus operators do not change the phases of a unitary operator.
Our method paves the way for the utilization of quantum phase estimation before the advent of fault-tolerant quantum computers.
arXiv Detail & Related papers (2022-08-08T12:46:00Z)
- Optimal supplier of single-error-type entanglement via coherent-state transmission [1.2891210250935146]
We consider a protocol that provides single-error-type entanglement for distant qubits via coherent-state transmission over a lossy channel.
This protocol is regarded as a subroutine serving entanglement to a larger protocol that yields a final output, such as ebits or pbits.
arXiv Detail & Related papers (2022-03-31T15:36:54Z)
- Correlated quantization for distributed mean estimation and optimization [21.17434087570296]
We propose a correlated quantization protocol whose error guarantee depends on the deviation of data points instead of their absolute range.
We show that applying the proposed protocol as a subroutine in distributed optimization algorithms leads to better convergence rates.
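A toy version of the shared-randomness idea, using stratified thresholds as a stand-in for the paper's exact scheme: when clients quantize with one common random shift plus evenly spaced offsets, the error of the quantized mean scales with the spread (deviation) of the data rather than its absolute range.

```python
import random

def quantize_mean(xs, us):
    """One-bit quantization q_i = 1[u_i < x_i]; each bit is unbiased for
    x_i, so the mean of the bits estimates the mean of xs."""
    return sum(1.0 for x, u in zip(xs, us) if u < x) / len(xs)

def mse(xs, make_thresholds, trials, rng):
    """Empirical mean squared error of the quantized mean."""
    true_mean = sum(xs) / len(xs)
    err = 0.0
    for _ in range(trials):
        err += (quantize_mean(xs, make_thresholds(rng)) - true_mean) ** 2
    return err / trials

rng = random.Random(0)
n = 64
# Tightly clustered data: small deviation, so correlated thresholds should win.
xs = [0.5 + 0.01 * rng.uniform(-1, 1) for _ in range(n)]

def indep(r):
    # Independent thresholds: error scales with the absolute range of xs.
    return [r.random() for _ in range(n)]

def correlated(r):
    # Shared shift plus randomly assigned evenly spaced offsets: stratified,
    # anti-correlated thresholds whose error scales with the data's spread.
    shift = r.random()
    offsets = list(range(n))
    r.shuffle(offsets)
    return [(shift + k / n) % 1.0 for k in offsets]

print(f"independent: {mse(xs, indep, 2000, rng):.5f}, "
      f"correlated: {mse(xs, correlated, 2000, rng):.5f}")
```

Each threshold is still marginally uniform on [0, 1], so every quantized bit stays unbiased; only the correlation structure across clients changes.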
arXiv Detail & Related papers (2022-03-09T18:14:55Z)
- Addressing Randomness in Evaluation Protocols for Out-of-Distribution Detection [1.8047694351309207]
Deep Neural Networks for classification behave unpredictably when confronted with inputs not stemming from the training distribution.
We show that current protocols may fail to provide reliable estimates of the expected performance of OOD methods.
We propose to estimate the performance of OOD methods using a Monte Carlo approach that addresses the randomness.
arXiv Detail & Related papers (2022-03-01T12:06:44Z)
- Entanglement purification by counting and locating errors with entangling measurements [62.997667081978825]
We consider entanglement purification protocols for multiple copies of qubit states.
We use high-dimensional auxiliary entangled systems to learn about number and positions of errors in the noisy ensemble.
arXiv Detail & Related papers (2020-11-13T19:02:33Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
- Log-Likelihood Ratio Minimizing Flows: Towards Robust and Quantifiable Neural Distribution Alignment [52.02794488304448]
We propose a new distribution alignment method based on a log-likelihood ratio statistic and normalizing flows.
We experimentally verify that minimizing the resulting objective results in domain alignment that preserves the local structure of input domains.
arXiv Detail & Related papers (2020-03-26T22:10:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.