Storage and retrieval of von Neumann measurements
- URL: http://arxiv.org/abs/2204.03029v2
- Date: Mon, 14 Nov 2022 19:11:52 GMT
- Title: Storage and retrieval of von Neumann measurements
- Authors: Paulina Lewandowska, Ryszard Kukulski, Łukasz Pawela, Zbigniew Puchała
- Abstract summary: This work examines the problem of learning an unknown von Neumann measurement of dimension $d$ from a finite number of copies.
Our main goal is to estimate the behavior of the maximum value of the average fidelity function $F_d$ for a general $N \rightarrow 1$ learning scheme.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work examines the problem of learning an unknown von Neumann measurement
of dimension $d$ from a finite number of copies. To obtain a faithful
approximation of the given measurement we are allowed to use it $N$ times. Our
main goal is to estimate the asymptotic behavior of the maximum value of the
average fidelity function $F_d$ for a general $N \rightarrow 1$ learning
scheme. We show that $F_d = 1 - \Theta\left(\frac{1}{N^2}\right)$ for arbitrary
but fixed dimension $d$. In addition, we compare various learning
schemes for $d=2$. We observe that the learning scheme based on deterministic
port-based teleportation is asymptotically optimal but performs poorly for low
$N$. Moreover, we identify a parallel learning scheme which, despite its
lack of asymptotic optimality, provides a high value of the fidelity for low
values of $N$ and uses only two-qubit entangled memory states.
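For concreteness, the $\Theta$ bound can be unpacked into its two-sided form; the constants $c_1, c_2$ and threshold $N_0$ below are the generic quantifiers from the definition of $\Theta$, not values taken from the paper. The statement $F_d = 1 - \Theta(1/N^2)$ means that there exist $0 < c_1 \le c_2$ and $N_0$, depending only on $d$, such that for all $N \ge N_0$,
$$1 - \frac{c_2}{N^2} \;\le\; F_d \;\le\; 1 - \frac{c_1}{N^2},$$
i.e. the retrieval infidelity $1 - F_d$ decays at rate exactly $N^{-2}$ for fixed $d$.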
Related papers
- Storage and retrieval of von Neumann measurements via indefinite causal order structures [1.9506923346234724]
This work presents the problem of learning an unknown von Neumann measurement of dimension $d$ using indefinite causal structures.
We use the formalism of process matrices to store information about the given measurement.
We show that, for qubit von Neumann measurements, learning schemes that use indefinite causal structures provide a better approximation than quantum networks.
arXiv Detail & Related papers (2024-05-18T07:11:57Z) - Memory Efficient And Minimax Distribution Estimation Under Wasserstein
Distance Using Bayesian Histograms [6.21295508577576]
We show that when $d < 2v$, histograms possess a special memory efficiency property: in reference to the sample size $n$, only order $n^{d/2v}$ bins are needed to obtain minimax rate optimality.
The attained memory footprint improves on existing minimax optimal procedures by a factor polynomial in $n$; for example, an $n^{1 - d/2v}$ factor reduction in the footprint when compared to the empirical measure, a minimax estimator in the Borel probability measure class.
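The quoted factor follows from the bin counts alone: the empirical measure stores order $n$ atoms while the histogram needs only order $n^{d/2v}$ bins, so the footprint ratio is
$$\frac{n}{n^{d/2v}} = n^{1 - d/2v},$$
a genuine saving precisely when $d < 2v$.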
arXiv Detail & Related papers (2023-07-19T16:13:20Z) - Estimating the minimizer and the minimum value of a regression function
under passive design [72.85024381807466]
We propose a new method for estimating the minimizer $\boldsymbol{x}^*$ and the minimum value $f^*$ of a smooth and strongly convex regression function $f$.
We derive non-asymptotic upper bounds for the quadratic risk and optimization error of $\boldsymbol{z}_n$, and for the risk of estimating $f^*$.
arXiv Detail & Related papers (2022-11-29T18:38:40Z) - Nearly Minimax Optimal Reinforcement Learning with Linear Function
Approximation [25.60689712525918]
We study reinforcement learning with linear function approximation where the transition probability and reward functions are linear.
We propose a novel, efficient algorithm, LSVI-UCB$^+$, which achieves an $\widetilde{O}(Hd\sqrt{T})$ regret bound, where $H$ is the episode length, $d$ is the feature dimension, and $T$ is the number of steps.
arXiv Detail & Related papers (2022-06-23T06:04:21Z) - Approximate Function Evaluation via Multi-Armed Bandits [51.146684847667125]
We study the problem of estimating the value of a known smooth function $f$ at an unknown point $\boldsymbol{\mu} \in \mathbb{R}^n$, where each component $\mu_i$ can be sampled via a noisy oracle.
We design an instance-adaptive algorithm that learns to sample according to the importance of each coordinate and, with probability at least $1-\delta$, returns an $\epsilon$-accurate estimate of $f(\boldsymbol{\mu})$.
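As a rough illustration of the idea, and not the paper's algorithm, the sketch below spends a budget of noisy oracle calls in proportion to an estimated coordinate sensitivity $|\partial f/\partial \mu_i|$; the helper names (`estimate_f`, `oracle`) and all parameters are illustrative assumptions.

```python
import numpy as np

# Hedged sketch, not the paper's algorithm: spread a budget of noisy oracle
# calls across coordinates in proportion to the local sensitivity
# |df/dmu_i|, estimated at a crude pilot mean. All names and parameters
# (f, grad_f, oracle, budget, pilot) are illustrative assumptions.
def estimate_f(f, grad_f, oracle, n, budget=10_000, pilot=50, seed=None):
    rng = np.random.default_rng(seed)
    # Pilot phase: a few samples per coordinate give a rough location.
    pilot_mean = np.array([oracle(i, pilot, rng).mean() for i in range(n)])
    # Importance weights ~ |partial f / partial mu_i| at the pilot estimate.
    w = np.abs(grad_f(pilot_mean)) + 1e-12
    alloc = np.maximum((budget * w / w.sum()).astype(int), 1)
    # Main phase: re-estimate each coordinate with its allocated samples.
    mu_hat = np.array([oracle(i, k, rng).mean() for i, k in enumerate(alloc)])
    return f(mu_hat)

# Toy usage: two noisy coordinates, f a smooth max (log-sum-exp).
if __name__ == "__main__":
    mu_true = np.array([1.0, 3.0])
    oracle = lambda i, k, rng: mu_true[i] + rng.standard_normal(k)
    f = lambda mu: np.log(np.exp(mu).sum())
    grad_f = lambda mu: np.exp(mu) / np.exp(mu).sum()
    print(estimate_f(f, grad_f, oracle, n=2, seed=0))  # close to log(e + e^3)
```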
arXiv Detail & Related papers (2022-03-18T18:50:52Z) - Minimax Optimal Quantization of Linear Models: Information-Theoretic
Limits and Efficient Algorithms [59.724977092582535]
We consider the problem of quantizing a linear model learned from measurements.
We derive an information-theoretic lower bound for the minimax risk under this setting.
We show that our method and upper bounds can be extended to two-layer ReLU neural networks.
arXiv Detail & Related papers (2022-02-23T02:39:04Z) - TURF: A Two-factor, Universal, Robust, Fast Distribution Learning
Algorithm [64.13217062232874]
One of its most powerful and successful modalities approximates every distribution to an $\ell_1$ distance essentially at most a constant times larger than its closest $t$-piece degree-$d$ polynomial.
We provide a method that estimates this number near-optimally, hence helps approach the best possible approximation.
arXiv Detail & Related papers (2022-02-15T03:49:28Z) - Localization in 1D non-parametric latent space models from pairwise
affinities [6.982738885923206]
We consider the problem of estimating latent positions in a one-dimensional torus from pairwise affinities.
We introduce an estimation procedure that provably localizes all the latent positions with a maximum error of order $\sqrt{\log(n)/n}$, with high probability.
arXiv Detail & Related papers (2021-08-06T13:05:30Z) - Provably Breaking the Quadratic Error Compounding Barrier in Imitation
Learning, Optimally [58.463668865380946]
We study the statistical limits of Imitation Learning in episodic Markov Decision Processes (MDPs) with a state space $\mathcal{S}$.
We establish an upper bound of $O(|\mathcal{S}|H^{3/2}/N)$ on the suboptimality using the Mimic-MD algorithm of Rajaraman et al. (2020).
We show that the minimax suboptimality grows as $\Omega(H^{3/2}/N)$ when $|\mathcal{S}| \geq 3$, while the unknown-transition setting suffers from a larger sharp rate.
arXiv Detail & Related papers (2021-02-25T15:50:19Z) - A One-bit, Comparison-Based Gradient Estimator [29.600975900977343]
We show how one can use tools from one-bit compressed sensing to construct a robust and reliable estimator of the normalized gradient.
We propose an algorithm, coined SCOBO, that uses this estimator within a gradient descent scheme.
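A stripped-down sketch of the comparison-based idea follows: averaging $\mathrm{sign}(f(x+\delta u) - f(x))\,u$ over random unit directions $u$ concentrates, for small $\delta$, around the normalized gradient. The one-bit compressed-sensing reconstruction that defines SCOBO (and exploits gradient sparsity) is omitted here; all names and parameters are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: a bare-bones comparison-based estimate of the normalized
# gradient direction. Each query contributes only one bit, the sign of a
# function difference. SCOBO instead feeds these bits into a one-bit
# compressed-sensing solve; that step is omitted in this illustration.
def comparison_gradient(f, x, m=500, delta=1e-3, seed=None):
    rng = np.random.default_rng(seed)
    u = rng.standard_normal((m, x.size))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # random unit directions
    signs = np.sign(f(x + delta * u) - f(x))        # one bit per query
    g = (signs[:, None] * u).mean(axis=0)           # sign-weighted average
    return g / (np.linalg.norm(g) + 1e-12)          # normalized direction

# Toy check on a quadratic: the estimate should align with x itself.
if __name__ == "__main__":
    f = lambda z: 0.5 * np.sum(z**2, axis=-1)
    x = np.array([3.0, -1.0, 2.0])
    print(comparison_gradient(f, x, seed=0), x / np.linalg.norm(x))
```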
arXiv Detail & Related papers (2020-10-06T05:01:38Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that, for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.