Enhanced observable estimation through classical optimization of
informationally over-complete measurement data -- beyond classical shadows
- URL: http://arxiv.org/abs/2401.18049v1
- Date: Wed, 31 Jan 2024 18:13:42 GMT
- Authors: Joonas Malmi, Keijo Korhonen, Daniel Cavalcanti, Guillermo
García-Pérez
- Abstract summary: We propose a method to optimize the dual POVM operators after the measurements have been carried out.
We show that it can significantly reduce statistical errors with respect to canonical duals on multiple observable estimations.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, informationally complete measurements have attracted
considerable attention, especially in the context of classical shadows. In the
particular case of informationally over-complete measurements, for which the
number of possible outcomes exceeds the dimension of the space of linear
operators in Hilbert space, the dual POVM operators used to interpret the
measurement outcomes are not uniquely defined. In this work, we propose a
method to optimize the dual operators after the measurements have been carried
out in order to produce sharper, unbiased estimations of observables of
interest. We discuss how this procedure can produce zero-variance estimations
in cases where the classical shadows formalism, which relies on so-called
canonical duals, incurs exponentially large measurement overheads. We also
analyze the algorithm in the context of quantum simulation with randomized
Pauli measurements, and show that it can significantly reduce statistical
errors with respect to canonical duals on multiple observable estimations.
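As a minimal illustration of the dual freedom the abstract exploits (a sketch, not the authors' implementation), consider the single-qubit Pauli-6 POVM: six outcomes in a four-dimensional operator space, so the dual operators are not unique. The canonical duals used by classical shadows come from the frame superoperator, and any shift lying in the kernel of the frame map leaves every observable estimate unbiased while changing its variance, which is what post-measurement optimization can exploit:

```python
import numpy as np

# Sketch (not the paper's algorithm): the Pauli-6 POVM on one qubit has
# 6 outcomes but the operator space is 4-dimensional, so its duals are
# not unique -- the freedom the paper proposes to optimize classically.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Effects: projectors onto +/- eigenstates of X, Y, Z, each weighted 1/3.
effects = [(I2 + s * P) / 6 for P in (X, Y, Z) for s in (+1, -1)]
assert np.allclose(sum(effects), I2)                # completeness

vec = lambda A: A.reshape(-1)
M = np.array([vec(E) for E in effects])             # 6 x 4, rows are vec(E_k)
S = M.conj().T @ M                                  # frame superoperator

# Canonical duals D_k = S^{-1}(E_k), as used in classical shadows.
Sinv = np.linalg.pinv(S)
canonical = [(Sinv @ vec(E)).reshape(2, 2) for E in effects]

def estimate(duals, rho, obs):
    """Unbiased estimate <obs> = sum_k Tr[E_k rho] * Tr[obs D_k]."""
    return sum(np.trace(E @ rho).real * np.trace(obs @ D).real
               for E, D in zip(effects, duals))

# Dual freedom: any shift whose 6x4 matrix has columns in the kernel of
# M^dagger leaves all estimates unbiased but changes the single-shot
# variance -- the quantity a post-measurement optimization can minimize.
_, _, Vh = np.linalg.svd(M.conj().T)                # M^dagger is 4 x 6, rank 4
c = Vh[-1].conj()                                   # satisfies M^dagger @ c = 0
w = np.array([0.3, -0.1, 0.2, 0.5])                 # arbitrary direction
shift = np.outer(c, w)                              # a valid 6 x 4 dual shift
shifted = [D + shift[k].reshape(2, 2) for k, D in enumerate(canonical)]

rho = np.array([[1, 0], [0, 0]], dtype=complex)     # |0><0|
print(estimate(canonical, rho, Z))                  # both give <Z> = 1
print(estimate(shifted, rho, Z))
```

Optimizing over these kernel directions to minimize estimator variance for the observables of interest is, in spirit, the post-measurement procedure the abstract describes; here the kernel is two-dimensional, and for n-qubit over-complete measurements the freedom grows accordingly.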
Related papers
- In-Context Parametric Inference: Point or Distribution Estimators? [66.22308335324239]
Our experiments indicate that amortized point estimators generally outperform posterior inference, though the latter remain competitive in some low-dimensional problems.
arXiv Detail & Related papers (2025-02-17T10:00:24Z) - Scalable and consistent embedding of probability measures into Hilbert spaces via measure quantization [1.6385815610837167]
We study two methods based on measure quantization for approximating input probability measures with discrete measures of small-support size.
We study the consistency of such approximations, and its implication for scalable embeddings of probability measures into a Hilbert space at a low computational cost.
arXiv Detail & Related papers (2025-02-07T13:23:40Z) - Dual frame optimization for informationally complete quantum measurements [0.0]
We introduce novel classes of parametrized frame superoperators and optimization-free dual frames based on empirical frequencies.
Remarkably, this comes at almost no quantum or classical cost, thus rendering dual frame optimization a valuable addition to the randomized measurement toolbox.
arXiv Detail & Related papers (2024-01-31T18:49:03Z) - A U-turn on Double Descent: Rethinking Parameter Counting in Statistical
Learning [68.76846801719095]
We show that the location of double descent is not inherently tied to the interpolation threshold p=n.
This provides a resolution to tensions between double descent and statistical intuition.
arXiv Detail & Related papers (2023-10-29T12:05:39Z) - Postselection-free learning of measurement-induced quantum dynamics [0.0]
We introduce a general-purpose scheme that can be used to infer any property of the post-measurement ensemble of states.
As an immediate application, we show that our method can be used to verify the emergence of quantum state designs in experiments.
arXiv Detail & Related papers (2023-10-06T11:06:06Z) - Monotonicity and Double Descent in Uncertainty Estimation with Gaussian
Processes [52.92110730286403]
It is commonly believed that the marginal likelihood should be reminiscent of cross-validation metrics and that both should deteriorate with larger input dimensions.
We prove that by tuning hyperparameters, the performance, as measured by the marginal likelihood, improves monotonically with the input dimension.
We also prove that cross-validation metrics exhibit qualitatively different behavior that is characteristic of double descent.
arXiv Detail & Related papers (2022-10-14T08:09:33Z) - Shallow shadows: Expectation estimation using low-depth random Clifford circuits [0.39583175274885335]
We present a depth-modulated randomized measurement scheme that interpolates between two known classical shadows schemes.
We focus on the regime where depth scales logarithmically in n and provide evidence that this retains the desirable properties of both extremal schemes.
We present methods for two key tasks: estimating expectation values of certain observables from generated classical shadows, and computing upper bounds on the depth-modulated shadow norm.
arXiv Detail & Related papers (2022-09-26T18:01:19Z) - Estimating Quantum Hamiltonians via Joint Measurements of Noisy
Non-Commuting Observables [0.0]
We introduce a method for performing a single joint measurement that can be implemented locally.
We derive bounds on the number of experimental repetitions required to estimate energies up to a certain precision.
We adapt the joint measurement strategy to minimise the sample complexity when the implementation of measurements is assumed noisy.
arXiv Detail & Related papers (2022-06-17T17:42:54Z) - On the Benefits of Large Learning Rates for Kernel Methods [110.03020563291788]
We show that the benefits of large learning rates can be precisely characterized in the context of kernel methods.
We consider the minimization of a quadratic objective in a separable Hilbert space, and show that with early stopping, the choice of learning rate influences the spectral decomposition of the obtained solution.
arXiv Detail & Related papers (2022-02-28T13:01:04Z) - Machine learning for causal inference: on the use of cross-fit
estimators [77.34726150561087]
Doubly-robust cross-fit estimators have been proposed to yield better statistical properties.
We conducted a simulation study to assess the performance of several estimators for the average causal effect (ACE).
When used with machine learning, the doubly-robust cross-fit estimators substantially outperformed all of the other estimators in terms of bias, variance, and confidence interval coverage.
arXiv Detail & Related papers (2020-04-21T23:09:55Z) - Quantum probes for universal gravity corrections [62.997667081978825]
We review the concept of minimum length and show how it induces a perturbative term appearing in the Hamiltonian of any quantum system.
We evaluate the Quantum Fisher Information in order to find the ultimate bounds to the precision of any estimation procedure.
Our results show that quantum probes are convenient resources, providing potential enhancement in precision.
arXiv Detail & Related papers (2020-02-13T19:35:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.