Error trade-off relations for two-parameter unitary model with commuting
generators
- URL: http://arxiv.org/abs/2010.00789v1
- Date: Fri, 2 Oct 2020 05:22:24 GMT
- Authors: Shin Funada, Jun Suzuki
- Abstract summary: We show that the error trade-off relation which exists in our models of a finite-dimensional system is a generic phenomenon.
We analyze a qutrit system to show that there can be an error trade-off relation given by the SLD and RLD Cramér-Rao bounds.
- Score: 21.22196305592545
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate whether a trade-off relation between the diagonal elements of
the mean square error matrix exists for the two-parameter unitary models with
mutually commuting generators. We show that the error trade-off relation which
exists in our models of a finite-dimensional system is a generic phenomenon in
the sense that it occurs with a finite volume in the state space. We analyze a
qutrit system to show that there can be an error trade-off relation given by
the SLD and RLD Cramér-Rao bounds that intersect each other. First, we analyze
an example of the reference state showing the non-trivial trade-off relation
numerically, and find that its eigenvalues must be in a certain range to
exhibit the trade-off relation. For another example, a one-parameter family of
reference states, we show analytically that the non-trivial relation always
exists and that the range where the trade-off relation holds covers about half
of the possible range.
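The SLD and RLD Cramér-Rao bounds discussed in the abstract come from two different quantum Fisher information matrices. The following is a minimal numerical sketch, not taken from the paper: the qutrit reference state, the mixing weights, and the diagonal commuting generators G1 and G2 are all illustrative assumptions, chosen only to show how both matrices can be computed for a two-parameter unitary model with commuting generators.

```python
import numpy as np

def sld(rho, drho):
    """Symmetric logarithmic derivative: solve rho @ L + L @ rho = 2 * drho
    in the eigenbasis of rho (rho must be full rank)."""
    p, U = np.linalg.eigh(rho)
    d = U.conj().T @ drho @ U
    L = 2.0 * d / (p[:, None] + p[None, :])
    return U @ L @ U.conj().T

# Illustrative qutrit model (assumptions, not the paper's example):
# two commuting diagonal generators and a full-rank reference state with coherence.
G1 = np.diag([1.0, 0.0, -1.0])
G2 = np.diag([0.0, 1.0, -1.0])
psi = np.ones(3) / np.sqrt(3)
rho0 = 0.7 * np.outer(psi, psi) + 0.1 * np.eye(3)   # trace 1, full rank

# Derivatives of rho(theta) = U rho0 U^dag at theta = 0: d_i rho = -i [G_i, rho0]
drho = [-1j * (G @ rho0 - rho0 @ G) for G in (G1, G2)]

# SLD Fisher information matrix: J^S_ij = Re Tr[rho0 L_i L_j]
L = [sld(rho0, d) for d in drho]
JS = np.array([[np.trace(rho0 @ L[i] @ L[j]).real for j in range(2)]
               for i in range(2)])

# RLD Fisher information matrix: J^R_ij = Tr[rho0^{-1} drho_i drho_j]
rinv = np.linalg.inv(rho0)
JR = np.array([[np.trace(rinv @ drho[i] @ drho[j]) for j in range(2)]
               for i in range(2)])

print("SLD Fisher information:\n", JS)
print("RLD Fisher information:\n", JR)
```

The SLD matrix is real symmetric and the RLD matrix is Hermitian; inverting them gives the two Cramér-Rao lower bounds on the mean square error matrix whose intersection the abstract refers to.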
Related papers
- Effect of Correlated Errors on Quantum Memory [1.3198143828338362]
We introduce a classical correlation model based on hidden random fields for modeling i.i.d. errors with long-range correlations.
We show that this proposed model can capture certain correlation patterns not captured by the joint (system and bath) Hamiltonian model with pairwise terms.
arXiv Detail & Related papers (2024-08-16T14:59:10Z)
- Nonparametric Partial Disentanglement via Mechanism Sparsity: Sparse Actions, Interventions and Sparse Temporal Dependencies [58.179981892921056]
This work introduces a novel principle for disentanglement we call mechanism sparsity regularization.
We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors.
We show that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
arXiv Detail & Related papers (2024-01-10T02:38:21Z)
- Intrinsic Bayesian Cramér-Rao Bound with an Application to Covariance Matrix Estimation [49.67011673289242]
This paper presents a new performance bound for estimation problems where the parameter to estimate lies in a smooth manifold.
It induces a geometry for the parameter manifold, as well as an intrinsic notion of the estimation error measure.
arXiv Detail & Related papers (2023-11-08T15:17:13Z)
- A U-turn on Double Descent: Rethinking Parameter Counting in Statistical Learning [68.76846801719095]
We show when and where double descent occurs, and that its location is not inherently tied to the interpolation threshold p=n.
This provides a resolution to tensions between double descent and statistical intuition.
arXiv Detail & Related papers (2023-10-29T12:05:39Z)
- Error tradeoff relation for estimating the unitary-shift parameter of a relativistic spin-1/2 particle [20.44391436043906]
The purpose of this paper is to discuss the existence of a nontrivial tradeoff relation for estimating two unitary-shift parameters in a relativistic spin-1/2 system.
It is shown that no moving observer can estimate the two parameters simultaneously without error, even though the parametric model is classical in the rest frame.
arXiv Detail & Related papers (2023-08-01T17:07:29Z)
- Linear Causal Disentanglement via Interventions [8.444187296409051]
Causal disentanglement seeks a representation of data involving latent variables that relate to one another via a causal model.
We study observed variables that are a linear transformation of a linear latent causal model.
arXiv Detail & Related papers (2022-11-29T18:43:42Z)
- A Unified Analysis of Multi-task Functional Linear Regression Models with Manifold Constraint and Composite Quadratic Penalty [0.0]
The power of multi-task learning is brought in by imposing additional structures over the slope functions.
We show the composite penalty induces a specific norm, which helps to quantify the manifold curvature.
A unified convergence upper bound is obtained and specifically applied to the reduced-rank model and the graph Laplacian regularized model.
arXiv Detail & Related papers (2022-11-09T13:32:23Z)
- Monotonic Risk Relationships under Distribution Shifts for Regularized Risk Minimization [24.970274256061376]
Machine learning systems are often applied to data that is drawn from a different distribution than the training distribution.
Recent work has shown that for a variety of classification and signal reconstruction problems, the out-of-distribution performance is strongly linearly correlated with the in-distribution performance.
arXiv Detail & Related papers (2022-10-20T21:01:14Z)
- Boundary theories of critical matchgate tensor networks [59.433172590351234]
Key aspects of the AdS/CFT correspondence can be captured in terms of tensor network models on hyperbolic lattices.
For tensors fulfilling the matchgate constraint, these have previously been shown to produce disordered boundary states.
We show that these Hamiltonians exhibit multi-scale quasiperiodic symmetries captured by an analytical toy model.
arXiv Detail & Related papers (2021-10-06T18:00:03Z)
- Models of zero-range interaction for the bosonic trimer at unitarity [91.3755431537592]
We present the construction of quantum Hamiltonians for a three-body system consisting of identical bosons mutually coupled by a two-body interaction of zero range.
For a large part of the presentation, infinite scattering length will be considered.
arXiv Detail & Related papers (2020-06-03T17:54:43Z)
- Semiparametric Nonlinear Bipartite Graph Representation Learning with Provable Guarantees [106.91654068632882]
We consider the bipartite graph and formalize its representation learning problem as a statistical estimation problem of parameters in a semiparametric exponential family distribution.
We show that the proposed objective is strongly convex in a neighborhood around the ground truth, so that a gradient descent-based method achieves linear convergence rate.
Our estimator is robust to any model misspecification within the exponential family, which is validated in extensive experiments.
arXiv Detail & Related papers (2020-03-02T16:40:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.