Quantum sensing networks for the estimation of linear functions
- URL: http://arxiv.org/abs/2003.04867v2
- Date: Tue, 19 May 2020 19:07:21 GMT
- Title: Quantum sensing networks for the estimation of linear functions
- Authors: Jesús Rubio, Paul A Knott, Timothy J Proctor, Jacob A Dunningham
- Abstract summary: We study the role of inter-sensor correlations in the simultaneous estimation of multiple linear functions.
We show that entanglement can be detrimental for estimating non-trivial global properties.
Our results will serve as a basis to investigate how to harness correlations in networks of quantum sensors.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The theoretical framework for networked quantum sensing has been developed to
a great extent in the past few years, but there are still a number of open
questions. Among these, a problem of great significance, both fundamentally and
for constructing efficient sensing networks, is that of the role of
inter-sensor correlations in the simultaneous estimation of multiple linear
functions, where the latter are taken over a collection of local parameters and
can thus be seen as global properties. In this work we provide a solution to
this when each node is a qubit and the state of the network is
sensor-symmetric. First, we derive a general expression for the amount of
inter-sensor correlations, as a function of the geometry of the vectors
associated with the functions, for which the asymptotic error is optimal. Using this, we show that
if the vectors are clustered around two special subspaces, then the optimum is
achieved when the correlation strength approaches its extreme values, while
there is a monotonic transition between such extremes for any other geometry.
Furthermore, we demonstrate that entanglement can be detrimental for estimating
non-trivial global properties, and that sometimes it is in fact irrelevant.
Finally, we perform a non-asymptotic analysis of these results using a Bayesian
approach, finding that the amount of correlations needed to enhance the
precision crucially depends on the number of measurement data. Our results will
serve as a basis to investigate how to harness correlations in networks of
quantum sensors operating both in and out of the asymptotic regime.
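As an illustration of the abstract's central claim, the following minimal numerical sketch models the quantum Fisher information matrix of a sensor-symmetric network as F = F0[(1 - J)I + J 11^T], with J the inter-sensor correlation strength. This is an assumed parametrisation for illustration, not the paper's exact expressions; it evaluates the asymptotic error v^T F^{-1} v for estimating the linear function v . theta.

```python
import numpy as np

# Sketch under an assumed parametrisation: for n qubit sensors in a
# sensor-symmetric state, take a quantum Fisher information matrix
#   F = F0 * [(1 - J) I + J 11^T],   J in (-1/(n-1), 1),
# with J playing the role of the inter-sensor correlation strength.
# The asymptotic error for a linear function f = v . theta scales as
# v^T F^{-1} v (quantum Cramer-Rao bound).

def linear_function_error(v, J, F0=1.0):
    """v^T F^{-1} v for the sensor-symmetric QFI model above."""
    n = len(v)
    F = F0 * ((1.0 - J) * np.eye(n) + J * np.ones((n, n)))
    return v @ np.linalg.solve(F, v)

n = 4
v_avg = np.ones(n) / np.sqrt(n)                         # parallel to (1,...,1)
v_diff = np.array([1.0, -1.0, 0.0, 0.0]) / np.sqrt(2)   # orthogonal to (1,...,1)

for J in (-0.3, 0.0, 0.3, 0.9):
    print(f"J={J:+.1f}  average: {linear_function_error(v_avg, J):.3f}"
          f"  difference: {linear_function_error(v_diff, J):.3f}")
# The error of the average falls monotonically as J -> 1, while that of the
# difference falls as J -> -1/(n-1): the two "special subspaces" of the
# abstract favour opposite extremes of the correlation strength.
```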
Related papers
- Information-Theoretic Generalization Bounds for Deep Neural Networks [22.87479366196215]
Deep neural networks (DNNs) exhibit an exceptional capacity for generalization in practical applications.
This work aims to capture the effect and benefits of depth for supervised learning via information-theoretic generalization bounds.
arXiv Detail & Related papers (2024-04-04T03:20:35Z)
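For context, a canonical bound of this type (Xu & Raginsky, 2017), which depth-dependent analyses such as this one refine, reads as follows; this is background, not the paper's own result:

```latex
% Input--output mutual-information generalization bound (Xu & Raginsky, 2017).
% For a loss that is \sigma-sub-Gaussian and a training sample S of n points
% producing hypothesis W:
\left| \mathbb{E}\!\left[ L_{\mathcal{D}}(W) - L_{S}(W) \right] \right|
  \le \sqrt{\frac{2\sigma^{2}\, I(W; S)}{n}}
```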
- Characterization of partially accessible anisotropic spin chains in the presence of anti-symmetric exchange [0.0]
We address quantum characterization of anisotropic spin chains in the presence of antisymmetric exchange.
We investigate whether the Hamiltonian parameters of the chain may be estimated with precision approaching the ultimate limit imposed by quantum mechanics.
arXiv Detail & Related papers (2024-01-25T19:26:35Z)
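A minimal sketch of the generic tool behind such "ultimate limit" statements, using a single-qubit toy probe rather than the paper's spin chain:

```python
import numpy as np

# Quantum Fisher information (QFI) and the quantum Cramer-Rao bound,
# Var(lambda_hat) >= 1 / (M * F_Q), for M independent probes. The toy
# probe below (a single qubit, not the paper's chain) has a known
# QFI of 1 for a phase generated by sigma_z / 2 acting on |+>.

def state(lam):
    """|psi(lam)> = exp(-i lam sigma_z / 2) |+>."""
    plus = np.array([1.0, 1.0]) / np.sqrt(2)
    phase = np.exp(-1j * lam * np.array([0.5, -0.5]))
    return phase * plus

def qfi(lam, eps=1e-4):
    """Pure-state QFI from the fidelity: F_Q = 8 (1 - |<psi(l)|psi(l+eps)>|) / eps^2."""
    overlap = abs(np.vdot(state(lam), state(lam + eps)))
    return 8.0 * (1.0 - overlap) / eps**2

F = qfi(0.3)
M = 1000  # number of independent repetitions
print(f"QFI ~ {F:.4f}; quantum Cramer-Rao bound: Var >= {1.0 / (M * F):.2e}")
```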
- Hessian Eigenvectors and Principal Component Analysis of Neural Network Weight Matrices [0.0]
This study examines the dynamics of trained deep neural networks and their relationship with the network parameters.
We unveil a correlation between Hessian eigenvectors and network weights.
This relationship, hinging on the magnitude of eigenvalues, allows us to discern parameter directions within the network.
arXiv Detail & Related papers (2023-11-01T11:38:31Z)
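A toy illustration of the measurement involved (a small hand-rolled model, not the paper's experiments): estimate the Hessian of a network's loss by finite differences, then check how its leading eigenvectors align with the weight vector.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 2))
y = np.tanh(X @ np.array([1.5, -0.7]))          # toy regression targets

def loss(w):
    """MSE of a tiny 1-hidden-unit tanh network, w = (w_in[2], w_out)."""
    hidden = np.tanh(X @ w[:2])
    return np.mean((hidden * w[2] - y) ** 2)

def hessian(f, w, h=1e-4):
    """Symmetric finite-difference Hessian of f at w."""
    d = len(w)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            e_i, e_j = np.eye(d)[i] * h, np.eye(d)[j] * h
            H[i, j] = (f(w + e_i + e_j) - f(w + e_i - e_j)
                       - f(w - e_i + e_j) + f(w - e_i - e_j)) / (4 * h**2)
    return H

w = rng.normal(size=3)
for _ in range(500):                              # crude gradient descent
    g = np.array([(loss(w + 1e-5 * np.eye(3)[k]) - loss(w - 1e-5 * np.eye(3)[k]))
                  / 2e-5 for k in range(3)])
    w -= 0.1 * g

evals, evecs = np.linalg.eigh(hessian(loss, w))
for k in np.argsort(-np.abs(evals)):              # largest |eigenvalue| first
    cos = abs(evecs[:, k] @ w) / np.linalg.norm(w)
    print(f"eigenvalue {evals[k]:+.3e}  |cos(angle to weights)| = {cos:.3f}")
```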
- Fluctuation based interpretable analysis scheme for quantum many-body snapshots [0.0]
Microscopically understanding and classifying phases of matter is at the heart of strongly correlated quantum physics.
Here, we combine confusion learning with correlation convolutional neural networks, which yields fully interpretable phase detection.
Our work opens new directions in interpretable quantum image processing that is sensitive to long-range order.
arXiv Detail & Related papers (2023-04-12T17:59:59Z)
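A minimal sketch of the fluctuation features such correlation-based schemes build on (toy snapshots, not the paper's pipeline): estimate connected two-point correlations from a stack of projective-measurement snapshots.

```python
import numpy as np

# From a stack of snapshots (shots x sites), estimate the connected
# two-point correlations C_ij = <s_i s_j> - <s_i><s_j>, the kind of
# interpretable feature a correlation convolutional network operates on.

rng = np.random.default_rng(1)
shots, sites = 2000, 10
# Toy snapshots with built-in staggered (anti-)correlations:
base = rng.choice([-1, 1], size=(shots, 1))
snapshots = base * (-1) ** np.arange(sites) + 0.3 * rng.normal(size=(shots, sites))

mean = snapshots.mean(axis=0)
C = snapshots.T @ snapshots / shots - np.outer(mean, mean)  # connected correlator
print(np.round(C[:4, :4], 2))  # staggered sign pattern reveals the (toy) order
```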
- Typical Correlation Length of Sequentially Generated Tensor Network States [0.0]
We focus on spins with local interactions, whose correlations are extremely well captured by tensor network states.
We define ensembles of random tensor network states in one and two spatial dimensions that admit a sequential generation.
We observe the consistent emergence of a correlation length that depends only on the underlying spatial dimension and not the considered measure.
arXiv Detail & Related papers (2023-01-11T18:24:45Z)
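In the one-dimensional case, the correlation length of a translation-invariant matrix product state can be read off from the transfer-matrix spectrum; a minimal sketch with a random MPS (not the paper's sequentially generated ensembles):

```python
import numpy as np

# For a translation-invariant MPS with tensor A[s] of bond dimension D,
# connected correlations decay as (|l2|/|l1|)^r, where l1, l2 are the two
# leading eigenvalues of the transfer matrix E = sum_s A[s] (x) conj(A[s]).
# The correlation length is xi = -1 / log(|l2| / |l1|).

rng = np.random.default_rng(2)
d, D = 2, 8                      # physical and bond dimensions
A = rng.normal(size=(d, D, D)) + 1j * rng.normal(size=(d, D, D))

E = sum(np.kron(A[s], A[s].conj()) for s in range(d))   # D^2 x D^2 transfer matrix
lam = np.linalg.eigvals(E)
lam = lam[np.argsort(-np.abs(lam))]                     # sort by magnitude
xi = -1.0 / np.log(np.abs(lam[1]) / np.abs(lam[0]))
print(f"correlation length xi ~ {xi:.3f} sites")
```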
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- Mean-field Analysis of Piecewise Linear Solutions for Wide ReLU Networks [83.58049517083138]
We consider a two-layer ReLU network trained via gradient descent.
We show that SGD is biased towards a simple solution.
We also provide empirical evidence that knots at locations distinct from the data points might occur.
arXiv Detail & Related papers (2021-11-03T15:14:20Z)
- Acceleration in Distributed Optimization Under Similarity [72.54787082152278]
We study distributed (strongly convex) optimization problems over a network of agents, with no centralized nodes.
An $\varepsilon$-solution is achieved in $\tilde{\mathcal{O}}\big(\sqrt{\frac{\beta/\mu}{1-\rho}}\,\log 1/\varepsilon\big)$ communication steps.
For the first time, this rate matches (up to poly-log factors) the lower communication-complexity bounds of distributed gossip algorithms applied to the class of problems of interest.
arXiv Detail & Related papers (2021-10-24T04:03:00Z)
- Deep neural network approximation of analytic functions [91.3755431537592]
We establish an entropy bound for the spaces of neural networks with piecewise linear activation functions.
We derive an oracle inequality for the expected error of the considered penalized deep neural network estimators.
arXiv Detail & Related papers (2021-04-05T18:02:04Z)
- Fundamental Limits and Tradeoffs in Invariant Representation Learning [99.2368462915979]
Many machine learning applications involve learning representations that achieve two competing goals.
A minimax game-theoretic formulation captures a fundamental tradeoff between accuracy and invariance.
We provide an information-theoretic analysis of this general and important problem under both classification and regression settings.
arXiv Detail & Related papers (2020-12-19T15:24:04Z)
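A generic template for such minimax formulations (a common pattern in this literature; the paper's exact objective may differ):

```latex
% Learn an encoder g and predictor h while an adversary d tries to recover
% the nuisance attribute a from the representation z = g(x); the weight
% \lambda trades prediction accuracy against invariance.
\min_{g,\,h}\ \max_{d}\
  \mathbb{E}\big[\ell\big(h(g(x)),\, y\big)\big]
  \;-\; \lambda\, \mathbb{E}\big[\ell_{\mathrm{adv}}\big(d(g(x)),\, a\big)\big]
```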
- Semiparametric Nonlinear Bipartite Graph Representation Learning with Provable Guarantees [106.91654068632882]
We consider the bipartite graph and formalize its representation learning problem as a statistical estimation problem of parameters in a semiparametric exponential family distribution.
We show that the proposed objective is strongly convex in a neighborhood around the ground truth, so that a gradient descent-based method achieves linear convergence rate.
Our estimator is robust to any model misspecification within the exponential family, which is validated in extensive experiments.
arXiv Detail & Related papers (2020-03-02T16:40:36Z)
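A toy demonstration of the generic convergence claim above (a strongly convex quadratic, not the paper's bipartite-graph objective):

```python
import numpy as np

# On a mu-strongly convex, L-smooth function, gradient descent with step
# 1/L contracts the error linearly: ||x_k - x*|| <= (1 - mu/L)^k ||x_0 - x*||.

rng = np.random.default_rng(3)
Q = rng.normal(size=(5, 5)); Q = Q @ Q.T + np.eye(5)   # positive-definite Hessian
x_star = rng.normal(size=5)
L, mu = np.linalg.eigvalsh(Q).max(), np.linalg.eigvalsh(Q).min()

x = np.zeros(5)
for k in range(1, 51):
    x -= (1.0 / L) * (Q @ (x - x_star))                # grad of 0.5 (x-x*)^T Q (x-x*)
    if k % 10 == 0:
        print(f"k={k:2d}  error={np.linalg.norm(x - x_star):.2e}"
              f"  bound={(1 - mu / L) ** k * np.linalg.norm(x_star):.2e}")
```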
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.