Non-adaptive Heisenberg-limited metrology with multi-channel homodyne
measurements
- URL: http://arxiv.org/abs/2110.03582v1
- Date: Thu, 7 Oct 2021 16:03:38 GMT
- Title: Non-adaptive Heisenberg-limited metrology with multi-channel homodyne
measurements
- Authors: Danilo Triggiani, Paolo Facchi, Vincenzo Tamma
- Abstract summary: We show a protocol achieving the ultimate Heisenberg-scaling sensitivity in the estimation of a parameter encoded in a generic linear network.
As a result, this protocol does not require a prior coarse estimation of the parameter, nor an adaptation of the network.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We show a protocol achieving the ultimate Heisenberg-scaling sensitivity in
the estimation of a parameter encoded in a generic linear network, without
employing any auxiliary networks, and without the need of any prior information
on the parameter nor on the network structure. As a result, this protocol does
not require a prior coarse estimation of the parameter, nor an adaptation of
the network. The scheme we analyse consists of a single-mode squeezed state and
homodyne detectors in each of the $M$ output channels of the network encoding
the parameter, making it feasible for experimental applications.
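For context, the Heisenberg-scaling sensitivity claimed above is the quadratic improvement over the shot-noise (standard quantum) limit. Writing the unknown parameter as $\varphi$ and the mean photon number of the probe as $N$ (symbols chosen here for illustration, not fixed by the abstract), the two standard scalings of the estimation error read:

```latex
% Shot-noise (standard quantum) limit vs. Heisenberg limit
% for a parameter \varphi probed with mean photon number N.
\Delta\varphi_{\mathrm{SNL}} \sim \frac{1}{\sqrt{N}},
\qquad
\Delta\varphi_{\mathrm{HL}} \sim \frac{1}{N}.
```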
Related papers
- Joint Bayesian Inference of Graphical Structure and Parameters with a
Single Generative Flow Network [59.79008107609297]
We propose in this paper to approximate the joint posterior over the structure and parameters of a Bayesian network.
We use a single GFlowNet whose sampling policy follows a two-phase process.
Since the parameters are included in the posterior distribution, this leaves more flexibility for the local probability models.
arXiv Detail & Related papers (2023-05-30T19:16:44Z) - A Directed-Evolution Method for Sparsification and Compression of Neural
Networks with Application to Object Identification and Segmentation and
considerations of optimal quantization using small number of bits [0.0]
This work introduces the Directed-Evolution method for sparsification of neural networks.
The relevance of parameters to the network accuracy is directly assessed.
The parameters that produce the least effect on accuracy when tentatively zeroed are indeed zeroed.
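As a rough, hypothetical illustration of the sentence above (not the authors' implementation), the sketch below tentatively zeroes each parameter, records the accuracy drop via a stand-in evaluate_accuracy helper, and permanently zeroes the parameters whose removal hurts accuracy the least:

```python
import numpy as np

def evaluate_accuracy(params):
    """Hypothetical stand-in: returns validation accuracy of the model
    defined by the flat parameter vector `params`."""
    raise NotImplementedError

def directed_evolution_prune(params, sparsity=0.5):
    """Zero the fraction `sparsity` of parameters whose tentative
    removal degrades accuracy the least (greedy, single pass)."""
    params = np.asarray(params, dtype=float).copy()
    base_acc = evaluate_accuracy(params)
    impact = np.empty(params.size)
    for i in range(params.size):
        saved = params[i]
        params[i] = 0.0                       # tentatively zero this parameter
        impact[i] = base_acc - evaluate_accuracy(params)
        params[i] = saved                     # restore it
    n_prune = int(sparsity * params.size)
    prune_idx = np.argsort(impact)[:n_prune]  # least impact on accuracy first
    params[prune_idx] = 0.0                   # zero them for good
    return params
```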
arXiv Detail & Related papers (2022-06-12T23:49:08Z) - On the Effective Number of Linear Regions in Shallow Univariate ReLU
Networks: Convergence Guarantees and Implicit Bias [50.84569563188485]
We show that gradient flow converges in direction when labels are determined by the sign of a target network with $r$ neurons.
Our result may already hold for mild over-parameterization, where the width is $\tilde{\mathcal{O}}(r)$ and independent of the sample size.
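For readers unfamiliar with the terminology, a shallow univariate ReLU network $f(x)=\sum_i v_i\,\mathrm{relu}(w_i x + b_i)$ with $r$ neurons is piecewise linear with at most $r+1$ linear regions. The hypothetical sketch below only illustrates the notion of linear regions, not the paper's analysis:

```python
import numpy as np

def count_linear_regions(w, b, v, tol=1e-9):
    """Count the linear regions realised by f(x) = sum_i v_i * relu(w_i * x + b_i).

    Each neuron with w_i != 0 has a candidate breakpoint at x = -b_i / w_i;
    crossing it changes the slope of f by v_i * |w_i|.  A breakpoint only
    separates two distinct regions if the net slope change there is nonzero."""
    w, b, v = (np.asarray(a, dtype=float) for a in (w, b, v))
    mask = np.abs(w) > tol
    bps = -b[mask] / w[mask]               # candidate breakpoints
    jumps = v[mask] * np.abs(w[mask])      # slope change when crossing left-to-right
    regions = 1
    for x0 in np.unique(bps):
        if abs(jumps[np.isclose(bps, x0)].sum()) > tol:
            regions += 1
    return regions

# 3 neurons, two of which cancel each other's kink at x = 0 -> 2 regions
print(count_linear_regions(w=[1.0, 1.0, -1.0], b=[0.0, 0.0, 1.0], v=[1.0, -1.0, 1.0]))
```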
arXiv Detail & Related papers (2022-05-18T16:57:10Z) - NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural
Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier, based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
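A minimal sketch of the ingredient named above, assuming a Gaussian kernel: a Nadaraya-Watson estimate of the conditional label distribution p(y | x), with predictive entropy used here as the uncertainty score (the full NUQ method involves more than this):

```python
import numpy as np

def nadaraya_watson_proba(x_query, X, y, n_classes, bandwidth=1.0):
    """Kernel estimate of p(y = c | x): kernel-weighted class frequencies."""
    d2 = np.sum((X - x_query) ** 2, axis=1)
    k = np.exp(-d2 / (2.0 * bandwidth ** 2))          # Gaussian kernel weights
    proba = np.array([k[y == c].sum() for c in range(n_classes)])
    return proba / proba.sum()

def predictive_entropy(proba, eps=1e-12):
    """Uncertainty score: entropy of the estimated label distribution."""
    p = np.clip(proba, eps, 1.0)
    return -np.sum(p * np.log(p))

# Toy usage with random embeddings
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8)); y = rng.integers(0, 3, size=100)
p = nadaraya_watson_proba(X[0], X, y, n_classes=3)
print(p, predictive_entropy(p))
```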
arXiv Detail & Related papers (2022-02-07T12:30:45Z) - NOTMAD: Estimating Bayesian Networks with Sample-Specific Structures and
Parameters [70.55488722439239]
We present NOTMAD, which learns to mix archetypal networks according to sample context.
We demonstrate the utility of NOTMAD and sample-specific network inference through analysis and experiments, including patient-specific gene expression networks.
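A heavily simplified, hypothetical sketch of the mixing idea: a sample-specific network is a context-dependent convex combination of a few archetypal network matrices, with mixing weights given by a softmax over a linear function of the context (the actual NOTMAD model additionally enforces acyclicity and is trained end-to-end):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def sample_specific_network(context, archetypes, W_ctx):
    """Mix K archetypal networks according to one sample's context.

    context    : (d,) covariate/context vector for the sample
    archetypes : (K, p, p) archetypal network weight matrices
    W_ctx      : (K, d) maps context to unnormalised mixing scores
    """
    weights = softmax(W_ctx @ context)                # (K,) convex mixing weights
    return np.tensordot(weights, archetypes, axes=1)  # (p, p) sample-specific network

# Toy usage: 3 archetypes over 5 variables, context dimension 4
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 5, 5))
W = rng.normal(size=(3, 4))
print(sample_specific_network(rng.normal(size=4), A, W).shape)
```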
arXiv Detail & Related papers (2021-11-01T17:17:34Z) - Blind Coherent Preamble Detection via Neural Networks [2.2063018784238984]
We propose a neural network (NN) sequence detector and timing-advance estimator.
We do not replace the whole preamble-detection process with a NN.
Instead, we propose to use the NN only for blind coherent combining of the signals in the detector, to compensate for the channel effect.
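To make "blind coherent combining" concrete, here is a purely classical stand-in (not the NN combiner the paper proposes): combining weights taken as the principal eigenvector of the sample covariance across receive channels, followed by correlation against the known preamble:

```python
import numpy as np

def blind_coherent_combine(r):
    """r: (M, N) complex samples from M receive channels.
    Returns one combined stream, using the principal eigenvector of the
    sample covariance as (blind) combining weights."""
    R = r @ r.conj().T / r.shape[1]            # (M, M) sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)
    w = eigvecs[:, -1]                         # dominant eigenvector
    return w.conj() @ r                        # (N,) coherently combined signal

def detect_preamble(combined, preamble, threshold):
    """Normalised correlation of the combined signal with the known preamble."""
    seg = combined[:preamble.size]
    score = np.abs(np.vdot(preamble, seg))
    score /= (np.linalg.norm(preamble) * np.linalg.norm(seg) + 1e-12)
    return score > threshold, score
```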
arXiv Detail & Related papers (2021-09-30T09:53:49Z) - Semiparametric Bayesian Networks [5.205440005969871]
We introduce semiparametric Bayesian networks that combine parametric and nonparametric conditional probability distributions.
Their aim is to incorporate the bounded complexity of parametric models and the flexibility of nonparametric ones.
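A minimal sketch of combining parametric and nonparametric conditional probability distributions in one model, assuming a linear-Gaussian CPD as the parametric family and a kernel-density CPD as the nonparametric one (one common instantiation; the paper's exact families and learning procedure are not reproduced here):

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

class LinearGaussianCPD:
    """Parametric CPD: child | parents ~ N(beta0 + beta @ parents, sigma^2)."""
    def __init__(self, beta0, beta, sigma):
        self.beta0, self.beta, self.sigma = beta0, np.asarray(beta, dtype=float), sigma
    def logpdf(self, child, parents):
        mu = self.beta0 + self.beta @ np.asarray(parents, dtype=float)
        return norm.logpdf(child, loc=mu, scale=self.sigma)

class KDECPD:
    """Nonparametric CPD: conditional density as a ratio of joint and
    marginal kernel density estimates."""
    def __init__(self, child_samples, parent_samples):
        joint = np.vstack([np.atleast_2d(parent_samples), np.atleast_2d(child_samples)])
        self.joint_kde = gaussian_kde(joint)
        self.parent_kde = gaussian_kde(np.atleast_2d(parent_samples))
    def logpdf(self, child, parents):
        point = np.append(np.atleast_1d(parents), child)
        return (self.joint_kde.logpdf(point)
                - self.parent_kde.logpdf(np.atleast_1d(parents)))[0]

# Toy usage: node B gets a parametric CPD given A, node C a nonparametric one.
rng = np.random.default_rng(0)
a = rng.normal(size=200)
b = 1.0 + 2.0 * a + rng.normal(scale=0.5, size=200)
cpd_b = LinearGaussianCPD(beta0=1.0, beta=[2.0], sigma=0.5)
cpd_c = KDECPD(child_samples=np.sin(a) + 0.1 * rng.normal(size=200), parent_samples=a)
print(cpd_b.logpdf(b[0], [a[0]]), cpd_c.logpdf(0.0, [a[0]]))
```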
arXiv Detail & Related papers (2021-09-07T11:47:32Z) - Heisenberg scaling precision in the estimation of functions of
parameters [0.0]
We propose a metrological strategy reaching Heisenberg scaling precision in the estimation of functions of any number $l$ of arbitrary parameters encoded in a generic $M$-channel linear network.
Two auxiliary linear networks are required, and their role is twofold: to refocus the signal into a single channel after the interaction with the interferometer, and to fix the function of the parameters to be estimated according to the linear network analysed.
arXiv Detail & Related papers (2021-03-15T17:28:15Z) - Sampling asymmetric open quantum systems for artificial neural networks [77.34726150561087]
We present a hybrid sampling strategy which takes asymmetric properties explicitly into account, achieving fast convergence times and high scalability for asymmetric open systems.
We highlight the universal applicability of artificial neural networks.
arXiv Detail & Related papers (2020-12-20T18:25:29Z) - Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z) - Typicality of Heisenberg scaling precision in multi-mode quantum
metrology [0.0]
We propose a measurement setup reaching Heisenberg scaling precision for the estimation of any parameter $\varphi$ encoded into a generic $M$-port linear network.
We show that, for large values of $M$ and a random (unbiased) choice of the non-adapted stage, the pre-factor of the Heisenberg-scaling precision takes a typical value which can be controlled through the encoding of the parameter $\varphi$ into the linear network.
arXiv Detail & Related papers (2020-03-27T17:34:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.