Optimizing detection of continuous variable entanglement for limited data
- URL: http://arxiv.org/abs/2211.17168v2
- Date: Mon, 15 Jan 2024 15:04:40 GMT
- Title: Optimizing detection of continuous variable entanglement for limited data
- Authors: Martin Gärttner, Tobias Haas, and Johannes Noll
- Abstract summary: We consider the scenario of coarse grained measurements, or finite detector resolution, where the values of the Husimi $Q$-distribution are only known on a grid of points in phase space.
We customize our entanglement criteria to maximize the statistical significance of the detection for a given finite number of samples.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We explore the advantages of a class of entanglement criteria for continuous
variable systems based on the Husimi $Q$-distribution in scenarios with sparse
experimental data. The generality of these criteria allows optimizing them for
a given entangled state and experimental setting. We consider the scenario of
coarse grained measurements, or finite detector resolution, where the values of
the Husimi $Q$-distribution are only known on a grid of points in phase space,
and show how the entanglement criteria can be adapted to this case. Further, we
examine the scenario where experimental measurements amount to drawing
independent samples from the Husimi distribution. Here, we customize our
entanglement criteria to maximize the statistical significance of the detection
for a given finite number of samples. In both scenarios, optimization leads to
clear improvements, enlarging the class of detected states and increasing the
signal-to-noise ratio of the detection, respectively.
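The sampling scenario described in the abstract can be illustrated with a short sketch: drawing independent heterodyne samples from the (Gaussian) Husimi $Q$-distribution of a two-mode squeezed vacuum and coarse-graining them onto a phase-space grid. This is not the paper's actual procedure; the squeezing parameter, sample count, grid, and the covariance convention (vacuum quadrature variance equal to one, so heterodyne detection adds one unit of vacuum noise) are all illustrative assumptions.

```python
import numpy as np

r = 0.8          # squeezing parameter (illustrative)
N = 50_000       # number of heterodyne samples (illustrative)
bins = 10        # grid resolution per quadrature (illustrative)

# Wigner covariance of a two-mode squeezed vacuum in the convention where the
# vacuum quadrature variance is 1 (quadrature order: x1, p1, x2, p2).
c, s = np.cosh(2 * r), np.sinh(2 * r)
V = np.array([[c, 0,  s,  0],
              [0, c,  0, -s],
              [s, 0,  c,  0],
              [0, -s, 0,  c]])

# Heterodyne (Q-function) sampling adds one unit of vacuum noise per quadrature,
# so the samples are Gaussian with covariance V + I.
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(np.zeros(4), V + np.eye(4), size=N)

# Coarse-grained Husimi Q: empirical frequencies on a 4D grid of phase-space cells.
edges = [np.linspace(-6, 6, bins + 1)] * 4
hist, _ = np.histogramdd(samples, bins=edges)
Q_grid = hist / N  # relative frequency per cell, approximating the integrated Q
```

The coarse-grained `Q_grid` plays the role of the values of the $Q$-distribution known only on a finite grid of points, to which a criterion would then be applied.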
Related papers
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantee with explicit dimensional dependence for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Conditional Sampling of Variational Autoencoders via Iterated Approximate Ancestral Sampling [7.357511266926065]
Conditional sampling of variational autoencoders (VAEs) is needed in various applications, such as missing data imputation, but is computationally intractable.
A principled choice for asymptotically exact conditional sampling is Metropolis-within-Gibbs (MWG).
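A minimal sketch of the Metropolis-within-Gibbs idea, using a 2D Gaussian as a stand-in for a VAE's joint distribution: the unobserved coordinate is updated with a random-walk Metropolis step while the observed one is held fixed. The correlation, proposal width, and chain length are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
rho = 0.8                                   # assumed correlation of the toy joint
cov = np.array([[1.0, rho], [rho, 1.0]])
x1_obs = 1.0                                # the "observed" coordinate

def log_joint(x1, x2):
    # Unnormalized log-density of the bivariate Gaussian stand-in.
    v = np.array([x1, x2])
    return -0.5 * v @ np.linalg.solve(cov, v)

x2 = 0.0
chain = []
for _ in range(20_000):
    prop = x2 + 0.5 * rng.standard_normal()             # random-walk proposal
    if np.log(rng.uniform()) < log_joint(x1_obs, prop) - log_joint(x1_obs, x2):
        x2 = prop                                        # Metropolis accept
    chain.append(x2)
post = np.array(chain[2_000:])                           # discard burn-in
# Exact conditional for comparison: N(rho * x1_obs, 1 - rho^2) = N(0.8, 0.36)
```

For the Gaussian toy case the chain's mean and variance can be checked against the known conditional; in the VAE setting the joint is only available through the decoder, which is where MWG earns its keep.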
arXiv Detail & Related papers (2023-08-17T16:08:18Z)
- Anomaly Detection with Variance Stabilized Density Estimation [49.46356430493534]
We present a variance-stabilized density estimation problem for maximizing the likelihood of the observed samples.
To obtain a reliable anomaly detector, we introduce a spectral ensemble of autoregressive models for learning the variance-stabilized distribution.
We have conducted an extensive benchmark with 52 datasets, demonstrating that our method leads to state-of-the-art results.
arXiv Detail & Related papers (2023-06-01T11:52:58Z)
- Consistent Optimal Transport with Empirical Conditional Measures [0.6562256987706128]
We consider the problem of Optimal Transportation (OT) between two joint distributions when conditioned on a common variable.
We use kernelized-least-squares terms computed over the joint samples, which implicitly match the transport plan's conditional objective.
Our methodology improves upon state-of-the-art methods when employed in applications like prompt learning for few-shot classification and conditional-generation in the context of predicting cell responses to treatment.
arXiv Detail & Related papers (2023-05-25T10:01:57Z)
- Variational Classification [51.2541371924591]
We derive a variational objective to train the model, analogous to the evidence lower bound (ELBO) used to train variational auto-encoders.
Treating inputs to the softmax layer as samples of a latent variable, our abstracted perspective reveals a potential inconsistency.
We induce a chosen latent distribution, instead of the implicit assumption found in a standard softmax layer.
arXiv Detail & Related papers (2023-05-17T17:47:19Z)
- Detecting continuous variable entanglement in phase space with the $Q$-distribution [0.0]
We prove a class of continuous variable entanglement criteria based on the Husimi $Q$-distribution.
We discuss their generality, which is rooted in the possibility of optimizing over the set of concave functions.
arXiv Detail & Related papers (2022-11-30T17:01:24Z)
- Sensing Cox Processes via Posterior Sampling and Positive Bases [56.82162768921196]
We study adaptive sensing of point processes, a widely used model from spatial statistics.
We model the intensity function as a sample from a truncated Gaussian process, represented in a specially constructed positive basis.
Our adaptive sensing algorithms use Langevin dynamics and are based on posterior sampling (Cox-Thompson) and top-two posterior sampling (Top2) principles.
arXiv Detail & Related papers (2021-10-21T14:47:06Z)
- Calibration of Neural Networks using Splines [51.42640515410253]
Measuring calibration error amounts to comparing two empirical distributions.
We introduce a binning-free calibration measure inspired by the classical Kolmogorov-Smirnov (KS) statistical test.
Our method consistently outperforms existing methods on KS error as well as other commonly used calibration measures.
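The binning-free idea can be sketched as follows: sort predictions by confidence and compare the cumulative predicted confidence against the cumulative empirical accuracy, taking the maximum gap in the spirit of a Kolmogorov-Smirnov statistic. This is an interpretation of the KS-style calibration measure rather than the authors' exact code, and the toy predictors are invented for the demonstration.

```python
import numpy as np

def ks_calibration_error(conf, correct):
    """Max gap between cumulative confidence and cumulative accuracy,
    computed after sorting by confidence (no binning required)."""
    order = np.argsort(conf)
    conf = np.asarray(conf, dtype=float)[order]
    correct = np.asarray(correct, dtype=float)[order]
    n = conf.size
    cum_conf = np.cumsum(conf) / n      # cumulative predicted confidence
    cum_acc = np.cumsum(correct) / n    # cumulative empirical accuracy
    return np.max(np.abs(cum_conf - cum_acc))

# Toy comparison: a calibrated predictor (labels drawn with probability equal
# to the stated confidence) versus a systematically overconfident one.
rng = np.random.default_rng(1)
p = rng.uniform(0.05, 0.95, size=20_000)
calibrated = (rng.uniform(size=p.size) < p).astype(float)
overconfident = (rng.uniform(size=p.size) < 0.7 * p).astype(float)
err_good = ks_calibration_error(p, calibrated)
err_bad = ks_calibration_error(p, overconfident)
```

The overconfident predictor's cumulative curves drift apart, so its KS error is markedly larger than the calibrated predictor's.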
arXiv Detail & Related papers (2020-06-23T07:18:05Z)
- Variance Reduction for Better Sampling in Continuous Domains [5.675136204504504]
We show that the optimal search distribution might be more peaked around the center of the distribution than the prior distribution.
We provide explicit values for this reshaping of the search distribution depending on the population size.
arXiv Detail & Related papers (2020-04-24T12:25:48Z)
- Minimax optimal goodness-of-fit testing for densities and multinomials under a local differential privacy constraint [3.265773263570237]
We consider the consequences of local differential privacy constraints on goodness-of-fit testing.
We present a test that is adaptive to the smoothness parameter of the unknown density and remains minimax optimal up to a logarithmic factor.
arXiv Detail & Related papers (2020-02-11T08:41:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.