Noncontextuality inequalities for prepare-transform-measure scenarios
- URL: http://arxiv.org/abs/2407.09624v1
- Date: Fri, 12 Jul 2024 18:20:41 GMT
- Title: Noncontextuality inequalities for prepare-transform-measure scenarios
- Authors: David Schmid, Roberto D. Baldijão, John H. Selby, Ana Belén Sainz, Robert W. Spekkens
- Abstract summary: We show how linear quantifier elimination can be used to compute a polytope of correlations consistent with generalized noncontextuality.
We also give a simple algorithm for computing all the linear operational identities holding among a given set of states, of transformations, or of measurements.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We provide the first systematic technique for deriving witnesses of contextuality in prepare-transform-measure scenarios. More specifically, we show how linear quantifier elimination can be used to compute a polytope of correlations consistent with generalized noncontextuality in such scenarios. This polytope is specified as a set of noncontextuality inequalities that are necessary and sufficient conditions for observed data in the scenario to admit of a classical explanation relative to any linear operational identities, if one ignores some constraints from diagram preservation. While including these latter constraints generally leads to tighter inequalities, it seems that nonlinear quantifier elimination would be required to systematically include them. We also provide a linear program which can certify the nonclassicality of a set of numerical data arising in a prepare-transform-measure experiment. We apply our results to get a robust noncontextuality inequality for transformations that can be violated within the stabilizer subtheory. Finally, we give a simple algorithm for computing all the linear operational identities holding among a given set of states, of transformations, or of measurements.
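The abstract's final claim, that all linear operational identities among a set of states can be computed by a simple algorithm, can be illustrated with a standard linear-algebra sketch: an identity is a coefficient vector whose weighted sum of the (vectorized) states vanishes, so the identities form the null space of the matrix built from those states. The toy qubit states and the SVD-based null-space routine below are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

# Toy qubit states as real 4-vectors (1, x, y, z) in the Bloch
# parameterization rho = (I + x*X + y*Y + z*Z) / 2.
states = np.array([
    [1,  0, 0,  1],   # |0><0|
    [1,  0, 0, -1],   # |1><1|
    [1,  1, 0,  0],   # |+><+|
    [1, -1, 0,  0],   # |-><-|
    [1,  0, 0,  0],   # maximally mixed state I/2
], dtype=float)

def operational_identities(vectors, tol=1e-10):
    """Return a basis of coefficient vectors c with sum_i c_i * vectors[i] = 0.

    These are exactly the linear operational identities holding among
    the given states: the null space of the matrix whose columns are
    the state vectors, extracted from the trailing rows of Vh in an SVD.
    """
    _, s, vh = np.linalg.svd(vectors.T)
    rank = int(np.sum(s > tol))
    return vh[rank:]  # each row is one independent identity

ids = operational_identities(states)
# Here the five states span a 3-dimensional subspace (no Y component),
# so there are 5 - 3 = 2 independent identities, e.g.
# |0><0| + |1><1| = |+><+| + |-><-| and |0><0| + |1><1| = 2 * (I/2).
```

The same routine applies verbatim to vectorized transformations or measurement effects, since only the linear structure of the vectors is used.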
Related papers
- On the Geometry of Regularization in Adversarial Training: High-Dimensional Asymptotics and Generalization Bounds [11.30047438005394]
This work investigates the question of how to choose the regularization norm $\lVert \cdot \rVert$ in the context of high-dimensional adversarial training for binary classification.
We quantitatively characterize the relationship between perturbation size and the optimal choice of $\lVert \cdot \rVert$, confirming the intuition that, in the data-scarce regime, the type of regularization becomes increasingly important for adversarial training as perturbations grow in size.
arXiv Detail & Related papers (2024-10-21T14:53:12Z) - Unsupervised Representation Learning from Sparse Transformation Analysis [79.94858534887801]
We propose to learn representations from sequence data by factorizing the transformations of the latent variables into sparse components.
Input data are first encoded as distributions of latent activations and subsequently transformed using a probability flow model.
arXiv Detail & Related papers (2024-10-07T23:53:25Z) - Selective Nonparametric Regression via Testing [54.20569354303575]
We develop an abstention procedure via testing the hypothesis on the value of the conditional variance at a given point.
Unlike existing methods, the proposed one accounts not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor.
arXiv Detail & Related papers (2023-09-28T13:04:11Z) - Semi-parametric inference based on adaptively collected data [34.56133468275712]
We construct suitably weighted estimating equations that account for adaptivity in data collection.
Our results characterize the degree of "explorability" required for normality to hold.
We illustrate our general theory with concrete consequences for various problems, including standard linear bandits and sparse generalized bandits.
arXiv Detail & Related papers (2023-03-05T00:45:32Z) - On the Importance of Gradient Norm in PAC-Bayesian Bounds [92.82627080794491]
We propose a new generalization bound that exploits the contractivity of the log-Sobolev inequalities.
We empirically analyze the effect of this new loss-gradient norm term on different neural architectures.
arXiv Detail & Related papers (2022-10-12T12:49:20Z) - Dimension Free Generalization Bounds for Non Linear Metric Learning [61.193693608166114]
We provide uniform generalization bounds for two regimes -- the sparse regime, and a non-sparse regime.
We show that by relying on a different, new property of the solutions, it is still possible to provide dimension free generalization guarantees.
arXiv Detail & Related papers (2021-02-07T14:47:00Z) - Bounding and simulating contextual correlations in quantum theory [0.0]
We introduce a hierarchy of semidefinite relaxations of the set of quantum correlations in generalised contextuality scenarios.
We use it to determine the maximal quantum violation of several noncontextuality inequalities whose maximum violations were previously unknown.
We then go further and use it to prove that certain preparation-contextual correlations cannot be explained with pure states.
arXiv Detail & Related papers (2020-10-09T18:19:09Z) - Solvable Criterion for the Contextuality of any Prepare-and-Measure Scenario [0.0]
An operationally noncontextual ontological model of the quantum statistics associated with the prepare-and-measure scenario is constructed.
A mathematical criterion, called unit separability, is formulated as the relevant classicality criterion.
We reformulate our results in the framework of generalized probabilistic theories.
arXiv Detail & Related papers (2020-03-13T18:00:05Z) - The empirical duality gap of constrained statistical learning [115.23598260228587]
We study constrained statistical learning problems, the unconstrained versions of which are at the core of virtually all modern information processing.
We propose to tackle the constrained statistical problem, overcoming its infinite dimensionality, unknown distributions, and constraints by leveraging finite-dimensional parameterizations, sample averages, and duality theory.
We demonstrate the effectiveness and usefulness of this constrained formulation in a fair learning application.
arXiv Detail & Related papers (2020-02-12T19:12:29Z) - Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning [70.01650994156797]
Off-policy evaluation of sequential decision policies from observational data is necessary in batch reinforcement learning settings such as education and healthcare.
We develop an approach that estimates bounds on the value of a given policy.
We prove convergence to the sharp bounds as we collect more confounded data.
arXiv Detail & Related papers (2020-02-11T16:18:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.