Shadows and subsystems of generalized probabilistic theories: when tomographic incompleteness is not a loophole for contextuality proofs
- URL: http://arxiv.org/abs/2409.13024v1
- Date: Thu, 19 Sep 2024 18:00:42 GMT
- Title: Shadows and subsystems of generalized probabilistic theories: when tomographic incompleteness is not a loophole for contextuality proofs
- Authors: David Schmid, John H. Selby, Vinicius P. Rossi, Roberto D. Baldijão, Ana Belén Sainz
- Abstract summary: We show that proofs of the failure of noncontextuality are robust to a very broad class of failures of tomographic completeness.
We also introduce the notion of a shadow of a GPT fragment, which captures the information lost when one's states and effects are unknowingly not tomographic for one another.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is commonly believed that failures of tomographic completeness undermine assessments of nonclassicality in noncontextuality experiments. In this work, we study how such failures can indeed lead to mistaken assessments of nonclassicality. We then show that proofs of the failure of noncontextuality are robust to a very broad class of failures of tomographic completeness, including the kinds of failures that are likely to occur in real experiments. We do so by showing that such proofs actually rely on a much weaker assumption that we term relative tomographic completeness: namely, that one's experimental procedures are tomographic for each other. Thus, the failure of noncontextuality can be established even with coarse-grained, effective, emergent, or virtual degrees of freedom. This also implies that the existence of a deeper theory of nature (beyond that being probed in one's experiment) does not in and of itself pose any challenge to proofs of nonclassicality. To prove these results, we first introduce a number of useful new concepts within the framework of generalized probabilistic theories (GPTs). Most notably, we introduce the notion of a GPT subsystem, generalizing a range of preexisting notions of subsystems (including those arising from tensor products, direct sums, decoherence processes, virtual encodings, and more). We also introduce the notion of a shadow of a GPT fragment, which captures the information lost when one's states and effects are unknowingly not tomographic for one another.
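As a rough illustration of the tomographic notions at play (a sketch in the standard GPT vector picture, not the paper's own construction; all helper names are hypothetical), one can represent states and effects as real vectors with probability rule p(e, s) = e · s and ask whether a given set of effects separates a given set of states:

```python
import numpy as np

# Hypothetical helper names; a minimal sketch of (relative) tomographic
# completeness in the GPT vector picture, not code from the paper.

def pairing_matrix(effects, states):
    """Outcome statistics p[i, j] = effects[i] . states[j].
    Each column is the 'shadow' a state casts on these effects."""
    return np.array([[e @ s for s in states] for e in effects])

def effects_tomographic_for(effects, states, tol=1e-9):
    """True iff the map s -> (e_i . s)_i separates the given states,
    i.e. the effects are tomographic for this fragment of states."""
    E = np.asarray(effects, dtype=float)
    diffs = np.array([s - states[0] for s in states[1:]], dtype=float).T
    return np.linalg.matrix_rank(E @ diffs, tol) == np.linalg.matrix_rank(diffs, tol)

# Qubit fragment in the Pauli representation s = (1, <X>, <Y>, <Z>):
plus_x = np.array([1.0, 1, 0, 0])
minus_x = np.array([1.0, -1, 0, 0])
z_effects = [np.array([1.0, 0, 0, 1]) / 2, np.array([1.0, 0, 0, -1]) / 2]

print(effects_tomographic_for(z_effects, [plus_x, minus_x]))  # False
print(pairing_matrix(z_effects, [plus_x, minus_x]))           # identical columns
```

Here the two X eigenstates cast the same shadow on the Z-basis effects (both columns read (1/2, 1/2)), so this fragment's states and effects are not tomographic for one another; adding the X-basis effects (1, ±1, 0, 0)/2 restores separation for this pair.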
Related papers
- No Free Lunch: Fundamental Limits of Learning Non-Hallucinating Generative Models [14.535583931446807]
We develop a theoretical framework to analyze the learnability of non-hallucinating generative models.
We show that incorporating inductive biases aligned with the actual facts into the learning process is essential.
arXiv Detail & Related papers (2024-10-24T23:57:11Z)
- Skews in the Phenomenon Space Hinder Generalization in Text-to-Image Generation [59.138470433237615]
We introduce statistical metrics that quantify both the linguistic and visual skew of a dataset for relational learning.
We show that systematically controlled metrics are strongly predictive of generalization performance.
This work points to an important direction: enhancing the diversity and balance of the data, rather than simply scaling up its absolute size.
arXiv Detail & Related papers (2024-03-25T03:18:39Z)
- On the Convergence of Gradient Descent for Large Learning Rates [55.33626480243135]
We show that convergence is impossible when a sufficiently large fixed step size is used.
We provide a proof of this in the case of linear neural networks with a squared loss.
We also prove the impossibility of convergence for more general losses without requiring strong assumptions such as Lipschitz continuity for the gradient.
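For intuition (a toy illustration of the large-step phenomenon, not the linear-network setting of the paper): on the quadratic f(w) = λw²/2, each gradient step multiplies w by 1 − ηλ, so any fixed step η > 2/λ makes the iterates blow up geometrically.

```python
# Toy divergence demo (not the paper's setup): gradient descent on
# f(w) = 0.5 * lam * w**2 updates w <- (1 - eta * lam) * w, which
# diverges whenever |1 - eta * lam| > 1, i.e. for any fixed eta > 2 / lam.
lam = 10.0                        # curvature; stability threshold is 2/lam = 0.2
for eta in (0.05, 0.25):
    w = 1.0
    for _ in range(50):
        w -= eta * lam * w        # gradient step, since f'(w) = lam * w
    print(f"eta={eta}: |w| after 50 steps = {abs(w):.3e}")
# eta=0.05 contracts toward the minimum; eta=0.25 blows up geometrically.
```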
arXiv Detail & Related papers (2024-02-20T16:01:42Z)
- Addressing some common objections to generalized noncontextuality [0.0]
We respond to criticisms about the definition of generalized noncontextuality and the possibility of testing it experimentally.
One objection is that the existence of a classical record of which laboratory procedure was actually performed in each run of an experiment implies that the operational equivalence relations that are a necessary ingredient of any proof of the failure of noncontextuality do not hold.
arXiv Detail & Related papers (2023-02-14T19:00:04Z)
- Principled Knowledge Extrapolation with GANs [92.62635018136476]
We study counterfactual synthesis from a new perspective of knowledge extrapolation.
We show that an adversarial game with a closed-form discriminator can be used to address the knowledge extrapolation problem.
Our method enjoys both elegant theoretical guarantees and superior performance in many scenarios.
arXiv Detail & Related papers (2022-05-21T08:39:42Z)
- Accessible fragments of generalized probabilistic theories, cone equivalence, and applications to witnessing nonclassicality [0.7421845364041001]
We consider the question of how to provide a GPT-like characterization of a particular experimental setup within a given physical theory.
We show that the resulting characterization is not generally a GPT in and of itself; rather, it is described by a more general mathematical object.
We prove that neither incompatibility among measurements nor the assumption of freedom of choice is necessary for witnessing failures of generalized noncontextuality.
arXiv Detail & Related papers (2021-12-08T19:00:23Z)
- Contextuality without incompatibility [0.7421845364041001]
We show that measurement incompatibility is neither necessary nor sufficient for proofs of the failure of generalized noncontextuality.
We show that every proof of the failure of generalized noncontextuality in a quantum prepare-measure scenario can be converted into a proof of the failure of generalized noncontextuality in a corresponding scenario with no incompatible measurements.
arXiv Detail & Related papers (2021-06-16T18:00:04Z)
- Unitary Interactions Do Not Yield Outcomes: Attempting to Model "Wigner's Friend" [0.0]
An experiment by Proietti et al. purporting to instantiate the "Wigner's Friend" thought experiment is discussed.
It is pointed out that the stated implications of the experiment regarding the alleged irreconcilability of facts attributed to different observers warrant critical review.
arXiv Detail & Related papers (2021-05-04T21:38:41Z)
- Exploring Simple Siamese Representation Learning [68.37628268182185]
We show that simple Siamese networks can learn meaningful representations even using none of the following: (i) negative sample pairs, (ii) large batches, (iii) momentum encoders.
Our experiments show that collapsing solutions do exist for the loss and structure, but a stop-gradient operation plays an essential role in preventing collapsing.
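For intuition, a minimal PyTorch sketch of that mechanism (toy dimensions and layers, not the paper's architecture): the target branch is detached, so gradients flow only through the predictor side, and removing the `.detach()` calls re-admits the collapsed solution.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal Siamese sketch (toy sizes, not the paper's model): one shared
# encoder, a small predictor head, and a symmetrized negative-cosine loss
# with stop-gradient on the target branch.
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
predictor = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 16))

def siamese_loss(x1, x2):
    z1, z2 = encoder(x1), encoder(x2)
    p1, p2 = predictor(z1), predictor(z2)
    # .detach() implements the stop-gradient that prevents collapse
    return -0.5 * (F.cosine_similarity(p1, z2.detach(), dim=-1).mean()
                   + F.cosine_similarity(p2, z1.detach(), dim=-1).mean())

x = torch.randn(8, 32)                 # a batch of 8 toy inputs
view1 = x + 0.1 * torch.randn_like(x)  # two noisy "augmented" views
view2 = x + 0.1 * torch.randn_like(x)
siamese_loss(view1, view2).backward()  # gradients reach the encoder
                                       # only through the predictor branch
```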
arXiv Detail & Related papers (2020-11-20T18:59:33Z)
- Experimental certification of nonclassicality via phase-space inequalities [58.720142291102135]
We present the first experimental implementation of the recently introduced phase-space inequalities for nonclassicality certification.
We demonstrate the practicality and sensitivity of this approach by studying nonclassicality of a family of noisy and lossy quantum states of light.
arXiv Detail & Related papers (2020-10-01T09:03:52Z)
- Manifolds for Unsupervised Visual Anomaly Detection [79.22051549519989]
Unsupervised learning methods that need not encounter anomalies during training would be immensely useful.
We develop a novel hyperspherical Variational Auto-Encoder (VAE) via stereographic projections with a gyroplane layer.
We present state-of-the-art results on visual anomaly benchmarks in precision manufacturing and inspection, demonstrating real-world utility in industrial AI scenarios.
arXiv Detail & Related papers (2020-06-19T20:41:58Z)