Contextuality Can be Verified with Noncontextual Experiments
- URL: http://arxiv.org/abs/2412.00199v1
- Date: Fri, 29 Nov 2024 19:01:03 GMT
- Title: Contextuality Can be Verified with Noncontextual Experiments
- Authors: Jonathan J. Thio, Wilfred Salmon, Crispin H. W. Barnes, Stephan De Bièvre, David R. M. Arvidsson-Shukur
- Abstract summary: Quantum states can be represented by KD distributions, which take values in the complex unit disc.
A KD distribution can be measured by a series of weak and projective measurements.
We analyze this connection with respect to mixed KD-positive states that cannot be decomposed as convex combinations of pure KD-positive states.
- Score: 0.0
- Abstract: We uncover new features of generalized contextuality by connecting it to the Kirkwood-Dirac (KD) quasiprobability distribution. Quantum states can be represented by KD distributions, which take values in the complex unit disc. Only for "KD-positive" states are the KD distributions joint probability distributions. A KD distribution can be measured by a series of weak and projective measurements. We design such an experiment and show that it is contextual if and only if the underlying state is not KD-positive. We analyze this connection with respect to mixed KD-positive states that cannot be decomposed as convex combinations of pure KD-positive states. Our central result is the construction of a noncontextual experiment that enables an experimenter to verify contextuality.
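To make the abstract's central object concrete, here is a minimal numerical sketch (ours, not the authors'; the choice of bases, the example states, and the helper names `kd_distribution` and `is_kd_positive` are illustrative assumptions). It computes the KD distribution $Q_{ij} = \langle b_j|a_i\rangle\langle a_i|\rho|b_j\rangle$ of a state $\rho$ with respect to two orthonormal bases and runs the positivity check described above:

```python
import numpy as np

def kd_distribution(rho, A, B):
    """KD distribution Q[i, j] = <b_j|a_i> <a_i|rho|b_j>,
    where the columns of A and B are the two orthonormal bases."""
    d = rho.shape[0]
    Q = np.empty((d, d), dtype=complex)
    for i in range(d):
        a = A[:, i]
        for j in range(d):
            b = B[:, j]
            Q[i, j] = (b.conj() @ a) * (a.conj() @ rho @ b)
    return Q

def is_kd_positive(Q, tol=1e-12):
    """A state is KD-positive iff every entry of Q is real and non-negative."""
    return bool(np.all(np.abs(Q.imag) < tol) and np.all(Q.real > -tol))

# Qubit example: computational (Z) basis vs. Hadamard (X) basis.
A = np.eye(2, dtype=complex)
B = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

ket0 = np.array([1, 0], dtype=complex)                 # |0> is KD-positive here
Q0 = kd_distribution(np.outer(ket0, ket0.conj()), A, B)
print(np.round(Q0, 3), is_kd_positive(Q0))             # [[0.5 0.5],[0 0]], True

ket = np.array([np.cos(np.pi / 8), 1j * np.sin(np.pi / 8)])  # generically not
Q1 = kd_distribution(np.outer(ket, ket.conj()), A, B)
print(np.round(Q1, 3), is_kd_positive(Q1))             # complex entries, False
```

The entries of $Q$ lie in the complex unit disc and always sum to 1; only for KD-positive states does $Q$ reduce to a joint probability distribution.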
Related papers
- Structure, Positivity and Classical Simulability of Kirkwood-Dirac Distributions [0.0]
We study the evolution of the Kirkwood-Dirac quasiprobability distribution.
We identify bounds for pure KD-positive states in distributions defined on mutually unbiased bases.
We show that the discrete Fourier transform of KD distributions on qudits in the Fourier basis follows a self-similarity constraint.
arXiv Detail & Related papers (2025-02-17T13:20:10Z)
- The Kirkwood-Dirac representation associated to the Fourier transform for finite abelian groups: positivity [0.0]
We construct and study the Kirkwood-Dirac representations naturally associated to the Fourier transform of finite abelian groups $G$.
We identify all pure KD-positive states and all KD-real observables for these KD representations (a toy Fourier-basis computation is sketched after this list).
arXiv Detail & Related papers (2025-01-21T16:16:55Z)
- Convex roofs witnessing Kirkwood-Dirac nonpositivity [0.0]
We construct two witnesses for KD nonpositivity for general mixed states.
Our first witness is the convex roof of the support uncertainty.
Our other witness is the convex roof of the total KD nonpositivity.
arXiv Detail & Related papers (2024-07-05T14:47:32Z)
- Properties and Applications of the Kirkwood-Dirac Distribution [0.0]
The KD distribution can represent a quantum state in terms of arbitrary observables.
This paper reviews the KD distribution, in three parts.
We emphasise connections between operational quantum advantages and negative or non-real KD quasi-probabilities.
arXiv Detail & Related papers (2024-03-27T18:00:02Z)
- Characterizing the geometry of the Kirkwood-Dirac positive states [0.0]
The Kirkwood-Dirac (KD) quasiprobability distribution can describe any quantum state with respect to the eigenbases of two observables $A$ and $B$.
We show how the full convex set of states with positive KD distributions depends on the eigenbases of $A$ and $B$.
We also investigate if there can exist mixed KD-positive states that cannot be written as convex combinations of pure KD-positive states.
arXiv Detail & Related papers (2023-05-31T18:05:02Z)
- On student-teacher deviations in distillation: does it pay to disobey? [54.908344098305804]
Knowledge distillation has been widely used to improve the test accuracy of a "student" network.
Despite being trained to fit the teacher's probabilities, the student may not only deviate significantly from them but also outdo the teacher in performance.
arXiv Detail & Related papers (2023-01-30T14:25:02Z)
- Controlling Moments with Kernel Stein Discrepancies [74.82363458321939]
Kernel Stein discrepancies (KSDs) measure the quality of a distributional approximation.
We first show that standard KSDs used for weak convergence control fail to control moment convergence.
We then provide sufficient conditions under which alternative diffusion KSDs control both moment and weak convergence.
arXiv Detail & Related papers (2022-11-10T08:24:52Z)
- DensePure: Understanding Diffusion Models towards Adversarial Robustness [110.84015494617528]
We analyze the properties of diffusion models and establish the conditions under which they can enhance certified robustness.
We propose a new method, DensePure, designed to improve the certified robustness of a pretrained model (i.e., a classifier).
We show that this robust region is a union of multiple convex sets, and is potentially much larger than the robust regions identified in previous works.
arXiv Detail & Related papers (2022-11-01T08:18:07Z)
- Targeted Separation and Convergence with Kernel Discrepancies [61.973643031360254]
Kernel-based discrepancy measures are required to (i) separate a target P from other probability measures or (ii) control weak convergence to P.
In this article, we derive new sufficient and necessary conditions to ensure (i) and (ii).
For MMDs on separable metric spaces, we characterize those kernels that separate Bochner embeddable measures and introduce simple conditions for separating all measures with unbounded kernels.
arXiv Detail & Related papers (2022-09-26T16:41:16Z)
- KDExplainer: A Task-oriented Attention Model for Explaining Knowledge Distillation [59.061835562314066]
We introduce a novel task-oriented attention model, termed KDExplainer, to shed light on the working mechanism underlying vanilla KD.
We also introduce a portable tool, dubbed the virtual attention module (VAM), which can be seamlessly integrated with various deep neural networks (DNNs) to enhance their performance under KD.
arXiv Detail & Related papers (2021-05-10T08:15:26Z)
- The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z)
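As a companion to the Fourier-basis entries above, here is a minimal sketch (an illustration under our own assumptions, not code from any cited paper; the dimension $d=3$, the state, and the helper names are ours) of the KD distribution taken between the computational basis and its discrete Fourier transform, the standard mutually unbiased pair for a qudit:

```python
import numpy as np

d = 3  # illustrative qutrit; any dimension works
omega = np.exp(2j * np.pi / d)
F = np.array([[omega ** (j * k) for k in range(d)]
              for j in range(d)]) / np.sqrt(d)  # columns: Fourier basis states

def kd_distribution(rho, A, B):
    # Q[i, j] = <b_j|a_i> <a_i|rho|b_j>
    return np.array([[(B[:, j].conj() @ A[:, i]) * (A[:, i].conj() @ rho @ B[:, j])
                      for j in range(d)] for i in range(d)])

# A computational-basis state is KD-positive in this mutually unbiased pair:
rho = np.zeros((d, d), dtype=complex)
rho[0, 0] = 1.0                         # rho = |0><0|
Q = kd_distribution(rho, np.eye(d, dtype=complex), F)
print(np.round(Q.real, 3))              # row 0 is uniform (1/d); other rows are 0
print(np.isclose(Q.sum(), 1.0))         # every KD distribution sums to Tr(rho) = 1
```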
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.