Contextuality, Fine-Tuning, and Teleological Explanation
- URL: http://arxiv.org/abs/2110.15898v2
- Date: Mon, 1 Nov 2021 14:11:12 GMT
- Title: Contextuality, Fine-Tuning, and Teleological Explanation
- Authors: Emily Adlam
- Abstract summary: I argue that contextuality is best thought of in terms of fine-tuning.
This behaviour can be understood as a manifestation of teleological features of physics.
I show that measurement contextuality can be explained by appeal to a global constraint forbidding closed causal loops.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: I assess various proposals for the source of the intuition that there is
something problematic about contextuality, ultimately concluding that
contextuality is best thought of in terms of fine-tuning. I then argue that as
with other fine-tuning problems in quantum mechanics, this behaviour can be
understood as a manifestation of teleological features of physics. Finally I
discuss several formal mathematical frameworks that have been used to analyse
contextuality and consider how their results should be interpreted by
scientific realists. In the course of this discussion I obtain several new
mathematical results - I demonstrate that preparation contextuality is a form
of fine-tuning, I show that measurement contextuality can be explained by
appeal to a global constraint forbidding closed causal loops, and I demonstrate
how negative probabilities can arise from a classical ontological model
together with an epistemic restriction.
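The abstract's claim that negative probabilities can emerge from a classical model is not spelled out here, but a standard single-qubit illustration (the discrete Wigner function in the style of Wootters, not the paper's own construction) shows the phenomenon: a quantum state decomposed over phase-point operators yields real weights that sum to one but can dip below zero.

```python
import numpy as np

# Pauli matrices
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Phase-point operators for a single qubit (one standard sign convention)
A = [(I + sx * X + sy * Y + sz * Z) / 2
     for (sx, sy, sz) in [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]]

# A "magic" state with Bloch vector (1/sqrt(2), 0, 1/sqrt(2))
rho = (I + (X + Z) / np.sqrt(2)) / 2

# Quasiprobabilities: real and normalised, yet one entry is negative
W = [np.trace(rho @ a).real / 2 for a in A]
print([round(w, 4) for w in W])   # one entry equals (1 - sqrt(2))/4 < 0
print(round(sum(W), 10))          # normalisation: the weights sum to 1.0
```

For stabilizer states all four weights are non-negative, so the negativity here is tied to the non-classicality of the chosen state, which is the kind of structure an epistemic restriction on a classical ontology is meant to reproduce.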
Related papers
- An algebraic characterisation of Kochen-Specker contextuality [0.0]
Contextuality is a key distinguishing feature between classical and quantum physics.
It expresses a fundamental obstruction to describing quantum theory using classical concepts.
Different frameworks address different aspects of the phenomenon, yet their precise relationship often remains unclear.
arXiv Detail & Related papers (2024-08-29T17:58:12Z)
- EventGround: Narrative Reasoning by Grounding to Eventuality-centric Knowledge Graphs [41.928535719157054]
We propose an initial comprehensive framework called EventGround to tackle the problem of grounding free-texts to eventuality-centric knowledge graphs.
We provide simple yet effective parsing and partial information extraction methods to tackle these problems.
Our framework, incorporating grounded knowledge, achieves state-of-the-art performance while providing interpretable evidence.
arXiv Detail & Related papers (2024-03-30T01:16:37Z)
- Machine learning and information theory concepts towards an AI Mathematician [77.63761356203105]
The current state-of-the-art in artificial intelligence is impressive, especially in terms of mastery of language, but not so much in terms of mathematical reasoning.
This essay builds on the idea that current deep learning mostly succeeds at system 1 abilities.
It takes an information-theoretical posture to ask questions about what constitutes an interesting mathematical statement.
arXiv Detail & Related papers (2024-03-07T15:12:06Z)
- Corrected Bell and Noncontextuality Inequalities for Realistic Experiments [1.099532646524593]
Contextuality is a feature of quantum correlations.
It is crucial from a foundational perspective as a nonclassical phenomenon, and from an applied perspective as a resource for quantum advantage.
We prove the continuity of a known measure of contextuality, the contextual fraction, which ensures its robustness to noise.
We then bound the extent to which relaxations of the ideal experimental assumptions can account for contextuality, culminating in a notion of genuine contextuality that is robust to experimental imperfections.
arXiv Detail & Related papers (2023-10-30T09:43:39Z)
- Learnability with PAC Semantics for Multi-agent Beliefs [38.88111785113001]
The tension between deduction and induction is perhaps the most fundamental issue in areas such as philosophy, cognition and artificial intelligence.
Valiant recognised that the challenge of learning should be integrated with deduction.
Although PAC semantics is weaker than classical entailment, it allows for a powerful model-theoretic framework for answering queries.
arXiv Detail & Related papers (2023-06-08T18:22:46Z)
- A Measure-Theoretic Axiomatisation of Causality [55.6970314129444]
We argue in favour of taking Kolmogorov's measure-theoretic axiomatisation of probability as the starting point towards an axiomatisation of causality.
Our proposed framework is rigorously grounded in measure theory, but it also sheds light on long-standing limitations of existing frameworks.
arXiv Detail & Related papers (2023-05-19T13:15:48Z)
- Quantum realism: axiomatization and quantification [77.34726150561087]
We build an axiomatization for quantum realism -- a notion of realism compatible with quantum theory.
We explicitly construct some classes of entropic quantifiers that are shown to satisfy almost all of the proposed axioms.
arXiv Detail & Related papers (2021-10-10T18:08:42Z)
- On the Quantum-like Contextuality of Ambiguous Phrases [2.6381163133447836]
We show that meaning combinations in ambiguous phrases can be modelled in the sheaf-theoretic framework for quantum contextuality.
Using the framework of Contextuality-by-Default (CbD), we explore the probabilistic variants of these and show that CbD-contextuality is also possible.
arXiv Detail & Related papers (2021-07-19T13:23:42Z)
- From the problem of Future Contingents to Peres-Mermin square experiments: An introductory review to Contextuality [0.0]
We study the historical emergence of the concept from philosophical and logical issues.
We present and compare the main theoretical frameworks that have been derived.
We focus on the complex task of establishing experimental tests of contextuality.
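The Peres-Mermin square named in this entry's title admits a short algebraic check, which can be sketched with numpy (this is the standard square, not a construction specific to the review): nine two-qubit observables whose row products are all +I while the column products multiply to -I overall, ruling out any noncontextual assignment of +/-1 outcomes.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# The Peres-Mermin square: nine two-qubit observables with commuting
# observables in every row and every column.
square = [
    [np.kron(X, I2), np.kron(I2, X), np.kron(X, X)],
    [np.kron(I2, Z), np.kron(Z, I2), np.kron(Z, Z)],
    [np.kron(X, Z),  np.kron(Z, X),  np.kron(Y, Y)],
]

I4 = np.eye(4, dtype=complex)

# Each row multiplies to +I ...
for row in square:
    assert np.allclose(row[0] @ row[1] @ row[2], I4)

# ... the first two columns multiply to +I, but the third multiplies to -I.
for c in range(2):
    assert np.allclose(square[0][c] @ square[1][c] @ square[2][c], I4)
assert np.allclose(square[0][2] @ square[1][2] @ square[2][2], -I4)

# Contradiction for noncontextual values: multiplying the three row
# constraints gives +1, the three column constraints give -1, yet each
# observable appears exactly once in the rows and once in the columns.
print("Peres-Mermin constraints verified")
```

The sign mismatch is the whole argument: any context-independent assignment of +/-1 values would have to give the product of all nine values as both +1 (from rows) and -1 (from columns).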
arXiv Detail & Related papers (2021-05-28T13:33:39Z)
- On Possibilistic Conditions to Contextuality and Nonlocality [0.0]
We provide some insights into logical contextuality and inequality-free proofs.
We show the existence of possibilistic paradoxes whose occurrence is a necessary and sufficient condition for logical contextuality.
arXiv Detail & Related papers (2020-11-08T23:52:40Z)
- Enforcing Interpretability and its Statistical Impacts: Trade-offs between Accuracy and Interpretability [30.501012698482423]
There has been no formal study of the statistical cost of interpretability in machine learning.
We model the act of enforcing interpretability as that of performing empirical risk minimization over the set of interpretable hypotheses.
We perform a case analysis, explaining why one may or may not observe a trade-off between accuracy and interpretability when the restriction to interpretable classifiers does or does not come at the cost of some excess statistical risk.
arXiv Detail & Related papers (2020-10-26T17:52:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.