Turing Video-based Cognitive Tests to Handle Entangled Concepts
- URL: http://arxiv.org/abs/2409.08868v1
- Date: Fri, 13 Sep 2024 14:30:55 GMT
- Title: Turing Video-based Cognitive Tests to Handle Entangled Concepts
- Authors: Diederik Aerts, Roberto Leporini, Sandro Sozzo
- Abstract summary: We present the results of an innovative video-based cognitive test on a specific conceptual combination.
We show that collected data can be faithfully modelled within a quantum-theoretic framework.
We provide a novel explanation for the appearance of entanglement in both physics and cognitive realms.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We have proved in both human-based and computer-based tests that natural concepts generally 'entangle' when they combine to form complex sentences, violating the rules of classical compositional semantics. In this article, we present the results of an innovative video-based cognitive test on a specific conceptual combination, which significantly violates the Clauser--Horne--Shimony--Holt version of Bell's inequalities ('CHSH inequality'). We also show that the collected data can be faithfully modelled within a quantum-theoretic framework elaborated by ourselves, and that a 'strong form of entanglement' occurs between the component concepts. While the video-based test confirms previous empirical results on entanglement in human cognition, our ground-breaking empirical approach surpasses language barriers and eliminates the need for prior knowledge, enabling universal accessibility. Finally, this transformative methodology allows one to unravel the underlying connections that drive our perception of reality. As a matter of fact, we provide a novel explanation for the appearance of entanglement in both physics and cognitive realms.
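As a minimal sketch of the statistic the abstract refers to: the CHSH quantity combines correlation values E from four joint measurements, and classical (compositional) models bound |S| by 2, while quantum models allow up to 2√2 (Tsirelson's bound). The frequencies below are hypothetical illustration values, not the paper's data.

```python
import math

def expectation(p):
    """Correlation E = p(++) + p(--) - p(+-) - p(-+) for one joint
    measurement, given outcome probabilities in that order."""
    pp, mm, pm, mp = p
    return pp + mm - pm - mp

def chsh(p_AB, p_ABp, p_ApB, p_ApBp):
    """CHSH statistic S = E(A,B) + E(A,B') + E(A',B) - E(A',B')."""
    return (expectation(p_AB) + expectation(p_ABp)
            + expectation(p_ApB) - expectation(p_ApBp))

# Hypothetical judgement frequencies (illustrative only): each tuple is
# (p(++), p(--), p(+-), p(-+)) for one of the four measurement pairings.
correlated = (0.426, 0.426, 0.074, 0.074)      # E = +0.704
anticorrelated = (0.074, 0.074, 0.426, 0.426)  # E = -0.704

S = chsh(correlated, correlated, correlated, anticorrelated)
print(f"S = {S:.3f}")  # 2.816 > 2: violates the classical bound
print(f"Tsirelson bound = {2 * math.sqrt(2):.3f}")  # 2.828
```

A value of S above 2, estimated from participants' judgements about a conceptual combination, is what the paper reports as evidence of entanglement between the component concepts.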
Related papers
- Provable Compositional Generalization for Object-Centric Learning [55.658215686626484]
Learning representations that generalize to novel compositions of known concepts is crucial for bridging the gap between human and machine perception.
We show that autoencoders that satisfy structural assumptions on the decoder and enforce encoder-decoder consistency will learn object-centric representations that provably generalize compositionally.
arXiv Detail & Related papers (2023-10-09T01:18:07Z) - Intrinsic Physical Concepts Discovery with Object-Centric Predictive Models [86.25460882547581]
We introduce the PHYsical Concepts Inference NEtwork (PHYCINE), a system that infers physical concepts in different abstract levels without supervision.
We show that object representations containing the discovered physical concepts variables could help achieve better performance in causal reasoning tasks.
arXiv Detail & Related papers (2023-03-03T11:52:21Z) - A Probabilistic-Logic based Commonsense Representation Framework for Modelling Inferences with Multiple Antecedents and Varying Likelihoods [5.87677276882675]
Commonsense knowledge-graphs (CKGs) are important resources for building machines that can 'reason' over text or environmental inputs and make inferences beyond perception.
In this work, we study how commonsense knowledge can be better represented by -- (i) utilizing a probabilistic logic representation scheme to model composite inferential knowledge and represent conceptual beliefs with varying likelihoods, and (ii) incorporating a hierarchical conceptual ontology to identify salient concept-relevant relations and organize beliefs at different conceptual levels.
arXiv Detail & Related papers (2022-11-30T08:44:30Z) - Quantum Structure in Human Perception [0.0]
We investigate the ways in which the quantum structures of superposition, contextuality, and entanglement have their origins in human perception itself.
Our analysis proceeds from a simple quantum measurement model to show how human perception incorporates the warping mechanism of categorical perception.
arXiv Detail & Related papers (2022-08-07T13:59:23Z) - When are Post-hoc Conceptual Explanations Identifiable? [18.85180188353977]
When no human concept labels are available, concept discovery methods search trained embedding spaces for interpretable concepts.
We argue that concept discovery should be identifiable, meaning that a number of known concepts can be provably recovered to guarantee reliability of the explanations.
Our results highlight the strict conditions under which reliable concept discovery without human labels can be guaranteed.
arXiv Detail & Related papers (2022-06-28T10:21:17Z) - Automatic Concept Extraction for Concept Bottleneck-based Video Classification [58.11884357803544]
We present an automatic Concept Discovery and Extraction module that rigorously composes a necessary and sufficient set of concept abstractions for concept-based video classification.
Our method elicits inherent complex concept abstractions in natural language to generalize concept-bottleneck methods to complex tasks.
arXiv Detail & Related papers (2022-06-21T06:22:35Z) - Acquiring and Modelling Abstract Commonsense Knowledge via Conceptualization [49.00409552570441]
We study the role of conceptualization in commonsense reasoning, and formulate a framework to replicate human conceptual induction.
We apply the framework to ATOMIC, a large-scale human-annotated CKG, aided by the taxonomy Probase.
arXiv Detail & Related papers (2022-06-03T12:24:49Z) - Entanglement in Cognition violating Bell Inequalities Beyond Cirel'son's Bound [0.0]
We present the results of two tests where a sample of human participants were asked to make judgements about conceptual combinations.
Both tests significantly violate the Clauser-Horne-Shimony-Holt version of Bell's inequalities ('CHSH inequality').
We show that the observed violations of the CHSH inequality can be explained as a consequence of a strong form of 'quantum entanglement' between the component conceptual entities.
arXiv Detail & Related papers (2021-02-07T16:57:59Z) - Formalising Concepts as Grounded Abstractions [68.24080871981869]
This report shows how representation learning can be used to induce concepts from raw data.
The main technical goal of this report is to show how techniques from representation learning can be married with a lattice-theoretic formulation of conceptual spaces.
arXiv Detail & Related papers (2021-01-13T15:22:01Z) - Compositional Generalization by Learning Analytical Expressions [87.15737632096378]
A memory-augmented neural model is connected with analytical expressions to achieve compositional generalization.
Experiments on the well-known benchmark SCAN demonstrate that our model achieves strong compositional generalization.
arXiv Detail & Related papers (2020-06-18T15:50:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.