Simulation of Non-Ordinary Consciousness
- URL: http://arxiv.org/abs/2503.23245v1
- Date: Sat, 29 Mar 2025 23:04:04 GMT
- Title: Simulation of Non-Ordinary Consciousness
- Authors: Khalid M. Saqr
- Abstract summary: Glyph is a generative symbolic interface designed to simulate psilocybin-like symbolic cognition. It consistently generates high-entropy, metaphor-saturated, and ego-dissolving language.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The symbolic architecture of non-ordinary consciousness remains largely unmapped in cognitive science and artificial intelligence. While conventional models prioritize rational coherence, altered states such as those induced by psychedelics reveal distinct symbolic regimes characterized by recursive metaphor, ego dissolution, and semantic destabilization. We present \textit{Glyph}, a generative symbolic interface designed to simulate psilocybin-like symbolic cognition in large language models. Rather than modeling perception or mood, Glyph enacts symbolic transformation through recursive reentry, metaphoric modulation, and entropy-scaled destabilization -- a triadic operator formalized within a tensorial linguistic framework. Experimental comparison with baseline GPT-4o reveals that Glyph consistently generates high-entropy, metaphor-saturated, and ego-dissolving language across diverse symbolic prompt categories. These results validate the emergence of non-ordinary cognitive patterns and support a new paradigm for simulating altered consciousness through language. Glyph opens novel pathways for modeling symbolic cognition, exploring metaphor theory, and encoding knowledge in recursively altered semantic spaces.
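The abstract reports that Glyph produces higher-entropy, metaphor-saturated language than baseline GPT-4o. As a rough illustration of how such an entropy comparison could be scored, here is a minimal sketch of my own; the whitespace tokenization, unigram model, and bits-per-token metric are assumptions, not the paper's evaluation protocol.

```python
# Illustrative sketch (not the paper's evaluation code): unigram Shannon
# entropy as one crude proxy for the "high-entropy" language the abstract
# describes. Tokenization (lowercased whitespace split) is an assumption.
import math
from collections import Counter

def unigram_entropy(text: str) -> float:
    """Shannon entropy (bits per token) of the unigram distribution of `text`."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

baseline_sample = "the river flows to the sea and the sea returns to the river"
altered_sample = "salt remembers rivers; mirrors dissolve into grammar, grammar into light"

print(f"baseline entropy: {unigram_entropy(baseline_sample):.2f} bits/token")
print(f"altered entropy:  {unigram_entropy(altered_sample):.2f} bits/token")
```

In practice such a score would be averaged over many prompts per symbolic category before comparing the two systems.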
Related papers
- Objective-Free Local Learning and Emergent Language Structure in Thinking Machines [0.0]
We present a neuro-symbolic framework for generative language modeling based on local, event-driven emergent learning.
At its core is a hierarchical Hopfield memory chain acting as a compositional short-term memory and dynamic tokenizer.
We demonstrate that briefly activating a new neuron during inference binds distributed multi-scale token features into a symbolic embedding.
arXiv Detail & Related papers (2025-06-29T15:29:13Z) - Theoretical Foundations for Semantic Cognition in Artificial Intelligence [0.0]
This monograph presents a modular cognitive architecture for artificial intelligence grounded in the formal modeling of belief as structured semantic state.
Belief states are defined as dynamic ensembles of linguistic expressions embedded within a navigable manifold, where operators enable assimilation, abstraction, nullification, memory, and introspection.
arXiv Detail & Related papers (2025-04-29T23:10:07Z) - Human-like conceptual representations emerge from language prediction [72.5875173689788]
Large language models (LLMs) trained exclusively through next-token prediction over language data exhibit remarkably human-like behaviors.
Are these models developing concepts akin to humans, and if so, how are such concepts represented and organized?
Our results demonstrate that LLMs can flexibly derive concepts from linguistic descriptions in relation to contextual cues about other concepts.
These findings establish that structured, human-like conceptual representations can naturally emerge from language prediction without real-world grounding.
arXiv Detail & Related papers (2025-01-21T23:54:17Z) - Shadow of the (Hierarchical) Tree: Reconciling Symbolic and Predictive Components of the Neural Code for Syntax [1.223779595809275]
I discuss the prospects of reconciling the neural code for hierarchical 'vertical' syntax with linear and predictive 'horizontal' processes.
I provide a neurosymbolic mathematical model for how to inject symbolic representations into a neural regime encoding lexico-semantic statistical features.
arXiv Detail & Related papers (2024-12-02T08:44:16Z) - Neurosymbolic Graph Enrichment for Grounded World Models [47.92947508449361]
We present a novel approach to enhance and exploit LLM reactive capability to address complex problems.
We create a multimodal, knowledge-augmented formal representation of meaning that combines the strengths of large language models with structured semantic representations.
By bridging the gap between unstructured language models and formal semantic structures, our method opens new avenues for tackling intricate problems in natural language understanding and reasoning.
arXiv Detail & Related papers (2024-11-19T17:23:55Z) - What Makes a Maze Look Like a Maze? [92.80800000328277]
We introduce Deep Schema Grounding (DSG), a framework that leverages explicit structured representations of visual abstractions for grounding and reasoning.
At the core of DSG are schemas--dependency graph descriptions of abstract concepts that decompose them into more primitive-level symbols.
We show that DSG significantly improves the abstract visual reasoning performance of vision-language models.
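The entry above describes schemas as dependency-graph decompositions of abstract concepts into more primitive symbols. The following toy sketch shows one way such a structure could be represented; the maze example and the `primitives` helper are my own illustration, not the DSG codebase.

```python
# Toy sketch of a "schema" as a dependency graph (my reading of the abstract,
# not DSG's implementation): an abstract concept decomposes into more
# primitive symbols that could each be grounded separately by a model.
maze_schema = {
    "maze": ["walls", "paths", "entrance", "exit"],  # abstract concept -> parts
    "walls": ["barrier"],                            # parts -> primitive symbols
    "paths": ["navigable region"],
    "entrance": ["opening"],
    "exit": ["opening"],
}

def primitives(concept: str, schema: dict) -> list:
    """Flatten a concept into the primitive symbols at the leaves of its schema."""
    children = schema.get(concept)
    if not children:
        return [concept]
    leaves = []
    for child in children:
        leaves.extend(primitives(child, schema))
    return leaves

print(primitives("maze", maze_schema))
```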
arXiv Detail & Related papers (2024-09-12T16:41:47Z) - Neural Semantic Parsing with Extremely Rich Symbolic Meaning Representations [7.774674200374255]
We introduce a novel compositional symbolic representation for concepts based on their position in the taxonomical hierarchy.
This representation provides richer semantic information and enhances interpretability.
Our experiments show that the taxonomical model, trained on these richer and more complex meaning representations, performs slightly below the traditional model on standard evaluation metrics, but outperforms it on out-of-vocabulary concepts.
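The representation described above encodes a concept by its position in a taxonomical hierarchy. A hand-rolled sketch of that general idea follows; the tiny taxonomy and the dot-joined hypernym path are illustrative assumptions, not the paper's actual encoding.

```python
# Toy sketch (my illustration, not the paper's encoding): represent a concept
# by its hypernym path in a small hand-written taxonomy, so an unseen concept
# with a known parent still receives an informative, compositional code.
PARENT = {
    "dog": "mammal", "cat": "mammal", "mammal": "animal",
    "sparrow": "bird", "bird": "animal", "animal": "entity",
}

def taxonomic_code(concept: str) -> str:
    """Dot-joined path from the taxonomy root down to `concept`."""
    path = [concept]
    while path[-1] in PARENT:
        path.append(PARENT[path[-1]])
    return ".".join(reversed(path))

print(taxonomic_code("dog"))      # entity.animal.mammal.dog
print(taxonomic_code("sparrow"))  # entity.animal.bird.sparrow
```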
arXiv Detail & Related papers (2024-04-19T08:06:01Z) - Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences that reflect the semanticity and compositionality of symbolic systems, learned without supervision rather than relying on pre-defined primitives.
This approach establishes a unified framework that integrates symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
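To make the idea of continuous states settling into discrete symbolic basins concrete, here is a minimal classical-Hopfield sketch; it is a generic textbook construction used only for illustration, not the model proposed in the paper above.

```python
# Minimal classical-Hopfield sketch (an illustrative assumption, not the
# paper's model): Hebbian weights store binary patterns; iterated sign
# updates pull a noisy continuous input into the nearest stored attractor,
# i.e. a discrete "basin" that plays the role of a symbol.
import numpy as np

rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(3, 64))    # 3 stored symbols, 64 units
W = (patterns.T @ patterns) / patterns.shape[1]     # Hebbian outer-product rule
np.fill_diagonal(W, 0.0)                            # no self-connections

def settle(state: np.ndarray, steps: int = 20) -> np.ndarray:
    """Synchronous sign updates until the state stops changing."""
    s = np.sign(state)
    for _ in range(steps):
        nxt = np.sign(W @ s)
        nxt[nxt == 0] = 1.0
        if np.array_equal(nxt, s):
            break
        s = nxt
    return s

noisy = patterns[1] + 0.8 * rng.standard_normal(64)  # continuous, corrupted input
recovered = settle(noisy)
overlap = float(np.mean(recovered == patterns[1]))
print(f"overlap with stored pattern 1: {overlap:.2f}")
```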
arXiv Detail & Related papers (2023-10-03T05:40:56Z) - LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
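The summary above mentions grounding logical formulae onto data via fuzzy-logic continuous relaxation. The sketch below illustrates that general technique with a single hierarchy rule relaxed by the product t-norm; the rule, variable names, and loss form are my assumptions, not LOGICSEG's implementation.

```python
# Illustrative sketch of fuzzy relaxation (the general technique, not
# LOGICSEG's code): the hierarchy rule "car(x) -> vehicle(x)" is relaxed
# with the product t-norm into the penalty p_car * (1 - p_vehicle), which is
# differentiable in the class probabilities and can be added to the
# segmentation training loss.
import numpy as np

def implication_penalty(p_child: np.ndarray, p_parent: np.ndarray) -> np.ndarray:
    """Soft violation of child(x) -> parent(x), per pixel."""
    return p_child * (1.0 - p_parent)

p_car = np.array([0.9, 0.2, 0.7])       # predicted prob. of the fine class
p_vehicle = np.array([0.95, 0.1, 0.3])  # predicted prob. of its superclass

penalty = implication_penalty(p_car, p_vehicle)
print("per-pixel rule violation:", np.round(penalty, 3))
print("mean logic loss term:", round(float(penalty.mean()), 3))
```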
arXiv Detail & Related papers (2023-09-24T05:43:19Z) - A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z) - Metaphorical Language Change Is Self-Organized Criticality [0.0]
The paper argues that metaphorical language change constitutes a state of self-organized criticality.
It provides a statistical profile of metaphorical constructions and intrinsic generative rules with antecedent conditions.
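One hallmark commonly associated with self-organized criticality is a power-law frequency profile. The sketch below shows a simple rank-frequency slope check on hypothetical counts; the data and the fitting choice are mine and are not drawn from the paper's statistical profile.

```python
# Illustrative probe (an assumption, not the paper's analysis): estimate the
# power-law exponent of a rank-frequency curve by fitting a line on
# log-log axes; an exponent near 1 indicates a Zipf-like profile.
import numpy as np

# hypothetical counts of metaphorical constructions, sorted by frequency
counts = np.array([512, 260, 170, 128, 101, 85, 74, 64, 57, 51], dtype=float)
ranks = np.arange(1, len(counts) + 1, dtype=float)

slope, intercept = np.polyfit(np.log(ranks), np.log(counts), deg=1)
print(f"estimated power-law exponent: {-slope:.2f}")
```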
arXiv Detail & Related papers (2022-11-19T14:38:38Z) - Emergent Graphical Conventions in a Visual Communication Game [80.79297387339614]
Humans communicate with graphical sketches in addition to symbolic languages.
We take the very first step to model and simulate such an evolution process via two neural agents playing a visual communication game.
We devise a novel reinforcement learning method such that agents are evolved jointly towards successful communication and abstract graphical conventions.
arXiv Detail & Related papers (2021-11-28T18:59:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.