Semantic Attractors and the Emergence of Meaning: Towards a Teleological Model of AGI
- URL: http://arxiv.org/abs/2508.18290v1
- Date: Thu, 21 Aug 2025 19:57:52 GMT
- Title: Semantic Attractors and the Emergence of Meaning: Towards a Teleological Model of AGI
- Authors: Hans-Joachim Rudolph
- Abstract summary: This essay develops a theoretical framework for a semantic Artificial General Intelligence (AGI) based on the notion of semantic attractors in complex meaning spaces.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This essay develops a theoretical framework for a semantic Artificial General Intelligence (AGI) based on the notion of semantic attractors in complex-valued meaning spaces. Departing from current transformer-based language models, which operate on statistical next-token prediction, we explore a model in which meaning is not inferred probabilistically but formed through recursive tensorial transformation. Using cyclic operations involving the imaginary unit \emph{i}, we describe a rotational semantic structure capable of modeling irony, homonymy, and ambiguity. At the center of this model, however, is a semantic attractor -- a teleological operator that, unlike statistical computation, acts as an intentional agent (Microvitum), guiding meaning toward stability, clarity, and expressive depth. Conceived in terms of gradient flows, tensor deformations, and iterative matrix dynamics, the attractor offers a model of semantic transformation that is not only mathematically suggestive, but also philosophically significant. We argue that true meaning emerges not from simulation, but from recursive convergence toward semantic coherence, and that this requires a fundamentally new kind of cognitive architecture -- one designed to shape language, not just predict it.
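The rotational dynamics described in the abstract, a complex-valued state cyclically multiplied by the imaginary unit *i* while a gradient-flow-like pull draws it toward an attractor, can be illustrated with a toy iteration. This is a minimal sketch, not the paper's formalism; the update rule, step size `eta`, and attractor vector `a` are illustrative assumptions.

```python
import numpy as np

# Toy sketch (assumed form, not the paper's formalism): a complex-valued
# "meaning vector" z whose deviation from an attractor a is rotated by the
# imaginary unit i (a quarter turn; i**4 == 1 closes the cycle) and
# contracted by a gradient-flow-like factor (1 - eta) at each step.
def attractor_step(z, a, eta=0.3):
    return a + (1 - eta) * 1j * (z - a)   # rotate the deviation, then shrink it

rng = np.random.default_rng(0)
a = np.array([1.0 + 0.0j, 0.0 + 1.0j])   # hypothetical attractor state
z = rng.standard_normal(2) + 1j * rng.standard_normal(2)

for _ in range(200):
    z = attractor_step(z, a)

# The deviation shrinks in magnitude by |(1 - eta) * 1j| = 0.7 per step,
# so after 200 iterations z has spiraled into the attractor.
print(np.abs(z - a).max() < 1e-6)   # → True
```

The spiral, rather than a straight descent, is what lets one dynamics carry both convergence (contraction) and cyclic semantic structure (rotation by *i*).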
Related papers
- Visual Language Hypothesis [14.062822951292402]
We study visual representation learning from a structural and topological perspective. We show that approximating the quotient also places structural demands on the model architecture.
arXiv Detail & Related papers (2025-12-29T09:43:10Z)
- Determinism and Indeterminism as Model Artefacts: Toward a Model-Invariant Ontology of Physics [0.0]
I argue that the traditional opposition between determinism and indeterminism in physics is representational rather than ontological. I use this model-equivalence to motivate a model-invariance criterion for ontological commitment. I offer a realist position for modern physics that combines empirical openness with resistance to metaphysical overreach.
arXiv Detail & Related papers (2025-12-27T09:41:03Z)
- A Mathematical Explanation of Transformers for Large Language Models and GPTs [6.245431127481903]
We propose a novel continuous framework that interprets the Transformer as a discretization of a structured integro-differential equation. Within this formulation, the self-attention mechanism emerges naturally as a non-local integral operator. Our approach extends beyond previous theoretical analyses by embedding the entire Transformer operation in continuous domains.
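The "non-local integral operator" view of self-attention can be written schematically as follows; this is a hedged sketch drawn from the general continuous-attention literature, with notation (token field $u$, matrices $Q, K, V$) chosen by me and not necessarily matching the paper's:

```latex
% Schematic continuous-time self-attention: the token field u(x,t) evolves
% under a softmax-normalized non-local kernel (notation illustrative).
\[
  \partial_t u(x,t) \;=\;
  \int \frac{\exp\!\bigl(\langle Q\,u(x,t),\,K\,u(y,t)\rangle\bigr)}
            {\int \exp\!\bigl(\langle Q\,u(x,t),\,K\,u(z,t)\rangle\bigr)\,dz}\;
  V\,u(y,t)\,dy
\]
```

Discretizing $x$ and $y$ over token positions recovers the familiar softmax attention update.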
arXiv Detail & Related papers (2025-10-05T01:16:08Z)
- Beyond the Wavefunction: Qualia Abstraction Language Mechanics and the Grammar of Awareness [0.0]
We propose a formal reconstruction of quantum mechanics grounded in the structured dynamics of subjective experience. The Qualia Abstraction Language (QAL) models physical systems as evolving streams of introspective units. We analyze the alignment of QAL with endophysical approaches, contrast it with standard interpretations of quantum theory, and explore its implications for a post-Platonist, introspectively grounded physics.
arXiv Detail & Related papers (2025-08-03T15:07:24Z)
- Loss-Complexity Landscape and Model Structure Functions [56.01537787608726]
We develop a framework for dualizing the Kolmogorov structure function $h_x(\alpha)$. We establish a mathematical analogy between information-theoretic constructs and statistical mechanics. We explicitly prove the Legendre-Fenchel duality between the structure function and free energy.
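The duality claimed here has the standard Legendre-Fenchel shape; as a sketch (signs and the symbol $F_x$ are my illustrative choices, not necessarily the paper's conventions):

```latex
% Generic Legendre-Fenchel pair underlying a structure-function /
% free-energy duality (notation illustrative).
\[
  F_x(\beta) \;=\; \inf_{\alpha}\,\bigl( h_x(\alpha) + \beta\,\alpha \bigr),
  \qquad
  h_x(\alpha) \;=\; \sup_{\beta}\,\bigl( F_x(\beta) - \beta\,\alpha \bigr),
\]
```

with the second identity recovering $h_x$ when it is convex in $\alpha$, exactly as entropy and free energy determine each other in statistical mechanics.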
arXiv Detail & Related papers (2025-07-17T21:31:45Z)
- Why Neural Network Can Discover Symbolic Structures with Gradient-based Training: An Algebraic and Geometric Foundation for Neurosymbolic Reasoning [73.18052192964349]
We develop a theoretical framework that explains how discrete symbolic structures can emerge naturally from continuous neural network training dynamics. By lifting neural parameters to a measure space and modeling training as a Wasserstein gradient flow, we show that under geometric constraints, the parameter measure $\mu_t$ undergoes two concurrent phenomena.
arXiv Detail & Related papers (2025-06-26T22:40:30Z)
- A Free Probabilistic Framework for Analyzing the Transformer-based Language Models [19.78896931593813]
We present a formal operator-theoretic framework for analyzing Transformer-based language models using free probability theory. This work offers a principled, though theoretical, perspective on structural dynamics in large language models.
arXiv Detail & Related papers (2025-06-19T19:13:02Z)
- Alpay Algebra III: Observer-Coupled Collapse and the Temporal Drift of Identity [0.0]
The third installment formalizes the observer-coupled phi-collapse process through transfinite categorical flows and curvature-driven identity operators. The system surpasses conventional identity modeling in explainable AI (XAI) by encoding internal transformation history into a symbolic fixed-point structure. The results also offer a mathematically rigorous basis for future AI systems with stable self-referential behavior.
arXiv Detail & Related papers (2025-05-26T10:20:12Z)
- Meaning Representations from Trajectories in Autoregressive Models [106.63181745054571]
We propose to extract meaning representations from autoregressive language models by considering the distribution of all possible trajectories extending an input text.
This strategy is prompt-free, does not require fine-tuning, and is applicable to any pre-trained autoregressive model.
We empirically show that the representations obtained from large models align well with human annotations, outperform other zero-shot and prompt-free methods on semantic similarity tasks, and can be used to solve more complex entailment and containment tasks that standard embeddings cannot handle.
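The trajectory idea can be miniaturized: the paper works with large pretrained autoregressive models, but a hand-made three-token Markov chain (an assumption purely for demonstration) already shows how a prefix can be represented by the distribution over its continuations and compared to another prefix through that distribution.

```python
import numpy as np

# Hypothetical next-token transition probabilities for a 3-token language
# (illustrative stand-in for a pretrained autoregressive model).
T = np.array([[0.1, 0.6, 0.3],
              [0.5, 0.2, 0.3],
              [0.3, 0.3, 0.4]])

def continuation_dist(token, horizon=3):
    # Distribution over the state reached `horizon` steps after `token`:
    # a summary of all trajectories extending that prefix.
    p = np.zeros(3)
    p[token] = 1.0
    for _ in range(horizon):
        p = p @ T   # push the distribution one step forward
    return p

# Identical prefixes induce identical continuation distributions;
# distinct prefixes generally induce distinct ones.
d_same = np.abs(continuation_dist(0) - continuation_dist(0)).sum()
d_diff = np.abs(continuation_dist(0) - continuation_dist(1)).sum()
print(d_same == 0.0, d_diff > 0.0)   # → True True
```

Replacing the toy chain with samples from a real language model and the L1 distance with a divergence between sampled continuation sets gives the prompt-free, fine-tuning-free recipe the entry describes.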
arXiv Detail & Related papers (2023-10-23T04:35:58Z)
- Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences that reflect the semanticity and compositionality characteristic of symbolic systems, acquired through unsupervised learning rather than reliance on pre-defined primitives.
This approach establishes a unified framework that integrates both symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
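The basin-segmentation idea can be sketched in one dimension: under an assumed energy landscape E(x) = -cos(2πx) (my illustrative choice, not the paper's learned model), gradient descent maps every continuous starting point to the nearest integer minimum, i.e. to a discrete symbol.

```python
import numpy as np

# Toy 1-D attractor landscape: E(x) = -cos(2*pi*x) has minima at the
# integers, so each integer plays the role of a discrete "symbol" and the
# interval (n - 0.5, n + 0.5) is its basin of attraction.
def settle(x, steps=500, eta=0.05):
    for _ in range(steps):
        x -= eta * 2 * np.pi * np.sin(2 * np.pi * x)   # step along -dE/dx
    return int(round(x))                                # discrete symbol id

# Continuous inputs collapse onto the symbol of the basin they start in.
print([settle(x) for x in (-0.6, 0.2, 1.4, 2.9)])   # → [-1, 0, 1, 3]
```

The paper learns such basins rather than fixing them by hand, but the discretizing effect of attractor dynamics is the same mechanism.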
arXiv Detail & Related papers (2023-10-03T05:40:56Z)
- Flow Factorized Representation Learning [109.51947536586677]
We introduce a generative model which specifies a distinct set of latent probability paths that define different input transformations.
We show that our model achieves higher likelihoods on standard representation learning benchmarks while simultaneously being closer to approximately equivariant models.
arXiv Detail & Related papers (2023-09-22T20:15:37Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)