Interpretation as Linear Transformation: A Cognitive-Geometric Model of Belief and Meaning
- URL: http://arxiv.org/abs/2512.09831v1
- Date: Wed, 10 Dec 2025 17:13:01 GMT
- Title: Interpretation as Linear Transformation: A Cognitive-Geometric Model of Belief and Meaning
- Authors: Chainarong Amornbunchornvej
- Abstract summary: I show how belief distortion, motivational drift, counterfactual evaluation, and the limits of mutual understanding arise from purely algebraic constraints. I argue that this cognitive-geometric perspective clarifies the boundaries of influence in both human and artificial systems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper develops a geometric framework for modeling belief, motivation, and influence across cognitively heterogeneous agents. Each agent is represented by a personalized value space, a vector space encoding the internal dimensions through which the agent interprets and evaluates meaning. Beliefs are formalized as structured vectors (abstract beings) whose transmission is mediated by linear interpretation maps. A belief survives communication only if it avoids the null spaces of these maps, yielding a structural criterion for intelligibility, miscommunication, and belief death. Within this framework, I show how belief distortion, motivational drift, counterfactual evaluation, and the limits of mutual understanding arise from purely algebraic constraints. A central result, the "No-Null-Space Leadership Condition," characterizes leadership as a property of representational reachability rather than persuasion or authority. More broadly, the model explains how abstract beings can propagate, mutate, or disappear as they traverse diverse cognitive geometries. The account unifies insights from conceptual spaces, social epistemology, and AI value alignment by grounding meaning preservation in structural compatibility rather than shared information or rationality. I argue that this cognitive-geometric perspective clarifies the epistemic boundaries of influence in both human and artificial systems, and offers a general foundation for analyzing belief dynamics across heterogeneous agents.
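To make the survival criterion concrete, here is a minimal numerical sketch (an illustration of the abstract's criterion, not code from the paper; the interpretation map, dimensions, and names are assumptions): a belief vector b, expressed in the sender's value space, survives transmission through a linear interpretation map A only if Ab != 0, i.e., b lies outside the null space of A.

```python
# Minimal sketch of the null-space survival criterion described in the
# abstract. The interpretation map A, the dimensions, and the example
# vectors are illustrative assumptions, not taken from the paper.
import numpy as np

def survives(A: np.ndarray, b: np.ndarray, tol: float = 1e-10) -> bool:
    """A belief b survives transmission iff it avoids null(A), i.e. A @ b != 0."""
    return bool(np.linalg.norm(A @ b) > tol)

# The receiver sees the sender's 3-dimensional value space through a
# rank-2 map: the sender's third dimension is invisible to the receiver.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

b_alive = np.array([0.5, 0.5, 0.0])  # has support outside null(A)
b_dead = np.array([0.0, 0.0, 1.0])   # lies entirely in null(A): "belief death"

print(survives(A, b_alive))  # True  -> intelligible to the receiver
print(survives(A, b_dead))   # False -> the belief cannot be transmitted
```

On this reading, the "No-Null-Space Leadership Condition" would ask whether a sender can express a belief that avoids the null spaces of every receiver's interpretation map, making leadership a matter of representational reachability rather than persuasion.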
Related papers
- Epistemology of Generative AI: The Geometry of Knowing [0.7252027234425333]
Generative AI presents an unprecedented challenge to our understanding of knowledge and its production. This paper argues that the missing account must begin with a paradigmatic break that has not yet received adequate philosophical attention.
arXiv Detail & Related papers (2026-02-19T06:34:34Z)
- Visual Language Hypothesis [14.062822951292402]
We study visual representation learning from a structural and topological perspective. We show that approximating the quotient also places structural demands on the model architecture.
arXiv Detail & Related papers (2025-12-29T09:43:10Z)
- A Geometric Unification of Concept Learning with Concept Cones [58.70836885177496]
Two traditions of interpretability have evolved side by side but seldom spoken to each other: Concept Bottleneck Models (CBMs) and Sparse Autoencoders (SAEs). We show that both paradigms instantiate the same geometric structure. CBMs provide human-defined reference geometries, while SAEs can be evaluated by how well their learned cones approximate or contain those of CBMs.
arXiv Detail & Related papers (2025-12-08T09:51:46Z)
- Discovering Semantic Subdimensions through Disentangled Conceptual Representations [38.66662397064128]
This paper proposes a novel framework to investigate the subdimensions underlying coarse-grained semantic dimensions. We introduce a Disentangled Continuous Semantic Representation Model (DCSRM) that decomposes word embeddings from large language models into multiple sub-embeddings. Using these sub-embeddings, we identify a set of interpretable semantic subdimensions. Our work offers more fine-grained interpretable semantic subdimensions of conceptual meaning.
arXiv Detail & Related papers (2025-08-29T09:04:34Z)
- Toward a Graph-Theoretic Model of Belief: Confidence, Credibility, and Structural Coherence [0.0]
This paper introduces a minimal formalism for belief systems as directed, weighted graphs. Unlike logical and argumentation-based frameworks, it supports fine-grained structural representation without committing to binary justification status or deductive closure. Its aim is to provide a foundational substrate for analyzing the internal organization of belief systems.
arXiv Detail & Related papers (2025-08-05T14:03:23Z)
- On the Fundamental Impossibility of Hallucination Control in Large Language Models [0.0]
Impossibility Theorem: no LLM performing non-trivial knowledge aggregation can simultaneously achieve truthful knowledge representation, semantic information conservation, and revelation of relevant knowledge. We prove this by modeling inference as an auction of ideas, where distributed components compete to influence responses using encoded knowledge. We show that hallucination and imagination are mathematically identical, and both violate at least one of these essential properties.
arXiv Detail & Related papers (2025-06-04T23:28:39Z)
- The Origins of Representation Manifolds in Large Language Models [52.68554895844062]
We show that cosine similarity in representation space may encode the intrinsic geometry of a feature through shortest, on-manifold paths. The critical assumptions and predictions of the theory are validated on text embeddings and token activations of large language models.
arXiv Detail & Related papers (2025-05-23T13:31:22Z)
- Plasticity as the Mirror of Empowerment [78.05666168923442]
We ground this concept in a universal agent-centric measure that we refer to as plasticity. Under this definition, we find that plasticity is well thought of as the mirror of empowerment. Our main result establishes a tension between the plasticity and empowerment of an agent, suggesting that agent design needs to be mindful of both characteristics.
arXiv Detail & Related papers (2025-05-15T14:52:16Z)
- Theoretical Foundations for Semantic Cognition in Artificial Intelligence [0.0]
This monograph presents a modular cognitive architecture for artificial intelligence grounded in the formal modeling of belief as structured semantic state. Belief states are defined as dynamic ensembles of linguistic expressions embedded within a navigable manifold, where operators enable assimilation, abstraction, nullification, memory, and introspection.
arXiv Detail & Related papers (2025-04-29T23:10:07Z)
- Learning Visual-Semantic Subspace Representations [49.17165360280794]
We introduce a nuclear norm-based loss function, grounded in the same information-theoretic principles that have proved effective in self-supervised learning. We present a theoretical characterization of this loss, demonstrating that, in addition to promoting class separability, it encodes the spectral geometry of the data within a subspace lattice.
arXiv Detail & Related papers (2024-05-25T12:51:38Z)
- Skews in the Phenomenon Space Hinder Generalization in Text-to-Image Generation [59.138470433237615]
We introduce statistical metrics that quantify both the linguistic and visual skew of a dataset for relational learning.
We show that systematically controlled metrics are strongly predictive of generalization performance.
This work informs an important direction: improving the diversity and balance of the data rather than simply scaling up its absolute size.
arXiv Detail & Related papers (2024-03-25T03:18:39Z)
- A Geometric Notion of Causal Probing [85.49839090913515]
The linear subspace hypothesis states that, in a language model's representation space, all information about a concept such as verbal number is encoded in a linear subspace. We give a set of intrinsic criteria which characterize an ideal linear concept subspace. We find that, for at least one concept across two language models, the concept subspace can be used to manipulate the concept value of the generated word with precision.
arXiv Detail & Related papers (2023-07-27T17:57:57Z)
- Kernelized Concept Erasure [108.65038124096907]
We propose a kernelization of a linear minimax game for concept erasure.
It is possible to prevent specific non-linear adversaries from predicting the concept.
However, the protection does not transfer to different nonlinear adversaries.
arXiv Detail & Related papers (2022-01-28T15:45:13Z)