Representational Systems Theory: A Unified Approach to Encoding,
Analysing and Transforming Representations
- URL: http://arxiv.org/abs/2206.03172v1
- Date: Tue, 7 Jun 2022 10:43:27 GMT
- Title: Representational Systems Theory: A Unified Approach to Encoding,
Analysing and Transforming Representations
- Authors: Daniel Raggi, Gem Stapleton, Mateja Jamnik, Aaron Stockdill, Grecia
Garcia Garcia, Peter C-H. Cheng
- Abstract summary: Representational Systems Theory is designed to encode a wide variety of representations from three core perspectives.
The theory makes it possible to structurally transform representations in one system into representations in another.
- Score: 3.1252164619375473
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The study of representations is of fundamental importance to any form of
communication, and our ability to exploit them effectively is paramount. This
article presents a novel theory -- Representational Systems Theory -- that is
designed to abstractly encode a wide variety of representations from three core
perspectives: syntax, entailment, and their properties. By introducing the
concept of a construction space, we are able to encode each of these core
components under a single, unifying paradigm. Using our Representational
Systems Theory, it becomes possible to structurally transform representations
in one system into representations in another. An intrinsic facet of our
structural transformation technique is representation selection based on
properties that representations possess, such as their relative cognitive
effectiveness or structural complexity. A major theoretical barrier to
providing general structural transformation techniques is a lack of terminating
algorithms. Representational Systems Theory permits the derivation of partial
transformations when no terminating algorithm can produce a full
transformation. Since Representational Systems Theory provides a universal
approach to encoding representational systems, a further key barrier is
eliminated: the need to devise system-specific structural transformation
algorithms that are necessary when different systems adopt different
formalisation approaches. Consequently, Representational Systems Theory is the
first general framework that provides a unified approach to encoding
representations, supports representation selection via structural
transformations, and has the potential for widespread practical application.
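To make the abstract's moving parts concrete, here is a minimal Python sketch of three of its ideas: representations that carry properties, transformations that return a partial result when no terminating procedure completes, and selection driven by a property score. Every name and mechanism below is invented for illustration; the paper's actual machinery is built from construction spaces, not this code.

```python
# Illustrative sketch only -- not the paper's formalism. A representation
# carries its system, a stand-in for its syntactic structure, and a
# property map (e.g. cognitive cost) used for representation selection.
from dataclasses import dataclass, field

@dataclass
class Representation:
    system: str
    structure: tuple
    properties: dict = field(default_factory=dict)

def transform(rep: Representation, target: str, max_steps: int = 100):
    """Attempt a structural transformation into the target system.

    Returns (result, complete). If the step budget runs out before the
    whole structure is transferred, a partial transformation is returned
    rather than failing -- mirroring the paper's partial transformations.
    """
    transferred, pending = [], list(rep.structure)
    for _ in range(max_steps):
        if not pending:
            break
        transferred.append(pending.pop(0))  # placeholder for one transfer step
    return Representation(target, tuple(transferred)), not pending

def select(candidates):
    """Pick the candidate with the lowest cognitive cost, as one example
    of property-based representation selection."""
    return min(candidates,
               key=lambda r: r.properties.get("cognitive_cost", float("inf")))
```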
Related papers
- Communication-Inspired Tokenization for Structured Image Representations [74.17163003465537]
COMmunication inspired Tokenization (COMiT) is a framework for learning structured discrete visual token sequences.
Our experiments demonstrate that while semantic alignment provides grounding, attentive sequential tokenization is critical for inducing interpretable, object-centric token structure.
arXiv Detail & Related papers (2026-02-24T09:53:50Z)
- Deep Unfolding: Recent Developments, Theory, and Design Guidelines [99.63555420898554]
This article provides a tutorial-style overview of deep unfolding, a framework that transforms optimization algorithms into structured, trainable ML architectures.
We review the foundations of optimization for inference and for learning, introduce four representative design paradigms for deep unfolding, and discuss the distinctive training schemes that arise from their iterative nature.
arXiv Detail & Related papers (2025-12-03T13:16:35Z)
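Deep unfolding is easiest to see on a concrete algorithm. The sketch below unrolls K iterations of ISTA for sparse coding into a network whose per-iteration step sizes and thresholds are learnable, in the spirit of LISTA; this is a textbook example chosen for illustration, not a design taken from the survey above.

```python
# Minimal deep-unfolding sketch: K iterations of ISTA for
# min_x 0.5*||y - A x||^2 + lam*||x||_1 become K layers whose step sizes
# and soft-thresholds are trained end to end (LISTA-style illustration).
import torch
import torch.nn as nn

class UnfoldedISTA(nn.Module):
    def __init__(self, A: torch.Tensor, num_layers: int = 10):
        super().__init__()
        self.A = A                                   # fixed measurement matrix
        self.steps = nn.Parameter(torch.full((num_layers,), 0.1))
        self.thresholds = nn.Parameter(torch.full((num_layers,), 0.05))

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        x = torch.zeros(self.A.shape[1])             # y is a single vector (m,)
        for step, thr in zip(self.steps, self.thresholds):
            grad = self.A.T @ (self.A @ x - y)       # gradient of the data term
            x = x - step * grad                      # unfolded gradient step
            x = torch.sign(x) * torch.relu(x.abs() - thr)  # learnable shrinkage
        return x
```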
- Unlocking Out-of-Distribution Generalization in Transformers via Recursive Latent Space Reasoning [50.99796659680724]
This work investigates out-of-distribution (OOD) generalization in Transformer networks, using GSM8K-style modular arithmetic on computational graphs as a testbed.
We introduce and explore a set of four architectural mechanisms aimed at enhancing OOD generalization.
We complement these empirical results with a detailed mechanistic interpretability analysis that reveals how these mechanisms give rise to robust OOD generalization abilities.
arXiv Detail & Related papers (2025-10-15T21:03:59Z)
- A Mathematical Explanation of Transformers for Large Language Models and GPTs [6.245431127481903]
We propose a novel continuous framework that interprets the Transformer as a discretization of a structured integro-differential equation.
Within this formulation, the self-attention mechanism emerges naturally as a non-local integral operator.
Our approach extends beyond previous theoretical analyses by embedding the entire Transformer operation in continuous domains.
arXiv Detail & Related papers (2025-10-05T01:16:08Z)
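A standard way to write self-attention as a non-local integral operator, which may or may not match this paper's exact formulation, is the following continuum analogue; discretizing the domain Ω into n token positions recovers the usual softmax attention matrix.

```latex
% Continuum analogue of softmax self-attention (illustrative formulation):
% q, k, v are query, key and value fields over a domain \Omega.
\[
  \mathrm{Attn}[u](x)
    = \int_{\Omega}
        \frac{\exp\!\left(\langle q(x),\, k(y) \rangle\right)}
             {\int_{\Omega} \exp\!\left(\langle q(x),\, k(y') \rangle\right) dy'}
        \, v(y)\, dy .
\]
```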
- State Algebra for Propositional Logic [0.0]
State Algebra is a framework to represent and manipulate propositional logic.
It is structured as a hierarchy of three representations: Set, Coordinate, and Row Decomposition.
We show that although the default reduction of a state vector is not canonical, a unique canonical form can be obtained by applying a fixed variable order.
arXiv Detail & Related papers (2025-09-12T15:05:52Z)
- Structure Transfer: an Inference-Based Calculus for the Transformation of Representations [20.889971883203113]
A major unsolved problem is how to devise representational-system techniques that drive representation transformation and choice.
We present a novel calculus, called structure transfer, that enables representation transformation across diverse RSs.
arXiv Detail & Related papers (2025-09-03T12:07:23Z)
- A Free Probabilistic Framework for Analyzing the Transformer-based Language Models [19.78896931593813]
We present a formal operator-theoretic framework for analyzing Transformer-based language models using free probability theory.
This work offers a principled, though theoretical, perspective on structural dynamics in large language models.
arXiv Detail & Related papers (2025-06-19T19:13:02Z)
- Sparsification and Reconstruction from the Perspective of Representation Geometry [10.834177456685538]
Sparse Autoencoders (SAEs) have emerged as a predominant tool in mechanistic interpretability.
This study explains the principles of sparsity from the perspective of representational geometry, emphasizing the necessity of understanding representations and incorporating representational constraints.
arXiv Detail & Related papers (2025-05-28T15:54:33Z)
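For readers unfamiliar with the tool this entry builds on: a sparse autoencoder in mechanistic interpretability is typically a single overcomplete ReLU layer trained to reconstruct model activations under an L1 penalty. The sketch below is that standard formulation with illustrative dimensions; it is background, not this paper's contribution.

```python
# Standard sparse autoencoder (SAE) used in mechanistic interpretability:
# reconstruct activations through an overcomplete ReLU code with an L1
# penalty that encourages sparse, hopefully interpretable, features.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 512, d_features: int = 4096):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, activations: torch.Tensor):
        codes = torch.relu(self.encoder(activations))   # sparse feature code
        return self.decoder(codes), codes

def sae_loss(recon, codes, activations, l1_coeff: float = 1e-3):
    # reconstruction error plus sparsity penalty on the feature codes
    return ((recon - activations) ** 2).mean() + l1_coeff * codes.abs().mean()
```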
- The Coverage Principle: A Framework for Understanding Compositional Generalization [31.762330857169914]
We show that models relying primarily on pattern matching for compositional tasks cannot reliably generalize beyond substituting fragments that yield identical results when used in the same contexts.
We demonstrate that this framework has strong predictive power for the generalization capabilities of Transformers.
arXiv Detail & Related papers (2025-05-26T17:55:15Z)
- Directional Non-Commutative Monoidal Structures for Compositional Embeddings in Machine Learning [0.0]
We introduce a new structure for compositional embeddings built on directional non-commutative monoidal operators.
Our construction defines a distinct composition operator ∘_i for each axis i, ensuring associative combination along each axis without imposing global commutativity.
All axis-specific operators commute with one another, enforcing a global interchange law that enables consistent cross-axis compositions.
arXiv Detail & Related papers (2025-05-21T13:27:14Z)
- Continuous Representation Methods, Theories, and Applications: An Overview and Perspectives [55.22101595974193]
Recently, continuous representation methods have emerged as novel paradigms that characterize the intrinsic structures of real-world data.
This review focuses on three aspects: (i) continuous representation method designs, such as basis function representation, statistical modeling, tensor function decomposition, and implicit neural representation; (ii) theoretical foundations of continuous representations, such as approximation error analysis, convergence properties, and implicit regularization; and (iii) real-world applications of continuous representations in computer vision, graphics, bioinformatics, and remote sensing.
arXiv Detail & Related papers (2025-05-21T07:50:19Z)
- Emergence of Abstractions: Concept Encoding and Decoding Mechanism for In-Context Learning in Transformers [18.077009146950473]
Autoregressive transformers exhibit adaptive learning through in-context learning (ICL).
We propose a concept encoding-decoding mechanism to explain ICL by studying how transformers form and use internal abstractions in their representations.
Our empirical insights shed light on the success and failure modes of large language models via their representations.
arXiv Detail & Related papers (2024-12-16T19:00:18Z)
- A Canonicalization Perspective on Invariant and Equivariant Learning [54.44572887716977]
We introduce a canonicalization perspective that provides an essential and complete view of the design of frames.
We show that there exists an inherent connection between frames and canonical forms.
We design novel frames for eigenvectors that are strictly superior to existing methods.
arXiv Detail & Related papers (2024-05-28T17:22:15Z)
- Learning Visual-Semantic Subspace Representations for Propositional Reasoning [49.17165360280794]
We propose a novel approach for learning visual representations that conform to a specified semantic structure.
Our approach is based on a new nuclear norm-based loss.
We show that its minimum encodes the spectral geometry of the semantics in a subspace lattice.
arXiv Detail & Related papers (2024-05-25T12:51:38Z)
- Task Agnostic Architecture for Algorithm Induction via Implicit Composition [10.627575117586417]
Recent Generative AI, especially Transformer-based models, demonstrates potential as an architecture capable of constructing algorithms for a wide range of domains.
This position paper explores the development of such a unified architecture and proposes a theoretical framework for how it could be constructed.
Our exploration delves into the current capabilities and limitations of Transformer-based and other methods for efficient and correct algorithm composition.
arXiv Detail & Related papers (2024-04-03T04:31:09Z)
- A Neural Rewriting System to Solve Algorithmic Problems [47.129504708849446]
We propose a modular architecture designed to learn a general procedure for solving nested mathematical formulas.
Inspired by rewriting systems, a classic framework in symbolic artificial intelligence, we include in the architecture three specialized and interacting modules.
We benchmark our system against the Neural Data Router, a recent model specialized for systematic generalization, and a state-of-the-art large language model (GPT-4) probed with advanced prompting strategies.
arXiv Detail & Related papers (2024-02-27T10:57:07Z)
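The classical idea this architecture takes inspiration from is easy to state in code: repeatedly find an innermost reducible subexpression, replace it with its value, and iterate. The symbolic sketch below illustrates that loop on nested arithmetic; the paper's modules are neural, so this is background rather than their method.

```python
# Classic rewriting loop on nested arithmetic formulas: evaluate an
# innermost parenthesised subexpression, substitute the result back, and
# repeat until no parentheses remain. eval() is fine for this sketch.
import re

INNERMOST = re.compile(r"\(([^()]+)\)")   # a (...) group with no nested parens

def rewrite(formula: str) -> str:
    while (match := INNERMOST.search(formula)):
        value = str(eval(match.group(1)))             # reduce one subexpression
        formula = formula[:match.start()] + value + formula[match.end():]
    return str(eval(formula))                         # final flat expression

print(rewrite("((1 + 2) * (3 + 4)) - 5"))  # -> 16
```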
- Hierarchical Invariance for Robust and Interpretable Vision Tasks at Larger Scales [54.78115855552886]
We show how to construct over-complete invariants with a Convolutional Neural Network (CNN)-like hierarchical architecture.
With this over-completeness, discriminative features w.r.t. the task can be adaptively formed in a Neural Architecture Search (NAS)-like manner.
For robust and interpretable vision tasks at larger scales, hierarchical invariant representations can be considered an effective alternative to traditional CNNs and invariants.
arXiv Detail & Related papers (2024-02-23T16:50:07Z)
- An Encoding of Abstract Dialectical Frameworks into Higher-Order Logic [57.24311218570012]
This approach allows for the computer-assisted analysis of abstract dialectical frameworks.
Exemplary applications include the formal analysis and verification of meta-theoretical properties.
arXiv Detail & Related papers (2023-12-08T09:32:26Z)
- Flow Factorized Representation Learning [109.51947536586677]
We introduce a generative model which specifies a distinct set of latent probability paths that define different input transformations.
We show that our model achieves higher likelihoods on standard representation learning benchmarks while simultaneously being closer to approximately equivariant models.
arXiv Detail & Related papers (2023-09-22T20:15:37Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- Causal Abstraction: A Theoretical Foundation for Mechanistic Interpretability [30.76910454663951]
Causal abstraction provides a theoretical foundation for mechanistic interpretability.
Our contributions include generalizing the theory of causal abstraction from mechanism replacement to arbitrary mechanism transformation.
arXiv Detail & Related papers (2023-01-11T20:42:41Z)
- Expressiveness and machine processability of Knowledge Organization Systems (KOS): An analysis of concepts and relations [0.0]
The expressiveness and machine processability of each Knowledge Organization System are largely governed by its structural rules.
Ontologies explicitly define diverse types of relations, and are by their nature machine-processable.
arXiv Detail & Related papers (2020-03-11T12:35:52Z)