Hey Pentti, We Did (More of) It!: A Vector-Symbolic Lisp With Residue Arithmetic
- URL: http://arxiv.org/abs/2511.08767v1
- Date: Thu, 13 Nov 2025 01:06:35 GMT
- Title: Hey Pentti, We Did (More of) It!: A Vector-Symbolic Lisp With Residue Arithmetic
- Authors: Connor Hanley, Eilene Tomkins-Flanagan, Mary Alexandria Kelly
- Abstract summary: We extend a Vector-Symbolic Architecture (VSA) encoding of Lisp 1.5 with primitives for arithmetic operations using Residue Hyperdimensional Computing (RHC). We discuss the potential applications of the VSA encoding in machine learning tasks, as well as the importance of encoding structured representations and designing neural networks whose behavior is sensitive to the structure of their representations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Using Frequency-domain Holographic Reduced Representations (FHRRs), we extend a Vector-Symbolic Architecture (VSA) encoding of Lisp 1.5 with primitives for arithmetic operations using Residue Hyperdimensional Computing (RHC). Encoding a Turing-complete syntax over a high-dimensional vector space increases the expressivity of neural network states, enabling network states to contain arbitrarily structured representations that are inherently interpretable. We discuss potential applications of the VSA encoding in machine learning tasks, as well as the importance of encoding structured representations and designing neural networks whose behavior is sensitive to the structure of their representations, as a route toward more general intelligent agents than exist at present.
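The two ingredients the abstract combines are easy to prototype. Below is a minimal numpy sketch, under assumptions of our own (random phasor codebooks, illustrative function names, small coprime moduli), of FHRR role-filler binding for a Lisp-style cons cell and of RHC residue encoding, in which adding integers corresponds to binding their codes. This is a sketch of the general technique, not the paper's actual implementation.

```python
# Minimal FHRR + RHC sketch; all names and parameters here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
D = 4096  # vector dimensionality

def random_phasor(d=D):
    """FHRR symbol: unit-magnitude complex phasors with random phases."""
    return np.exp(1j * rng.uniform(-np.pi, np.pi, d))

def bind(a, b):      # FHRR binding: elementwise complex multiplication
    return a * b

def unbind(a, b):    # approximate inverse: multiply by the conjugate
    return a * np.conj(b)

def sim(a, b):       # normalized similarity of two phasor vectors
    return np.real(np.vdot(a, b)) / len(a)

# --- Lisp-style cons cell as role-filler bindings ---
CAR, CDR = random_phasor(), random_phasor()   # role vectors
x, y = random_phasor(), random_phasor()       # fillers

cons_xy = bind(CAR, x) + bind(CDR, y)         # bundle the two bindings
car_hat = unbind(cons_xy, CAR)                # noisy estimate of x
print("car ~ x:", sim(car_hat, x) > 0.4)      # True; cross-terms act as noise

# --- Residue Hyperdimensional Computing: integers mod coprime moduli ---
def residue_base(m, d=D):
    """Base phasor whose phases are multiples of 2*pi/m."""
    return np.exp(1j * (2 * np.pi / m) * rng.integers(0, m, d))

moduli = [3, 5, 7]                            # coprime; joint range 3*5*7 = 105
bases = [residue_base(m) for m in moduli]

def encode_int(n):
    # Exponentiate each base elementwise; adding two numbers then
    # corresponds to binding their codes.
    return [b ** n for b in bases]

a, b = encode_int(9), encode_int(23)
s = [bind(ai, bi) for ai, bi in zip(a, b)]    # code for 9 + 23 = 32
print("sum ~ 32:", all(sim(si, ti) > 0.9 for si, ti in zip(s, encode_int(32))))
```

Because each residue base uses phases that are multiples of 2*pi/m, its exponentiated code is periodic in that modulus, and the coprime moduli jointly give a Chinese-remainder-style range (105 in this toy setting).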
Related papers
- The Origins of Representation Manifolds in Large Language Models [52.68554895844062]
We show that cosine similarity in representation space may encode the intrinsic geometry of a feature through shortest, on-manifold paths. The critical assumptions and predictions of the theory are validated on text embeddings and token activations of large language models.
arXiv Detail & Related papers (2025-05-23T13:31:22Z)
- Compositional Generalization Across Distributional Shifts with Sparse Tree Operations [77.5742801509364]
We introduce a unified neurosymbolic architecture called the Differentiable Tree Machine. We significantly increase the model's efficiency through the use of sparse vector representations of symbolic structures, and we enable its application beyond the restricted set of tree2tree problems to the more general class of seq2seq problems.
arXiv Detail & Related papers (2024-12-18T17:20:19Z)
- Training Neural Networks as Recognizers of Formal Languages [87.06906286950438]
We train and evaluate neural networks directly as binary classifiers of strings. We provide results on a variety of languages across the Chomsky hierarchy for three neural architectures. Our contributions will facilitate theoretically sound empirical testing of language recognition claims in future work.
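As a concrete illustration of this setup, here is a hedged PyTorch sketch that trains a small LSTM directly as a binary recognizer of a regular language (bitstrings with an even number of ones); the architecture, task, and hyperparameters are our own choices, not the paper's.

```python
# Train a network as a binary recognizer of strings; parity is a stand-in task.
import torch
import torch.nn as nn

torch.manual_seed(0)

class Recognizer(nn.Module):
    def __init__(self, vocab=2, hidden=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)        # logit for "string in language"

    def forward(self, x):                      # x: (batch, seq_len) token ids
        h, _ = self.rnn(self.emb(x))
        return self.out(h[:, -1]).squeeze(-1)  # read out the final state

def batch(n=64, length=12):
    x = torch.randint(0, 2, (n, length))
    y = (x.sum(dim=1) % 2 == 0).float()        # label: even number of ones
    return x, y

model = Recognizer()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):                       # may need more steps to converge
    x, y = batch()
    loss = loss_fn(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

x, y = batch(512)
acc = ((model(x) > 0) == y.bool()).float().mean().item()
print(f"held-out accuracy: {acc:.2f}")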
arXiv Detail & Related papers (2024-11-11T16:33:25Z)
- Hyperdimensional Computing with Spiking-Phasor Neurons [0.9594432031144714]
Vector Symbolic Architectures (VSAs) are a powerful framework for representing compositional reasoning.
We run VSA algorithms on a substrate of spiking neurons that could be run efficiently on neuromorphic hardware.
arXiv Detail & Related papers (2023-02-28T20:09:12Z)
- Join-Chain Network: A Logical Reasoning View of the Multi-head Attention in Transformer [59.73454783958702]
We propose a symbolic reasoning architecture that chains many join operators together to model output logical expressions.
In particular, we demonstrate that such an ensemble of join-chains can express a broad subset of "tree-structured" first-order logical expressions, named FOET.
We find that the widely used multi-head self-attention module in transformer can be understood as a special neural operator that implements the union bound of the join operator in probabilistic predicate space.
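For reference, the numpy sketch below implements the standard operator being reinterpreted: plain multi-head scaled dot-product self-attention, with each softmax row acting as the soft "join" over positions and the sum over heads playing the role of the union. All shapes and names are illustrative.

```python
# Plain multi-head self-attention in numpy; sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """One head: each position soft-selects ('joins with') other positions."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise match strengths
    A = softmax(scores, axis=-1)              # soft predicate: rows sum to 1
    return A @ V                              # aggregate the matched fillers

T, d = 5, 8                                   # sequence length, model width
X = rng.normal(size=(T, d))
heads = [tuple(rng.normal(size=(d, d)) for _ in range(3)) for _ in range(4)]
Y = sum(self_attention(X, *h) for h in heads) # summing heads ~ union over joins
print(Y.shape)                                # (5, 8)
```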
arXiv Detail & Related papers (2022-10-06T07:39:58Z)
- Residual and Attentional Architectures for Vector-Symbols [0.0]
Vector-symbolic architectures (VSAs) provide methods for computing which are highly flexible and carry unique advantages.
In this work, we combine the efficiency of the operations provided within the framework of the Fourier Holographic Reduced Representation (FHRR) VSA with the power of deep networks to construct novel VSA-based residual and attention-based neural network architectures.
This demonstrates a novel application of VSAs and a potential path to implementing state-of-the-art neural models on neuromorphic hardware.
arXiv Detail & Related papers (2022-07-18T21:38:43Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
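The core trick that makes such discrete codes learnable end-to-end is vector quantization with a straight-through gradient. The PyTorch sketch below shows that mechanism in isolation; the codebook size and loss terms are illustrative, not the paper's configuration.

```python
# Vector quantization with a straight-through gradient estimator.
import torch

torch.manual_seed(0)
K, d = 16, 4                                  # codebook entries, code width
codebook = torch.nn.Parameter(torch.randn(K, d))
z = torch.randn(8, d, requires_grad=True)     # continuous features to quantize

dists = torch.cdist(z, codebook)              # (8, K) pairwise distances
idx = dists.argmin(dim=1)                     # nearest codebook entry per row
q = codebook[idx]                             # quantized vectors

# Straight-through: forward pass uses q, backward copies gradients to z.
q_st = z + (q - z).detach()

# Stand-in task loss plus a codebook term pulling entries toward the features.
loss = q_st.pow(2).mean() + (q - z.detach()).pow(2).mean()
loss.backward()
print(z.grad.shape, codebook.grad.shape)      # both populated despite argmin
```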
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- A Neuro-vector-symbolic Architecture for Solving Raven's Progressive Matrices [15.686742809374024]
A neuro-vector-symbolic architecture (NVSA) is proposed to combine the best of deep neural networks and symbolic logical reasoning.
We show that NVSA achieves a new record of 97.7% average accuracy on the RAVEN dataset and 98.8% on I-RAVEN, with two orders of magnitude faster execution than symbolic logical reasoning on CPUs.
arXiv Detail & Related papers (2022-03-09T08:29:21Z)
- HyperSeed: Unsupervised Learning with Vector Symbolic Architectures [5.258404928739212]
This paper presents a novel unsupervised machine learning approach named Hyperseed.
It leverages Vector Symbolic Architectures (VSAs) for fast learning of a topology-preserving feature map of unlabelled data.
The two distinctive novelties of the Hyperseed algorithm are 1) learning from only a few input data samples and 2) a learning rule based on a single vector operation.
arXiv Detail & Related papers (2021-10-15T20:05:43Z)
- High-performance symbolic-numerics via multiple dispatch [52.77024349608834]
Symbolics.jl is an extendable symbolic system which uses dynamic multiple dispatch to change behavior depending on the domain needs.
We show that by formalizing a generic API on actions independent of implementation, we can retroactively add optimized data structures to our system.
We demonstrate the ability to swap between classical term-rewriting simplifiers and e-graph-based term-rewriting simplifiers.
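As a rough stand-in for the idea, the Python sketch below exposes one generic simplify() API whose behavior depends on the term representation, so an optimized representation can be registered retroactively. Note the caveats: Python's stdlib offers only single dispatch (on the first argument), unlike Julia's multiple dispatch, and the Term/EGraphTerm types are hypothetical, not Symbolics.jl's API.

```python
# Generic API dispatching on term representation; types are hypothetical.
from dataclasses import dataclass
from functools import singledispatch

@dataclass
class Term:               # classical expression-tree representation
    op: str
    args: tuple

@dataclass
class EGraphTerm:         # placeholder for an e-graph-backed representation
    nodes: dict

@singledispatch
def simplify(t):
    raise NotImplementedError(type(t))

@simplify.register
def _(t: Term):
    # classical term rewriting: x + 0 -> x
    if t.op == "+" and 0 in t.args:
        rest = tuple(a for a in t.args if a != 0)
        return rest[0] if len(rest) == 1 else Term("+", rest)
    return t

@simplify.register
def _(t: EGraphTerm):
    # an e-graph-based simplifier would saturate rewrite rules here
    return t

print(simplify(Term("+", ("x", 0))))  # -> 'x'
```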
arXiv Detail & Related papers (2021-05-09T14:22:43Z)
- A Holistically-Guided Decoder for Deep Representation Learning with Applications to Semantic Segmentation and Object Detection [74.88284082187462]
One common strategy is to adopt dilated convolutions in the backbone networks to extract high-resolution feature maps.
We propose a novel holistically-guided decoder to obtain high-resolution, semantic-rich feature maps.
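A quick shape check of the first strategy, with arbitrary channel counts of our own choosing: a dilated 3x3 convolution grows the receptive field without shrinking the feature map.

```python
# Dilation enlarges the receptive field without downsampling.
import torch
import torch.nn as nn

x = torch.randn(1, 64, 32, 32)

plain   = nn.Conv2d(64, 64, kernel_size=3, padding=1)              # 3x3 field
dilated = nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2)  # 5x5 field

print(plain(x).shape, dilated(x).shape)  # both (1, 64, 32, 32): no resolution loss
```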
arXiv Detail & Related papers (2020-12-18T10:51:49Z)
- Resonator networks for factoring distributed representations of data structures [3.46969645559477]
We show how data structures are encoded by combining high-dimensional vectors with operations that together form an algebra on the space of distributed representations.
Our proposed algorithm, called a resonator network, is a new type of recurrent neural network that interleaves VSA multiplication operations and pattern completion.
Resonator networks open the possibility of applying VSAs to myriad artificial intelligence problems in real-world domains.
arXiv Detail & Related papers (2020-07-07T19:24:27Z)
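The factoring loop at the heart of a resonator network fits in a few lines. Below is a hedged numpy toy with two factors over bipolar vectors, where binding is elementwise multiplication (its own inverse) and cleanup is projection onto a codebook; the codebook sizes and fixed iteration count are illustrative choices, not the paper's.

```python
# Toy two-factor resonator network over bipolar vectors.
import numpy as np

rng = np.random.default_rng(0)
D, M = 2048, 20                          # vector dimension, codebook size

sgn = lambda v: np.where(v >= 0, 1, -1)  # sign function without zeros

A = rng.choice([-1, 1], size=(M, D))     # codebook of candidate 'a' factors
B = rng.choice([-1, 1], size=(M, D))     # codebook of candidate 'b' factors
ia, ib = 3, 11
s = A[ia] * B[ib]                        # composite to factor: s = a (*) b

b_hat = sgn(B.sum(axis=0))               # start from a superposition of candidates
for _ in range(20):
    # interleave unbinding (multiplication) with cleanup (codebook projection)
    a_hat = sgn(A.T @ (A @ (s * b_hat)))
    b_hat = sgn(B.T @ (B @ (s * a_hat)))

print("factors found:", (A @ a_hat).argmax() == ia, (B @ b_hat).argmax() == ib)
```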
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information shown and is not responsible for any consequences arising from its use.