A novel HD Computing Algebra: Non-associative superposition of states
creating sparse bundles representing order information
- URL: http://arxiv.org/abs/2202.08633v1
- Date: Thu, 17 Feb 2022 12:40:32 GMT
- Title: A novel HD Computing Algebra: Non-associative superposition of states
creating sparse bundles representing order information
- Authors: Stefan Reimann
- Abstract summary: Cognitive computing requires representing item information as well as sequential information.
A simple binary bundling rule inspired by the summation of neuronal activities allows the resulting memory state to represent both item and sequential information.
The memory state resulting from bundling together an arbitrary number of items is non-homogeneous and has a degree of sparseness controlled by the activation threshold in the summation.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Information flows into a computational system as a sequence of
information items. Cognitive computing, i.e. performing transformations along
that sequence, requires representing item information as well as sequential
information. Among the most elementary operations is bundling, i.e. adding
items, which leads to 'memory states', i.e. bundles, from which information can
be retrieved. If the bundling operation used is associative, e.g. ordinary
vector addition, sequential information cannot be represented without imposing
additional algebraic structure. A simple stochastic binary bundling rule
inspired by the stochastic summation of neuronal activities allows the
resulting memory state to represent both item and sequential information,
precisely because it is non-associative. The memory state resulting from
bundling together an arbitrary number of items is non-homogeneous and has a
degree of sparseness controlled by the activation threshold in the summation.
The proposed bundling operation makes it possible to build a filter in the
temporal domain as well as in the items' domain, which can be used to navigate
the continuous inflow of information.
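As an illustrative sketch only (the paper's exact rule is not reproduced here, and all names and parameter choices such as `D` and `P_KEEP` are assumptions), the idea of a stochastic thresholded superposition can be mimicked with binary hypervectors: components active in both operands always survive, components active in only one survive with some probability. Because the threshold is applied after every pairwise sum, the operation is non-associative, and a left-fold over a sequence attenuates earlier items more than later ones, so the memory state carries a recency gradient that encodes order.

```python
import random

D = 10_000       # hypervector dimensionality (illustrative choice)
P_KEEP = 0.5     # survival probability for a bit active in only one operand

def rand_vec(rng, density=0.5):
    """Random binary hypervector with the given bit density."""
    return [1 if rng.random() < density else 0 for _ in range(D)]

def bundle(x, y, rng, p_keep=P_KEEP):
    """Stochastic thresholded superposition of two binary vectors.

    Bits active in both operands always survive; bits active in exactly
    one operand survive with probability p_keep. Applying the threshold
    after every pairwise sum makes repeated bundling non-associative.
    """
    out = []
    for xi, yi in zip(x, y):
        s = xi + yi
        if s == 2:
            out.append(1)
        elif s == 1:
            out.append(1 if rng.random() < p_keep else 0)
        else:
            out.append(0)
    return out

def overlap(x, y):
    """Normalized overlap, used here as a similarity measure."""
    return sum(a & b for a, b in zip(x, y)) / D

rng = random.Random(0)
items = [rand_vec(rng) for _ in range(5)]

# Left-fold bundling: items bundled earlier pass through more threshold
# steps and are attenuated more, producing a recency gradient.
memory = items[0]
for item in items[1:]:
    memory = bundle(memory, item, rng)

sims = [overlap(memory, it) for it in items]
print(sims)  # overlap tends to be larger for more recent items
```

Note that the first two items enter at the same bundling step and therefore share the same degree of attenuation; from the third item onward, overlap with the memory state grows toward the most recent item, which is the order-encoding effect the abstract describes.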
Related papers
- 'Memory States' from Almost Nothing: Representing and Computing in a Non-associative Algebra [0.0]
This note presents a non-associative framework for the representation and computation of information items in high-dimensional space. It is consistent with the principles of spatial computing and with empirical findings in cognitive science about memory.
arXiv Detail & Related papers (2025-05-13T08:43:02Z)
- Information Subtraction: Learning Representations for Conditional Entropy [1.4297089600426414]
This paper introduces Information Subtraction, a framework designed to generate representations that preserve desired information while eliminating the undesired.
We implement a generative-based architecture that outputs these representations by simultaneously maximizing an information term and minimizing another.
Our results highlight the representations' ability to provide semantic features of conditional entropy.
arXiv Detail & Related papers (2025-01-02T13:10:31Z)
- Associative Knowledge Graphs for Efficient Sequence Storage and Retrieval [3.355436702348694]
We create associative knowledge graphs that are highly effective for storing and recognizing sequences.
Individual objects (represented as nodes) can be a part of multiple sequences or appear repeatedly within a single sequence.
This approach has potential applications in diverse fields, such as anomaly detection in financial transactions or predicting user behavior based on past actions.
arXiv Detail & Related papers (2024-11-19T13:00:31Z)
- Building, Reusing, and Generalizing Abstract Representations from Concrete Sequences [51.965994405124455]
Humans excel at learning abstract patterns across different sequences.
Many sequence learning models lack the ability to abstract, which leads to memory inefficiency and poor transfer.
We introduce a non-parametric hierarchical variable learning model (HVM) that learns chunks from sequences and abstracts contextually similar chunks as variables.
arXiv Detail & Related papers (2024-10-27T18:13:07Z)
- Bisimulation Learning [55.859538562698496]
We compute finite bisimulations of state transition systems with large, possibly infinite state space.
Our technique yields faster verification results than alternative state-of-the-art tools in practice.
arXiv Detail & Related papers (2024-05-24T17:11:27Z)
- Self-Attention Based Semantic Decomposition in Vector Symbolic Architectures [6.473177443214531]
We introduce a new variant of the resonator network based on self-attention update rules for the iterative search problem.
Our algorithm enables a larger capacity for associative memory, enabling applications in many tasks like perception based pattern recognition, scene decomposition, and object reasoning.
arXiv Detail & Related papers (2024-03-20T00:37:19Z)
- Quick Adaptive Ternary Segmentation: An Efficient Decoding Procedure For Hidden Markov Models [70.26374282390401]
Decoding the original signal (i.e., hidden chain) from the noisy observations is one of the main goals in nearly all HMM based data analyses.
We present Quick Adaptive Ternary (QATS), a divide-and-conquer procedure which decodes the hidden sequence in polylogarithmic computational complexity.
arXiv Detail & Related papers (2023-05-29T19:37:48Z)
- Entropic Associative Memory for Manuscript Symbols [0.0]
Manuscript symbols can be stored, recognized and retrieved from an entropic digital memory that is associative and distributed, yet declarative.
We discuss the operational characteristics of the entropic associative memory for retrieving objects with both complete and incomplete information.
arXiv Detail & Related papers (2022-02-17T02:29:33Z)
- Human Activity Recognition using Attribute-Based Neural Networks and Context Information [61.67246055629366]
We consider human activity recognition (HAR) from wearable sensor data in manual-work processes.
We show how context information can be integrated systematically into a deep neural network-based HAR system.
We empirically show that our proposed architecture increases HAR performance, compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-10-28T06:08:25Z)
- Quantum Arithmetic for Directly Embedded Arrays [1.8472148461613158]
We describe a general-purpose framework to design quantum algorithms relying upon an efficient handling of arrays.
The corner-stone of the framework is the direct embedding of information into quantum amplitudes.
We give explicit examples regarding the manipulation of generic oracles.
arXiv Detail & Related papers (2021-07-29T10:14:17Z)
- Representation Learning for Sequence Data with Deep Autoencoding Predictive Components [96.42805872177067]
We propose a self-supervised representation learning method for sequence data, based on the intuition that useful representations of sequence data should exhibit a simple structure in the latent space.
We encourage this latent structure by maximizing an estimate of predictive information of latent feature sequences, which is the mutual information between past and future windows at each time step.
We demonstrate that our method recovers the latent space of noisy dynamical systems, extracts predictive features for forecasting tasks, and improves automatic speech recognition when used to pretrain the encoder on large amounts of unlabeled data.
arXiv Detail & Related papers (2020-10-07T03:34:01Z)
- New advances in enumerative biclustering algorithms with online partitioning [80.22629846165306]
This paper further extends RIn-Close_CVC, a biclustering algorithm capable of performing an efficient, complete, correct and non-redundant enumeration of maximal biclusters with constant values on columns in numerical datasets.
The improved algorithm, called RIn-Close_CVC3, keeps the attractive properties of RIn-Close_CVC and is characterized by a drastic reduction in memory usage and a consistent gain in runtime.
arXiv Detail & Related papers (2020-03-07T14:54:26Z)
- Assignment Flows for Data Labeling on Graphs: Convergence and Stability [69.68068088508505]
This paper establishes conditions on the weight parameters that guarantee convergence of the continuous-time assignment flow to integral assignments (labelings).
Several counter-examples illustrate that violating the conditions may entail unfavorable behavior of the assignment flow regarding contextual data classification.
arXiv Detail & Related papers (2020-02-26T15:45:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.