Resonator networks for factoring distributed representations of data structures
- URL: http://arxiv.org/abs/2007.03748v1
- Date: Tue, 7 Jul 2020 19:24:27 GMT
- Title: Resonator networks for factoring distributed representations of data structures
- Authors: E. Paxon Frady, Spencer Kent, Bruno A. Olshausen, Friedrich T. Sommer
- Abstract summary: We show how data structures are encoded by combining high-dimensional vectors with operations that together form an algebra on the space of distributed representations.
Our proposed algorithm, called a resonator network, is a new type of recurrent neural network that interleaves VSA multiplication operations and pattern completion.
Resonator networks open the possibility to apply VSAs to myriad artificial intelligence problems in real-world domains.
- Score: 3.46969645559477
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability to encode and manipulate data structures with distributed neural
representations could qualitatively enhance the capabilities of traditional
neural networks by supporting rule-based symbolic reasoning, a central property
of cognition. Here we show how this may be accomplished within the framework of
Vector Symbolic Architectures (VSA) (Plate, 1991; Gayler, 1998; Kanerva, 1996),
whereby data structures are encoded by combining high-dimensional vectors with
operations that together form an algebra on the space of distributed
representations. In particular, we propose an efficient solution to a hard
combinatorial search problem that arises when decoding elements of a VSA data
structure: the factorization of products of multiple code vectors. Our proposed
algorithm, called a resonator network, is a new type of recurrent neural
network that interleaves VSA multiplication operations and pattern completion.
We show in two examples -- parsing of a tree-like data structure and parsing of
a visual scene -- how the factorization problem arises and how the resonator
network can solve it. More broadly, resonator networks open the possibility to
apply VSAs to myriad artificial intelligence problems in real-world domains. A
companion paper (Kent et al., 2020) presents a rigorous analysis and evaluation
of the performance of resonator networks, showing that they outperform alternative
approaches.
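The abstract includes no code, but the core algorithm is simple enough to sketch. Below is a minimal NumPy illustration of the bipolar resonator dynamics described above, factoring a three-way Hadamard product of code vectors; the dimension, codebook sizes, and superposition initialization are illustrative choices, not the paper's exact experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)
D, M = 1500, 30  # vector dimension and per-factor codebook size (illustrative)

def bsign(v):
    """Binarize to +/-1, mapping exact zeros to +1."""
    return np.where(v >= 0, 1, -1)

# Three random bipolar codebooks, one per factor of the data structure.
X, Y, Z = (rng.choice([-1, 1], size=(D, M)) for _ in range(3))

# The composite to be factored: the elementwise (Hadamard) product of one
# code vector drawn from each codebook.
ix, iy, iz = rng.integers(M, size=3)
s = X[:, ix] * Y[:, iy] * Z[:, iz]

def cleanup(V, v):
    """Pattern completion: project onto the codebook and re-binarize."""
    return bsign(V @ (V.T @ v))

# Initialize each factor estimate to the superposition of its codebook, then
# interleave unbinding (for +/-1 vectors, multiplication is its own inverse)
# with cleanup -- the interleaving the abstract describes.
x, y, z = (bsign(V.sum(axis=1)) for V in (X, Y, Z))
for _ in range(200):
    x = cleanup(X, s * y * z)
    y = cleanup(Y, s * x * z)
    z = cleanup(Z, s * x * y)

est = [int(np.argmax(V.T @ v)) for V, v in ((X, x), (Y, y), (Z, z))]
print("true factors:", (ix, iy, iz), "estimates:", tuple(est))
```

With these sizes the search space has M^3 = 27,000 candidate factorizations, yet the network typically settles on the correct triple within a few dozen iterations.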
Related papers
- Relational Composition in Neural Networks: A Survey and Call to Action [54.47858085003077]
Many neural nets appear to represent data as linear combinations of "feature vectors".
We argue that this success is incomplete without an understanding of relational composition.
arXiv Detail & Related papers (2024-07-19T20:50:57Z)
- Semantic Loss Functions for Neuro-Symbolic Structured Prediction [74.18322585177832]
We discuss the semantic loss, which injects knowledge about such structure, defined symbolically, into training.
It is agnostic to the arrangement of the symbols, and depends only on the semantics expressed thereby.
It can be combined with both discriminative and generative neural models.
arXiv Detail & Related papers (2024-05-12T22:18:25Z)
- Neural Clustering based Visual Representation Learning [61.72646814537163]
Clustering is one of the most classic approaches in machine learning and data analysis.
We propose feature extraction with clustering (FEC), which views feature extraction as a process of selecting representatives from data.
FEC alternates between grouping pixels into individual clusters to abstract representatives and updating the deep features of pixels with current representatives.
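As a loose illustration of this alternation (not the paper's actual deep architecture), the sketch below runs a k-means-style loop over stand-in pixel features: group pixels around representatives, then nudge each pixel's features toward the representative it selected. All sizes, the random features, and the mixing coefficient are assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(1)
num_pixels, dim, k = 4096, 64, 8  # hypothetical image size, feature dim, clusters

feats = rng.standard_normal((num_pixels, dim))             # stand-in for deep pixel features
centers = feats[rng.choice(num_pixels, k, replace=False)]  # initial representatives

for _ in range(10):
    # Grouping step: assign each pixel to its nearest representative.
    d2 = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)
    # Update step: recompute representatives from their members, then
    # refresh pixel features using the current representatives.
    for j in range(k):
        members = assign == j
        if members.any():
            centers[j] = feats[members].mean(axis=0)
    feats = 0.9 * feats + 0.1 * centers[assign]
```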
arXiv Detail & Related papers (2024-03-26T06:04:50Z)
- Self-Attention Based Semantic Decomposition in Vector Symbolic Architectures [6.473177443214531]
We introduce a new variant of the resonator network based on self-attention update rules in the iterative search problem.
Our algorithm provides a larger capacity for associative memory, enabling applications in tasks such as perception-based pattern recognition, scene decomposition, and object reasoning.
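One plausible reading of the "self-attention based update" is to replace the resonator's hard sign/cleanup step with a softmax-weighted readout of the codebook, which yields graded rather than winner-take-all estimates. The function below is a hedged sketch in that spirit (a drop-in alternative to cleanup() in the resonator sketch after the main abstract), not the paper's exact rule; beta is an illustrative inverse-temperature.

```python
import numpy as np

def attention_cleanup(V, v, beta=8.0):
    """Softmax attention over codebook columns instead of a hard cleanup.

    Large beta recovers winner-take-all behavior; small beta keeps a
    superposition of candidate factors in play during the search.
    """
    scores = V.T @ v / np.sqrt(V.shape[0])   # similarity of v to each code vector
    w = np.exp(beta * (scores - scores.max()))
    w /= w.sum()
    return V @ w                             # graded estimate of the factor
```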
arXiv Detail & Related papers (2024-03-20T00:37:19Z)
- A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms [64.3064050603721]
We generalize the Runge-Kutta neural network to a recursively recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields similar iterations to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations.
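To make the idea concrete, a single explicit Runge-Kutta step can itself be written as a recurrence over stages whose coefficients are free parameters; with the classical Butcher tableau it reduces to RK4. The sketch below is an illustrative parametrization of that view, not the paper's exact superstructure.

```python
import numpy as np

def rk_recurrent_step(f, x, h, A, b):
    """One explicit RK-style step as a recurrence over stages.

    A: (s, s) strictly lower-triangular stage-coupling weights and
    b: (s,) output weights -- both learnable in an R2N2-style model.
    """
    k = []
    for i in range(len(b)):
        xi = x + h * sum(A[i, j] * k[j] for j in range(i))  # recurrence over earlier stages
        k.append(f(xi))
    return x + h * sum(b[i] * k[i] for i in range(len(b)))

# With the classical tableau this is exactly RK4 for dx/dt = f(x):
A = np.zeros((4, 4)); A[1, 0] = A[2, 1] = 0.5; A[3, 2] = 1.0
b = np.array([1, 2, 2, 1]) / 6.0
print(rk_recurrent_step(lambda x: -x, np.array([1.0]), 0.1, A, b))  # ~exp(-0.1)
```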
arXiv Detail & Related papers (2022-11-22T16:30:33Z)
- Neuromorphic Visual Scene Understanding with Resonator Networks [11.701553530610973]
We propose a neuromorphic solution exploiting three key concepts.
The framework is based on Vector Symbolic Architectures with complex-valued vectors.
A hierarchical resonator network is used to factorize the non-commutative transforms translation and rotation in visual scenes.
A companion paper demonstrates the same approach in real-world application scenarios for machine vision and robotics.
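The complex-valued vectors referred to here are, in FHRR-style VSAs, vectors of unit phasors: binding is elementwise complex multiplication and unbinding uses the complex conjugate. A minimal sketch, with the dimension and variable names chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 1024  # illustrative dimension

def random_phasor(d, rng):
    """Random complex VSA vector: unit magnitudes, uniform random phases."""
    return np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, d))

obj, pos = random_phasor(D, rng), random_phasor(D, rng)
bound = obj * pos                 # binding: elementwise complex multiplication

recovered = bound * np.conj(pos)  # unbinding: multiply by the conjugate
sim = np.real(np.vdot(recovered, obj)) / D
print(f"similarity to obj: {sim:.3f}")  # 1.000, since pos * conj(pos) = 1
```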
arXiv Detail & Related papers (2022-08-26T22:17:52Z)
- Residual and Attentional Architectures for Vector-Symbols [0.0]
Vector-symbolic architectures (VSAs) provide methods for computing that are highly flexible and carry unique advantages.
In this work, we combine the efficiency of the operations provided within the framework of the Fourier Holographic Reduced Representation (FHRR) VSA with the power of deep networks to construct novel VSA-based residual and attention-based neural network architectures.
This demonstrates a novel application of VSAs and a potential path to implementing state-of-the-art neural models on neuromorphic hardware.
arXiv Detail & Related papers (2022-07-18T21:38:43Z)
- Deep Equilibrium Assisted Block Sparse Coding of Inter-dependent Signals: Application to Hyperspectral Imaging [71.57324258813675]
A dataset of inter-dependent signals is defined as a matrix whose columns demonstrate strong dependencies.
A neural network is employed to act as structure prior and reveal the underlying signal interdependencies.
Deep unrolling and deep equilibrium based algorithms are developed, forming highly interpretable and concise deep-learning-based architectures.
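As a generic, hedged illustration of the unrolling/equilibrium pattern (not this paper's block-sparse architecture), the snippet below iterates the classic ISTA update for l1 sparse coding to a fixed point; an unrolled network would stack a fixed number of such updates with learnable step sizes and thresholds, while a deep equilibrium model solves directly for the fixed point.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 64, 128
A = rng.standard_normal((m, n)) / np.sqrt(m)              # illustrative dictionary
y = A @ (rng.standard_normal(n) * (rng.random(n) < 0.1))  # signal with a sparse code

L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the data-fit gradient
lam = 0.05                      # l1 weight (illustrative)

def soft(v, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
for _ in range(500):            # fixed-point iteration; layers of an unrolled net
    x = soft(x - A.T @ (A @ x - y) / L, lam / L)
print("nonzeros:", int((x != 0).sum()), "residual:", float(np.linalg.norm(A @ x - y)))
```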
arXiv Detail & Related papers (2022-03-29T21:00:39Z)
- Modeling Structure with Undirected Neural Networks [20.506232306308977]
We propose undirected neural networks, a flexible framework for specifying computations that can be performed in any order.
We demonstrate the effectiveness of undirected neural architectures, both unstructured and structured, on a range of tasks.
arXiv Detail & Related papers (2022-02-08T10:06:51Z)
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is end-to-end learned.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferable to a new task in a sample-efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
- Investigating the Compositional Structure Of Deep Neural Networks [1.8899300124593645]
We introduce a novel theoretical framework based on the compositional structure of piecewise linear activation functions.
It is possible to characterize the instances of the input data with respect to both the predicted label and the specific (linear) transformation used to perform predictions.
Preliminary tests on the MNIST dataset show that our method can group input instances with regard to their similarity in the internal representation of the neural network.
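A small, hedged illustration of that characterization: for a ReLU network, each input selects a binary activation pattern, and that pattern fixes the specific linear map the network applies to the input, so instances can be grouped by pattern. The tiny first layer and grouping key below are purely illustrative.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(4)
W1, b1 = rng.standard_normal((16, 8)), rng.standard_normal(16)  # first layer of a toy ReLU net

def activation_pattern(x):
    """Binary pattern of active ReLU units; it determines the linear map for x."""
    h = W1 @ x + b1
    return tuple((h > 0).astype(int))

# Group inputs by the (linear) transformation the network applies to them.
groups = defaultdict(list)
for i in range(200):
    groups[activation_pattern(rng.standard_normal(8))].append(i)
print("distinct linear regions hit by 200 samples:", len(groups))
```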
arXiv Detail & Related papers (2020-02-17T14:16:17Z)