Distributed Representations Enable Robust Multi-Timescale Symbolic Computation in Neuromorphic Hardware
- URL: http://arxiv.org/abs/2405.01305v2
- Date: Tue, 16 Jul 2024 09:41:27 GMT
- Title: Distributed Representations Enable Robust Multi-Timescale Symbolic Computation in Neuromorphic Hardware
- Authors: Madison Cotteret, Hugh Greatorex, Alpha Renner, Junren Chen, Emre Neftci, Huaqiang Wu, Giacomo Indiveri, Martin Ziegler, Elisabetta Chicca
- Abstract summary: We describe a single-shot weight learning scheme to embed robust multi-timescale dynamics into attractor-based RSNNs.
We embed finite state machines into the RSNN dynamics by superimposing a symmetric autoassociative weight matrix and asymmetric transition terms.
This work introduces a scalable approach to embed robust symbolic computation through recurrent dynamics into neuromorphic hardware.
- Score: 3.961418890143814
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Programming recurrent spiking neural networks (RSNNs) to robustly perform multi-timescale computation remains a difficult challenge. To address this, we describe a single-shot weight learning scheme to embed robust multi-timescale dynamics into attractor-based RSNNs, by exploiting the properties of high-dimensional distributed representations. We embed finite state machines into the RSNN dynamics by superimposing a symmetric autoassociative weight matrix and asymmetric transition terms, which are each formed by the vector binding of an input and heteroassociative outer-products between states. Our approach is validated through simulations with highly non-ideal weights; an experimental closed-loop memristive hardware setup; and on Loihi 2, where it scales seamlessly to large state machines. This work introduces a scalable approach to embed robust symbolic computation through recurrent dynamics into neuromorphic hardware, without requiring parameter fine-tuning or significant platform-specific optimisation. Moreover, it demonstrates that distributed symbolic representations serve as a highly capable representation-invariant language for cognitive algorithms in neuromorphic hardware.
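As a rough illustration of the weight construction the abstract describes, the numpy sketch below builds a toy version: random bipolar hypervectors for FSM states and input symbols, a symmetric Hopfield-style autoassociative term, and asymmetric transition terms that bind the input symbol into heteroassociative outer products between states. The dimensionality, the Hadamard binding, and the transition gain are illustrative assumptions; the paper realises these dynamics in spiking attractor networks, not in this rate-free toy.
```python
import numpy as np

rng = np.random.default_rng(0)
N = 2048  # dimensionality of the distributed (hypervector) representations

# Random bipolar hypervectors for FSM states and input symbols (assumed encoding).
states  = {s: rng.choice([-1, 1], N) for s in ["q0", "q1", "q2"]}
symbols = {a: rng.choice([-1, 1], N) for a in ["a", "b"]}

# Symmetric autoassociative term: every state vector becomes a fixed-point attractor.
W_auto = sum(np.outer(v, v) for v in states.values()) / N

# Asymmetric transition terms: heteroassociative outer products between states,
# with the input symbol bound in by elementwise (Hadamard) multiplication.
transitions = [("q0", "a", "q1"), ("q1", "b", "q2"), ("q2", "a", "q0")]
W_trans = sum(np.outer(states[t], symbols[a] * states[s])
              for s, a, t in transitions) / N

def step(x, a, gain=2.0, relax=5):
    """Apply input symbol a, then let the attractor dynamics clean up."""
    x = np.sign(W_auto @ x + gain * W_trans @ (symbols[a] * x))
    for _ in range(relax):  # relaxation onto the nearest stored attractor
        x = np.sign(W_auto @ x)
    return x

x = step(states["q0"], "a")
print({s: int(v @ x) for s, v in states.items()})  # overlap with q1 should dominate
```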
Related papers
- Learning local equivariant representations for quantum operators [7.747597014044332]
We introduce a novel deep learning model, SLEM, for predicting multiple quantum operators.
SLEM achieves state-of-the-art accuracy while dramatically improving computational efficiency.
We demonstrate SLEM's capabilities across diverse 2D and 3D materials, achieving high accuracy even with limited training data.
arXiv Detail & Related papers (2024-07-08T15:55:12Z) - Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs)
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z) - SymbolNet: Neural Symbolic Regression with Adaptive Dynamic Pruning [1.0356366043809717]
We propose a neural network approach to symbolic regression in a novel framework that allows dynamic pruning of model weights, input features, and mathematical operators in a single training process.
Our approach enables symbolic regression to achieve fast inference with nanosecond-scale latency on FPGAs for high-dimensional datasets in environments with stringent computational resource constraints.
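A hedged toy of the dynamic-pruning idea (not SymbolNet's actual framework): a linear regression whose weights are masked by magnitude during a single training run, so the fitted expression ends up sparse.
```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))
y = 3.0 * X[:, 0] - 2.0 * X[:, 3]          # sparse ground-truth expression
w, mask = rng.normal(size=8) * 0.1, np.ones(8)

for step in range(2000):
    grad = 2 * X.T @ (X @ (w * mask) - y) / len(X)
    w -= 0.05 * grad * mask                # pruned weights stay frozen at zero
    if step % 200 == 199:                  # pruning schedule (illustrative choice)
        thresh = 0.05 * np.abs(w * mask).max()
        mask *= (np.abs(w) > thresh).astype(float)

print(np.round(w * mask, 2))               # only w[0] ~ 3 and w[3] ~ -2 survive
```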
arXiv Detail & Related papers (2024-01-18T12:51:38Z) - Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with an attention mechanism, we can effectively boost performance without significant computational overhead.
We show our approach on various image and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
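A minimal sketch of the memory-token mechanism as summarised above, assuming a single feature vector cross-attending onto a bank of learnable tokens; the sizes, scaling, and residual read-out are illustrative assumptions, not the paper's architecture.
```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
d, n_mem = 64, 16
memory = rng.normal(size=(n_mem, d)) * 0.02   # learnable memory tokens (trained end to end)

def augment(h):
    """Cross-attend from feature vector h onto the memory bank; add a residual read-out."""
    attn = softmax(h @ memory.T / np.sqrt(d))  # (n_mem,) attention weights
    return h + attn @ memory

h = rng.normal(size=d)
print(augment(h).shape)                        # (64,) -- same shape, memory-enriched
```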
arXiv Detail & Related papers (2023-10-17T01:05:28Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
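The build-once, fit-many workflow can be illustrated with a stand-in: below, a closed-form toy model plays the role of the trained differentiable surrogate, and its analytic gradient stands in for automatic differentiation; the reusable gradient-based fitting loop that recovers unknown parameters from data is the point.
```python
import numpy as np

def surrogate(p, q):                 # toy "spectrum" over a momentum grid q
    return p[1] * np.exp(-q**2 / p[0])

def grad_surrogate(p, q):            # d surrogate / d p, shape (2, len(q))
    f = surrogate(p, q)
    return np.stack([f * q**2 / p[0]**2, f / p[1]])

q = np.linspace(-3.0, 3.0, 100)
data = surrogate(np.array([1.5, 2.0]), q)   # "experimental" target (noise-free toy)

p = np.array([0.8, 1.0])                    # initial parameter guess
for _ in range(1000):
    r = surrogate(p, q) - data              # residual against the data
    p -= 0.5 * grad_surrogate(p, q) @ r / len(q)
print(np.round(p, 3))                       # recovers ~ [1.5, 2.0]
```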
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - REMuS-GNN: A Rotation-Equivariant Model for Simulating Continuum Dynamics [0.0]
We introduce REMuS-GNN, a rotation-equivariant multi-scale model for simulating continuum dynamical systems.
We demonstrate and evaluate this method on the incompressible flow around elliptical cylinders.
arXiv Detail & Related papers (2022-05-05T16:20:37Z) - Neural Implicit Flow: a mesh-agnostic dimensionality reduction paradigm of spatio-temporal data [4.996878640124385]
We propose a general framework called Neural Implicit Flow (NIF) that enables a mesh-agnostic, low-rank representation of large-scale, parametric, spatio-temporal data.
NIF consists of two modified multilayer perceptrons: (i) ShapeNet, which isolates and represents the spatial complexity, and (ii) ParameterNet, which accounts for any other input measurements, including parametric dependencies, time, and sensor measurements.
We demonstrate the utility of NIF for parametric surrogate modeling, enabling the interpretable representation and compression of complex spatio-temporal dynamics, efficient many-spatial-temporal generalization, and improved performance for sparse reconstruction.
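A compact sketch of the two-network split described above, assuming (as a simplification) that ParameterNet emits only the last layer of ShapeNet; in the paper it parameterises ShapeNet more fully, and all layer sizes here are illustrative.
```python
import numpy as np

rng = np.random.default_rng(0)

def init(sizes):
    return [(rng.normal(size=(m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(x, layers):
    for W, b in layers[:-1]:
        x = np.tanh(x @ W + b)
    W, b = layers[-1]
    return x @ W + b

shape_net = init([1, 32, 32])  # ShapeNet trunk: spatial coordinate -> features
param_net = init([2, 64, 33])  # ParameterNet: (t, mu) -> last-layer weights (32) + bias (1)

def nif(x, t_mu):
    theta = mlp(t_mu, param_net)                          # hypernetwork output
    W_last, b_last = theta[:32].reshape(32, 1), theta[32:]
    return mlp(x, shape_net) @ W_last + b_last            # field value at (x; t, mu)

print(nif(np.array([0.3]), np.array([0.1, 0.5])))
```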
arXiv Detail & Related papers (2022-04-07T05:02:58Z) - Equivariant Graph Mechanics Networks with Constraints [83.38709956935095]
We propose Graph Mechanics Network (GMN) which is efficient, equivariant and constraint-aware.
GMN uses generalized coordinates to represent the forward kinematics information (positions and velocities) of a structural object.
Extensive experiments support the advantages of GMN compared to the state-of-the-art GNNs in terms of prediction accuracy, constraint satisfaction and data efficiency.
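For intuition about generalized coordinates, a toy forward-kinematics map (not GMN's learned model): a planar two-link arm whose Cartesian positions follow deterministically from q = (theta1, theta2), with velocities given by the chain rule, p_dot = J(q) @ q_dot.
```python
import numpy as np

# Toy forward kinematics: Cartesian joint positions from generalized coordinates.
def forward_kinematics(q, lengths=(1.0, 0.8)):
    t1, t2 = q
    p1 = lengths[0] * np.array([np.cos(t1), np.sin(t1)])
    p2 = p1 + lengths[1] * np.array([np.cos(t1 + t2), np.sin(t1 + t2)])
    return p1, p2

print(forward_kinematics(np.array([0.3, 0.5])))
```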
arXiv Detail & Related papers (2022-03-12T14:22:14Z) - Equivariant vector field network for many-body system modeling [65.22203086172019]
The Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newton mechanics systems with both full and partially observed data.
arXiv Detail & Related papers (2021-10-26T14:26:25Z) - Frame Averaging for Invariant and Equivariant Network Design [50.87023773850824]
We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types.
We show that FA-based models have maximal expressive power in a broad setting.
We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs.
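The FA recipe is simple to state: average a backbone f over transformed copies of the input given by a frame of group elements. The sketch below uses the degenerate case of a fixed finite frame (the four planar right-angle rotations), which reduces FA to plain group averaging; FA proper constructs input-dependent frames so that continuous symmetries stay cheap.
```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

frame = [rot(k * np.pi / 2) for k in range(4)]   # fixed 4-element frame (illustrative)

def f(points):                                   # arbitrary, non-invariant backbone
    return np.tanh(points @ np.array([1.0, 2.0])).sum()

def f_invariant(points):                         # frame-averaged model
    return np.mean([f(points @ g.T) for g in frame])

pts = np.random.default_rng(0).normal(size=(5, 2))
print(f_invariant(pts), f_invariant(pts @ rot(np.pi / 2).T))   # identical outputs
```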
arXiv Detail & Related papers (2021-10-07T11:05:23Z) - Theory of gating in recurrent neural networks [5.672132510411465]
Recurrent neural networks (RNNs) are powerful dynamical models, widely used in machine learning (ML) and neuroscience.
Here, we show that gating offers flexible control of two salient features of the collective dynamics: timescales and dimensionality.
The gate controlling timescales leads to a novel, marginally stable state, where the network functions as a flexible integrator.
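A single-unit reduction of the timescale gate, purely illustrative (the paper analyses full gated RNNs in the mean-field limit): the gate value alpha scales the leak, and sending alpha toward zero turns the unit into a near-perfect integrator.
```python
import numpy as np

def simulate(alpha, steps=200, dt=0.1):
    """Leaky unit h' = -alpha * h; the gate value alpha sets the decay timescale."""
    h = 1.0
    for _ in range(steps):
        h += dt * (-alpha * h)
    return h

print(simulate(alpha=1.0))    # ~7e-10: the state is forgotten quickly
print(simulate(alpha=0.01))   # ~0.82: near-integrator as the gate closes the leak
```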
arXiv Detail & Related papers (2020-07-29T13:20:58Z)