Learning Continuous Chaotic Attractors with a Reservoir Computer
- URL: http://arxiv.org/abs/2110.08631v1
- Date: Sat, 16 Oct 2021 18:07:27 GMT
- Title: Learning Continuous Chaotic Attractors with a Reservoir Computer
- Authors: Lindsay M. Smith (1), Jason Z. Kim (1), Zhixin Lu (1), Dani S. Bassett
(1 and 2) ((1) University of Pennsylvania, (2) Santa Fe Institute)
- Abstract summary: We train a 1000-neuron RNN to abstract a continuous dynamical attractor memory from isolated examples of dynamical attractor memories.
By training the RC on isolated and shifted examples of either stable limit cycles or chaotic Lorenz attractors, the RC learns a continuum of attractors, as quantified by an extra Lyapunov exponent equal to zero.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural systems are well known for their ability to learn and store
information as memories. Even more impressive is their ability to abstract
these memories to create complex internal representations, enabling advanced
functions such as the spatial manipulation of mental representations. While
recurrent neural networks (RNNs) are capable of representing complex
information, the exact mechanisms of how dynamical neural systems perform
abstraction are still not well understood, thereby hindering the development of
more advanced functions. Here, we train a 1000-neuron RNN -- a reservoir
computer (RC) -- to abstract a continuous dynamical attractor memory from
isolated examples of dynamical attractor memories. Further, we explain the
abstraction mechanism with new theory. By training the RC on isolated and
shifted examples of either stable limit cycles or chaotic Lorenz attractors,
the RC learns a continuum of attractors, as quantified by an extra Lyapunov
exponent equal to zero. We propose a theoretical mechanism of this abstraction
by combining ideas from differentiable generalized synchronization and feedback
dynamics. Our results quantify abstraction in simple neural systems, enabling
us to design artificial RNNs for abstraction, and leading us towards a neural
basis of abstraction.
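The pipeline the abstract describes (drive a random recurrent network with an attractor trajectory, fit a linear readout, then close the feedback loop so the network runs autonomously) can be illustrated with a minimal echo-state-network sketch in NumPy. This is not the paper's exact setup: all hyperparameters here (a 200-neuron reservoir rather than the paper's 1000, spectral radius 0.9, ridge parameter 1e-6, RK4-integrated Lorenz data) are illustrative assumptions, and the paper's key ingredient of training on multiple shifted copies of the attractor is omitted for brevity.

```python
import numpy as np

def lorenz_deriv(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system."""
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def lorenz_trajectory(n_steps, dt=0.01):
    """Integrate the Lorenz system with classical RK4 steps."""
    x = np.array([1.0, 1.0, 1.0])
    traj = np.empty((n_steps, 3))
    for t in range(n_steps):
        k1 = lorenz_deriv(x)
        k2 = lorenz_deriv(x + 0.5 * dt * k1)
        k3 = lorenz_deriv(x + 0.5 * dt * k2)
        k4 = lorenz_deriv(x + dt * k3)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[t] = x
    return traj

rng = np.random.default_rng(0)
N = 200                                      # reservoir size (paper uses 1000)
data = lorenz_trajectory(5000)
data = (data - data.mean(0)) / data.std(0)   # normalize each coordinate

# Random reservoir weights rescaled to spectral radius 0.9, plus input weights.
A = rng.normal(size=(N, N)) / np.sqrt(N)
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))
W_in = rng.uniform(-0.1, 0.1, size=(N, 3))

# Drive the reservoir with the trajectory (open loop / teacher forcing).
r = np.zeros(N)
states = np.empty((len(data), N))
for t, u in enumerate(data):
    r = np.tanh(A @ r + W_in @ u)
    states[t] = r

# Ridge-regression readout that predicts the next input sample.
washout = 500                                # discard transient reservoir states
X, Y = states[washout:-1], data[washout + 1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y).T

# Close the loop: feed the readout's prediction back as the next input.
r, u = states[-1].copy(), data[-1].copy()
free_run = np.empty((200, 3))
for t in range(200):
    r = np.tanh(A @ r + W_in @ u)
    u = W_out @ r
    free_run[t] = u
```

In the paper's experiments, the analogous network would be trained on several spatially shifted copies of the attractor at once; the learned continuum of attractors is then diagnosed by an extra zero Lyapunov exponent in the closed-loop dynamics, which this single-attractor sketch does not attempt to reproduce.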
Related papers
- Transformer Dynamics: A neuroscientific approach to interpretability of large language models
We focus on the residual stream (RS) in transformer models, conceptualizing it as a dynamical system evolving across layers.
We find that activations of individual RS units exhibit strong continuity across layers, despite the RS being a non-privileged basis.
In reduced-dimensional spaces, the RS follows a curved trajectory with attractor-like dynamics in the lower layers.
arXiv Detail & Related papers (2025-02-17T18:49:40Z)
- Discovering Chunks in Neural Embeddings for Interpretability
We propose leveraging the principle of chunking to interpret artificial neural population activities.
We first demonstrate this concept in recurrent neural networks (RNNs) trained on artificial sequences with imposed regularities.
We identify similar recurring embedding states corresponding to concepts in the input, with perturbations to these states activating or inhibiting the associated concepts.
arXiv Detail & Related papers (2025-02-03T20:30:46Z)
- Artificial Kuramoto Oscillatory Neurons
It has long been known in both neuroscience and AI that "binding" between neurons leads to a form of competitive learning.
We introduce Artificial Kuramoto Oscillatory Neurons, which can be combined with arbitrary connectivity designs such as fully connected, convolutional, or attentive mechanisms.
We show that this idea provides performance improvements across a wide spectrum of tasks such as unsupervised object discovery, adversarial robustness, uncertainty quantification, and reasoning.
arXiv Detail & Related papers (2024-10-17T17:47:54Z)
- Contrastive Learning in Memristor-based Neuromorphic Systems
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Spiking representation learning for associative memories
We introduce a novel artificial spiking neural network (SNN) that performs unsupervised representation learning and associative memory operations.
The architecture of our model derives from the neocortical columnar organization and combines feedforward projections for learning hidden representations and recurrent projections for forming associative memories.
arXiv Detail & Related papers (2024-06-05T08:30:11Z)
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate all these synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- On the Trade-off Between Efficiency and Precision of Neural Abstraction
Neural abstractions have been recently introduced as formal approximations of complex, nonlinear dynamical models.
We employ formal inductive synthesis procedures to generate neural abstractions that result in dynamical models with these semantics.
arXiv Detail & Related papers (2023-07-28T13:22:32Z)
- Trainability, Expressivity and Interpretability in Gated Neural ODEs
We introduce a novel measure of expressivity which probes the capacity of a neural network to generate complex trajectories.
We show how reduced-dimensional gnODEs retain their modeling power while greatly improving interpretability.
We also demonstrate the benefit of gating in nODEs on several real-world tasks.
arXiv Detail & Related papers (2023-07-12T18:29:01Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Teaching Recurrent Neural Networks to Modify Chaotic Memories by Example
We show that a recurrent neural network can learn to modify its representation of complex information using only examples.
We provide a mechanism for how these computations are learned, and demonstrate that a single network can simultaneously learn multiple computations.
arXiv Detail & Related papers (2020-05-03T20:51:46Z)
- Controlling Recurrent Neural Networks by Conceptors
I propose a mechanism of neurodynamical organization, called conceptors, which unites nonlinear dynamics with basic principles of conceptual abstraction and logic.
It becomes possible to learn, store, abstract, focus, morph, generalize, de-noise and recognize a large number of dynamical patterns within a single neural system.
arXiv Detail & Related papers (2014-03-13T18:58:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.