A Spacetime Perspective on Dynamical Computation in Neural Information Processing Systems
- URL: http://arxiv.org/abs/2409.13669v1
- Date: Fri, 20 Sep 2024 17:25:37 GMT
- Title: A Spacetime Perspective on Dynamical Computation in Neural Information Processing Systems
- Authors: T. Anderson Keller, Lyle Muller, Terrence J. Sejnowski, Max Welling
- Abstract summary: We introduce a new 'spacetime' perspective on neural computation.
We show that spatiotemporal dynamics may be a mechanism by which natural neural systems encode approximate visual, temporal, and abstract symmetries of the world as conserved quantities.
- Score: 43.02233537621737
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: There is now substantial evidence for traveling waves and other structured spatiotemporal recurrent neural dynamics in cortical structures; but these observations have typically been difficult to reconcile with notions of topographically organized selectivity and feedforward receptive fields. We introduce a new 'spacetime' perspective on neural computation in which structured selectivity and dynamics are not contradictory but instead are complementary. We show that spatiotemporal dynamics may be a mechanism by which natural neural systems encode approximate visual, temporal, and abstract symmetries of the world as conserved quantities, thereby enabling improved generalization and long-term working memory.
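To make the 'spacetime' idea concrete, here is a minimal sketch (our own toy construction, not the paper's model): a linear recurrent network on a ring whose circulant weights commute with cyclic shifts, so a localized bump of activity propagates as a traveling wave while its shape, the conserved quantity, is preserved.

```python
import numpy as np

# Toy illustration: a linear recurrent network on a ring of N neurons.
# Circulant weights commute with cyclic shifts, so activity propagates
# as a traveling wave that preserves the shape of the input pattern.
N = 64
shift = np.roll(np.eye(N), 1, axis=0)   # cyclic shift operator
W = 0.98 * shift                        # each neuron drives its neighbour

x = np.exp(-0.5 * ((np.arange(N) - 10) / 2.0) ** 2)  # localized bump
trajectory = [x]
for _ in range(40):
    x = W @ x
    trajectory.append(x)

# The bump's peak moves one neuron per step while its profile is carried
# along unchanged (up to damping): a symmetry encoded as a conservation.
print([int(np.argmax(s)) for s in trajectory[:10]])
```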
Related papers
- Conservation-informed Graph Learning for Spatiotemporal Dynamics Prediction [84.26340606752763]
In this paper, we introduce the conservation-informed GNN (CiGNN), an end-to-end explainable learning framework.
The network is designed to conform to a general symmetry-based conservation law, with conservative and non-conservative information passed over a multiscale space by a latent temporal marching strategy.
Results demonstrate that CiGNN exhibits remarkable accuracy and generalizability relative to baselines, and is readily applicable to learning the prediction of various spatiotemporal dynamics.
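A minimal sketch of the conservation idea (our own illustration, not CiGNN's actual layers): if every node update is a sum of antisymmetric pairwise fluxes, the total over all nodes is conserved by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
u = rng.random(n)                            # scalar field on n nodes
edges = [(i, (i + 1) % n) for i in range(n)] # a ring graph

def flux(ui, uj):
    # Any antisymmetric pairwise flux, flux(ui, uj) == -flux(uj, ui),
    # makes the update conservative by construction.
    return 0.1 * (uj - ui)

total_before = u.sum()
for _ in range(100):
    du = np.zeros(n)
    for i, j in edges:
        f = flux(u[i], u[j])
        du[i] += f          # what node i gains...
        du[j] -= f          # ...node j loses
    u = u + du

print(round(float(total_before), 10), round(float(u.sum()), 10))  # equal
```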
arXiv Detail & Related papers (2024-12-30T13:55:59Z)
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
It has long been known in both neuroscience and AI that 'binding' between neurons leads to a form of competitive learning.
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN), which can be combined with arbitrary connectivity designs such as fully connected, convolutional, or attentive mechanisms.
We show that this idea provides performance improvements across a wide spectrum of tasks such as unsupervised object discovery, adversarial robustness, uncertainty quantification, and reasoning.
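AKOrN builds on the classic Kuramoto model; the textbook dynamics below (not the paper's layer) show the binding mechanism: coupled phases attract one another and synchronize once the coupling K exceeds the spread of natural frequencies.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, dt = 100, 2.0, 0.01
omega = rng.normal(0.0, 0.5, N)          # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)     # initial phases

def order_parameter(theta):
    return float(np.abs(np.exp(1j * theta).mean()))  # 1 = synchronized

print("before:", round(order_parameter(theta), 3))
for _ in range(2000):
    # d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
    pairwise = np.sin(theta[None, :] - theta[:, None])
    theta = theta + dt * (omega + (K / N) * pairwise.sum(axis=1))
print("after: ", round(order_parameter(theta), 3))
```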
arXiv Detail & Related papers (2024-10-17T17:47:54Z)
- Learning Spatiotemporal Dynamical Systems from Point Process Observations [7.381752536547389]
Current neural network-based modeling approaches fall short when faced with data that is collected randomly over time and space.
In response, we developed a new method that can effectively learn from such point process observations.
Our model integrates techniques from neural differential equations, neural point processes, implicit neural representations and amortized variational inference.
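One of those ingredients, implicit neural representations, is what lets the model be queried at arbitrary coordinates; a minimal sketch (our own, not the paper's architecture) is a coordinate MLP that maps raw (x, t) points to field values, so randomly scattered observations pose no problem.

```python
import numpy as np

rng = np.random.default_rng(2)
W1, b1 = rng.normal(0, 1.0, (2, 64)), np.zeros(64)
W2, b2 = rng.normal(0, 0.1, (64, 1)), np.zeros(1)

def field(xt):
    """Evaluate the latent field at arbitrary (x, t) coordinates."""
    h = np.sin(xt @ W1 + b1)   # sinusoidal features (SIREN-style)
    return h @ W2 + b2

# Query at irregular observation locations; no grid is required.
events = rng.uniform(0, 1, (5, 2))   # five random (x, t) samples
print(field(events).ravel())
```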
arXiv Detail & Related papers (2024-06-01T09:03:32Z)
- Covariant spatio-temporal receptive fields for neuromorphic computing [1.9365675487641305]
This work combines efforts within scale-space theory and computational neuroscience to identify theoretically well-founded ways to process spatio-temporal signals in neuromorphic systems.
Our contributions are immediately relevant for signal processing and event-based vision, and can be extended to other processing tasks over space and time, such as memory and control.
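The canonical receptive fields of scale-space theory are Gaussian derivatives; the sketch below builds a small bank of them in the purely spatial 1-D case (the paper's contribution is the covariant spatio-temporal generalization, which this does not attempt).

```python
import numpy as np

def gaussian_derivative_kernel(sigma):
    radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    return -x / sigma**2 * g   # first derivative of a Gaussian

signal = np.zeros(200)
signal[100:] = 1.0             # a step edge at x = 100

for sigma in (1.0, 2.0, 4.0):  # a small bank of scales
    response = np.convolve(signal, gaussian_derivative_kernel(sigma), "same")
    print(sigma, int(np.argmax(response)))   # edge localized near x = 100
```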
arXiv Detail & Related papers (2024-05-01T04:51:10Z)
- Exploring neural oscillations during speech perception via surrogate gradient spiking neural networks [59.38765771221084]
We present a physiologically inspired speech recognition architecture that is compatible with, and scalable within, deep learning frameworks.
We show end-to-end gradient descent training leads to the emergence of neural oscillations in the central spiking neural network.
Our findings highlight the crucial inhibitory role of feedback mechanisms, such as spike frequency adaptation and recurrent connections, in regulating and synchronising neural activity to improve recognition performance.
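Spike frequency adaptation, one of the inhibitory mechanisms credited above, is easy to see in isolation. Below is a minimal leaky integrate-and-fire neuron (our own illustration, not the paper's architecture): each spike strengthens an inhibitory current, so inter-spike intervals lengthen under constant drive.

```python
import numpy as np

dt, tau_v, tau_a = 1.0, 20.0, 200.0
drive, threshold = 1.6, 1.0        # constant input and spike threshold
v, a, spike_times = 0.0, 0.0, []

for t in range(1000):
    v += dt / tau_v * (drive - v - a)   # leaky membrane potential
    a += dt / tau_a * (-a)              # adaptation current decays slowly
    if v >= threshold:
        spike_times.append(t)
        v = 0.0                         # reset after the spike...
        a += 0.3                        # ...and strengthen the inhibition

isis = np.diff(spike_times)
print(list(isis[:3]), "...", list(isis[-3:]))   # intervals lengthen
```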
arXiv Detail & Related papers (2024-04-22T09:40:07Z)
- Backpropagation through space, time, and the brain [2.10686639478348]
We introduce General Latent Equilibrium (GLE), a computational framework for fully local spatio-temporal credit assignment in physical, dynamical networks of neurons.
In particular, GLE exploits the morphology of dendritic trees to enable more complex information storage and processing in single neurons.
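GLE builds on prospective coding, in which a neuron transmits u + tau * du/dt and thereby cancels the lag introduced by its own membrane time constant; the sketch below illustrates that single ingredient under our reading, not the full framework.

```python
import numpy as np

dt, tau = 0.01, 0.1
t = np.arange(0, 2, dt)
r_in = np.sin(2 * np.pi * t)                # presynaptic input rate

u = np.zeros_like(t)                        # membrane potential, lags r_in
for k in range(1, len(t)):
    u[k] = u[k - 1] + dt / tau * (r_in[k - 1] - u[k - 1])

prospective = u + tau * np.gradient(u, dt)  # lag-compensated output

ss = slice(100, 200)                        # second period, past the transient
print("input peak t:      ", round(float(t[ss][np.argmax(r_in[ss])]), 2))
print("u peak t:          ", round(float(t[ss][np.argmax(u[ss])]), 2))
print("prospective peak t:", round(float(t[ss][np.argmax(prospective[ss])]), 2))
```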
arXiv Detail & Related papers (2024-03-25T16:57:02Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the input-output relationship of a detailed biophysical cortical neuron model with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
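The design pairs a handful of leaky memory units, each with its own timescale, with a small nonlinear readout. The skeleton below follows that description (a sketch, not the paper's exact equations or trained parameters).

```python
import numpy as np

rng = np.random.default_rng(3)
n_mem, n_in = 4, 3
decay = np.array([0.5, 0.9, 0.99, 0.999])   # one timescale per memory unit
W_in = rng.normal(0, 0.5, (n_in, n_mem))    # synaptic integration weights
W_out = rng.normal(0, 0.5, (n_mem, 1))      # readout weights

def step(m, x):
    m = decay * m + (1.0 - decay) * np.tanh(x @ W_in)  # leaky memory update
    return m, np.tanh(m) @ W_out                       # nonlinear readout

m = np.zeros(n_mem)
for _ in range(100):
    m, y = step(m, rng.normal(0, 1, n_in))
print("memory:", np.round(m, 3), "output:", np.round(y, 3))
```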
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
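The idea in miniature (our own toy example, not the paper's procedure): for a pendulum the energy is an invariant, so placing one initial condition per energy level covers qualitatively distinct regimes, libration below the separatrix and rotation above it, instead of oversampling a narrow band of phase space.

```python
import numpy as np

def pendulum_energy(theta, omega):
    return 0.5 * omega**2 + (1.0 - np.cos(theta))   # the invariant

rng = np.random.default_rng(4)

# Naive design: initial conditions clustered near the bottom of the well.
naive = [(rng.normal(0, 0.1), rng.normal(0, 0.1)) for _ in range(100)]

# Informed design: one initial condition per energy level (separatrix at E = 2).
informed = [(0.0, np.sqrt(2 * E)) for E in np.linspace(0.1, 3.0, 100)]

for name, ics in [("naive", naive), ("informed", informed)]:
    energies = [pendulum_energy(th, om) for th, om in ics]
    print(name, "energy range:", round(min(energies), 3), "to", round(max(energies), 3))
```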
arXiv Detail & Related papers (2021-12-15T20:09:20Z)
- Neural Spatio-Temporal Point Processes [31.474420819149724]
We propose a new class of parameterizations for spatio-temporal point processes which leverage Neural ODEs as a computational method.
We validate our models on data sets from a wide variety of contexts such as seismology, epidemiology, urban mobility, and neuroscience.
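All models in this family fit the same objective: the point-process log-likelihood, a sum of log-intensities at the observed events minus the intensity integrated over the observation window. The sketch below uses a toy closed-form intensity where the paper uses a Neural-ODE parameterization.

```python
import numpy as np

rng = np.random.default_rng(5)

def intensity(t, x):
    # Toy spatio-temporal intensity on the unit window [0, 1] x [0, 1].
    return 5.0 * np.exp(-(x - 0.5) ** 2 / 0.02) * (1.0 + 0.5 * np.sin(4 * np.pi * t))

events = rng.uniform(0, 1, (20, 2))   # observed (t, x) events

# log-likelihood = sum_i log lambda(t_i, x_i) - integral of lambda over the window
log_term = np.sum(np.log(intensity(events[:, 0], events[:, 1])))
mc = rng.uniform(0, 1, (100_000, 2))  # Monte Carlo estimate of the integral
integral = np.mean(intensity(mc[:, 0], mc[:, 1]))   # window volume = 1

print("log-likelihood:", round(float(log_term - integral), 3))
```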
arXiv Detail & Related papers (2020-11-09T17:28:23Z)
- Nonseparable Symplectic Neural Networks [23.77058934710737]
We propose a novel neural network architecture, Nonseparable Symplectic Neural Networks (NSSNNs).
NSSNNs uncover and embed the symplectic structure of a nonseparable Hamiltonian system from limited observation data.
We show the unique computational merits of our approach in yielding long-term, accurate, and robust predictions for large-scale Hamiltonian systems.
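What embedding symplectic structure buys can be seen with a known Hamiltonian standing in for the learned one (a pendulum, which is separable, so this illustrates the general benefit rather than NSSNN's nonseparable machinery): a symplectic integrator keeps the energy bounded over long rollouts while forward Euler drifts.

```python
import numpy as np

def H(q, p):
    return 0.5 * p**2 + (1.0 - np.cos(q))   # pendulum Hamiltonian

dt, steps = 0.1, 10_000
q_e, p_e = 1.0, 0.0   # forward Euler state
q_s, p_s = 1.0, 0.0   # symplectic Euler state

for _ in range(steps):
    # Forward Euler: both updates use the old state; energy drifts.
    q_e, p_e = q_e + dt * p_e, p_e - dt * np.sin(q_e)
    # Symplectic (semi-implicit) Euler: q update uses the new p.
    p_s = p_s - dt * np.sin(q_s)
    q_s = q_s + dt * p_s

print("initial energy:  ", round(H(1.0, 0.0), 4))
print("forward Euler:   ", round(float(H(q_e, p_e)), 4))   # drifts
print("symplectic Euler:", round(float(H(q_s, p_s)), 4))   # stays close
```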
arXiv Detail & Related papers (2020-10-23T19:50:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.