A Topological Deep Learning Framework for Neural Spike Decoding
- URL: http://arxiv.org/abs/2212.05037v2
- Date: Wed, 6 Sep 2023 15:03:54 GMT
- Title: A Topological Deep Learning Framework for Neural Spike Decoding
- Authors: Edward C. Mitchell, Brittany Story, David Boothe, Piotr J.
Franaszczuk, Vasileios Maroulas
- Abstract summary: Two of the ways brains encode spatial information are through head direction cells and grid cells.
We develop a topological deep learning framework for neural spike train decoding.
- Score: 1.0062127381149395
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The brain's spatial orientation system uses different neuron ensembles to aid
in environment-based navigation. Two of the ways brains encode spatial
information are through head direction cells and grid cells. Brains use head
direction cells to determine orientation whereas grid cells consist of layers
of decked neurons that overlay to provide environment-based navigation. These
neurons fire in ensembles where several neurons fire at once to activate a
single head direction or grid. We want to capture this firing structure and use
it to decode head direction and grid cell data. Understanding, representing, and
decoding these neural structures requires models that encompass higher order
connectivity, more than the 1-dimensional connectivity that traditional
graph-based models provide. To that end, in this work, we develop a topological
deep learning framework for neural spike train decoding. Our framework combines
unsupervised simplicial complex discovery with the power of deep learning via a
new architecture we develop herein called a simplicial convolutional recurrent
neural network. Simplicial complexes, topological spaces that use not only
vertices and edges but also higher-dimensional objects, naturally generalize
graphs and capture more than just pairwise relationships. Additionally, this
approach does not require prior knowledge of the neural activity beyond spike
counts, which removes the need for similarity measurements. The effectiveness
and versatility of the simplicial convolutional recurrent neural network are
demonstrated on head direction and trajectory prediction via head direction and
grid cell datasets.
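As a rough illustration of the unsupervised simplicial complex discovery step
described above, the sketch below groups neurons that are co-active in the same
time bin into vertices, edges, and higher-dimensional simplices. The binning,
threshold, and function names are assumptions made for illustration, not the
authors' implementation.

```python
import numpy as np
from itertools import combinations

def cofiring_complex(spike_counts, threshold=1, max_dim=2):
    """Build a simplicial complex from binned spike counts (neurons x time bins).

    A k-simplex is added for every set of k+1 neurons that are all active
    (count >= threshold) in the same time bin. Illustrative sketch only; the
    paper's construction and parameters may differ.
    """
    simplices = set()
    for t in range(spike_counts.shape[1]):
        active = np.flatnonzero(spike_counts[:, t] >= threshold).tolist()
        for k in range(1, min(len(active), max_dim + 1) + 1):
            simplices.update(combinations(active, k))
    return sorted(simplices, key=lambda s: (len(s), s))

# Toy example: 5 neurons observed over 4 time bins.
counts = np.array([
    [2, 0, 1, 0],
    [1, 0, 1, 0],
    [0, 3, 1, 0],
    [0, 2, 0, 1],
    [0, 0, 0, 2],
])
print(cofiring_complex(counts))  # vertices, edges, and one triangle (0, 1, 2)
```

Features attached to these simplices could then be propagated by a simplicial
convolution (for example, via Hodge Laplacians assembled from boundary maps)
and passed to a recurrent layer that decodes head direction or trajectory over
time, which is the role the simplicial convolutional recurrent neural network
plays in the framework.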
Related papers
- Characterizing Learning in Spiking Neural Networks with Astrocyte-Like Units [0.0]
We introduce a modified spiking neural network model with added astrocyte-like units.
We show that the combination of neurons and astrocytes together, as opposed to neuron-only and astrocyte-only networks, is critical for driving learning.
arXiv Detail & Related papers (2025-03-09T22:36:58Z)
- Integrating Causality with Neurochaos Learning: Proposed Approach and Research Agenda [1.534667887016089]
We investigate how causal and neurochaos learning approaches can be integrated to produce better results.
We propose an approach for this integration to enhance classification, prediction and reinforcement learning.
arXiv Detail & Related papers (2025-01-23T15:45:29Z)
- Graph Neural Networks for Brain Graph Learning: A Survey [53.74244221027981]
Graph neural networks (GNNs) have demonstrated a significant advantage in mining graph-structured data.
Using GNNs to learn brain graph representations for brain disorder analysis has recently gained increasing attention.
In this paper, we aim to bridge this gap by reviewing brain graph learning works that utilize GNNs.
arXiv Detail & Related papers (2024-06-01T02:47:39Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently enables continual learning in spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
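A minimal sketch of the generic mechanism the Hebbian entry above refers to:
feedforward weights follow an Oja-style Hebbian rule while recurrent lateral
connections learn anti-Hebbian weights that decorrelate the outputs, so the
units jointly come to span a principal subspace of their inputs. This is a
Foldiak-style illustration with made-up sizes and learning rates, not the
paper's orthogonal-projection method for spiking networks.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, lr = 8, 2, 0.01
W = rng.normal(scale=0.1, size=(n_out, n_in))  # feedforward weights (Hebbian)
M = np.zeros((n_out, n_out))                   # lateral weights (anti-Hebbian)

# Correlated toy inputs whose principal subspace the units should come to span.
mix = rng.normal(size=(n_in, n_in)) / np.sqrt(n_in)
X = rng.normal(size=(10000, n_in)) @ mix

for x in X:
    y = W @ x
    for _ in range(5):                 # relax the recurrent lateral dynamics
        y = W @ x + M @ y
    W += lr * (np.outer(y, x) - (y ** 2)[:, None] * W)  # Oja-style Hebbian step
    M -= lr * np.outer(y, y)                            # anti-Hebbian decorrelation
    np.fill_diagonal(M, 0.0)

# Fraction of W lying in the top-2 principal subspace of the data (near 1 if it worked).
_, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
print(np.linalg.norm(W @ Vt[:2].T) / np.linalg.norm(W))
```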
- Identifying Interpretable Visual Features in Artificial and Biological Neural Systems [3.604033202771937]
Single neurons in neural networks are often interpretable in that they represent individual, intuitively meaningful features.
Many neurons exhibit mixed selectivity, i.e., they represent multiple unrelated features.
We propose an automated method for quantifying visual interpretability and an approach for finding meaningful directions in network activation space.
arXiv Detail & Related papers (2023-10-17T17:41:28Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- A Spiking Neural Network based on Neural Manifold for Augmenting Intracortical Brain-Computer Interface Data [5.039813366558306]
Brain-computer interfaces (BCIs) transform neural signals in the brain into instructions to control external devices.
With the advent of advanced machine learning methods, the capability of brain-computer interfaces has been enhanced like never before.
Here, we use spiking neural networks (SNN) as data generators.
arXiv Detail & Related papers (2022-03-26T15:32:31Z)
- An explainability framework for cortical surface-based deep learning [110.83289076967895]
We develop an explainability framework for cortical surface-based deep learning.
First, we adapt a perturbation-based approach for use with surface data.
We show that our explainability framework is not only able to identify important features and their spatial location but that it is also reliable and valid.
arXiv Detail & Related papers (2022-03-15T23:16:49Z)
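The explainability entry above adapts a perturbation-based approach to surface
data. The snippet below shows the generic idea on an arbitrary feature vector:
occlude one group of features at a time and score it by how much the model's
output changes. The model, feature grouping, and baseline value here are
placeholders for illustration, not the paper's pipeline.

```python
import numpy as np

def perturbation_importance(model, x, groups, baseline=0.0):
    """Score each feature group by how much replacing it with a baseline
    value changes the model's output (larger change = more important)."""
    reference = model(x)
    scores = {}
    for name, idx in groups.items():
        x_pert = x.copy()
        x_pert[idx] = baseline
        scores[name] = float(np.abs(reference - model(x_pert)))
    return scores

# Hypothetical example: a linear "model" over 6 features, grouped into 3 patches.
weights = np.array([0.1, 0.1, 2.0, 2.0, 0.0, 0.0])
model = lambda x: float(weights @ x)
groups = {"patch_a": [0, 1], "patch_b": [2, 3], "patch_c": [4, 5]}
print(perturbation_importance(model, np.ones(6), groups))
# patch_b dominates because the model relies mostly on features 2 and 3
```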
- Convolutional Neural Networks for cytoarchitectonic brain mapping at large scale [0.33727511459109777]
We present a new workflow for mapping cytoarchitectonic areas in large series of cell-body stained histological sections of human postmortem brains.
It is based on a Deep Convolutional Neural Network (CNN), which is trained on a pair of section images with annotations, with a large number of un-annotated sections in between.
The new workflow does not require preceding 3D-reconstruction of sections, and is robust against histological artefacts.
arXiv Detail & Related papers (2020-11-25T16:25:13Z)
- A multi-agent model for growing spiking neural networks [0.0]
This project has explored rules for growing the connections between the neurons in Spiking Neural Networks as a learning mechanism.
Results in a simulation environment showed that for a given set of parameters it is possible to reach topologies that reproduce the tested functions.
This project also opens the door to the usage of techniques like genetic algorithms for obtaining the best suited values for the model parameters.
arXiv Detail & Related papers (2020-09-21T15:11:29Z)
- Graph Structure of Neural Networks [104.33754950606298]
We show how the graph structure of neural networks affects their predictive performance.
A "sweet spot" of relational graphs leads to neural networks with significantly improved predictive performance.
Top-performing neural networks have graph structure surprisingly similar to those of real biological neural networks.
arXiv Detail & Related papers (2020-07-13T17:59:31Z)
- Locality Guided Neural Networks for Explainable Artificial Intelligence [12.435539489388708]
We propose a novel algorithm for back propagation, called Locality Guided Neural Network (LGNN).
LGNN preserves locality between neighbouring neurons within each layer of a deep network.
In our experiments, we train various VGG and Wide ResNet (WRN) networks for image classification on CIFAR100.
arXiv Detail & Related papers (2020-07-12T23:45:51Z)
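One simple way to picture the "locality between neighbouring neurons" goal
stated in the LGNN entry above is a regularizer that penalizes differences
between the weight vectors of adjacent units in a layer, added to the task loss
during back propagation. This is only an illustration of the stated objective
under that assumption; the paper's actual training rule may differ.

```python
import numpy as np

def locality_penalty(W):
    """Penalize dissimilarity between neighbouring units (rows of W) in a layer.
    Illustrative stand-in for preserving locality between neighbouring neurons;
    not the LGNN paper's actual back-propagation rule."""
    diffs = W[1:] - W[:-1]              # compare each unit with its neighbour
    return float(np.sum(diffs ** 2))

# Hypothetical use: add the penalty to a task loss before computing gradients.
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 32))           # one layer: 16 units, 32 inputs each
task_loss = 1.23                        # placeholder task loss value
total_loss = task_loss + 1e-3 * locality_penalty(W)
print(total_loss)
```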
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
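The last entry states that a single standard neuron followed by an apical
dendrite activation can learn XOR. The snippet below illustrates why a
non-monotonic activation makes this possible with one unit; the bump function
and hand-picked weights are stand-ins, not the paper's ADA formula.

```python
import numpy as np

# XOR truth table.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

# One neuron: z = w.x + b followed by a non-monotonic "bump" activation.
# (A stand-in for the apical dendrite activation; the exact ADA formula
# differs, but any peaked activation lets a single unit separate XOR.)
w, b = np.array([1.0, 1.0]), -1.0
bump = lambda z: np.exp(-z ** 2)

pred = (bump(X @ w + b) > 0.5).astype(int)
print(pred, (pred == y).all())  # [0 1 1 0] True
```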
This list is automatically generated from the titles and abstracts of the papers on this site.