A Global Data-Driven Model for The Hippocampus and Nucleus Accumbens of Rat From The Local Field Potential Recordings (LFP)
- URL: http://arxiv.org/abs/2405.06732v1
- Date: Fri, 10 May 2024 15:58:39 GMT
- Title: A Global Data-Driven Model for The Hippocampus and Nucleus Accumbens of Rat From The Local Field Potential Recordings (LFP)
- Authors: Maedeh Sadeghi, Mahdi Aliyari Shoorehdeli, Shole Jamali, Abbas Haghparast
- Abstract summary: Local Field Potential (LFP) signals represent the dynamic flow of information in brain neural networks.
This paper identifies a global data-driven model to predict brain signals in different situations.
Morphine and natural rewards do not change the dynamic features of neurons in these regions.
- Score: 0.19999259391104385
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In brain neural networks, Local Field Potential (LFP) signals represent the dynamic flow of information. Analyzing LFP clinical data plays a critical role in improving our understanding of brain mechanisms. One way to enhance this understanding is to identify a global model that predicts brain signals in different situations. This paper identifies a global data-driven model based on LFP recordings of the Nucleus Accumbens and Hippocampus regions in freely moving rats. The LFP is recorded from each rat in two different situations: before and after the process of receiving a reward, which can be either a drug (Morphine) or natural food (such as popcorn or a biscuit). Five machine learning methods, including Long Short-Term Memory (LSTM), Echo State Network (ESN), Deep Echo State Network (DeepESN), Radial Basis Function (RBF), and Local Linear Model Tree (LoLiMoT), are compared to develop this model. LoLiMoT was chosen because it achieved the best performance among all methods. A single pre-trained model can predict the future states of both regions. Identifying this model showed that Morphine and natural rewards do not change the dynamic features of neurons in these regions.
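As an illustration of the prediction setup described in the abstract, the sketch below fits a simplified local-linear-model blend (in the spirit of LoLiMoT, but without the greedy tree-growing step) to a toy delay-embedded signal for one-step-ahead prediction. The signal, lag order, number of local models, and partitioning along the most recent lag are illustrative assumptions, not the paper's actual data or configuration.

```python
import numpy as np

def delay_embed(x, order=5):
    """Inputs are the previous `order` samples; the target is the next sample."""
    X = np.array([x[t - order:t] for t in range(order, len(x))])
    return X, x[order:]

def fit_local_linear_blend(X, y, n_models=4):
    """Fit affine local models weighted by normalized Gaussian validity functions."""
    centres = np.quantile(X[:, -1], np.linspace(0, 1, n_models + 2)[1:-1])
    sigma = (X[:, -1].max() - X[:, -1].min()) / n_models
    Phi = np.exp(-0.5 * ((X[:, -1:] - centres) / sigma) ** 2)
    Phi /= Phi.sum(axis=1, keepdims=True)
    Xb = np.hstack([X, np.ones((len(X), 1))])            # affine regressors
    thetas = []
    for j in range(n_models):                            # weighted least squares per local model
        A = Xb * Phi[:, j][:, None]
        thetas.append(np.linalg.lstsq(A.T @ Xb, A.T @ y, rcond=None)[0])
    return centres, sigma, np.array(thetas)

def predict(X, centres, sigma, thetas):
    Phi = np.exp(-0.5 * ((X[:, -1:] - centres) / sigma) ** 2)
    Phi /= Phi.sum(axis=1, keepdims=True)
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Phi * (Xb @ thetas.T)).sum(axis=1)           # blend local predictions

rng = np.random.default_rng(0)
t = np.arange(5000) / 1000.0
lfp = np.sin(2 * np.pi * 8 * t) + 0.3 * rng.standard_normal(len(t))   # toy 8 Hz "LFP"
X, y = delay_embed(lfp)
split = int(0.8 * len(X))
centres, sigma, thetas = fit_local_linear_blend(X[:split], y[:split])
y_hat = predict(X[split:], centres, sigma, thetas)
print("one-step-ahead MSE on held-out toy signal:", np.mean((y_hat - y[split:]) ** 2))
```

The full LoLiMoT algorithm additionally grows the partition greedily, splitting the worst-performing local region at each step; the fixed quantile-based centres above stand in for that procedure to keep the sketch short.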
Related papers
- BrainMAE: A Region-aware Self-supervised Learning Framework for Brain Signals [11.030708270737964]
We propose Brain Masked Auto-Encoder (BrainMAE) for learning representations directly from fMRI time-series data.
BrainMAE consistently outperforms established baseline methods by significant margins in four distinct downstream tasks.
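The blurb above only names the mechanism, so here is a deliberately tiny sketch of masked time-series reconstruction on synthetic ROI signals; the shapes, masking ratio, and plain MLP decoder are assumptions for illustration and bear no relation to the actual BrainMAE architecture.

```python
import torch
import torch.nn as nn

n_rois, n_timepoints = 10, 64
model = nn.Sequential(nn.Linear(n_timepoints, 32), nn.ReLU(), nn.Linear(32, n_timepoints))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    x = torch.randn(16, n_rois, n_timepoints)        # toy ROI time series (random, for illustration)
    mask = torch.rand_like(x) < 0.5                   # hide half of the time points
    reconstruction = model(x * (~mask))               # the model sees only the visible samples
    loss = ((reconstruction - x)[mask] ** 2).mean()   # score only the masked positions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print("masked-reconstruction loss:", float(loss))
```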
arXiv Detail & Related papers (2024-06-24T19:16:24Z)
- Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks [0.0]
We introduce and evaluate a brain-like neural network model capable of unsupervised representation learning.
The model was tested on a diverse set of popular machine learning benchmarks.
arXiv Detail & Related papers (2024-06-07T08:32:30Z)
- Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z)
- Contrastive-Signal-Dependent Plasticity: Forward-Forward Learning of Spiking Neural Systems [73.18020682258606]
We develop a neuro-mimetic architecture, composed of spiking neuronal units, where individual layers of neurons operate in parallel.
We propose an event-based generalization of forward-forward learning, which we call contrastive-signal-dependent plasticity (CSDP).
Our experimental results on several pattern datasets demonstrate that the CSDP process works well for training a dynamic recurrent spiking network capable of both classification and reconstruction.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- BrainFormer: A Hybrid CNN-Transformer Model for Brain fMRI Data Classification [31.83866719445596]
BrainFormer is a general hybrid Transformer architecture for brain disease classification with a single fMRI volume.
BrainFormer is constructed by modeling the local cues within each voxel with 3D convolutions.
We evaluate BrainFormer on five independently acquired datasets including ABIDE, ADNI, MPILMBB, ADHD-200 and ECHO.
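To make the "local 3D convolutions feeding a Transformer" idea concrete, here is a hypothetical miniature in PyTorch; the layer sizes, depths, and mean-pooled classification head are illustrative assumptions, not the BrainFormer design.

```python
import torch
import torch.nn as nn

class TinyCNNTransformer(nn.Module):
    def __init__(self, embed_dim=64, num_classes=2):
        super().__init__()
        # local cues: two strided 3D conv blocks over a single fMRI volume
        self.conv = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, embed_dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # global dependencies: a small Transformer encoder over the flattened feature tokens
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):                          # x: (batch, 1, D, H, W)
        feats = self.conv(x)                       # (batch, C, d, h, w)
        tokens = feats.flatten(2).transpose(1, 2)  # (batch, d*h*w, C)
        tokens = self.encoder(tokens)
        return self.head(tokens.mean(dim=1))       # mean-pool tokens, then classify

model = TinyCNNTransformer()
logits = model(torch.randn(2, 1, 32, 32, 32))      # toy single-volume inputs
print(logits.shape)                                # torch.Size([2, 2])
```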
arXiv Detail & Related papers (2022-08-05T07:54:10Z)
- Simple and complex spiking neurons: perspectives and analysis in a simple STDP scenario [0.7829352305480283]
Spiking neural networks (SNNs) are inspired by biology and neuroscience to create fast and efficient learning systems.
This work considers various neuron models in the literature and then selects computational neuron models that are single-variable, efficient, and display different types of complexities.
We make a comparative study of three simple I&F neuron models, namely the LIF, the Quadratic I&F (QIF) and the Exponential I&F (EIF), to understand whether the use of more complex models increases the performance of the system.
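Since the three models compared here are standard single-variable integrate-and-fire equations, a small sketch can show how they differ; the parameter values and forward-Euler integration below are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np

dt, T = 0.1, 200.0                          # ms
time = np.arange(0.0, T, dt)
I_ext = 1.6                                 # constant input current (arbitrary units)
v_rest, v_reset, v_thresh, tau = 0.0, 0.0, 1.0, 10.0

def simulate(dvdt):
    """Forward-Euler integration with a shared spike-and-reset rule."""
    v, spikes = v_rest, []
    for t in time:
        v += dt * dvdt(v)
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes

lif = lambda v: (-(v - v_rest) + I_ext) / tau                                    # leaky I&F
qif = lambda v: ((v - v_rest) * (v - 0.5) + I_ext) / tau                         # quadratic I&F
eif = lambda v: (-(v - v_rest) + 0.2 * np.exp((v - 0.8) / 0.2) + I_ext) / tau    # exponential I&F

for name, model in [("LIF", lif), ("QIF", qif), ("EIF", eif)]:
    print(name, "spike count:", len(simulate(model)))
```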
arXiv Detail & Related papers (2022-06-28T10:01:51Z)
- Ranking of Communities in Multiplex Spatiotemporal Models of Brain Dynamics [0.0]
We propose an interpretation of neural HMMs as multiplex brain state graph models, which we term Hidden Markov Graph Models (HMs).
This interpretation allows for dynamic brain activity to be analysed using the full repertoire of network analysis techniques.
We produce a new tool for determining important communities of brain regions using a random walk-based procedure.
arXiv Detail & Related papers (2022-03-17T12:14:09Z)
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
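As a concrete and purely illustrative reading of that idea, the sketch below combines a data-fitting loss with an SIR ODE residual in the style of a physics-informed network; the SIR parameters, toy infection curve, and network size are assumptions, and the real EINN architecture differs.

```python
import torch
import torch.nn as nn

beta, gamma = 0.3, 0.1                       # assumed SIR parameters (illustrative)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 3))

def sir_residual(t):
    """Penalize deviation of the network's (S, I, R) outputs from the SIR ODEs."""
    t = t.requires_grad_(True)
    s, i, r = net(t).unbind(dim=1)
    ds = torch.autograd.grad(s.sum(), t, create_graph=True)[0].squeeze(1)
    di = torch.autograd.grad(i.sum(), t, create_graph=True)[0].squeeze(1)
    dr = torch.autograd.grad(r.sum(), t, create_graph=True)[0].squeeze(1)
    return ((ds + beta * s * i) ** 2
            + (di - beta * s * i + gamma * i) ** 2
            + (dr - gamma * i) ** 2).mean()

t_obs = torch.linspace(0, 1, 50).unsqueeze(1)                # toy observation times
i_obs = 0.1 + 0.2 * torch.sin(3.14 * t_obs.squeeze(1))       # toy "infection" curve (not real data)

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(500):
    optimizer.zero_grad()
    data_loss = ((net(t_obs)[:, 1] - i_obs) ** 2).mean()     # fit the observed curve
    physics_loss = sir_residual(torch.rand(100, 1))          # enforce SIR dynamics at random times
    (data_loss + physics_loss).backward()
    optimizer.step()
print("data loss:", float(data_loss), "physics loss:", float(physics_loss))
```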
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
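A bare-bones sketch of that predict-and-correct loop (an assumption-laden toy, not the paper's model): a layer of latent "neurons" predicts a lower layer, the latent state settles by reducing the prediction error, and the weights are then updated by a local error-driven rule.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_latent = 8, 4
A = rng.standard_normal((n_latent, n_obs))          # hidden structure used to make toy data
data = rng.standard_normal((200, n_latent)) @ A     # observations the model must predict
W = 0.1 * rng.standard_normal((n_obs, n_latent))    # generative (top-down prediction) weights

for epoch in range(5):
    errors = []
    for x in data:
        z = np.zeros(n_latent)
        for _ in range(30):                          # inference: settle the latent state
            error = x - W @ z                        # prediction error at the lower layer
            z += 0.1 * (W.T @ error - 0.01 * z)      # move z to reduce the error
        W += 0.01 * np.outer(error, z)               # learning: local, error-driven weight update
        errors.append(np.mean(error ** 2))
    print(f"epoch {epoch}: mean prediction error {np.mean(errors):.3f}")
```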
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
- Flexible Transmitter Network [84.90891046882213]
Current neural networks are mostly built upon the MP (McCulloch-Pitts) model, which usually formulates the neuron as executing an activation function on the real-valued weighted aggregation of signals received from other neurons.
We propose the Flexible Transmitter (FT) model, a novel bio-plausible neuron model with flexible synaptic plasticity.
We present the Flexible Transmitter Network (FTNet), which is built on the most common fully-connected feed-forward architecture.
arXiv Detail & Related papers (2020-04-08T06:55:12Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
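To see why a single neuron with a non-monotonic activation can separate XOR, consider this toy sketch; the Gaussian bump is only a stand-in for the paper's ADA function (whose exact form is not reproduced here), and the weights are set by hand rather than learned.

```python
import numpy as np

def bump(z):
    return np.exp(-z ** 2)                 # non-monotonic stand-in activation

w, b = np.array([1.0, 1.0]), -1.0          # pre-activation z = x1 + x2 - 1
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

z = X @ w + b                              # z = -1, 0, 0, 1 for the four inputs
pred = (bump(z) > 0.5).astype(int)         # the bump peaks exactly on the XOR-true inputs
print("predictions:", pred, "targets:", y, "accuracy:", np.mean(pred == y))
```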
arXiv Detail & Related papers (2020-02-02T21:09:39Z)