Neuronal Sequence Models for Bayesian Online Inference
- URL: http://arxiv.org/abs/2004.00930v1
- Date: Thu, 2 Apr 2020 10:52:54 GMT
- Title: Neuronal Sequence Models for Bayesian Online Inference
- Authors: Sascha Frölich, Dimitrije Marković, and Stefan J. Kiebel
- Abstract summary: Sequential neuronal activity underlies a wide range of processes in the brain.
Neuroscientific evidence for neuronal sequences has been reported in domains as diverse as perception, motor control, speech, spatial navigation and memory.
We review key findings about neuronal sequences and relate these to the concept of online inference on sequences as a model of sensory-motor processing and recognition.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Sequential neuronal activity underlies a wide range of processes in the
brain. Neuroscientific evidence for neuronal sequences has been reported in
domains as diverse as perception, motor control, speech, spatial navigation and
memory. Consequently, different dynamical principles have been proposed as
possible sequence-generating mechanisms. Combining experimental findings with
computational concepts like the Bayesian brain hypothesis and predictive coding
leads to the interesting possibility that predictive and inferential processes
in the brain are grounded on generative processes which maintain a sequential
structure. While probabilistic inference about ongoing sequences is a useful
computational model for both the analysis of neuroscientific data and a wide
range of problems in artificial recognition and motor control, research on the
subject is relatively scarce and distributed over different fields in the
neurosciences. Here we review key findings about neuronal sequences and relate
these to the concept of online inference on sequences as a model of
sensory-motor processing and recognition. We propose that describing sequential
neuronal activity as an expression of probabilistic inference over sequences
may lead to novel perspectives on brain function. Importantly, it is promising
to translate the key idea of probabilistic inference on sequences to machine
learning, in order to address challenges in the real-time recognition of speech
and human motion.
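The abstract's central idea, online (filtering) inference on an unfolding sequence, can be illustrated with a minimal hidden Markov model. This is a sketch only: the two-state model, its transition and emission probabilities, and the observation stream are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal sketch of Bayesian online (filtering) inference on a sequence,
# here a two-state hidden Markov model. All numbers are illustrative.
A = np.array([[0.9, 0.1],         # A[i, j] = p(z_t = j | z_{t-1} = i)
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],         # B[i, k] = p(x_t = k | z_t = i)
              [0.1, 0.9]])
belief = np.array([0.5, 0.5])     # prior belief over the hidden states

def update(belief, obs):
    """One online step: predict with the sequence model, correct with the data."""
    predicted = A.T @ belief            # propagate belief through the dynamics
    posterior = B[:, obs] * predicted   # reweight by the observation likelihood
    return posterior / posterior.sum()  # renormalize

for obs in [0, 0, 1, 1, 1]:             # a stream of observations, one at a time
    belief = update(belief, obs)
    print(belief)                        # posterior after each new observation
```

Each call to `update` corresponds to one moment of sensory input: the belief is first propagated through the sequential generative model, then reweighted by the evidence. This predict-then-correct cycle is the shared core of the Bayesian brain hypothesis and predictive coding discussed above.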
Related papers
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe that these empirical results show the importance of rethinking our assumptions at the most basic neuronal level of neural representation.
arXiv Detail & Related papers (2024-10-17T17:47:54Z)
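As context for the entry above, here is the classical globally coupled Kuramoto model that AKOrN builds on, where each oscillator is pulled toward the phases of the others. The network size, coupling strength, and integration scheme are illustrative choices, not the authors' architecture.

```python
import numpy as np

# Classical Kuramoto phase dynamics:
#   dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
# Textbook model only; AKOrN embeds a generalized variant into network layers.
N, K, dt = 16, 1.5, 0.01
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, N)   # oscillator phases
omega = rng.normal(0.0, 1.0, N)            # natural frequencies

for _ in range(2000):
    pairwise = np.sin(theta[None, :] - theta[:, None])  # sin(theta_j - theta_i)
    theta = theta + dt * (omega + (K / N) * pairwise.sum(axis=1))

# The order parameter r in [0, 1] measures how synchronized the phases are.
r = np.abs(np.exp(1j * theta).mean())
print(f"synchrony r = {r:.2f}")
```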
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- A Goal-Driven Approach to Systems Neuroscience [2.6451153531057985]
Humans and animals exhibit a range of interesting behaviors in dynamic environments.
It is unclear how our brains actively reformat the dense sensory information they receive to enable these behaviors.
We offer a new definition of interpretability that we show has promise in yielding unified structural and functional models of neural circuits.
arXiv Detail & Related papers (2023-11-05T16:37:53Z)
- Computation with Sequences in a Model of the Brain [11.15191997898358]
How cognition arises from neural activity is a central open question in neuroscience.
We show that time can be captured naturally as precedence through synaptic weights and plasticity.
We show that any finite state machine can be learned in a similar way, through the presentation of appropriate patterns of sequences.
arXiv Detail & Related papers (2023-06-06T15:58:09Z)
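A toy reading of "time captured as precedence through synaptic weights" in the entry above is an asymmetric Hebbian rule that strengthens a synapse whenever the presynaptic unit fires just before the postsynaptic one. The rule below is a generic sketch under that assumption, not the paper's assembly-based construction.

```python
import numpy as np

# Toy illustration: repeatedly presenting the pattern A -> B -> C builds
# asymmetric weights that encode temporal precedence. Generic Hebbian
# sketch, not the paper's assembly-based model.
units = ["A", "B", "C"]
W = np.zeros((3, 3))            # W[pre, post]: "pre fired just before post"
eta = 0.2                       # learning rate (illustrative)

sequence = [0, 1, 2]            # indices of A, B, C in temporal order
for _ in range(10):             # present the sequence repeatedly
    for pre, post in zip(sequence, sequence[1:]):
        W[pre, post] += eta     # strengthen the precedence link

# After learning, the strongest outgoing weight from each unit points to its
# successor, so the weights alone can replay the sequence from its start.
state = 0
for _ in range(2):
    state = int(np.argmax(W[state]))
    print(units[state])         # prints B, then C
```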
- Learning to Act through Evolution of Neural Diversity in Random Neural Networks [9.387749254963595]
In most artificial neural networks (ANNs), neural computation is abstracted to an activation function that is usually shared by all neurons.
We propose the optimization of neuro-centric parameters to attain a set of diverse neurons that can perform complex computations.
arXiv Detail & Related papers (2023-05-25T11:33:04Z)
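One way to read the "neuro-centric parameters" of the entry above is to give each neuron its own activation parameters rather than one shared nonlinearity. The per-neuron gain-and-bias parameterization below is an illustrative assumption, not the authors' exact scheme.

```python
import numpy as np

# Sketch: a layer in which every neuron owns its activation parameters
# (gain a_i, bias b_i), so optimization can yield a diverse set of
# response functions. The (a, b) parameterization is an assumption.
rng = np.random.default_rng(1)

class DiverseLayer:
    def __init__(self, n_in, n_out):
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_in, n_out))
        self.a = rng.uniform(0.5, 2.0, n_out)   # per-neuron gain
        self.b = rng.normal(0.0, 0.5, n_out)    # per-neuron bias

    def __call__(self, x):
        # shared tanh family, but each neuron applies its own (a_i, b_i)
        return np.tanh(self.a * (x @ self.W) + self.b)

layer = DiverseLayer(4, 8)
print(layer(rng.normal(size=(2, 4))).shape)     # (2, 8)
```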
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors of fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions match reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
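The local update described above, neurons predicting their neighbors and correcting from the mismatch, can be sketched as a single predictive-coding-style step. The linear predictor, shapes, and learning rate below are illustrative assumptions, not the paper's full model.

```python
import numpy as np

# Minimal predictive-coding-style local update: one layer predicts the
# activity of the layer below and adjusts its weights in proportion to
# the prediction error. Linear predictor and step size are assumptions.
rng = np.random.default_rng(2)
W = rng.normal(0.0, 0.1, (8, 16))    # generative weights, top (8) -> bottom (16)
eta = 0.05                           # learning rate (illustrative)

top = rng.normal(size=8)             # activity of the "higher" neurons
bottom = rng.normal(size=16)         # observed activity of the "lower" neurons

for _ in range(200):
    prediction = top @ W                  # what the top layer expects below
    error = bottom - prediction           # mismatch with observed reality
    W += eta * np.outer(top, error)       # local, error-driven weight update

print(np.abs(bottom - top @ W).mean())    # residual prediction error, near 0
```

The update is local in the sense that each weight changes based only on its presynaptic activity and the postsynaptic prediction error, with no global error signal propagated backwards.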
- A Neural Dynamic Model based on Activation Diffusion and a Micro-Explanation for Cognitive Operations [4.416484585765028]
The neural mechanism of memory is closely related to the problem of representation in artificial intelligence.
A computational model is proposed to simulate networks of neurons in the brain and how they process information.
arXiv Detail & Related papers (2020-11-27T01:34:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.