Implementing engrams from a machine learning perspective: matching for prediction
- URL: http://arxiv.org/abs/2303.01253v1
- Date: Wed, 1 Mar 2023 10:05:40 GMT
- Title: Implementing engrams from a machine learning perspective: matching for prediction
- Authors: Jesus Marco de Lucas
- Abstract summary: We propose how we might design a computer system to implement engrams using neural networks.
Building on autoencoders, we propose latent neural spaces as indexes for storing and retrieving information in a compressed format.
We consider how different states in latent neural spaces corresponding to different types of sensory input could be linked by synchronous activation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Despite evidence for the existence of engrams as memory support structures in
our brains, there is no consensus framework in neuroscience as to what their
physical implementation might be. Here we propose how we might design a
computer system to implement engrams using neural networks, with the main aim
of exploring new ideas using machine learning techniques, guided by challenges
in neuroscience. Building on autoencoders, we propose latent neural spaces as
indexes for storing and retrieving information in a compressed format. We
consider this technique as a first step towards predictive learning:
autoencoders are designed to compare reconstructed information with the
original information received, providing a kind of predictive ability, which is
an attractive evolutionary argument. We then consider how different states in
latent neural spaces corresponding to different types of sensory input could be
linked by synchronous activation, providing the basis for a sparse
implementation of memory using concept neurons. Finally, we list some of the
challenges and questions that link neuroscience and data science and that could
have implications for both fields, and conclude that a more interdisciplinary
approach is needed, as many scientists have already suggested.
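To make the latent-space-as-index idea concrete, below is a minimal sketch: an autoencoder compresses an input into a latent code, stored codes serve as engram-like indexes, and retrieval matches a noisy cue to the nearest stored code before decoding it. The PyTorch architecture, the dimensions, and the nearest-neighbour matching rule are illustrative assumptions for this sketch, not the paper's specification.

```python
# Sketch: a latent space used as an engram-like index for storage and
# retrieval (illustrative assumptions only; not the paper's architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyAutoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, input_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = TinyAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
data = torch.rand(256, 784)  # stand-in for sensory input

# Training compares the reconstruction with the original input -- the
# "predictive" comparison the abstract refers to.
for _ in range(200):
    recon, _ = model(data)
    loss = F.mse_loss(recon, data)
    opt.zero_grad()
    loss.backward()
    opt.step()

# "Storing" a memory = keeping its compressed latent code as an index.
with torch.no_grad():
    _, engram_index = model(data)              # (256, 16) latent codes

# "Retrieving" = matching a new (noisy) cue against the stored latent
# states, then decoding the nearest one back into a full reconstruction.
cue = data[0] + 0.1 * torch.randn(784)
with torch.no_grad():
    z_cue = model.encoder(cue)
    nearest = torch.cdist(z_cue[None], engram_index).argmin()
    recalled = model.decoder(engram_index[nearest])
```

Under the abstract's synchronous-activation idea, latent codes produced at the same time by encoders for different sensory modalities could be stored under a shared key, so that a cue in one modality retrieves the linked state in another.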
Related papers
- Implementing engrams from a machine learning perspective: the relevance of a latent space [0.0]
In our previous work, we proposed that engrams in the brain could be biologically implemented as autoencoders over recurrent neural networks.
This brief note examines the relevance of the latent space in these autoencoders.
arXiv Detail & Related papers (2024-07-23T16:24:29Z)
- MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding with a single model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently solves continual learning tasks for spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
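The subspace-extraction claim above can be illustrated with a classical Hebbian/anti-Hebbian rule. The NumPy sketch below uses Oja's subspace rule as a textbook stand-in, not the paper's actual spiking-network implementation; the data, dimensions, and learning rate are assumptions.

```python
# Sketch: Oja's subspace rule -- a Hebbian term (+y x^T) plus an
# anti-Hebbian decorrelation term (-y y^T W) extracts the principal
# subspace of the input activity. Stand-in for the paper's mechanism.
import numpy as np

rng = np.random.default_rng(0)

n, m = 20, 3                                   # input dim, subspace dim
basis = rng.normal(size=(n, m))                # hidden low-dim structure
X = rng.normal(size=(5000, m)) @ basis.T + 0.05 * rng.normal(size=(5000, n))

W = rng.normal(scale=0.1, size=(m, n))         # weights of m output neurons
eta = 1e-3
for epoch in range(3):
    for x in X:
        y = W @ x                              # neural responses
        # Hebbian growth along input correlations; anti-Hebbian term
        # keeps the rows from collapsing onto one principal component.
        W += eta * (np.outer(y, x) - np.outer(y, y) @ W)

# Compare the learned subspace with the top-m principal components.
U = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)[2][:m]
s = np.linalg.svd(U @ W.T, compute_uv=False)
print("alignment singular values (all near 1.0 = matched):", s.round(3))
```

Roughly, in the paper's continual-learning setting, new synaptic updates are then projected orthogonally to such an extracted subspace so that previously learned activity patterns are not overwritten.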
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Deep Neural Networks and Brain Alignment: Brain Encoding and Decoding (Survey) [9.14580723964253]
Can we obtain insights about the brain using AI models?
How is the information in deep learning models related to brain recordings?
Decoding models solve the inverse problem of reconstructing stimuli given fMRI recordings.
Inspired by the effectiveness of deep learning models for natural language processing, computer vision, and speech, several neural encoding and decoding models have been recently proposed.
arXiv Detail & Related papers (2023-07-17T06:54:36Z)
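As a toy illustration of that inverse problem, here is a minimal ridge-regression decoder on synthetic data; the linear model, dimensions, and noise level are assumptions for the sketch, not a method from the survey.

```python
# Sketch: decoding as an inverse problem -- ridge regression mapping
# simulated voxel responses back to the stimulus features that caused
# them. Purely synthetic; not a model from the survey.
import numpy as np

rng = np.random.default_rng(1)

n_trials, n_features, n_voxels = 400, 10, 200
S = rng.normal(size=(n_trials, n_features))            # stimulus features
B = rng.normal(size=(n_features, n_voxels))            # forward (encoding) map
Y = S @ B + 0.5 * rng.normal(size=(n_trials, n_voxels))  # noisy "fMRI"

train, test = slice(0, 300), slice(300, None)

# Closed-form ridge solution: W = (Y^T Y + lam I)^{-1} Y^T S.
lam = 10.0
W = np.linalg.solve(Y[train].T @ Y[train] + lam * np.eye(n_voxels),
                    Y[train].T @ S[train])

S_hat = Y[test] @ W                                    # decoded features
r = np.corrcoef(S_hat.ravel(), S[test].ravel())[0, 1]
print(f"held-out decoding correlation: {r:.2f}")
```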
- Redundancy and Concept Analysis for Code-trained Language Models [5.726842555987591]
Code-trained language models have proven to be highly effective for various code intelligence tasks.
They can be challenging to train and deploy for many software engineering applications due to computational bottlenecks and memory constraints.
We perform the first neuron-level analysis for source code models to identify important neurons within latent representations.
arXiv Detail & Related papers (2023-05-01T15:22:41Z)
- BrainBERT: Self-supervised representation learning for intracranial recordings [18.52962864519609]
We create a reusable Transformer, BrainBERT, for intracranial recordings bringing modern representation learning approaches to neuroscience.
Much like in NLP and speech recognition, this Transformer enables classifying complex concepts, with higher accuracy and with much less data.
In the future, far more concepts will be decodable from neural recordings by using representation learning, potentially unlocking the brain like language models unlocked language.
arXiv Detail & Related papers (2023-02-28T07:40:37Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- Sequence learning in a spiking neuronal network with memristive synapses [0.0]
A core concept that lies at the heart of brain computation is sequence learning and prediction.
Neuromorphic hardware emulates the way the brain processes information and maps neurons and synapses directly into a physical substrate.
We study the feasibility of using ReRAM devices as a replacement of the biological synapses in the sequence learning model.
arXiv Detail & Related papers (2022-11-29T21:07:23Z)
- Neuromorphic Artificial Intelligence Systems [58.1806704582023]
Modern AI systems, based on von Neumann architecture and classical neural networks, have a number of fundamental limitations in comparison with the brain.
This article discusses such limitations and the ways they can be mitigated.
It presents an overview of currently available neuromorphic AI projects in which these limitations are overcome.
arXiv Detail & Related papers (2022-05-25T20:16:05Z)
- Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.