Implementing engrams from a machine learning perspective: the relevance of a latent space
- URL: http://arxiv.org/abs/2407.16616v1
- Date: Tue, 23 Jul 2024 16:24:29 GMT
- Title: Implementing engrams from a machine learning perspective: the relevance of a latent space
- Authors: J Marco de Lucas
- Abstract summary: In our previous work, we proposed that engrams in the brain could be biologically implemented as autoencoders over recurrent neural networks.
This brief note examines the relevance of the latent space in these autoencoders.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In our previous work, we proposed that engrams in the brain could be biologically implemented as autoencoders over recurrent neural networks. These autoencoders would comprise basic excitatory/inhibitory motifs, with credit assignment deriving from a simple homeostatic criterion. This brief note examines the relevance of the latent space in these autoencoders. We consider the relationship between the dimensionality of these autoencoders and the complexity of the information being encoded. We discuss how observed differences between species in their connectome could be linked to their cognitive capacities. Finally, we link this analysis with a basic but often overlooked fact: human cognition is likely limited by our own brain structure. However, this limitation does not apply to machine learning systems, and we should be aware of the need to learn how to exploit this augmented vision of nature.
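The abstract's central variable, the dimensionality of the latent space, has a simple machine-learning illustration: an autoencoder's bottleneck width caps how much structure it can store. The sketch below is a minimal NumPy illustration of that relationship, not the biological excitatory/inhibitory implementation the paper proposes; the linear architecture, synthetic low-rank inputs, and hyperparameters are all assumptions.

```python
# Minimal linear autoencoder: the latent width k caps the complexity
# (here, the rank) of what can be stored and reconstructed.
import numpy as np

rng = np.random.default_rng(0)
n, d, true_rank = 512, 64, 8
X = rng.normal(size=(n, true_rank)) @ rng.normal(size=(true_rank, d))  # rank-8 inputs

def train_autoencoder(X, k, steps=2000, lr=1e-3):
    W_enc = rng.normal(scale=0.1, size=(X.shape[1], k))   # encoder: input -> latent
    W_dec = rng.normal(scale=0.1, size=(k, X.shape[1]))   # decoder: latent -> input
    for _ in range(steps):
        Z = X @ W_enc                       # latent code
        G = 2.0 * (Z @ W_dec - X) / len(X)  # gradient of the mean squared error
        g_dec, g_enc = Z.T @ G, X.T @ (G @ W_dec.T)
        W_dec -= lr * g_dec
        W_enc -= lr * g_enc
    return np.mean((X - X @ W_enc @ W_dec) ** 2)

for k in (2, 4, 8, 16):
    print(f"latent dim {k:2d} -> reconstruction MSE {train_autoencoder(X, k):.4f}")
# Error falls until k reaches the data's intrinsic dimensionality (8),
# then plateaus: extra latent capacity adds nothing for these inputs.
```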
Related papers
- Implementing engrams from a machine learning perspective: XOR as a basic motif
We present our initial ideas based on a basic motif that implements an XOR switch.
We explore how to build a basic biological neuronal structure with learning capacity by integrating this XOR motif.
arXiv Detail & Related papers (2024-06-14T11:36:49Z)
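The summary above does not spell out the motif's wiring, so the following sketch uses the standard textbook construction with threshold units, assuming one excitatory OR-like unit and one AND-like unit that projects inhibition onto the output.

```python
# XOR from two threshold neurons: an excitatory OR-like unit plus an
# AND-like unit wired to inhibit the output, so XOR = OR and not AND.
def step(v):                         # threshold activation: spiking or silent
    return 1 if v > 0 else 0

def xor_motif(a, b):
    h_or = step(a + b - 0.5)         # excitatory: fires if either input fires
    h_and = step(a + b - 1.5)        # fires only for both inputs; projects inhibition
    return step(h_or - h_and - 0.5)  # output fires unless the inhibition arrives

for a in (0, 1):
    for b in (0, 1):
        print(f"XOR({a}, {b}) = {xor_motif(a, b)}")
```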
- Toward Neuromic Computing: Neurons as Autoencoders
This paper presents the idea that neural backpropagation uses dendritic processing to enable individual neurons to perform autoencoding.
Using a very simple connection-weight search and artificial neural network model, it explores the effects of interleaving autoencoding for each neuron in a hidden layer of a feedforward network.
arXiv Detail & Related papers (2024-03-04T18:58:09Z)
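The mechanism by which a single neuron "performs autoencoding" is left open in the summary; the closest classical formulation is Oja's rule, where one linear neuron does gradient descent on the reconstruction of its own input from its own output. A minimal sketch, with the input statistics and learning rate chosen arbitrarily:

```python
# One linear neuron as an autoencoder: Oja's rule descends the gradient of
# || x - y * w ||^2 (reconstructing the input from the neuron's own output),
# so w converges to the first principal component of the input statistics.
import numpy as np

rng = np.random.default_rng(1)
C = np.array([[3.0, 1.0], [1.0, 1.0]])   # assumed input covariance
L = np.linalg.cholesky(C)
w = rng.normal(size=2)
w /= np.linalg.norm(w)

for _ in range(5000):
    x = L @ rng.normal(size=2)           # sample an input
    y = w @ x                            # the neuron's scalar response
    w += 0.01 * y * (x - y * w)          # Hebbian term minus reconstruction decay

top_pc = np.linalg.eigh(C)[1][:, -1]     # true leading eigenvector
print("cosine similarity to top principal component:",
      abs(w @ top_pc) / np.linalg.norm(w))
```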
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
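As one concrete example of the families such a survey covers, the sketch below implements feedback alignment, a well-known scheme that replaces backpropagation's weight transport with a fixed random feedback matrix. It is an assumed illustration on toy data, not code from the survey; the layer sizes and learning rate are arbitrary.

```python
# Feedback alignment: the error reaches the hidden layer through a fixed
# random matrix B instead of W2.T, sidestepping backprop's weight transport.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(256, 10))
Y = np.tanh(X @ rng.normal(size=(10, 3)))      # toy regression targets

W1 = rng.normal(scale=0.3, size=(10, 20))
W2 = rng.normal(scale=0.3, size=(20, 3))
B = rng.normal(scale=0.3, size=(3, 20))        # fixed random feedback pathway

for _ in range(3000):
    H = np.tanh(X @ W1)
    err = H @ W2 - Y                           # output error
    dH = (err @ B) * (1.0 - H**2)              # error routed via B, not W2.T
    W2 -= 0.01 * H.T @ err / len(X)
    W1 -= 0.01 * X.T @ dH / len(X)

print("final MSE:", np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2))
```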
- Neural-Logic Human-Object Interaction Detection
We present LOGICHOI, a new HOI detector that leverages neural-logic reasoning and Transformer to infer feasible interactions between entities.
Specifically, we modify the self-attention mechanism in the vanilla Transformer, enabling it to reason over the ⟨human, action, object⟩ triplet and constitute novel interactions.
We formulate two key properties of interactions in first-order logic and ground them into continuous space to constrain the learning process of our approach, leading to improved performance and zero-shot generalization capabilities.
arXiv Detail & Related papers (2023-11-16T11:47:53Z)
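The sketch below is not the LOGICHOI architecture; it only illustrates the triplet idea in its simplest assumed form, scoring every ⟨human, action, object⟩ combination with one hypothetical bilinear form per action.

```python
# Toy scoring of <human, action, object> triplets: every (human, object)
# pair is scored against each action with a hypothetical bilinear form.
import numpy as np

rng = np.random.default_rng(3)
d, n_humans, n_objects, n_actions = 16, 2, 3, 5
H = rng.normal(size=(n_humans, d))      # assumed human appearance features
O = rng.normal(size=(n_objects, d))     # assumed object appearance features
A = rng.normal(size=(n_actions, d, d))  # one bilinear form per action class

# scores[i, a, j] = H[i] @ A[a] @ O[j], plausibility of <human i, action a, object j>
scores = np.einsum('id,ade,je->iaj', H, A, O)
i, a, j = np.unravel_index(scores.argmax(), scores.shape)
print(f"most plausible triplet: <human {i}, action {a}, object {j}>")
```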
- Implementing engrams from a machine learning perspective: matching for prediction
We propose how we might design a computer system to implement engrams using neural networks.
Building on autoencoders, we propose latent neural spaces as indexes for storing and retrieving information in a compressed format.
We consider how different states in latent neural spaces corresponding to different types of sensory input could be linked by synchronous activation.
arXiv Detail & Related papers (2023-03-01T10:05:40Z)
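A minimal sketch of the "latent space as index" idea above: store items under their latent codes and retrieve by nearest latent neighbor given a noisy cue. The random linear encoder is a stand-in assumption for a trained autoencoder.

```python
# Latent codes as a memory index: store each item under its latent code and
# retrieve the stored item whose code is nearest to a noisy cue's code.
import numpy as np

rng = np.random.default_rng(4)
d, k, n = 32, 4, 100
W_enc = rng.normal(size=(d, k)) / np.sqrt(d)   # stand-in for a trained encoder

memories = rng.normal(size=(n, d))             # stored sensory patterns
keys = memories @ W_enc                        # compressed latent-space index

cue = memories[17] + 0.1 * rng.normal(size=d)  # noisy version of item 17
z = cue @ W_enc
recalled = int(np.argmin(np.linalg.norm(keys - z, axis=1)))
print("cued with a corrupted item 17 -> recalled item", recalled)
```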
- Competitive learning to generate sparse representations for associative memory
We propose a biologically plausible network that encodes images into codes that are suitable for associative memory.
It is organized into groups of neurons that specialize in local receptive fields and learn through a competitive scheme.
arXiv Detail & Related papers (2023-01-05T17:57:52Z)
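The group-wise competitive scheme can be illustrated with classic winner-take-all learning: within a group, only the best-matching unit adapts, so units specialize and the group's output is a sparse one-hot code. The patch data, group size, and learning rate below are assumptions.

```python
# Winner-take-all competitive learning: within the group, only the unit that
# best matches a patch adapts, so units specialize on distinct patch types
# and each patch is encoded by a sparse one-hot group code.
import numpy as np

rng = np.random.default_rng(5)
patches = rng.normal(size=(2000, 9))            # stand-in 3x3 receptive fields
units = rng.normal(size=(8, 9))                 # one competing group of 8 units
units /= np.linalg.norm(units, axis=1, keepdims=True)

for x in patches:
    winner = np.argmax(units @ x)               # competition: best match fires
    units[winner] += 0.05 * (x - units[winner]) # only the winner moves toward x
    units[winner] /= np.linalg.norm(units[winner])

code = np.eye(8)[np.argmax(units @ patches[0])]
print("sparse one-hot code for the first patch:", code)
```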
- Searching for the Essence of Adversarial Perturbations
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z)
- Data-driven emergence of convolutional structure in neural networks
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Compositional Processing Emerges in Neural Networks Solving Math Problems
Recent progress in artificial neural networks has shown that when large models are trained on enough linguistic data, grammatical structure emerges in their representations.
We extend this work to the domain of mathematical reasoning, where it is possible to formulate precise hypotheses about how meanings should be composed.
Our work shows that neural networks are not only able to infer something about the structured relationships implicit in their training data, but can also deploy this knowledge to guide the composition of individual meanings into composite wholes.
arXiv Detail & Related papers (2021-05-19T07:24:42Z)
- Visualizing and Understanding Vision System
We use a vision recognition-reconstruction network (RRN) to investigate the development, recognition, learning and forgetting mechanisms.
In a digit recognition study, we find that the RRN maintains an invariant object representation under various viewing conditions.
In the learning and forgetting study, novel structures are recognized through small-magnitude adjustments across all synapses, while the pattern specificity of the original synaptic connectivity is preserved.
arXiv Detail & Related papers (2020-06-11T07:08:49Z)
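The RRN's internals are not given in the summary; the sketch below shows only the generic recognition-plus-reconstruction pattern it names, with a shared encoding feeding a classifier head and a decoder head, and untrained random weights as placeholders.

```python
# Generic recognition-reconstruction pattern: one shared encoding feeds both
# a classifier head (recognition) and a decoder head (reconstruction).
import numpy as np

rng = np.random.default_rng(6)
d_in, d_hid, n_classes = 64, 16, 10
W_enc = rng.normal(scale=0.1, size=(d_in, d_hid))      # untrained placeholders
W_cls = rng.normal(scale=0.1, size=(d_hid, n_classes))
W_dec = rng.normal(scale=0.1, size=(d_hid, d_in))

def rrn_forward(x):
    h = np.tanh(x @ W_enc)       # shared internal representation
    return h @ W_cls, h @ W_dec  # recognition logits, reconstructed input

x = rng.normal(size=d_in)
logits, x_hat = rrn_forward(x)
print("predicted class:", int(logits.argmax()),
      "| reconstruction MSE:", float(np.mean((x - x_hat) ** 2)))
```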
- Towards a Neural Model for Serial Order in Frontal Cortex: a Brain Theory from Memory Development to Higher-Level Cognition
We propose that the immature prefrontal cortex (PFC) uses its primary functionality of detecting hierarchical patterns in temporal signals.
Our hypothesis is that the PFC detects the hierarchical structure in temporal sequences in the form of ordinal patterns and uses them to index information hierarchically in different parts of the brain.
By doing so, it gives the language-ready brain the tools to manipulate abstract knowledge and plan temporally ordered information.
arXiv Detail & Related papers (2020-05-22T14:29:51Z)
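The ordinal patterns in this hypothesis have a standard formalization, the Bandt-Pompe construction: each short window of a temporal signal is reduced to the permutation that sorts its values, giving a compact rank-based index of local temporal structure. A minimal sketch on an assumed synthetic signal:

```python
# Ordinal (Bandt-Pompe) patterns: each length-m window of a signal is reduced
# to the permutation that sorts its values, a rank-based index of local order.
from collections import Counter
import numpy as np

rng = np.random.default_rng(7)
signal = np.sin(np.linspace(0, 20, 200)) + 0.1 * rng.normal(size=200)

def ordinal_patterns(x, m=3):
    # map each window x[i : i+m] to the tuple of ranks of its values
    return [tuple(map(int, np.argsort(x[i:i + m]))) for i in range(len(x) - m + 1)]

counts = Counter(ordinal_patterns(signal))
for pattern, freq in counts.most_common(3):
    print("pattern", pattern, "appears", freq, "times")
```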
This list is automatically generated from the titles and abstracts of the papers on this site.