New Ideas for Brain Modelling 6
- URL: http://arxiv.org/abs/2005.05137v1
- Date: Mon, 11 May 2020 14:28:34 GMT
- Title: New Ideas for Brain Modelling 6
- Authors: Kieran Greer
- Abstract summary: This paper describes implementation details for a 3-level cognitive model.
The whole architecture is now modular, with different levels using different types of information.
The top-level cognitive layer has been re-designed to model the Cognitive Process Language (CPL) of an earlier paper.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper describes implementation details for a 3-level cognitive model,
developed over the paper series. The whole architecture is now modular, with
different levels using different types of information. The ensemble-hierarchy
relationship is maintained and placed in the bottom optimising and middle
aggregating levels, to store memory objects and their relations. The top-level
cognitive layer has been re-designed to model the Cognitive Process Language
(CPL) of an earlier paper, by refactoring it into a network structure with a
light scheduler. The cortex brain region is thought to be hierarchical,
clustering from simple to more complex features. The refactored network might
therefore challenge conventional thinking on that brain region. It is also
argued that the function and, in particular, the structure of the new top
level are similar to the psychology theory of chunking. The model is still
only a framework and does not contain enough information for real
intelligence, but the framework is now implemented over the whole design and
so can give a more complete picture of the potential for results.
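Because the abstract describes the architecture only in prose, the following is a minimal, hypothetical Python sketch of how the three levels and the light scheduler might fit together. All class and method names (OptimisingLevel, AggregatingLevel, CognitiveLevel, schedule) are illustrative assumptions, not the author's implementation.

```python
# Hypothetical sketch only: names and structure are illustrative assumptions,
# not the author's implementation.
from collections import defaultdict

class OptimisingLevel:
    """Bottom level: stores memory objects, reinforced by repeated use."""
    def __init__(self):
        self.objects = defaultdict(int)  # memory object -> usage count

    def store(self, obj):
        self.objects[obj] += 1  # repetition strengthens the stored object

class AggregatingLevel:
    """Middle level: aggregates relations between memory objects
    (the ensemble-hierarchy relationship)."""
    def __init__(self):
        self.relations = defaultdict(set)  # object -> related objects

    def relate(self, a, b):
        self.relations[a].add(b)
        self.relations[b].add(a)

class CognitiveLevel:
    """Top level: the CPL refactored into a network structure,
    executed by a light scheduler."""
    def __init__(self):
        self.network = {}  # node -> ordered list of successor nodes

    def add_link(self, src, dst):
        self.network.setdefault(src, []).append(dst)

    def schedule(self, start, max_steps=10):
        """Light scheduler: fire one node per step, following links."""
        fired, node = [], start
        for _ in range(max_steps):
            fired.append(node)
            successors = self.network.get(node)
            if not successors:
                break
            node = successors[0]  # a real scheduler would pick by relevance
        return fired

# Usage: store objects, relate them, then run the top-level scheduler.
bottom, middle, top = OptimisingLevel(), AggregatingLevel(), CognitiveLevel()
bottom.store("cup"); bottom.store("cup"); bottom.store("coffee")
middle.relate("cup", "coffee")
top.add_link("see cup", "pour coffee")
top.add_link("pour coffee", "drink")
print(top.schedule("see cup"))  # ['see cup', 'pour coffee', 'drink']
```

In this reading, the ensemble-hierarchy relationship lives in the bottom and middle levels, while the top level's linked node sequences play the role the abstract compares to chunking.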
Related papers
- From Manifestations to Cognitive Architectures: a Scalable Framework [2.6563873893593826]
We propose a novel way to interpret reality as an information source that is later translated into a computational framework.
This framework is able to build elements of classical cognitive architectures, such as Long Term Memory and Working Memory.
arXiv Detail & Related papers (2024-06-14T08:26:26Z)
- Emergent Language Symbolic Autoencoder (ELSA) with Weak Supervision to Model Hierarchical Brain Networks [0.12075823996747355]
Brain networks display a hierarchical organization, a complexity that poses a challenge for existing deep learning models.
We propose a symbolic autoencoder informed by weak supervision and an Emergent Language (EL) framework.
Our innovation includes a generalized hierarchical loss function designed to ensure that both sentences and images accurately reflect the hierarchical structure of functional brain networks.
arXiv Detail & Related papers (2024-04-15T13:51:05Z)
- Classification and Reconstruction Processes in Deep Predictive Coding Networks: Antagonists or Allies? [0.0]
Predictive coding-inspired deep networks for visual computing integrate classification and reconstruction processes in shared intermediate layers.
We take a critical look at how classifying and reconstructing interact in deep learning architectures.
Our findings underscore a significant challenge: Classification-driven information diminishes reconstruction-driven information in intermediate layers' shared representations.
arXiv Detail & Related papers (2024-01-17T14:34:32Z)
- CogNGen: Constructing the Kernel of a Hyperdimensional Predictive Processing Cognitive Architecture [79.07468367923619]
We present a new cognitive architecture that combines two neurobiologically plausible computational models.
We aim to develop a cognitive architecture that has the power of modern machine learning techniques.
arXiv Detail & Related papers (2022-03-31T04:44:28Z)
- Hierarchical Variational Memory for Few-shot Learning Across Domains [120.87679627651153]
We introduce a hierarchical prototype model, where each level of the prototype fetches corresponding information from the hierarchical memory.
The model is endowed with the ability to flexibly rely on features at different semantic levels when domain shift demands it.
We conduct thorough ablation studies to demonstrate the effectiveness of each component in our model.
arXiv Detail & Related papers (2021-12-15T15:01:29Z)
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferable to a new task in a sample-efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
- Towards a Predictive Processing Implementation of the Common Model of Cognition [79.63867412771461]
We describe an implementation of the common model of cognition grounded in neural generative coding and holographic associative memory.
The proposed system creates the groundwork for developing agents that learn continually from diverse tasks as well as model human performance at larger scales.
arXiv Detail & Related papers (2021-05-15T22:55:23Z)
- New Ideas for Brain Modelling 7 [0.0]
This paper updates the cognitive model by creating two systems and unifying them over the same structure.
It represents information at the semantic level only, where labelled patterns are aggregated into a 'type-set-match' form.
arXiv Detail & Related papers (2020-11-04T10:59:01Z)
- Neural Function Modules with Sparse Arguments: A Dynamic Approach to Integrating Information across Layers [84.57980167400513]
Most prior work on feed-forward networks that combine top-down and bottom-up feedback is limited to classification problems.
Neural Function Modules (NFM) aim to bring a modular, function-like structure to deep learning more generally.
The key contribution of our work is to combine attention, sparsity, and top-down and bottom-up feedback in a flexible algorithm.
arXiv Detail & Related papers (2020-10-15T20:43:17Z)
- Towards a Neural Model for Serial Order in Frontal Cortex: a Brain Theory from Memory Development to Higher-Level Cognition [53.816853325427424]
We propose that the immature prefrontal cortex (PFC) uses its primary functionality of detecting hierarchical patterns in temporal signals.
Our hypothesis is that the PFC detects the hierarchical structure in temporal sequences in the form of ordinal patterns and uses them to index information hierarchically in different parts of the brain.
By doing so, it gives the language-ready brain the tools for manipulating abstract knowledge and planning temporally ordered information. A minimal sketch of ordinal-pattern extraction follows this list.
arXiv Detail & Related papers (2020-05-22T14:29:51Z)
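The last entry's hypothesis centres on ordinal patterns in temporal sequences. As an illustration only, the sketch below computes standard ordinal (permutation) patterns over a sliding window; the function names and window size are assumptions, and nothing here is taken from that paper's implementation.

```python
# Standard ordinal (permutation) pattern extraction, for illustration only;
# function names and window size are assumptions, not taken from the paper.

def ordinal_pattern(window):
    """Rank order of values in a window, e.g. (2.1, 0.5, 1.3) -> (2, 0, 1)."""
    sorted_idx = sorted(range(len(window)), key=lambda i: window[i])
    pattern = [0] * len(window)
    for rank, idx in enumerate(sorted_idx):
        pattern[idx] = rank  # each position gets the rank of its value
    return tuple(pattern)

def ordinal_patterns(signal, order=3):
    """Slide a window of `order` samples over the signal, collecting patterns."""
    return [ordinal_pattern(signal[i:i + order])
            for i in range(len(signal) - order + 1)]

signal = [0.1, 0.7, 0.4, 0.9, 0.2, 0.5]
print(ordinal_patterns(signal))  # [(0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1)]
```

Repeating the same extraction over the resulting pattern sequence would give one loose reading of the hierarchical indexing the authors hypothesise.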
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.