Meta-brain Models: biologically-inspired cognitive agents
- URL: http://arxiv.org/abs/2109.11938v1
- Date: Tue, 31 Aug 2021 05:20:53 GMT
- Title: Meta-brain Models: biologically-inspired cognitive agents
- Authors: Bradly Alicea, Jesse Parent
- Abstract summary: We propose a computational approach we call meta-brain models.
We will propose combinations of layers composed using specialized types of models.
We will conclude by proposing next steps in the development of this flexible and open-source approach.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence (AI) systems based solely on neural networks or
symbolic computation present a representational complexity challenge. While
minimal representations can produce behavioral outputs like locomotion or
simple decision-making, more elaborate internal representations might offer a
richer variety of behaviors. We propose that these issues can be addressed with
a computational approach we call meta-brain models. Meta-brain models are
embodied hybrid models that include layered components featuring varying
degrees of representational complexity. We will propose combinations of layers
composed using specialized types of models. Rather than using a generic black
box approach to unify each component, this relationship mimics systems like the
neocortical-thalamic relationship of the mammalian brain, which utilizes
both feedforward and feedback connectivity to facilitate functional
communication. Importantly, the relationship between layers can be made
anatomically explicit. This allows for structural specificity that can be
incorporated into the model's function in interesting ways. We will propose
several types of layers that might be functionally integrated into agents that
perform unique types of tasks, from agents that simultaneously perform
morphogenesis and perception, to agents that undergo morphogenesis and the
acquisition of conceptual representations simultaneously. Our approach to
meta-brain models involves creating models with different degrees of
representational complexity, building a layered meta-architecture that mimics
the structural and functional heterogeneity of biological brains, and
developing an input/output methodology flexible enough to accommodate cognitive
functions, social interactions, and adaptive behaviors more generally. We will conclude by
proposing next steps in the development of this flexible and open-source
approach.
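The layered meta-architecture described in the abstract, with reciprocal feedforward and feedback connectivity between components of differing representational complexity, can be sketched in code. Everything below (class names, dimensions, the tanh update rule) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

class Layer:
    """One component of a hypothetical meta-brain model: a simple
    nonlinear map standing in for a module of arbitrary
    representational complexity."""
    def __init__(self, in_dim, out_dim, rng):
        self.w = rng.standard_normal((out_dim, in_dim)) * 0.1

    def forward(self, x):
        return np.tanh(self.w @ x)

class MetaBrainAgent:
    """Illustrative two-layer agent with explicit feedforward and
    feedback connectivity between a low-level (sensorimotor) layer and
    a high-level (representational) layer, loosely mimicking
    neocortical-thalamic reciprocity. Names and dimensions are
    placeholders chosen for this sketch."""
    def __init__(self, sensor_dim=8, hidden_dim=4, seed=0):
        rng = np.random.default_rng(seed)
        # Feedforward path: sensory input plus top-down feedback.
        self.low = Layer(sensor_dim + hidden_dim, hidden_dim, rng)
        # Feedback path: the higher layer modulates the lower one.
        self.high = Layer(hidden_dim, hidden_dim, rng)
        self.feedback = np.zeros(hidden_dim)

    def step(self, observation):
        # Feedforward: combine bottom-up input with top-down feedback.
        low_state = self.low.forward(
            np.concatenate([observation, self.feedback]))
        # Feedback: higher layer's output modulates the next time step.
        self.feedback = self.high.forward(low_state)
        return low_state

agent = MetaBrainAgent()
out = agent.step(np.ones(8))  # one sensorimotor update
```

Because the inter-layer relationship is written out explicitly rather than hidden in a monolithic network, the connectivity could in principle be constrained to match a specific anatomy, which is the structural specificity the abstract emphasizes.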
Related papers
- Cognitive Evolutionary Learning to Select Feature Interactions for Recommender Systems [59.117526206317116]
We show that CELL can adaptively evolve into different models for different tasks and data.
Experiments on four real-world datasets demonstrate that CELL significantly outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2024-05-29T02:35:23Z)
- Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain's effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aimed at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z)
- Growing Brains: Co-emergence of Anatomical and Functional Modularity in Recurrent Neural Networks [18.375521792153112]
Recurrent neural networks (RNNs) trained on compositional tasks can exhibit functional modularity.
We apply a recent machine learning method, brain-inspired modular training, to a network being trained to solve a set of compositional cognitive tasks.
We find that functional and anatomical clustering emerge together, such that functionally similar neurons also become spatially localized and interconnected.
arXiv Detail & Related papers (2023-10-11T17:58:25Z)
- Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences that reflect the semanticity and compositionality characteristic of symbolic systems, learned through unsupervised learning rather than relying on pre-defined primitives.
This approach establishes a unified framework that integrates symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
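The core idea of that summary, continuous states settling into discrete basins that can be read as symbols, can be illustrated with a classic Hopfield-style attractor network. This is a generic sketch of attractor dynamics, not the PLoT model itself; the patterns and update rule are textbook choices, not taken from the paper:

```python
import numpy as np

# Two stored binary patterns play the role of discrete "symbols"; the
# Hebbian weight matrix carves the state space into attractor basins.
patterns = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                     [1, 1, -1, -1, 1, 1, -1, -1]])
W = patterns.T @ patterns   # Hebbian outer-product learning rule
np.fill_diagonal(W, 0)      # no self-connections

def settle(state, steps=10):
    """Iterate the dynamics until the state falls into a basin."""
    s = state.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0     # break ties toward +1
    return s

noisy = np.array([-1, 1, 1, 1, -1, -1, -1, -1])  # pattern 0, one bit flipped
recovered = settle(noisy)
# recovered equals the first stored pattern: the corrupted continuous-valued
# state has been pulled into the nearest discrete "symbolic" attractor
```

The stored patterns here are fixed points of the dynamics, so nearby corrupted states are mapped onto them, which is the discretization-by-basins behavior the summary describes.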
arXiv Detail & Related papers (2023-10-03T05:40:56Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- Functional2Structural: Cross-Modality Brain Networks Representation Learning [55.24969686433101]
Graph mining on brain networks may facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
We propose a novel graph learning framework, known as Deep Signed Brain Networks (DSBN), with a signed graph encoder.
We validate our framework on clinical phenotype and neurodegenerative disease prediction tasks using two independent, publicly available datasets.
arXiv Detail & Related papers (2022-05-06T03:45:36Z)
- Towards a Predictive Processing Implementation of the Common Model of Cognition [79.63867412771461]
We describe an implementation of the common model of cognition grounded in neural generative coding and holographic associative memory.
The proposed system creates the groundwork for developing agents that learn continually from diverse tasks as well as model human performance at larger scales.
arXiv Detail & Related papers (2021-05-15T22:55:23Z)
- Explanatory models in neuroscience: Part 1 -- taking mechanistic abstraction seriously [8.477619837043214]
Critics worry that neural network models fail to illuminate brain function.
We argue that certain kinds of neural network models are actually good examples of mechanistic models.
arXiv Detail & Related papers (2021-04-03T22:17:40Z)
- A multi-agent model for growing spiking neural networks [0.0]
This project explored rules for growing the connections between neurons in Spiking Neural Networks as a learning mechanism.
Results in a simulation environment showed that, for a given set of parameters, it is possible to reach topologies that reproduce the tested functions.
The project also opens the door to using techniques such as genetic algorithms to obtain the best-suited values for the model parameters.
arXiv Detail & Related papers (2020-09-21T15:11:29Z)
- Brain-inspired self-organization with cellular neuromorphic computing for multimodal unsupervised learning [0.0]
We propose a brain-inspired neural system based on reentry theory, using Self-Organizing Maps and Hebbian-like learning.
We show the gain of the so-called hardware plasticity induced by the ReSOM, where the system's topology is not fixed by the user but is learned through self-organization over the course of the system's experience.
arXiv Detail & Related papers (2020-04-11T21:02:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.