Layers, Folds, and Semi-Neuronal Information Processing
- URL: http://arxiv.org/abs/2208.06382v1
- Date: Thu, 7 Jul 2022 21:47:23 GMT
- Title: Layers, Folds, and Semi-Neuronal Information Processing
- Authors: Bradly Alicea, Jesse Parent
- Abstract summary: We use a type of embodied agent that exhibits layered representational capacity: meta-brain models.
We focus on two candidate structures that potentially explain this capacity: folding and layering.
The paper concludes with a discussion on how the meta-brains method can assist us in the investigation of enactivism, holism, and cognitive processing in the context of biological simulation.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: What role does phenotypic complexity play in the systems-level function of an embodied agent? The organismal phenotype is a topologically complex structure that interacts with a genotype, developmental physics, and an informational environment. Using this observation as inspiration, we utilize a type of embodied agent that exhibits layered representational capacity: meta-brain models. Meta-brains are used to demonstrate how phenotypes process information and exhibit self-regulation from development to maturity. We focus on two candidate structures that potentially explain this capacity: folding and layering. As layering and folding can be observed in a host of biological contexts, they form the basis for our representational investigations. First, an innate starting point (genomic encoding) is described. The generative output of this encoding is a differentiation tree, which results in a layered phenotypic representation. Then we specify a formal meta-brain model of the gut, which exhibits folding and layering in development in addition to different degrees of representation of processed information. This organ topology is retained in maturity, with the potential for additional folding and representational drift in response to inflammation. Next, we consider topological remapping using the developmental Braitenberg Vehicle (dBV) as a toy model. During topological remapping, it is shown that folding of a layered neural network can introduce a number of distortions to the original model, some with functional implications. The paper concludes with a discussion on how the meta-brains method can assist us in the investigation of enactivism, holism, and cognitive processing in the context of biological simulation.
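The abstract describes a pipeline in which a genomic encoding expands into a differentiation tree, the tree's depth levels read out as phenotypic layers, and a later fold remaps layer positions, distorting some of the original connections. A minimal Python sketch of that idea follows; it is an illustrative assumption, not the paper's meta-brain formalism. The `Cell`, `differentiate`, and `fold` names, the binary differentiation rule, and the half-fold remapping are hypothetical stand-ins chosen only to show how folding a layered structure changes which layers end up adjacent, echoing the distortions reported for the dBV toy model.

```python
# Toy sketch (not from the paper): a differentiation tree generates a layered
# phenotype, and a "fold" remaps layer positions, distorting some connections.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Cell:
    """A node in the differentiation tree."""
    name: str
    depth: int
    children: List["Cell"] = field(default_factory=list)


def differentiate(name: str, depth: int, max_depth: int) -> Cell:
    """Recursively divide a progenitor cell into two daughter cell types."""
    cell = Cell(name, depth)
    if depth < max_depth:
        cell.children = [
            differentiate(name + "A", depth + 1, max_depth),
            differentiate(name + "B", depth + 1, max_depth),
        ]
    return cell


def layers_from_tree(root: Cell) -> List[List[str]]:
    """Read the tree breadth-first: each depth becomes one phenotypic layer."""
    layers: List[List[str]] = []
    frontier = [root]
    while frontier:
        layers.append([c.name for c in frontier])
        frontier = [child for c in frontier for child in c.children]
    return layers


def fold(layer_count: int) -> List[int]:
    """Map layer i to its folded position: the stack is folded in half, so
    layer i and layer (layer_count - 1 - i) end up stacked together."""
    return [min(i, layer_count - 1 - i) for i in range(layer_count)]


if __name__ == "__main__":
    tree = differentiate("P", depth=0, max_depth=3)
    layers = layers_from_tree(tree)
    print("cells per layer:", [len(layer) for layer in layers])

    # Adjacent-layer connections before folding, e.g. (0, 1), (1, 2), ...
    connections = [(i, i + 1) for i in range(len(layers) - 1)]
    remap = fold(len(layers))

    # A connection is "distorted" when folding changes the distance between
    # the two layers it joins.
    distorted = [(a, b) for a, b in connections
                 if abs(remap[a] - remap[b]) != abs(a - b)]
    print("folded positions:", remap)
    print("distorted connections:", distorted)
```

Running the sketch with a depth-3 tree yields layers of 1, 2, 4, and 8 cells; after the half-fold, the connection between layers 1 and 2 collapses onto a single folded position, which is the kind of functionally relevant distortion the abstract attributes to topological remapping.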
Related papers
- Learning Discrete Concepts in Latent Hierarchical Models [73.01229236386148]
Learning concepts from natural high-dimensional data holds potential in building human-aligned and interpretable machine learning models.
We formalize concepts as discrete latent causal variables that are related via a hierarchical causal model.
We substantiate our theoretical claims with synthetic data experiments.
arXiv Detail & Related papers (2024-06-01T18:01:03Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- Modeling Dense Multimodal Interactions Between Biological Pathways and Histology for Survival Prediction [3.2274401541163322]
We propose a memory-efficient multimodal Transformer that can model interactions between pathway and histology patch tokens.
Our proposed model, SURVPATH, achieves state-of-the-art performance when evaluated against both unimodal and multimodal baselines.
arXiv Detail & Related papers (2023-04-13T21:02:32Z)
- Brain Cortical Functional Gradients Predict Cortical Folding Patterns via Attention Mesh Convolution [51.333918985340425]
We develop a novel attention mesh convolution model to predict cortical gyro-sulcal segmentation maps on individual brains.
Experiments show that the prediction performance via our model outperforms other state-of-the-art models.
arXiv Detail & Related papers (2022-05-21T14:08:53Z)
- Functional2Structural: Cross-Modality Brain Networks Representation Learning [55.24969686433101]
Graph mining on brain networks may facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
We propose a novel graph learning framework, known as Deep Signed Brain Networks (DSBN), with a signed graph encoder.
We validate our framework on clinical phenotype and neurodegenerative disease prediction tasks using two independent, publicly available datasets.
arXiv Detail & Related papers (2022-05-06T03:45:36Z)
- How to build a cognitive map: insights from models of the hippocampal formation [0.45880283710344055]
The concept of a cognitive map has emerged as one of the leading metaphors for these capacities.
Unravelling the learning and neural representation of such a map has become a central focus of neuroscience.
arXiv Detail & Related papers (2022-02-03T16:49:37Z)
- Self-Supervised Graph Representation Learning for Neuronal Morphologies [75.38832711445421]
We present GraphDINO, a data-driven approach to learn low-dimensional representations of 3D neuronal morphologies from unlabeled datasets.
We show, in two different species and across multiple brain areas, that this method yields morphological cell type clusterings on par with manual feature-based classification by experts.
Our method could potentially enable data-driven discovery of novel morphological features and cell types in large-scale datasets.
arXiv Detail & Related papers (2021-12-23T12:17:47Z)
- Meta-brain Models: biologically-inspired cognitive agents [0.0]
We propose a computational approach we call meta-brain models.
We will propose combinations of layers composed using specialized types of models.
We will conclude by proposing next steps in the development of this flexible and open-source approach.
arXiv Detail & Related papers (2021-08-31T05:20:53Z)
- Explanatory models in neuroscience: Part 1 -- taking mechanistic abstraction seriously [8.477619837043214]
Critics worry that neural network models fail to illuminate brain function.
We argue that certain kinds of neural network models are actually good examples of mechanistic models.
arXiv Detail & Related papers (2021-04-03T22:17:40Z)
- Abstracting Deep Neural Networks into Concept Graphs for Concept Level Interpretability [0.39635467316436124]
We attempt to understand the behavior of trained models that perform image processing tasks in the medical domain by building a graphical representation of the concepts they learn.
We show the application of our proposed implementation on two biomedical problems - brain tumor segmentation and fundus image classification.
arXiv Detail & Related papers (2020-08-14T16:34:32Z)
- Towards a Neural Model for Serial Order in Frontal Cortex: a Brain Theory from Memory Development to Higher-Level Cognition [53.816853325427424]
We propose that the immature prefrontal cortex (PFC) uses its primary functionality of detecting hierarchical patterns in temporal signals.
Our hypothesis is that the PFC detects the hierarchical structure in temporal sequences in the form of ordinal patterns and uses them to index information hierarchically in different parts of the brain.
By doing so, it gives the language-ready brain the tools for manipulating abstract knowledge and planning temporally ordered information.
arXiv Detail & Related papers (2020-05-22T14:29:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.