Cognitive modelling with multilayer networks: Insights, advancements and
future challenges
- URL: http://arxiv.org/abs/2210.00500v1
- Date: Sun, 2 Oct 2022 12:22:53 GMT
- Title: Cognitive modelling with multilayer networks: Insights, advancements and
future challenges
- Authors: Massimo Stella, Salvatore Citraro, Giulio Rossetti, Daniele Marinazzo,
Yoed N. Kenett and Michael S. Vitevitch
- Abstract summary: The mental lexicon is a cognitive system representing information about the words/concepts that one knows.
How can semantic, phonological, syntactic, and other types of conceptual associations be mapped within a coherent mathematical framework to study how the mental lexicon works?
Cognitive multilayer networks can map multiple types of information at once, thus capturing how different layers of associations might co-exist within the mental lexicon.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The mental lexicon is a complex cognitive system representing information
about the words/concepts that one knows. Decades of psychological experiments
have shown that conceptual associations across multiple, interactive cognitive
levels can greatly influence word acquisition, storage, and processing. How can
semantic, phonological, syntactic, and other types of conceptual associations
be mapped within a coherent mathematical framework to study how the mental
lexicon works? We here review cognitive multilayer networks as a promising
quantitative and interpretative framework for investigating the mental lexicon.
Cognitive multilayer networks can map multiple types of information at once,
thus capturing how different layers of associations might co-exist within the
mental lexicon and influence cognitive processing. This review starts with a
gentle introduction to the structure and formalism of multilayer networks. We
then discuss quantitative mechanisms of psychological phenomena that could not
be observed in single-layer networks and were only unveiled by combining
multiple layers of the lexicon: (i) multiplex viability highlights language
kernels and facilitative effects of knowledge processing in healthy and
clinical populations; (ii) multilayer community detection enables contextual
meaning reconstruction depending on psycholinguistic features; (iii) layer
analysis can reveal latent interactions of mediation, suppression and
facilitation for lexical access. By outlining novel quantitative perspectives
where multilayer networks can shed light on cognitive knowledge
representations, also in next-generation brain/mind models, we discuss key
limitations and promising directions for cutting-edge future research.
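The multiplex viability idea in (i) has a concrete algorithmic reading: a set of words is viable when it stays mutually connected in every layer of the lexicon at once. The following is a minimal sketch of that idea; the layer names, toy edges, and helper functions are illustrative assumptions, not taken from the paper:

```python
# Sketch of multiplex viability: a set of words is "viable" when it stays
# mutually connected in *every* layer simultaneously. Layers, edges, and
# helper names here are illustrative, not from the paper.

def components(nodes, edges):
    """Connected components of the subgraph induced by `nodes`."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        if u in nodes and v in nodes:
            adj[u].add(v)
            adj[v].add(u)
    seen, comps = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            x = stack.pop()
            if x not in comp:
                comp.add(x)
                stack.extend(adj[x] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def viable_kernel(layers):
    """Iteratively prune nodes until the set is connected in every layer."""
    nodes = {n for edges in layers.values() for edge in edges for n in edge}
    while True:
        new = nodes
        for edges in layers.values():
            comps = components(nodes, edges)
            # Keep only the largest component of this layer (ties arbitrary).
            new = new & (max(comps, key=len) if comps else set())
        if new == nodes:
            return nodes
        nodes = new

# Toy two-layer lexicon: semantic and phonological associations (made up).
layers = {
    "semantic":     [("cat", "dog"), ("dog", "wolf"), ("pen", "ink")],
    "phonological": [("cat", "bat"), ("cat", "dog"), ("dog", "wolf")],
}
print(sorted(viable_kernel(layers)))  # cat, dog, wolf survive in both layers
```

Words connected in only one layer ("pen", "bat") drop out of the kernel, which is how viability isolates a language kernel that no single-layer analysis would expose.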
Related papers
- Exploring the LLM Journey from Cognition to Expression with Linear Representations [10.92882688742428]
This paper presents an in-depth examination of the evolution and interplay of cognitive and expressive capabilities in large language models (LLMs)
We define and explore the model's cognitive and expressive capabilities through linear representations across three critical phases: Pretraining, Supervised Fine-Tuning (SFT), and Reinforcement Learning from Human Feedback (RLHF)
Our findings unveil a sequential development pattern, where cognitive abilities are largely established during Pretraining, whereas expressive abilities predominantly advance during SFT and RLHF.
arXiv Detail & Related papers (2024-05-27T08:57:04Z)
- Exploring Concept Depth: How Large Language Models Acquire Knowledge at Different Layers? [57.04803703952721]
Large language models (LLMs) have shown remarkable performances across a wide range of tasks.
However, the mechanisms by which these models encode tasks of varying complexities remain poorly understood.
We introduce the idea of "Concept Depth" to suggest that more complex concepts are typically acquired in deeper layers.
arXiv Detail & Related papers (2024-04-10T14:56:40Z)
- Multi-Modal Cognitive Maps based on Neural Networks trained on Successor Representations [3.4916237834391874]
Cognitive maps are a proposed concept on how the brain efficiently organizes memories and retrieves context out of them.
We set up a multi-modal neural network using successor representations which is able to model place cell dynamics and cognitive map representations.
The network learns the similarities between novel inputs and the training database and therefore the representation of the cognitive map successfully.
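The successor representation mentioned in this title is a standard construct, M = Σ_t γ^t T^t: the expected discounted number of future visits to each state under transition matrix T. A minimal sketch under that textbook definition follows; the toy chain and function names are illustrative, not the paper's network:

```python
# Sketch of the successor representation (a standard construct, not the
# paper's network): M = sum_t gamma^t T^t, the expected discounted count
# of future state visits, approximated here by truncating the series.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def successor_representation(T, gamma=0.9, horizon=200):
    n = len(T)
    M = [[float(i == j) for j in range(n)] for i in range(n)]  # t = 0 term
    P = [row[:] for row in T]  # running power T^t
    g = gamma
    for _ in range(horizon):
        for i in range(n):
            for j in range(n):
                M[i][j] += g * P[i][j]
        P = matmul(P, T)
        g *= gamma
    return M

# Toy 2-state chain: state 0 is "sticky", state 1 usually jumps back to 0.
T = [[0.9, 0.1],
     [0.8, 0.2]]
M = successor_representation(T)
# Each row of M sums to ~1 / (1 - gamma) = 10, the total discounted time.
```

In closed form M = (I - γT)^(-1); the truncated series converges to the same matrix, which is what makes SR targets cheap to learn incrementally.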
arXiv Detail & Related papers (2023-12-22T12:44:15Z)
- SNeL: A Structured Neuro-Symbolic Language for Entity-Based Multimodal Scene Understanding [0.0]
We introduce SNeL (Structured Neuro-symbolic Language), a versatile query language designed to facilitate nuanced interactions with neural networks processing multimodal data.
SNeL's expressive interface enables the construction of intricate queries, supporting logical and arithmetic operators, comparators, nesting, and more.
Our evaluations demonstrate SNeL's potential to reshape the way we interact with complex neural networks.
arXiv Detail & Related papers (2023-06-09T17:01:51Z)
- Synergistic information supports modality integration and flexible learning in neural networks solving multiple tasks [107.8565143456161]
We investigate the information processing strategies adopted by simple artificial neural networks performing a variety of cognitive tasks.
Results show that synergy increases as neural networks learn multiple diverse tasks.
Randomly turning off neurons during training via dropout increases network redundancy, corresponding to an increase in robustness.
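The dropout mechanism this summary refers to is standard: each unit is zeroed with probability p during training, and survivors are rescaled by 1/(1-p) so activations keep the same expectation (inverted dropout). A minimal sketch, not the authors' implementation:

```python
# Standard inverted dropout, not the paper's code: zero each unit with
# probability p during training; rescale survivors by 1/(1-p) so the
# expected activation is unchanged. At inference time it is a no-op.
import random

def dropout(values, p=0.5, training=True, rng=random.random):
    if not training or p == 0.0:
        return list(values)
    return [0.0 if rng() < p else v / (1.0 - p) for v in values]

activations = [0.5, 1.2, -0.3, 0.8]
masked = dropout(activations, p=0.5)  # roughly half the units zeroed
```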
arXiv Detail & Related papers (2022-10-06T15:36:27Z)
- Multimodal foundation models are better simulators of the human brain [65.10501322822881]
We present a newly-designed multimodal foundation model pre-trained on 15 million image-text pairs.
We find that both visual and lingual encoders trained multimodally are more brain-like compared with unimodal ones.
arXiv Detail & Related papers (2022-08-17T12:36:26Z)
- Functional2Structural: Cross-Modality Brain Networks Representation Learning [55.24969686433101]
Graph mining on brain networks may facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
We propose a novel graph learning framework, known as Deep Signed Brain Networks (DSBN), with a signed graph encoder.
We validate our framework on clinical phenotype and neurodegenerative disease prediction tasks using two independent, publicly available datasets.
arXiv Detail & Related papers (2022-05-06T03:45:36Z)
- HINT: Hierarchical Neuron Concept Explainer [35.07575535848492]
We study hierarchical concepts inspired by the hierarchical cognition process of human beings.
We propose HIerarchical Neuron concepT explainer (HINT) to effectively build bidirectional associations between neurons and hierarchical concepts.
HINT enables us to systematically and quantitatively study whether and how the implicit hierarchical relationships of concepts are embedded into neurons.
arXiv Detail & Related papers (2022-03-27T03:25:36Z)
- Grounding Psychological Shape Space in Convolutional Neural Networks [0.0]
We use convolutional neural networks to learn a generalizable mapping between perceptual inputs and a recently proposed psychological similarity space for the shape domain.
Our results indicate that a classification-based multi-task learning scenario yields the best results, but that its performance is relatively sensitive to the dimensionality of the similarity space.
arXiv Detail & Related papers (2021-11-16T12:21:07Z)
- CogAlign: Learning to Align Textual Neural Representations to Cognitive Language Processing Signals [60.921888445317705]
We propose a CogAlign approach to integrate cognitive language processing signals into natural language processing models.
We show that CogAlign achieves significant improvements with multiple cognitive features over state-of-the-art models on public datasets.
arXiv Detail & Related papers (2021-06-10T07:10:25Z)
- Towards a Neural Model for Serial Order in Frontal Cortex: a Brain Theory from Memory Development to Higher-Level Cognition [53.816853325427424]
We propose that the immature prefrontal cortex (PFC) uses its primary functionality of detecting hierarchical patterns in temporal signals.
Our hypothesis is that the PFC detects the hierarchical structure in temporal sequences in the form of ordinal patterns and uses them to index information hierarchically in different parts of the brain.
By doing so, it gives the language-ready brain the tools for manipulating abstract knowledge and planning temporally ordered information.
arXiv Detail & Related papers (2020-05-22T14:29:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.