Multi-Sensory Cognitive Computing for Learning Population-level Brain Connectivity
- URL: http://arxiv.org/abs/2508.11436v1
- Date: Fri, 15 Aug 2025 12:38:39 GMT
- Title: Multi-Sensory Cognitive Computing for Learning Population-level Brain Connectivity
- Authors: Mayssa Soussia, Mohamed Ali Mahjoub, Islem Rekik
- Abstract summary: mCOCO is a novel framework that learns a population-level functional CBT from BOLD signals. RC's dynamic system properties allow for tracking state changes over time, enhancing interpretability and enabling the modeling of brain-like dynamics. Our mCOCO framework consists of two phases: (1) mapping BOLD signals into the reservoir to derive individual functional connectomes, which are then aggregated into a group-level CBT.
- Score: 8.588898349347149
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The generation of connectional brain templates (CBTs) has recently garnered significant attention for its potential to identify unique connectivity patterns shared across individuals. However, existing methods for CBT learning such as conventional machine learning and graph neural networks (GNNs) are hindered by several limitations. These include: (i) poor interpretability due to their black-box nature, (ii) high computational cost, and (iii) an exclusive focus on structure and topology, overlooking the cognitive capacity of the generated CBT. To address these challenges, we introduce mCOCO (multi-sensory COgnitive COmputing), a novel framework that leverages Reservoir Computing (RC) to learn a population-level functional CBT from BOLD (Blood-Oxygen-Level-Dependent) signals. RC's dynamic system properties allow for tracking state changes over time, enhancing interpretability and enabling the modeling of brain-like dynamics, as demonstrated in prior literature. By integrating multi-sensory inputs (e.g., text, audio, and visual data), mCOCO captures not only structure and topology but also how brain regions process information and adapt to cognitive tasks such as sensory processing, all in a computationally efficient manner. Our mCOCO framework consists of two phases: (1) mapping BOLD signals into the reservoir to derive individual functional connectomes, which are then aggregated into a group-level CBT - an approach, to the best of our knowledge, not previously explored in functional connectivity studies - and (2) incorporating multi-sensory inputs through a cognitive reservoir, endowing the CBT with cognitive traits. Extensive evaluations show that our mCOCO-based template significantly outperforms GNN-based CBT in terms of centeredness, discriminativeness, topological soundness, and multi-sensory memory retention. Our source code is available at https://github.com/basiralab/mCOCO.
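Phase (1) of the pipeline above can be sketched schematically. The snippet below drives a minimal echo-state reservoir with synthetic BOLD matrices and aggregates subject-level connectomes into a group template; the dimensions, the Pearson-correlation connectome, and the element-wise median aggregation are illustrative assumptions, not mCOCO's exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 35 brain regions (ROIs), 200 BOLD time points,
# a 100-unit reservoir. All illustrative, not taken from the paper.
n_rois, n_time, n_res = 35, 200, 100

# Fixed random reservoir: input weights and recurrent weights.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_rois))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
# Rescale so the spectral radius is below 1 (echo-state property).
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def reservoir_states(bold):
    """Drive the reservoir with a (time x ROI) BOLD matrix; return all states."""
    x = np.zeros(n_res)
    states = []
    for u in bold:
        x = np.tanh(W_in @ u + W @ x)  # standard echo-state update
        states.append(x)
    return np.asarray(states)

def functional_connectome(bold):
    """Pearson correlation between ROI time courses -> symmetric matrix."""
    return np.corrcoef(bold.T)

# Two synthetic subjects; a group-level CBT as the element-wise median.
subjects = [rng.standard_normal((n_time, n_rois)) for _ in range(2)]
cbt = np.median([functional_connectome(b) for b in subjects], axis=0)
```

The echo-state update keeps a fading memory of the input history, which is what lets reservoir states track BOLD dynamics over time.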
Related papers
- Pupillometry and Brain Dynamics for Cognitive Load in Working Memory [0.1631115063641726]
This study integrates feature-based and model-driven approaches to advance time-series analysis. Using the OpenNeuro 'Digit Span Task' dataset, it investigates cognitive load classification from EEG and pupillometry. The findings demonstrate that pupillometry alone can compete with EEG, serving as a portable and practical proxy for real-world applications.
arXiv Detail & Related papers (2026-02-11T08:05:47Z)
- CogGNN: Cognitive Graph Neural Networks in Generative Connectomics [10.391115198133063]
We introduce the first cognified generative model, CogGNN, to generate brain networks that preserve cognitive features. Our contributions are: (i) a novel cognition-aware generative model with a visual-memory-based loss; (ii) a CBT-learning framework with a co-optimization strategy to yield well-centered, discriminative, cognitively enhanced templates.
arXiv Detail & Related papers (2025-09-13T15:38:56Z)
- Neuro-Informed Joint Learning Enhances Cognitive Workload Decoding in Portable BCIs [9.198002030833328]
Muse headbands offer unprecedented mobility for daily brain-computer interface applications. Non-stationarity in portable EEG signals constrains data fidelity and decoding accuracy. We propose MuseCogNet, a unified joint learning framework integrating self-supervised and supervised training paradigms.
arXiv Detail & Related papers (2025-06-30T01:42:31Z)
- CSBrain: A Cross-scale Spatiotemporal Brain Foundation Model for EEG Decoding [57.90382885533593]
We propose a Cross-scale Spatiotemporal Brain foundation model for generalized decoding of EEG signals. We show that CSBrain consistently outperforms task-specific and foundation model baselines. These results establish cross-scale modeling as a key inductive bias and position CSBrain as a robust backbone for future brain-AI research.
arXiv Detail & Related papers (2025-06-29T03:29:34Z)
- MBrain: A Multi-channel Self-Supervised Learning Framework for Brain Signals [7.682832730967219]
We study the self-supervised learning framework for brain signals that can be applied to pre-train either SEEG or EEG data.
Inspired by this, we propose MBrain to learn implicit spatial and temporal correlations between different channels.
Our model outperforms several state-of-the-art time series SSL and unsupervised models, and can be deployed in clinical practice.
arXiv Detail & Related papers (2023-06-15T09:14:26Z)
- Language Knowledge-Assisted Representation Learning for Skeleton-Based Action Recognition [71.35205097460124]
How humans understand and recognize the actions of others is a complex neuroscientific problem.
LA-GCN proposes a graph convolutional network assisted by knowledge from large-scale language models (LLMs).
arXiv Detail & Related papers (2023-05-21T08:29:16Z)
- Problem-Dependent Power of Quantum Neural Networks on Multi-Class Classification [83.20479832949069]
Quantum neural networks (QNNs) have become an important tool for understanding the physical world, but their advantages and limitations are not fully understood.
Here we investigate the problem-dependent power of quantum classifiers (QCs) on multi-class classification tasks.
Our work sheds light on the problem-dependent power of QNNs and offers a practical tool for evaluating their potential merit.
arXiv Detail & Related papers (2022-12-29T10:46:40Z)
- Deep Cross-Modality and Resolution Graph Integration for Universal Brain Connectivity Mapping and Augmentation [0.0]
The connectional brain template (CBT) captures the shared traits across all individuals of a given population of brain connectomes.
Here, we propose the first multimodal multiresolution graph integration framework that maps a given connectomic population into a well centered CBT.
We show that our framework significantly outperforms benchmarks in reconstruction quality, augmentation task, centeredness and topological soundness.
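The "centeredness" criterion used here (and in the mCOCO evaluation above) is typically quantified as the mean distance from the template to each subject's connectome: a lower value means a more representative, better-centered CBT. A minimal numpy sketch, where the matrix size and the synthetic population are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

def centeredness(cbt, connectomes):
    """Mean Frobenius distance from the template to each subject's
    connectome; lower means the CBT is better centered in the population."""
    return float(np.mean([np.linalg.norm(cbt - c) for c in connectomes]))

# Synthetic population of five 35x35 connectivity matrices.
population = [rng.standard_normal((35, 35)) for _ in range(5)]

# Candidate templates: element-wise mean vs. element-wise median.
mean_cbt = np.mean(population, axis=0)
median_cbt = np.median(population, axis=0)
score_mean = centeredness(mean_cbt, population)
score_median = centeredness(median_cbt, population)
```

Comparing the two scores is exactly the kind of head-to-head template evaluation these papers report.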
arXiv Detail & Related papers (2022-09-13T14:04:12Z)
- CogNGen: Constructing the Kernel of a Hyperdimensional Predictive Processing Cognitive Architecture [79.07468367923619]
We present a new cognitive architecture that combines two neurobiologically plausible, computational models.
We aim to develop a cognitive architecture that has the power of modern machine learning techniques.
arXiv Detail & Related papers (2022-03-31T04:44:28Z)
- CogAlign: Learning to Align Textual Neural Representations to Cognitive Language Processing Signals [60.921888445317705]
We propose CogAlign, an approach to integrate cognitive language processing signals into natural language processing models.
We show that CogAlign achieves significant improvements with multiple cognitive features over state-of-the-art models on public datasets.
arXiv Detail & Related papers (2021-06-10T07:10:25Z)
- Emotional EEG Classification using Connectivity Features and Convolutional Neural Networks [81.74442855155843]
We introduce a new classification system that utilizes brain connectivity with a CNN and validate its effectiveness via the emotional video classification.
The degree to which brain connectivity concentrates on the emotional property of the target video correlates with classification performance.
arXiv Detail & Related papers (2021-01-18T13:28:08Z)
- Siamese Neural Networks for EEG-based Brain-computer Interfaces [18.472950822801362]
We propose a new EEG processing and feature extraction paradigm based on Siamese neural networks.
The Siamese architecture is built on convolutional neural networks (CNNs) and provides a binary output on the similarity of two inputs.
The efficacy of this architecture is evaluated on a 4-class Motor Imagery dataset from Brain-computer Interfaces (BCI) Competition IV-2a.
arXiv Detail & Related papers (2020-02-03T17:31:39Z)
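The Siamese idea described in that last entry can be sketched as a shared encoder applied to both inputs, with their embedding distance squashed to a (0, 1] similarity score. Here a fixed linear map stands in for the trained CNN branch, and all shapes (22 EEG channels x 250 samples) are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the shared CNN encoder: one fixed linear map over a
# flattened EEG epoch (hypothetical 22 channels x 250 samples).
n_feat, n_embed = 22 * 250, 64
W_embed = rng.standard_normal((n_embed, n_feat)) / np.sqrt(n_feat)

def embed(epoch):
    """Shared-weight branch: both inputs pass through the SAME encoder."""
    return np.tanh(W_embed @ epoch.ravel())

def similarity(a, b):
    """Squash the embedding distance into (0, 1]; 1.0 means identical."""
    return 1.0 / (1.0 + np.linalg.norm(embed(a) - embed(b)))

x = rng.standard_normal((22, 250))
y = rng.standard_normal((22, 250))
```

Weight sharing is the defining design choice: because both branches use the same encoder, the score depends only on where the two epochs land in the embedding space.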
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.