Bridging Brain with Foundation Models through Self-Supervised Learning
- URL: http://arxiv.org/abs/2506.16009v1
- Date: Thu, 19 Jun 2025 04:03:58 GMT
- Title: Bridging Brain with Foundation Models through Self-Supervised Learning
- Authors: Hamdi Altaheri, Fakhri Karray, Md. Milon Islam, S M Taslim Uddin Raju, Amir-Hossein Karimi
- Abstract summary: Foundation models (FMs) have redefined the capabilities of artificial intelligence. These advances present a transformative opportunity for brain signal analysis. This survey systematically reviews the emerging field of bridging brain signals with foundation models.
- Score: 5.0273296425814635
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Foundation models (FMs), powered by self-supervised learning (SSL), have redefined the capabilities of artificial intelligence, demonstrating exceptional performance in domains like natural language processing and computer vision. These advances present a transformative opportunity for brain signal analysis. Unlike traditional supervised learning, which is limited by the scarcity of labeled neural data, SSL offers a promising solution by enabling models to learn meaningful representations from unlabeled data. This is particularly valuable in addressing the unique challenges of brain signals, including high noise levels, inter-subject variability, and low signal-to-noise ratios. This survey systematically reviews the emerging field of bridging brain signals with foundation models through the innovative application of SSL. It explores key SSL techniques, the development of brain-specific foundation models, their adaptation to downstream tasks, and the integration of brain signals with other modalities in multimodal SSL frameworks. The review also covers commonly used evaluation metrics and benchmark datasets that support comparative analysis. Finally, it highlights key challenges and outlines future research directions. This work aims to provide researchers with a structured understanding of this rapidly evolving field and a roadmap for developing generalizable brain foundation models powered by self-supervision.
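Many of the SSL techniques the survey covers follow a contrastive recipe: embeddings of two augmented views of the same signal window are pulled together while other windows in the batch are pushed apart. A minimal, hypothetical numpy sketch of such an InfoNCE-style objective on toy embeddings (not the survey's own formulation):

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive (InfoNCE-style) loss over paired embeddings.

    z1, z2: (N, D) arrays holding embeddings of two augmented views of
    the same N signal windows. Row i of z1 and row i of z2 form the
    positive pair; all other pairings serve as negatives.
    """
    # L2-normalize so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (N, N) similarity matrix
    # Cross-entropy with the diagonal (matching pairs) as targets
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
anchor = rng.standard_normal((8, 16))
# Positive views: the anchor plus small noise -> loss should be low
loss_pos = info_nce_loss(anchor, anchor + 0.01 * rng.standard_normal((8, 16)))
# Unrelated views: independent noise -> loss near log(N)
loss_neg = info_nce_loss(anchor, rng.standard_normal((8, 16)))
print(loss_pos < loss_neg)
```

The appeal for brain signals is that the augmentations (e.g., channel dropout, temporal cropping) require no labels, which is exactly the property the abstract highlights for scarce neural data.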
Related papers
- AdaBrain-Bench: Benchmarking Brain Foundation Models for Brain-Computer Interface Applications [52.91583053243446]
Non-invasive Brain-Computer Interfaces (BCI) offer a safe and accessible means of connecting the human brain to external devices. Recently, the adoption of self-supervised pre-training is transforming the landscape of non-invasive BCI research. AdaBrain-Bench is a standardized benchmark to evaluate brain foundation models in widespread non-invasive BCI tasks.
arXiv Detail & Related papers (2025-07-14T03:37:41Z)
- CSBrain: A Cross-scale Spatiotemporal Brain Foundation Model for EEG Decoding [57.90382885533593]
We propose a Cross-scale Spatiotemporal Brain foundation model for generalized EEG signal decoding. We show that CSBrain consistently outperforms task-specific and foundation model baselines. These results establish cross-scale modeling as a key inductive bias and position CSBrain as a robust backbone for future brain-AI research.
arXiv Detail & Related papers (2025-06-29T03:29:34Z)
- Concept-Guided Interpretability via Neural Chunking [54.73787666584143]
We show that neural networks exhibit patterns in their raw population activity that mirror regularities in the training data. We propose three methods to extract these emerging entities, complementing each other based on label availability and dimensionality. Our work points to a new direction for interpretability, one that harnesses both cognitive principles and the structure of naturalistic data.
arXiv Detail & Related papers (2025-05-16T13:49:43Z)
- Brain Foundation Models: A Survey on Advancements in Neural Signal Processing and Brain Discovery [20.558821847407895]
Brain foundation models (BFMs) have emerged as a transformative paradigm in computational neuroscience. BFMs leverage large-scale pre-training techniques, allowing them to generalize effectively across multiple scenarios, tasks, and modalities. In this survey, we define BFMs for the first time, providing a clear and concise framework for constructing and utilizing these models in various applications.
arXiv Detail & Related papers (2025-03-01T18:12:50Z)
- Generative forecasting of brain activity enhances Alzheimer's classification and interpretation [16.09844316281377]
Resting-state functional magnetic resonance imaging (rs-fMRI) offers a non-invasive method to monitor neural activity.
Deep learning has shown promise in capturing these representations.
In this study, we focus on time series forecasting of independent component networks derived from rs-fMRI as a form of data augmentation.
arXiv Detail & Related papers (2024-10-30T23:51:31Z)
- Brain-like Functional Organization within Large Language Models [58.93629121400745]
The human brain has long inspired the pursuit of artificial intelligence (AI).
Recent neuroimaging studies provide compelling evidence of alignment between the computational representation of artificial neural networks (ANNs) and the neural responses of the human brain to stimuli.
In this study, we bridge this gap by directly coupling sub-groups of artificial neurons with functional brain networks (FBNs).
This framework links the AN sub-groups to FBNs, enabling the delineation of brain-like functional organization within large language models (LLMs).
arXiv Detail & Related papers (2024-10-25T13:15:17Z)
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- TE-SSL: Time and Event-aware Self Supervised Learning for Alzheimer's Disease Progression Analysis [6.6584447062231895]
Alzheimer's Dementia (AD) represents one of the most pressing challenges in the field of neurodegenerative disorders.
Recent advancements in deep learning and various representation learning strategies, including self-supervised learning (SSL), have shown significant promise in enhancing medical image analysis.
We propose a novel framework, Time and Event-aware SSL (TE-SSL), which integrates time-to-event and event data as supervisory signals to refine the learning process.
arXiv Detail & Related papers (2024-07-09T13:41:32Z)
- UMBRAE: Unified Multimodal Brain Decoding [43.6339793925953]
We propose UMBRAE, a unified framework for multimodal decoding of brain signals.
We introduce an efficient universal brain encoder for multimodal-brain alignment.
We also introduce a cross-subject training strategy mapping subject-specific features to a common feature space.
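A cross-subject strategy of this kind can be pictured as a per-subject linear adapter that projects each subject's channel layout into one shared feature space. A purely illustrative numpy sketch (subject IDs, dimensions, and the linear form are assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: subjects record with different channel counts, so a
# per-subject linear map projects their features into a shared D-dim space.
D = 32
subject_channels = {"sub01": 64, "sub02": 128}
adapters = {s: rng.standard_normal((c, D)) * 0.1
            for s, c in subject_channels.items()}

def to_common_space(features, subject):
    """Map subject-specific features of shape (T, C_subject) to (T, D)."""
    return features @ adapters[subject]

# Both subjects end up in the same 32-dim space despite different inputs
f1 = to_common_space(rng.standard_normal((10, 64)), "sub01")
f2 = to_common_space(rng.standard_normal((10, 128)), "sub02")
print(f1.shape, f2.shape)
```

Once features share a space, a single decoder can be trained across subjects, which is the point of mapping subject-specific features to a common representation.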
arXiv Detail & Related papers (2024-04-10T17:59:20Z)
- Self-Supervision for Tackling Unsupervised Anomaly Detection: Pitfalls and Opportunities [50.231837687221685]
Self-supervised learning (SSL) has transformed machine learning and its many real world applications.
Unsupervised anomaly detection (AD) has also capitalized on SSL, by self-generating pseudo-anomalies.
arXiv Detail & Related papers (2023-08-28T07:55:01Z)
- Understanding and Improving the Role of Projection Head in Self-Supervised Learning [77.59320917894043]
Self-supervised learning (SSL) aims to produce useful feature representations without access to human-labeled data annotations.
Current contrastive learning approaches append a parametrized projection head to the end of some backbone network to optimize the InfoNCE objective.
This raises a fundamental question: Why is a learnable projection head required if we are to discard it after training?
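The setup behind that question can be sketched concretely: a backbone encoder produces a representation h, a small MLP projection head maps h to z, the contrastive objective is computed on z, and at transfer time the head is discarded and h is kept. A minimal, hypothetical numpy sketch (the toy layers and sizes are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def backbone(x, W):
    """Stand-in encoder: one linear layer + ReLU -> representation h."""
    return np.maximum(x @ W, 0.0)

def projection_head(h, W1, W2):
    """2-layer MLP head: h -> z, used only for the contrastive loss."""
    return np.maximum(h @ W1, 0.0) @ W2

x = rng.standard_normal((4, 32))          # a batch of inputs
W = rng.standard_normal((32, 64)) * 0.1   # backbone weights
W1 = rng.standard_normal((64, 64)) * 0.1  # head weights
W2 = rng.standard_normal((64, 16)) * 0.1

h = backbone(x, W)              # kept for downstream tasks after training
z = projection_head(h, W1, W2)  # fed to the InfoNCE objective, then discarded
print(h.shape, z.shape)
```

The puzzle the paper studies is visible here: only z ever sees the training objective, yet h is what transfers, so the head must be shaping h indirectly.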
arXiv Detail & Related papers (2022-12-22T05:42:54Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Evaluating deep transfer learning for whole-brain cognitive decoding [11.898286908882561]
Transfer learning (TL) is well-suited to improve the performance of deep learning (DL) models in datasets with small numbers of samples.
Here, we evaluate TL for the application of DL models to the decoding of cognitive states from whole-brain functional Magnetic Resonance Imaging (fMRI) data.
arXiv Detail & Related papers (2021-11-01T15:44:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.