Foundation Models for Cross-Domain EEG Analysis Application: A Survey
- URL: http://arxiv.org/abs/2508.15716v2
- Date: Fri, 22 Aug 2025 08:07:44 GMT
- Title: Foundation Models for Cross-Domain EEG Analysis Application: A Survey
- Authors: Hongqi Li, Yitong Chen, Yujuan Wang, Weihang Ni, Haodong Zhang,
- Abstract summary: This study presents the first comprehensive modality-oriented taxonomy for foundation models in EEG analysis. We rigorously analyze each category's research ideas, theoretical foundations, and architectural innovations. Our work accelerates the translation of EEG foundation models into scalable, interpretable, and online actionable solutions.
- Score: 11.294318502037589
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Electroencephalography (EEG) analysis stands at the forefront of neuroscience and artificial intelligence research, where foundation models are reshaping the traditional EEG analysis paradigm by leveraging their powerful representational capacity and cross-modal generalization. However, the rapid proliferation of these techniques has led to a fragmented research landscape, characterized by diverse model roles, inconsistent architectures, and a lack of systematic categorization. To bridge this gap, this study presents the first comprehensive modality-oriented taxonomy for foundation models in EEG analysis, systematically organizing research advances based on output modalities of the native EEG decoding, EEG-text, EEG-vision, EEG-audio, and broader multimodal frameworks. We rigorously analyze each category's research ideas, theoretical foundations, and architectural innovations, while highlighting open challenges such as model interpretability, cross-domain generalization, and real-world applicability in EEG-based systems. By unifying this dispersed field, our work not only provides a reference framework for future methodology development but also accelerates the translation of EEG foundation models into scalable, interpretable, and online actionable solutions.
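The survey's modality-oriented taxonomy can be pictured as a simple data structure that groups foundation-model entries by output modality. The sketch below is illustrative only; the entry names are placeholders, not specific papers from the survey.

```python
# Minimal sketch of a modality-oriented taxonomy for EEG foundation models.
# The five categories follow the survey's abstract; all entries are hypothetical.
from collections import defaultdict
from dataclasses import dataclass

MODALITIES = ("native-eeg", "eeg-text", "eeg-vision", "eeg-audio", "multimodal")

@dataclass(frozen=True)
class FMEntry:
    name: str
    output_modality: str  # must be one of MODALITIES

    def __post_init__(self):
        if self.output_modality not in MODALITIES:
            raise ValueError(f"unknown modality: {self.output_modality}")

def group_by_modality(entries):
    """Organize entries under the taxonomy's output-modality categories."""
    groups = defaultdict(list)
    for e in entries:
        groups[e.output_modality].append(e.name)
    return dict(groups)

# Placeholder catalog for illustration only.
catalog = [
    FMEntry("ModelA", "native-eeg"),
    FMEntry("ModelB", "eeg-text"),
    FMEntry("ModelC", "native-eeg"),
]
print(group_by_modality(catalog))
```

Validating the modality at construction time keeps the catalog consistent with the taxonomy's fixed category set.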
Related papers
- WaveMind: Towards a Conversational EEG Foundation Model Aligned to Textual and Visual Modalities [55.00677513249723]
EEG signals simultaneously encode both cognitive processes and intrinsic neural states. We map EEG signals and their corresponding modalities into a unified semantic space to achieve generalized interpretation. The resulting model demonstrates robust classification accuracy while supporting flexible, open-ended conversations.
arXiv Detail & Related papers (2025-09-26T06:21:51Z)
- EEG-FM-Bench: A Comprehensive Benchmark for the Systematic Evaluation of EEG Foundation Models [16.433809341013113]
EEG-FM-Bench is the first comprehensive benchmark for the systematic and standardized evaluation of EEG foundation models (EEG-FMs). Our contributions are threefold: (1) we curate a diverse suite of downstream tasks and datasets from canonical EEG paradigms, implementing standardized processing and evaluation protocols within a unified open-source framework; (2) we benchmark prominent state-of-the-art foundation models to establish comprehensive baseline results for a clear comparison of the current landscape; (3) we perform qualitative analyses to provide insights into model behavior and inform future architectural design.
arXiv Detail & Related papers (2025-08-25T07:34:33Z)
- EEG Foundation Models: A Critical Review of Current Progress and Future Directions [4.096453902709292]
Self-supervised EEG encoders have sparked a transition towards general-purpose EEG foundation models (EEG-FMs). This study reviews 10 early EEG-FMs and presents a critical synthesis of their methodology, empirical findings, and outstanding research gaps.
arXiv Detail & Related papers (2025-07-15T22:52:44Z)
- Anomaly Detection and Generation with Diffusion Models: A Survey [51.61574868316922]
Anomaly detection (AD) plays a pivotal role across diverse domains, including cybersecurity, finance, healthcare, and industrial manufacturing. Recent advancements in deep learning, specifically diffusion models (DMs), have sparked significant interest. This survey aims to guide researchers and practitioners in leveraging DMs for innovative AD solutions across diverse applications.
arXiv Detail & Related papers (2025-06-11T03:29:18Z)
- Graph Foundation Models: A Comprehensive Survey [66.74249119139661]
Graph Foundation Models (GFMs) aim to bring scalable, general-purpose intelligence to structured data. This survey provides a comprehensive overview of GFMs, unifying diverse efforts under a modular framework. GFMs are poised to become foundational infrastructure for open-ended reasoning over structured data.
arXiv Detail & Related papers (2025-05-21T05:08:00Z)
- PyTDC: A multimodal machine learning training, evaluation, and inference platform for biomedical foundation models [59.17570021208177]
PyTDC is a machine-learning platform providing streamlined training, evaluation, and inference software for multimodal biological AI models. This paper discusses the components of PyTDC's architecture and, to our knowledge, the first-of-its-kind case study on the introduced single-cell drug-target nomination ML task.
arXiv Detail & Related papers (2025-05-08T18:15:38Z)
- A Survey of Model Architectures in Information Retrieval [64.75808744228067]
We focus on two key aspects: backbone models for feature extraction and end-to-end system architectures for relevance estimation. We trace the development from traditional term-based methods to modern neural approaches, particularly highlighting the impact of transformer-based models and subsequent large language models (LLMs). We conclude by discussing emerging challenges and future directions, including architectural optimizations for performance and scalability, handling of multimodal, multilingual data, and adaptation to novel application domains beyond traditional search paradigms.
arXiv Detail & Related papers (2025-02-20T18:42:58Z)
- Large Cognition Model: Towards Pretrained EEG Foundation Model [0.0]
We propose a transformer-based foundation model designed to generalize across diverse EEG datasets and downstream tasks. Our findings highlight the potential of pretrained EEG foundation models to accelerate advancements in neuroscience, personalized medicine, and BCI technology.
arXiv Detail & Related papers (2025-02-11T04:28:10Z)
- GEFM: Graph-Enhanced EEG Foundation Model [16.335330142000657]
Foundation models offer a promising solution by leveraging large-scale unlabeled data through pre-training. We propose Graph-Enhanced EEG Foundation Model (GEFM), a novel foundation model for EEG that integrates both temporal and inter-channel information. Our architecture combines Graph Neural Networks (GNNs), which effectively capture relational structures, with a masked autoencoder to enable efficient pre-training.
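The GNN-plus-masked-autoencoder idea described above can be sketched in toy form: propagate information across an electrode-adjacency graph, mask some channels, and reconstruct them from their neighbors. This is a minimal illustrative sketch with assumed shapes, random weights, and a single linear message-passing step; it is not GEFM's actual architecture.

```python
# Toy graph-enhanced masked autoencoder for EEG (illustrative only).
# All dimensions, the adjacency construction, and the single linear
# "GNN" layer are assumptions for the sketch.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_times, d_hidden = 8, 64, 16

# Symmetric channel adjacency (e.g., from electrode proximity), with self-loops.
A = (rng.random((n_channels, n_channels)) < 0.3).astype(float)
A = np.maximum(A, A.T) + np.eye(n_channels)
A_hat = np.diag(1.0 / A.sum(axis=1)) @ A  # row-normalized propagation matrix

# Toy EEG segment: channels x time samples.
x = rng.standard_normal((n_channels, n_times))

# Mask two channels (the masked-autoencoder pretraining signal).
mask = np.zeros(n_channels, dtype=bool)
mask[rng.choice(n_channels, size=2, replace=False)] = True
x_in = x.copy()
x_in[mask] = 0.0

# One graph message-passing layer + linear decoder (randomly initialized).
W_enc = rng.standard_normal((n_times, d_hidden)) * 0.1
W_dec = rng.standard_normal((d_hidden, n_times)) * 0.1

h = np.tanh(A_hat @ x_in @ W_enc)  # aggregate neighboring channels, embed
x_rec = A_hat @ h @ W_dec          # reconstruct every channel from graph context

# Pretraining objective: reconstruction error on the masked channels only.
loss = float(np.mean((x_rec[mask] - x[mask]) ** 2))
print(f"masked-channel MSE: {loss:.4f}")
```

Computing the loss only on masked channels forces the model to infer a channel's signal from its graph neighbors, which is the core of the masked pre-training idea.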
arXiv Detail & Related papers (2024-11-29T06:57:50Z)
- A Survey of Reasoning with Foundation Models [235.7288855108172]
Reasoning plays a pivotal role in various real-world settings such as negotiation, medical diagnosis, and criminal investigation.
We introduce seminal foundation models proposed or adaptable for reasoning.
We then delve into the potential future directions behind the emergence of reasoning abilities within foundation models.
arXiv Detail & Related papers (2023-12-17T15:16:13Z)
- Semantic interoperability based on the European Materials and Modelling Ontology and its ontological paradigm: Mereosemiotics [0.0]
The European Materials and Modelling Ontology (EMMO) has recently been advanced in the computational molecular engineering and multi-scale modelling communities as a top-level ontology. This work explores how top-level ontologies that are based on the same paradigm, i.e., the same set of fundamental ontological commitments, as the EMMO can be applied to models of physical systems and their use in computational engineering practice.
arXiv Detail & Related papers (2020-03-22T13:19:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.