CRIA: A Cross-View Interaction and Instance-Adapted Pre-training Framework for Generalizable EEG Representations
- URL: http://arxiv.org/abs/2506.16056v1
- Date: Thu, 19 Jun 2025 06:31:08 GMT
- Title: CRIA: A Cross-View Interaction and Instance-Adapted Pre-training Framework for Generalizable EEG Representations
- Authors: Puchun Liu, C. L. Philip Chen, Yubin He, Tong Zhang
- Abstract summary: CRIA is an adaptive framework that utilizes variable-length and variable-channel coding to achieve a unified representation of EEG data across different datasets. The model employs a cross-attention mechanism to fuse temporal, spectral, and spatial features effectively. Experimental results on the Temple University EEG corpus and the CHB-MIT dataset show that CRIA outperforms existing methods under the same pre-training conditions.
- Score: 52.251569042852815
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Extracting deep features from EEG data and effectively integrating information from multiple views are significant challenges for building a generalizable pre-training framework for EEG representation learning. Moreover, most existing pre-training methods rely solely on the contextual semantics of a single view; they fail to capture the complex, synergistic interactions among different perspectives, which limits the expressiveness and generalization of the learned representations. To address these issues, this paper proposes CRIA, an adaptive framework that utilizes variable-length and variable-channel coding to achieve a unified representation of EEG data across different datasets. In this work, we define cross-view information as the integrated representation that emerges from the interaction among temporal, spectral, and spatial views of EEG signals. The model employs a cross-attention mechanism to fuse temporal, spectral, and spatial features effectively, and combines an attention-matrix masking strategy based on the information bottleneck principle with a novel viewpoint masking pre-training scheme. Experimental results on the Temple University EEG corpus and the CHB-MIT dataset show that CRIA outperforms existing methods under the same pre-training conditions, achieving a balanced accuracy of 57.02% for multi-class event classification and 80.03% for anomaly detection, highlighting its strong generalization ability.
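As a concrete illustration of the fusion step, here is a minimal, hypothetical PyTorch sketch of cross-view fusion via cross-attention, in which each view's token sequence queries the concatenation of the other two views. The module name, dimensions, and toy usage are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of cross-view fusion with cross-attention.
# Module name, dimensions, and shapes are illustrative assumptions,
# not the CRIA reference implementation.
import torch
import torch.nn as nn

class CrossViewFusion(nn.Module):
    """Fuse temporal, spectral, and spatial EEG token sequences."""
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, temporal, spectral, spatial):
        views = {"temporal": temporal, "spectral": spectral, "spatial": spatial}
        fused = []
        for name, query in views.items():
            # Each view attends to the concatenation of the other two views,
            # so its updated tokens encode cross-view interactions.
            kv = torch.cat([v for n, v in views.items() if n != name], dim=1)
            out, _ = self.attn(query, kv, kv)
            fused.append(self.norm(query + out))  # residual + layer norm
        return torch.cat(fused, dim=1)

# Toy usage: batch of 8 recordings, 50 tokens per view, embedding size 128.
t, f, s = (torch.randn(8, 50, 128) for _ in range(3))
z = CrossViewFusion()(t, f, s)  # -> shape (8, 150, 128)
```

An attention-matrix masking strategy such as the one the abstract describes could plausibly be wired in through the `attn_mask` argument of `nn.MultiheadAttention`, though the paper's exact masking rule is not reproduced here.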
Related papers
- HAMLET-FFD: Hierarchical Adaptive Multi-modal Learning Embeddings Transformation for Face Forgery Detection [6.060036926093259]
HAMLET-FFD is a cross-domain generalization framework for face forgery detection. It integrates visual evidence with conceptual cues, emulating expert forensic analysis. By design, HAMLET-FFD freezes all pretrained parameters, serving as an external plugin.
arXiv Detail & Related papers (2025-07-28T15:09:52Z) - FACE: Few-shot Adapter with Cross-view Fusion for Cross-subject EEG Emotion Recognition [57.08108545219043]
Cross-subject EEG emotion recognition is challenged by significant inter-subject variability and intricately entangled intra-subject variability. Recent few-shot learning paradigms attempt to address these limitations but often encounter catastrophic overfitting during subject-specific adaptation with limited samples. This article introduces a few-shot adapter with cross-view fusion, called FACE, for cross-subject EEG emotion recognition.
arXiv Detail & Related papers (2025-03-24T03:16:52Z) - SplitSEE: A Splittable Self-supervised Framework for Single-Channel EEG Representation Learning [8.373376507515347]
SplitSEE is a self-supervised framework for effective temporal-frequency representation learning in single-channel EEG.
It learns representations solely from single-channel EEG yet even outperforms multi-channel baselines.
It achieves high and stable performance using partial model layers.
arXiv Detail & Related papers (2024-10-15T02:34:33Z) - Rotated Multi-Scale Interaction Network for Referring Remote Sensing Image Segmentation [63.15257949821558]
Referring Remote Sensing Image Segmentation (RRSIS) is a new challenge that combines computer vision and natural language processing.
Traditional Referring Image Segmentation (RIS) approaches have been impeded by the complex spatial scales and orientations found in aerial imagery.
We introduce the Rotated Multi-Scale Interaction Network (RMSIN), an innovative approach designed for the unique demands of RRSIS.
arXiv Detail & Related papers (2023-12-19T08:14:14Z) - CSLP-AE: A Contrastive Split-Latent Permutation Autoencoder Framework for Zero-Shot Electroencephalography Signal Conversion [49.1574468325115]
A key aim in EEG analysis is to extract the underlying neural activation (content) as well as to account for the individual subject variability (style).
Inspired by recent advancements in voice conversion technologies, we propose a novel contrastive split-latent permutation autoencoder (CSLP-AE) framework that directly optimizes for EEG conversion.
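As a rough sketch of the split-latent permutation idea (layer choices and names below are placeholders, not the CSLP-AE reference code): each recording's latent is split into a content half and a style half, and decoding with the style halves swapped performs the conversion.

```python
# Placeholder sketch of split-latent permutation (not the CSLP-AE code):
# split each latent into content (neural activity) and style (subject),
# then decode with styles swapped to convert between subjects.
import torch
import torch.nn as nn

class SplitLatentAE(nn.Module):
    def __init__(self, in_dim: int = 256, lat_dim: int = 64):
        super().__init__()
        self.enc = nn.Linear(in_dim, 2 * lat_dim)  # toy encoder
        self.dec = nn.Linear(2 * lat_dim, in_dim)  # toy decoder

    def forward(self, x_a, x_b):
        c_a, s_a = self.enc(x_a).chunk(2, dim=-1)  # content/style of A
        c_b, s_b = self.enc(x_b).chunk(2, dim=-1)  # content/style of B
        # Permute the style halves: A's content rendered in B's style,
        # and vice versa.
        x_ab = self.dec(torch.cat([c_a, s_b], dim=-1))
        x_ba = self.dec(torch.cat([c_b, s_a], dim=-1))
        return x_ab, x_ba
```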
arXiv Detail & Related papers (2023-11-13T22:46:43Z) - Learning Multiscale Consistency for Self-supervised Electron Microscopy Instance Segmentation [48.267001230607306]
We propose a pretraining framework that enhances multiscale consistency in EM volumes.
Our approach leverages a Siamese network architecture, integrating strong and weak data augmentations.
It effectively captures voxel and feature consistency, showing promise for learning transferable representations for EM analysis.
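The core consistency objective can be sketched as follows; the encoder and augmentation callables are assumed placeholders, not the paper's exact pipeline.

```python
# Hedged sketch of strong/weak-augmentation feature consistency
# (encoder and augmentations are placeholders, not the paper's pipeline).
import torch.nn.functional as F

def consistency_loss(encoder, volume, weak_aug, strong_aug):
    """Pull features of a weakly and a strongly augmented view together."""
    z_weak = F.normalize(encoder(weak_aug(volume)), dim=-1)
    z_strong = F.normalize(encoder(strong_aug(volume)), dim=-1)
    # Cosine distance between the two views' embeddings.
    return 1.0 - (z_weak * z_strong).sum(dim=-1).mean()
```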
arXiv Detail & Related papers (2023-08-19T05:49:13Z) - Surface EMG-Based Inter-Session/Inter-Subject Gesture Recognition by Leveraging Lightweight All-ConvNet and Transfer Learning [17.535392299244066]
Gesture recognition using low-resolution instantaneous HD-sEMG images opens up new avenues for the development of more fluid and natural muscle-computer interfaces.
The data variability between inter-session and inter-subject scenarios presents a great challenge.
Existing approaches employ very large and complex deep ConvNets or 2SRNN-based domain adaptation methods to approximate the distribution shift caused by this inter-session and inter-subject variability.
We propose a lightweight All-ConvNet+TL model that leverages a lightweight All-ConvNet and transfer learning (TL) to enhance inter-session and inter-subject gesture recognition.
arXiv Detail & Related papers (2023-05-13T21:47:55Z) - Improved Speech Emotion Recognition using Transfer Learning and Spectrogram Augmentation [56.264157127549446]
Speech emotion recognition (SER) is a challenging task that plays a crucial role in natural human-computer interaction.
One of the main challenges in SER is data scarcity.
We propose a transfer learning strategy combined with spectrogram augmentation.
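A SpecAugment-style masking function gives the flavor of such spectrogram augmentation; the function name and mask sizes below are assumptions, and the paper's exact recipe may differ.

```python
# Assumed SpecAugment-style sketch (the paper's exact recipe may differ):
# zero out one random frequency band and one random time span.
import numpy as np

def augment_spectrogram(spec, max_f=8, max_t=20, rng=None):
    """spec: 2-D array (freq_bins, time_steps), larger than the mask sizes."""
    rng = np.random.default_rng() if rng is None else rng
    spec = spec.copy()
    f0 = rng.integers(0, spec.shape[0] - max_f)  # frequency-mask start
    t0 = rng.integers(0, spec.shape[1] - max_t)  # time-mask start
    spec[f0:f0 + rng.integers(1, max_f + 1), :] = 0.0
    spec[:, t0:t0 + rng.integers(1, max_t + 1)] = 0.0
    return spec
```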
arXiv Detail & Related papers (2021-08-05T10:39:39Z) - Subject Independent Emotion Recognition using EEG Signals Employing Attention Driven Neural Networks [2.76240219662896]
A novel deep learning framework for subject-independent emotion recognition is presented.
It employs a convolutional neural network (CNN) with an attention mechanism to perform the task.
The proposed approach has been validated using publicly available datasets.
arXiv Detail & Related papers (2021-06-07T09:41:15Z) - A Variational Information Bottleneck Approach to Multi-Omics Data Integration [98.6475134630792]
We propose a deep variational information bottleneck (IB) approach for incomplete multi-view observations.
Our method applies the IB framework to marginal and joint representations of the observed views, focusing on the intra-view and inter-view interactions that are relevant for the target.
Experiments on real-world datasets show that our method consistently achieves gains from data integration and outperforms state-of-the-art benchmarks.
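For reference, the classical information bottleneck objective that such variational approaches bound can be written as below; the paper's variational surrogate and weighting may differ.

```latex
% Classical information-bottleneck objective (the paper's variational
% bound and weighting may differ): keep the representation Z predictive
% of the target Y while compressing the input views X.
\max_{p(z \mid x)} \; I(Z; Y) - \beta \, I(Z; X)
```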
arXiv Detail & Related papers (2021-02-05T06:05:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.