Contrastive random lead coding for channel-agnostic self-supervision of biosignals
- URL: http://arxiv.org/abs/2410.19842v1
- Date: Mon, 21 Oct 2024 09:33:45 GMT
- Title: Contrastive random lead coding for channel-agnostic self-supervision of biosignals
- Authors: Thea Brüsch, Mikkel N. Schmidt, Tommy S. Alstrøm
- Abstract summary: We introduce contrastive random lead coding (CRLC) for channel-agnostic self-supervision of biosignals.
We validate our approach by pre-training models on EEG and ECG data, and then fine-tuning them for downstream tasks.
- Score: 3.101600812051321
- License:
- Abstract: Contrastive learning yields impressive results for self-supervision in computer vision. The approach relies on the creation of positive pairs, which is often achieved through augmentations. However, effective augmentations can be difficult to design for multivariate time series. Additionally, the number of input channels in biosignal datasets often varies from application to application, limiting the usefulness of large self-supervised models trained with specific channel configurations. Motivated by these challenges, we investigate strategies for creating positive pairs for channel-agnostic self-supervision of biosignals. We introduce contrastive random lead coding (CRLC), in which random subsets of the input channels are used to create positive pairs, and compare it with using augmentations and neighboring segments in time as positive pairs. We validate our approach by pre-training models on EEG and ECG data and then fine-tuning them for downstream tasks. CRLC outperforms competing strategies in both scenarios in the channel-agnostic setting. Notably, for EEG tasks CRLC surpasses the current state-of-the-art reference model. While the state-of-the-art reference model remains superior on the ECG task, incorporating CRLC allows us to obtain comparable results. In conclusion, CRLC helps generalization across variable channel setups when training our channel-agnostic model.
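The pairing strategy described in the abstract can be sketched in a few lines: draw two disjoint random channel subsets from each recording as a positive pair, embed each view with a channel-agnostic encoder, and score the batch with a standard InfoNCE objective. Everything below (the subset size, the mean-pooling placeholder encoder, the temperature) is illustrative, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_lead_views(x, n_leads):
    """Draw two disjoint random channel subsets ("leads") from a recording x
    of shape (channels, time) to form a positive pair. A hypothetical sketch
    of the pairing strategy named in the abstract."""
    channels = rng.permutation(x.shape[0])
    return x[channels[:n_leads]], x[channels[n_leads:2 * n_leads]]

def info_nce(za, zb, temperature=0.1):
    """Standard InfoNCE over a batch of embedding pairs (za[i], zb[i])."""
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    logits = za @ zb.T / temperature                     # (batch, batch) similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))           # positives on the diagonal

# Toy usage: 4 recordings, 8 channels, 128 samples, 2-lead views.
batch = [rng.standard_normal((8, 128)) for _ in range(4)]
views = [random_lead_views(x, n_leads=2) for x in batch]
embed = lambda view: view.mean(axis=0)  # placeholder channel-agnostic encoder
za = np.stack([embed(a) for a, b in views])
zb = np.stack([embed(b) for a, b in views])
loss = info_nce(za, zb)
```

Mean-pooling over channels is only used here because it accepts any number of input channels; the paper's actual encoder architecture is not specified in this summary.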
Related papers
- Channel-Aware Low-Rank Adaptation in Time Series Forecasting [43.684035409535696]
Two representative channel strategies are closely associated with model expressivity and robustness.
We present a channel-aware low-rank adaptation method to condition CD models on identity-aware individual components.
arXiv Detail & Related papers (2024-07-24T13:05:17Z)
- SOFTS: Efficient Multivariate Time Series Forecasting with Series-Core Fusion [59.96233305733875]
Time series forecasting plays a crucial role in various fields such as finance, traffic management, energy, and healthcare.
Several methods utilize mechanisms like attention or mixer to address this by capturing channel correlations.
This paper presents an efficient MLP-based model, the Series-cOre Fused Time Series forecaster (SOFTS).
arXiv Detail & Related papers (2024-04-22T14:06:35Z)
- From Similarity to Superiority: Channel Clustering for Time Series Forecasting [61.96777031937871]
We develop a novel and adaptable Channel Clustering Module (CCM).
CCM dynamically groups channels characterized by intrinsic similarities and leverages cluster information instead of individual channel identities.
CCM can boost the performance of CI and CD models by an average margin of 2.4% and 7.2% on long-term and short-term forecasting, respectively.
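The clustering idea — channels with similar series share weights via a cluster identity rather than an individual channel identity — can be illustrated with plain k-means over z-normalised channel series. This is a generic sketch, not the CCM layer itself; the init scheme and toy data are choices made here for determinism.

```python
import numpy as np

rng = np.random.default_rng(1)

def cluster_channels(x, k, iters=10):
    """Group the channels of x (channels, time) by similarity of their
    series; downstream weights could then be shared per cluster instead of
    per channel identity. Plain k-means, a stand-in for CCM itself."""
    # z-normalise each channel so clustering reflects shape, not scale
    z = (x - x.mean(axis=1, keepdims=True)) / (x.std(axis=1, keepdims=True) + 1e-8)
    if k == 2:
        # farthest-point init keeps this toy example deterministic
        centers = np.stack([z[0], z[((z - z[0]) ** 2).sum(axis=1).argmax()]])
    else:
        centers = z[:k].copy()
    for _ in range(iters):
        d = ((z[:, None, :] - centers[None]) ** 2).sum(axis=-1)  # (channels, k)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = z[labels == j].mean(axis=0)
    return labels

# Toy data: channels 0-3 follow a sine, channels 4-7 a cosine.
t = np.linspace(0, 2 * np.pi, 64)
x = np.vstack([np.sin(t) + 0.05 * rng.standard_normal(64) for _ in range(4)] +
              [np.cos(t) + 0.05 * rng.standard_normal(64) for _ in range(4)])
labels = cluster_channels(x, k=2)
```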
arXiv Detail & Related papers (2024-03-31T02:46:27Z)
- MCformer: Multivariate Time Series Forecasting with Mixed-Channels Transformer [8.329947472853029]
The Channel Independence (CI) strategy treats all channels as a single channel, expanding the dataset.
The Mixed Channels strategy combines the data expansion advantage of the CI strategy with the ability to counteract inter-channel correlation forgetting.
The model blends a specific number of channels, leveraging an attention mechanism to effectively capture inter-channel correlation information.
arXiv Detail & Related papers (2024-03-14T09:43:07Z)
- Physics-informed and Unsupervised Riemannian Domain Adaptation for Machine Learning on Heterogeneous EEG Datasets [53.367212596352324]
We propose an unsupervised approach leveraging EEG signal physics.
We map EEG channels to fixed positions using field interpolation, enabling source-free domain adaptation.
Our method demonstrates robust performance in brain-computer interface (BCI) tasks and potential biomarker applications.
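Mapping channels to fixed positions can be pictured as interpolating arbitrary electrode layouts onto a fixed montage. The inverse-distance weighting below is a generic stand-in for the field-based interpolation the paper describes; the 2D positions and constant signals are toy values.

```python
import numpy as np

def to_fixed_montage(signals, src_pos, dst_pos, p=2.0):
    """Map recordings with arbitrary electrode positions onto a fixed montage
    via inverse-distance-weighted interpolation. A generic sketch, not the
    physics-informed mapping from the paper."""
    # distances from every target position to every source electrode
    d = np.linalg.norm(dst_pos[:, None, :] - src_pos[None, :, :], axis=-1)
    w = 1.0 / (d ** p + 1e-8)
    w = w / w.sum(axis=1, keepdims=True)  # rows sum to 1
    return w @ signals                    # (n_dst, time)

# Toy usage: 3 source electrodes, 2 fixed target positions, 10 samples.
src_pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst_pos = np.array([[0.0, 0.0], [0.5, 0.5]])
signals = np.ones((3, 10))
out = to_fixed_montage(signals, src_pos, dst_pos)
```

Because the weights in each row sum to one, a constant field is reproduced exactly at every target position, which is a quick sanity check for any such interpolator.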
arXiv Detail & Related papers (2024-03-07T16:17:33Z)
- A Transformer Model for Boundary Detection in Continuous Sign Language [55.05986614979846]
The Transformer model is employed for both Isolated Sign Language Recognition and Continuous Sign Language Recognition.
The training process involves using isolated sign videos, where hand keypoint features extracted from the input video are enriched.
The trained model, coupled with a post-processing method, is then applied to detect isolated sign boundaries within continuous sign videos.
arXiv Detail & Related papers (2024-02-22T17:25:01Z)
- Augmentation-induced Consistency Regularization for Classification [25.388324221293203]
We propose a consistency regularization framework based on data augmentation, called CR-Aug.
CR-Aug forces the output distributions of different sub-models generated by data augmentation to be consistent with each other.
We implement CR-Aug to image and audio classification tasks and conduct extensive experiments to verify its effectiveness.
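The consistency constraint — matching the output distributions produced for two augmented views — can be sketched as a symmetric KL penalty between softmax outputs of a shared model. The linear "model", the jitter augmentation, and the choice of divergence here are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_a, logits_b):
    """Symmetric KL divergence between the class distributions produced for
    two augmented views of the same inputs; a minimal sketch of the
    consistency idea, not CR-Aug's exact loss."""
    pa, pb = softmax(logits_a), softmax(logits_b)
    kl = lambda p, q: np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(np.mean(0.5 * (kl(pa, pb) + kl(pb, pa))))

# Toy usage: a linear "model" applied to two jittered views of one batch.
W = rng.standard_normal((5, 3))                    # 5 features -> 3 classes
x = rng.standard_normal((4, 5))
view_a = x + 0.01 * rng.standard_normal(x.shape)   # weak augmentation
view_b = x + 0.01 * rng.standard_normal(x.shape)
loss = consistency_loss(view_a @ W, view_b @ W)
```

The symmetric form guarantees a non-negative penalty that is zero exactly when the two views yield identical distributions, which is what makes it usable as a regularizer alongside the usual classification loss.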
arXiv Detail & Related papers (2022-05-25T03:15:36Z)
- Learning to Perform Downlink Channel Estimation in Massive MIMO Systems [72.76968022465469]
We study downlink (DL) channel estimation in a Massive multiple-input multiple-output (MIMO) system.
A common approach is to use the mean value as the estimate, motivated by channel hardening.
We propose two novel estimation methods.
arXiv Detail & Related papers (2021-09-06T13:42:32Z)
- Learning Signal Representations for EEG Cross-Subject Channel Selection and Trial Classification [0.3553493344868413]
We introduce an algorithm for subject-independent channel selection of EEG recordings.
It exploits channel-specific 1D-Convolutional Neural Networks (1D-CNNs) as feature extractors in a supervised fashion to maximize class separability.
After training, the algorithm can be exploited by transferring only the parametrized subgroup of selected channel-specific 1D-CNNs to new signals from new subjects.
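The pipeline — per-channel 1D feature extractors scored for class separability, then only the selected subgroup transferred to new subjects — can be caricatured with one convolution kernel per channel and an energy-gap score. The kernels, the score, and the toy data are all hypothetical stand-ins for the trained 1D-CNNs and the supervised criterion.

```python
import numpy as np

rng = np.random.default_rng(3)

def conv1d(x, kernel):
    """Valid-mode 1D convolution standing in for a channel-specific 1D-CNN."""
    return np.convolve(x, kernel, mode="valid")

n_channels, kernel_size = 6, 5
# One "learned" kernel per channel (random here); unit-norm so feature
# energies are comparable across channels.
kernels = {c: rng.standard_normal(kernel_size) for c in range(n_channels)}
for c in kernels:
    kernels[c] = kernels[c] / np.linalg.norm(kernels[c])

def make_trials(amp, n_trials=8, n_samples=32):
    """Trials of shape (channels, time); only channel 0 has amplitude `amp`."""
    gain = np.ones((n_channels, 1))
    gain[0] = amp
    return [gain * rng.standard_normal((n_channels, n_samples)) for _ in range(n_trials)]

def channel_score(trials0, trials1, kernel):
    """Crude separability proxy: gap in mean feature energy between classes,
    a stand-in for the paper's supervised class-separability criterion."""
    energy = lambda trials: np.mean([np.mean(conv1d(x, kernel) ** 2) for x in trials])
    return abs(energy(trials0) - energy(trials1))

# Toy data: channel 0 carries the class difference, the rest are noise.
class0, class1 = make_trials(1.0), make_trials(3.0)
scores = {c: channel_score([t[c] for t in class0], [t[c] for t in class1], kernels[c])
          for c in range(n_channels)}
selected = sorted(scores, key=scores.get, reverse=True)[:2]
transferred = {c: kernels[c] for c in selected}  # subgroup carried to new subjects
```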
arXiv Detail & Related papers (2021-06-20T06:22:16Z)
- End-to-end learnable EEG channel selection with deep neural networks [72.21556656008156]
We propose a framework to embed the EEG channel selection in the neural network itself.
We deal with the discrete nature of this new optimization problem by employing continuous relaxations of the discrete channel selection parameters.
This generic approach is evaluated on two different EEG tasks.
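A common way to make a discrete channel choice differentiable is a temperature-controlled Gumbel-softmax over per-channel logits. The sketch below is that generic relaxation, offered on the assumption that it resembles the kind of continuous relaxation the paper employs; it is not the paper's exact selection layer.

```python
import numpy as np

rng = np.random.default_rng(4)

def gumbel_softmax(logits, temperature):
    """Temperature-controlled relaxation of a one-hot channel choice: as the
    temperature falls, samples approach a hard one-hot selection. A generic
    sketch of relaxing discrete channel-selection parameters."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape) + 1e-12) + 1e-12)
    z = (logits + g) / temperature
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Selection weights over 6 candidate channels; in an end-to-end model the
# logits would be learned jointly with the rest of the network.
logits = np.array([0.1, 0.2, 2.5, 0.0, -0.3, 0.4])
soft = gumbel_softmax(logits, temperature=5.0)   # smooth mixture of channels
hard = gumbel_softmax(logits, temperature=0.05)  # nearly one-hot

x = rng.standard_normal((6, 100))                # (channels, time)
selected_signal = soft @ x                       # differentiable "selected" channel
```

Annealing the temperature during training moves the selection from a soft mixture toward a hard channel subset while keeping gradients available throughout.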
arXiv Detail & Related papers (2021-02-11T13:44:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.