Multi-view self-supervised learning for multivariate variable-channel time series
- URL: http://arxiv.org/abs/2307.09614v2
- Date: Thu, 20 Jul 2023 11:36:52 GMT
- Title: Multi-view self-supervised learning for multivariate variable-channel time series
- Authors: Thea Brüsch, Mikkel N. Schmidt, Tommy S. Alstrøm
- Abstract summary: We propose learning one encoder to operate on all input channels individually.
We then use a message passing neural network to extract a single representation across channels.
We show that our method, combined with the TS2Vec loss, outperforms all other methods in most settings.
- Score: 1.094320514634939
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Labeling of multivariate biomedical time series data is a laborious and
expensive process. Self-supervised contrastive learning alleviates the need for
large, labeled datasets through pretraining on unlabeled data. However, for
multivariate time series data, the set of input channels often varies between
applications, and most existing work does not allow for transfer between
datasets with different sets of input channels. We propose learning one encoder
to operate on all input channels individually. We then use a message passing
neural network to extract a single representation across channels. We
demonstrate the potential of this method by pretraining our model on a dataset
with six EEG channels and then fine-tuning it on a dataset with two different
EEG channels. We compare models with and without the message passing neural
network across different contrastive loss functions. We show that our method,
combined with the TS2Vec loss, outperforms all other methods in most settings.
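The architecture described above can be sketched minimally in NumPy. This is an illustrative assumption-laden sketch, not the authors' implementation: the shared encoder is stood in for by a fixed linear projection with a nonlinearity (the paper uses a learned temporal encoder), the channel graph is assumed fully connected with mean-aggregated messages, and the TS2Vec contrastive training loss is omitted. The point it demonstrates is the one the abstract makes: the same encoder and message passing network produce a fixed-size representation regardless of how many input channels a dataset has.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_encoder(x, W):
    # x: (T,) single-channel series; W: (T, d) shared projection.
    # Stand-in for the paper's learned temporal encoder.
    return np.tanh(x @ W)  # (d,) channel embedding

def message_passing(H, n_steps=2):
    # H: (C, d) per-channel embeddings on an assumed fully connected
    # channel graph. Each step, every channel receives the mean of the
    # other channels' embeddings and mixes it with its own state.
    C = H.shape[0]
    for _ in range(n_steps):
        msgs = (H.sum(axis=0, keepdims=True) - H) / (C - 1)  # neighbor mean
        H = np.tanh(H + msgs)
    return H.mean(axis=0)  # readout: one representation across channels

T, d = 128, 16
W = rng.normal(scale=T ** -0.5, size=(T, d))  # shared across all channels

# Same encoder handles a 6-channel and a 2-channel recording, mirroring
# pretraining on six EEG channels and fine-tuning on two different ones.
z6 = message_passing(np.stack([shared_encoder(rng.normal(size=T), W) for _ in range(6)]))
z2 = message_passing(np.stack([shared_encoder(rng.normal(size=T), W) for _ in range(2)]))
print(z6.shape, z2.shape)  # both (16,)
```

Because the encoder weights and the message passing aggregation are channel-count agnostic, nothing in the computation graph is tied to the pretraining channel set, which is what enables transfer between datasets with different channels.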
Related papers
- Rethinking Transformers Pre-training for Multi-Spectral Satellite Imagery [78.43828998065071]
Recent advances in unsupervised learning have demonstrated the ability of large vision models to achieve promising results on downstream tasks.
Such pre-training techniques have also been explored recently in the remote sensing domain due to the availability of large amounts of unlabelled data.
In this paper, we revisit transformer pre-training and leverage multi-scale information that is effectively utilized with multiple modalities.
arXiv Detail & Related papers (2024-03-08T16:18:04Z)
- Physics-informed and Unsupervised Riemannian Domain Adaptation for Machine Learning on Heterogeneous EEG Datasets [53.367212596352324]
We propose an unsupervised approach leveraging EEG signal physics.
We map EEG channels to fixed positions via field interpolation, enabling source-free domain adaptation.
Our method demonstrates robust performance in brain-computer interface (BCI) tasks and potential biomarker applications.
arXiv Detail & Related papers (2024-03-07T16:17:33Z)
- BIOT: Cross-data Biosignal Learning in the Wild [36.22753628246332]
Current deep learning models for biosignals are typically specialized for specific datasets and clinical settings.
The proposed model is versatile and applicable to various biosignal learning settings across different datasets.
arXiv Detail & Related papers (2023-05-10T19:26:58Z)
- Long-Short Temporal Co-Teaching for Weakly Supervised Video Anomaly Detection [14.721615285883423]
Weakly supervised anomaly detection (WS-VAD) is a challenging problem that aims to learn VAD models only with video-level annotations.
Our proposed method is able to better deal with anomalies with varying durations as well as subtle anomalies.
arXiv Detail & Related papers (2023-03-31T13:28:06Z)
- String-based Molecule Generation via Multi-decoder VAE [56.465033997245776]
We investigate the problem of string-based molecular generation via variational autoencoders (VAEs).
We propose a simple, yet effective idea to improve the performance of VAE for the task.
In our experiments, the proposed VAE model particularly performs well for generating a sample from out-of-domain distribution.
arXiv Detail & Related papers (2022-08-23T03:56:30Z)
- Scalable Classifier-Agnostic Channel Selection for MTSC [7.94957965474334]
Current time series classification algorithms need hundreds of compute hours to complete training and prediction.
We propose and evaluate two methods for channel selection.
Channel selection is applied as a pre-processing step before training state-of-the-art MTSC algorithms.
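Classifier-agnostic channel selection as a pre-processing step can be sketched as follows. The centroid-distance scoring rule below is an illustrative assumption, not necessarily the cited paper's exact criterion; the property it illustrates is that channels are ranked and filtered without consulting the downstream classifier.

```python
import numpy as np

def select_channels(X, y, k):
    """Rank channels by between-class centroid distance; keep the top-k.

    X: (n_samples, n_channels, T) multivariate series; y: (n_samples,) labels.
    The scoring rule is a hypothetical stand-in: it never trains or queries
    the downstream classifier, i.e. it is classifier-agnostic.
    """
    classes = np.unique(y)
    # Per-class centroid series for every channel: (n_classes, n_channels, T)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    score = np.zeros(X.shape[1])
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            # Distance between class centroids, per channel
            score += np.linalg.norm(centroids[i] - centroids[j], axis=1)
    keep = np.argsort(score)[::-1][:k]
    return X[:, keep], keep

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 8, 50))
y = np.repeat([0, 1], 20)
X[y == 1, 3] += 2.0  # make channel 3 strongly class-discriminative
Xs, keep = select_channels(X, y, k=2)
print(keep[0], Xs.shape)  # channel 3 ranks first; (40, 2, 50)
```

The reduced array `Xs` is then what a state-of-the-art MTSC algorithm would be trained on, cutting compute roughly in proportion to the channels dropped.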
arXiv Detail & Related papers (2022-06-18T19:57:46Z)
- Multimodal Masked Autoencoders Learn Transferable Representations [127.35955819874063]
We propose a simple and scalable network architecture, the Multimodal Masked Autoencoder (M3AE).
M3AE learns a unified encoder for both vision and language data via masked token prediction.
We provide an empirical study of M3AE trained on a large-scale image-text dataset, and find that M3AE is able to learn generalizable representations that transfer well to downstream tasks.
arXiv Detail & Related papers (2022-05-27T19:09:42Z)
- Self-Supervised Multi-Object Tracking with Cross-Input Consistency [5.8762433393846045]
We propose a self-supervised learning procedure for training a robust multi-object tracking (MOT) model given only unlabeled video.
We construct two distinct inputs from the same video sequence, compute tracks by applying an RNN model independently on each input, and train the model to produce consistent tracks across the two inputs.
arXiv Detail & Related papers (2021-11-10T21:00:34Z)
- Multi-Channel End-to-End Neural Diarization with Distributed Microphones [53.99406868339701]
We replace Transformer encoders in EEND with two types of encoders that process a multi-channel input.
We also propose a model adaptation method using only single-channel recordings.
arXiv Detail & Related papers (2021-10-10T03:24:03Z)
- Learning from Heterogeneous EEG Signals with Differentiable Channel Reordering [51.633889765162685]
CHARM is a method for training a single neural network across inconsistent input channels.
We perform experiments on four EEG classification datasets and demonstrate the efficacy of CHARM.
arXiv Detail & Related papers (2020-10-21T12:32:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.