CLOCS: Contrastive Learning of Cardiac Signals Across Space, Time, and
Patients
- URL: http://arxiv.org/abs/2005.13249v3
- Date: Sun, 16 May 2021 13:12:14 GMT
- Title: CLOCS: Contrastive Learning of Cardiac Signals Across Space, Time, and
Patients
- Authors: Dani Kiyasseh, Tingting Zhu, David A. Clifton
- Abstract summary: We propose a family of contrastive learning methods, CLOCS, that encourages representations across space, time, and patients to be similar to one another.
We show that CLOCS consistently outperforms the state-of-the-art methods, BYOL and SimCLR, when performing a linear evaluation of, and fine-tuning on, downstream tasks.
Our training procedure naturally generates patient-specific representations that can be used to quantify patient-similarity.
- Score: 17.58391771585294
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The healthcare industry generates troves of unlabelled physiological data.
This data can be exploited via contrastive learning, a self-supervised
pre-training method that encourages representations of instances to be similar
to one another. We propose a family of contrastive learning methods, CLOCS,
that encourages representations across space, time, and patients to be
similar to one another. We show that CLOCS consistently outperforms the
state-of-the-art methods, BYOL and SimCLR, when performing a linear evaluation
of, and fine-tuning on, downstream tasks. We also show that CLOCS achieves
strong generalization performance with only 25% of labelled training data.
Furthermore, our training procedure naturally generates patient-specific
representations that can be used to quantify patient-similarity.
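The cross-patient idea can be illustrated with a toy NT-Xent-style contrastive loss in which any two segments recorded from the same patient form a positive pair. This is a simplified NumPy sketch, not the paper's exact CLOCS objective; `nt_xent_patient_loss` and its arguments are hypothetical names:

```python
import numpy as np

def nt_xent_patient_loss(z, patient_ids, temperature=0.1):
    """Toy NT-Xent-style loss: embeddings from the same patient are
    positives, all other embeddings in the batch are negatives.

    z: (N, D) array of segment embeddings; patient_ids: (N,) int array.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalise rows
    sim = z @ z.T / temperature                       # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    exp_sim = np.exp(sim)
    loss, n_pairs = 0.0, 0
    for i in range(len(z)):
        positives = patient_ids == patient_ids[i]
        positives[i] = False                          # a segment is not its own positive
        denom = exp_sim[i].sum()
        for j in np.where(positives)[0]:
            loss -= np.log(exp_sim[i, j] / denom)
            n_pairs += 1
    return loss / max(n_pairs, 1)
```

When same-patient segments map to similar embeddings, this loss is lower than for a shuffled assignment, which is the behaviour such patient-aware contrastive pre-training rewards.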
Related papers
- The Common Stability Mechanism behind most Self-Supervised Learning
Approaches [64.40701218561921]
We provide a framework to explain the stability mechanism of different self-supervised learning techniques.
We discuss the working mechanism of contrastive techniques like SimCLR, non-contrastive techniques like BYOL, SWAV, SimSiam, Barlow Twins, and DINO.
We formulate different hypotheses and test them using the Imagenet100 dataset.
arXiv Detail & Related papers (2024-02-22T20:36:24Z)
- VAE-IF: Deep feature extraction with averaging for fully unsupervised artifact detection in routinely acquired ICU time-series [1.9665926763554147]
We propose a novel fully unsupervised approach to detect artifacts in minute-by-minute resolution ICU data without prior labeling or signal-specific knowledge.
Our approach combines a variational autoencoder (VAE) and an isolation forest (IF) into a hybrid model to learn features and identify anomalies.
We show that our unsupervised approach achieves comparable sensitivity to fully supervised methods and generalizes well to an external dataset.
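The VAE-plus-isolation-forest pipeline can be sketched roughly as follows. PCA is used here as a stand-in for the trained VAE encoder, purely for brevity; `detect_artifacts` and its parameters are illustrative, not the paper's code:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest

def detect_artifacts(windows, n_components=2, contamination=0.05, seed=0):
    """Compress fixed-length signal windows to low-dimensional features,
    then flag outlying windows with an isolation forest.

    windows: (N, T) array of signal segments.
    Returns a boolean mask, True where a window looks like an artifact.
    """
    # PCA stands in for the paper's VAE encoder (feature-extraction step)
    feats = PCA(n_components=n_components, random_state=seed).fit_transform(windows)
    iso = IsolationForest(contamination=contamination, random_state=seed)
    return iso.fit_predict(feats) == -1   # IsolationForest marks anomalies with -1
```

The appeal of the hybrid is that neither stage needs labels: the encoder learns a compact representation of "typical" windows, and the forest isolates whatever falls outside it.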
arXiv Detail & Related papers (2023-12-10T18:03:40Z)
- Learning Beyond Similarities: Incorporating Dissimilarities between
Positive Pairs in Self-Supervised Time Series Learning [4.2807943283312095]
This paper pioneers an SSL approach that transcends mere similarities by integrating dissimilarities among positive pairs.
The framework is applied to electrocardiogram (ECG) signals, leading to a notable enhancement of +10% in the detection accuracy of Atrial Fibrillation (AFib) across diverse subjects.
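One way to picture the idea is a loss that still pulls positive pairs together but keeps a small penalty against their difference collapsing entirely. This is a toy objective for intuition only, not the paper's actual loss; `alpha` and the retained-dissimilarity term are illustrative assumptions:

```python
import numpy as np

def positive_pair_loss(z1, z2, alpha=0.1):
    """Toy loss: maximise cosine similarity of each positive pair, while a
    small term discourages the pair's difference vector from shrinking to
    zero, retaining some dissimilarity between positives.

    z1, z2: (N, D) arrays of paired embeddings.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 * z2).sum(axis=1)              # cosine similarity per pair
    diff = np.linalg.norm(z1 - z2, axis=1)   # residual dissimilarity per pair
    return float((-sim + alpha * (1.0 - diff)).mean())
```

For a small `alpha`, identical pairs still achieve a lower loss than unrelated ones, so the pull-together behaviour of standard contrastive learning is preserved.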
arXiv Detail & Related papers (2023-09-14T08:49:35Z)
- VAESim: A probabilistic approach for self-supervised prototype discovery [0.23624125155742057]
We propose an architecture for image stratification based on a conditional variational autoencoder.
We use a continuous latent space to represent the continuum of disorders and find clusters during training, which can then be used for image/patient stratification.
We demonstrate that our method outperforms baselines in terms of kNN accuracy measured on a classification task against a standard VAE.
arXiv Detail & Related papers (2022-09-25T17:55:31Z)
- Federated Cycling (FedCy): Semi-supervised Federated Learning of
Surgical Phases [57.90226879210227]
FedCy is a federated semi-supervised learning (FSSL) method that combines federated learning and self-supervised learning to exploit a decentralized dataset of both labeled and unlabeled videos.
We demonstrate significant performance gains over state-of-the-art FSSL methods on the task of automatic recognition of surgical phases.
arXiv Detail & Related papers (2022-03-14T17:44:53Z)
- Evaluating Contrastive Learning on Wearable Timeseries for Downstream
Clinical Outcomes [10.864821932376833]
Self-supervised approaches that use contrastive losses, such as SimCLR and BYOL, can be applied to high-dimensional health signals.
We show that SimCLR outperforms the adversarial method and a fully-supervised method in the majority of the downstream evaluation tasks.
arXiv Detail & Related papers (2021-11-13T10:48:17Z)
- Bootstrapping Your Own Positive Sample: Contrastive Learning With
Electronic Health Record Data [62.29031007761901]
This paper proposes a novel contrastive regularized clinical classification model.
We introduce two unique positive sampling strategies specifically tailored for EHR data.
Our framework yields highly competitive experimental results in predicting the mortality risk on real-world COVID-19 EHR data.
arXiv Detail & Related papers (2021-04-07T06:02:04Z)
- DEALIO: Data-Efficient Adversarial Learning for Imitation from
Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm.
arXiv Detail & Related papers (2021-03-31T23:46:32Z)
- Uncovering the structure of clinical EEG signals with self-supervised
learning [64.4754948595556]
Supervised learning paradigms are often limited by the amount of labeled data that is available.
This phenomenon is particularly problematic for clinically relevant data such as electroencephalography (EEG).
By extracting information from unlabeled data, it might be possible to reach competitive performance with deep neural networks.
arXiv Detail & Related papers (2020-07-31T14:34:47Z)
- Temporal Phenotyping using Deep Predictive Clustering of Disease
Progression [97.88605060346455]
We develop a deep learning approach for clustering time-series data, where each cluster comprises patients who share similar future outcomes of interest.
Experiments on two real-world datasets show that our model achieves superior clustering performance over state-of-the-art benchmarks.
arXiv Detail & Related papers (2020-06-15T20:48:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.