Learning Beyond Similarities: Incorporating Dissimilarities between
Positive Pairs in Self-Supervised Time Series Learning
- URL: http://arxiv.org/abs/2309.07526v1
- Date: Thu, 14 Sep 2023 08:49:35 GMT
- Title: Learning Beyond Similarities: Incorporating Dissimilarities between
Positive Pairs in Self-Supervised Time Series Learning
- Authors: Adrian Atienza, Jakob Bardram, and Sadasivan Puthusserypady
- Abstract summary: This paper pioneers an SSL approach that transcends mere similarities by integrating dissimilarities among positive pairs.
The framework is applied to electrocardiogram (ECG) signals, leading to a notable enhancement of +10% in the detection accuracy of Atrial Fibrillation (AFib) across diverse subjects.
- Score: 4.2807943283312095
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: By identifying similarities between successive inputs, Self-Supervised
Learning (SSL) methods for time series analysis have demonstrated their
effectiveness in encoding the inherent static characteristics of temporal data.
However, an exclusive emphasis on similarities might result in representations
that overlook the dynamic attributes critical for modeling cardiovascular
diseases within a confined subject cohort. Introducing Distilled Encoding
Beyond Similarities (DEBS), this paper pioneers an SSL approach that transcends
mere similarities by integrating dissimilarities among positive pairs. The
framework is applied to electrocardiogram (ECG) signals, leading to a notable
enhancement of +10% in the detection accuracy of Atrial Fibrillation (AFib)
across diverse subjects. DEBS underscores the potential of attaining a more
refined representation by encoding the dynamic characteristics of time series
data, tapping into dissimilarities during the optimization process. Broadly,
the strategy delineated in this study holds the promise of unearthing novel
avenues for advancing SSL methodologies tailored to temporal data.
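The abstract does not spell out the DEBS objective here, but the core intuition (keep what is stable across a positive pair similar, while reserving part of the embedding for what changes between the two windows) can be sketched in a few lines. The following is a minimal PyTorch illustration, not the paper's actual loss: the static/dynamic split of the embedding, the `static_dim` boundary, and the exact form of both terms are assumptions made for this sketch.

```python
import torch
import torch.nn.functional as F


def debs_style_loss(z1: torch.Tensor, z2: torch.Tensor,
                    static_dim: int, lambda_dis: float = 1.0) -> torch.Tensor:
    """Illustrative similarity + dissimilarity loss for a positive pair.

    z1, z2: (B, D) embeddings of two temporally close windows of the same
    recording. The split into 'static' and 'dynamic' coordinates and the
    form of both terms are assumptions of this sketch, not the DEBS loss.
    """
    # Similarity term: static coordinates of a positive pair should agree.
    sim = F.cosine_similarity(z1[:, :static_dim], z2[:, :static_dim], dim=-1)
    loss_sim = (1.0 - sim).mean()

    # Dissimilarity term: push dynamic coordinates apart so the encoder
    # keeps capacity for attributes that change between the two windows.
    dis = F.cosine_similarity(z1[:, static_dim:], z2[:, static_dim:], dim=-1)
    loss_dis = dis.clamp(min=0.0).mean()

    return loss_sim + lambda_dis * loss_dis
```

On this reading, the similarity term plays the usual SSL role, while the dissimilarity term keeps the dynamic coordinates of two nearby ECG windows from collapsing onto each other, leaving room to encode rhythm changes of the kind AFib detection depends on.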
Related papers
- Contrastive Learning with Synthetic Positives [11.932323457691945]
Contrastive learning with the nearest neighbor has proved to be one of the most efficient self-supervised learning (SSL) techniques.
In this paper, we introduce a novel approach called Contrastive Learning with Synthetic Positives (NCLP).
NCLP utilizes synthetic images, generated by an unconditional diffusion model, as the additional positives to help the model learn from diverse positives.
arXiv Detail & Related papers (2024-08-30T01:47:43Z)
- Contrastive Learning Is Not Optimal for Quasiperiodic Time Series [4.2807943283312095]
We introduce Distilled Embedding for Almost-Periodic Time Series (DEAPS) in this paper.
DEAPS is a non-contrastive method tailored for quasiperiodic time series, such as electrocardiogram (ECG) data.
We demonstrate a notable improvement of +10% over existing SOTA methods when just a few annotated records are provided to fit a Machine Learning (ML) model.
arXiv Detail & Related papers (2024-07-24T08:02:41Z)
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam, a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
- Unraveling the Temporal Dynamics of the Unet in Diffusion Models [33.326244121918634]
Diffusion models introduce Gaussian noise into training data and reconstruct the original data iteratively.
Central to this iterative process is a single Unet, adapting across time steps to facilitate generation.
Recent work revealed the presence of composition and denoising phases in this generation process.
arXiv Detail & Related papers (2023-12-17T04:40:33Z)
- Time Associated Meta Learning for Clinical Prediction [78.99422473394029]
We propose a novel time associated meta learning (TAML) method to make effective predictions at multiple future time points.
To address the sparsity problem after task splitting, TAML employs a temporal information sharing strategy to augment the number of positive samples.
We demonstrate the effectiveness of TAML on multiple clinical datasets, where it consistently outperforms a range of strong baselines.
arXiv Detail & Related papers (2023-03-05T03:54:54Z)
- T-Phenotype: Discovering Phenotypes of Predictive Temporal Patterns in Disease Progression [82.85825388788567]
We develop a novel temporal clustering method, T-Phenotype, to discover phenotypes of predictive temporal patterns from labeled time-series data.
We show that T-Phenotype achieves the best phenotype discovery performance over all the evaluated baselines.
arXiv Detail & Related papers (2023-02-24T13:30:35Z)
- Training Strategies for Improved Lip-reading [61.661446956793604]
We investigate the performance of state-of-the-art data augmentation approaches, temporal models and other training strategies.
A combination of all the methods results in a classification accuracy of 93.4%, which is an absolute improvement of 4.6% over the current state-of-the-art performance.
An error analysis of the various training strategies reveals that the performance improves by increasing the classification accuracy of hard-to-recognise words.
arXiv Detail & Related papers (2022-09-03T09:38:11Z)
- Comparing Cross Correlation-Based Similarities [1.0152838128195467]
Correlations based on the real-valued multiset Jaccard and coincidence indices are compared.
Results have immediate implications not only in pattern recognition and deep learning, but also in scientific modeling in general.
arXiv Detail & Related papers (2021-11-08T08:50:13Z)
- ReSSL: Relational Self-Supervised Learning with Weak Augmentation [68.47096022526927]
Self-supervised learning has achieved great success in learning visual representations without data annotations.
We introduce a novel relational SSL paradigm that learns representations by modeling the relationship between different instances.
Our proposed ReSSL significantly outperforms the previous state-of-the-art algorithms in terms of both performance and training efficiency.
arXiv Detail & Related papers (2021-07-20T06:53:07Z)
- Bootstrapping Your Own Positive Sample: Contrastive Learning With Electronic Health Record Data [62.29031007761901]
This paper proposes a novel contrastive regularized clinical classification model.
We introduce two unique positive sampling strategies specifically tailored for EHR data.
Our framework yields highly competitive experimental results in predicting the mortality risk on real-world COVID-19 EHR data.
arXiv Detail & Related papers (2021-04-07T06:02:04Z)
- CLOCS: Contrastive Learning of Cardiac Signals Across Space, Time, and Patients [17.58391771585294]
We propose a family of contrastive learning methods, CLOCS, that encourage representations across space, time, and patients to be similar to one another.
We show that CLOCS consistently outperforms the state-of-the-art methods, BYOL and SimCLR, when performing a linear evaluation of, and fine-tuning on, downstream tasks.
Our training procedure naturally generates patient-specific representations that can be used to quantify patient-similarity.
arXiv Detail & Related papers (2020-05-27T09:25:41Z)
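As a concrete illustration of the patient-aware contrastive objectives that several of the entries above describe (CLOCS in particular), the following sketch treats two segments from the same patient as a positive pair and every other patient in the batch as a negative. It is a generic NT-Xent-style loss written for illustration, not the exact CLOCS objective; the pairing of `z_a[i]` with `z_b[i]` by patient is an assumption of this sketch.

```python
import torch
import torch.nn.functional as F


def patient_contrastive_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                             temperature: float = 0.1) -> torch.Tensor:
    """NT-Xent-style loss for patient-aware positive pairs.

    z_a[i] and z_b[i] are assumed to embed two segments of the same
    patient (e.g. different times or leads); other rows act as negatives.
    This is a generic sketch, not the exact CLOCS objective.
    """
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.T / temperature          # (B, B) cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # Symmetrized cross-entropy: the matching patient sits on the diagonal.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.T, targets))
```

Because positives are defined per patient rather than per augmented view, embeddings trained this way tend to cluster by patient, which is consistent with the patient-similarity use mentioned in the CLOCS summary above.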