Evaluating Contrastive Learning on Wearable Timeseries for Downstream
Clinical Outcomes
- URL: http://arxiv.org/abs/2111.07089v1
- Date: Sat, 13 Nov 2021 10:48:17 GMT
- Title: Evaluating Contrastive Learning on Wearable Timeseries for Downstream
Clinical Outcomes
- Authors: Kevalee Shah, Dimitris Spathis, Chi Ian Tang, Cecilia Mascolo
- Abstract summary: Self-supervised approaches that use contrastive losses, such as SimCLR and BYOL, can be applied to high-dimensional health signals.
We show that SimCLR outperforms the adversarial method and a fully-supervised method in the majority of the downstream evaluation tasks.
- Score: 10.864821932376833
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vast quantities of person-generated health data (wearables) are collected, but
annotating them to feed machine learning models is impractical.
This paper discusses ways in which self-supervised approaches that use
contrastive losses, such as SimCLR and BYOL, previously applied to the vision
domain, can be applied to high-dimensional health signals for downstream
classification tasks of various diseases spanning sleep, heart, and metabolic
conditions. To this end, we adapt the data augmentation step and the overall
architecture to suit the temporal nature of the data (wearable traces) and
evaluate on five downstream tasks, comparing against other state-of-the-art methods
including supervised learning and an adversarial unsupervised representation
learning method. We show that SimCLR outperforms the adversarial method and a
fully-supervised method in the majority of the downstream evaluation tasks, and
that all self-supervised methods outperform the fully-supervised methods. This
work provides a comprehensive benchmark for contrastive methods applied to the
wearable time-series domain, showing the promise of task-agnostic
representations for downstream clinical outcomes.
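The abstract describes adapting SimCLR's augmentation step to temporal wearable traces and training with a contrastive loss. As a minimal, hypothetical sketch of that setup (the augmentation choices, stand-in encoder, and all parameter values here are illustrative assumptions, not the paper's actual architecture), the NT-Xent loss over two augmented views of a batch of signal windows can be written as:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(x, rng):
    """Time-series augmentations in place of image crops/color jitter:
    additive jitter plus per-window random scaling (illustrative choices)."""
    jittered = x + rng.normal(0.0, 0.05, size=x.shape)
    scale = rng.uniform(0.9, 1.1, size=(x.shape[0], 1))
    return jittered * scale

def encode(x, w):
    """Stand-in encoder: one linear projection with a tanh nonlinearity.
    The paper's actual encoder would be a temporal deep network."""
    return np.tanh(x @ w)

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (SimCLR) contrastive loss over a batch of paired views."""
    z = np.concatenate([z1, z2], axis=0)              # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize
    sim = z @ z.T / temperature                       # cosine similarities
    n = z1.shape[0]
    sim[np.eye(2 * n, dtype=bool)] = -np.inf          # exclude self-pairs
    # the positive pair for sample i is its other view at index i +/- n
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

# Toy batch: 8 wearable-signal windows of 32 timesteps each
x = rng.normal(size=(8, 32))
w = rng.normal(size=(32, 16)) * 0.1
loss = nt_xent_loss(encode(augment(x, rng), w), encode(augment(x, rng), w))
print(float(loss))
```

Minimizing this loss pulls the two augmented views of each window together in embedding space while pushing apart views of different windows, which is what makes the learned representations task-agnostic.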
Related papers
- Boosting Few-Shot Learning with Disentangled Self-Supervised Learning and Meta-Learning for Medical Image Classification [8.975676404678374]
We present a strategy for improving the performance and generalization capabilities of models trained in low-data regimes.
The proposed method starts with a pre-training phase, where features learned in a self-supervised learning setting are disentangled to improve the robustness of the representations for downstream tasks.
We then introduce a meta-fine-tuning step, leveraging related classes between the meta-training and meta-testing phases while varying the level of granularity.
arXiv Detail & Related papers (2024-03-26T09:36:20Z)
- A Survey of the Impact of Self-Supervised Pretraining for Diagnostic Tasks with Radiological Images [71.26717896083433]
Self-supervised pretraining has been observed to be effective at improving feature representations for transfer learning.
This review summarizes recent research into its usage in X-ray, computed tomography, magnetic resonance, and ultrasound imaging.
arXiv Detail & Related papers (2023-09-05T19:45:09Z)
- B-Learner: Quasi-Oracle Bounds on Heterogeneous Causal Effects Under Hidden Confounding [51.74479522965712]
We propose a meta-learner called the B-Learner, which can efficiently learn sharp bounds on the CATE function under limits on hidden confounding.
We prove its estimates are valid, sharp, efficient, and have a quasi-oracle property with respect to the constituent estimators under more general conditions than existing methods.
arXiv Detail & Related papers (2023-04-20T18:07:19Z)
- Semi-Supervised Relational Contrastive Learning [8.5285439285139]
We present a novel semi-supervised learning model that leverages self-supervised contrastive loss and consistency.
We validate against the ISIC 2018 Challenge benchmark skin lesion classification and demonstrate the effectiveness of our method on varying amounts of labeled data.
arXiv Detail & Related papers (2023-04-11T08:14:30Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- On Modality Bias Recognition and Reduction [70.69194431713825]
We study the modality bias problem in the context of multi-modal classification.
We propose a plug-and-play loss function method, whereby the feature space for each label is adaptively learned.
Our method yields remarkable performance improvements compared with the baselines.
arXiv Detail & Related papers (2022-02-25T13:47:09Z)
- DEALIO: Data-Efficient Adversarial Learning for Imitation from Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm
arXiv Detail & Related papers (2021-03-31T23:46:32Z)
- Simulation-to-Real domain adaptation with teacher-student learning for endoscopic instrument segmentation [1.1047993346634768]
We introduce a teacher-student learning approach that learns jointly from annotated simulation data and unlabeled real data.
Empirical results on three datasets highlight the effectiveness of the proposed framework.
arXiv Detail & Related papers (2021-03-02T09:30:28Z)
- Self-supervised driven consistency training for annotation efficient histopathology image analysis [13.005873872821066]
Training a neural network with a large labeled dataset is still a dominant paradigm in computational histopathology.
We propose a self-supervised pretext task that harnesses the underlying multi-resolution contextual cues in histology whole-slide images to learn a powerful supervisory signal for unsupervised representation learning.
We also propose a new teacher-student semi-supervised consistency paradigm that learns to effectively transfer the pretrained representations to downstream tasks based on prediction consistency with the task-specific un-labeled data.
arXiv Detail & Related papers (2021-02-07T19:46:21Z)
- Improving colonoscopy lesion classification using semi-supervised deep learning [2.568264809297699]
Recent work in semi-supervised learning has shown that meaningful representations of images can be obtained from training with large quantities of unlabeled data.
We demonstrate that an unsupervised jigsaw learning task, in combination with supervised training, results in up to a 9.8% improvement in correctly classifying lesions in colonoscopy images.
arXiv Detail & Related papers (2020-09-07T15:25:35Z)
- Generalization Bounds and Representation Learning for Estimation of Potential Outcomes and Causal Effects [61.03579766573421]
We study estimation of individual-level causal effects, such as a single patient's response to alternative medication.
We devise representation learning algorithms that minimize our bound, by regularizing the representation's induced treatment group distance.
We extend these algorithms to simultaneously learn a weighted representation to further reduce treatment group distances.
arXiv Detail & Related papers (2020-01-21T10:16:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.