SCEHR: Supervised Contrastive Learning for Clinical Risk Prediction
using Electronic Health Records
- URL: http://arxiv.org/abs/2110.04943v1
- Date: Mon, 11 Oct 2021 00:32:17 GMT
- Title: SCEHR: Supervised Contrastive Learning for Clinical Risk Prediction
using Electronic Health Records
- Authors: Chengxi Zang, Fei Wang
- Abstract summary: We extend the supervised contrastive learning framework to clinical risk prediction problems based on longitudinal electronic health records (EHR).
Our proposed loss functions improve the performance of strong baselines and even state-of-the-art models on benchmark clinical risk prediction tasks.
Our loss functions can easily replace the (binary or multi-label) cross-entropy loss used in existing clinical predictive models.
- Score: 24.35874264767865
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Contrastive learning has demonstrated promising performance in image and text
domains, in either a self-supervised or a supervised manner. In this work, we
extend the supervised contrastive learning framework to clinical risk
prediction problems based on longitudinal electronic health records (EHR). We
propose a general supervised contrastive loss $\mathcal{L}_{\text{Contrastive
Cross Entropy}} + \lambda \mathcal{L}_{\text{Supervised Contrastive
Regularizer}}$ for learning both binary classification (e.g., in-hospital
mortality prediction) and multi-label classification (e.g., phenotyping) in a
unified framework. Our supervised contrastive loss implements the key idea of
contrastive learning, namely pulling similar samples closer and pushing
dissimilar ones apart, simultaneously through its two components:
$\mathcal{L}_{\text{Contrastive Cross Entropy}}$ contrasts samples with
learned anchors that represent the positive and negative clusters, and
$\mathcal{L}_{\text{Supervised Contrastive Regularizer}}$ contrasts samples
with each other according to their supervised labels. We propose two versions
of this supervised contrastive loss, and our experiments on real-world EHR
data demonstrate that the proposed loss functions improve the performance of
strong baselines and even state-of-the-art models on benchmark clinical risk
prediction tasks. Our loss functions work well with the extremely imbalanced
data that are common in clinical risk prediction problems, and they can easily
replace the (binary or multi-label) cross-entropy loss used in existing
clinical predictive models. The PyTorch code is released at
\url{https://github.com/calvin-zcx/SCEHR}.
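To make the objective concrete, below is a minimal PyTorch sketch of the combined loss $\mathcal{L}_{\text{Contrastive Cross Entropy}} + \lambda \mathcal{L}_{\text{Supervised Contrastive Regularizer}}$ for the binary case, based only on the description in the abstract. The anchor parametrization, temperature, default hyperparameters, and the class name SCEHRStyleLoss are illustrative assumptions, not the authors' released implementation; see the repository above for the official code.

```python
# Illustrative sketch only; names and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SCEHRStyleLoss(nn.Module):
    """Sketch of L_CCE + lambda * L_SCR for binary risk prediction.

    L_CCE contrasts each sample with two learned anchors representing
    the negative and positive clusters; L_SCR contrasts samples with
    each other according to their labels (SupCon-style).
    """

    def __init__(self, dim, lam=0.1, tau=0.5):
        super().__init__()
        self.anchors = nn.Parameter(torch.randn(2, dim))  # row 0: neg, row 1: pos
        self.lam = lam  # lambda, weight of the regularizer
        self.tau = tau  # temperature

    def forward(self, z, y):
        # z: (B, d) patient embeddings; y: (B,) long labels in {0, 1}
        z = F.normalize(z, dim=1)
        a = F.normalize(self.anchors, dim=1)

        # L_CCE: softmax over cosine similarities to the two anchors.
        l_cce = F.cross_entropy(z @ a.t() / self.tau, y)

        # L_SCR: pull same-label samples together, push others apart.
        sim = z @ z.t() / self.tau
        eye = torch.eye(len(y), dtype=torch.bool, device=z.device)
        exp_sim = sim.exp().masked_fill(eye, 0.0)  # exclude self-pairs
        log_prob = sim - exp_sim.sum(1, keepdim=True).log()
        pos = (y[:, None] == y[None, :]) & ~eye    # same-label pairs
        n_pos = pos.sum(1).clamp(min=1)            # guard: sample with no peers
        l_scr = -((pos.float() * log_prob).sum(1) / n_pos).mean()

        return l_cce + self.lam * l_scr
```

Setting lam to zero recovers a plain cross-entropy-style objective against the learned anchors, which is what makes the loss a drop-in replacement for binary cross entropy; the multi-label phenotyping variant mentioned in the abstract would apply the same pattern per label and is not shown here.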
Related papers
- Prototypical Contrastive Learning through Alignment and Uniformity for Recommendation [6.790779112538357]
We present Prototypical contrastive learning through Alignment and Uniformity for recommendation.
Specifically, we first propose prototypes as a latent space to ensure consistency across different augmentations of the original graph.
The absence of explicit negatives means that directly optimizing the consistency loss between instance and prototype could easily result in dimensional collapse issues.
arXiv Detail & Related papers (2024-02-03T08:19:26Z)
- Tuned Contrastive Learning [77.67209954169593]
We propose a novel contrastive loss function -- Tuned Contrastive Learning (TCL) loss.
TCL generalizes to multiple positives and negatives in a batch and offers parameters to tune and improve the gradient responses from hard positives and hard negatives.
We show how to extend TCL to the self-supervised setting and empirically compare it with various state-of-the-art self-supervised learning methods.
arXiv Detail & Related papers (2023-05-18T03:26:37Z)
- Hierarchical Semi-Supervised Contrastive Learning for Contamination-Resistant Anomaly Detection [81.07346419422605]
Anomaly detection aims at identifying deviant samples from the normal data distribution.
Contrastive learning has provided a successful way to learn sample representations that enable effective discrimination of anomalies.
We propose a novel hierarchical semi-supervised contrastive learning framework for contamination-resistant anomaly detection.
arXiv Detail & Related papers (2022-07-24T18:49:26Z)
- Positive-Negative Equal Contrastive Loss for Semantic Segmentation [8.664491798389662]
Previous works commonly design plug-and-play modules and structural losses to effectively extract and aggregate the global context.
We propose the Positive-Negative Equal contrastive loss (PNE loss), which increases the latent impact of positive embeddings on the anchor and treats positive and negative sample pairs equally.
We conduct comprehensive experiments and achieve state-of-the-art performance on two benchmark datasets.
arXiv Detail & Related papers (2022-07-04T13:51:29Z)
- Bootstrapping Semi-supervised Medical Image Segmentation with Anatomical-aware Contrastive Distillation [10.877450596327407]
We present ACTION, an Anatomical-aware ConTrastive dIstillatiON framework, for semi-supervised medical image segmentation.
We first develop an iterative contrastive distillation algorithm by softly labeling the negatives rather than using binary supervision between positive and negative pairs.
We also capture more semantically similar features from the randomly chosen negative set compared to the positives to enforce the diversity of the sampled data.
arXiv Detail & Related papers (2022-06-06T01:30:03Z)
- Semi-supervised Contrastive Learning with Similarity Co-calibration [72.38187308270135]
We propose a novel training strategy, termed Semi-supervised Contrastive Learning (SsCL).
SsCL combines the well-known contrastive loss in self-supervised learning with the cross-entropy loss in semi-supervised learning; a minimal sketch of this kind of combination follows the list below.
We show that SsCL produces more discriminative representations and is beneficial to few-shot learning.
arXiv Detail & Related papers (2021-05-16T09:13:56Z)
- Bootstrapping Your Own Positive Sample: Contrastive Learning With Electronic Health Record Data [62.29031007761901]
This paper proposes a novel contrastive regularized clinical classification model.
We introduce two unique positive sampling strategies specifically tailored for EHR data.
Our framework yields highly competitive experimental results in predicting the mortality risk on real-world COVID-19 EHR data.
arXiv Detail & Related papers (2021-04-07T06:02:04Z)
- Robust Pre-Training by Adversarial Contrastive Learning [120.33706897927391]
Recent work has shown that, when integrated with adversarial training, self-supervised pre-training can lead to state-of-the-art robustness.
We improve robustness-aware self-supervised pre-training by learning representations consistent under both data augmentations and adversarial perturbations.
arXiv Detail & Related papers (2020-10-26T04:44:43Z)
- Asymptotic Behavior of Adversarial Training in Binary Classification [41.7567932118769]
Adversarial training is considered to be the state-of-the-art method for defense against adversarial attacks.
Despite being successful in practice, several open problems remain in understanding the performance of adversarial training.
We derive precise theoretical predictions for the performance of adversarial training in binary classification.
arXiv Detail & Related papers (2020-10-26T01:44:20Z)
- Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
arXiv Detail & Related papers (2020-06-13T08:24:33Z)
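As referenced in the SsCL summary above, several of these related papers pair a contrastive term with cross entropy. The sketch below illustrates that general pattern with a standard NT-Xent loss over two augmented views combined with cross entropy on labeled data; the function names and the weight w are assumptions for illustration, not the formulation of any specific paper.

```python
# Illustrative sketch only; names and weighting are assumptions.
import torch
import torch.nn.functional as F


def nt_xent(z1, z2, tau=0.5):
    """Standard NT-Xent loss over two augmented views: the two views of
    each sample are positives; all other samples in the batch are negatives."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2B, d)
    sim = z @ z.t() / tau
    eye = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float('-inf'))     # remove self-similarity
    # The positive for row i is row i + n (and vice versa).
    target = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, target)


# Combination in the spirit of SsCL (the weight w is an assumption):
#   loss = F.cross_entropy(clf_logits, y_labeled) + w * nt_xent(z_view1, z_view2)
```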
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.