TINC: Temporally Informed Non-Contrastive Learning for Disease Progression Modeling in Retinal OCT Volumes
- URL: http://arxiv.org/abs/2206.15282v1
- Date: Thu, 30 Jun 2022 13:42:09 GMT
- Title: TINC: Temporally Informed Non-Contrastive Learning for Disease Progression Modeling in Retinal OCT Volumes
- Authors: Taha Emre, Arunava Chakravarty, Antoine Rivail, Sophie Riedl, Ursula Schmidt-Erfurth, and Hrvoje Bogunović
- Abstract summary: Non-contrastive methods implicitly incorporate negatives in the loss, allowing different images and modalities as pairs.
We exploited the temporal information already present in a longitudinal optical coherence tomography dataset using a temporally informed non-contrastive loss.
Our model outperforms existing models in predicting the risk of conversion within a time frame from intermediate age-related macular degeneration (AMD) to the late wet-AMD stage.
- Score: 4.397304270654923
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Recent contrastive learning methods have achieved state-of-the-art results in low-label regimes. However, training them requires large batch sizes and heavy augmentations to create multiple views of an image. With non-contrastive methods, the negatives are implicitly incorporated in the loss, allowing different images and modalities to serve as pairs. Although meta-information (e.g., age, sex) is abundant in medical imaging, the annotations are noisy and prone to class imbalance. In this work, we exploit the temporal information already present in a longitudinal optical coherence tomography (OCT) dataset (different visits from the same patient) via a temporally informed non-contrastive loss (TINC), without increasing complexity or requiring negative pairs. Moreover, our novel pair-forming scheme avoids heavy augmentations and implicitly incorporates the temporal information into the pairs. Finally, the representations learned during this pretraining are more successful in predicting disease progression, where temporal information is crucial for the downstream task. More specifically, our model outperforms existing models in predicting the risk of conversion, within a given time frame, from intermediate age-related macular degeneration (AMD) to the late wet-AMD stage.
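To make this concrete, below is a minimal sketch of one way such a temporally informed non-contrastive objective could look, assuming a VICReg-style loss whose invariance term is weighted by the normalized time gap between two visits of the same patient. The weighting scheme, hyperparameters, and function names are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def tinc_style_loss(z1, z2, delta_t, max_t, sim_w=25.0, var_w=25.0, cov_w=1.0):
    """Sketch of a time-weighted non-contrastive (VICReg-style) loss.

    z1, z2  : (B, D) embeddings of two scans of the same patient
    delta_t : (B,) time difference between the two visits
    max_t   : normalization constant for time differences (assumption)
    """
    # Invariance term: pull the two embeddings together, but more weakly
    # the further apart the visits are in time (illustrative weighting).
    t_norm = (delta_t / max_t).clamp(0.0, 1.0)
    inv = ((1.0 - t_norm) * (z1 - z2).pow(2).mean(dim=1)).mean()

    def variance(z, eps=1e-4):
        # Hinge on the per-dimension standard deviation (VICReg).
        std = torch.sqrt(z.var(dim=0) + eps)
        return F.relu(1.0 - std).mean()

    def covariance(z):
        # Penalize off-diagonal covariance to decorrelate dimensions (VICReg).
        z = z - z.mean(dim=0)
        b, d = z.shape
        cov = (z.T @ z) / (b - 1)
        return (cov.pow(2).sum() - cov.pow(2).diagonal().sum()) / d

    return (sim_w * inv
            + var_w * (variance(z1) + variance(z2))
            + cov_w * (covariance(z1) + covariance(z2)))
```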
Related papers
- Fast constrained sampling in pre-trained diffusion models [77.21486516041391]
Diffusion models have dominated the field of large, generative image models.
We propose an algorithm for fast constrained sampling in large pre-trained diffusion models.
arXiv Detail & Related papers (2024-10-24T14:52:38Z)
- Gadolinium dose reduction for brain MRI using conditional deep learning [66.99830668082234]
Two main challenges for these approaches are the accurate prediction of contrast enhancement and the synthesis of realistic images.
We address both challenges by utilizing the contrast signal encoded in the subtraction images of pre-contrast and post-contrast image pairs.
We demonstrate the effectiveness of our approach on synthetic and real datasets using various scanners, field strengths, and contrast agents.
arXiv Detail & Related papers (2024-03-06T08:35:29Z)
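As a rough sketch of the subtraction-image idea in the entry above: the contrast enhancement is isolated by subtracting the pre-contrast image from the post-contrast one, and the network is trained to predict the full-dose enhancement. The input construction, tensor layout, and names are assumptions, not the paper's pipeline.

```python
import torch

def make_training_pair(pre, post_low, post_full):
    """Build an input/target pair around subtraction images (sketch).

    pre       : pre-contrast image, (C, H, W)
    post_low  : post-contrast image at reduced gadolinium dose
    post_full : post-contrast image at standard dose (training target)
    """
    sub_low = post_low - pre                # low-dose enhancement signal
    x = torch.cat([pre, sub_low], dim=0)    # stack along channels (assumption)
    y = post_full - pre                     # full-dose enhancement to predict
    return x, y

def reconstruct(pre, predicted_enhancement):
    # Add the predicted enhancement back onto the pre-contrast image.
    return pre + predicted_enhancement
```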
- 3DTINC: Time-Equivariant Non-Contrastive Learning for Predicting Disease Progression from Longitudinal OCTs [8.502838668378432]
We propose a new longitudinal self-supervised learning method, 3DTINC, based on non-contrastive learning.
It is designed to learn perturbation-invariant features for 3D optical coherence tomography (OCT) volumes, using augmentations specifically designed for OCT.
Our experiments show that this temporal information is crucial for predicting the progression of retinal diseases such as age-related macular degeneration (AMD).
arXiv Detail & Related papers (2023-12-28T11:47:12Z)
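A minimal sketch of how temporal pairs might be drawn from longitudinal OCT data for the 3DTINC entry above, assuming each patient has a date-sorted list of volumes; this sampler is illustrative, not the paper's exact scheme.

```python
import random

def sample_temporal_pair(patient_scans):
    """Draw a training pair from one patient's longitudinal scans (sketch).

    patient_scans: list of (volume, visit_date) tuples for one patient.
    Two different visits act as natural 'views' of the same anatomy, so
    heavy synthetic augmentations are not needed; the time gap can feed
    a time-aware loss.
    """
    (v1, t1), (v2, t2) = random.sample(patient_scans, 2)
    return v1, v2, abs((t2 - t1).days)
```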
- One More Step: A Versatile Plug-and-Play Module for Rectifying Diffusion Schedule Flaws and Enhancing Low-Frequency Controls [77.42510898755037]
One More Step (OMS) is a compact network that incorporates an additional simple yet effective step during inference.
OMS elevates image fidelity and harmonizes the dichotomy between training and inference, while preserving original model parameters.
Once trained, various pre-trained diffusion models with the same latent domain can share the same OMS module.
arXiv Detail & Related papers (2023-11-27T12:02:42Z)
- LMT: Longitudinal Mixing Training, a Framework to Predict Disease Progression from a Single Image [1.805673949640389]
We introduce a new way to train time-aware models using $t_{mix}$, a weighted average time between two consecutive examinations.
Using a single image, we predict whether an eye will develop severe diabetic retinopathy (DR) by the following visit, with an AUC of 0.798 compared to a baseline of 0.641.
arXiv Detail & Related papers (2023-10-16T14:01:20Z)
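Below is a minimal sketch of the $t_{mix}$ idea from the LMT entry above: mix two consecutive examinations of one patient and assign the mixed sample a weighted-average time. The Beta-distributed mixing weight is an assumption borrowed from standard mixup, not necessarily the paper's choice.

```python
import numpy as np

def longitudinal_mix(img_a, t_a, img_b, t_b, lam=None):
    """Mixup-style longitudinal mixing with a weighted-average time (sketch)."""
    if lam is None:
        lam = np.random.beta(0.4, 0.4)       # mixing weight (assumption)
    img_mix = lam * img_a + (1.0 - lam) * img_b
    t_mix = lam * t_a + (1.0 - lam) * t_b    # weighted average examination time
    return img_mix, t_mix
```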
- Learning to Exploit Temporal Structure for Biomedical Vision-Language Processing [53.89917396428747]
Self-supervised learning in vision-language processing exploits semantic alignment between imaging and text modalities.
We explicitly account for prior images and reports when available during both training and fine-tuning.
Our approach, named BioViL-T, uses a CNN-Transformer hybrid multi-image encoder trained jointly with a text model.
arXiv Detail & Related papers (2023-01-11T16:35:33Z)
- Metadata-enhanced contrastive learning from retinal optical coherence tomography images [7.932410831191909]
We extend conventional contrastive frameworks with a novel metadata-enhanced strategy.
Our approach employs widely available patient metadata to approximate the true set of inter-image contrastive relationships.
Our approach outperforms both standard contrastive methods and a retinal image foundation model in five out of six image-level downstream tasks.
arXiv Detail & Related papers (2022-08-04T08:53:15Z)
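One plausible reading of the metadata-enhanced strategy above is that positive pairs are defined by matching patient metadata rather than by augmenting a single image. The sketch below builds such a positive-pair mask from patient and eye identifiers; the concrete matching criterion is an assumption.

```python
import torch

def metadata_positive_mask(patient_ids, eye_ids):
    """Approximate contrastive relationships from metadata (sketch).

    patient_ids, eye_ids: (B,) integer tensors for a batch of scans.
    Scans from the same patient and eye are treated as positives.
    """
    same_patient = patient_ids[:, None] == patient_ids[None, :]
    same_eye = eye_ids[:, None] == eye_ids[None, :]
    pos = same_patient & same_eye
    pos.fill_diagonal_(False)    # a scan is not its own positive
    return pos
```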
- Categorical Relation-Preserving Contrastive Knowledge Distillation for Medical Image Classification [75.27973258196934]
We propose a novel Categorical Relation-preserving Contrastive Knowledge Distillation (CRCKD) algorithm, which takes the commonly used mean-teacher model as the supervisor.
With this regularization, the feature distribution of the student model shows higher intra-class similarity and inter-class variance.
With the contributions of CCD and CRP, our CRCKD algorithm can distill relational knowledge more comprehensively.
arXiv Detail & Related papers (2021-07-07T13:56:38Z)
- About Explicit Variance Minimization: Training Neural Networks for Medical Imaging With Limited Data Annotations [2.3204178451683264]
The Variance Aware Training (VAT) method introduces the variance error into the model loss function.
We validate VAT on three medical imaging datasets from diverse domains and various learning objectives.
arXiv Detail & Related papers (2021-05-28T21:34:04Z)
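A minimal sketch of adding a prediction-variance penalty to a supervised loss, in the spirit of the VAT entry above; computing the variance over augmented views of each sample is an assumption, as is the weighting.

```python
import torch

def variance_aware_loss(model, views, targets, base_loss, var_weight=1.0):
    """Supervised loss plus a variance penalty across views (sketch).

    views   : list of V tensors, each (B, ...) an augmented batch copy
    targets : (B,) labels
    """
    preds = torch.stack([model(v) for v in views])   # (V, B, C)
    sup = base_loss(preds.mean(dim=0), targets)      # supervised term
    var = preds.var(dim=0).mean()                    # variance across views
    return sup + var_weight * var
```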
- Proactive Pseudo-Intervention: Causally Informed Contrastive Learning for Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called Proactive Pseudo-Intervention (PPI).
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene on, and show that it greatly facilitates model interpretability.
arXiv Detail & Related papers (2020-12-06T20:30:26Z)
- Development and Validation of a Novel Prognostic Model for Predicting AMD Progression Using Longitudinal Fundus Images [6.258161719849178]
We propose a novel deep learning method to predict the progression of diseases using longitudinal imaging data with uneven time intervals.
We demonstrate our method on a longitudinal dataset of color fundus images from 4903 eyes with age-related macular degeneration (AMD).
Our method attains a testing sensitivity of 0.878, a specificity of 0.887, and an area under the receiver operating characteristic curve of 0.950.
arXiv Detail & Related papers (2020-07-10T00:33:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.