Continual Self-supervised Learning Considering Medical Domain Knowledge in Chest CT Images
- URL: http://arxiv.org/abs/2501.04217v1
- Date: Wed, 08 Jan 2025 01:27:35 GMT
- Title: Continual Self-supervised Learning Considering Medical Domain Knowledge in Chest CT Images
- Authors: Ren Tasai, Guang Li, Ren Togo, Minghui Tang, Takaaki Yoshimura, Hiroyuki Sugimori, Kenji Hirata, Takahiro Ogawa, Kohsuke Kudo, Miki Haseyama
- Abstract summary: We propose a novel continual self-supervised learning (CSSL) method considering medical domain knowledge in chest CT images.
Our approach addresses the challenge of sequential learning by effectively capturing the relationship between previously learned knowledge and new information at different stages.
We validate our method using chest CT images obtained under two different imaging conditions, demonstrating superior performance compared to state-of-the-art methods.
- Score: 36.88692059388115
- Abstract: We propose a novel continual self-supervised learning (CSSL) method considering medical domain knowledge in chest CT images. Our approach addresses the challenge of sequential learning by effectively capturing the relationship between previously learned knowledge and new information at different stages. By incorporating an enhanced dark experience replay (DER) into CSSL and maintaining both diversity and representativeness within its rehearsal buffer, the risk of data interference during pretraining is reduced, enabling the model to learn richer and more robust feature representations. In addition, we incorporate a mixup strategy and feature distillation to further enhance the model's ability to learn meaningful representations. We validate our method using chest CT images obtained under two different imaging conditions, demonstrating superior performance compared to state-of-the-art methods.
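The abstract describes the method only at a high level. As a rough illustration of how a DER-style rehearsal buffer, mixup, and feature distillation could fit together in a continual self-supervised pretraining loop, the following minimal PyTorch sketch is provided. It is not the authors' implementation: the encoder, the generic ssl_loss_fn, the reservoir-sampling buffer (standing in for the paper's diversity/representativeness-aware selection), and the hyperparameters lam, alpha, and beta are all hypothetical choices made for illustration.

```python
import random
import torch
import torch.nn.functional as F


class RehearsalBuffer:
    """Fixed-size buffer of past images and their stored features (DER-style).
    Reservoir sampling is used here as a simple stand-in for the paper's
    diversity/representativeness selection criterion."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = []       # list of (image, feature) tensor pairs
        self.num_seen = 0

    def add(self, image: torch.Tensor, feature: torch.Tensor) -> None:
        self.num_seen += 1
        if len(self.items) < self.capacity:
            self.items.append((image.detach(), feature.detach()))
        else:
            j = random.randrange(self.num_seen)
            if j < self.capacity:
                self.items[j] = (image.detach(), feature.detach())

    def sample(self, batch_size: int):
        batch = random.sample(self.items, min(batch_size, len(self.items)))
        images, feats = zip(*batch)
        return torch.stack(images), torch.stack(feats)


def continual_ssl_step(encoder, ssl_loss_fn, optimizer, images, buffer,
                       lam=0.4, alpha=0.5, beta=1.0):
    """One pretraining step of a later stage: SSL loss on the current batch,
    plus a mixup-regularised SSL term and a feature-distillation term
    computed on rehearsed samples (illustrative only)."""
    feats = encoder(images)                    # current-stage representations
    loss = ssl_loss_fn(feats)

    if buffer.items:
        old_imgs, old_feats = buffer.sample(images.size(0))
        n = min(images.size(0), old_imgs.size(0))
        # Mixup between current and rehearsed images bridges the two
        # imaging conditions (stages).
        mixed = lam * images[:n] + (1.0 - lam) * old_imgs[:n]
        loss = loss + alpha * ssl_loss_fn(encoder(mixed))
        # Feature distillation: keep the new features of old images close
        # to the features stored when those images were first seen.
        loss = loss + beta * F.mse_loss(encoder(old_imgs), old_feats)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    for img, f in zip(images, feats):          # store for future rehearsal
        buffer.add(img, f)
    return float(loss.detach())
```

In a full pipeline, ssl_loss_fn would be the pretext objective used for pretraining (for example a contrastive or masked-image-modeling loss), and the buffer update would follow the paper's own selection rule rather than plain reservoir sampling.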
Related papers
- MultiEYE: Dataset and Benchmark for OCT-Enhanced Retinal Disease Recognition from Fundus Images [4.885485496458059]
We present the first large multi-modal multi-class dataset for eye disease diagnosis, MultiEYE.
We propose an OCT-assisted Conceptual Distillation Approach (OCT-CoDA) to extract disease-related knowledge from OCT images.
Our proposed OCT-CoDA demonstrates remarkable results and interpretability, showing great potential for clinical application.
arXiv Detail & Related papers (2024-12-12T16:08:43Z)
- CC-DCNet: Dynamic Convolutional Neural Network with Contrastive Constraints for Identifying Lung Cancer Subtypes on Multi-modality Images [13.655407979403945]
We propose a novel deep learning network designed to accurately classify lung cancer subtype with multi-dimensional and multi-modality images.
The strength of the proposed model lies in its ability to dynamically process both paired CT-pathological image sets and independent CT image sets.
We also develop a contrastive constraint module, which quantitatively maps the cross-modality associations through network training.
arXiv Detail & Related papers (2024-07-18T01:42:00Z)
- Tissue-Contrastive Semi-Masked Autoencoders for Segmentation Pretraining on Chest CT [10.40407976789742]
We propose a new MIM method named Tissue-Contrastive Semi-Masked Autoencoder (TCS-MAE) for modeling chest CT images.
Our method has two novel designs: 1) a tissue-based masking-reconstruction strategy to capture more fine-grained anatomical features, and 2) a dual-AE architecture with contrastive learning between the masked and original image views.
arXiv Detail & Related papers (2024-07-12T03:24:17Z)
- MLIP: Enhancing Medical Visual Representation with Divergence Encoder and Knowledge-guided Contrastive Learning [48.97640824497327]
We propose a novel framework leveraging domain-specific medical knowledge as guiding signals to integrate language information into the visual domain through image-text contrastive learning.
Our model includes global contrastive learning with our designed divergence encoder, local token-knowledge-patch alignment contrastive learning, and knowledge-guided category-level contrastive learning with expert knowledge.
Notably, MLIP surpasses state-of-the-art methods even with limited annotated data, highlighting the potential of multimodal pre-training in advancing medical representation learning.
arXiv Detail & Related papers (2024-02-03T05:48:50Z)
- Bridging Synthetic and Real Images: a Transferable and Multiple Consistency aided Fundus Image Enhancement Framework [61.74188977009786]
We propose an end-to-end optimized teacher-student framework to simultaneously conduct image enhancement and domain adaptation.
We also propose a novel multi-stage multi-attention guided enhancement network (MAGE-Net) as the backbones of our teacher and student network.
arXiv Detail & Related papers (2023-02-23T06:16:15Z)
- Incremental Cross-view Mutual Distillation for Self-supervised Medical CT Synthesis [88.39466012709205]
This paper presents a novel medical slice synthesis method to increase the between-slice resolution.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z)
- Seeking an Optimal Approach for Computer-Aided Pulmonary Embolism Detection [7.969404878464232]
Pulmonary embolism (PE) represents a thrombus ("blood clot") that travels to the blood vessels in the lung, causing vascular obstruction and, in some patients, death.
Deep learning holds great promise for the computer-aided diagnosis (CAD) of PE.
arXiv Detail & Related papers (2021-09-15T00:21:23Z)
- Preservational Learning Improves Self-supervised Medical Image Models by Reconstructing Diverse Contexts [58.53111240114021]
We present Preservational Contrastive Representation Learning (PCRL) for learning self-supervised medical representations.
PCRL provides very competitive results under the pretraining-finetuning protocol, substantially outperforming both self-supervised and supervised counterparts on 5 classification/segmentation tasks.
arXiv Detail & Related papers (2021-09-09T16:05:55Z) - A Multi-Stage Attentive Transfer Learning Framework for Improving
COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.