CC-DCNet: Dynamic Convolutional Neural Network with Contrastive Constraints for Identifying Lung Cancer Subtypes on Multi-modality Images
- URL: http://arxiv.org/abs/2407.13092v1
- Date: Thu, 18 Jul 2024 01:42:00 GMT
- Title: CC-DCNet: Dynamic Convolutional Neural Network with Contrastive Constraints for Identifying Lung Cancer Subtypes on Multi-modality Images
- Authors: Yuan Jin, Gege Ma, Geng Chen, Tianling Lyu, Jan Egger, Junhui Lyu, Shaoting Zhang, Wentao Zhu
- Abstract summary: We propose a novel deep learning network designed to accurately classify lung cancer subtype with multi-dimensional and multi-modality images.
The strength of the proposed model lies in its ability to dynamically process both paired CT-pathological image sets and independent CT image sets.
We also develop a contrastive constraint module, which quantitatively maps the cross-modality associations through network training.
- Score: 13.655407979403945
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The accurate diagnosis of pathological subtypes of lung cancer is of paramount importance for follow-up treatment and prognosis management. Assessment methods utilizing deep learning technologies have introduced novel approaches for clinical diagnosis. However, the majority of existing models rely solely on single-modality image input, leading to limited diagnostic accuracy. To this end, we propose a novel deep learning network designed to accurately classify lung cancer subtypes from multi-dimensional and multi-modality images, i.e., CT and pathological images. The strength of the proposed model lies in its ability to dynamically process both paired CT-pathological image sets and independent CT image sets, and consequently optimize pathology-related feature extraction from CT images. This adaptive learning approach enhances the flexibility of processing multi-dimensional and multi-modality datasets and results in elevated performance in the model testing phase. We also develop a contrastive constraint module, which quantitatively maps the cross-modality associations through network training, and thereby helps to explore the "gold standard" pathological information from the corresponding CT scans. To evaluate the effectiveness, adaptability, and generalization ability of our model, we conducted extensive experiments on a large-scale multi-center dataset and compared our model with a series of state-of-the-art classification models. The experimental results demonstrated the superiority of our model for lung cancer subtype classification, showcasing significant improvements in accuracy metrics such as ACC, AUC, and F1-score.
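Since no code accompanies this summary, the snippet below is a minimal PyTorch-style sketch of the two ideas in the abstract: a network that accepts either a paired CT-pathological input or a CT image alone, and a contrastive constraint that aligns CT embeddings with their matching pathology embeddings. Every module name, dimension, and the InfoNCE-style loss here are illustrative assumptions, not the authors' CC-DCNet implementation.

```python
# Hypothetical sketch: a CT encoder trained with an optional pathology branch.
# When a paired pathology image is available, an InfoNCE-style contrastive term
# pulls the CT embedding toward its matching pathology embedding; CT-only
# batches fall back to the classification loss alone. This is NOT the authors'
# CC-DCNet code, only an illustration of the described training scheme.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualBranchClassifier(nn.Module):
    def __init__(self, num_classes=3, embed_dim=128):
        super().__init__()
        self.ct_encoder = nn.Sequential(          # stand-in for a CT backbone
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, embed_dim))
        self.path_encoder = nn.Sequential(        # stand-in for a pathology backbone
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, embed_dim))
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, ct, pathology=None):
        z_ct = F.normalize(self.ct_encoder(ct), dim=1)
        logits = self.classifier(z_ct)
        z_path = None
        if pathology is not None:                 # paired CT-pathological sample
            z_path = F.normalize(self.path_encoder(pathology), dim=1)
        return logits, z_ct, z_path

def info_nce(z_a, z_b, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings."""
    sim = z_a @ z_b.t() / temperature
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return 0.5 * (F.cross_entropy(sim, targets) + F.cross_entropy(sim.t(), targets))

def training_step(model, ct, labels, pathology=None, lam=0.5):
    logits, z_ct, z_path = model(ct, pathology)
    loss = F.cross_entropy(logits, labels)
    if z_path is not None:                        # contrastive constraint only on paired data
        loss = loss + lam * info_nce(z_ct, z_path)
    return loss
```

In this sketch the model degrades to a plain CT classifier at inference simply by omitting the pathology argument, which mirrors the paired/unpaired flexibility described in the abstract; how CC-DCNet actually realizes its dynamic convolutions is not specified by this illustration.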
Related papers
- Multiscale Latent Diffusion Model for Enhanced Feature Extraction from Medical Images [5.395912799904941]
Variations in CT scanner models and acquisition protocols introduce significant variability in the extracted radiomic features.
LTDiff++ is a multiscale latent diffusion model designed to enhance feature extraction in medical imaging.
arXiv Detail & Related papers (2024-10-05T02:13:57Z)
- Classification of lung cancer subtypes on CT images with synthetic pathological priors [41.75054301525535]
Cross-scale associations exist in the image patterns between the same case's CT images and its pathological images.
We propose self-generating hybrid feature network (SGHF-Net) for accurately classifying lung cancer subtypes on CT images.
arXiv Detail & Related papers (2023-08-09T02:04:05Z)
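As a rough illustration of the "synthetic pathological prior" idea summarized in the entry above, the hedged sketch below maps CT features to pathology-like features, supervises them against real pathology features when a paired case exists, and fuses both for classification. The names (HybridFeatureClassifier, prior_generator) and the L1 supervision are assumptions, not the SGHF-Net implementation.

```python
# Hypothetical sketch of a "synthetic pathological prior": a small head maps
# precomputed CT features to pathology-like features, supervised against real
# pathology features on paired cases, and the two are fused for classification.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridFeatureClassifier(nn.Module):
    def __init__(self, feat_dim=128, num_classes=3):
        super().__init__()
        self.ct_backbone = nn.Linear(512, feat_dim)            # placeholder for a CT feature extractor
        self.prior_generator = nn.Linear(feat_dim, feat_dim)   # CT feature -> synthetic pathological feature
        self.head = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, ct_feats):
        f_ct = self.ct_backbone(ct_feats)
        f_syn = self.prior_generator(f_ct)                     # synthetic pathological prior
        return self.head(torch.cat([f_ct, f_syn], dim=1)), f_syn

def hybrid_loss(logits, labels, f_syn, f_path_real=None, beta=1.0):
    loss = F.cross_entropy(logits, labels)
    if f_path_real is not None:                                # supervise the prior on paired cases
        loss = loss + beta * F.l1_loss(f_syn, f_path_real)
    return loss
```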
- Conversion Between CT and MRI Images Using Diffusion and Score-Matching Models [7.745729132928934]
We propose to use an emerging deep learning framework called diffusion and score-matching models.
Our results show that the diffusion and score-matching models generate better synthetic CT images than the CNN and GAN models.
Our study suggests that diffusion and score-matching models are powerful tools for generating high-quality images conditioned on an image obtained with a complementary imaging modality.
arXiv Detail & Related papers (2022-09-24T23:50:54Z)
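A hedged sketch of how conditional diffusion sampling for modality conversion typically looks: the source MRI is concatenated with the noisy CT estimate at each reverse step of a DDPM-style sampler. The noise-prediction network eps_model, the linear schedule, and all shapes are assumptions, not the cited paper's implementation.

```python
# Hypothetical conditional DDPM sampling loop for MRI -> CT translation.
import torch

@torch.no_grad()
def sample_ct_from_mri(eps_model, mri, timesteps=1000):
    """mri: (B, 1, H, W) conditioning image; returns a synthetic CT of the same shape."""
    betas = torch.linspace(1e-4, 0.02, timesteps, device=mri.device)   # linear noise schedule
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)

    x = torch.randn_like(mri)                                          # start from pure noise
    for t in reversed(range(timesteps)):
        t_batch = torch.full((mri.size(0),), t, dtype=torch.long, device=mri.device)
        eps = eps_model(torch.cat([x, mri], dim=1), t_batch)           # predict noise, conditioned on MRI
        coef = betas[t] / torch.sqrt(1.0 - alpha_bar[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise                        # ancestral sampling step
    return x
```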
- Improving Classification Model Performance on Chest X-Rays through Lung Segmentation [63.45024974079371]
We propose a deep learning approach to enhance abnormal chest x-ray (CXR) identification performance through segmentations.
Our approach is designed in a cascaded manner and incorporates two modules: a deep neural network with criss-cross attention modules (XLSor) for localizing the lung region in CXR images, and a CXR classification model with a backbone of a self-supervised momentum contrast (MoCo) model pre-trained on large-scale CXR data sets.
arXiv Detail & Related papers (2022-02-22T15:24:06Z)
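The cascade described in the entry above can be illustrated with a short hedged sketch: a segmenter proposes a lung mask, the chest X-ray is masked to the lung region, and a classifier scores the result. The segmenter and classifier here are generic stand-ins, not the XLSor or MoCo-pretrained models from the paper.

```python
# Hypothetical cascaded inference: segment the lungs, mask the image, classify.
import torch

def cascaded_predict(segmenter, classifier, cxr, threshold=0.5):
    """cxr: (B, 1, H, W) chest X-ray batch; returns abnormality logits."""
    with torch.no_grad():
        lung_prob = torch.sigmoid(segmenter(cxr))      # per-pixel lung probability
    lung_mask = (lung_prob > threshold).float()
    masked_cxr = cxr * lung_mask                       # keep only the lung region
    return classifier(masked_cxr)
```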
- Incremental Cross-view Mutual Distillation for Self-supervised Medical CT Synthesis [88.39466012709205]
This paper builds a novel medical slice synthesis method to increase the between-slice resolution.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z)
- CT-SGAN: Computed Tomography Synthesis GAN [4.765541373485143]
We propose the CT-SGAN model that generates large-scale 3D synthetic CT-scan volumes when trained on a small dataset of chest CT-scans.
We show that CT-SGAN can significantly improve lung nodule detection accuracy by pre-training a nodule detection model on a vast amount of synthetic data.
arXiv Detail & Related papers (2021-10-14T22:20:40Z)
- CoRSAI: A System for Robust Interpretation of CT Scans of COVID-19 Patients Using Deep Learning [133.87426554801252]
We adopted an approach based on an ensemble of deep convolutional neural networks for segmentation of lung CT scans.
Using our models, we are able to segment the lesions, evaluate patient dynamics, estimate the relative volume of lungs affected by lesions, and evaluate the lung damage stage.
arXiv Detail & Related papers (2021-05-25T12:06:55Z)
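One of the downstream measurements mentioned in the entry above, the relative volume of lung affected by lesions, can be illustrated with a small hedged sketch that works from binary segmentation masks and the voxel spacing; it is not the CoRSAI code.

```python
# Hypothetical computation of lesion burden from 3D segmentation masks.
import numpy as np

def lesion_volume_stats(lung_mask, lesion_mask, spacing_mm=(1.0, 1.0, 1.0)):
    """Boolean 3D masks of equal shape; spacing is the (z, y, x) voxel size in mm."""
    voxel_ml = np.prod(spacing_mm) / 1000.0                       # mm^3 -> millilitres
    lung_ml = lung_mask.sum() * voxel_ml
    lesion_ml = np.logical_and(lesion_mask, lung_mask).sum() * voxel_ml
    affected_fraction = lesion_ml / lung_ml if lung_ml > 0 else 0.0
    return lung_ml, lesion_ml, affected_fraction
```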
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
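As a hedged illustration of K-nearest neighbor smoothing, the sketch below averages each sample's predicted probabilities with those of its K nearest neighbors in feature space; the blending weight alpha and the brute-force distance computation are assumptions, not the paper's exact MODL/KNNS formulation.

```python
# Hypothetical KNN smoothing of per-sample predicted probabilities.
import numpy as np

def knn_smooth(features, probs, k=5, alpha=0.5):
    """features: (N, D) embeddings; probs: (N, C) predicted probabilities."""
    # Pairwise Euclidean distances (small-N illustration; use KD-trees or FAISS at scale).
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)                        # exclude each sample from its own neighborhood
    neighbors = np.argsort(dists, axis=1)[:, :k]           # indices of the K nearest neighbors
    neighbor_mean = probs[neighbors].mean(axis=1)          # (N, C) average of neighbor predictions
    return alpha * probs + (1.0 - alpha) * neighbor_mean   # blended probabilities
```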
- A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)
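A hedged sketch of a generic staged fine-tuning recipe in the spirit of the multi-stage framework above: train a new head on the target data with the backbone frozen, then unfreeze and fine-tune end to end at a lower learning rate. The model.backbone / model.head split and the hyperparameters are assumptions, not the paper's three-stage attentive framework.

```python
# Hypothetical two-stage fine-tuning on a target (e.g. COVID-19 CT) dataset.
import torch
import torch.nn.functional as F

def run_stage(model, loader, params, lr, epochs):
    opt = torch.optim.Adam(params, lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            opt.step()

def staged_finetune(model, target_loader):
    # Stage A: freeze the pretrained backbone, train the new head only.
    for p in model.backbone.parameters():
        p.requires_grad_(False)
    run_stage(model, target_loader, model.head.parameters(), lr=1e-3, epochs=3)
    # Stage B: unfreeze everything and fine-tune end to end at a lower learning rate.
    for p in model.backbone.parameters():
        p.requires_grad_(True)
    run_stage(model, target_loader, model.parameters(), lr=1e-4, epochs=10)
```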