A Clinical-oriented Multi-level Contrastive Learning Method for Disease Diagnosis in Low-quality Medical Images
- URL: http://arxiv.org/abs/2404.04887v1
- Date: Sun, 7 Apr 2024 09:08:14 GMT
- Title: A Clinical-oriented Multi-level Contrastive Learning Method for Disease Diagnosis in Low-quality Medical Images
- Authors: Qingshan Hou, Shuai Cheng, Peng Cao, Jinzhu Yang, Xiaoli Liu, Osmar R. Zaiane, Yih Chung Tham
- Abstract summary: Disease diagnosis methods guided by contrastive learning (CL) have shown significant advantages in lesion feature representation.
We propose a clinical-oriented multi-level CL framework that aims to enhance the model's capacity to extract lesion features.
The proposed CL framework is validated on two public medical image datasets, EyeQ and Chest X-ray.
- Score: 4.576524795036682
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Representation learning offers a conduit to elucidate distinctive features within the latent space and interpret the deep models. However, the randomness of lesion distribution and the complexity of low-quality factors in medical images pose great challenges for models to extract key lesion features. Disease diagnosis methods guided by contrastive learning (CL) have shown significant advantages in lesion feature representation. Nevertheless, the effectiveness of CL is highly dependent on the quality of the positive and negative sample pairs. In this work, we propose a clinical-oriented multi-level CL framework that aims to enhance the model's capacity to extract lesion features and discriminate between lesion and low-quality factors, thereby enabling more accurate disease diagnosis from low-quality medical images. Specifically, we first construct multi-level positive and negative pairs to enhance the model's comprehensive recognition capability of lesion features by integrating information from different levels and qualities of medical images. Moreover, to improve the quality of the learned lesion embeddings, we introduce a dynamic hard sample mining method based on self-paced learning. The proposed CL framework is validated on two public medical image datasets, EyeQ and Chest X-ray, demonstrating superior performance compared to other state-of-the-art disease diagnostic methods.
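The abstract describes two mechanisms: a contrastive objective whose positive pairs span different levels and qualities of medical images, and dynamic hard sample mining driven by self-paced learning. Below is a minimal, hedged sketch of how such an objective could be wired together in PyTorch; the function names, the positive-pair construction (a sample's degraded view plus other samples with the same diagnosis label) and the self-paced threshold schedule are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F


def multilevel_info_nce(z_clean, z_degraded, labels, temperature=0.1):
    """Per-anchor InfoNCE over paired clean/degraded embeddings.

    Positives for each anchor: its counterpart view of the same image
    (image level) plus every other sample sharing its diagnosis label
    (class level); all remaining batch samples act as negatives.
    """
    z = F.normalize(torch.cat([z_clean, z_degraded], dim=0), dim=1)   # (2B, D)
    lbl = torch.cat([labels, labels], dim=0)                          # (2B,)
    sim = z @ z.t() / temperature                                     # (2B, 2B)

    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos_mask = ((lbl.unsqueeze(0) == lbl.unsqueeze(1)) & ~self_mask).float()

    # exclude self-similarity from the denominator only
    denom = torch.logsumexp(sim.masked_fill(self_mask, float("-inf")), dim=1, keepdim=True)
    log_prob = sim - denom

    # mean log-likelihood of the positives of each anchor
    return -(log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)


def self_paced_weights(per_anchor_loss, lam):
    """Binary self-paced weights: admit only anchors whose loss is below lam.

    Raising lam over training lets progressively harder samples into the
    objective, a simple stand-in for dynamic hard sample mining.
    """
    return (per_anchor_loss.detach() < lam).float()


# usage sketch with random embeddings standing in for encoder outputs
B, D = 8, 128
z_clean, z_degraded = torch.randn(B, D), torch.randn(B, D)
labels = torch.randint(0, 2, (B,))

for epoch in range(3):
    lam = 1.0 + 0.5 * epoch                     # illustrative self-paced schedule
    per_anchor = multilevel_info_nce(z_clean, z_degraded, labels)
    w = self_paced_weights(per_anchor, lam)
    loss = (w * per_anchor).sum() / w.sum().clamp(min=1.0)
```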
Related papers
- Cross- and Intra-image Prototypical Learning for Multi-label Disease Diagnosis and Interpretation [15.303610605543746]
We present a novel Cross- and Intra-image Prototypical Learning framework for accurate multi-label disease diagnosis and interpretation from medical images.
We propose a new two-level alignment-based regularisation strategy that effectively leverages consistent intra-image information to enhance interpretation robustness and predictive performance.
arXiv Detail & Related papers (2024-11-07T10:46:01Z) - FairSkin: Fair Diffusion for Skin Disease Image Generation [54.29840149709033]
The Diffusion Model (DM) has become a leading method for generating synthetic medical images, but it suffers from a critical twofold bias.
We propose FairSkin, a novel DM framework that mitigates these biases through a three-level resampling mechanism.
Our approach significantly improves the diversity and quality of generated images, contributing to more equitable skin disease detection in clinical settings.
arXiv Detail & Related papers (2024-10-29T21:37:03Z) - Enhancing Multimodal Medical Image Classification using Cross-Graph Modal Contrastive Learning [5.660131312162423]
This paper proposes a novel Cross-Graph Modal Contrastive Learning (CGMCL) framework for multimodal medical image classification.
The proposed approach is evaluated on two datasets: a Parkinson's disease (PD) dataset and a public melanoma dataset.
Results demonstrate that CGMCL outperforms conventional unimodal methods in accuracy, interpretability, and early disease prediction.
arXiv Detail & Related papers (2024-10-23T01:25:25Z) - Optimizing Skin Lesion Classification via Multimodal Data and Auxiliary Task Integration [54.76511683427566]
This research introduces a novel multimodal method for classifying skin lesions, integrating smartphone-captured images with essential clinical and demographic information.
A distinctive aspect of this method is the integration of an auxiliary task focused on super-resolution image prediction.
The experimental evaluations have been conducted using the PAD-UFES20 dataset, applying various deep-learning architectures.
arXiv Detail & Related papers (2024-02-16T05:16:20Z) - A Novel Multi-Task Model Imitating Dermatologists for Accurate Differential Diagnosis of Skin Diseases in Clinical Images [27.546559936765863]
A novel multi-task model, namely DermImitFormer, is proposed to fill this gap by imitating dermatologists' diagnostic procedures and strategies.
The model simultaneously predicts body parts and lesion attributes in addition to the disease itself, enhancing diagnosis accuracy and improving diagnosis interpretability.
arXiv Detail & Related papers (2023-07-17T08:05:30Z) - A Transformer-based representation-learning model with unified processing of multimodal input for clinical diagnostics [63.106382317917344]
We report a Transformer-based representation-learning model as a clinical diagnostic aid that processes multimodal input in a unified manner.
The unified model outperformed an image-only model and non-unified multimodal diagnosis models in the identification of pulmonary diseases.
arXiv Detail & Related papers (2023-06-01T16:23:47Z) - Image Quality-aware Diagnosis via Meta-knowledge Co-embedding [11.14366093273983]
We propose a novel meta-knowledge co-embedding network consisting of two subnetworks: Task Net and Meta Learner.
Task Net constructs an explicit quality information utilization mechanism to enhance diagnosis via knowledge co-embedding features.
Meta Learner ensures the effectiveness and constrains the semantics of these features via meta-learning and joint-encoding masking.
arXiv Detail & Related papers (2023-03-27T09:35:44Z) - Pixel-Level Explanation of Multiple Instance Learning Models in Biomedical Single Cell Images [52.527733226555206]
We investigate the use of four attribution methods to explain multiple instance learning models.
We study two datasets of acute myeloid leukemia with over 100 000 single cell images.
We compare attribution maps with the annotations of a medical expert to see how the model's decision-making differs from the human standard.
arXiv Detail & Related papers (2023-03-15T14:00:11Z) - Malignancy Prediction and Lesion Identification from Clinical Dermatological Images [65.1629311281062]
We consider machine-learning-based malignancy prediction and lesion identification from clinical dermatological images.
We first identify all lesions present in the image regardless of sub-type or likelihood of malignancy, then estimate their likelihood of malignancy and, through aggregation, also generate an image-level likelihood of malignancy.
arXiv Detail & Related papers (2021-04-02T20:52:05Z) - Variational Knowledge Distillation for Disease Classification in Chest X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), which is a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z)
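Below is a generic, hedged sketch of the kind of latent-space alignment term a probabilistic distillation framework of this sort might use: the posterior over a latent code inferred from the X-ray alone is pulled towards one that also conditions on the paired EHR. The diagonal Gaussian posteriors, tensor shapes, and variable names are illustrative assumptions, not the paper's exact objective.

```python
import torch
from torch.distributions import Normal, kl_divergence

B, D = 4, 32                                     # batch size, latent dimension
# stand-ins for two hypothetical encoder outputs: image-only vs image + EHR
mu_img, logvar_img = torch.randn(B, D), torch.randn(B, D)
mu_joint, logvar_joint = torch.randn(B, D), torch.randn(B, D)

q_img = Normal(mu_img, (0.5 * logvar_img).exp())        # q(z | x-ray)
q_joint = Normal(mu_joint, (0.5 * logvar_joint).exp())  # q(z | x-ray, EHR)

# knowledge transfer term: minimised alongside the diagnosis loss so that the
# image-only posterior absorbs information available only through the EHR
kd_loss = kl_divergence(q_img, q_joint).sum(dim=1).mean()
```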