Equitable Skin Disease Prediction Using Transfer Learning and Domain Adaptation
- URL: http://arxiv.org/abs/2409.00873v1
- Date: Sun, 1 Sep 2024 23:48:26 GMT
- Title: Equitable Skin Disease Prediction Using Transfer Learning and Domain Adaptation
- Authors: Sajib Acharjee Dip, Kazi Hasan Ibn Arif, Uddip Acharjee Shuvo, Ishtiaque Ahmed Khan, Na Meng
- Abstract summary: Existing artificial intelligence (AI) models in dermatology face challenges in accurately diagnosing diseases across diverse skin tones.
We employ a transfer-learning approach that capitalizes on the rich, transferable knowledge from various image domains.
Among all methods, Med-ViT emerged as the top performer due to its comprehensive feature representation learned from diverse image sources.
- Score: 1.9505972437091028
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the realm of dermatology, the complexity of diagnosing skin conditions manually necessitates the expertise of dermatologists. Accurate identification of various skin ailments, ranging from cancer to inflammatory diseases, is paramount. However, existing artificial intelligence (AI) models in dermatology face challenges, particularly in accurately diagnosing diseases across diverse skin tones, with a notable performance gap in darker skin. Additionally, the scarcity of publicly available, unbiased datasets hampers the development of inclusive AI diagnostic tools. To tackle the challenges in accurately predicting skin conditions across diverse skin tones, we employ a transfer-learning approach that capitalizes on the rich, transferable knowledge from various image domains. Our method integrates multiple pre-trained models from a wide range of sources, including general and specific medical images, to improve the robustness and inclusiveness of the skin condition predictions. We rigorously evaluated the effectiveness of these models using the Diverse Dermatology Images (DDI) dataset, which uniquely encompasses both underrepresented and common skin tones, making it an ideal benchmark for assessing our approach. Among all methods, Med-ViT emerged as the top performer due to its comprehensive feature representation learned from diverse image sources. To further enhance performance, we conducted domain adaptation using additional skin image datasets such as HAM10000. This adaptation significantly improved model performance across all models.
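The abstract's core recipe — reuse a pretrained backbone, train a new classification head on the target dermatology data, then adapt further on an additional dataset — can be sketched in miniature. This is a hedged illustration, not the authors' pipeline: the frozen random projection below merely stands in for a pretrained backbone such as Med-ViT, and the synthetic "source" and "shifted target" data stand in for DDI and HAM10000.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained backbone: a frozen feature map that is
# never updated. In the paper this role is played by models such as
# Med-ViT; here it is a fixed random projection for self-containment.
D_IN, D_FEAT, N_CLASSES = 64, 32, 2
W_backbone = rng.normal(size=(D_IN, D_FEAT)) / np.sqrt(D_IN)  # frozen

def features(x):
    return np.tanh(x @ W_backbone)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_head(W, X, y, lr=0.5, epochs=200):
    """Transfer learning step: fit only the new head; backbone stays frozen."""
    Y = np.eye(N_CLASSES)[y]
    F = features(X)
    for _ in range(epochs):
        P = softmax(F @ W)
        W -= lr * F.T @ (P - Y) / len(X)  # cross-entropy gradient w.r.t. head
    return W

def accuracy(W, X, y):
    return (softmax(features(X) @ W).argmax(axis=1) == y).mean()

def make_data(n, shift=0.0):
    """Synthetic binary task; `shift` mimics a domain shift between datasets."""
    x = rng.normal(size=(n, D_IN)) + shift
    y = (x[:, :8].sum(axis=1) > 8 * shift).astype(int)
    return x, y

# Train the new head on the "source" dataset.
X_src, y_src = make_data(400)
W_head = train_head(np.zeros((D_FEAT, N_CLASSES)), X_src, y_src)

# Domain adaptation: continue training the same head on data from a
# shifted domain (analogous to adapting with an extra set like HAM10000).
X_tgt, y_tgt = make_data(400, shift=0.5)
acc_before = accuracy(W_head, X_tgt, y_tgt)
W_head = train_head(W_head, X_tgt, y_tgt, epochs=100)
acc_after = accuracy(W_head, X_tgt, y_tgt)
print(f"target accuracy before adaptation: {acc_before:.2f}, after: {acc_after:.2f}")
```

The design point the sketch makes is the paper's: the expensive representation is reused unchanged, and only the lightweight head is trained, first on the primary data and then on additional in-domain data to close the distribution gap.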
Related papers
- FairSkin: Fair Diffusion for Skin Disease Image Generation [54.29840149709033]
Diffusion Model (DM) has become a leading method in generating synthetic medical images, but it suffers from a critical twofold bias.
We propose FairSkin, a novel DM framework that mitigates these biases through a three-level resampling mechanism.
Our approach significantly improves the diversity and quality of generated images, contributing to more equitable skin disease detection in clinical settings.
arXiv Detail & Related papers (2024-10-29T21:37:03Z)
- A General-Purpose Multimodal Foundation Model for Dermatology [14.114262475562846]
PanDerm is a multimodal dermatology foundation model pretrained through self-supervised learning on a dataset of over 2 million real-world images of skin diseases.
PanDerm achieved state-of-the-art performance across all evaluated tasks.
PanDerm could enhance the management of skin diseases and serve as a model for developing multimodal foundation models in other medical specialties.
arXiv Detail & Related papers (2024-10-19T08:48:01Z)
- S-SYNTH: Knowledge-Based, Synthetic Generation of Skin Images [2.79604239303318]
We propose S-SYNTH, the first knowledge-based, adaptable open-source skin simulation framework.
We generate synthetic skin, 3D models and digitally rendered images using an anatomically inspired multi-layer, multi-representation skin and growing lesion model.
We show that results obtained using synthetic data follow similar comparative trends as real dermatologic images.
arXiv Detail & Related papers (2024-07-31T23:16:29Z)
- SkinGEN: an Explainable Dermatology Diagnosis-to-Generation Framework with Interactive Vision-Language Models [52.90397538472582]
SkinGEN is a diagnosis-to-generation framework that generates reference demonstrations from diagnosis results provided by a vision-language model (VLM).
We conduct a user study with 32 participants evaluating both the system performance and explainability.
Results demonstrate that SkinGEN significantly improves users' comprehension of VLM predictions and fosters increased trust in the diagnostic process.
arXiv Detail & Related papers (2024-04-23T05:36:33Z)
- Optimizing Skin Lesion Classification via Multimodal Data and Auxiliary Task Integration [54.76511683427566]
This research introduces a novel multimodal method for classifying skin lesions, integrating smartphone-captured images with essential clinical and demographic information.
A distinctive aspect of this method is the integration of an auxiliary task focused on super-resolution image prediction.
The experimental evaluations have been conducted using the PAD-UFES20 dataset, applying various deep-learning architectures.
arXiv Detail & Related papers (2024-02-16T05:16:20Z)
- DDI-CoCo: A Dataset For Understanding The Effect Of Color Contrast In Machine-Assisted Skin Disease Detection [51.92255321684027]
We study the interaction between skin tone and color difference effects and suggest that color difference can be an additional reason behind model performance bias between skin tones.
Our work provides a complementary angle to dermatology AI for improving skin disease detection.
arXiv Detail & Related papers (2024-01-24T07:45:24Z)
- A Novel Multi-Task Model Imitating Dermatologists for Accurate Differential Diagnosis of Skin Diseases in Clinical Images [27.546559936765863]
A novel multi-task model, namely DermImitFormer, is proposed to fill this gap by imitating dermatologists' diagnostic procedures and strategies.
The model simultaneously predicts body parts and lesion attributes in addition to the disease itself, enhancing diagnosis accuracy and improving diagnosis interpretability.
arXiv Detail & Related papers (2023-07-17T08:05:30Z)
- Improving Deep Facial Phenotyping for Ultra-rare Disorder Verification Using Model Ensembles [52.77024349608834]
We analyze the influence of replacing a DCNN with a state-of-the-art face recognition approach, iResNet with ArcFace.
Our proposed ensemble model achieves state-of-the-art performance on both seen and unseen disorders.
arXiv Detail & Related papers (2022-11-12T23:28:54Z)
- Automatic Facial Skin Feature Detection for Everyone [60.31670960526022]
We present an automatic facial skin feature detection method that works across a variety of skin tones and age groups for selfies in the wild.
To be specific, we annotate the locations of acne, pigmentation, and wrinkle for selfie images with different skin tone colors, severity levels, and lighting conditions.
arXiv Detail & Related papers (2022-03-30T04:52:54Z)
- Disparities in Dermatology AI Performance on a Diverse, Curated Clinical Image Set [10.212881174103996]
We show that state-of-the-art AI models perform substantially worse on the Diverse Dermatology Images (DDI) dataset.
We find that dermatologists, who typically provide visual labels for AI training and test datasets, also perform worse on images of dark skin tones and uncommon diseases.
arXiv Detail & Related papers (2022-03-15T20:33:23Z)
- Disparities in Dermatology AI: Assessments Using Diverse Clinical Images [9.767299882513825]
We show that state-of-the-art dermatology AI models perform substantially worse on the Diverse Dermatology Images (DDI) dataset.
We find that dark skin tones and uncommon diseases, which are well represented in the DDI dataset, lead to performance drop-offs.
arXiv Detail & Related papers (2021-11-15T07:04:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.