Illumination-based Transformations Improve Skin Lesion Segmentation in
Dermoscopic Images
- URL: http://arxiv.org/abs/2003.10111v1
- Date: Mon, 23 Mar 2020 07:43:35 GMT
- Title: Illumination-based Transformations Improve Skin Lesion Segmentation in
Dermoscopic Images
- Authors: Kumar Abhishek, Ghassan Hamarneh, and Mark S. Drew
- Abstract summary: We propose the first deep semantic segmentation framework for dermoscopic images which incorporates, along with the original RGB images, information extracted using the physics of skin illumination and imaging.
We evaluate our method on three datasets: the ISIC 2017 Skin Lesion Challenge, the DermoFit Image Library, and the PH2 dataset.
- Score: 17.60847055233247
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The semantic segmentation of skin lesions is an important and common initial
task in the computer aided diagnosis of dermoscopic images. Although deep
learning-based approaches have considerably improved the segmentation accuracy,
there is still room for improvement by addressing the major challenges, such as
variations in lesion shape, size, and color, and varying levels of contrast. In this
work, we propose the first deep semantic segmentation framework for dermoscopic
images which incorporates, along with the original RGB images, information
extracted using the physics of skin illumination and imaging. In particular, we
incorporate information from specific color bands, illumination invariant
grayscale images, and shading-attenuated images. We evaluate our method on
three datasets: the ISBI ISIC 2017 Skin Lesion Segmentation Challenge dataset,
the DermoFit Image Library, and the PH2 dataset and observe improvements of
12.02%, 4.30%, and 8.86% respectively in the mean Jaccard index over a baseline
model trained only with RGB images.
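As an illustration only, below is a minimal Python sketch of one standard illumination-invariant grayscale construction (a log-chromaticity projection at an assumed invariant angle) and of the mean Jaccard index used for evaluation. The function names, the placeholder angle, and the normalization are assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def illumination_invariant_gray(rgb, theta_deg=45.0, eps=1e-6):
    """Grayscale image approximately invariant to illumination color.

    Projects log-chromaticity coordinates onto a line at an assumed
    'invariant angle' (normally calibrated per camera, e.g. by entropy
    minimization); theta_deg=45 is only a placeholder.
    rgb: float array in [0, 1], shape (H, W, 3).
    """
    rgb = np.clip(rgb.astype(np.float64), eps, 1.0)
    # Geometric-mean chromaticity removes overall intensity (shading).
    gm = np.cbrt(rgb[..., 0] * rgb[..., 1] * rgb[..., 2])
    log_chrom = np.log(rgb / gm[..., None])
    # 2-D coordinates in the plane orthogonal to (1, 1, 1).
    u = np.array([[1 / np.sqrt(2), -1 / np.sqrt(2), 0.0],
                  [1 / np.sqrt(6), 1 / np.sqrt(6), -2 / np.sqrt(6)]])
    chi = log_chrom @ u.T                              # (H, W, 2)
    theta = np.deg2rad(theta_deg)
    gray = chi[..., 0] * np.cos(theta) + chi[..., 1] * np.sin(theta)
    # Rescale to [0, 1] so it can be stacked as an extra input channel.
    return (gray - gray.min()) / (gray.max() - gray.min() + eps)

def mean_jaccard(pred_masks, true_masks, eps=1e-6):
    """Mean Jaccard index (IoU) over paired binary masks."""
    scores = []
    for p, t in zip(pred_masks, true_masks):
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        scores.append((inter + eps) / (union + eps))
    return float(np.mean(scores))
```

A derived band such as this could be stacked with the RGB channels (e.g. np.dstack([rgb, gray])) to form the kind of multi-channel network input the abstract describes.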
Related papers
- Unsupervised Skin Lesion Segmentation via Structural Entropy Minimization on Multi-Scale Superpixel Graphs [59.19218582436495]
We propose an unsupervised Skin Lesion sEgmentation framework based on structural entropy and isolation forest outlier Detection, namely SLED.
Skin lesions are segmented by minimizing the structural entropy of a superpixel graph constructed from the dermoscopic image.
We characterize the consistency of healthy skin features and devise a novel multi-scale segmentation mechanism by outlier detection, which enhances the segmentation accuracy by leveraging the superpixel features from multiple scales.
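As an illustration only, a minimal Python sketch of the two building blocks this summary names, superpixel-graph construction and isolation-forest outlier scoring, assuming a recent scikit-image and scikit-learn; the structural-entropy minimization itself and the SLED code are not reproduced here.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.graph import rag_mean_color
from sklearn.ensemble import IsolationForest

def superpixel_graph_and_outliers(image, n_segments=400):
    """Build a superpixel graph and score superpixels as outliers.

    image: float RGB array in [0, 1], shape (H, W, 3).
    """
    labels = slic(image, n_segments=n_segments, start_label=0)
    rag = rag_mean_color(image, labels)          # region adjacency graph

    # One mean-color feature vector per superpixel.
    feats = np.stack([
        image[labels == lab].mean(axis=0) for lab in np.unique(labels)
    ])
    # Lesion superpixels are rare relative to healthy skin, so they tend
    # to receive lower (more anomalous) isolation-forest scores.
    scores = IsolationForest(random_state=0).fit(feats).score_samples(feats)
    return labels, rag, scores
```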
arXiv Detail & Related papers (2023-09-05T02:15:51Z)
- DARC: Distribution-Aware Re-Coloring Model for Generalizable Nucleus Segmentation [68.43628183890007]
We argue that domain gaps can also be caused by different foreground (nucleus)-background ratios.
First, we introduce a re-coloring method that relieves dramatic image color variations between different domains.
Second, we propose a new instance normalization method that is robust to the variation in the foreground-background ratios.
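For reference, a minimal PyTorch sketch of plain instance normalization, the operation this second point builds on; DARC's ratio-robust variant is not reproduced here.

```python
import torch

def instance_norm(x, eps=1e-5):
    """Standard per-image, per-channel instance normalization.

    x: feature map of shape (N, C, H, W).
    """
    mean = x.mean(dim=(2, 3), keepdim=True)
    var = x.var(dim=(2, 3), keepdim=True, unbiased=False)
    return (x - mean) / torch.sqrt(var + eps)
```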
arXiv Detail & Related papers (2023-09-01T01:01:13Z)
- Learning Semantic-Aware Knowledge Guidance for Low-Light Image Enhancement [69.47143451986067]
Low-light image enhancement (LLIE) investigates how to improve illumination and produce normal-light images.
The majority of existing methods improve low-light images in a global and uniform manner, without taking into account the semantic information of different regions.
We propose a novel semantic-aware knowledge-guided framework that can assist a low-light enhancement model in learning rich and diverse priors encapsulated in a semantic segmentation model.
arXiv Detail & Related papers (2023-04-14T10:22:28Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
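A minimal PyTorch sketch of the dual-task layout this summary describes, a shared encoder with two independent decoders (segmentation and inpainting); the layer sizes and block design are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class DualTaskNet(nn.Module):
    """Shared encoder with two independent decoders: one head predicts
    the lesion mask, the other inpaints the (masked-out) lesion region."""

    def __init__(self, in_ch=3, base=32):
        super().__init__()
        self.encoder = nn.Sequential(
            conv_block(in_ch, base), nn.MaxPool2d(2),
            conv_block(base, base * 2), nn.MaxPool2d(2),
            conv_block(base * 2, base * 4),
        )

        def decoder(out_ch):
            return nn.Sequential(
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                conv_block(base * 4, base * 2),
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                conv_block(base * 2, base),
                nn.Conv2d(base, out_ch, 1),
            )

        self.seg_head = decoder(1)          # lesion logits
        self.inpaint_head = decoder(in_ch)  # reconstructed image

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.inpaint_head(feats)
```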
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- Color Invariant Skin Segmentation [17.501659517108884]
This paper addresses the problem of automatically detecting human skin in images without reliance on color information.
A primary motivation of the work has been to achieve results that are consistent across the full range of skin tones.
We present a new approach that performs well in the absence of such information.
arXiv Detail & Related papers (2022-04-21T05:07:21Z)
- Saliency-based segmentation of dermoscopic images using color information [3.8073142980733]
This paper investigates how color information, besides saliency, can be used to determine the pigmented lesion region automatically.
We propose a novel method employing a binarization process coupled with new perceptual criteria inspired by human visual perception.
We have assessed the method on two public databases, including 1497 dermoscopic images.
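As a rough stand-in only, a Python sketch of a generic color-saliency-plus-binarization step (CIELAB distance from the mean image color, thresholded with Otsu's method); the paper's perceptual criteria are not reproduced here.

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.filters import threshold_otsu

def naive_color_saliency_mask(rgb):
    """Binary lesion candidate mask from a crude color-saliency map.

    rgb: float array in [0, 1], shape (H, W, 3).
    """
    lab = rgb2lab(rgb)
    # Saliency as distance from the mean image color in CIELAB.
    saliency = np.linalg.norm(lab - lab.reshape(-1, 3).mean(axis=0), axis=-1)
    return saliency > threshold_otsu(saliency)
```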
arXiv Detail & Related papers (2020-11-26T08:47:10Z)
- Leveraging Adaptive Color Augmentation in Convolutional Neural Networks for Deep Skin Lesion Segmentation [0.0]
We propose an adaptive color augmentation technique to amplify data expression and model performance.
We qualitatively identify and verify the semantic structural features learned by the network for discriminating skin lesions against normal skin tissue.
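For illustration, a Python sketch of a generic HSV color jitter of the kind such augmentation builds on; the adaptive part of the paper's technique is not reproduced, and the jitter ranges below are arbitrary assumptions.

```python
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb

def random_color_jitter(rgb, rng=None, hue=0.05, sat=0.2, val=0.2):
    """Randomly perturb hue, saturation, and value of an RGB image in [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    hsv = rgb2hsv(rgb)
    hsv[..., 0] = (hsv[..., 0] + rng.uniform(-hue, hue)) % 1.0
    hsv[..., 1] = np.clip(hsv[..., 1] * rng.uniform(1 - sat, 1 + sat), 0, 1)
    hsv[..., 2] = np.clip(hsv[..., 2] * rng.uniform(1 - val, 1 + val), 0, 1)
    return hsv2rgb(hsv)
```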
arXiv Detail & Related papers (2020-10-31T00:16:23Z)
- DONet: Dual Objective Networks for Skin Lesion Segmentation [77.9806410198298]
We propose a simple yet effective framework, named Dual Objective Networks (DONet), to improve the skin lesion segmentation.
Our DONet adopts two symmetric decoders to produce different predictions for approaching different objectives.
To address the challenge of the large variety of lesion scales and shapes in dermoscopic images, we additionally propose a recurrent context encoding module (RCEM).
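As an illustration of training two decoders toward different objectives, a minimal PyTorch sketch combining a region-overlap (Dice) loss with pixel-wise cross-entropy; which objective DONet assigns to which decoder, and the exact losses used, are assumptions here.

```python
import torch
import torch.nn.functional as F

def dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss for single-channel lesion logits; target in {0, 1}, shape (N, 1, H, W)."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    denom = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return (1 - (2 * inter + eps) / (denom + eps)).mean()

def dual_objective_loss(logits_a, logits_b, target):
    """One decoder trained with a region-overlap objective, the other pixel-wise."""
    return dice_loss(logits_a, target) + F.binary_cross_entropy_with_logits(logits_b, target)
```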
arXiv Detail & Related papers (2020-08-19T06:02:46Z)
- Stain Style Transfer of Histopathology Images Via Structure-Preserved Generative Learning [31.254432319814864]
This study proposes two stain style transfer models, SSIM-GAN and DSCSI-GAN, based on generative adversarial networks.
By incorporating structure-preservation metrics and feedback from an auxiliary diagnosis network during learning, medically relevant information is preserved in the color-normalized images.
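As an illustration of a structure-preservation term in a generator loss, a simplified uniform-window SSIM in PyTorch; the papers' exact metrics (the SSIM and DSCSI variants) and weighting are not reproduced, and lam is an arbitrary assumption.

```python
import torch
import torch.nn.functional as F

def ssim_uniform(x, y, window=11, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM with a uniform window; inputs in [0, 1], shape (N, C, H, W)."""
    pad = window // 2
    mu_x = F.avg_pool2d(x, window, 1, pad)
    mu_y = F.avg_pool2d(y, window, 1, pad)
    sigma_x = F.avg_pool2d(x * x, window, 1, pad) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, window, 1, pad) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, window, 1, pad) - mu_x * mu_y
    ssim_map = ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    )
    return ssim_map.mean()

def generator_loss(adv_loss, source, normalized, lam=10.0):
    """Adversarial loss plus a penalty that keeps the color-normalized image
    structurally close to its source."""
    return adv_loss + lam * (1.0 - ssim_uniform(source, normalized))
```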
arXiv Detail & Related papers (2020-07-24T15:30:19Z)
- Bridging the gap between Natural and Medical Images through Deep Colorization [15.585095421320922]
Transfer learning from natural image collections is a standard practice that attempts to tackle shape, texture and color discrepancies.
In this work, we propose to disentangle those challenges and design a dedicated network module that focuses on color adaptation.
We combine learning from scratch of the color module with transfer learning of different classification backbones, obtaining an end-to-end, easy-to-train architecture for diagnostic image recognition.
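A minimal PyTorch sketch of the overall layout this summary outlines, a small from-scratch color-adaptation module in front of a transfer-learned backbone (assuming torchvision >= 0.13); the 1x1-convolution design of the module and the choice of ResNet-18 are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class ColorAdaptedClassifier(nn.Module):
    """Color-adaptation module trained from scratch, followed by a
    pretrained classification backbone fine-tuned for the target task."""

    def __init__(self, num_classes=2):
        super().__init__()
        self.color_module = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 3, kernel_size=1),
        )
        self.backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, x):
        return self.backbone(self.color_module(x))
```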
arXiv Detail & Related papers (2020-05-21T12:03:14Z) - Pathological Retinal Region Segmentation From OCT Images Using Geometric
Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.