CIFF-Net: Contextual Image Feature Fusion for Melanoma Diagnosis
- URL: http://arxiv.org/abs/2303.03672v1
- Date: Tue, 7 Mar 2023 06:16:10 GMT
- Title: CIFF-Net: Contextual Image Feature Fusion for Melanoma Diagnosis
- Authors: Md Awsafur Rahman, Bishmoy Paul, Tanvir Mahmud and Shaikh Anowarul Fattah
- Abstract summary: Melanoma is considered the deadliest variant of skin cancer, causing around 75% of total skin cancer deaths.
To diagnose melanoma, clinicians assess and compare multiple skin lesions of the same patient concurrently.
This concurrent multi-image comparative method has not been explored by existing deep learning-based schemes.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Melanoma is considered the deadliest variant of skin cancer, causing around 75% of total skin cancer deaths. To diagnose melanoma, clinicians assess and compare multiple skin lesions of the same patient concurrently to gather contextual information about the patterns and abnormalities of the skin. So far, this concurrent multi-image comparative method has not been explored by existing deep learning-based schemes. In this paper, a deep neural network based on contextual image feature fusion (CIFF-Net) is proposed, which integrates patient-level contextual information into traditional approaches for improved melanoma diagnosis via the concurrent multi-image comparative method. The proposed multi-kernel self-attention (MKSA) module offers better generalization of the extracted features by introducing multi-kernel operations into the self-attention mechanism. To utilize both self-attention and contextual feature-wise attention, an attention-guided module named contextual feature fusion (CFF) is proposed that integrates the features extracted from different contextual images into a single feature vector. Finally, in the comparative contextual feature fusion (CCFF) module, primary and contextual features are compared concurrently to generate comparative features. Significant performance improvement over traditional approaches on the ISIC-2020 dataset validates the effectiveness of the proposed contextual learning scheme.
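To make the pipeline concrete, below is a minimal PyTorch sketch of how the three modules described in the abstract could fit together. The module names (MKSA, CFF, CCFF) follow the paper, but every implementation detail here, including the depth-wise multi-kernel convolutions, the attention-weighted pooling in CFF, and the difference/product comparison in CCFF, is an illustrative assumption, not the authors' code.

```python
# Hedged sketch of CIFF-Net's modules as described in the abstract.
# All shapes, kernel sizes, and fusion operators are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MKSA(nn.Module):
    """Multi-kernel self-attention: plain self-attention augmented with
    depth-wise convolutions of several kernel sizes (one plausible reading
    of "multi-kernel operations in the self-attention mechanism")."""
    def __init__(self, dim, heads=4, kernel_sizes=(1, 3, 5)):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.convs = nn.ModuleList(
            nn.Conv1d(dim, dim, k, padding=k // 2, groups=dim)
            for k in kernel_sizes
        )
        self.proj = nn.Linear(dim * len(kernel_sizes), dim)

    def forward(self, x):                       # x: (B, N, dim)
        x, _ = self.attn(x, x, x)               # vanilla self-attention
        c = x.transpose(1, 2)                   # (B, dim, N) for Conv1d
        multi = torch.cat([conv(c) for conv in self.convs], dim=1)
        return x + self.proj(multi.transpose(1, 2))  # residual fusion


class CFF(nn.Module):
    """Contextual feature fusion: attention-weighted pooling of the
    contextual lesion features into a single vector."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, ctx):                     # ctx: (B, K, dim), K lesions
        w = torch.softmax(self.score(ctx), dim=1)
        return (w * ctx).sum(dim=1)             # (B, dim)


class CCFF(nn.Module):
    """Comparative contextual feature fusion: compares the primary feature
    with the fused context; difference and product are assumed comparison
    operators, as the abstract does not specify them."""
    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Linear(dim * 4, dim)

    def forward(self, primary, context):        # both (B, dim)
        comp = torch.cat(
            [primary, context, primary - context, primary * context], dim=-1)
        return F.relu(self.fc(comp))


# Example: one primary lesion plus K=5 contextual lesions, dim=256.
B, K, dim = 2, 5, 256
feats = torch.randn(B, K + 1, dim)              # per-lesion backbone features
feats = MKSA(dim)(feats)
fused_ctx = CFF(dim)(feats[:, 1:])              # fuse the contextual lesions
out = CCFF(dim)(feats[:, 0], fused_ctx)         # compare with the primary
```

In the paper, these modules sit on top of features extracted from each of the patient's lesion images; the sketch assumes those per-lesion embeddings have already been computed by a shared backbone.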
Related papers
- Enhancing Multimodal Medical Image Classification using Cross-Graph Modal Contrastive Learning [5.660131312162423]
This paper proposes a novel Cross-Graph Modal Contrastive Learning (CGMCL) framework for multimodal medical image classification.
The proposed approach is evaluated on two datasets: a Parkinson's disease (PD) dataset and a public melanoma dataset.
Results demonstrate that CGMCL outperforms conventional unimodal methods in accuracy, interpretability, and early disease prediction.
arXiv Detail & Related papers (2024-10-23T01:25:25Z)
- Multiscale Color Guided Attention Ensemble Classifier for Age-Related Macular Degeneration using Concurrent Fundus and Optical Coherence Tomography Images [1.159256777373941]
This paper proposes a modality-specific multiscale color space embedding integrated with the attention mechanism based on transfer learning for classification.
To analyze the performance of the proposed MCGAEc method, a publicly available multi-modality dataset from Project Macula for AMD is utilized, and the results are compared with existing models.
arXiv Detail & Related papers (2024-09-01T13:17:45Z)
- Pay Less On Clinical Images: Asymmetric Multi-Modal Fusion Method For Efficient Multi-Label Skin Lesion Classification [6.195015783344803]
Existing multi-modal approaches primarily focus on enhancing multi-label skin lesion classification performance through advanced fusion modules.
In this paper, we introduce a novel asymmetric multi-modal fusion method for efficient multi-label skin lesion classification.
arXiv Detail & Related papers (2024-07-13T20:46:04Z)
- Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z)
- Optimizing Skin Lesion Classification via Multimodal Data and Auxiliary Task Integration [54.76511683427566]
This research introduces a novel multimodal method for classifying skin lesions, integrating smartphone-captured images with essential clinical and demographic information.
A distinctive aspect of this method is the integration of an auxiliary task focused on super-resolution image prediction.
The experimental evaluations have been conducted using the PAD-UFES20 dataset, applying various deep-learning architectures.
arXiv Detail & Related papers (2024-02-16T05:16:20Z)
- Joint-Individual Fusion Structure with Fusion Attention Module for Multi-Modal Skin Cancer Classification [10.959827268372422]
We propose a new fusion method that combines dermatological images and patient metadata for skin cancer classification.
First, we propose a joint-individual fusion (JIF) structure that learns the shared features of multi-modality data.
Second, we introduce a fusion attention (FA) module that enhances the most relevant image and metadata features.
arXiv Detail & Related papers (2023-12-07T10:16:21Z)
- Beyond Images: An Integrative Multi-modal Approach to Chest X-Ray Report Generation [47.250147322130545]
Image-to-text radiology report generation aims to automatically produce radiology reports that describe the findings in medical images.
Most existing methods focus solely on the image data, disregarding the other patient information accessible to radiologists.
We present a novel multi-modal deep neural network framework for generating chest X-ray reports by integrating structured patient data, such as vital signs and symptoms, alongside unstructured clinical notes.
arXiv Detail & Related papers (2023-11-18T14:37:53Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images to medical image enhancement and utilize the enhanced results to cope with the low-contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- Malignancy Prediction and Lesion Identification from Clinical Dermatological Images [65.1629311281062]
We consider machine-learning-based malignancy prediction and lesion identification from clinical dermatological images.
The method first identifies all lesions present in the image regardless of sub-type or likelihood of malignancy, then estimates their likelihood of malignancy, and, through aggregation, generates an image-level likelihood of malignancy.
arXiv Detail & Related papers (2021-04-02T20:52:05Z)
- Bi-directional Dermoscopic Feature Learning and Multi-scale Consistent Decision Fusion for Skin Lesion Segmentation [28.300486641368234]
We propose a novel bi-directional dermoscopic feature learning (biDFL) framework to model the complex correlation between skin lesions and their informative context.
We also propose a multi-scale consistent decision fusion (mCDF) that is capable of selectively focusing on the informative decisions generated from multiple classification layers.
arXiv Detail & Related papers (2020-02-20T12:00:24Z)