Hierarchical Salient Patch Identification for Interpretable Fundus Disease Localization
- URL: http://arxiv.org/abs/2405.14334v2
- Date: Wed, 21 Aug 2024 13:46:18 GMT
- Title: Hierarchical Salient Patch Identification for Interpretable Fundus Disease Localization
- Authors: Yitao Peng, Lianghua He, Die Hu
- Abstract summary: We propose a weakly supervised interpretable fundus disease localization method called hierarchical salient patch identification (HSPI).
HSPI can achieve interpretable disease localization using only image-level labels and a neural network classifier (NNC).
We conduct disease localization experiments on fundus image datasets and achieve the best performance on multiple evaluation metrics compared to previous interpretable attribution methods.
- Score: 4.714335699701277
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the widespread application of deep learning technology in medical image analysis, the effective explanation of model predictions and improvement of diagnostic accuracy have become urgent problems that need to be solved. Attribution methods have become key tools to help doctors better understand the diagnostic basis of models, and are used to explain and localize diseases in medical images. However, previous methods suffer from inaccurate and incomplete localization problems for fundus diseases with complex and diverse structures. To solve these problems, we propose a weakly supervised interpretable fundus disease localization method called hierarchical salient patch identification (HSPI) that can achieve interpretable disease localization using only image-level labels and a neural network classifier (NNC). First, we propose salient patch identification (SPI), which divides the image into several patches and optimizes a consistency loss to identify which patch in the input image is most important for the network's prediction, in order to locate the disease. Second, we propose a hierarchical identification strategy that forces SPI to analyze the importance of different areas to the neural network classifier's prediction, so as to comprehensively locate disease areas. Conditional peak focusing is then introduced to ensure that the mask vector can accurately locate the disease area. Finally, we propose patch selection based on multi-sized intersections to filter out incorrectly or additionally identified non-disease regions. We conduct disease localization experiments on fundus image datasets and achieve the best performance on multiple evaluation metrics compared to previous interpretable attribution methods. Additional ablation studies are conducted to verify the effectiveness of each method.
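The abstract describes SPI as learning which image patches most influence a frozen classifier's output. As a rough illustration only, the sketch below (assuming a pretrained PyTorch classifier and a simple mask-preservation objective; it does not reproduce the paper's exact consistency loss, hierarchical strategy, conditional peak focusing, or multi-sized patch selection) shows how a per-patch mask could be optimized against a fixed network:

```python
# Illustrative patch-based saliency sketch, assuming a pretrained PyTorch
# classifier `model` that maps a (1, C, H, W) tensor to class logits.
# The objective below is a generic preservation-style loss, not the
# paper's consistency loss.
import torch
import torch.nn.functional as F

def patch_saliency(model, image, target_class, patch=32, steps=200, lam=0.05):
    """Learn a per-patch mask indicating which patches drive the prediction.

    image: tensor of shape (1, C, H, W); H and W divisible by `patch`.
    Returns a (H // patch, W // patch) saliency map with values in [0, 1].
    """
    _, _, h, w = image.shape
    gh, gw = h // patch, w // patch
    mask_logits = torch.zeros(1, 1, gh, gw, requires_grad=True)
    opt = torch.optim.Adam([mask_logits], lr=0.1)

    model.eval()
    for _ in range(steps):
        m = torch.sigmoid(mask_logits)
        # Upsample the patch-level mask to pixel resolution and apply it.
        m_full = F.interpolate(m, size=(h, w), mode="nearest")
        score = model(image * m_full)[0, target_class]
        # Keep the target prediction high while using as few patches as possible.
        loss = -score + lam * m.sum()
        opt.zero_grad()
        loss.backward()
        opt.step()

    return torch.sigmoid(mask_logits).detach()[0, 0]
```

Thresholding the returned map would then give candidate disease patches; in HSPI this step is refined hierarchically and filtered by multi-sized intersections rather than by a single threshold.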
Related papers
- FeaInfNet: Diagnosis in Medical Image with Feature-Driven Inference and Visual Explanations [4.022446255159328]
Interpretable deep learning models have received widespread attention in the field of image recognition.
Many previously proposed interpretability models still suffer from insufficient accuracy and interpretability in medical image disease diagnosis.
We propose feature-driven inference network (FeaInfNet) to solve these problems.
arXiv Detail & Related papers (2023-12-04T13:09:00Z) - Class Attention to Regions of Lesion for Imbalanced Medical Image Recognition [59.28732531600606]
We propose a framework named Class Attention to REgions of the lesion (CARE) to handle data imbalance issues.
The CARE framework needs bounding boxes to represent the lesion regions of rare diseases.
Results show that the CARE variants with automated bounding box generation are comparable to the original CARE framework.
arXiv Detail & Related papers (2023-07-19T15:19:02Z) - sMRI-PatchNet: A novel explainable patch-based deep learning network for Alzheimer's disease diagnosis and discriminative atrophy localisation with Structural MRI [18.234996137020406]
The size of 3D high-resolution data poses a significant challenge for data analysis and processing.
Patch-based methods, which divide the whole image into several small regular patches, have shown promise for more efficient sMRI-based image analysis.
This work proposes a novel patch-based deep learning network (sMRI-PatchNet) with explainable patch localisation and selection for Alzheimer's disease diagnosis using sMRI.
arXiv Detail & Related papers (2023-02-17T16:01:15Z) - Unsupervised deep learning techniques for powdery mildew recognition based on multispectral imaging [63.62764375279861]
This paper presents a deep learning approach to automatically recognize powdery mildew on cucumber leaves.
We focus on unsupervised deep learning techniques applied to multispectral imaging data.
We propose the use of autoencoder architectures to investigate two strategies for disease detection.
arXiv Detail & Related papers (2021-12-20T13:29:13Z) - Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z) - Deep Joint Learning of Pathological Region Localization and Alzheimer's Disease Diagnosis [4.5484714814315685]
BrainBagNet is a framework for jointly learning pathological region localization and Alzheimer's disease diagnosis.
The proposed method represents patch-level responses from whole-brain MRI scans and discriminative brain regions from position information.
In five-fold cross-validation, the proposed method outperformed state-of-the-art methods in both AD diagnosis and mild cognitive impairment prediction tasks.
arXiv Detail & Related papers (2021-08-10T10:06:54Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z) - Inheritance-guided Hierarchical Assignment for Clinical Automatic Diagnosis [50.15205065710629]
Clinical diagnosis, which aims to assign diagnosis codes for a patient based on the clinical note, plays an essential role in clinical decision-making.
We propose a novel framework to combine the inheritance-guided hierarchical assignment and co-occurrence graph propagation for clinical automatic diagnosis.
arXiv Detail & Related papers (2021-01-27T13:16:51Z) - Weakly Supervised Thoracic Disease Localization via Disease Masks [29.065791290544983]
Weakly supervised localization methods have been proposed that use only image-level annotation.
We propose a spatial attention method using disease masks that describe the areas where diseases mainly occur.
We show that the proposed method results in superior localization performances compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-01-25T06:52:57Z) - Explainable Disease Classification via weakly-supervised segmentation [4.154485485415009]
Deep learning approaches to Computer Aided Diagnosis (CAD) typically pose diagnosis as a binary image classification (Normal or Abnormal) problem.
This paper examines this problem and proposes an approach which mimics the clinical practice of looking for evidence prior to diagnosis.
The proposed solution is then adapted to Breast Cancer detection from mammographic images.
arXiv Detail & Related papers (2020-08-24T09:00:30Z) - Collaborative Unsupervised Domain Adaptation for Medical Image Diagnosis [102.40869566439514]
We seek to exploit rich labeled data from relevant domains to aid learning in the target task via Unsupervised Domain Adaptation (UDA).
Unlike most UDA methods that rely on clean labeled data or assume samples are equally transferable, we innovatively propose a Collaborative Unsupervised Domain Adaptation algorithm.
We theoretically analyze the generalization performance of the proposed method, and also empirically evaluate it on both medical and general images.
arXiv Detail & Related papers (2020-07-05T11:49:17Z)