FeaInfNet: Diagnosis in Medical Image with Feature-Driven Inference and
Visual Explanations
- URL: http://arxiv.org/abs/2312.01871v1
- Date: Mon, 4 Dec 2023 13:09:00 GMT
- Title: FeaInfNet: Diagnosis in Medical Image with Feature-Driven Inference and
Visual Explanations
- Authors: Yitao Peng, Lianghua He, Die Hu, Yihang Liu, Longzhen Yang, Shaohua
Shang
- Abstract summary: Interpretable deep learning models have received widespread attention in the field of image recognition.
Many existing interpretability models still suffer from insufficient accuracy and interpretability in medical image disease diagnosis.
We propose the feature-driven inference network (FeaInfNet) to address these problems.
- Score: 4.022446255159328
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Interpretable deep learning models have received widespread attention in the
field of image recognition. Owing to the unique multi-instance learning setting
of medical images and the difficulty of identifying decision-making regions,
many of the interpretability models proposed so far still suffer from
insufficient accuracy and interpretability in medical image disease diagnosis.
To address these problems, we propose the feature-driven inference network
(FeaInfNet). Our first key innovation is a feature-based network reasoning
structure, applied in FeaInfNet, that compares each sub-region image patch with
the disease and normal templates that may appear in that region and then
combines the per-region comparisons into the final diagnosis. This simulates a
doctor's diagnostic process, making the reasoning interpretable while
preventing normal regions from misleading the decision. Second, we propose
local feature masks (LFM) for feature-vector extraction, which provide global
information to these vectors and enhance the expressive ability of FeaInfNet.
Finally, we propose adaptive dynamic masks (Adaptive-DM) that map feature
vectors and prototypes to human-understandable image patches, providing
accurate visual explanations. We conducted qualitative and quantitative
experiments on multiple publicly available medical datasets, including RSNA,
iChallenge-PM, Covid-19, ChinaCXRSet, and MontgomerySet. The results of our
experiments validate that our method achieves state-of-the-art performance in
terms of classification accuracy and interpretability compared to baseline
methods in medical image diagnosis. Additional ablation studies verify the
effectiveness of each of our proposed components.
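
As a rough illustration of the feature-driven reasoning structure described in the abstract, the sketch below scores each sub-region feature vector against learned disease and normal prototypes and keeps only the strongest disease evidence for the image-level decision. All module names, tensor shapes, and the aggregation rule are assumptions made for illustration; this is not the authors' implementation.

```python
# Minimal sketch (not the paper's code) of prototype-based, per-region reasoning
# in the spirit of FeaInfNet: every sub-region is compared with disease and
# normal templates, and normal-looking regions contribute little to the score.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureDrivenHead(nn.Module):
    def __init__(self, feat_dim: int, num_regions: int, num_prototypes: int):
        super().__init__()
        # Assumed layout: separate disease/normal prototypes for every region.
        self.disease_protos = nn.Parameter(torch.randn(num_regions, num_prototypes, feat_dim))
        self.normal_protos = nn.Parameter(torch.randn(num_regions, num_prototypes, feat_dim))

    def forward(self, region_feats: torch.Tensor) -> torch.Tensor:
        # region_feats: (batch, num_regions, feat_dim), one vector per image patch.
        f = F.normalize(region_feats, dim=-1).unsqueeze(2)              # (B, R, 1, D)
        d_sim = (f * F.normalize(self.disease_protos, dim=-1)).sum(-1)  # (B, R, P) cosine similarity
        n_sim = (f * F.normalize(self.normal_protos, dim=-1)).sum(-1)   # (B, R, P)
        # Each region votes by how much more it resembles its disease templates
        # than its normal templates; the image-level score is the strongest vote.
        region_evidence = d_sim.max(dim=-1).values - n_sim.max(dim=-1).values  # (B, R)
        return region_evidence.max(dim=-1).values                              # (B,)
```

For example, `FeatureDrivenHead(feat_dim=512, num_regions=49, num_prototypes=4)(torch.randn(2, 49, 512))` returns one disease-evidence score per image in the batch; a binary diagnosis could then be obtained by thresholding or passing the score through a sigmoid.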
Related papers
- Advancing Medical Image Segmentation: Morphology-Driven Learning with Diffusion Transformer [4.672688418357066]
We propose a novel Transformer Diffusion (DTS) model for robust segmentation in the presence of noise.
Our model, which analyzes the morphological representation of images, shows better results than the previous models in various medical imaging modalities.
arXiv Detail & Related papers (2024-08-01T07:35:54Z)
- Hierarchical Salient Patch Identification for Interpretable Fundus Disease Localization [4.714335699701277]
We propose a weakly supervised interpretable fundus disease localization method called hierarchical salient patch identification (HSPI).
HSPI can achieve interpretable disease localization using only image-level labels and a neural network classifier (NNC).
We conduct disease localization experiments on fundus image datasets and achieve the best performance on multiple evaluation metrics compared to previous interpretable attribution methods.
arXiv Detail & Related papers (2024-05-23T09:07:21Z)
- QUBIQ: Uncertainty Quantification for Biomedical Image Segmentation Challenge [93.61262892578067]
Uncertainty in medical image segmentation tasks, especially inter-rater variability, presents a significant challenge.
This variability directly impacts the development and evaluation of automated segmentation algorithms.
We report the set-up and summarize the benchmark results of the Quantification of Uncertainties in Biomedical Image Quantification Challenge (QUBIQ).
arXiv Detail & Related papers (2024-03-19T17:57:24Z)
- Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels (a generic sketch of this adapter pattern appears after this list).
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z)
- VALD-MD: Visual Attribution via Latent Diffusion for Medical Diagnostics [0.0]
Visual attribution in medical imaging seeks to make evident the diagnostically-relevant components of a medical image.
We here present a novel generative visual attribution technique, one that leverages latent diffusion models in combination with domain-specific large language models.
The resulting system also exhibits a range of latent capabilities including zero-shot localized disease induction.
arXiv Detail & Related papers (2024-01-02T19:51:49Z)
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
- Exploiting Causality Signals in Medical Images: A Pilot Study with Empirical Results [1.2400966570867322]
We present a novel technique to discover and exploit weak causal signals directly from images via neural networks for classification purposes.
This way, we model how the presence of a feature in one part of the image affects the appearance of another feature in a different part of the image.
Our method consists of a convolutional neural network backbone and a causality-factors extractor module, which computes weights to enhance each feature map according to its causal influence in the scene.
arXiv Detail & Related papers (2023-09-19T08:00:26Z)
- Introducing Shape Prior Module in Diffusion Model for Medical Image Segmentation [7.7545714516743045]
We propose an end-to-end framework called VerseDiff-UNet, which leverages the denoising diffusion probabilistic model (DDPM).
Our approach integrates the diffusion model into a standard U-shaped architecture.
We evaluate our method on a single dataset of spine images acquired through X-ray imaging.
arXiv Detail & Related papers (2023-09-12T03:05:00Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
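
One of the entries above, the CLIP adaptation framework for medical anomaly detection, mentions inserting residual adapters into a frozen pre-trained visual encoder. The snippet below is a generic sketch of that adapter pattern under assumed names and sizes; it is not taken from that paper's code.

```python
# Generic residual-adapter sketch (illustrative only): a small bottleneck MLP
# whose output is added back to the frozen backbone features, so the adapter
# learns a task-specific correction without overwriting pre-trained knowledge.
import torch
import torch.nn as nn

class ResidualAdapter(nn.Module):
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)  # project features to a narrow bottleneck
        self.up = nn.Linear(bottleneck, dim)    # project back to the original dimension
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection: frozen features pass through unchanged plus a learned delta.
        return x + self.up(self.act(self.down(x)))
```

Placing one such adapter after each encoder stage would give the stepwise, multi-level feature refinement that the summary refers to.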