DiffMIC: Dual-Guidance Diffusion Network for Medical Image
Classification
- URL: http://arxiv.org/abs/2303.10610v3
- Date: Tue, 11 Jul 2023 06:50:32 GMT
- Title: DiffMIC: Dual-Guidance Diffusion Network for Medical Image
Classification
- Authors: Yijun Yang, Huazhu Fu, Angelica I. Aviles-Rivero, Carola-Bibiane Schönlieb, Lei Zhu
- Abstract summary: We propose the first diffusion-based model (named DiffMIC) to address general medical image classification.
Our experimental results demonstrate that DiffMIC outperforms state-of-the-art methods by a significant margin.
- Score: 32.67098520984195
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffusion Probabilistic Models have recently shown remarkable performance in
generative image modeling, attracting significant attention in the computer
vision community. However, while a substantial amount of diffusion-based
research has focused on generative tasks, few studies have applied diffusion
models to general medical image classification. In this paper, we propose the
first diffusion-based model (named DiffMIC) to address general medical image
classification by eliminating unexpected noise and perturbations in medical
images and robustly capturing semantic representation. To achieve this goal, we
devise a dual conditional guidance strategy that conditions each diffusion step
with multiple granularities to improve step-wise regional attention.
Furthermore, we propose learning the mutual information in each granularity by
enforcing Maximum-Mean Discrepancy regularization during the diffusion forward
process. We evaluate the effectiveness of our DiffMIC on three medical
classification tasks with different image modalities, including placental
maturity grading on ultrasound images, skin lesion classification using
dermatoscopic images, and diabetic retinopathy grading using fundus images. Our
experimental results demonstrate that DiffMIC outperforms state-of-the-art
methods by a significant margin, indicating the universality and effectiveness
of the proposed model. Our code will be publicly available at
https://github.com/scott-yjyang/DiffMIC.
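The abstract's Maximum-Mean Discrepancy (MMD) regularization compares feature distributions across granularities. As a hedged illustration only (the kernel choice and function names below are assumptions, not taken from the DiffMIC code), a standard RBF-kernel estimate of squared MMD between two sample sets can be written as:

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise RBF kernel matrix between the rows of x and y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of squared Maximum-Mean Discrepancy between
    # samples x and y; zero when the two distributions match.
    kxx = rbf_kernel(x, x, sigma).mean()
    kyy = rbf_kernel(y, y, sigma).mean()
    kxy = rbf_kernel(x, y, sigma).mean()
    return kxx + kyy - 2 * kxy
```

Minimizing such a term during the forward process would encourage the feature distributions at different granularities to agree.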
Related papers
- Conditional Diffusion Models are Medical Image Classifiers that Provide Explainability and Uncertainty for Free [0.7624308578421438]
This work presents the first exploration of the potential of class conditional diffusion models for 2D medical image classification.
We develop a novel majority voting scheme shown to improve the performance of medical diffusion classifiers.
Experiments on the CheXpert and ISIC Melanoma skin cancer datasets demonstrate that foundation and trained-from-scratch diffusion models achieve competitive performance.
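The summary mentions a majority voting scheme over diffusion classifiers. A minimal sketch of that idea, assuming only that the base classifier is stochastic across seeds (the paper's exact voting scheme is not specified here):

```python
from collections import Counter

def majority_vote(classify_once, x, n_runs=5):
    # Run a stochastic classifier several times with different seeds
    # and return the most frequent predicted label.
    votes = [classify_once(x, seed=s) for s in range(n_runs)]
    return Counter(votes).most_common(1)[0][0]
```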
arXiv Detail & Related papers (2025-02-06T00:37:21Z)
- Latent Drifting in Diffusion Models for Counterfactual Medical Image Synthesis [55.959002385347645]
Scaling by training on large datasets has been shown to enhance the quality and fidelity of image generation and manipulation with diffusion models.
Latent Drifting enables diffusion models to be conditioned for medical images fitted for the complex task of counterfactual image generation.
Our results demonstrate significant performance gains in various scenarios when combined with different fine-tuning schemes.
arXiv Detail & Related papers (2024-12-30T01:59:34Z)
- DAug: Diffusion-based Channel Augmentation for Radiology Image Retrieval and Classification [24.68697717585541]
We propose a portable method that improves a perception model's performance with a generative model's output.
Specifically, we extend a radiology image to multiple channels, with the additional channels being the heatmaps of regions where diseases tend to develop.
Our method is motivated by the fact that generative models learn the distribution of normal and abnormal images, and such knowledge is complementary to image understanding tasks.
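The channel-extension step described above can be sketched as a simple concatenation; shapes and names here are illustrative assumptions, and the heatmap generator itself is not shown:

```python
import numpy as np

def augment_channels(image, heatmaps):
    # image: (1, H, W) grayscale scan; heatmaps: (K, H, W) maps of
    # regions where diseases tend to develop. The result stacks them
    # as extra input channels for a downstream perception model.
    return np.concatenate([image, heatmaps], axis=0)  # (1 + K, H, W)
```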
arXiv Detail & Related papers (2024-12-06T07:43:28Z)
- Diffusion Models in Low-Level Vision: A Survey [82.77962165415153]
Diffusion model-based solutions have been widely acclaimed for their ability to produce samples of superior quality and diversity.
We present three generic diffusion modeling frameworks and explore their correlations with other deep generative models.
We summarize extended diffusion models applied in other tasks, including medical, remote sensing, and video scenarios.
arXiv Detail & Related papers (2024-06-17T01:49:27Z)
- Discrepancy-based Diffusion Models for Lesion Detection in Brain MRI [1.8420387715849447]
Diffusion probabilistic models (DPMs) have exhibited significant effectiveness in computer vision tasks.
Their notable performance heavily relies on labelled datasets, which limits their application in medical images.
This paper introduces a novel framework by incorporating distinctive discrepancy features.
arXiv Detail & Related papers (2024-05-08T11:26:49Z)
- Introducing Shape Prior Module in Diffusion Model for Medical Image Segmentation [7.7545714516743045]
We propose an end-to-end framework called VerseDiff-UNet, which leverages the denoising diffusion probabilistic model (DDPM).
Our approach integrates the diffusion model into a standard U-shaped architecture.
We evaluate our method on a single dataset of spine images acquired through X-ray imaging.
arXiv Detail & Related papers (2023-09-12T03:05:00Z)
- Ambiguous Medical Image Segmentation using Diffusion Models [60.378180265885945]
We introduce a single diffusion model-based approach that produces multiple plausible outputs by learning a distribution over group insights.
Our proposed model generates a distribution of segmentation masks by leveraging the inherent sampling process of diffusion.
Comprehensive results show that our proposed approach outperforms existing state-of-the-art ambiguous segmentation networks.
arXiv Detail & Related papers (2023-04-10T17:58:22Z)
- Your Diffusion Model is Secretly a Zero-Shot Classifier [90.40799216880342]
We show that density estimates from large-scale text-to-image diffusion models can be leveraged to perform zero-shot classification.
Our generative approach to classification attains strong results on a variety of benchmarks.
Our results are a step toward using generative over discriminative models for downstream tasks.
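The idea of classifying with a generative diffusion model can be sketched as scoring each candidate class by the conditional model's denoising error and picking the lowest. This is a hedged illustration with a simplified noising schedule; `eps_model` is a stand-in for a class-conditional noise predictor, not the paper's actual API:

```python
import numpy as np

def classify(x0, class_ids, eps_model, n_timesteps=10, seed=0):
    # Score each class by the conditional denoising error, averaged
    # over random timesteps; lower error indicates a better class fit.
    rng = np.random.default_rng(seed)
    errors = {c: 0.0 for c in class_ids}
    for _ in range(n_timesteps):
        t = rng.uniform(0.01, 1.0)           # shared timestep across classes
        eps = rng.standard_normal(x0.shape)  # shared noise across classes
        xt = np.sqrt(1 - t) * x0 + np.sqrt(t) * eps  # simplified noising step
        for c in class_ids:
            pred = eps_model(xt, t, c)
            errors[c] += float(((pred - eps) ** 2).mean())
    return min(errors, key=errors.get)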
arXiv Detail & Related papers (2023-03-28T17:59:56Z)
- MedSegDiff-V2: Diffusion based Medical Image Segmentation with Transformer [53.575573940055335]
We propose a novel Transformer-based Diffusion framework, called MedSegDiff-V2.
We verify its effectiveness on 20 medical image segmentation tasks with different image modalities.
arXiv Detail & Related papers (2023-01-19T03:42:36Z)
- Diffusion Models for Medical Image Analysis: A Comprehensive Survey [7.272308924113656]
Denoising diffusion models, a class of generative models, have garnered immense interest lately in various deep-learning problems.
Diffusion models are widely appreciated for their strong mode coverage and quality of the generated samples.
This survey intends to provide a comprehensive overview of diffusion models in the discipline of medical image analysis.
arXiv Detail & Related papers (2022-11-14T23:50:52Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.