DiffMIC: Dual-Guidance Diffusion Network for Medical Image
Classification
- URL: http://arxiv.org/abs/2303.10610v3
- Date: Tue, 11 Jul 2023 06:50:32 GMT
- Title: DiffMIC: Dual-Guidance Diffusion Network for Medical Image
Classification
- Authors: Yijun Yang, Huazhu Fu, Angelica I. Aviles-Rivero, Carola-Bibiane Schönlieb, Lei Zhu
- Abstract summary: We propose the first diffusion-based model (named DiffMIC) to address general medical image classification.
Our experimental results demonstrate that DiffMIC outperforms state-of-the-art methods by a significant margin.
- Score: 32.67098520984195
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffusion Probabilistic Models have recently shown remarkable performance in
generative image modeling, attracting significant attention in the computer
vision community. However, while a substantial amount of diffusion-based
research has focused on generative tasks, few studies have applied diffusion
models to general medical image classification. In this paper, we propose the
first diffusion-based model (named DiffMIC) to address general medical image
classification by eliminating unexpected noise and perturbations in medical
images and robustly capturing semantic representation. To achieve this goal, we
devise a dual conditional guidance strategy that conditions each diffusion step
with multiple granularities to improve step-wise regional attention.
Furthermore, we propose learning the mutual information in each granularity by
enforcing Maximum-Mean Discrepancy regularization during the diffusion forward
process. We evaluate the effectiveness of our DiffMIC on three medical
classification tasks with different image modalities, including placental
maturity grading on ultrasound images, skin lesion classification using
dermatoscopic images, and diabetic retinopathy grading using fundus images. Our
experimental results demonstrate that DiffMIC outperforms state-of-the-art
methods by a significant margin, indicating the universality and effectiveness
of the proposed model. Our code will be publicly available at
https://github.com/scott-yjyang/DiffMIC.
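As a rough, non-authoritative illustration of the recipe sketched in the abstract, the snippet below diffuses a one-hot label vector in the forward process, trains a denoiser conditioned on two granularities of image features (a whole-image prior and a region-level prior), and adds a Maximum-Mean Discrepancy term between the two branches. Every name and hyperparameter here (the encoder modules, the denoiser signature, the 0.1 loss weight, the MMD target) is an assumption for illustration, not the released DiffMIC implementation.

```python
# Minimal sketch of a dual-guided diffusion classification training step.
# Module names, signatures, and weights are illustrative assumptions,
# not the released DiffMIC code.
import torch
import torch.nn.functional as F

def mmd_rbf(x, y, sigma=1.0):
    """Maximum-Mean Discrepancy with an RBF kernel (biased estimate)."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def training_step(x_img, y_onehot, global_enc, local_enc, denoiser, alphas_cumprod):
    """Diffuse the label, denoise it under dual image conditioning,
    and regularize the two conditioning branches with MMD."""
    B = y_onehot.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (B,), device=x_img.device)
    a_bar = alphas_cumprod[t].unsqueeze(-1)                       # (B, 1)

    # Forward (noising) process on the label vector: y_0 -> y_t.
    noise = torch.randn_like(y_onehot)
    y_t = a_bar.sqrt() * y_onehot + (1.0 - a_bar).sqrt() * noise

    # Dual conditional guidance: coarse (whole-image) and fine (region) priors,
    # assumed here to share the same feature dimension.
    g_feat = global_enc(x_img)
    l_feat = local_enc(x_img)

    # The denoiser predicts the injected noise from y_t, t, and both priors.
    eps_hat = denoiser(y_t, t, g_feat, l_feat)
    loss_diff = F.mse_loss(eps_hat, noise)

    # MMD regularization between the two granularities (illustrative target).
    loss_mmd = mmd_rbf(g_feat, l_feat)
    return loss_diff + 0.1 * loss_mmd
```

At inference time, the reverse process would start from Gaussian noise and iteratively denoise toward a class-score vector under the same dual conditioning, with the final prediction taken as the argmax.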
Related papers
- Diffusion Models in Low-Level Vision: A Survey [82.77962165415153] (arXiv 2024-06-17)
Diffusion model-based solutions have been widely acclaimed for their ability to produce samples of superior quality and diversity.
We present three generic diffusion modeling frameworks and explore their correlations with other deep generative models.
We summarize extended diffusion models applied in other tasks, including medical, remote sensing, and video scenarios.
- Discrepancy-based Diffusion Models for Lesion Detection in Brain MRI [1.8420387715849447] (arXiv 2024-05-08)
Diffusion probabilistic models (DPMs) have exhibited significant effectiveness in computer vision tasks.
Their notable performance heavily relies on labelled datasets, which limits their application to medical images.
This paper introduces a novel framework that incorporates distinctive discrepancy features.
- EMIT-Diff: Enhancing Medical Image Segmentation via Text-Guided Diffusion Model [4.057796755073023] (arXiv 2023-10-19)
We develop controllable diffusion models for medical image synthesis, called EMIT-Diff.
We leverage recent diffusion probabilistic models to generate realistic and diverse synthetic medical image data.
In our approach, we ensure that the synthesized samples adhere to medically relevant constraints.
- Plug-and-Play Feature Generation for Few-Shot Medical Image Classification [23.969183389866686] (arXiv 2023-10-14)
Few-shot learning presents immense potential for enhancing model generalization and practicality in medical image classification with limited training data.
We propose MedMFG, a flexible and lightweight plug-and-play method designed to generate sufficient class-distinctive features from limited samples.
- Introducing Shape Prior Module in Diffusion Model for Medical Image Segmentation [7.7545714516743045] (arXiv 2023-09-12)
We propose an end-to-end framework called VerseDiff-UNet, which leverages the denoising diffusion probabilistic model (DDPM).
Our approach integrates the diffusion model into a standard U-shaped architecture.
We evaluate our method on a single dataset of spine images acquired through X-ray imaging.
- Ambiguous Medical Image Segmentation using Diffusion Models [60.378180265885945] (arXiv 2023-04-10)
We introduce a single diffusion model-based approach that produces multiple plausible outputs by learning a distribution over group insights.
Our proposed model generates a distribution of segmentation masks by leveraging the inherent sampling process of diffusion.
Comprehensive results show that our proposed approach outperforms existing state-of-the-art ambiguous segmentation networks.
- Your Diffusion Model is Secretly a Zero-Shot Classifier [90.40799216880342] (arXiv 2023-03-28)
We show that density estimates from large-scale text-to-image diffusion models can be leveraged to perform zero-shot classification; a minimal sketch of this idea appears after this list.
Our generative approach to classification attains strong results on a variety of benchmarks.
Our results are a step toward using generative rather than discriminative models for downstream tasks.
- MedSegDiff-V2: Diffusion based Medical Image Segmentation with Transformer [53.575573940055335] (arXiv 2023-01-19)
We propose a novel Transformer-based diffusion framework, called MedSegDiff-V2.
We verify its effectiveness on 20 medical image segmentation tasks with different image modalities.
- Diffusion Models for Medical Image Analysis: A Comprehensive Survey [7.272308924113656] (arXiv 2022-11-14)
Denoising diffusion models, a class of generative models, have recently garnered immense interest across various deep-learning problems.
Diffusion models are widely appreciated for their strong mode coverage and the quality of their generated samples.
This survey intends to provide a comprehensive overview of diffusion models in the discipline of medical image analysis.
- Learning Discriminative Representation via Metric Learning for Imbalanced Medical Image Classification [52.94051907952536] (arXiv 2022-07-14)
We propose embedding metric learning into the first stage of the two-stage framework, specifically to help the feature extractor learn to extract more discriminative feature representations.
Experiments, mainly on three medical image datasets, show that the proposed approach consistently outperforms existing one-stage and two-stage approaches.
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736] (arXiv 2020-12-10)
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of deep embeddings to encourage clustering of the feature domains of the same class.
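The zero-shot classifier entry above hinges on one mechanism: a conditional diffusion model implicitly scores each candidate class by how well class-conditioned denoising explains an image, so the class with the lowest average denoising error is chosen. Below is a minimal, hypothetical sketch of that procedure; the `model(x_t, t, cond)` signature, the per-class conditions, and the trial count are illustrative assumptions, not the cited paper's released code.

```python
# Hypothetical sketch of zero-shot classification by conditional denoising.
# The model signature and class conditions are assumptions, not the
# cited paper's implementation.
import torch
import torch.nn.functional as F

@torch.no_grad()
def zero_shot_classify(x0, class_conds, model, alphas_cumprod, n_trials=32):
    """Return the index of the class whose conditioning yields the lowest
    average noise-prediction error on noised versions of the image x0."""
    errors = []
    for cond in class_conds:                          # one condition per class
        err = 0.0
        for _ in range(n_trials):
            t = torch.randint(0, len(alphas_cumprod), (1,), device=x0.device)
            a_bar = alphas_cumprod[t]                 # size-1 tensor, broadcasts
            noise = torch.randn_like(x0)
            x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
            eps_hat = model(x_t, t, cond)             # conditional noise prediction
            err += F.mse_loss(eps_hat, noise).item()
        errors.append(err / n_trials)
    return int(torch.tensor(errors).argmin())
```

Sharing the same (t, noise) draws across all classes would reduce the variance of the comparison; the independent loop above is kept only for clarity.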
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.