LeFusion: Synthesizing Myocardial Pathology on Cardiac MRI via Lesion-Focus Diffusion Models
- URL: http://arxiv.org/abs/2403.14066v1
- Date: Thu, 21 Mar 2024 01:25:39 GMT
- Title: LeFusion: Synthesizing Myocardial Pathology on Cardiac MRI via Lesion-Focus Diffusion Models
- Authors: Hantao Zhang, Jiancheng Yang, Shouhong Wan, Pascal Fua
- Abstract summary: This study aims to mitigate biases in clinical data, such as long-tail imbalance and algorithmic unfairness, through data synthesis.
Inspired by diffusion-based image inpainting, we propose LeFusion, lesion-focused diffusion models.
Using the popular nnUNet, we demonstrate that the synthetic data effectively enhance a state-of-the-art segmentation model.
- Score: 46.59911767338791
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data generated in clinical practice often exhibits biases, such as long-tail imbalance and algorithmic unfairness. This study aims to mitigate these challenges through data synthesis. Previous efforts in medical imaging synthesis have struggled with separating lesion information from background context, leading to difficulties in generating high-quality backgrounds and limited control over the synthetic output. Inspired by diffusion-based image inpainting, we propose LeFusion, lesion-focused diffusion models. By redesigning the diffusion learning objectives to concentrate on lesion areas, LeFusion simplifies the model learning process and enhances the controllability of the synthetic output, while preserving the background by integrating forward-diffused background contexts into the reverse diffusion process. Furthermore, we generalize it to jointly handle multi-class lesions and introduce a generative model for lesion masks to increase synthesis diversity. Validated on the DE-MRI cardiac lesion segmentation dataset (Emidec), our methodology employs the popular nnUNet to demonstrate that the synthetic data make it possible to effectively enhance a state-of-the-art model. Code and model are available at https://github.com/M3DV/LeFusion.
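To make the two key ideas in the abstract concrete, the sketch below illustrates (1) a diffusion training loss restricted to the lesion mask and (2) an inpainting-style reverse step that preserves the background by compositing the forward-diffused real background with the generated lesion region. This is a minimal PyTorch sketch, not the authors' implementation: the epsilon-predicting `model`, the `betas`/`alphas`/`alphas_cumprod` schedules, and all tensor shapes are assumptions rather than names taken from the LeFusion codebase.

```python
# Illustrative sketch only -- not the LeFusion code. Assumes an
# epsilon-predicting denoiser `model(x_t, t)` and precomputed DDPM
# schedules `betas`, `alphas = 1 - betas`, `alphas_cumprod`.
import torch


def lesion_focused_loss(model, x0, lesion_mask, t, alphas_cumprod):
    """Noise-prediction MSE computed only inside the lesion mask,
    so learning concentrates on the lesion area rather than the background."""
    noise = torch.randn_like(x0)
    ab_t = alphas_cumprod[t].view(-1, 1, 1, 1)
    x_t = ab_t.sqrt() * x0 + (1 - ab_t).sqrt() * noise      # forward diffusion
    pred = model(x_t, t)                                     # predicted noise
    per_pixel = (pred - noise) ** 2 * lesion_mask
    return per_pixel.sum() / lesion_mask.sum().clamp(min=1.0)


@torch.no_grad()
def background_preserving_step(model, x_t, x0_bg, lesion_mask, t,
                               betas, alphas, alphas_cumprod):
    """One DDPM reverse step: generate only the lesion region and take the
    background from the forward-diffused real image (inpainting-style)."""
    t_batch = torch.full((x_t.shape[0],), t, device=x_t.device, dtype=torch.long)
    beta_t, a_t, ab_t = betas[t], alphas[t], alphas_cumprod[t]
    eps = model(x_t, t_batch)
    mean = (x_t - beta_t / (1 - ab_t).sqrt() * eps) / a_t.sqrt()
    if t > 0:
        x_gen = mean + beta_t.sqrt() * torch.randn_like(x_t)
        ab_prev = alphas_cumprod[t - 1]
        # Forward-diffuse the known background to the matching noise level t-1.
        x_bg = ab_prev.sqrt() * x0_bg + (1 - ab_prev).sqrt() * torch.randn_like(x0_bg)
    else:
        x_gen, x_bg = mean, x0_bg
    # Composite: generated pixels inside the lesion, real background elsewhere.
    return lesion_mask * x_gen + (1 - lesion_mask) * x_bg
```

A full pipeline along the lines of the abstract would additionally handle multi-class lesion masks jointly and sample those masks from a separate generative model; both are omitted here for brevity.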
Related papers
- Joint Holistic and Lesion Controllable Mammogram Synthesis via Gated Conditional Diffusion Model [12.360775476995169]
Gated Conditional Diffusion Model (GCDM) is a novel framework designed to jointly synthesize holistic mammogram images and localized lesions. GCDM achieves precise control over small lesion areas while enhancing the realism and diversity of synthesized mammograms.
arXiv Detail & Related papers (2025-07-25T12:10:45Z) - Causal Disentanglement for Robust Long-tail Medical Image Generation [80.15257897500578]
We propose a novel medical image generation framework, which generates independent pathological and structural features.
We leverage a diffusion model guided by pathological findings to model pathological features, enabling the generation of diverse counterfactual images.
arXiv Detail & Related papers (2025-04-20T01:54:18Z) - LesionDiffusion: Towards Text-controlled General Lesion Synthesis [1.6029418399561406]
We propose LesionDiffusion, a text-controllable lesion synthesis framework for 3D CT imaging.
Our model provides greater control over lesion attributes and supports a wider variety of lesion types.
We introduce a dataset of 1,505 annotated CT scans with paired lesion masks and structured reports, covering 14 lesion types across 8 organs.
arXiv Detail & Related papers (2025-03-02T05:36:04Z) - Latent Drifting in Diffusion Models for Counterfactual Medical Image Synthesis [55.959002385347645]
Latent Drifting enables diffusion models to be conditioned for medical images, making them suited to the complex task of counterfactual image generation.
We evaluate our method on three public longitudinal benchmark datasets of brain MRI and chest X-rays for counterfactual image generation.
arXiv Detail & Related papers (2024-12-30T01:59:34Z) - Diffusion based multi-domain neuroimaging harmonization method with preservation of anatomical details [0.0]
Multi-center neuroimaging studies face technical variability due to batch differences across sites.
Generative Adversarial Networks (GANs) have been a prominent method for addressing image harmonization tasks.
We have assessed the efficacy of the diffusion model for neuroimaging harmonization.
arXiv Detail & Related papers (2024-09-01T18:54:00Z) - StealthDiffusion: Towards Evading Diffusion Forensic Detection through Diffusion Model [62.25424831998405]
StealthDiffusion is a framework that modifies AI-generated images into high-quality, imperceptible adversarial examples.
It is effective in both white-box and black-box settings, transforming AI-generated images into high-quality adversarial forgeries.
arXiv Detail & Related papers (2024-08-11T01:22:29Z) - FDiff-Fusion:Denoising diffusion fusion network based on fuzzy learning for 3D medical image segmentation [21.882697860720803]
We propose a denoising diffusion fusion network based on fuzzy learning for 3D medical image segmentation (FDiff-Fusion).
By integrating the denoising diffusion model into the classical U-Net network, this model can effectively extract rich semantic information from input medical images.
Results show that FDiff-Fusion significantly improves the Dice scores and HD95 distance on two datasets.
arXiv Detail & Related papers (2024-07-22T02:27:01Z) - Deformation-Recovery Diffusion Model (DRDM): Instance Deformation for Image Manipulation and Synthesis [13.629617915974531]
Deformation-Recovery Diffusion Model (DRDM) is a diffusion-based generative model based on deformation diffusion and recovery.
DRDM is trained to recover unreasonable deformation components, thereby restoring each randomly deformed image to a realistic distribution.
Experimental results in cardiac MRI and pulmonary CT show DRDM is capable of creating diverse, large (over 10% image size deformation scale) deformations.
arXiv Detail & Related papers (2024-07-10T01:26:48Z) - EMIT-Diff: Enhancing Medical Image Segmentation via Text-Guided Diffusion Model [4.057796755073023]
We develop controllable diffusion models for medical image synthesis, called EMIT-Diff.
We leverage recent diffusion probabilistic models to generate realistic and diverse synthetic medical image data.
In our approach, we ensure that the synthesized samples adhere to medically relevant constraints.
arXiv Detail & Related papers (2023-10-19T16:18:02Z) - ArSDM: Colonoscopy Images Synthesis with Adaptive Refinement Semantic Diffusion Models [69.9178140563928]
Colonoscopy analysis is essential for assisting clinical diagnosis and treatment.
The scarcity of annotated data limits the effectiveness and generalization of existing methods.
We propose an Adaptive Refinement Semantic Diffusion Model (ArSDM) to generate colonoscopy images that benefit the downstream tasks.
arXiv Detail & Related papers (2023-09-03T07:55:46Z) - DiffMIC: Dual-Guidance Diffusion Network for Medical Image Classification [32.67098520984195]
We propose the first diffusion-based model (named DiffMIC) to address general medical image classification.
Our experimental results demonstrate that DiffMIC outperforms state-of-the-art methods by a significant margin.
arXiv Detail & Related papers (2023-03-19T09:15:45Z) - SinDiffusion: Learning a Diffusion Model from a Single Natural Image [159.4285444680301]
We present SinDiffusion, leveraging denoising diffusion models to capture internal distribution of patches from a single natural image.
It is based on two core designs. First, SinDiffusion is trained with a single model at a single scale instead of multiple models with progressive growing of scales.
Second, we identify that a patch-level receptive field of the diffusion network is crucial and effective for capturing the image's patch statistics.
arXiv Detail & Related papers (2022-11-22T18:00:03Z) - Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z) - Incremental Cross-view Mutual Distillation for Self-supervised Medical CT Synthesis [88.39466012709205]
This paper builds a novel medical slice synthesis method to increase the between-slice resolution.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.