MAM-E: Mammographic synthetic image generation with diffusion models
- URL: http://arxiv.org/abs/2311.09822v1
- Date: Thu, 16 Nov 2023 11:49:49 GMT
- Title: MAM-E: Mammographic synthetic image generation with diffusion models
- Authors: Ricardo Montoya-del-Angel, Karla Sam-Millan, Joan C Vilanova, Robert Martí
- Abstract summary: We propose exploring the use of diffusion models for the generation of high quality full-field digital mammograms.
We introduce MAM-E, a pipeline of generative models for high quality mammography synthesis controlled by a text prompt.
- Score: 0.21360081064127018
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative models are used as an alternative data augmentation technique to
alleviate the data scarcity problem faced in the medical imaging field.
Diffusion models have gathered special attention due to their innovative
generation approach, the high quality of the generated images and their
relatively less complex training process compared with Generative Adversarial
Networks. Still, the implementation of such models in the medical domain
remains at early stages. In this work, we propose exploring the use of
diffusion models for the generation of high quality full-field digital
mammograms using state-of-the-art conditional diffusion pipelines.
Additionally, we propose using stable diffusion models for the inpainting of
synthetic lesions on healthy mammograms. We introduce MAM-E, a pipeline of
generative models for high quality mammography synthesis controlled by a text
prompt and capable of generating synthetic lesions on specific regions of the
breast. Finally, we provide quantitative and qualitative assessment of the
generated images and easy-to-use graphical user interfaces for mammography
synthesis.
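The pipeline described above has two stages that map naturally onto off-the-shelf tooling: text-prompted synthesis of full-field mammograms and stable-diffusion inpainting of lesions onto healthy breasts. Below is a minimal sketch of how such a pipeline might be driven with the Hugging Face diffusers library; the checkpoint paths, prompts, and file names are illustrative assumptions, not the authors' released artifacts.

```python
# Sketch of the two generation modes described in the abstract, using
# Hugging Face diffusers. Checkpoint names are placeholders, not the
# authors' released weights.
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionInpaintPipeline
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1) Text-prompted full-field mammogram synthesis.
txt2img = StableDiffusionPipeline.from_pretrained(
    "path/to/mammography-finetuned-sd"  # hypothetical fine-tuned checkpoint
).to(device)
mammogram = txt2img(
    prompt="full-field digital mammogram, craniocaudal view, dense breast",
    num_inference_steps=50,
).images[0]

# 2) Inpainting a synthetic lesion into a chosen region of a healthy mammogram.
inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "path/to/mammography-inpainting-sd"  # hypothetical inpainting checkpoint
).to(device)
healthy = Image.open("healthy_mammogram.png").convert("RGB")
mask = Image.open("lesion_region_mask.png").convert("L")  # white = region to fill
lesioned = inpaint(
    prompt="mammogram with a spiculated mass",
    image=healthy,
    mask_image=mask,
    num_inference_steps=50,
).images[0]
lesioned.save("synthetic_lesion_mammogram.png")
```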
Related papers
- DiffDoctor: Diagnosing Image Diffusion Models Before Treating [57.82359018425674]
We propose DiffDoctor, a two-stage pipeline to assist image diffusion models in generating fewer artifacts.
We collect a dataset of over 1M flawed synthesized images and set up an efficient human-in-the-loop annotation process.
The learned artifact detector is then involved in the second stage to tune the diffusion model by assigning a per-pixel confidence map to each image; a schematic of such confidence weighting follows this entry.
arXiv Detail & Related papers (2025-01-21T18:56:41Z)
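The per-pixel confidence map mentioned above suggests a simple tuning recipe: weight the diffusion objective pixel-wise by the detector's output. A hedged PyTorch sketch of that idea is below; the detector source, shapes, and weighting direction are assumptions, since the summary does not specify them.

```python
# Schematic of confidence-weighted tuning: a per-pixel map from an artifact
# detector re-weights the standard noise-prediction loss. The detector,
# shapes, and weighting direction are illustrative assumptions.
import torch

def weighted_diffusion_loss(pred_noise: torch.Tensor,
                            true_noise: torch.Tensor,
                            artifact_map: torch.Tensor) -> torch.Tensor:
    # artifact_map in [0, 1]; one plausible reading is to down-weight
    # pixels the detector flags as artifact-prone.
    confidence = 1.0 - artifact_map
    return (confidence * (pred_noise - true_noise) ** 2).mean()

pred = torch.randn(4, 3, 64, 64, requires_grad=True)  # model output stand-in
target = torch.randn(4, 3, 64, 64)                    # sampled noise target
artifact_map = torch.rand(4, 1, 64, 64)               # hypothetical detector output
loss = weighted_diffusion_loss(pred, target, artifact_map)
loss.backward()
```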
- Latent Drifting in Diffusion Models for Counterfactual Medical Image Synthesis [55.959002385347645]
Scaling by training on large datasets has been shown to enhance the quality and fidelity of image generation and manipulation with diffusion models.
Latent Drifting enables diffusion models to be conditioned for medical images and fitted to the complex task of counterfactual image generation.
Our results demonstrate significant performance gains in various scenarios when combined with different fine-tuning schemes.
arXiv Detail & Related papers (2024-12-30T01:59:34Z)
- Multibranch Generative Models for Multichannel Imaging with an Application to PET/CT Synergistic Reconstruction [42.95604565673447]
This paper presents a novel approach for learned synergistic reconstruction of medical images using multibranch generative models.
We demonstrate the efficacy of our approach on both Modified National Institute of Standards and Technology (MNIST) and positron emission tomography (PET)/computed tomography (CT) datasets.
arXiv Detail & Related papers (2024-04-12T18:21:08Z)
- Paired Diffusion: Generation of related, synthetic PET-CT-Segmentation scans using Linked Denoising Diffusion Probabilistic Models [0.0]
This research introduces a novel architecture that is able to generate multiple, related PET-CT-tumour mask pairs using paired networks and conditional encoders.
Our approach includes innovative, time-step-controlled mechanisms and a 'noise-seeding' strategy to improve DDPM sampling consistency; a minimal seeding sketch follows this entry.
arXiv Detail & Related papers (2024-03-26T14:21:49Z)
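One generic way to read the 'noise-seeding' idea is to fix the initial Gaussian noise with a shared seed so that related sampling runs stay aligned. The sketch below illustrates that reading with diffusers, using a public unconditional checkpoint as a stand-in for the paper's paired networks.

```python
# Illustrative sketch of noise-seeding: reusing the same seeded initial
# noise across related sampling runs so outputs stay aligned. This is a
# generic interpretation, not the paper's exact mechanism.
import torch
from diffusers import DDPMPipeline

pipe = DDPMPipeline.from_pretrained("google/ddpm-celebahq-256")  # stand-in model

def sample_with_seed(seed: int):
    # A fixed seed makes the initial Gaussian noise, and hence the
    # sampling trajectory, reproducible across runs.
    generator = torch.Generator(device="cpu").manual_seed(seed)
    return pipe(generator=generator, num_inference_steps=100).images[0]

# Two runs from the same seed start from identical noise; paired networks
# sharing this noise would produce spatially consistent (e.g., PET/CT) outputs.
img_a = sample_with_seed(1234)
img_b = sample_with_seed(1234)
```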
- Learned representation-guided diffusion models for large-image generation [58.192263311786824]
We introduce a novel approach that trains diffusion models conditioned on embeddings from self-supervised learning (SSL).
Our diffusion models successfully project these features back to high-quality histopathology and remote sensing images.
Augmenting real data by generating variations of real images improves downstream accuracy for patch-level and larger, image-scale classification tasks; a minimal conditioning sketch follows this entry.
arXiv Detail & Related papers (2023-12-12T14:45:45Z)
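Conditioning a diffusion model on SSL embeddings can be sketched with diffusers' UNet2DConditionModel, feeding the embedding through cross-attention in place of text features. The tiny configuration and random embeddings below are illustrative assumptions, not the paper's architecture.

```python
# Sketch of SSL-embedding conditioning: a small cross-attention UNet takes
# a projected SSL feature (e.g., a DINO embedding mapped to width 256) where
# a text encoder's output would normally go. Shapes are toy assumptions.
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel(
    sample_size=32,
    in_channels=4,
    out_channels=4,
    block_out_channels=(64, 128),
    down_block_types=("CrossAttnDownBlock2D", "DownBlock2D"),
    up_block_types=("UpBlock2D", "CrossAttnUpBlock2D"),
    cross_attention_dim=256,  # must match the conditioning embedding width
)

ssl_emb = torch.randn(2, 1, 256)   # stand-in for a projected SSL embedding
x_t = torch.randn(2, 4, 32, 32)    # noisy latents
t = torch.tensor([10, 500])        # diffusion timesteps
eps = unet(x_t, t, encoder_hidden_states=ssl_emb).sample
```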
- Knowledge-based in silico models and dataset for the comparative evaluation of mammography AI for a range of breast characteristics, lesion conspicuities and doses [2.9362519537872647]
We release M-SYNTH, a dataset of cohorts with four breast fibroglandular density distributions imaged at different exposure levels.
We find that model performance decreases with increasing breast density and increases with higher mass density, as expected.
As exposure levels decrease, AI model performance drops, with the highest performance achieved at exposure levels lower than the nominal recommended dose for the breast type.
arXiv Detail & Related papers (2023-10-27T21:14:30Z)
- Boosting Dermatoscopic Lesion Segmentation via Diffusion Models with Visual and Textual Prompts [27.222844687360823]
We adapt recent advances in generative models, adding control through lesion-specific visual and textual prompts.
It achieves a 9% increase in the SSIM image quality measure and an increase of over 5% in Dice coefficient over prior art.
arXiv Detail & Related papers (2023-10-04T15:43:26Z)
- Steered Diffusion: A Generalized Framework for Plug-and-Play Conditional Image Synthesis [62.07413805483241]
Steered Diffusion is a framework for zero-shot conditional image generation using a diffusion model trained for unconditional generation.
We present experiments using steered diffusion on several tasks including inpainting, colorization, text-guided semantic editing, and image super-resolution; a generic guided-sampling sketch follows this entry.
arXiv Detail & Related papers (2023-09-30T02:03:22Z)
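The sketch below illustrates the general plug-and-play recipe named above: sample from an unconditional diffusion model while nudging each reverse step with the gradient of a measurement-consistency loss (here, masked inpainting). It follows the generic guidance pattern, not the paper's exact algorithm, and uses a public checkpoint as a stand-in.

```python
# Generic sketch of steering an unconditional diffusion model at sampling
# time with a measurement-consistency gradient (masked inpainting).
import torch
from diffusers import DDPMPipeline

pipe = DDPMPipeline.from_pretrained("google/ddpm-celebahq-256")  # stand-in model
unet, scheduler = pipe.unet, pipe.scheduler
scheduler.set_timesteps(250)

y = torch.randn(1, 3, 256, 256)      # observed image (placeholder)
mask = torch.ones_like(y)            # 1 = known pixels
mask[:, :, 96:160, 96:160] = 0.0     # 0 = region to inpaint
guidance_scale = 5.0

x = torch.randn(1, 3, 256, 256)      # start from pure noise
for t in scheduler.timesteps:
    x = x.detach().requires_grad_(True)
    eps = unet(x, t).sample
    # Estimate the clean image x0 from the current noisy sample.
    a_bar = scheduler.alphas_cumprod[t]
    x0_hat = (x - (1 - a_bar).sqrt() * eps) / a_bar.sqrt()
    # Steering loss: keep known pixels consistent with the observation.
    loss = ((mask * (x0_hat - y)) ** 2).mean()
    grad = torch.autograd.grad(loss, x)[0]
    # Standard reverse step, then nudge against the guidance gradient.
    x = scheduler.step(eps, t, x.detach()).prev_sample - guidance_scale * grad
# x now holds the steered sample.
```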
- Diffusion Models as Masked Autoencoders [52.442717717898056]
We revisit generatively pre-training visual representations in light of recent interest in denoising diffusion models.
While directly pre-training with diffusion models does not produce strong representations, we condition diffusion models on masked input and formulate diffusion models as masked autoencoders (DiffMAE).
We perform a comprehensive study on the pros and cons of design choices and build connections between diffusion models and masked autoencoders; a schematic of the masked-input conditioning follows this entry.
arXiv Detail & Related papers (2023-04-06T17:59:56Z)
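A schematic of the masked-input formulation summarized above: diffuse only the masked patches, keep visible patches clean as conditioning, and compute the loss on masked regions only. The toy denoiser and noise schedule below are stand-ins, not the paper's ViT.

```python
# Toy DiffMAE-style training step: noise is added only to masked patches,
# visible patches stay clean as conditioning, and the loss covers masked
# regions only. Denoiser and schedule are illustrative stand-ins.
import torch
import torch.nn as nn

patch = 16
img = torch.randn(8, 3, 224, 224)                 # a toy batch
# Random per-patch mask (1 = masked / to be denoised), expanded to pixels.
n_p = 224 // patch
mask = (torch.rand(8, 1, n_p, n_p) < 0.75).float()
mask = mask.repeat_interleave(patch, -1).repeat_interleave(patch, -2)

t = torch.randint(0, 1000, (8,))
a_bar = torch.cos(t / 1000 * torch.pi / 2) ** 2   # toy cosine noise schedule
a_bar = a_bar.view(8, 1, 1, 1)
noise = torch.randn_like(img)
noisy = a_bar.sqrt() * img + (1 - a_bar).sqrt() * noise
# Corrupt only masked patches; visible patches remain clean conditioning.
model_in = mask * noisy + (1 - mask) * img

denoiser = nn.Conv2d(3, 3, 3, padding=1)          # stand-in for a ViT denoiser
pred_noise = denoiser(model_in)
# As in masked autoencoders, the loss is computed on masked patches only.
loss = ((mask * (pred_noise - noise)) ** 2).mean()
loss.backward()
```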
- Diffusion-Weighted Magnetic Resonance Brain Images Generation with Generative Adversarial Networks and Variational Autoencoders: A Comparison Study [55.78588835407174]
We show that high quality, diverse and realistic-looking diffusion-weighted magnetic resonance images can be synthesized using deep generative models.
We present two networks, the Introspective Variational Autoencoder and the Style-Based GAN, that qualify for data augmentation in the medical field.
arXiv Detail & Related papers (2020-06-24T18:00:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.