MAM-E: Mammographic synthetic image generation with diffusion models
- URL: http://arxiv.org/abs/2311.09822v1
- Date: Thu, 16 Nov 2023 11:49:49 GMT
- Title: MAM-E: Mammographic synthetic image generation with diffusion models
- Authors: Ricardo Montoya-del-Angel, Karla Sam-Millan, Joan C Vilanova, Robert Martí
- Abstract summary: We propose exploring the use of diffusion models for the generation of high quality full-field digital mammograms.
We introduce MAM-E, a pipeline of generative models for high quality mammography synthesis controlled by a text prompt.
- Score: 0.21360081064127018
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative models are used as an alternative data augmentation
technique to alleviate the data scarcity problem in the medical imaging field.
Diffusion models have attracted special attention due to their novel
generation approach, the high quality of the generated images, and a training
process that is simpler than that of Generative Adversarial Networks. Still,
the adoption of such models in the medical domain remains at an early stage.
In this work, we propose exploring the use of diffusion models for the
generation of high-quality full-field digital mammograms using
state-of-the-art conditional diffusion pipelines. Additionally, we propose
using Stable Diffusion models for the inpainting of synthetic lesions on
healthy mammograms. We introduce MAM-E, a pipeline of generative models for
high-quality mammography synthesis that is controlled by a text prompt and
capable of generating synthetic lesions in specific regions of the breast.
Finally, we provide a quantitative and qualitative assessment of the generated
images and easy-to-use graphical user interfaces for mammography synthesis.
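The two core mechanisms the abstract relies on, diffusion-based generation and mask-guided lesion inpainting, can be sketched at a high level. The snippet below is a minimal NumPy illustration of the standard DDPM forward noising process and a RePaint-style mask blend for inpainting; it is not the authors' MAM-E implementation, and the array sizes, noise schedule, and toy data are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear beta schedule (illustrative values, not taken from the paper)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)  # cumulative product, decreases toward 0

def forward_noise(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(a_bar_t) * x_0, (1 - a_bar_t) * I)."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

def inpaint_blend(x_t_generated, x0_known, mask, t, rng):
    """RePaint-style blend: keep noised real pixels outside the lesion mask
    and model-generated pixels inside it (mask == 1 marks the region to
    synthesize)."""
    x_t_known, _ = forward_noise(x0_known, t, rng)
    return mask * x_t_generated + (1.0 - mask) * x_t_known

# Toy 8x8 "mammogram" and a lesion mask over a 3x3 region
x0 = rng.random((8, 8))
mask = np.zeros((8, 8))
mask[2:5, 2:5] = 1.0

xt, eps = forward_noise(x0, t=500, rng=rng)
blended = inpaint_blend(xt, x0, mask, t=500, rng=rng)
print(blended.shape)  # (8, 8)
```

In a full inpainting pipeline this blend is applied at every reverse denoising step, so the healthy tissue outside the mask stays anchored to the real mammogram while the lesion region is synthesized, conditioned here by a text prompt in the paper's setup.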
Related papers
- DEEM: Diffusion Models Serve as the Eyes of Large Language Models for Image Perception [66.88792390480343]
We propose DEEM, a simple and effective approach that utilizes the generative feedback of diffusion models to align the semantic distributions of the image encoder.
DEEM exhibits enhanced robustness and a superior capacity to alleviate hallucinations while utilizing fewer trainable parameters, less pre-training data, and a smaller base model size.
arXiv Detail & Related papers (2024-05-24T05:46:04Z) - Multi-Branch Generative Models for Multichannel Imaging with an Application to PET/CT Joint Reconstruction [42.95604565673447]
This paper presents a proof-of-concept for learned synergistic reconstruction of medical images using multi-branch generative models.
We demonstrate the efficacy of our approach on both Modified National Institute of Standards and Technology (MNIST) and positron emission tomography (PET)/computed tomography (CT) datasets.
Despite challenges such as patch decomposition and model limitations, our results underscore the potential of generative models for enhancing medical imaging reconstruction.
arXiv Detail & Related papers (2024-04-12T18:21:08Z) - Paired Diffusion: Generation of related, synthetic PET-CT-Segmentation scans using Linked Denoising Diffusion Probabilistic Models [0.0]
This research introduces a novel architecture that is able to generate multiple, related PET-CT-tumour mask pairs using paired networks and conditional encoders.
Our approach includes innovative, time-step-controlled mechanisms and a 'noise-seeding' strategy to improve DDPM sampling consistency.
arXiv Detail & Related papers (2024-03-26T14:21:49Z) - LeFusion: Synthesizing Myocardial Pathology on Cardiac MRI via Lesion-Focus Diffusion Models [46.59911767338791]
This study aims to mitigate these challenges through data synthesis.
Inspired by diffusion-based image inpainting, we propose LeFusion, lesion-focused diffusion models.
Our methodology employs the popular nnUNet to demonstrate that the synthetic data make it possible to effectively enhance a state-of-the-art model.
arXiv Detail & Related papers (2024-03-21T01:25:39Z) - Learned representation-guided diffusion models for large-image generation [58.192263311786824]
We introduce a novel approach that trains diffusion models conditioned on embeddings from self-supervised learning (SSL).
Our diffusion models successfully project these features back to high-quality histopathology and remote sensing images.
Augmenting real data by generating variations of real images improves downstream accuracy for patch-level and larger, image-scale classification tasks.
arXiv Detail & Related papers (2023-12-12T14:45:45Z) - Knowledge-based in silico models and dataset for the comparative evaluation of mammography AI for a range of breast characteristics, lesion conspicuities and doses [2.9362519537872647]
We release M-SYNTH, a dataset of cohorts with four breast fibroglandular density distributions imaged at different exposure levels.
We find that model performance decreases with increasing breast density and increases with higher mass density, as expected.
As exposure levels decrease, AI model performance drops, with the highest performance achieved at exposure levels lower than the nominal recommended dose for the breast type.
arXiv Detail & Related papers (2023-10-27T21:14:30Z) - Boosting Dermatoscopic Lesion Segmentation via Diffusion Models with Visual and Textual Prompts [27.222844687360823]
We adapt the latest advances in generative models, adding control flow via lesion-specific visual and textual prompts.
It achieves a 9% increase in the SSIM image quality measure and an over 5% increase in Dice coefficient over prior art.
arXiv Detail & Related papers (2023-10-04T15:43:26Z) - Steered Diffusion: A Generalized Framework for Plug-and-Play Conditional Image Synthesis [62.07413805483241]
Steered Diffusion is a framework for zero-shot conditional image generation using a diffusion model trained for unconditional generation.
We present experiments using steered diffusion on several tasks including inpainting, colorization, text-guided semantic editing, and image super-resolution.
arXiv Detail & Related papers (2023-09-30T02:03:22Z) - Diffusion Models as Masked Autoencoders [52.442717717898056]
We revisit generatively pre-training visual representations in light of recent interest in denoising diffusion models.
While directly pre-training with diffusion models does not produce strong representations, we condition diffusion models on masked input and formulate diffusion models as masked autoencoders (DiffMAE).
We perform a comprehensive study on the pros and cons of design choices and build connections between diffusion models and masked autoencoders.
arXiv Detail & Related papers (2023-04-06T17:59:56Z) - MammoGANesis: Controlled Generation of High-Resolution Mammograms for Radiology Education [0.0]
We train a generative adversarial network (GAN) to synthesize 512 x 512 high-resolution mammograms.
The resulting model leads to the unsupervised separation of high-level features.
We demonstrate the model's ability to generate medically relevant mammograms by achieving an average AUC of 0.54 in a double-blind study.
arXiv Detail & Related papers (2020-10-11T06:47:56Z) - Diffusion-Weighted Magnetic Resonance Brain Images Generation with Generative Adversarial Networks and Variational Autoencoders: A Comparison Study [55.78588835407174]
We show that high quality, diverse and realistic-looking diffusion-weighted magnetic resonance images can be synthesized using deep generative models.
We present two networks, the Introspective Variational Autoencoder and the Style-Based GAN, that qualify for data augmentation in the medical field.
arXiv Detail & Related papers (2020-06-24T18:00:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.