MAM-E: Mammographic synthetic image generation with diffusion models
- URL: http://arxiv.org/abs/2311.09822v1
- Date: Thu, 16 Nov 2023 11:49:49 GMT
- Title: MAM-E: Mammographic synthetic image generation with diffusion models
- Authors: Ricardo Montoya-del-Angel, Karla Sam-Millan, Joan C Vilanova, Robert Martí
- Abstract summary: We propose exploring the use of diffusion models for the generation of high quality full-field digital mammograms.
We introduce MAM-E, a pipeline of generative models for high quality mammography synthesis controlled by a text prompt.
- Score: 0.21360081064127018
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative models are used as an alternative data augmentation technique to
alleviate the data scarcity problem faced in the medical imaging field.
Diffusion models have drawn particular attention due to their innovative
generation approach, the high quality of the images they produce, and their
comparatively simpler training process relative to Generative Adversarial
Networks. Still, the adoption of such models in the medical domain remains at
an early stage. In this work, we explore the use of diffusion models for the
generation of high-quality full-field digital mammograms using
state-of-the-art conditional diffusion pipelines. Additionally, we propose
using Stable Diffusion models to inpaint synthetic lesions onto healthy
mammograms. We introduce MAM-E, a pipeline of generative models for
high-quality mammography synthesis that is controlled by a text prompt and
capable of generating synthetic lesions in specific regions of the breast.
Finally, we provide a quantitative and qualitative assessment of the generated
images, along with easy-to-use graphical user interfaces for mammography
synthesis.
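The two-stage workflow the abstract describes (text-conditioned synthesis of a full-field mammogram, then inpainting of a lesion into a masked region) maps naturally onto off-the-shelf latent diffusion tooling. Below is a minimal sketch using the Hugging Face diffusers library; the checkpoint identifiers, prompts, and mask coordinates are illustrative placeholders and are not the authors' released MAM-E weights or prompt vocabulary.

```python
# Minimal sketch of a MAM-E-style two-stage workflow with Hugging Face
# diffusers. All model IDs and prompts below are placeholders, NOT the
# authors' released checkpoints.
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline, StableDiffusionInpaintPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

# Stage 1: text-conditioned synthesis of a full-field mammogram.
txt2img = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # placeholder base checkpoint
    torch_dtype=dtype,
).to(device)
mammogram = txt2img(
    prompt="full-field digital mammogram, craniocaudal view, dense breast",
    num_inference_steps=50,
).images[0]

# Stage 2: inpaint a synthetic lesion into a chosen region of the breast.
inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",  # placeholder checkpoint
    torch_dtype=dtype,
).to(device)
mask = Image.new("L", mammogram.size, 0)  # black = keep, white = regenerate
mask.paste(255, (200, 200, 280, 280))     # hypothetical 80x80 px lesion box
with_lesion = inpaint(
    prompt="mammogram with a spiculated mass",
    image=mammogram,
    mask_image=mask,
).images[0]
with_lesion.save("synthetic_lesion_mammogram.png")
```

In MAM-E the lesion region is selected by the user through the provided graphical interface; the fixed rectangular mask above merely stands in for that selection.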
Related papers
- Synthetic Augmentation for Anatomical Landmark Localization using DDPMs [0.22499166814992436]
Diffusion-based generative models have recently started to gain attention for their ability to generate high-quality synthetic images.
We propose a novel way to assess the quality of the generated images using a Markov Random Field (MRF) model for landmark matching and a Statistical Shape Model (SSM) to check landmark plausibility.
arXiv Detail & Related papers (2024-10-16T12:09:38Z) - Meissonic: Revitalizing Masked Generative Transformers for Efficient High-Resolution Text-to-Image Synthesis [62.06970466554273]
We present Meissonic, which elevates non-autoregressive masked image modeling (MIM) text-to-image generation to a level comparable with state-of-the-art diffusion models like SDXL.
We leverage high-quality training data, integrate micro-conditions informed by human preference scores, and employ feature compression layers to further enhance image fidelity and resolution.
Our model not only matches but often exceeds the performance of existing models like SDXL in generating high-quality, high-resolution images.
arXiv Detail & Related papers (2024-10-10T17:59:17Z) - Multi-Branch Generative Models for Multichannel Imaging with an Application to PET/CT Synergistic Reconstruction [42.95604565673447]
This paper presents a novel approach for learned synergistic reconstruction of medical images using multi-branch generative models.
We demonstrate the efficacy of our approach on both Modified National Institute of Standards and Technology (MNIST) and positron emission tomography (PET) / computed tomography (CT) datasets.
arXiv Detail & Related papers (2024-04-12T18:21:08Z) - YaART: Yet Another ART Rendering Technology [119.09155882164573]
This study introduces YaART, a novel production-grade text-to-image cascaded diffusion model aligned to human preferences.
We analyze how choices of dataset size and image quality affect both the efficiency of the training process and the quality of the generated images.
We demonstrate that models trained on smaller datasets of higher-quality images can successfully compete with those trained on larger datasets.
arXiv Detail & Related papers (2024-04-08T16:51:19Z) - Paired Diffusion: Generation of related, synthetic PET-CT-Segmentation scans using Linked Denoising Diffusion Probabilistic Models [0.0]
This research introduces a novel architecture that is able to generate multiple, related PET-CT-tumour mask pairs using paired networks and conditional encoders.
Our approach includes innovative, time step-controlled mechanisms and a 'noise-seeding' strategy to improve DDPM sampling consistency.
arXiv Detail & Related papers (2024-03-26T14:21:49Z) - Learned representation-guided diffusion models for large-image generation [58.192263311786824]
We introduce a novel approach that trains diffusion models conditioned on embeddings from self-supervised learning (SSL).
Our diffusion models successfully project these features back to high-quality histopathology and remote sensing images.
Augmenting real data by generating variations of real images improves downstream accuracy for patch-level and larger, image-scale classification tasks.
arXiv Detail & Related papers (2023-12-12T14:45:45Z) - Knowledge-based in silico models and dataset for the comparative
evaluation of mammography AI for a range of breast characteristics, lesion
conspicuities and doses [2.9362519537872647]
We release M-SYNTH, a dataset of cohorts with four breast fibroglandular density distributions imaged at different exposure levels.
We find that model performance decreases with increasing breast density and increases with higher mass density, as expected.
As exposure levels decrease, AI model performance drops, with the highest performance achieved at exposure levels lower than the nominal recommended dose for the breast type.
arXiv Detail & Related papers (2023-10-27T21:14:30Z) - Boosting Dermatoscopic Lesion Segmentation via Diffusion Models with
Visual and Textual Prompts [27.222844687360823]
We adapt the latest advances in generative models, adding control flow through lesion-specific visual and textual prompts.
It achieves a 9% increase in the SSIM image quality measure and more than a 5% increase in Dice coefficient over prior art.
arXiv Detail & Related papers (2023-10-04T15:43:26Z) - Steered Diffusion: A Generalized Framework for Plug-and-Play Conditional
Image Synthesis [62.07413805483241]
Steered Diffusion is a framework for zero-shot conditional image generation using a diffusion model trained for unconditional generation.
We present experiments using steered diffusion on several tasks including inpainting, colorization, text-guided semantic editing, and image super-resolution.
arXiv Detail & Related papers (2023-09-30T02:03:22Z) - Diffusion Models as Masked Autoencoders [52.442717717898056]
We revisit generatively pre-training visual representations in light of recent interest in denoising diffusion models.
While directly pre-training with diffusion models does not produce strong representations, we condition diffusion models on masked input, formulating them as masked autoencoders (DiffMAE).
We perform a comprehensive study on the pros and cons of design choices and build connections between diffusion models and masked autoencoders.
arXiv Detail & Related papers (2023-04-06T17:59:56Z) - Diffusion-Weighted Magnetic Resonance Brain Images Generation with
Generative Adversarial Networks and Variational Autoencoders: A Comparison
Study [55.78588835407174]
We show that high quality, diverse and realistic-looking diffusion-weighted magnetic resonance images can be synthesized using deep generative models.
We present two networks, the Introspective Variational Autoencoder and the Style-Based GAN, that qualify for data augmentation in the medical field.
arXiv Detail & Related papers (2020-06-24T18:00:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.