MammoGANesis: Controlled Generation of High-Resolution Mammograms for
Radiology Education
- URL: http://arxiv.org/abs/2010.05177v1
- Date: Sun, 11 Oct 2020 06:47:56 GMT
- Title: MammoGANesis: Controlled Generation of High-Resolution Mammograms for
Radiology Education
- Authors: Cyril Zakka, Ghida Saheb, Elie Najem, Ghina Berjawi
- Abstract summary: We train a generative adversarial network (GAN) to synthesize 512 x 512 high-resolution mammograms.
The resulting model leads to the unsupervised separation of high-level features.
We demonstrate the model's ability to generate anatomically and medically relevant mammograms by achieving an average AUC of 0.54 in a double-blind study.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: During their formative years, radiology trainees are required to interpret
hundreds of mammograms per month, with the objective of becoming apt at
discerning the subtle patterns differentiating benign from malignant lesions.
Unfortunately, medico-legal and technical hurdles make it difficult to access
and query medical images for training.
In this paper we train a generative adversarial network (GAN) to synthesize
512 x 512 high-resolution mammograms. The resulting model leads to the
unsupervised separation of high-level features (e.g. the standard mammography
views and the nature of the breast lesions), with stochastic variation in the
generated images (e.g. breast adipose tissue, calcification), enabling
user-controlled global and local attribute-editing of the synthesized images.
We demonstrate the model's ability to generate anatomically and medically
relevant mammograms through a double-blind study in which four expert
mammography radiologists attempted to distinguish generated from real images,
yielding an average AUC of 0.54, i.e. near-chance discrimination. This attests
to the high visual quality of the synthesized and edited mammograms and to
their potential for advancing and facilitating medical education.
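The abstract describes a generator in which coarse latent layers govern global attributes (e.g. the mammographic view, lesion type) while finer layers contribute stochastic local detail (e.g. adipose tissue, calcifications), enabling attribute editing. The sketch below is a minimal illustration of that kind of latent-space control, not the authors' released code; the mapping-network depth, latent sizes, layer count, and coarse/fine split are assumptions in the spirit of a StyleGAN-style architecture.
```python
# Minimal, illustrative sketch of style-based latent control (a hedged example,
# not the authors' implementation). Layer counts, latent sizes, and the
# coarse/fine split are assumptions for a StyleGAN-like 512 x 512 generator.
import torch
import torch.nn as nn


class MappingNetwork(nn.Module):
    """Maps a Gaussian latent z to an intermediate latent w, the space in which
    high-level attributes tend to separate."""

    def __init__(self, z_dim: int = 512, w_dim: int = 512, depth: int = 8):
        super().__init__()
        layers = []
        for i in range(depth):
            layers += [nn.Linear(z_dim if i == 0 else w_dim, w_dim), nn.LeakyReLU(0.2)]
        self.net = nn.Sequential(*layers)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)


def style_mix(w_global: torch.Tensor, w_local: torch.Tensor,
              crossover_layer: int, num_layers: int = 16) -> torch.Tensor:
    """Broadcast w to per-layer styles and swap styles after `crossover_layer`.

    Coarse (early) layers control global attributes such as the mammographic
    view; fine (late) layers control local texture such as adipose tissue and
    calcifications.
    """
    styles = w_global.unsqueeze(1).repeat(1, num_layers, 1)   # [batch, layers, w_dim]
    styles[:, crossover_layer:] = w_local.unsqueeze(1)        # inject "local" styles
    return styles


if __name__ == "__main__":
    mapping = MappingNetwork()
    z_a, z_b = torch.randn(1, 512), torch.randn(1, 512)
    w_a, w_b = mapping(z_a), mapping(z_b)
    # Keep the global structure of sample A, borrow fine-grained texture from B;
    # the mixed styles would condition the synthesis network at each resolution.
    mixed = style_mix(w_a, w_b, crossover_layer=10)
    print(mixed.shape)  # torch.Size([1, 16, 512])
```
In this scheme, editing a single coarse style while holding the others fixed corresponds to the user-controlled global attribute editing described above, while perturbing only the late styles changes local texture.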
Related papers
- Panoptic Segmentation of Mammograms with Text-To-Image Diffusion Model [1.2130800774416757]
Vision-language diffusion models demonstrated outstanding performance in image generation and transferability to various downstream tasks.
We propose leveraging pretrained features from a Stable Diffusion model as inputs to a state-of-the-art panoptic segmentation architecture.
arXiv Detail & Related papers (2024-07-19T14:04:05Z)
- MAM-E: Mammographic synthetic image generation with diffusion models [0.21360081064127018]
We propose exploring the use of diffusion models for the generation of high quality full-field digital mammograms.
We introduce MAM-E, a pipeline of generative models for high quality mammography synthesis controlled by a text prompt.
arXiv Detail & Related papers (2023-11-16T11:49:49Z)
- Style transfer between Microscopy and Magnetic Resonance Imaging via Generative Adversarial Network in small sample size settings [49.84018914962972]
Cross-modal augmentation of Magnetic Resonance Imaging (MRI) and microscopic imaging based on the same tissue samples is promising.
We tested a method for generating microscopic histological images from MRI scans of the corpus callosum using conditional generative adversarial network (cGAN) architecture.
arXiv Detail & Related papers (2023-10-16T13:58:53Z)
- Can GPT-4V(ision) Serve Medical Applications? Case Studies on GPT-4V for Multimodal Medical Diagnosis [59.35504779947686]
GPT-4V is OpenAI's latest multimodal model, evaluated here on medical diagnosis tasks.
Our evaluation encompasses 17 human body systems.
GPT-4V demonstrates proficiency in distinguishing between medical image modalities and anatomy.
It faces significant challenges in disease diagnosis and generating comprehensive reports.
arXiv Detail & Related papers (2023-10-15T18:32:27Z)
- Patched Diffusion Models for Unsupervised Anomaly Detection in Brain MRI [55.78588835407174]
We propose a method that reformulates the generation task of diffusion models as a patch-based estimation of healthy brain anatomy.
We evaluate our approach on data of tumors and multiple sclerosis lesions and demonstrate a relative improvement of 25.1% compared to existing baselines.
arXiv Detail & Related papers (2023-03-07T09:40:22Z)
- Generation of Artificial CT Images using Patch-based Conditional Generative Adversarial Networks [0.0]
We present an image generation approach that uses generative adversarial networks with a conditional discriminator.
We validate the feasibility of GAN-enhanced medical image generation on whole heart computed tomography (CT) images.
arXiv Detail & Related papers (2022-05-19T20:29:25Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- Meta-repository of screening mammography classifiers [35.24447276237306]
We release a meta-repository containing deep learning models for classification of screening mammograms.
At its inception, our meta-repository contains five state-of-the-art models with open-source implementations.
We compare their performance on five international data sets.
arXiv Detail & Related papers (2021-08-10T17:39:26Z)
- In-Line Image Transformations for Imbalanced, Multiclass Computer Vision Classification of Lung Chest X-Rays [91.3755431537592]
This study aims to leverage a body of literature in order to apply image transformations that would serve to balance the lack of COVID-19 LCXR data.
Deep learning techniques such as convolutional neural networks (CNNs) are able to select features that distinguish between healthy and disease states.
This study utilizes a simple CNN architecture for high-performance multiclass LCXR classification at 94 percent accuracy.
arXiv Detail & Related papers (2021-04-06T02:01:43Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- Synthesizing lesions using contextual GANs improves breast cancer classification on mammograms [0.4297070083645048]
We present a novel generative adversarial network (GAN) model for data augmentation that can realistically synthesize and remove lesions on mammograms.
With self-attention and semi-supervised learning components, the U-net-based architecture can generate high resolution (256x256px) outputs.
arXiv Detail & Related papers (2020-05-29T21:23:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.