Lesion-Aware Generative Artificial Intelligence for Virtual Contrast-Enhanced Mammography in Breast Cancer
- URL: http://arxiv.org/abs/2505.03018v1
- Date: Mon, 05 May 2025 20:41:30 GMT
- Title: Lesion-Aware Generative Artificial Intelligence for Virtual Contrast-Enhanced Mammography in Breast Cancer
- Authors: Aurora Rofena, Arianna Manchia, Claudia Lucia Piccolo, Bruno Beomonte Zobel, Paolo Soda, Valerio Guarrasi
- Abstract summary: Contrast-Enhanced Spectral Mammography (CESM) improves lesion visibility through the administration of an iodinated contrast agent. CESM offers superior diagnostic accuracy compared to standard mammography, but its use entails higher radiation exposure and potential side effects. We propose Seg-CycleGAN, a generative deep learning framework for Virtual Contrast Enhancement in CESM.
- Score: 0.8224504196003954
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Contrast-Enhanced Spectral Mammography (CESM) is a dual-energy mammographic technique that improves lesion visibility through the administration of an iodinated contrast agent. It acquires both a low-energy image, comparable to standard mammography, and a high-energy image, which are then combined to produce a dual-energy subtracted image highlighting lesion contrast enhancement. While CESM offers superior diagnostic accuracy compared to standard mammography, its use entails higher radiation exposure and potential side effects associated with the contrast medium. To address these limitations, we propose Seg-CycleGAN, a generative deep learning framework for Virtual Contrast Enhancement in CESM. The model synthesizes high-fidelity dual-energy subtracted images from low-energy images, leveraging lesion segmentation maps to guide the generative process and improve lesion reconstruction. Building upon the standard CycleGAN architecture, Seg-CycleGAN introduces localized loss terms focused on lesion areas, enhancing the synthesis of diagnostically relevant regions. Experiments on the CESM@UCBM dataset demonstrate that Seg-CycleGAN outperforms the baseline in terms of PSNR and SSIM, while maintaining competitive MSE and VIF. Qualitative evaluations further confirm improved lesion fidelity in the generated images. These results suggest that segmentation-aware generative models offer a viable pathway toward contrast-free CESM alternatives.
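The localized loss idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the weighting scheme, and the weights are hypothetical, assuming binary lesion masks and an L1 reconstruction term added to the usual CycleGAN objectives.

```python
import numpy as np

def lesion_aware_l1(fake_desi, real_desi, lesion_mask,
                    lam_global=1.0, lam_lesion=10.0):
    """Global L1 loss plus an extra L1 term restricted to lesion pixels.

    fake_desi, real_desi : synthesized / ground-truth dual-energy
        subtracted images, arrays of the same shape with values in [0, 1].
    lesion_mask : binary segmentation map (1 = lesion pixel).
    """
    # Standard whole-image reconstruction term.
    global_term = np.abs(fake_desi - real_desi).mean()
    # Localized term: average error over lesion pixels only, so that
    # errors in diagnostically relevant regions are weighted more heavily.
    lesion_pixels = max(lesion_mask.sum(), 1.0)
    lesion_term = (np.abs(fake_desi - real_desi) * lesion_mask).sum() / lesion_pixels
    return lam_global * global_term + lam_lesion * lesion_term
```

In a CycleGAN setting a term like this would be added to the adversarial and cycle-consistency losses; `lam_global` and `lam_lesion` are placeholder weights, not values from the paper.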
Related papers
- Joint Holistic and Lesion Controllable Mammogram Synthesis via Gated Conditional Diffusion Model [12.360775476995169]
Gated Conditional Diffusion Model (GCDM) is a novel framework designed to jointly synthesize holistic mammogram images and localized lesions. GCDM achieves precise control over small lesion areas while enhancing the realism and diversity of synthesized mammograms.
arXiv Detail & Related papers (2025-07-25T12:10:45Z)
- Direct Dual-Energy CT Material Decomposition using Model-based Denoising Diffusion Model [105.95160543743984]
We propose a deep learning procedure called Dual-Energy Decomposition Model-based Diffusion (DEcomp-MoD) for quantitative material decomposition. We show that DEcomp-MoD outperforms state-of-the-art unsupervised score-based models and supervised deep learning networks.
arXiv Detail & Related papers (2025-07-24T01:00:06Z)
- HepatoGEN: Generating Hepatobiliary Phase MRI with Perceptual and Adversarial Models [33.7054351451505]
We propose a deep-learning-based approach for synthesizing hepatobiliary phase (HBP) images from earlier contrast phases. Quantitative evaluation using pixel-wise and perceptual metrics, combined with blinded radiologist reviews, showed that pGAN achieved the best quantitative performance. In contrast, the U-Net produced consistent liver enhancement with fewer artifacts, while DDPM underperformed due to limited preservation of fine structural details.
arXiv Detail & Related papers (2025-04-25T15:01:09Z)
- Synthesizing Late-Stage Contrast Enhancement in Breast MRI: A Comprehensive Pipeline Leveraging Temporal Contrast Enhancement Dynamics [0.3499870393443268]
This study presents a pipeline for synthesizing late-phase DCE-MRI images from early-phase data. The proposed approach introduces a novel loss function, Time Intensity Loss (TI-loss), leveraging the temporal behavior of contrast agents to guide the training of a generative model. Two metrics are proposed to evaluate image quality: the Contrast Agent Pattern Score ($\mathcal{CP}_s$), which validates enhancement patterns in annotated regions, and the Average Difference in Enhancement ($\mathcal{ED}$), measuring differences between real and generated enhancements.
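One plausible reading of the enhancement-difference metric just described can be sketched as follows; the function name and exact definition are assumptions for illustration, taking enhancement to mean late-phase minus early-phase intensity averaged over the annotated region:

```python
import numpy as np

def average_enhancement_difference(real_late, gen_late, early, roi_mask):
    """Sketch of an ED-style metric: mean enhancement of the real
    late-phase image vs. the generated one, computed inside the
    annotated region and compared as an absolute difference.
    Names and formula are illustrative, not taken from the paper."""
    n = max(roi_mask.sum(), 1.0)
    real_enh = ((real_late - early) * roi_mask).sum() / n
    gen_enh = ((gen_late - early) * roi_mask).sum() / n
    return abs(real_enh - gen_enh)
```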
arXiv Detail & Related papers (2024-09-03T04:31:49Z)
- Similarity-aware Syncretic Latent Diffusion Model for Medical Image Translation with Representation Learning [15.234393268111845]
Non-contrast CT (NCCT) imaging may reduce image contrast and anatomical visibility, potentially increasing diagnostic uncertainty.
We propose a novel Syncretic generative model based on the latent diffusion model for medical image translation (S$^2$LDM).
S$^2$LDM enhances the similarity in distinct modal images via syncretic encoding and diffusing, promoting amalgamated information in the latent space and generating medical images with more details in contrast-enhanced regions.
arXiv Detail & Related papers (2024-06-20T03:54:41Z)
- Towards Learning Contrast Kinetics with Multi-Condition Latent Diffusion Models [2.8981737432963506]
We propose a latent diffusion model capable of acquisition time-conditioned image synthesis of DCE-MRI temporal sequences.
Our results demonstrate our method's ability to generate realistic multi-sequence fat-saturated breast DCE-MRI.
arXiv Detail & Related papers (2024-03-20T18:01:57Z)
- Gadolinium dose reduction for brain MRI using conditional deep learning [66.99830668082234]
Two main challenges for these approaches are the accurate prediction of contrast enhancement and the synthesis of realistic images.
We address both challenges by utilizing the contrast signal encoded in the subtraction images of pre-contrast and post-contrast image pairs.
We demonstrate the effectiveness of our approach on synthetic and real datasets using various scanners, field strengths, and contrast agents.
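The subtraction-image idea above can be illustrated with a minimal sketch (the function name and the clipping choice are assumptions, not details from the paper):

```python
import numpy as np

def contrast_signal(pre, post):
    """Subtracting the pre-contrast from the post-contrast image
    isolates the contrast enhancement signal; negative values
    (noise, slight misregistration) are clipped to zero."""
    return np.clip(post - pre, 0.0, None)
```

A model trained against this target learns the enhancement itself rather than the full post-contrast image, which is one way to make the prediction task better conditioned.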
arXiv Detail & Related papers (2024-03-06T08:35:29Z)
- A Deep Learning Approach for Virtual Contrast Enhancement in Contrast Enhanced Spectral Mammography [1.1129469448121927]
This work proposes to use deep generative models for virtual contrast enhancement on Contrast Enhanced Spectral Mammography.
Our deep networks, an autoencoder and two Generative Adversarial Networks (Pix2Pix and CycleGAN), generate synthetic recombined images solely from low-energy images.
arXiv Detail & Related papers (2023-08-01T11:49:05Z)
- OADAT: Experimental and Synthetic Clinical Optoacoustic Data for Standardized Image Processing [62.993663757843464]
Optoacoustic (OA) imaging is based on excitation of biological tissues with nanosecond-duration laser pulses followed by detection of ultrasound waves generated via light-absorption-mediated thermoelastic expansion.
OA imaging features a powerful combination between rich optical contrast and high resolution in deep tissues.
No standardized datasets, generated with different types of experimental set-ups and associated processing methods, are available to facilitate advances in broader clinical applications of OA.
arXiv Detail & Related papers (2022-06-17T08:11:26Z)
- Negligible effect of brain MRI data preprocessing for tumor segmentation [36.89606202543839]
We conduct experiments on three publicly available datasets and evaluate the effect of different preprocessing steps in deep neural networks.
Our results demonstrate that most popular standardization steps add no value to the network performance.
We suggest that image intensity normalization approaches do not contribute to model accuracy because of the reduction of signal variance with image standardization.
arXiv Detail & Related papers (2022-04-11T17:29:36Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- Multi-Channel Convolutional Analysis Operator Learning for Dual-Energy CT Reconstruction [108.06731611196291]
We develop a multi-channel convolutional analysis operator learning (MCAOL) method to exploit common spatial features within attenuation images at different energies.
We propose an optimization method which jointly reconstructs the attenuation images at low and high energies with a mixed norm regularization on the sparse features.
arXiv Detail & Related papers (2022-03-10T14:22:54Z)
- Malignancy Prediction and Lesion Identification from Clinical Dermatological Images [65.1629311281062]
We consider machine-learning-based malignancy prediction and lesion identification from clinical dermatological images.
The method first identifies all lesions present in the image regardless of sub-type or likelihood of malignancy, then estimates the likelihood of malignancy of each, and, through aggregation, also generates an image-level likelihood of malignancy.
arXiv Detail & Related papers (2021-04-02T20:52:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.