A Deep Learning Approach for Virtual Contrast Enhancement in Contrast
Enhanced Spectral Mammography
- URL: http://arxiv.org/abs/2308.00471v2
- Date: Thu, 3 Aug 2023 14:48:15 GMT
- Title: A Deep Learning Approach for Virtual Contrast Enhancement in Contrast
Enhanced Spectral Mammography
- Authors: Aurora Rofena, Valerio Guarrasi, Marina Sarli, Claudia Lucia Piccolo,
Matteo Sammarra, Bruno Beomonte Zobel, Paolo Soda
- Abstract summary: This work proposes deep generative models for virtual contrast enhancement in Contrast Enhanced Spectral Mammography.
Our deep networks, consisting of an autoencoder and two Generative Adversarial Networks, Pix2Pix and CycleGAN, generate synthetic recombined images solely from low-energy images.
- Score: 1.1129469448121927
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Contrast Enhanced Spectral Mammography (CESM) is a dual-energy
mammographic imaging technique that first requires the intravenous
administration of an iodinated contrast medium; it then collects both a
low-energy image, comparable to standard mammography, and a high-energy
image. The two scans are combined to produce a recombined image showing
contrast enhancement. Despite the diagnostic advantages of CESM for breast
cancer diagnosis, the contrast medium can cause side effects, and CESM
exposes patients to a higher radiation dose than standard mammography. To
address these limitations, this work proposes deep generative models for
virtual contrast enhancement in CESM, aiming to make CESM contrast-free and
to reduce the radiation dose. Our deep networks, consisting of an
autoencoder and two Generative Adversarial Networks, Pix2Pix and CycleGAN,
generate synthetic recombined images solely from low-energy images. We
perform an extensive quantitative and qualitative analysis of the models'
performance, also exploiting radiologists' assessments, on a novel CESM
dataset of 1138 images that, as a further contribution of this work, we make
publicly available. The results show that CycleGAN is the most promising
deep network for generating synthetic recombined images, highlighting the
potential of artificial intelligence techniques for virtual contrast
enhancement in this field.
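As a concrete illustration of the approach described in the abstract, the following is a minimal sketch of a CycleGAN-style training step for unpaired translation from low-energy to recombined CESM images. PyTorch is assumed; the tiny generator and discriminator, the hyperparameters, and the data handling are illustrative placeholders, not the authors' published architecture.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class TinyGenerator(nn.Module):
    """Toy stand-in for a ResNet-style CycleGAN generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())

    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Toy stand-in for a PatchGAN discriminator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32), nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, x):
        return self.net(x)

G = TinyGenerator()   # low-energy -> synthetic recombined
F = TinyGenerator()   # recombined -> synthetic low-energy
D_rec, D_low = TinyDiscriminator(), TinyDiscriminator()
opt_g = torch.optim.Adam(list(G.parameters()) + list(F.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(list(D_rec.parameters()) + list(D_low.parameters()), lr=2e-4)
mse, l1 = nn.MSELoss(), nn.L1Loss()
lambda_cyc = 10.0  # weight of the cycle-consistency term

def train_step(low, rec):
    """One unpaired step on batches of low-energy and recombined images."""
    fake_rec, fake_low = G(low), F(rec)

    # Generator update: fool both discriminators and stay cycle-consistent.
    p_rec, p_low = D_rec(fake_rec), D_low(fake_low)
    adv = mse(p_rec, torch.ones_like(p_rec)) + mse(p_low, torch.ones_like(p_low))
    cyc = l1(F(fake_rec), low) + l1(G(fake_low), rec)
    loss_g = adv + lambda_cyc * cyc
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # Discriminator update: real images vs. detached fakes.
    loss_d = 0.0
    for D, real, fake in ((D_rec, rec, fake_rec), (D_low, low, fake_low)):
        pr, pf = D(real), D(fake.detach())
        loss_d = loss_d + mse(pr, torch.ones_like(pr)) + mse(pf, torch.zeros_like(pf))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    return loss_g.item(), loss_d.item()
```

At inference time only G is needed: a virtual recombined image is produced from the low-energy image alone, which is what enables the contrast-free, lower-dose scenario the paper targets.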
Related papers
- A Time-Intensity Aware Pipeline for Generating Late-Stage Breast DCE-MRI using Generative Adversarial Models [0.3499870393443268]
A novel loss function that leverages the biological behavior of the contrast agent (CA) in tissue is proposed to optimize a pixel-attention-based generative model.
Unlike traditional normalization and standardization methods, we developed a new normalization strategy that maintains the contrast enhancement pattern across the image sequences at multiple timestamps.
arXiv Detail & Related papers (2024-09-03T04:31:49Z)
- CAVM: Conditional Autoregressive Vision Model for Contrast-Enhanced Brain Tumor MRI Synthesis [3.3966430276631208]
Conditional Autoregressive Vision Model improves synthesis of contrast-enhanced brain tumor MRI.
Deep learning methods have been applied to synthesize virtual contrast-enhanced MRI scans from non-contrast images.
Inspired by the resemblance between the gradual dose increase and the Chain-of-Thought approach in natural language processing, CAVM uses an autoregressive strategy.
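The autoregressive strategy can be pictured as a dose-escalation loop; the following is a hedged illustration of the general idea, not the CAVM architecture, and `model` (with the assumed signature below) is a hypothetical placeholder.

```python
import torch

def autoregressive_synthesis(model, non_contrast, num_steps=4):
    """Generate images at increasing virtual contrast-dose levels, each step
    conditioned on everything generated so far (cf. Chain-of-Thought)."""
    seq = [non_contrast]                   # (B, 1, H, W) non-contrast scan
    for t in range(num_steps):
        cond = torch.cat(seq, dim=1)       # condition on all previous levels
        seq.append(model(cond, step=t))    # predict the next dose level
    return seq[-1]                         # full-dose contrast-enhanced image
```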
arXiv Detail & Related papers (2024-06-23T10:50:22Z)
- Towards Learning Contrast Kinetics with Multi-Condition Latent Diffusion Models [2.8981737432963506]
We propose a latent diffusion model capable of acquisition time-conditioned image synthesis of DCE-MRI temporal sequences.
Our results demonstrate our method's ability to generate realistic multi-sequence fat-saturated breast DCE-MRI.
arXiv Detail & Related papers (2024-03-20T18:01:57Z)
- Gadolinium dose reduction for brain MRI using conditional deep learning [66.99830668082234]
Two main challenges for these approaches are the accurate prediction of contrast enhancement and the synthesis of realistic images.
We address both challenges by utilizing the contrast signal encoded in the subtraction images of pre-contrast and post-contrast image pairs.
We demonstrate the effectiveness of our approach on synthetic and real datasets using various scanners, field strengths, and contrast agents.
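A minimal sketch of the subtraction-image idea, assuming PyTorch and spatially registered scans; the tiny `model` and the low-dose input are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

# Tiny placeholder network: input = pre-contrast + low-dose post-contrast.
model = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(32, 1, 3, padding=1))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def step(pre, low_dose_post, full_dose_post):
    """All tensors: (B, 1, H, W), spatially registered."""
    target = full_dose_post - pre          # subtraction image: contrast signal only
    pred = model(torch.cat([pre, low_dose_post], dim=1))
    loss = nn.functional.l1_loss(pred, target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()                     # virtual full-dose image: pre + pred
```

Supervising on the subtraction image isolates the contrast signal, and a virtual contrast-enhanced image can then be recovered by adding the prediction back to the pre-contrast scan.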
arXiv Detail & Related papers (2024-03-06T08:35:29Z)
- Synthesis of Contrast-Enhanced Breast MRI Using Multi-b-Value DWI-based Hierarchical Fusion Network with Attention Mechanism [15.453470023481932]
Contrast-enhanced MRI (CE-MRI) provides superior differentiation between tumors and invaded healthy tissue.
The use of gadolinium-based contrast agents (GBCA) to obtain CE-MRI may be associated with nephrogenic systemic fibrosis and may lead to bioaccumulation in the brain.
To reduce the use of contrast agents, diffusion-weighted imaging (DWI) is emerging as a key imaging technique.
arXiv Detail & Related papers (2023-07-03T09:46:12Z)
- OADAT: Experimental and Synthetic Clinical Optoacoustic Data for Standardized Image Processing [62.993663757843464]
Optoacoustic (OA) imaging is based on excitation of biological tissues with nanosecond-duration laser pulses followed by detection of ultrasound waves generated via light-absorption-mediated thermoelastic expansion.
OA imaging features a powerful combination between rich optical contrast and high resolution in deep tissues.
However, no standardized datasets generated with different types of experimental set-ups and associated processing methods are available to facilitate advances in broader clinical applications of OA.
arXiv Detail & Related papers (2022-06-17T08:11:26Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images to medical image enhancement and utilize the enhanced results to address the low-contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- Multi-Channel Convolutional Analysis Operator Learning for Dual-Energy CT Reconstruction [108.06731611196291]
We develop a multi-channel convolutional analysis operator learning (MCAOL) method to exploit common spatial features within attenuation images at different energies.
We propose an optimization method which jointly reconstructs the attenuation images at low and high energies with a mixed norm regularization on the sparse features.
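Schematically, such a joint reconstruction can be written in the following hedged form, where the notation is assumed here for illustration rather than taken from the paper: the two data-fidelity terms tie each attenuation image to its measurements, and the mixed norm couples the sparse analysis features across the two energies.

```latex
% Hedged schematic of a joint dual-energy objective (illustrative notation):
% x_L, x_H: attenuation images at low/high energy; y_L, y_H: measurements;
% A: forward (projection) operator; w_k: learned convolutional analysis filters.
\min_{x_L,\, x_H}\;
  \sum_{e \in \{L, H\}} \tfrac{1}{2} \lVert A x_e - y_e \rVert_2^2
  \;+\; \lambda \sum_{k} \bigl\lVert \bigl( w_k * x_L,\; w_k * x_H \bigr) \bigr\rVert_{2,1}
```

The mixed l2,1 norm sums, over spatial locations, the l2 norm of the paired filter responses, encouraging the two energy channels to share sparse edge structure.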
arXiv Detail & Related papers (2022-03-10T14:22:54Z)
- Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
arXiv Detail & Related papers (2021-10-27T18:54:25Z)
- Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
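A hedged sketch of one way to realize adaptive gradient balancing: rescale the adversarial term so that the gradient it contributes at the generator output cannot overwhelm the reconstruction gradient. PyTorch assumed; this illustrates the general technique, not the authors' exact algorithm.

```python
import torch
import torch.nn as nn

def balanced_generator_loss(gen, x, target, critic):
    """Generator loss with the adversarial term rescaled so its gradient
    (w.r.t. the generated image) cannot dominate the reconstruction term."""
    fake = gen(x)
    rec_loss = nn.functional.l1_loss(fake, target)
    adv_loss = -critic(fake).mean()  # WGAN-style generator objective

    # Gradient magnitude of each term with respect to the generated image.
    g_rec = torch.autograd.grad(rec_loss, fake, retain_graph=True)[0]
    g_adv = torch.autograd.grad(adv_loss, fake, retain_graph=True)[0]
    scale = (g_rec.norm() / (g_adv.norm() + 1e-8)).detach()

    return rec_loss + scale * adv_loss  # caller backpropagates this
```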
arXiv Detail & Related papers (2021-04-05T13:05:22Z)
- Prediction of low-keV monochromatic images from polyenergetic CT scans for improved automatic detection of pulmonary embolism [21.47219330040151]
We train convolutional neural networks that emulate the generation of monoE images from conventional single-energy CT acquisitions.
We expand on these methods through the use of a multi-task optimization approach, under which the networks achieve improved classification as well as generation results.
arXiv Detail & Related papers (2021-02-02T11:42:31Z)
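A hedged sketch of the multi-task idea above: a single backbone with an image-synthesis head and a classification head, optimized jointly. PyTorch assumed; the architecture, loss weighting, and input shapes are illustrative placeholders.

```python
import torch
import torch.nn as nn

class MonoENet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU())
        self.gen_head = nn.Conv2d(32, 1, 3, padding=1)     # monoE image synthesis
        self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(32, 1))    # PE present / absent

    def forward(self, x):
        h = self.backbone(x)
        return self.gen_head(h), self.cls_head(h)

net = MonoENet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)

def step(ct, monoe_target, pe_label, alpha=0.1):
    """ct, monoe_target: (B, 1, H, W); pe_label: (B, 1) in {0, 1} as float."""
    pred_img, logit = net(ct)
    loss = nn.functional.l1_loss(pred_img, monoe_target) \
         + alpha * nn.functional.binary_cross_entropy_with_logits(logit, pe_label)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

Sharing the backbone lets the classification signal regularize the synthesis, which is one plausible reading of why the joint optimization improves both tasks.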