Synthesizing Late-Stage Contrast Enhancement in Breast MRI: A Comprehensive Pipeline Leveraging Temporal Contrast Enhancement Dynamics
- URL: http://arxiv.org/abs/2409.01596v2
- Date: Fri, 24 Jan 2025 21:21:08 GMT
- Title: Synthesizing Late-Stage Contrast Enhancement in Breast MRI: A Comprehensive Pipeline Leveraging Temporal Contrast Enhancement Dynamics
- Authors: Ruben D. Fonnegra, Maria Liliana Hernández, Juan C. Caicedo, Gloria M. Díaz
- Abstract summary: This study presents a pipeline for synthesizing late-phase DCE-MRI images from early-phase data. The proposed approach introduces a novel loss function, Time Intensity Loss (TI-loss), leveraging the temporal behavior of contrast agents to guide the training of a generative model. Two metrics are proposed to evaluate image quality: the Contrast Agent Pattern Score ($\mathcal{CP}_{s}$), which validates enhancement patterns in annotated regions, and the Average Difference in Enhancement ($\mathcal{ED}$), measuring differences between real and generated enhancements.
- Score: 0.3499870393443268
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is essential for breast cancer diagnosis due to its ability to characterize tissue through contrast agent kinetics. However, traditional DCE-MRI protocols require multiple imaging phases, including early and late post-contrast acquisitions, leading to prolonged scan times, patient discomfort, motion artifacts, high costs, and limited accessibility. To overcome these limitations, this study presents a pipeline for synthesizing late-phase DCE-MRI images from early-phase data, replicating the time-intensity (TI) curve behavior in enhanced regions while maintaining visual fidelity across the entire image. The proposed approach introduces a novel loss function, Time Intensity Loss (TI-loss), leveraging the temporal behavior of contrast agents to guide the training of a generative model. Additionally, a new normalization strategy, TI-norm, preserves the contrast enhancement pattern across multiple image sequences at various timestamps, addressing limitations of conventional normalization methods. Two metrics are proposed to evaluate image quality: the Contrast Agent Pattern Score ($\mathcal{CP}_{s}$), which validates enhancement patterns in annotated regions, and the Average Difference in Enhancement ($\mathcal{ED}$), measuring differences between real and generated enhancements. Using a public DCE-MRI dataset with 1.5T and 3T scanners, the proposed method demonstrates accurate synthesis of late-phase images that outperform existing models in replicating the TI curve's behavior in regions of interest while preserving overall image quality. This advancement shows the potential to optimize DCE-MRI protocols by reducing scanning time without compromising diagnostic accuracy, bringing generative models closer to practical implementation in clinical scenarios and enhancing efficiency in breast cancer imaging.
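To make the quantities named in the abstract concrete, the following Python sketch gives one plausible reading of the time-intensity behavior that TI-loss targets and of the Average Difference in Enhancement ($\mathcal{ED}$). The exact formulations of TI-loss, TI-norm, and $\mathcal{CP}_{s}$ are defined in the paper itself; the function names, signatures, and the L1 penalty below are illustrative assumptions only.

```python
# Illustrative sketch only: assumed formulations based on the abstract, not the
# paper's exact definitions of TI-loss or the ED metric.
import torch
import torch.nn.functional as F

def time_intensity_curve(phases: torch.Tensor, roi_mask: torch.Tensor) -> torch.Tensor:
    """Mean intensity inside an annotated region at each post-contrast timestamp.

    phases:   (T, H, W) stack of DCE-MRI phases ordered by acquisition time
    roi_mask: (H, W) binary mask of the enhancing region
    returns:  (T,) time-intensity (TI) curve
    """
    roi = roi_mask.bool()
    return torch.stack([phase[roi].mean() for phase in phases])

def ti_loss(real_phases, synth_phases, roi_mask):
    """Hypothetical TI-loss: penalize deviation between the real and synthesized
    TI curves so the generator reproduces contrast-agent kinetics in the ROI."""
    return F.l1_loss(time_intensity_curve(synth_phases, roi_mask),
                     time_intensity_curve(real_phases, roi_mask))

def enhancement_difference(real_late, synth_late, pre_contrast, roi_mask):
    """Hypothetical ED metric: average absolute difference between real and
    generated enhancement (late phase minus pre-contrast) inside the ROI."""
    roi = roi_mask.bool()
    real_enh = (real_late - pre_contrast)[roi]
    synth_enh = (synth_late - pre_contrast)[roi]
    return (real_enh - synth_enh).abs().mean()
```

In practice such a term would typically be added to a standard reconstruction or adversarial loss with a weighting factor; the paper's actual loss combination and the TI-norm normalization strategy are described in the full text.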
Related papers
- HepatoGEN: Generating Hepatobiliary Phase MRI with Perceptual and Adversarial Models [33.7054351451505]
We propose a deep learning based approach for synthesizing hepatobiliary phase (HBP) images from earlier contrast phases.
Quantitative evaluation using pixel-wise and perceptual metrics, combined with blinded radiologist reviews, showed that pGAN achieved the best quantitative performance.
In contrast, the U-Net produced consistent liver enhancement with fewer artifacts, while DDPM underperformed due to limited preservation of fine structural details.
arXiv Detail & Related papers (2025-04-25T15:01:09Z) - Deep Multi-contrast Cardiac MRI Reconstruction via vSHARP with Auxiliary Refinement Network [7.043932618116216]
We propose a deep learning-based reconstruction method for 2D dynamic multi-contrast, multi-scheme, and multi-acceleration MRI.
Our approach integrates the state-of-the-art vSHARP model, which utilizes half-quadratic variable splitting and ADMM optimization.
arXiv Detail & Related papers (2024-11-02T15:59:35Z) - A Unified Model for Compressed Sensing MRI Across Undersampling Patterns [69.19631302047569]
Deep neural networks have shown great potential for reconstructing high-fidelity images from undersampled measurements.
Our model is based on neural operators, a discretization-agnostic architecture.
Our inference speed is also 1,400x faster than diffusion methods.
arXiv Detail & Related papers (2024-10-05T20:03:57Z) - Joint Edge Optimization Deep Unfolding Network for Accelerated MRI Reconstruction [3.9681863841849623]
We build a joint edge optimization model that not only incorporates individual regularizers specific to both the MR image and the edges, but also enforces a co-regularizer to effectively establish a stronger correlation between them.
Specifically, the edge information is defined through a non-edge probability map to guide the image reconstruction during the optimization process.
Meanwhile, the regularizers pertaining to images and edges are incorporated into a deep unfolding network to automatically learn their respective inherent a-priori information.
arXiv Detail & Related papers (2024-05-09T05:51:33Z) - Paired Diffusion: Generation of related, synthetic PET-CT-Segmentation scans using Linked Denoising Diffusion Probabilistic Models [0.0]
This research introduces a novel architecture that is able to generate multiple, related PET-CT-tumour mask pairs using paired networks and conditional encoders.
Our approach includes innovative time-step-controlled mechanisms and a 'noise-seeding' strategy to improve DDPM sampling consistency.
arXiv Detail & Related papers (2024-03-26T14:21:49Z) - Towards Learning Contrast Kinetics with Multi-Condition Latent Diffusion Models [2.8981737432963506]
We propose a latent diffusion model capable of acquisition time-conditioned image synthesis of DCE-MRI temporal sequences.
Our results demonstrate our method's ability to generate realistic multi-sequence fat-saturated breast DCE-MRI.
arXiv Detail & Related papers (2024-03-20T18:01:57Z) - Gadolinium dose reduction for brain MRI using conditional deep learning [66.99830668082234]
Two main challenges for these approaches are the accurate prediction of contrast enhancement and the synthesis of realistic images.
We address both challenges by utilizing the contrast signal encoded in the subtraction images of pre-contrast and post-contrast image pairs.
We demonstrate the effectiveness of our approach on synthetic and real datasets using various scanners, field strengths, and contrast agents.
arXiv Detail & Related papers (2024-03-06T08:35:29Z) - Latent Diffusion Model for Medical Image Standardization and Enhancement [11.295078152769559]
DiffusionCT is a score-based DDPM model that transforms disparate non-standard distributions into a standardized form.
The architecture comprises a U-Net-based encoder-decoder, augmented by a DDPM model integrated at the bottleneck position.
Empirical tests on patient CT images indicate notable improvements in image standardization using DiffusionCT.
arXiv Detail & Related papers (2023-10-08T17:11:14Z) - On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts because these artifacts change the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to make model performance robust against varying image artifacts (a minimal sketch of this substitution appears after this list).
arXiv Detail & Related papers (2023-06-23T03:09:03Z) - Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images to medical image enhancement and use the enhanced results to address the low-contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z) - Texture Characterization of Histopathologic Images Using Ecological Diversity Measures and Discrete Wavelet Transform [82.53597363161228]
This paper proposes a method for characterizing texture across histopathologic images with a considerable success rate.
It is possible to quantify the intrinsic properties of such images with promising accuracy on two HI datasets.
arXiv Detail & Related papers (2022-02-27T02:19:09Z) - CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran.
Our empirical results show that CyTran outperforms all competing methods.
arXiv Detail & Related papers (2021-10-12T23:25:03Z) - Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
arXiv Detail & Related papers (2021-04-05T13:05:22Z) - MR-Contrast-Aware Image-to-Image Translations with Generative Adversarial Networks [5.3580471186206005]
We train an image-to-image generative adversarial network conditioned on the MR acquisition parameters repetition time and echo time.
Our approach yields a peak signal-to-noise ratio of 24.48 and a structural similarity of 0.66, significantly surpassing the pix2pix benchmark model.
arXiv Detail & Related papers (2021-04-03T17:05:13Z) - Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
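For the normalization-robustness entry referenced above, the following minimal sketch (an assumption, not the cited paper's code) shows the kind of substitution it describes: replacing batch-statistic normalization with Group Normalization, which normalizes each sample independently and is therefore less sensitive to shifts in the input distribution caused by imaging artifacts.

```python
# Minimal sketch (assumed): recursively swap nn.BatchNorm2d for nn.GroupNorm so
# normalization statistics no longer depend on the (possibly shifted) batch.
import torch.nn as nn

def batchnorm_to_groupnorm(module: nn.Module, num_groups: int = 8) -> nn.Module:
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            # Fall back to a single group when the channel count is not divisible.
            groups = num_groups if child.num_features % num_groups == 0 else 1
            setattr(module, name, nn.GroupNorm(groups, child.num_features))
        else:
            batchnorm_to_groupnorm(child, num_groups)
    return module
```

Applied to an existing model, this converts every BatchNorm2d layer in place; choosing a group count of 1 recovers a LayerNorm-style normalization over channels, the other alternative mentioned in that entry.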