A Deep Learning Approach to Generate Contrast-Enhanced Computerised
Tomography Angiography without the Use of Intravenous Contrast Agents
- URL: http://arxiv.org/abs/2003.01223v1
- Date: Mon, 2 Mar 2020 22:20:08 GMT
- Title: A Deep Learning Approach to Generate Contrast-Enhanced Computerised
Tomography Angiography without the Use of Intravenous Contrast Agents
- Authors: Anirudh Chandrashekar, Ashok Handa, Natesh Shivakumar, Pierfrancesco
Lapolla, Vicente Grau, Regent Lee
- Abstract summary: We trained a 2-D Cycle Generative Adversarial Network for this non-contrast to contrast (NC2C) transformation task.
This pipeline is able to differentiate between visually incoherent soft tissue regions in non-contrast CT images.
- Score: 2.2840399926157806
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrast-enhanced computed tomography angiograms (CTAs) are widely used in
cardiovascular imaging to obtain a non-invasive view of arterial structures.
However, contrast agents are associated with complications at the injection
site as well as renal toxicity leading to contrast-induced nephropathy (CIN)
and renal failure. We hypothesised that the raw data acquired from a
non-contrast CT contains sufficient information to differentiate blood and
other soft tissue components. We utilised deep learning methods to define the
subtleties between soft tissue components in order to simulate contrast
enhanced CTAs without contrast agents. Twenty-six patients with paired
non-contrast and CTA images were randomly selected from an approved clinical
study. Non-contrast axial slices within the AAA from 10 patients (n = 100) were
sampled for the underlying Hounsfield unit (HU) distribution at the lumen,
intra-luminal thrombus and interface locations. Sampling of HUs in these
regions revealed significant differences between all regions (p<0.001 for all
comparisons), confirming the intrinsic differences in the radiomic signatures
between these regions. To generate a large training dataset, paired axial
slices from the training set (n=13) were augmented to produce a total of 23,551
2-D images. We trained a 2-D Cycle Generative Adversarial Network (CycleGAN)
for this non-contrast to contrast (NC2C) transformation task. The accuracy of
the CycleGAN output was assessed by comparison to the contrast image. This
pipeline is able to differentiate between visually incoherent soft tissue
regions in non-contrast CT images. The CTAs generated from the non-contrast
images bear strong resemblance to the ground truth. Here we describe a novel
application of Generative Adversarial Networks for CT image processing. This is
poised to disrupt clinical pathways requiring contrast-enhanced CT imaging.
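The NC2C translation described above relies on CycleGAN's cycle-consistency objective: a non-contrast image mapped to the contrast domain and back should reconstruct the original. A minimal sketch of that loss term is shown below; the toy generators (a fixed Hounsfield-unit shift) and the HU values are illustrative placeholders, not the authors' trained 2-D networks.

```python
import numpy as np

def cycle_consistency_loss(real_nc, real_c, g_nc2c, g_c2nc, lam=10.0):
    # Forward cycle: non-contrast -> synthetic contrast -> reconstructed non-contrast
    rec_nc = g_c2nc(g_nc2c(real_nc))
    # Backward cycle: contrast -> synthetic non-contrast -> reconstructed contrast
    rec_c = g_nc2c(g_c2nc(real_c))
    # L1 reconstruction penalty, weighted by lambda as in the original CycleGAN
    return lam * (np.mean(np.abs(rec_nc - real_nc)) +
                  np.mean(np.abs(rec_c - real_c)))

# Toy stand-ins for the trained generators: contrast modelled as a +100 HU shift.
g_nc2c = lambda x: x + 100.0
g_c2nc = lambda x: x - 100.0

nc = np.full((2, 64, 64), 40.0)    # plausible soft-tissue HU batch
c = np.full((2, 64, 64), 140.0)    # plausible enhanced-lumen HU batch
print(cycle_consistency_loss(nc, c, g_nc2c, g_c2nc))  # 0.0: toy generators are exact inverses
```

In full training this term is combined with adversarial losses from two discriminators, one per domain, so that the generators produce realistic images while the cycle term preserves anatomy.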
Related papers
- Gadolinium dose reduction for brain MRI using conditional deep learning [66.99830668082234]
Two main challenges for these approaches are the accurate prediction of contrast enhancement and the synthesis of realistic images.
We address both challenges by utilizing the contrast signal encoded in the subtraction images of pre-contrast and post-contrast image pairs.
We demonstrate the effectiveness of our approach on synthetic and real datasets using various scanners, field strengths, and contrast agents.
arXiv Detail & Related papers (2024-03-06T08:35:29Z)
- Anatomically constrained CT image translation for heterogeneous blood vessel segmentation [3.88838725116957]
Anatomical structures in contrast-enhanced CT (ceCT) images can be challenging to segment due to variability in contrast medium diffusion.
To limit the radiation dose, generative models could be used to synthesize one modality, instead of acquiring it.
CycleGAN has attracted particular attention because it alleviates the need for paired data.
We present an extension of CycleGAN to generate high fidelity images, with good structural consistency.
arXiv Detail & Related papers (2022-10-04T16:14:49Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- Image translation of Ultrasound to Pseudo Anatomical Display Using Artificial Intelligence [0.0]
CycleGAN was used to learn each domain's properties separately and enforce cross-domain cycle consistency.
The generated pseudo anatomical images provide improved visual discrimination of the lesions with clearer border definition and pronounced contrast.
arXiv Detail & Related papers (2022-02-16T13:31:49Z)
- CNN Filter Learning from Drawn Markers for the Detection of Suggestive Signs of COVID-19 in CT Images [58.720142291102135]
We propose a method that does not require either large annotated datasets or backpropagation to estimate the filters of a convolutional neural network (CNN).
For a few CT images, the user draws markers at representative normal and abnormal regions.
The method generates a feature extractor composed of a sequence of convolutional layers, whose kernels are specialized in enhancing regions similar to the marked ones.
arXiv Detail & Related papers (2021-11-16T15:03:42Z)
- CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran.
Our empirical results show that CyTran outperforms all competing methods.
arXiv Detail & Related papers (2021-10-12T23:25:03Z)
- Symmetry-Enhanced Attention Network for Acute Ischemic Infarct Segmentation with Non-Contrast CT Images [50.55978219682419]
We propose a symmetry enhanced attention network (SEAN) for acute ischemic infarct segmentation.
Our proposed network automatically transforms an input CT image into the standard space where the brain tissue is bilaterally symmetric.
The proposed SEAN outperforms some symmetry-based state-of-the-art methods in terms of both dice coefficient and infarct localization.
arXiv Detail & Related papers (2021-10-11T07:13:26Z)
- Bone Segmentation in Contrast Enhanced Whole-Body Computed Tomography [2.752817022620644]
This paper outlines a U-net architecture with novel preprocessing techniques to segment bone-bone marrow regions from low dose contrast enhanced whole-body CT scans.
We have demonstrated that appropriate preprocessing is important for differentiating between bone and contrast dye, and that excellent results can be achieved with limited data.
arXiv Detail & Related papers (2020-08-12T10:48:38Z)
- Synergistic Learning of Lung Lobe Segmentation and Hierarchical Multi-Instance Classification for Automated Severity Assessment of COVID-19 in CT Images [61.862364277007934]
We propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images.
A multi-task deep network (called M$2$UNet) is then developed to assess the severity of COVID-19 patients.
Our M$2$UNet consists of a patch-level encoder, a segmentation sub-network for lung lobe segmentation, and a classification sub-network for severity assessment.
arXiv Detail & Related papers (2020-05-08T03:16:15Z)
- A Deep Learning Approach to Automate High-Resolution Blood Vessel Reconstruction on Computerized Tomography Images With or Without the Use of Contrast Agent [2.1897279580410896]
A blood clot or thrombus adherent to the aortic wall within the expanding aneurysmal sac is present in 70-80% of cases.
We implemented a modified U-Net architecture with attention gating to establish a high-throughput pipeline for reconstructing pathological blood vessels.
This extracted volume can be used to standardize current methods of aneurysmal disease management and set the foundation for subsequent complex geometric and morphological analysis.
arXiv Detail & Related papers (2020-02-09T22:32:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.