Deep Learning based detection of Acute Aortic Syndrome in contrast CT images
- URL: http://arxiv.org/abs/2004.01648v1
- Date: Fri, 3 Apr 2020 16:12:04 GMT
- Title: Deep Learning based detection of Acute Aortic Syndrome in contrast CT images
- Authors: Manikanta Srikar Yellapragada, Yiting Xie, Benedikt Graf, David Richmond, Arun Krishnan, Arkadiusz Sitek
- Abstract summary: Acute aortic syndrome (AAS) is a group of life-threatening conditions of the aorta.
We have developed an end-to-end automatic approach to detect AAS in computed tomography (CT) images.
- Score: 2.2928817466049405
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Acute aortic syndrome (AAS) is a group of life-threatening conditions of the
aorta. We have developed an end-to-end automatic approach to detect AAS in
computed tomography (CT) images. Our approach consists of two steps. First,
we extract N cross sections along the segmented aorta centerline for each CT
scan. These cross sections are stacked together to form a new volume, which is
then classified using two different classifiers, a 3D convolutional neural
network (3D CNN) and a multiple instance learning (MIL) model. We trained, validated,
and compared the two models on 2291 contrast CT volumes. We tested on a held-out
cohort of 230 normal and 50 positive CT volumes. Our models detected AAS with
an area under the receiver operating characteristic curve (AUC) of 0.965 and 0.985
using the 3D CNN and MIL, respectively.
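To make the two-step idea concrete, below is a minimal, hedged sketch of such a pipeline: cross sections are sampled orthogonally to a segmented centerline, stacked into a "straightened" volume, and scored with a simple max-pooling MIL head. The centerline format (voxel coordinates in the same axis order as the volume), plane size, spacing, and the tiny per-section encoder are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: straighten the aorta along its centerline, then score the stack
# of cross sections with a max-pooling multiple-instance-learning (MIL) head.
import numpy as np
import torch
import torch.nn as nn
from scipy.ndimage import map_coordinates

def extract_cross_sections(volume, centerline, size=64, spacing=1.0):
    """Sample size x size planes orthogonal to the centerline tangent."""
    sections = []
    for i in range(len(centerline)):
        p = centerline[i]
        nxt = centerline[min(i + 1, len(centerline) - 1)]
        prv = centerline[max(i - 1, 0)]
        t = nxt - prv
        t = t / (np.linalg.norm(t) + 1e-8)                 # local tangent
        ref = np.array([0.0, 0.0, 1.0])
        if abs(np.dot(t, ref)) > 0.9:                      # avoid a parallel reference axis
            ref = np.array([0.0, 1.0, 0.0])
        u = np.cross(t, ref); u /= np.linalg.norm(u)       # in-plane axis 1
        v = np.cross(t, u)                                 # in-plane axis 2
        r = (np.arange(size) - size / 2) * spacing
        gu, gv = np.meshgrid(r, r, indexing="ij")
        pts = p + gu[..., None] * u + gv[..., None] * v    # (size, size, 3) voxel coords
        sec = map_coordinates(volume, pts.reshape(-1, 3).T, order=1)
        sections.append(sec.reshape(size, size))
    return np.stack(sections)                              # (N, size, size)

class MILHead(nn.Module):
    """Per-section encoder plus max pooling over sections (one bag per scan)."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))

    def forward(self, sections):           # sections: (N, 1, H, W)
        scores = self.enc(sections)        # one logit per cross section
        return scores.max()                # bag logit = most suspicious section
```

The max over section scores encodes the usual MIL assumption that a scan is positive if at least one cross section shows pathology; the 3D CNN variant would instead consume the whole stacked volume at once.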
Related papers
- Improved 3D Whole Heart Geometry from Sparse CMR Slices [3.701571763780745]
Cardiac magnetic resonance (CMR) imaging and computed tomography (CT) are two common non-invasive imaging methods for assessing patients with cardiovascular disease.
CMR typically acquires multiple sparse 2D slices, with unavoidable respiratory motion artefacts between slices, whereas CT acquires isotropic dense data but uses ionising radiation.
We explore the combination of Slice Shifting Algorithm (SSA), Spatial Transformer Network (STN), and Label Transformer Network (LTN) to: 1) correct respiratory motion between segmented slices, and 2) transform sparse segmentation data into dense segmentation.
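As a rough illustration of the spatial-transformer component named above (my assumptions, not the paper's implementation): a small regressor predicts a 2D affine transform per slice, which is applied with differentiable grid sampling to realign the slice.

```python
# Minimal per-slice spatial transformer sketch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SliceSTN(nn.Module):
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(1, 8, 7), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(8 * 4 * 4, 6))
        # initialise the regressor to the identity transform
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):                        # x: (B, 1, H, W) CMR slices
        theta = self.loc(x).view(-1, 2, 3)       # per-slice affine parameters
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)
```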
arXiv Detail & Related papers (2024-08-14T13:03:48Z)
- Deep learning network to correct axial and coronal eye motion in 3D OCT retinal imaging [65.47834983591957]
We propose deep learning-based neural networks to correct axial and coronal motion artifacts in OCT based on a single scan.
Experimental results show that the proposed method can effectively correct motion artifacts and achieves smaller errors than other methods.
arXiv Detail & Related papers (2023-05-27T03:55:19Z)
- Segmentation of Aortic Vessel Tree in CT Scans with Deep Fully Convolutional Networks [4.062948258086793]
Automatic and accurate segmentation of aortic vessel tree (AVT) in computed tomography (CT) scans is crucial for early detection, diagnosis and prognosis of aortic diseases.
We use two-stage fully convolutional networks (FCNs) to automatically segment AVT in scans from multiple centers.
arXiv Detail & Related papers (2023-05-16T22:24:01Z)
- Dual Multi-scale Mean Teacher Network for Semi-supervised Infection Segmentation in Chest CT Volume for COVID-19 [76.51091445670596]
Automatically detecting lung infections from computed tomography (CT) data plays an important role in combating COVID-19.
Most current COVID-19 infection segmentation methods rely mainly on 2D CT images, which lack a 3D sequential constraint.
Existing 3D CT segmentation methods focus on single-scale representations and do not provide multiple levels of receptive field size over the 3D volume.
arXiv Detail & Related papers (2022-11-10T13:11:21Z)
- Evaluation of Synthetically Generated CT for use in Transcranial Focused Ultrasound Procedures [5.921808547303054]
Transcranial focused ultrasound (tFUS) is a therapeutic ultrasound method that focuses sound through the skull to a small region noninvasively and often under MRI guidance.
CT imaging is used to estimate the acoustic properties that vary between individual skulls to enable effective focusing during tFUS procedures.
Here, we synthesized CT images (sCT) from routinely acquired T1-weighted MRI using a 3D patch-based conditional generative adversarial network (cGAN).
We compared the performance of sCT to real CT (rCT) images for tFUS planning using Kranion and for simulations using the acoustic toolbox.
arXiv Detail & Related papers (2022-10-26T15:15:24Z)
- Are Macula or Optic Nerve Head Structures better at Diagnosing Glaucoma? An Answer using AI and Wide-Field Optical Coherence Tomography [48.7576911714538]
We developed a deep learning algorithm to automatically segment structures of the optic nerve head (ONH) and macula in 3D wide-field OCT scans.
The algorithm segmented ONH and macular tissues with a Dice coefficient (DC) of 0.94 $\pm$ 0.003.
This may encourage the mainstream adoption of 3D wide-field OCT scans.
arXiv Detail & Related papers (2022-10-13T01:51:29Z)
- Building Brains: Subvolume Recombination for Data Augmentation in Large Vessel Occlusion Detection [56.67577446132946]
A large training data set is required for a standard deep learning-based model to learn this strategy from data.
We propose an augmentation method that generates artificial training samples by recombining vessel tree segmentations of the hemispheres from different patients.
In line with the augmentation scheme, we use a 3D-DenseNet fed with task-specific input, fostering a side-by-side comparison between the hemispheres.
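A hedged sketch of the recombination idea described above: a synthetic training sample is built by taking the left-hemisphere vessel segmentation from one patient and the right hemisphere from another. The midline index and (z, y, x) array layout are illustrative assumptions; in practice the volumes would first need alignment to a common space.

```python
# Sketch: stitch hemispheres from two patients into one augmented sample.
import numpy as np

def recombine_hemispheres(seg_a, seg_b, midline=None):
    """Join the left half of seg_a to the right half of seg_b along x."""
    assert seg_a.shape == seg_b.shape
    x_mid = seg_a.shape[-1] // 2 if midline is None else midline
    out = np.zeros_like(seg_a)
    out[..., :x_mid] = seg_a[..., :x_mid]   # left hemisphere from patient A
    out[..., x_mid:] = seg_b[..., x_mid:]   # right hemisphere from patient B
    return out
```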
arXiv Detail & Related papers (2022-05-05T10:31:57Z)
- 3D Structural Analysis of the Optic Nerve Head to Robustly Discriminate Between Papilledema and Optic Disc Drusen [44.754910718620295]
We developed a deep learning algorithm to identify major tissue structures of the optic nerve head (ONH) in 3D optical coherence tomography (OCT) scans.
A classification algorithm was designed using 150 OCT volumes to perform 3-class classification (1: ODD, 2: papilledema, 3: healthy) strictly from their drusen and prelamina swelling scores.
Our AI approach accurately discriminated ODD from papilledema, using a single OCT scan.
arXiv Detail & Related papers (2021-12-18T17:05:53Z)
- CNN Filter Learning from Drawn Markers for the Detection of Suggestive Signs of COVID-19 in CT Images [58.720142291102135]
We propose a method that does not require either large annotated datasets or backpropagation to estimate the filters of a convolutional neural network (CNN).
For a few CT images, the user draws markers at representative normal and abnormal regions.
The method generates a feature extractor composed of a sequence of convolutional layers, whose kernels are specialized in enhancing regions similar to the marked ones.
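A rough sketch of how filters could be derived from markers without backpropagation (my assumptions, not the authors' pipeline): patches around marked pixels are normalised and clustered, and the cluster centres are used as fixed convolution kernels.

```python
# Sketch: turn user-marked patches into fixed convolution kernels.
import numpy as np
from scipy.ndimage import correlate
from sklearn.cluster import KMeans

def kernels_from_markers(image, marker_coords, k=8, size=5):
    """Build k filters of shape (size, size) from patches at marked pixels."""
    half = size // 2
    patches = []
    for y, x in marker_coords:
        p = image[y - half:y + half + 1, x - half:x + half + 1]
        if p.shape == (size, size):
            p = (p - p.mean()) / (p.std() + 1e-8)   # zero-mean, unit-variance patch
            patches.append(p.ravel())
    centres = KMeans(n_clusters=k, n_init=10).fit(np.stack(patches)).cluster_centers_
    return centres.reshape(k, size, size)

def apply_filters(image, kernels):
    """Correlate the image with each kernel, giving one feature map per kernel."""
    return np.stack([correlate(image, kern) for kern in kernels])
```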
arXiv Detail & Related papers (2021-11-16T15:03:42Z)
- CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran.
Our empirical results show that CyTran outperforms all competing methods.
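For context, the cycle-consistency constraint underlying such unpaired translation can be written as an L1 reconstruction loss after a round trip between domains; the snippet below is a generic illustration with placeholder generators, not CyTran's code.

```python
# Generic cycle-consistency loss for unpaired translation between domains A and B.
import torch.nn.functional as F

def cycle_consistency_loss(x_a, x_b, G_ab, G_ba):
    """L1 error after full A -> B -> A and B -> A -> B cycles."""
    loss_a = F.l1_loss(G_ba(G_ab(x_a)), x_a)   # e.g. contrast -> non-contrast -> contrast
    loss_b = F.l1_loss(G_ab(G_ba(x_b)), x_b)   # e.g. non-contrast -> contrast -> non-contrast
    return loss_a + loss_b
```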
arXiv Detail & Related papers (2021-10-12T23:25:03Z)
- A$^3$DSegNet: Anatomy-aware artifact disentanglement and segmentation network for unpaired segmentation, artifact reduction, and modality translation [18.500206499468902]
Cone-beam CT (CBCT) images are of low quality and artifact-laden due to noise, poor tissue contrast, and the presence of metallic objects.
There exists a wealth of artifact-free, high-quality CT images with vertebra annotations.
This motivates us to build a CBCT vertebra segmentation model using unpaired CT images with annotations.
arXiv Detail & Related papers (2020-01-02T06:37:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.