Improving Automated Hemorrhage Detection in Sparse-view Computed
Tomography via Deep Convolutional Neural Network based Artifact Reduction
- URL: http://arxiv.org/abs/2303.09340v3
- Date: Mon, 24 Jul 2023 11:34:21 GMT
- Title: Improving Automated Hemorrhage Detection in Sparse-view Computed
Tomography via Deep Convolutional Neural Network based Artifact Reduction
- Authors: Johannes Thalhammer, Manuel Schultheiss, Tina Dorosti, Tobias Lasser,
Franz Pfeiffer, Daniela Pfeiffer, Florian Schaff
- Abstract summary: We trained a U-Net for artifact reduction on simulated sparse-view cranial CT scans from 3000 patients.
We trained a convolutional neural network on fully sampled CT data from 17,545 patients for automated hemorrhage detection.
- Score: 4.109026802238838
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Purpose: Sparse-view computed tomography (CT) is an effective way to reduce
dose by lowering the total number of views acquired, albeit at the expense of
image quality, which, in turn, can impact the ability to detect diseases. We
explore deep learning-based artifact reduction in sparse-view cranial CT scans
and its impact on automated hemorrhage detection. Methods: We trained a U-Net
for artifact reduction on simulated sparse-view cranial CT scans from 3000
patients obtained from a public dataset and reconstructed with varying levels
of sub-sampling. Additionally, we trained a convolutional neural network on
fully sampled CT data from 17,545 patients for automated hemorrhage detection.
We evaluated the classification performance using the area under the receiver
operating characteristic curves (AUC-ROCs) with corresponding 95% confidence
intervals (CIs) and the DeLong test, along with confusion matrices. The
performance of the U-Net was compared to an analytical approach based on total
variation (TV). Results: The U-Net outperformed both unprocessed and
TV-processed images with respect to image quality and automated hemorrhage
diagnosis. With U-Net post-processing, the number of views can be reduced from
4096 views (AUC-ROC: 0.974; 95% CI: 0.972-0.976) to 512 views (0.973;
0.971-0.975) with minimal decrease in hemorrhage detection (P<.001) and to 256
views (0.967; 0.964-0.969) with a slight performance decrease (P<.001).
Conclusion: The results suggest that U-Net based artifact reduction
substantially enhances automated hemorrhage detection in sparse-view cranial
CTs. Our findings highlight that appropriate post-processing is crucial for
optimal image quality and diagnostic accuracy while minimizing radiation dose.
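As a rough illustration of the evaluation metric, the sketch below computes the AUC-ROC via the equivalent Mann-Whitney statistic together with a percentile-bootstrap 95% CI. Note this is a simplified stand-in: the paper itself derives CIs with the DeLong method, and all function names here are hypothetical.

```python
import random

def auc_roc(scores_pos, scores_neg):
    """AUC-ROC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) score pairs ranked correctly, ties counted as half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

def bootstrap_ci(scores_pos, scores_neg, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap (1 - alpha) confidence interval for the AUC-ROC,
    resampling positives and negatives independently with replacement."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        bp = [rng.choice(scores_pos) for _ in scores_pos]
        bn = [rng.choice(scores_neg) for _ in scores_neg]
        stats.append(auc_roc(bp, bn))
    stats.sort()
    lower = stats[int((alpha / 2) * n_boot)]
    upper = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lower, upper
```

In the paper's setting, `scores_pos` and `scores_neg` would be the classifier's hemorrhage probabilities for scans with and without hemorrhage, computed once per sub-sampling level.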
Related papers
- Rotational Augmented Noise2Inverse for Low-dose Computed Tomography
Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for LDCT, in which ground truth is not required for training the convolutional neural network (CNN).
Numerical and experimental results show that the reconstruction accuracy of N2I degrades with sparse views, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality across a range of sampling angles.
arXiv Detail & Related papers (2023-12-19T22:40:51Z)
- Using Multiple Dermoscopic Photographs of One Lesion Improves Melanoma
Classification via Deep Learning: A Prognostic Diagnostic Accuracy Study [0.0]
This study evaluated the impact of multiple real-world dermoscopic views of a single lesion of interest on a CNN-based melanoma classifier.
Using multiple real-world images is an inexpensive method to positively impact the performance of a CNN-based melanoma classifier.
arXiv Detail & Related papers (2023-06-05T11:55:57Z)
- Performance of a deep learning system for detection of referable
diabetic retinopathy in real clinical settings [0.0]
RetCAD v.1.3.1 was developed to automatically detect referable diabetic retinopathy (DR)
The study analysed the workload reduction achievable by incorporating this artificial intelligence-based technology.
arXiv Detail & Related papers (2022-05-11T14:59:10Z)
- Incremental Cross-view Mutual Distillation for Self-supervised Medical
CT Synthesis [88.39466012709205]
This paper builds a novel medical slice synthesis approach to increase the between-slice resolution.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z)
- CNN Filter Learning from Drawn Markers for the Detection of Suggestive
Signs of COVID-19 in CT Images [58.720142291102135]
We propose a method that does not require either large annotated datasets or backpropagation to estimate the filters of a convolutional neural network (CNN).
For a few CT images, the user draws markers at representative normal and abnormal regions.
The method generates a feature extractor composed of a sequence of convolutional layers, whose kernels are specialized in enhancing regions similar to the marked ones.
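A minimal stdlib-only sketch of the core idea, assuming grayscale patches stored as nested lists: a user-marked patch is mean-centered and normalized into a correlation kernel, so its response peaks on regions resembling the marked one. Function names are hypothetical, and the actual method stacks such filters into a sequence of convolutional layers.

```python
def patch_to_kernel(patch):
    """Turn a user-marked image patch into a correlation kernel:
    subtract the mean and scale to unit norm, so the filter responds
    most strongly to regions that look like the marked one."""
    flat = [v for row in patch for v in row]
    mean = sum(flat) / len(flat)
    centered = [[v - mean for v in row] for row in patch]
    norm = sum(v * v for row in centered for v in row) ** 0.5 or 1.0
    return [[v / norm for v in row] for row in centered]

def correlate(image, kernel):
    """Valid-mode 2-D cross-correlation (no padding)."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = 0.0
            for di in range(kh):
                for dj in range(kw):
                    s += image[i + di][j + dj] * kernel[di][dj]
            row.append(s)
        out.append(row)
    return out
```

Applying `correlate` with kernels derived from both normal and abnormal markers yields the feature maps that the classification stage then operates on.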
arXiv Detail & Related papers (2021-11-16T15:03:42Z)
- Controlling False Positive/Negative Rates for Deep-Learning-Based
Prostate Cancer Detection on Multiparametric MR images [58.85481248101611]
We propose a novel PCa detection network that incorporates a lesion-level cost-sensitive loss and an additional slice-level loss based on a lesion-to-slice mapping function.
Our experiments based on 290 clinical patients conclude that 1) the lesion-level FNR was effectively reduced from 0.19 to 0.10 and the lesion-level FPR was reduced from 1.03 to 0.66 by changing the lesion-level cost.
arXiv Detail & Related papers (2021-06-04T09:51:27Z)
- Wide & Deep neural network model for patch aggregation in CNN-based
prostate cancer detection systems [51.19354417900591]
Prostate cancer (PCa) is one of the leading causes of death among men, with almost 1.41 million new cases and around 375,000 deaths in 2020.
To perform an automatic diagnosis, prostate tissue samples are first digitized into gigapixel-resolution whole-slide images.
Small subimages called patches are extracted and predicted, obtaining a patch-level classification.
arXiv Detail & Related papers (2021-05-20T18:13:58Z)
- Critical Evaluation of Deep Neural Networks for Wrist Fracture Detection [1.0617212070722408]
Wrist fracture is the most common type of fracture, with a high incidence rate.
Recent advances in the field of Deep Learning (DL) have shown that wrist fracture detection can be automated using Convolutional Neural Networks.
Our results reveal that a typical state-of-the-art approach, such as DeepWrist, has a substantially lower performance on the challenging test set.
arXiv Detail & Related papers (2020-12-04T13:35:36Z)
- Deep Sequential Learning for Cervical Spine Fracture Detection on
Computed Tomography Imaging [20.051649556262216]
We propose a deep convolutional neural network (DCNN) with a bidirectional long-short term memory (BLSTM) layer for the automated detection of cervical spine fractures in CT axial images.
We used an annotated dataset of 3,666 CT scans (729 positive and 2,937 negative cases) to train and validate the model.
The validation results show a classification accuracy of 70.92% and 79.18% on the balanced (104 positive and 104 negative cases) and imbalanced (104 positive and 419 negative cases) test datasets, respectively.
arXiv Detail & Related papers (2020-10-26T04:36:29Z)
- Classification of COVID-19 in CT Scans using Multi-Source Transfer
Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
- Assessing Robustness to Noise: Low-Cost Head CT Triage [6.914268150661423]
We develop a model to triage head CTs and report an area under the receiver operating characteristic curve (AUROC) of 0.77.
We show that the trained model is robust to reduced tube current and fewer projections, with the AUROC dropping only 0.65% for images acquired with a 16x reduction in tube current and 0.22% for images acquired with 8x fewer projections.
arXiv Detail & Related papers (2020-03-17T22:49:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.