Improving Automated Hemorrhage Detection in Sparse-view Computed Tomography via Deep Convolutional Neural Network based Artifact Reduction
- URL: http://arxiv.org/abs/2303.09340v4
- Date: Thu, 8 Aug 2024 00:30:35 GMT
- Title: Improving Automated Hemorrhage Detection in Sparse-view Computed Tomography via Deep Convolutional Neural Network based Artifact Reduction
- Authors: Johannes Thalhammer, Manuel Schultheiss, Tina Dorosti, Tobias Lasser, Franz Pfeiffer, Daniela Pfeiffer, Florian Schaff
- Abstract summary: We trained a U-Net for artifact reduction on simulated sparse-view cranial CT scans from 3000 patients.
We also trained a convolutional neural network on fully sampled CT data from 17,545 patients for automated hemorrhage detection.
The U-Net outperformed unprocessed and TV-processed images with respect to both image quality and automated hemorrhage diagnosis.
- Score: 3.9874211732430447
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This is a preprint. The latest version has been published here: https://pubs.rsna.org/doi/10.1148/ryai.230275 Purpose: Sparse-view computed tomography (CT) is an effective way to reduce dose by lowering the total number of views acquired, albeit at the expense of image quality, which, in turn, can impact the ability to detect diseases. We explore deep learning-based artifact reduction in sparse-view cranial CT scans and its impact on automated hemorrhage detection. Methods: We trained a U-Net for artifact reduction on simulated sparse-view cranial CT scans from 3000 patients obtained from a public dataset and reconstructed with varying levels of sub-sampling. Additionally, we trained a convolutional neural network on fully sampled CT data from 17,545 patients for automated hemorrhage detection. We evaluated classification performance using the area under the receiver operating characteristic curve (AUC-ROC) with corresponding 95% confidence intervals (CIs) and the DeLong test, along with confusion matrices. The performance of the U-Net was compared to an analytical approach based on total variation (TV). Results: The U-Net was superior to unprocessed and TV-processed images with respect to image quality and automated hemorrhage diagnosis. With U-Net post-processing, the number of views can be reduced from 4096 views (AUC-ROC: 0.974; 95% CI: 0.972-0.976) to 512 views (0.973; 0.971-0.975) with minimal decrease in hemorrhage detection (P<.001), and to 256 views (0.967; 0.964-0.969) with a slight performance decrease (P<.001). Conclusion: The results suggest that U-Net based artifact reduction substantially enhances automated hemorrhage detection in sparse-view cranial CTs. Our findings highlight that appropriate post-processing is crucial for optimal image quality and diagnostic accuracy while minimizing radiation dose.
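The AUC-ROC metric used in the evaluation above can be illustrated with a minimal sketch. This is a generic rank-based (Mann-Whitney U) formulation of AUC-ROC, not the authors' code; the function and variable names are illustrative assumptions.

```python
def auc_roc(labels, scores):
    """Rank-based AUC-ROC: the probability that a randomly chosen
    positive case receives a higher score than a randomly chosen
    negative case (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Example: two negatives (0.1, 0.4) and two positives (0.35, 0.8)
# yield AUC-ROC = 3 correctly ordered pairs out of 4 = 0.75.
print(auc_roc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))
```

In practice the confidence intervals and DeLong comparisons reported in the abstract would be computed on top of such scores with a statistics library; the pairwise formulation above is the underlying quantity.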
Related papers
- Rotational Augmented Noise2Inverse for Low-dose Computed Tomography Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for LDCT, in which ground truth is not required for training the convolutional neural network (CNN).
Numerical and experimental results show that the reconstruction accuracy of N2I degrades with sparse views, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality over a range of sampling angles.
arXiv Detail & Related papers (2023-12-19T22:40:51Z)
- Using Multiple Dermoscopic Photographs of One Lesion Improves Melanoma Classification via Deep Learning: A Prognostic Diagnostic Accuracy Study [0.0]
This study evaluated the impact of multiple real-world dermoscopic views of a single lesion of interest on a CNN-based melanoma classifier.
Using multiple real-world images is an inexpensive way to improve the performance of a CNN-based melanoma classifier.
arXiv Detail & Related papers (2023-06-05T11:55:57Z) - Performance of a deep learning system for detection of referable
diabetic retinopathy in real clinical settings [0.0]
RetCAD v.1.3.1 was developed to automatically detect referable diabetic retinopathy (DR)
The study also analysed the workload reduction achievable by incorporating this artificial intelligence-based technology.
arXiv Detail & Related papers (2022-05-11T14:59:10Z) - StRegA: Unsupervised Anomaly Detection in Brain MRIs using a Compact
Context-encoding Variational Autoencoder [48.2010192865749]
Unsupervised anomaly detection (UAD) can learn a data distribution from an unlabelled dataset of healthy subjects and then be applied to detect out-of-distribution samples.
This research proposes a compact version of the "context-encoding" VAE (ceVAE) model, combined with pre- and post-processing steps, creating a UAD pipeline (StRegA).
The proposed pipeline achieved a Dice score of 0.642 ± 0.101 while detecting tumours in T2w images of the BraTS dataset and 0.859 ± 0.112 while detecting artificially induced anomalies.
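The Dice score reported above measures the overlap between a predicted and a ground-truth segmentation. A minimal sketch of the metric itself (generic, not the StRegA code; names are illustrative):

```python
def dice(pred, truth):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks
    represented as collections of voxel indices."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2 * len(pred & truth) / (len(pred) + len(truth))

# Example: masks {1,2,3} and {2,3,4} share 2 voxels out of 6 total,
# giving 2*2/6 ≈ 0.667.
print(dice([1, 2, 3], [2, 3, 4]))
```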
arXiv Detail & Related papers (2022-01-31T14:27:35Z) - CNN Filter Learning from Drawn Markers for the Detection of Suggestive
Signs of COVID-19 in CT Images [58.720142291102135]
We propose a method that does not require either large annotated datasets or backpropagation to estimate the filters of a convolutional neural network (CNN).
For a few CT images, the user draws markers at representative normal and abnormal regions.
The method generates a feature extractor composed of a sequence of convolutional layers, whose kernels are specialized in enhancing regions similar to the marked ones.
arXiv Detail & Related papers (2021-11-16T15:03:42Z) - Vision Transformers for femur fracture classification [59.99241204074268]
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained on sub-fracture classification, using the largest and richest dataset of its kind to date.
arXiv Detail & Related papers (2021-08-07T10:12:42Z) - Wide & Deep neural network model for patch aggregation in CNN-based
prostate cancer detection systems [51.19354417900591]
Prostate cancer (PCa) is one of the leading causes of death among men, with almost 1.41 million new cases and around 375,000 deaths in 2020.
To perform an automatic diagnosis, prostate tissue samples are first digitized into gigapixel-resolution whole-slide images.
Small subimages called patches are extracted and classified individually, yielding patch-level predictions that are then aggregated into a slide-level diagnosis.
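The aggregation step this paper addresses can be sketched with a simple baseline: pooling patch-level probabilities into one slide-level decision. This is a hypothetical illustration of the problem setting, not the paper's Wide & Deep model, which learns the aggregation instead of using fixed pooling; all names here are my own.

```python
def aggregate_patches(patch_probs, method="mean", threshold=0.5):
    """Baseline slide-level aggregation of patch-level cancer
    probabilities via simple pooling. Returns the pooled score
    and the binary slide-level decision."""
    if method == "mean":
        score = sum(patch_probs) / len(patch_probs)
    elif method == "max":
        score = max(patch_probs)
    else:
        raise ValueError(f"unknown method: {method}")
    return score, score >= threshold

# Example: one highly suspicious patch drives a positive slide call
# under max pooling.
print(aggregate_patches([0.2, 0.4, 0.9], method="max"))
```

Max pooling is sensitive to a single false-positive patch, while mean pooling can dilute a small lesion across many benign patches; limitations like these motivate learning the aggregation function, as the paper does.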
arXiv Detail & Related papers (2021-05-20T18:13:58Z)
- Critical Evaluation of Deep Neural Networks for Wrist Fracture Detection [1.0617212070722408]
Wrist fracture is the most common type of fracture, with a high incidence rate.
Recent advances in the field of Deep Learning (DL) have shown that wrist fracture detection can be automated using Convolutional Neural Networks.
Our results reveal that a typical state-of-the-art approach, such as DeepWrist, has a substantially lower performance on the challenging test set.
arXiv Detail & Related papers (2020-12-04T13:35:36Z)
- Deep Sequential Learning for Cervical Spine Fracture Detection on Computed Tomography Imaging [20.051649556262216]
We propose a deep convolutional neural network (DCNN) with a bidirectional long-short term memory (BLSTM) layer for the automated detection of cervical spine fractures in CT axial images.
We used an annotated dataset of 3,666 CT scans (729 positive and 2,937 negative cases) to train and validate the model.
The validation results show a classification accuracy of 70.92% and 79.18% on the balanced (104 positive and 104 negative cases) and imbalanced (104 positive and 419 negative cases) test datasets, respectively.
arXiv Detail & Related papers (2020-10-26T04:36:29Z)
- Classification of COVID-19 in CT Scans using Multi-Source Transfer Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
- SCREENet: A Multi-view Deep Convolutional Neural Network for Classification of High-resolution Synthetic Mammographic Screening Scans [3.8137985834223502]
We develop and evaluate a multi-view deep learning approach to the analysis of high-resolution synthetic mammograms.
We assess the effect on accuracy of image resolution and training set size.
arXiv Detail & Related papers (2020-09-18T00:12:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.