Improving image quality of sparse-view lung tumor CT images with U-Net
- URL: http://arxiv.org/abs/2307.15506v4
- Date: Wed, 14 Feb 2024 15:42:49 GMT
- Title: Improving image quality of sparse-view lung tumor CT images with U-Net
- Authors: Annika Ries, Tina Dorosti, Johannes Thalhammer, Daniel Sasse, Andreas
Sauter, Felix Meurer, Ashley Benne, Tobias Lasser, Franz Pfeiffer, Florian
Schaff, Daniela Pfeiffer
- Abstract summary: We aimed at improving image quality (IQ) of sparse-view computed tomography (CT) images using a U-Net for lung metastasis detection.
Projection views can be reduced from 2,048 to 64 while maintaining IQ and the radiologists' diagnostic confidence at a satisfactory level.
- Score: 3.5655865803527718
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Background: We aimed at improving image quality (IQ) of sparse-view computed
tomography (CT) images using a U-Net for lung metastasis detection and
determining the best tradeoff between number of views, IQ, and diagnostic
confidence.
Methods: CT images from 41 subjects aged 62.8 $\pm$ 10.6 years (mean $\pm$
standard deviation), 23 men, 34 with lung metastasis, 7 healthy, were
retrospectively selected (2016-2018) and forward projected onto 2,048-view
sinograms. Six corresponding sparse-view CT data subsets at varying levels of
undersampling were reconstructed from sinograms using filtered backprojection
with 16, 32, 64, 128, 256, and 512 views. A dual-frame U-Net was trained and
evaluated for each subsampling level on 8,658 images from 22 diseased subjects.
A representative image per scan was selected from 19 subjects (12 diseased, 7
healthy) for a single-blinded multireader study. These slices, for all levels
of subsampling, with and without U-Net postprocessing, were presented to three
readers. IQ and diagnostic confidence were ranked using predefined scales.
Subjective nodule segmentation was evaluated using sensitivity and Dice
similarity coefficient (DSC); clustered Wilcoxon signed-rank test was used.
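The exact projector, scan geometry, and dual-frame U-Net from the study are not
public. As a rough illustration of the sparse-view step only, the sketch below
simulates forward projection and filtered backprojection with scikit-image's
parallel-beam radon/iradon; the toy phantom, 2-D geometry, and function names
are assumptions for illustration.

```python
# Hedged sketch of sparse-view simulation (assumed 2-D parallel-beam geometry;
# the paper's exact projector and view geometry are not reproduced here).
import numpy as np
from skimage.transform import radon, iradon

def simulate_sparse_view(ct_slice: np.ndarray, n_views: int) -> np.ndarray:
    """Forward-project a slice onto n_views angles, then reconstruct with FBP."""
    angles = np.linspace(0.0, 180.0, n_views, endpoint=False)
    sinogram = radon(ct_slice, theta=angles)                   # forward projection
    return iradon(sinogram, theta=angles, filter_name="ramp")  # filtered backprojection

# One reconstruction per subsampling level used in the study.
levels = [16, 32, 64, 128, 256, 512]
phantom = np.zeros((256, 256))
phantom[96:160, 96:160] = 1.0  # toy stand-in for a CT slice
recons = {v: simulate_sparse_view(phantom, v) for v in levels}
```

Streak artifacts grow as the view count drops; the dual-frame U-Net described
above post-processes such artifact-laden reconstructions.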
Results: The 64-projection sparse-view images resulted in 0.89 sensitivity
and 0.81 DSC, while their counterparts, postprocessed with the U-Net, had
improved metrics (0.94 sensitivity and 0.85 DSC; p = 0.400). Fewer views led
to insufficient IQ for diagnosis; at higher view counts, no substantial
differences were noted between sparse-view and postprocessed images.
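For reference, the sensitivity and DSC reported above can be computed on binary
masks as sketched below; the paired scores are made up for illustration, and
scipy's wilcoxon is the plain signed-rank test, not the clustered variant the
paper uses.

```python
# Hedged sketch of the reported segmentation metrics on binary NumPy masks.
import numpy as np
from scipy.stats import wilcoxon

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def sensitivity(pred: np.ndarray, truth: np.ndarray) -> float:
    """True-positive rate: |A∩B| / |B|."""
    return np.logical_and(pred, truth).sum() / truth.sum()

# Illustrative paired per-case DSC values (not the paper's data):
sparse_dsc = np.array([0.78, 0.82, 0.74, 0.85, 0.80])  # sparse-view images
unet_dsc = np.array([0.84, 0.85, 0.79, 0.88, 0.83])    # U-Net postprocessed
stat, p = wilcoxon(sparse_dsc, unet_dsc)  # plain (unclustered) signed-rank test
```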
Conclusions: Projection views can be reduced from 2,048 to 64 while
maintaining IQ and the radiologists' diagnostic confidence at a satisfactory
level.
Related papers
- Improving Automated Hemorrhage Detection in Sparse-view Computed Tomography via Deep Convolutional Neural Network based Artifact Reduction [3.9874211732430447]
We trained a U-Net for artefact reduction on simulated sparse-view cranial CT scans from 3000 patients.
We also trained a convolutional neural network on fully sampled CT data from 17,545 patients for automated hemorrhage detection.
The U-Net-processed images were superior to unprocessed and TV-processed images with respect to image quality and automated hemorrhage diagnosis.
arXiv Detail & Related papers (2023-03-16T14:21:45Z) - Automated SSIM Regression for Detection and Quantification of Motion
Artefacts in Brain MR Images [54.739076152240024]
Motion artefacts are a pervasive issue in magnetic resonance brain images, and assessing MR image quality is fundamental before proceeding with clinical diagnosis.
An automated image quality assessment based on structural similarity index (SSIM) regression is proposed (a minimal SSIM/PSNR sketch appears after this list).
arXiv Detail & Related papers (2022-06-14T10:16:54Z) - Classification of COVID-19 Patients with their Severity Level from Chest
CT Scans using Transfer Learning [3.667495151642095]
The rapid increase in COVID-19 cases has led to greater demand for hospital beds and other medical equipment.
Keeping this in mind, we share our research on detecting COVID-19 and assessing its severity using chest CT scans and pre-trained deep learning models.
Our model can therefore help radiologists detect COVID-19 and the extent of its severity.
arXiv Detail & Related papers (2022-05-27T06:22:09Z) - COVID-19 Severity Classification on Chest X-ray Images [0.0]
In this work, we classify COVID-19 images based on the severity of the infection.
The ResNet-50 model produced remarkable classification results in terms of accuracy (95%), recall (0.94), F1-score (0.92), and precision (0.91).
arXiv Detail & Related papers (2022-05-25T12:01:03Z) - CNN Filter Learning from Drawn Markers for the Detection of Suggestive
Signs of COVID-19 in CT Images [58.720142291102135]
We propose a method that does not require either large annotated datasets or backpropagation to estimate the filters of a convolutional neural network (CNN).
For a few CT images, the user draws markers at representative normal and abnormal regions.
The method generates a feature extractor composed of a sequence of convolutional layers, whose kernels are specialized in enhancing regions similar to the marked ones.
arXiv Detail & Related papers (2021-11-16T15:03:42Z) - Osteoporosis Prescreening using Panoramic Radiographs through a Deep
Convolutional Neural Network with Attention Mechanism [65.70943212672023]
Deep convolutional neural network (CNN) with an attention module can detect osteoporosis on panoramic radiographs.
A dataset of 70 panoramic radiographs (PRs) from 70 different subjects aged between 49 and 60 was used.
arXiv Detail & Related papers (2021-10-19T00:03:57Z) - The Report on China-Spain Joint Clinical Testing for Rapid COVID-19 Risk
Screening by Eye-region Manifestations [59.48245489413308]
We developed and tested a COVID-19 rapid prescreening model using the eye-region images captured in China and Spain with cellphone cameras.
The performance was measured using the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, accuracy, and F1 score.
arXiv Detail & Related papers (2021-09-18T02:28:01Z) - Pointwise visual field estimation from optical coherence tomography in
glaucoma: a structure-function analysis using deep learning [12.70143462176992]
Standard Automated Perimetry (SAP) is the gold standard to monitor visual field (VF) loss in glaucoma management.
We developed and validated a deep learning (DL) regression model that estimates pointwise and overall VF loss from unsegmented optical coherence tomography (OCT) scans.
arXiv Detail & Related papers (2021-06-07T16:58:38Z) - Towards Ultrafast MRI via Extreme k-Space Undersampling and
Superresolution [65.25508348574974]
We go below the MRI acceleration factors reported by all published papers that reference the original fastMRI challenge.
We consider powerful deep learning based image enhancement methods to compensate for the underresolved images.
The quality of the reconstructed images surpasses that of the other methods, yielding an MSE of 0.00114, a PSNR of 29.6 dB, and an SSIM of 0.956 at x16 acceleration factor.
arXiv Detail & Related papers (2021-03-04T10:45:01Z) - Automatic classification of multiple catheters in neonatal radiographs
with deep learning [2.256008196530956]
We develop and evaluate a deep learning algorithm to classify multiple catheters on neonatal chest and abdominal radiographs.
A convolutional neural network (CNN) was trained using a dataset of 777 neonatal chest and abdominal radiographs.
arXiv Detail & Related papers (2020-11-14T21:27:21Z) - Classification of COVID-19 in CT Scans using Multi-Source Transfer
Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
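Two entries above (the SSIM-regression work and the ultrafast-MRI work) rest on
full-reference image-quality metrics. As a brief, hedged illustration, SSIM and
PSNR can be computed with scikit-image as follows; the arrays are random
stand-ins, not data from either paper.

```python
# Sketch of the SSIM and PSNR metrics referenced in the list above.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(0)
reference = rng.random((128, 128))  # stand-in "ground-truth" image
degraded = reference + 0.05 * rng.standard_normal((128, 128))  # simulated noise

ssim = structural_similarity(reference, degraded, data_range=1.0)
psnr = peak_signal_noise_ratio(reference, degraded, data_range=1.0)
print(f"SSIM = {ssim:.3f}, PSNR = {psnr:.1f} dB")
```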
This list is automatically generated from the titles and abstracts of the papers on this site.