3D helical CT Reconstruction with a Memory Efficient Learned Primal-Dual
Architecture
- URL: http://arxiv.org/abs/2205.11952v3
- Date: Tue, 28 Nov 2023 20:13:20 GMT
- Title: 3D helical CT Reconstruction with a Memory Efficient Learned Primal-Dual
Architecture
- Authors: Jevgenija Rudzusika, Buda Bajić, Thomas Koehler, Ozan Öktem
- Abstract summary: This paper modifies a domain adapted neural network architecture, the Learned Primal-Dual (LPD), so that it can be trained and applied to reconstruction from clinical 3D helical CT data.
It is, to the authors' knowledge, the first work to apply an unrolled deep learning architecture to reconstruction of full-sized clinical data.
- Score: 1.3518297878940662
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning based computed tomography (CT) reconstruction has demonstrated
outstanding performance on simulated 2D low-dose CT data. This applies in
particular to domain adapted neural networks, which incorporate a handcrafted
physics model for CT imaging. Empirical evidence shows that employing such
architectures reduces the demand for training data and improves upon
generalisation. However, their training requires large computational resources
that quickly become prohibitive in 3D helical CT, which is the most common
acquisition geometry used for medical imaging. Furthermore, clinical data also
comes with other challenges not accounted for in simulations, like errors in
flux measurement, resolution mismatch and, most importantly, the absence of
real ground truth. The necessity of computationally feasible training
combined with the need to address these issues has made it difficult to
evaluate deep learning based reconstruction on clinical 3D helical CT. This
paper modifies a domain adapted neural network architecture, the Learned
Primal-Dual (LPD), so that it can be trained and applied to reconstruction in
this setting. We achieve this by splitting the helical trajectory into sections
and applying the unrolled LPD iterations to those sections sequentially. To the
best of our knowledge, this work is the first to apply an unrolled deep
learning architecture for reconstruction on full-sized clinical data, such as
those in the Low Dose CT Image and Projection Data set (LDCT). Moreover,
training and testing are done on a single GPU card with 24 GB of memory.
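The following is a minimal, self-contained sketch of the section-wise idea described in the abstract, not the authors' implementation: the helical projection data are split into sections and the unrolled LPD iterations are applied to each section in turn, so only one section's data and intermediate tensors need to be held in memory at a time. The projector (a fixed random linear map), the tensor shapes, the network sizes, and the names ToySectionOperator, LPDBlock and reconstruct are hypothetical placeholders chosen to keep the example runnable.

```python
# Toy sketch of sequential, section-wise Learned Primal-Dual (LPD) iterations.
import torch
import torch.nn as nn


class ToySectionOperator(nn.Module):
    """Hypothetical stand-in for the forward projector of one helical section."""

    def __init__(self, n_voxels: int, n_rays: int, seed: int):
        super().__init__()
        gen = torch.Generator().manual_seed(seed)
        # Fixed (non-trainable) random linear map playing the role of A_s.
        self.register_buffer(
            "A", torch.randn(n_rays, n_voxels, generator=gen) / n_voxels ** 0.5
        )

    def forward(self, x):   # volume -> section sinogram
        return x @ self.A.T

    def adjoint(self, y):   # section sinogram -> volume (back-projection)
        return y @ self.A


class LPDBlock(nn.Module):
    """One unrolled LPD iteration: a dual update followed by a primal update."""

    def __init__(self, n_voxels: int, n_rays: int):
        super().__init__()
        self.dual_net = nn.Sequential(
            nn.Linear(3 * n_rays, 64), nn.PReLU(), nn.Linear(64, n_rays)
        )
        self.primal_net = nn.Sequential(
            nn.Linear(2 * n_voxels, 64), nn.PReLU(), nn.Linear(64, n_voxels)
        )

    def forward(self, x, h, y, op):
        # Dual step: update h from the current forward projection and the data y.
        h = h + self.dual_net(torch.cat([h, op(x), y], dim=-1))
        # Primal step: update x from the back-projected dual variable.
        x = x + self.primal_net(torch.cat([x, op.adjoint(h)], dim=-1))
        return x, h


def reconstruct(sections, operators, blocks, n_voxels):
    """Apply the unrolled LPD iterations sequentially over the helical sections."""
    x = torch.zeros(1, n_voxels)
    for y_s, op_s in zip(sections, operators):  # one section in memory at a time
        h = torch.zeros_like(y_s)               # fresh dual variable per section
        for block in blocks:                    # unrolled LPD iterations
            x, h = block(x, h, y_s, op_s)
    return x


if __name__ == "__main__":
    n_voxels, n_rays, n_sections, n_iters = 256, 96, 4, 3
    operators = [ToySectionOperator(n_voxels, n_rays, seed=s) for s in range(n_sections)]
    blocks = nn.ModuleList(LPDBlock(n_voxels, n_rays) for _ in range(n_iters))
    x_true = torch.randn(1, n_voxels)
    sections = [op(x_true) + 0.01 * torch.randn(1, n_rays) for op in operators]
    x_rec = reconstruct(sections, operators, blocks, n_voxels)
    print(x_rec.shape)  # torch.Size([1, 256])
```

In this toy, the primal volume is carried across sections while the dual variable is re-initialised for each section, which keeps peak memory proportional to a single section rather than the full helix; the exact handling of state between sections (and of the real helical projector) in the paper may differ.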
Related papers
- Artifact Reduction in 3D and 4D Cone-beam Computed Tomography Images with Deep Learning -- A Review [0.0]
Deep learning techniques have been used to improve image quality in cone-beam computed tomography (CBCT).
We provide an overview of deep learning techniques that have successfully been shown to reduce artifacts in 3D, as well as in time-resolved (4D) CBCT.
One of the key findings of this work is an observed trend towards the use of generative models including GANs and score-based or diffusion models.
arXiv Detail & Related papers (2024-03-27T13:46:01Z)
- Geometry-Aware Attenuation Learning for Sparse-View CBCT Reconstruction [53.93674177236367]
Cone Beam Computed Tomography (CBCT) plays a vital role in clinical imaging.
Traditional methods typically require hundreds of 2D X-ray projections to reconstruct a high-quality 3D CBCT image.
This has led to a growing interest in sparse-view CBCT reconstruction to reduce radiation doses.
We introduce a novel geometry-aware encoder-decoder framework to solve this problem.
arXiv Detail & Related papers (2023-03-26T14:38:42Z)
- Simulator-Based Self-Supervision for Learned 3D Tomography Reconstruction [34.93595625809309]
Prior machine learning approaches require reference reconstructions computed by another algorithm for training.
We train our model in a fully self-supervised manner using only noisy 2D X-ray data.
Our results show significantly higher visual fidelity and better PSNR over techniques that rely on existing reconstructions.
arXiv Detail & Related papers (2022-12-14T13:21:37Z)
- Slice-level Detection of Intracranial Hemorrhage on CT Using Deep Descriptors of Adjacent Slices [0.31317409221921133]
We propose a new strategy to train slice-level classifiers on CT scans based on descriptors of the adjacent slices along the axis.
We obtain a single model in the top 4% best-performing solutions of the RSNA Intracranial Hemorrhage dataset challenge.
The proposed method is general and can be applied to other 3D medical diagnosis tasks such as MRI imaging.
arXiv Detail & Related papers (2022-08-05T23:20:37Z)
- GLEAM: Greedy Learning for Large-Scale Accelerated MRI Reconstruction [50.248694764703714]
Unrolled neural networks have recently achieved state-of-the-art accelerated MRI reconstruction.
These networks unroll iterative optimization algorithms by alternating between physics-based consistency and neural-network based regularization.
We propose Greedy LEarning for Accelerated MRI reconstruction, an efficient training strategy for high-dimensional imaging settings.
arXiv Detail & Related papers (2022-07-18T06:01:29Z)
- Enforcing connectivity of 3D linear structures using their 2D projections [54.0598511446694]
We propose to improve the 3D connectivity of our results by minimizing a sum of topology-aware losses on their 2D projections.
This suffices to increase accuracy and to reduce the effort needed to provide the required annotated training data.
arXiv Detail & Related papers (2022-07-14T11:42:18Z)
- Sparse-view Cone Beam CT Reconstruction using Data-consistent Supervised and Adversarial Learning from Scarce Training Data [27.325532306485755]
As the number of available projections decreases, traditional reconstruction techniques perform poorly.
Deep learning-based reconstruction methods have garnered a lot of attention because they yield better performance when enough training data is available.
This work focuses on image reconstruction in settings where both the number of available CT projections and the amount of training data are extremely limited.
arXiv Detail & Related papers (2022-01-23T17:08:52Z)
- A Deep-Learning Approach For Direct Whole-Heart Mesh Reconstruction [1.8047694351309207]
We propose a novel deep-learning-based approach that directly predicts whole heart surface meshes from volumetric CT and MR image data.
Our method demonstrated promising performance in generating high-resolution and high-quality whole-heart reconstructions.
arXiv Detail & Related papers (2021-02-16T00:39:43Z)
- Probabilistic 3D surface reconstruction from sparse MRI information [58.14653650521129]
We present a novel probabilistic deep learning approach for concurrent 3D surface reconstruction from sparse 2D MR image data and aleatoric uncertainty prediction.
Our method is capable of reconstructing large surface meshes from three quasi-orthogonal MR imaging slices from limited training sets.
arXiv Detail & Related papers (2020-10-05T14:18:52Z)
- Fed-Sim: Federated Simulation for Medical Imaging [131.56325440976207]
We introduce a physics-driven generative approach that consists of two learnable neural modules.
We show that our data synthesis framework improves the downstream segmentation performance on several datasets.
arXiv Detail & Related papers (2020-09-01T19:17:46Z)
- Data Consistent CT Reconstruction from Insufficient Data with Learned Prior Images [70.13735569016752]
We investigate the robustness of deep learning in CT image reconstruction by showing false negative and false positive lesion cases.
We propose a data consistent reconstruction (DCR) method to improve their image quality, which combines the advantages of compressed sensing and deep learning.
The efficacy of the proposed method is demonstrated in cone-beam CT with truncated data, limited-angle data and sparse-view data, respectively.
arXiv Detail & Related papers (2020-05-20T13:30:49Z)