Artifact Reduction in 3D and 4D Cone-beam Computed Tomography Images with Deep Learning -- A Review
- URL: http://arxiv.org/abs/2403.18565v1
- Date: Wed, 27 Mar 2024 13:46:01 GMT
- Title: Artifact Reduction in 3D and 4D Cone-beam Computed Tomography Images with Deep Learning -- A Review
- Authors: Mohammadreza Amirian, Daniel Barco, Ivo Herzig, Frank-Peter Schilling
- Abstract summary: Deep learning techniques have been used to improve image quality in cone-beam computed tomography (CBCT).
We provide an overview of deep learning techniques that have successfully been shown to reduce artifacts in 3D, as well as in time-resolved (4D) CBCT.
One of the key findings of this work is an observed trend towards the use of generative models including GANs and score-based or diffusion models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning based approaches have been used to improve image quality in cone-beam computed tomography (CBCT), a medical imaging technique often used in applications such as image-guided radiation therapy, implant dentistry or orthopaedics. In particular, while deep learning methods have been applied to reduce various types of CBCT image artifacts arising from motion, metal objects, or low-dose acquisition, a comprehensive review summarizing the successes and shortcomings of these approaches, with a primary focus on the type of artifacts rather than the architecture of neural networks, is lacking in the literature. In this review, the data generation and simulation pipelines, and artifact reduction techniques are specifically investigated for each type of artifact. We provide an overview of deep learning techniques that have successfully been shown to reduce artifacts in 3D, as well as in time-resolved (4D) CBCT through the use of projection- and/or volume-domain optimizations, or by introducing neural networks directly within the CBCT reconstruction algorithms. Research gaps are identified to suggest avenues for future exploration. One of the key findings of this work is an observed trend towards the use of generative models including GANs and score-based or diffusion models, accompanied with the need for more diverse and open training datasets and simulations.
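As a concrete illustration of the volume-domain optimization route mentioned in the abstract, the sketch below shows a residual 3D CNN that post-processes an artifact-affected CBCT volume. It is a minimal example assuming PyTorch; the architecture, shapes and names are illustrative placeholders rather than any specific model from the review.

```python
# Minimal sketch of volume-domain artifact reduction: a small residual 3D CNN
# maps an artifact-affected CBCT volume to a corrected one.
import torch
import torch.nn as nn

class ResidualArtifactReducer3D(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        # Predict the artifact component and subtract it (residual learning).
        return volume - self.body(volume)

if __name__ == "__main__":
    model = ResidualArtifactReducer3D()
    cbct = torch.randn(1, 1, 64, 64, 64)      # (batch, channel, D, H, W)
    target = torch.randn(1, 1, 64, 64, 64)    # artifact-free reference
    loss = nn.functional.mse_loss(model(cbct), target)
    loss.backward()
```

Projection-domain methods follow the same pattern but operate on the stack of 2D projections before reconstruction.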
Related papers
- Multi-stage Deep Learning Artifact Reduction for Computed Tomography [0.0]
We propose a multi-stage deep learning method for artifact removal, in which neural networks are applied to several domains.
We show that the neural networks can be effectively trained in succession, resulting in easy-to-use and computationally efficient training.
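A minimal sketch of the succession idea, assuming PyTorch: a projection-domain network is trained first, then frozen while an image-domain network is trained on its reconstructed output. The `reconstruct` stand-in and both tiny CNNs are hypothetical placeholders, not the authors' pipeline.

```python
import torch
import torch.nn as nn

def reconstruct(projections: torch.Tensor) -> torch.Tensor:
    # Placeholder for a (differentiable) FBP/FDK reconstruction operator.
    return projections

def tiny_cnn() -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )

proj_net, img_net = tiny_cnn(), tiny_cnn()
proj, proj_clean = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)
img_clean = torch.randn(4, 1, 64, 64)

# Stage 1: train the projection-domain network alone.
opt1 = torch.optim.Adam(proj_net.parameters(), lr=1e-3)
loss1 = nn.functional.mse_loss(proj_net(proj), proj_clean)
opt1.zero_grad(); loss1.backward(); opt1.step()

# Stage 2: freeze stage 1, train the image-domain network on its output.
for p in proj_net.parameters():
    p.requires_grad_(False)
opt2 = torch.optim.Adam(img_net.parameters(), lr=1e-3)
recon = reconstruct(proj_net(proj))
loss2 = nn.functional.mse_loss(img_net(recon), img_clean)
opt2.zero_grad(); loss2.backward(); opt2.step()
```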
arXiv Detail & Related papers (2023-09-01T14:40:25Z)
- Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
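A minimal sketch of the pre-training objective, assuming PyTorch: corrupt a 3D patch with local masking plus a low-level perturbation and train an autoencoder to reconstruct the clean input. The corruption parameters and the tiny autoencoder are illustrative stand-ins; the actual framework is considerably more elaborate.

```python
import torch
import torch.nn as nn

def disrupt(x: torch.Tensor, mask_ratio: float = 0.3, noise_std: float = 0.1):
    mask = (torch.rand_like(x) > mask_ratio).float()   # random local masking
    return x * mask + noise_std * torch.randn_like(x)  # plus low-level noise

autoencoder = nn.Sequential(
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 3, padding=1),
)

patch = torch.randn(2, 1, 32, 32, 32)
recon = autoencoder(disrupt(patch))
loss = nn.functional.mse_loss(recon, patch)   # reconstruct the original image
loss.backward()
```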
arXiv Detail & Related papers (2023-07-31T17:59:42Z)
- Deep learning network to correct axial and coronal eye motion in 3D OCT retinal imaging [65.47834983591957]
We propose deep neural networks to correct axial and coronal motion artifacts in OCT based on a single scan.
The experimental results show that the proposed method effectively corrects motion artifacts and achieves a smaller error than other methods.
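As a rough, hypothetical sketch of single-scan motion estimation (assuming PyTorch), a small network can regress one axial displacement per B-scan and be trained against reference displacements; the shapes, architecture and training signal are illustrative assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class AxialMotionEstimator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 1)   # one axial shift per B-scan

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        # volume: (num_bscans, 1, depth, width) -> (num_bscans,) shifts
        f = self.features(volume).flatten(1)
        return self.head(f).squeeze(1)

model = AxialMotionEstimator()
bscans = torch.randn(64, 1, 128, 128)   # one motion-corrupted volume
gt_shifts = torch.randn(64)             # reference axial displacements
loss = nn.functional.mse_loss(model(bscans), gt_shifts)
loss.backward()
```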
arXiv Detail & Related papers (2023-05-27T03:55:19Z)
- Orientation-Shared Convolution Representation for CT Metal Artifact Learning [63.67718355820655]
During X-ray computed tomography (CT) scanning, metallic implants carried by patients often lead to adverse artifacts.
Existing deep-learning-based methods have achieved promising reconstruction performance.
We propose an orientation-shared convolution representation strategy to adapt to the physical prior structure of the artifacts.
arXiv Detail & Related papers (2022-12-26T13:56:12Z)
- Slice-level Detection of Intracranial Hemorrhage on CT Using Deep Descriptors of Adjacent Slices [0.31317409221921133]
We propose a new strategy to train slice-level classifiers on CT scans based on the descriptors of the adjacent slices along the axis.
We obtain a single model that ranks within the top 4% of best-performing solutions in the RSNA Intracranial Hemorrhage dataset challenge.
The proposed method is general and can be applied to other 3D medical diagnosis tasks such as MRI imaging.
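A minimal sketch of the adjacent-slice idea, assuming PyTorch: a shared 2D encoder embeds each slice, and the descriptors of the previous, current and next slices are concatenated before a slice-level classifier head. The tiny encoder and shapes are illustrative placeholders.

```python
import torch
import torch.nn as nn

class AdjacentSliceClassifier(nn.Module):
    def __init__(self, feat_dim: int = 32, num_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(3 * feat_dim, num_classes)

    def forward(self, prev_s, curr_s, next_s):
        desc = torch.cat([self.encoder(prev_s),
                          self.encoder(curr_s),
                          self.encoder(next_s)], dim=1)
        return self.classifier(desc)

model = AdjacentSliceClassifier()
slices = [torch.randn(8, 1, 64, 64) for _ in range(3)]   # prev / current / next
logits = model(*slices)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 2, (8,)))
loss.backward()
```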
arXiv Detail & Related papers (2022-08-05T23:20:37Z)
- Ultrasound Signal Processing: From Models to Deep Learning [64.56774869055826]
Medical ultrasound imaging relies heavily on high-quality signal processing to provide reliable and interpretable image reconstructions.
Deep learning based methods, which are optimized in a data-driven fashion, have gained popularity.
A relatively new paradigm combines the power of the two: leveraging data-driven deep learning, as well as exploiting domain knowledge.
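A minimal sketch of such a hybrid pipeline, assuming PyTorch: a conventional, physics-based stage (represented here by a crude placeholder) is followed by a small learned refinement network. Both components are illustrative assumptions rather than a method from the paper.

```python
import torch
import torch.nn as nn

def model_based_stage(rf_data: torch.Tensor) -> torch.Tensor:
    # Placeholder for domain-knowledge processing (beamforming, envelope
    # detection, log compression); here just a magnitude + log stand-in.
    return torch.log1p(rf_data.abs())

refiner = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

rf = torch.randn(2, 1, 256, 128)            # simulated channel/RF data
image = refiner(model_based_stage(rf))      # physics first, learning second
loss = nn.functional.mse_loss(image, torch.randn_like(image))
loss.backward()
```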
arXiv Detail & Related papers (2022-04-09T13:04:36Z)
- MedNeRF: Medical Neural Radiance Fields for Reconstructing 3D-aware CT-Projections from a Single X-ray [14.10611608681131]
Excessive ionising radiation can lead to deterministic and harmful effects on the body.
This paper proposes a deep learning model that learns to reconstruct CT projections from a few X-ray views, or even a single one.
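A heavily simplified sketch of the underlying neural-field idea, assuming PyTorch (this is not the MedNeRF architecture): an MLP maps 3D coordinates to attenuation, a projection is rendered by summing attenuation along one grid axis, and the rendering is compared to the observed X-ray.

```python
import torch
import torch.nn as nn

field = nn.Sequential(                      # coordinate -> attenuation
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

n = 32
axes = [torch.linspace(-1, 1, n)] * 3
grid = torch.stack(torch.meshgrid(*axes, indexing="ij"), dim=-1)  # (n, n, n, 3)
attenuation = field(grid.reshape(-1, 3)).reshape(n, n, n)
projection = attenuation.sum(dim=0)         # parallel-beam style line integral

xray = torch.rand(n, n)                     # the single observed view
loss = nn.functional.mse_loss(projection, xray)
loss.backward()
```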
arXiv Detail & Related papers (2022-02-02T13:25:23Z)
- Reducing Textural Bias Improves Robustness of Deep Segmentation CNNs [8.736194193307451]
Recent findings on natural images suggest that deep neural models can show a textural bias when carrying out image classification tasks.
This study investigates how addressing the textural bias phenomenon can improve the robustness and transferability of deep segmentation models.
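One simple, hypothetical way to act on this observation is to perturb texture during training while leaving the segmentation labels untouched; the sketch below (assuming PyTorch, with a toy network and crude augmentations not taken from the paper) illustrates the idea.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def perturb_texture(x: torch.Tensor) -> torch.Tensor:
    x = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)   # crude blur
    return x + 0.05 * torch.randn_like(x)                     # additive noise

seg_net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, 3, padding=1),           # 2-class segmentation logits
)

images = torch.randn(4, 1, 64, 64)
masks = torch.randint(0, 2, (4, 64, 64))
logits = seg_net(perturb_texture(images))     # same masks, perturbed texture
loss = F.cross_entropy(logits, masks)
loss.backward()
```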
arXiv Detail & Related papers (2020-11-30T18:29:53Z)
- Spatio-Temporal Deep Learning Methods for Motion Estimation Using 4D OCT Image Data [63.73263986460191]
Localizing structures and estimating the motion of a specific target region are common problems for navigation during surgical interventions.
We investigate whether using a temporal stream of OCT image volumes can improve deep learning-based motion estimation performance.
Using 4D information for the model input improves performance while maintaining reasonable inference times.
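A minimal sketch of feeding a temporal stream to the model, assuming PyTorch: consecutive volumes are stacked along the channel axis and a 3D CNN regresses a 3D displacement. Temporal depth, shapes and the regression head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SpatioTemporalMotionNet(nn.Module):
    def __init__(self, num_timesteps: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(num_timesteps, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, 3)          # (dx, dy, dz) displacement

    def forward(self, volumes: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(volumes))

model = SpatioTemporalMotionNet()
stream = torch.randn(2, 4, 32, 32, 32)        # batch of 4-volume 4D sequences
gt_motion = torch.randn(2, 3)
loss = nn.functional.mse_loss(model(stream), gt_motion)
loss.backward()
```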
arXiv Detail & Related papers (2020-04-21T15:43:01Z)
- Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset, which contains images captured with different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
- Learned Spectral Computed Tomography [0.0]
We propose a Deep Learning imaging method for Spectral Photon-Counting Computed Tomography.
The method takes the form of a two-step learned primal-dual algorithm that is trained using case-specific data.
The proposed approach is characterised by fast reconstruction capability and high imaging performance, even in limited-data cases.
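A minimal sketch of an unrolled learned primal-dual scheme, assuming PyTorch: small CNNs alternately update a dual variable in the data domain and a primal variable in the image domain. The identity stand-ins for the forward projector and its adjoint, the network sizes and the number of iterations are illustrative placeholders, not the trained model from the paper.

```python
import torch
import torch.nn as nn

A = lambda x: x          # placeholder forward projector
At = lambda y: y         # placeholder adjoint (back-projector)

def block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, out_ch, 3, padding=1))

n_iter = 4
dual_nets = nn.ModuleList(block(3, 1) for _ in range(n_iter))    # (d, A x, y)
primal_nets = nn.ModuleList(block(2, 1) for _ in range(n_iter))  # (x, At d)

y = torch.randn(1, 1, 64, 64)                 # measured (spectral) data
x = torch.zeros(1, 1, 64, 64)                 # primal variable (image)
d = torch.zeros(1, 1, 64, 64)                 # dual variable (data domain)

for dual_net, primal_net in zip(dual_nets, primal_nets):
    d = d + dual_net(torch.cat([d, A(x), y], dim=1))       # dual update
    x = x + primal_net(torch.cat([x, At(d)], dim=1))       # primal update

loss = nn.functional.mse_loss(x, torch.randn_like(x))      # vs. reference
loss.backward()
```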
arXiv Detail & Related papers (2020-03-09T13:39:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.