Metal-conscious Embedding for CBCT Projection Inpainting
- URL: http://arxiv.org/abs/2211.16219v1
- Date: Tue, 29 Nov 2022 13:55:49 GMT
- Title: Metal-conscious Embedding for CBCT Projection Inpainting
- Authors: Fuxin Fan, Yangkong Wang, Ludwig Ritschl, Ramyar Biniazan, Marcel Beister, Björn Kreher, Yixing Huang, Steffen Kappler, and Andreas Maier
- Abstract summary: The existence of metallic implants in projection images for cone-beam computed tomography (CBCT) introduces undesired artifacts.
In this work, a hybrid network combining the shifted window (Swin) vision transformer (ViT) and a convolutional neural network is proposed as a baseline network for the inpainting task.
- Score: 6.94542730064006
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The existence of metallic implants in projection images for cone-beam
computed tomography (CBCT) introduces undesired artifacts which degrade the
quality of reconstructed images. In order to reduce metal artifacts, projection
inpainting is an essential step in many metal artifact reduction algorithms. In
this work, a hybrid network combining the shifted window (Swin) vision
transformer (ViT) and a convolutional neural network is proposed as a baseline
network for the inpainting task. To incorporate metal information for the Swin
ViT-based encoder, metal-conscious self-embedding and neighborhood-embedding
methods are investigated. Both methods have improved the performance of the
baseline network. Furthermore, with an appropriately chosen window size, the
model with neighborhood-embedding achieves the lowest mean absolute error of
0.079 in metal regions and the highest peak signal-to-noise ratio of 42.346 in
CBCT projections. Finally, the effectiveness of metal-conscious embedding is
demonstrated on both simulated and real cadaver CBCT data, where it enhances
the inpainting capability of the baseline network.
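The paper itself ships no code, but the embedding idea can be illustrated compactly. Below is a minimal, hypothetical sketch (PyTorch assumed; the class name, the scalar metal-fraction encoding, and the pooling-based "neighborhood" variant are illustrative choices, not the authors' implementation) of how a metal mask could be injected into the patch embedding that feeds a Swin-style encoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetalConsciousPatchEmbed(nn.Module):
    """Illustrative mask-aware patch embedding for a Swin-style encoder.

    'self' mode embeds only the metal fraction of each patch itself;
    'neighborhood' mode additionally pools mask information from the
    surrounding window (a hypothetical reading of the neighborhood-embedding
    idea in the abstract, where the window size matters).
    """

    def __init__(self, in_ch=1, embed_dim=96, patch_size=4,
                 mode="neighborhood", window=7):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, embed_dim, kernel_size=patch_size,
                              stride=patch_size)                     # image -> patch tokens
        self.mask_pool = nn.AvgPool2d(patch_size, stride=patch_size)  # metal fraction per patch
        self.mask_embed = nn.Linear(1, embed_dim)                     # mask scalar -> token offset
        self.mode = mode
        self.window = window

    def forward(self, proj_img, metal_mask):
        # proj_img: (B, 1, H, W) CBCT projection; metal_mask: (B, 1, H, W) in {0, 1}
        tokens = self.proj(proj_img)                    # (B, C, H/p, W/p)
        frac = self.mask_pool(metal_mask)               # per-patch metal fraction
        if self.mode == "neighborhood":
            # smear the mask over the local window so tokens near metal are also flagged
            frac = F.avg_pool2d(frac, self.window, stride=1,
                                padding=self.window // 2, count_include_pad=False)
        offset = self.mask_embed(frac.flatten(2).transpose(1, 2))   # (B, N, C)
        tokens = tokens.flatten(2).transpose(1, 2) + offset         # metal-conscious tokens
        return tokens                                   # feed into Swin blocks / CNN decoder


if __name__ == "__main__":
    embed = MetalConsciousPatchEmbed()
    x = torch.rand(2, 1, 256, 256)                       # simulated projections
    m = (torch.rand(2, 1, 256, 256) > 0.97).float()      # sparse metal mask
    print(embed(x, m).shape)                             # torch.Size([2, 4096, 96])
```

In this reading, self-embedding uses only the metal content of each patch, while neighborhood-embedding also aggregates mask information from the surrounding window, which is where the window-size dependence noted in the abstract would enter.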
Related papers
- TBSN: Transformer-Based Blind-Spot Network for Self-Supervised Image Denoising [94.09442506816724]
Blind-spot networks (BSN) have been prevalent network architectures in self-supervised image denoising (SSID).
We present a transformer-based blind-spot network (TBSN) by analyzing and redesigning the transformer operators that meet the blind-spot requirement.
For spatial self-attention, an elaborate mask is applied to the attention matrix to restrict its receptive field, thus mimicking the dilated convolution.
For channel self-attention, we observe that it may leak blind-spot information when the channel number is greater than the spatial size in the deep layers of multi-scale architectures.
arXiv Detail & Related papers (2024-04-11T15:39:10Z)
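As a rough illustration of the masked spatial self-attention described above (not the TBSN code; PyTorch assumed, and the visibility pattern here is only a toy blind-spot mask):

```python
import torch
import torch.nn.functional as F

def masked_self_attention(q, k, v, allowed):
    """Spatial self-attention with a hand-crafted visibility mask.

    q, k, v: (B, N, C) token tensors; allowed: (N, N) boolean matrix where
    allowed[i, j] = True means query i may look at key j.  Setting the
    diagonal (and any other forbidden offsets) to False mimics the
    blind-spot / dilated-convolution behaviour described above.
    """
    scale = q.shape[-1] ** -0.5
    attn = (q @ k.transpose(-2, -1)) * scale            # (B, N, N) similarity
    attn = attn.masked_fill(~allowed, float("-inf"))    # hide forbidden positions
    attn = F.softmax(attn, dim=-1)
    return attn @ v                                      # (B, N, C)


# toy example: 16 tokens, each token may attend to every position except itself
n, allowed = 16, ~torch.eye(16, dtype=torch.bool)
q = k = v = torch.rand(1, n, 8)
print(masked_self_attention(q, k, v, allowed).shape)     # torch.Size([1, 16, 8])
```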
arXiv Detail & Related papers (2024-04-11T15:39:10Z) - MARformer: An Efficient Metal Artifact Reduction Transformer for Dental CBCT Images [53.62335292022492]
Metal dental implants can introduce annoying metal artifacts during the CBCT imaging process.
We develop an efficient Transformer to perform metal artifact reduction (MAR) from dental CBCT images.
A Patch-wise Perceptive Feed Forward Network (P2FFN) is also proposed to perceive local image information for fine-grained restoration.
arXiv Detail & Related papers (2023-11-16T06:02:03Z)
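The exact P2FFN design is not spelled out in this summary; a common way to give a transformer feed-forward block local, patch-wise perception is to insert a depthwise convolution, sketched below purely as an assumed stand-in (PyTorch; names and sizes are illustrative).

```python
import torch
import torch.nn as nn

class LocallyPerceptiveFFN(nn.Module):
    """Stand-in for a patch-wise perceptive feed-forward block: a plain
    transformer FFN with a depthwise 3x3 convolution inserted so that each
    token also sees its spatial neighbours (assumed design, not the paper's)."""

    def __init__(self, dim=96, hidden=384):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.dwconv = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
        self.fc2 = nn.Linear(hidden, dim)
        self.act = nn.GELU()

    def forward(self, x, h, w):
        # x: (B, N, C) tokens laid out on an h x w grid (N = h * w)
        b, n, _ = x.shape
        x = self.act(self.fc1(x))
        x = x.transpose(1, 2).reshape(b, -1, h, w)   # tokens -> feature map
        x = self.act(self.dwconv(x))                  # local, per-channel mixing
        x = x.flatten(2).transpose(1, 2)              # back to tokens
        return self.fc2(x)


tokens = torch.rand(2, 64 * 64, 96)
print(LocallyPerceptiveFFN()(tokens, 64, 64).shape)   # torch.Size([2, 4096, 96])
```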
arXiv Detail & Related papers (2023-11-16T06:02:03Z) - Dense Transformer based Enhanced Coding Network for Unsupervised Metal
Artifact Reduction [27.800525536048944]
We propose a novel Dense Transformer based Enhanced Coding Network (DTEC-Net) for unsupervised metal artifact reduction.
Specifically, we introduce a Hierarchical Disentangling, supported by the high-order dense process, and transformer to obtain densely encoded sequences.
Experiments and model discussions illustrate DTEC-Net's effectiveness, which outperforms the previous state-of-the-art methods on a benchmark dataset.
arXiv Detail & Related papers (2023-07-24T11:58:58Z) - Deep learning network to correct axial and coronal eye motion in 3D OCT
retinal imaging [65.47834983591957]
We propose deep learning based neural networks to correct axial and coronal motion artifacts in OCT based on a single scan.
The experimental result shows that the proposed method can effectively correct motion artifacts and achieve smaller error than other methods.
arXiv Detail & Related papers (2023-05-27T03:55:19Z) - Orientation-Shared Convolution Representation for CT Metal Artifact
Learning [63.67718355820655]
During X-ray computed tomography (CT) scanning, metallic implants carried by patients often lead to adverse artifacts.
Existing deep-learning-based methods have achieved promising reconstruction performance.
We propose an orientation-shared convolution representation strategy to adapt the physical prior structures of artifacts.
arXiv Detail & Related papers (2022-12-26T13:56:12Z)
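A crude way to picture orientation-shared weights is to reuse one learned kernel at several rotations; the toy sketch below (PyTorch, illustrative names, 90-degree rotations only) conveys the sharing idea, while the published strategy models orientation much more finely.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OrientationSharedConv(nn.Module):
    """Toy orientation-shared convolution: one learned base kernel is reused
    at four 90-degree rotations, so all orientations share the same weights.
    (Only an illustration of the sharing idea, not the paper's formulation.)"""

    def __init__(self, in_ch=1, out_ch=8, k=3):
        super().__init__()
        self.base = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.1)

    def forward(self, x):
        outs = []
        for r in range(4):                               # 0, 90, 180, 270 degrees
            w = torch.rot90(self.base, r, dims=(2, 3))   # rotated copy of the shared kernel
            outs.append(F.conv2d(x, w, padding=self.base.shape[-1] // 2))
        return torch.cat(outs, dim=1)                    # (B, 4*out_ch, H, W)


x = torch.rand(1, 1, 64, 64)
print(OrientationSharedConv()(x).shape)   # torch.Size([1, 32, 64, 64])
```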
arXiv Detail & Related papers (2022-12-26T13:56:12Z) - Metal Inpainting in CBCT Projections Using Score-based Generative Model [8.889876750552615]
In this work, a score-based generative model is trained on simulated knee projections, and the inpainted image is obtained by removing the noise in a conditional resampling process.
The results imply that the images inpainted by the score-based generative model retain more detailed information and achieve the lowest mean absolute error and the highest peak signal-to-noise ratio.
arXiv Detail & Related papers (2022-09-20T14:07:39Z)
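Both that work and the present paper report a masked mean absolute error and a peak signal-to-noise ratio; the helpers below show one conventional way to compute them (assumptions: MAE restricted to metal pixels, PSNR in dB over the full projection with its data range).

```python
import numpy as np

def mae_in_mask(pred, target, mask):
    """Mean absolute error evaluated only inside the metal region (mask == 1)."""
    mask = mask.astype(bool)
    return float(np.abs(pred[mask] - target[mask]).mean())

def psnr(pred, target, data_range=None):
    """Peak signal-to-noise ratio in dB over the whole projection."""
    if data_range is None:
        data_range = target.max() - target.min()
    mse = float(np.mean((pred - target) ** 2))
    return float(10.0 * np.log10((data_range ** 2) / mse))


target = np.random.rand(256, 256).astype(np.float32)
mask = np.random.rand(256, 256) > 0.97
pred = target + 0.01 * np.random.randn(256, 256).astype(np.float32)
print(mae_in_mask(pred, target, mask), psnr(pred, target))
```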
arXiv Detail & Related papers (2022-09-20T14:07:39Z) - Metal artifact correction in cone beam computed tomography using
synthetic X-ray data [0.0]
Metal implants inserted into the anatomy cause severe artifacts in reconstructed images.
One approach is to use a deep learning method to segment metals in the projections.
We show that simulations with a relatively small number of photons are suitable for the metal segmentation task.
arXiv Detail & Related papers (2022-08-17T13:31:38Z) - Adaptive Convolutional Dictionary Network for CT Metal Artifact
Reduction [62.691996239590125]
We propose an adaptive convolutional dictionary network (ACDNet) for metal artifact reduction.
Our ACDNet can automatically learn the prior for artifact-free CT images via training data and adaptively adjust the representation kernels for each input CT image.
Our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods.
arXiv Detail & Related papers (2022-05-16T06:49:36Z)
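Loosely, an adaptive convolutional dictionary mixes a shared bank of base kernels with coefficients predicted from each input image; the sketch below (PyTorch, illustrative names, not the released ACDNet) shows that per-image mixing.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveDictConv(nn.Module):
    """Loose sketch of an adaptive convolutional dictionary: a shared bank of
    base kernels is mixed with per-image coefficients predicted from the
    input, so the effective filters adapt to each CT image."""

    def __init__(self, n_atoms=16, out_ch=8, k=3):
        super().__init__()
        self.bank = nn.Parameter(torch.randn(n_atoms, 1, k, k) * 0.1)   # shared dictionary
        self.coef_net = nn.Sequential(                                   # image -> mixing weights
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(64, out_ch * n_atoms))
        self.out_ch, self.n_atoms, self.k = out_ch, n_atoms, k

    def forward(self, x):
        # x: (B, 1, H, W); each sample gets its own filters built from the bank
        b = x.shape[0]
        coef = self.coef_net(x).view(b, self.out_ch, self.n_atoms)          # (B, O, A)
        kernels = torch.einsum("boa,aikl->boikl", coef, self.bank)          # (B, O, 1, k, k)
        y = F.conv2d(x.reshape(1, b, *x.shape[2:]),                         # grouped-conv trick:
                     kernels.reshape(b * self.out_ch, 1, self.k, self.k),   # one group per sample
                     padding=self.k // 2, groups=b)
        return y.view(b, self.out_ch, *x.shape[2:])


x = torch.rand(2, 1, 64, 64)
print(AdaptiveDictConv()(x).shape)   # torch.Size([2, 8, 64, 64])
```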
arXiv Detail & Related papers (2022-05-16T06:49:36Z) - Simulation-Driven Training of Vision Transformers Enabling Metal
Segmentation in X-Ray Images [6.416928579907334]
This study proposes to generate simulated X-ray images based on CT data sets combined with computer aided design (CAD) implants.
The metal segmentation in CBCT projections serves as a prerequisite for metal artifact avoidance and reduction algorithms.
Our study indicates that CAD model-based data generation has high flexibility and could be a way to overcome the shortage of clinical data sampling and labelling.
arXiv Detail & Related papers (2022-03-17T09:58:58Z) - Metal Artifact Reduction in 2D CT Images with Self-supervised
Cross-domain Learning [30.977044473457]
We present a novel deep-learning-based approach for metal artifact reduction (MAR).
We train a neural network to restore the metal trace region values in the given metal-free sinogram.
We then design a novel FBP reconstruction loss to encourage the network to generate more accurate completion results.
arXiv Detail & Related papers (2021-09-28T04:40:57Z) - A Learning-based Method for Online Adjustment of C-arm Cone-Beam CT
Source Trajectories for Artifact Avoidance [47.345403652324514]
The reconstruction quality attainable with commercial CBCT devices is insufficient due to metal artifacts in the presence of pedicle screws.
We propose to adjust the C-arm CBCT source trajectory during the scan to optimize reconstruction quality with respect to a certain task.
We demonstrate that convolutional neural networks trained on realistically simulated data are capable of predicting quality metrics that enable scene-specific adjustments of the CBCT source trajectory.
arXiv Detail & Related papers (2020-08-14T09:23:50Z)