Fast and accurate sparse-view CBCT reconstruction using meta-learned
neural attenuation field and hash-encoding regularization
- URL: http://arxiv.org/abs/2312.01689v2
- Date: Wed, 17 Jan 2024 01:29:23 GMT
- Title: Fast and accurate sparse-view CBCT reconstruction using meta-learned
neural attenuation field and hash-encoding regularization
- Authors: Heejun Shin, Taehee Kim, Jongho Lee, Se Young Chun, Seungryung Cho,
Dongmyung Shin
- Abstract summary: Cone beam computed tomography (CBCT) is an emerging medical imaging technique to visualize the internal anatomical structures of patients.
Reducing the number of projections in a CBCT scan while preserving the quality of the reconstructed image is challenging.
We propose a fast and accurate sparse-view CBCT reconstruction (FACT) method to provide better reconstruction quality and faster optimization speed.
- Score: 13.01191568245715
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cone beam computed tomography (CBCT) is an emerging medical imaging technique
to visualize the internal anatomical structures of patients. During a CBCT
scan, several projection images of different angles or views are collectively
utilized to reconstruct a tomographic image. However, reducing the number of
projections in a CBCT scan while preserving the quality of a reconstructed
image is challenging due to the nature of an ill-posed inverse problem.
Recently, a neural attenuation field (NAF) method was proposed by adopting a
neural radiance field algorithm as a new way for CBCT reconstruction,
demonstrating fast and promising results using only 50 views. However,
decreasing the number of projections is still preferable to reduce potential
radiation exposure, and a faster reconstruction time is required considering a
typical scan time. In this work, we propose a fast and accurate sparse-view
CBCT reconstruction (FACT) method to provide better reconstruction quality and
faster optimization speed in the minimal number of view acquisitions ($<$ 50
views). In the FACT method, we meta-trained a neural network and a hash-encoder
using a few scans (= 15), and a new regularization technique is utilized to
reconstruct the details of an anatomical structure. In conclusion, we have
shown that the FACT method produced better and faster reconstruction results
than other conventional algorithms on CBCT scans of different body parts
(chest, head, and abdomen) and CT vendors (Siemens, Philips, and GE).
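The hash-encoder mentioned above maps 3D coordinates to learned feature vectors via a multiresolution spatial hash (the Instant-NGP family of encodings that neural attenuation fields build on). A minimal single-level sketch in pure Python; the hash primes, table size, and feature dimension here are illustrative assumptions, not the paper's settings:

```python
# Single-level spatial hash encoding, Instant-NGP style (illustrative constants).
PRIMES = (1, 2654435761, 805459861)   # per-axis hash primes
TABLE_SIZE = 2 ** 14                  # number of feature slots in the table
FEATURE_DIM = 2                       # features stored per slot

def hash_vertex(ix, iy, iz):
    """Map integer grid-vertex coordinates to a slot in the feature table."""
    h = (ix * PRIMES[0]) ^ (iy * PRIMES[1]) ^ (iz * PRIMES[2])
    return h % TABLE_SIZE

def encode(x, y, z, resolution, table):
    """Trilinearly interpolate learned features at a point in [0, 1]^3."""
    gx, gy, gz = x * resolution, y * resolution, z * resolution
    x0, y0, z0 = int(gx), int(gy), int(gz)
    fx, fy, fz = gx - x0, gy - y0, gz - z0
    out = [0.0] * FEATURE_DIM
    for dx in (0, 1):            # visit the 8 surrounding grid vertices
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((fx if dx else 1 - fx)
                     * (fy if dy else 1 - fy)
                     * (fz if dz else 1 - fz))
                feat = table[hash_vertex(x0 + dx, y0 + dy, z0 + dz)]
                for i in range(FEATURE_DIM):
                    out[i] += w * feat[i]
    return out

# Tiny demo with a feature table of small constants.
table = [[0.01 * (s % 7), 0.02] for s in range(TABLE_SIZE)]
features = encode(0.4, 0.5, 0.6, resolution=16, table=table)
print(len(features))  # 2
```

In practice each of several resolution levels has its own table, the concatenated features feed a small MLP that predicts attenuation, and the table entries are optimized by gradient descent; FACT additionally meta-learns these weights across a few scans.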
Related papers
- Intensity Field Decomposition for Tissue-Guided Neural Tomography [30.81166574148901]
This article introduces a novel sparse-view CBCT reconstruction method, which empowers the neural field with human tissue regularization.
Our approach, termed tissue-guided neural tomography (TNT), is motivated by the distinct intensity differences between bone and soft tissue in CBCT.
Our method achieves comparable reconstruction quality with fewer projections and faster convergence compared to state-of-the-art neural rendering based methods.
arXiv Detail & Related papers (2024-11-01T06:31:53Z)
- AC-IND: Sparse CT reconstruction based on attenuation coefficient estimation and implicit neural distribution [12.503822675024054]
Computed tomography (CT) reconstruction plays a crucial role in industrial nondestructive testing and medical diagnosis.
Sparse view CT reconstruction aims to reconstruct high-quality CT images while only using a small number of projections.
We introduce AC-IND, a self-supervised method based on Attenuation Coefficient Estimation and Implicit Neural Distribution.
arXiv Detail & Related papers (2024-09-11T10:34:41Z)
- Rotational Augmented Noise2Inverse for Low-dose Computed Tomography Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for low-dose CT (LDCT), in which ground truth is not required for training the convolutional neural network (CNN).
Numerical and experimental results show that the reconstruction accuracy of Noise2Inverse (N2I) degrades with sparse views, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality over a range of sampling angles.
arXiv Detail & Related papers (2023-12-19T22:40:51Z)
- Enhancing Low-dose CT Image Reconstruction by Integrating Supervised and Unsupervised Learning [13.17680480211064]
We propose a hybrid supervised-unsupervised learning framework for X-ray computed tomography (CT) image reconstruction.
Each proposed trained block consists of a deterministic model-based iterative reconstruction (MBIR) solver and a neural network.
We demonstrate the efficacy of this learned hybrid model for low-dose CT image reconstruction with limited training data.
arXiv Detail & Related papers (2023-11-19T20:23:59Z)
- Deep learning network to correct axial and coronal eye motion in 3D OCT retinal imaging [65.47834983591957]
We propose deep learning based neural networks to correct axial and coronal motion artifacts in OCT based on a single scan.
The experimental result shows that the proposed method can effectively correct motion artifacts and achieve smaller error than other methods.
arXiv Detail & Related papers (2023-05-27T03:55:19Z)
- SNAF: Sparse-view CBCT Reconstruction with Neural Attenuation Fields [71.84366290195487]
We propose SNAF for sparse-view CBCT reconstruction by learning the neural attenuation fields.
Our approach achieves superior performance in terms of high reconstruction quality (30+ PSNR) with only 20 input views.
arXiv Detail & Related papers (2022-11-30T14:51:14Z)
- REGAS: REspiratory-GAted Synthesis of Views for Multi-Phase CBCT Reconstruction from a single 3D CBCT Acquisition [75.64791080418162]
REGAS proposes a self-supervised method to synthesize the undersampled tomographic views and mitigate aliasing artifacts in reconstructed images.
To address the large memory cost of deep neural networks on high resolution 4D data, REGAS introduces a novel Ray Path Transformation (RPT) that allows for distributed, differentiable forward projections.
arXiv Detail & Related papers (2022-08-17T03:42:19Z)
- Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z)
- The Application of Convolutional Neural Networks for Tomographic Reconstruction of Hyperspectral Images [0.0]
A novel method utilizing convolutional neural networks (CNNs) is proposed to reconstruct hyperspectral cubes from computed tomography imaging spectrometer (CTIS) images.
CNNs deliver higher precision and shorter reconstruction time than a standard expectation-maximization algorithm.
arXiv Detail & Related papers (2021-08-30T18:11:08Z)
- Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
arXiv Detail & Related papers (2021-04-05T13:05:22Z)
- A computationally efficient reconstruction algorithm for circular cone-beam computed tomography using shallow neural networks [0.0]
We introduce the Neural Network Feldkamp-Davis-Kress (NN-FDK) algorithm.
It adds a machine learning component to the FDK algorithm to improve its reconstruction accuracy while maintaining its computational efficiency.
We show that the training time of an NN-FDK network is orders of magnitude lower than that of the considered deep neural networks, with only a slight reduction in reconstruction accuracy.
arXiv Detail & Related papers (2020-10-01T14:10:23Z)
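Several entries above report reconstruction quality in PSNR (e.g. SNAF's 30+ PSNR with only 20 input views). As a reference for that metric, a minimal sketch of how PSNR is computed from the mean squared error against a ground-truth image:

```python
import math

def psnr(reference, reconstruction, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two equal-sized images,
    given as flat lists of pixel values in [0, max_val]."""
    mse = sum((r - x) ** 2 for r, x in zip(reference, reconstruction)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy example: a reconstruction off by 0.01 everywhere -> MSE = 1e-4 -> 40 dB.
ref = [0.5] * 100
rec = [0.51] * 100
print(round(psnr(ref, rec), 1))  # 40.0
```

Because PSNR is a log of the inverse MSE, each 10 dB corresponds to a tenfold reduction in mean squared error, so the 30+ dB figures quoted above indicate per-pixel errors on the order of a few percent of the intensity range.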
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.