Feature-oriented Deep Learning Framework for Pulmonary Cone-beam CT
(CBCT) Enhancement with Multi-task Customized Perceptual Loss
- URL: http://arxiv.org/abs/2311.00412v1
- Date: Wed, 1 Nov 2023 10:09:01 GMT
- Title: Feature-oriented Deep Learning Framework for Pulmonary Cone-beam CT
(CBCT) Enhancement with Multi-task Customized Perceptual Loss
- Authors: Jiarui Zhu, Werxing Chen, Hongfei Sun, Shaohua Zhi, Jing Qin, Jing
Cai, Ge Ren
- Abstract summary: Cone-beam computed tomography (CBCT) is routinely collected during image-guided radiation therapy.
Recent deep learning-based CBCT enhancement methods have shown promising results in suppressing artifacts.
We propose a novel feature-oriented deep learning framework that translates low-quality CBCT images into high-quality CT-like imaging.
- Score: 9.59233136691378
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cone-beam computed tomography (CBCT) is routinely collected during
image-guided radiation therapy (IGRT) to provide updated patient anatomy
information for cancer treatments. However, CBCT images often suffer from
streaking artifacts and noise caused by under-sampled projections and
low-dose exposure, resulting in low clarity and information loss. While recent
deep learning-based CBCT enhancement methods have shown promising results in
suppressing artifacts, they have limited performance on preserving anatomical
details since conventional pixel-to-pixel loss functions are incapable of
describing detailed anatomy. To address this issue, we propose a novel
feature-oriented deep learning framework that translates low-quality CBCT
images into high-quality CT-like imaging via a multi-task customized
feature-to-feature perceptual loss function. The framework comprises two main
components: a multi-task learning feature-selection network (MTFS-Net) for
customizing the perceptual loss function; and a CBCT-to-CT translation network
guided by feature-to-feature perceptual loss, which uses advanced generative
models such as U-Net, GAN and CycleGAN. Our experiments showed that the
proposed framework can generate synthesized CT (sCT) images for the lung that
achieved a high similarity to CT images, with an average SSIM of 0.9869 and
an average PSNR of 39.9621 dB. The sCT images are also visually pleasing, with
effective artifact suppression, noise reduction, and preservation of
distinctive anatomical detail. Our experimental results indicate
that the proposed framework outperforms the state-of-the-art models for
pulmonary CBCT enhancement. This framework holds great promise for generating
high-quality anatomical imaging from CBCT that is suitable for various clinical
applications.
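The core idea above is to supervise the CBCT-to-CT translator with a feature-to-feature loss rather than a pixel-to-pixel one, and to report similarity via PSNR/SSIM. As a rough illustration only (not the authors' implementation), the sketch below compares feature maps from a stand-in extractor and computes PSNR; the feature extractor, images, and noise level are all synthetic assumptions.

```python
import numpy as np

def perceptual_loss(feat_sct, feat_ct):
    """Feature-to-feature perceptual loss: mean squared distance between
    feature maps of the synthesized CT (sCT) and the reference CT,
    instead of a pixel-to-pixel distance between the images themselves."""
    return float(np.mean((np.asarray(feat_sct) - np.asarray(feat_ct)) ** 2))

def psnr(img_a, img_b, data_range=1.0):
    """Peak signal-to-noise ratio in dB, one of the similarity metrics
    reported for the sCT images."""
    mse = np.mean((np.asarray(img_a) - np.asarray(img_b)) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

# Synthetic stand-ins: a "reference CT" slice in [0, 1] and an "sCT"
# perturbed by mild Gaussian noise (not real imaging data).
rng = np.random.default_rng(0)
ct = rng.random((64, 64))
sct = ct + rng.normal(0.0, 0.01, ct.shape)

print(round(psnr(sct, ct), 1))
```

In the paper's framework the feature maps would come from the MTFS-Net; here `perceptual_loss(ct, ct)` is simply zero, and the PSNR of the noisy stand-in lands near 40 dB.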
Related papers
- Improving Cone-Beam CT Image Quality with Knowledge Distillation-Enhanced Diffusion Model in Imbalanced Data Settings [6.157230849293829]
Daily cone-beam CT (CBCT) imaging, pivotal for therapy adjustment, falls short in tissue density accuracy.
We make maximal use of CBCT data acquired during therapy, complemented by sparse paired fan-beam CTs.
Our approach shows promise in generating high-quality CT images from CBCT scans in RT.
arXiv Detail & Related papers (2024-09-19T07:56:06Z)
- Deep Few-view High-resolution Photon-counting Extremity CT at Halved Dose for a Clinical Trial [8.393536317952085]
We propose a deep learning-based approach for PCCT image reconstruction at halved dose and doubled speed in a New Zealand clinical trial.
We present a patch-based volumetric refinement network to alleviate the GPU memory limitation, train network with synthetic data, and use model-based iterative refinement to bridge the gap between synthetic and real-world data.
arXiv Detail & Related papers (2024-03-19T00:07:48Z)
- Rotational Augmented Noise2Inverse for Low-dose Computed Tomography Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for LDCT, in which ground truth is not required for training the convolutional neural network (CNN).
Numerical and experimental results show that the reconstruction accuracy of N2I degrades with sparse views, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality across a range of sampling angles.
arXiv Detail & Related papers (2023-12-19T22:40:51Z)
- K-Space-Aware Cross-Modality Score for Synthesized Neuroimage Quality Assessment [71.27193056354741]
The problem of how to assess cross-modality medical image synthesis has been largely unexplored.
We propose a new metric K-CROSS to spur progress on this challenging problem.
K-CROSS uses a pre-trained multi-modality segmentation network to predict the lesion location.
arXiv Detail & Related papers (2023-07-10T01:26:48Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- Self-Attention Generative Adversarial Network for Iterative Reconstruction of CT Images [0.9208007322096533]
The aim of this study is to train a single neural network to reconstruct high-quality CT images from noisy or incomplete data.
The network includes a self-attention block to model long-range dependencies in the data.
Our approach is shown to have comparable overall performance to CIRCLE GAN, while outperforming the other two approaches.
arXiv Detail & Related papers (2021-12-23T19:20:38Z)
- InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual domain network, termed InDuDoNet+, into which CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge the prior observations into a prior network for our InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z)
- Total-Body Low-Dose CT Image Denoising using Prior Knowledge Transfer Technique with Contrastive Regularization Mechanism [4.998352078907441]
Low radiation dose may result in increased noise and artifacts, which greatly affect clinical diagnosis.
To obtain high-quality total-body low-dose CT (LDCT) images, previous deep-learning-based work has introduced various network architectures.
In this paper, we propose a novel intra-task knowledge transfer method that leverages knowledge distilled from normal-dose CT (NDCT) images.
arXiv Detail & Related papers (2021-12-01T06:46:38Z)
- CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran.
Our empirical results show that CyTran outperforms all competing methods.
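Cycle-consistent models like CyTran (and the CycleGAN variant in the main paper) rely on the constraint that translating an image to the other domain and back should recover the original. A minimal sketch of that loss, with fixed toy "generators" standing in for the learned translators (assumptions, not the paper's models):

```python
import numpy as np

def cycle_consistency_loss(x, forward, backward):
    """L1 cycle-consistency: mapping x to the other domain and back
    should recover the input, i.e. backward(forward(x)) ~= x."""
    return float(np.mean(np.abs(backward(forward(x)) - x)))

# Toy translators: a fixed intensity offset stands in for the learned
# non-contrast -> contrast generators of a CycleGAN-style model.
to_contrast = lambda x: x + 0.2
to_noncontrast = lambda x: x - 0.2

x = np.linspace(0.0, 1.0, 16)  # stand-in 1-D "image"
print(cycle_consistency_loss(x, to_contrast, to_noncontrast))
```

Because the toy translators are exact inverses, the loss is essentially zero; during training, minimizing this term pushes the two learned generators toward being inverses of each other without requiring paired scans.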
arXiv Detail & Related papers (2021-10-12T23:25:03Z)
- Prediction of low-keV monochromatic images from polyenergetic CT scans for improved automatic detection of pulmonary embolism [21.47219330040151]
We train convolutional neural networks that emulate the generation of monoenergetic (monoE) images from conventional single-energy CT acquisitions.
We expand on these methods through the use of a multi-task optimization approach, under which the networks achieve improved classification as well as generation results.
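The multi-task optimization mentioned above typically reduces to minimizing a weighted sum of the per-task losses, so that the shared network improves on both generation and classification at once. A minimal sketch (the weights are illustrative assumptions, not values from the paper):

```python
def multitask_loss(generation_loss, classification_loss,
                   w_gen=1.0, w_cls=0.5):
    """Weighted sum of the image-generation loss and the auxiliary
    classification loss, optimized jointly over shared parameters.
    The weights trade off the two objectives and are hypothetical here."""
    return w_gen * generation_loss + w_cls * classification_loss

print(multitask_loss(0.8, 0.4))  # 0.8 * 1.0 + 0.4 * 0.5 = 1.0
```

In practice the relative weights are tuned so that neither task dominates the shared gradients.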
arXiv Detail & Related papers (2021-02-02T11:42:31Z)
- Synergistic Learning of Lung Lobe Segmentation and Hierarchical Multi-Instance Classification for Automated Severity Assessment of COVID-19 in CT Images [61.862364277007934]
We propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images.
A multi-task deep network (called M$2$UNet) is then developed to assess the severity of COVID-19 patients.
Our M$2$UNet consists of a patch-level encoder, a segmentation sub-network for lung lobe segmentation, and a classification sub-network for severity assessment.
arXiv Detail & Related papers (2020-05-08T03:16:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.