Enhancing Super-Resolution Networks through Realistic Thick-Slice CT Simulation
- URL: http://arxiv.org/abs/2307.10182v3
- Date: Sun, 2 Jun 2024 15:47:23 GMT
- Title: Enhancing Super-Resolution Networks through Realistic Thick-Slice CT Simulation
- Authors: Zeyu Tang, Xiaodan Xing, Guang Yang
- Abstract summary: Deep learning-based Generative Models have the potential to convert low-resolution CT images into high-resolution counterparts without long acquisition times and increased radiation exposure in thin-slice CT imaging.
However, procuring appropriate training data for these Super-Resolution (SR) models is challenging.
Previous SR research has simulated thick-slice CT images from thin-slice CT images to create training pairs.
We introduce a simple yet realistic method to generate thick CT images from thin-slice CT images, facilitating the creation of training pairs for SR algorithms.
- Score: 4.43162303545687
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning-based Generative Models have the potential to convert low-resolution CT images into high-resolution counterparts without the long acquisition times and increased radiation exposure of thin-slice CT imaging. However, procuring appropriate training data for these Super-Resolution (SR) models is challenging. Previous SR research has simulated thick-slice CT images from thin-slice CT images to create training pairs. However, these methods either rely on simplistic interpolation techniques that lack realism or on sinogram reconstruction, which requires the release of raw data and complex reconstruction algorithms. Thus, we introduce a simple yet realistic method to generate thick CT images from thin-slice CT images, facilitating the creation of training pairs for SR algorithms. The training pairs produced by our method closely resemble real data distributions (PSNR=49.74 vs. 40.66, p$<$0.05). A multivariate Cox regression analysis involving thick-slice CT images with lung fibrosis revealed that only the radiomics features extracted using our method demonstrated a significant correlation with mortality (HR=1.19 and HR=1.14, p$<$0.005). This paper is the first to identify and address the challenge of generating appropriate paired training data for Deep Learning-based CT SR models, which enhances the efficacy and applicability of SR models in real-world scenarios.
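As a rough illustration of the simulation idea (a minimal sketch, not the authors' exact pipeline), a thick-slice volume can be approximated by averaging groups of adjacent thin slices along the z-axis, mimicking the partial volume effect, and resampling at the thick-slice spacing. The function name, uniform slice profile, and spacings below are illustrative assumptions.

```python
import numpy as np

def simulate_thick_slices(thin_vol, thin_spacing_mm=1.0, thick_mm=5.0):
    """Approximate thick-slice CT by averaging groups of adjacent thin slices.

    Simplified partial-volume model with a uniform slice profile; the paper's
    method may differ in profile shape and resampling details.
    """
    n = max(1, int(round(thick_mm / thin_spacing_mm)))  # thin slices per thick slice
    z = (thin_vol.shape[0] // n) * n                    # drop trailing slices that do not fill a group
    grouped = thin_vol[:z].reshape(-1, n, *thin_vol.shape[1:])
    return grouped.mean(axis=1)                         # average along z within each group

# Example: a 100-slice thin volume (1 mm) becomes a 20-slice thick volume (5 mm).
thin_vol = np.random.randn(100, 512, 512).astype(np.float32)
thick_vol = simulate_thick_slices(thin_vol)
print(thick_vol.shape)  # (20, 512, 512)
```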
Related papers
- Improving Cone-Beam CT Image Quality with Knowledge Distillation-Enhanced Diffusion Model in Imbalanced Data Settings [6.157230849293829]
Daily cone-beam CT (CBCT) imaging, pivotal for therapy adjustment, falls short in tissue density accuracy.
We make full use of the CBCT data acquired during therapy, complemented by sparse paired fan-beam CTs.
Our approach shows promise in generating high-quality CT images from CBCT scans in radiotherapy (RT).
arXiv Detail & Related papers (2024-09-19T07:56:06Z) - Low-Dose CT Image Reconstruction by Fine-Tuning a UNet Pretrained for Gaussian Denoising for the Downstream Task of Image Enhancement [3.7960472831772765]
Computed Tomography (CT) is a widely used medical imaging modality, and reconstruction from low-dose CT (LDCT) data is a challenging task.
In this paper, we propose a less complex two-stage method for the reconstruction of LDCT images.
The proposed method achieves a shared top ranking in the LoDoPaB-CT challenge and a first position with respect to the SSIM metric.
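A hedged sketch of the general two-stage recipe summarized above (the network, checkpoint path, and loss below are assumptions, not the paper's implementation): reconstruct the low-dose data with a classical method such as filtered back-projection (FBP), then fine-tune a network that was pretrained for Gaussian denoising to map the noisy reconstruction to the reference image.

```python
import torch
import torch.nn as nn

# Stand-in for a UNet pretrained on Gaussian denoising; any image-to-image
# network with pretrained weights could be plugged in here.
class TinyDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, x):
        return x + self.net(x)  # residual prediction

model = TinyDenoiser()
# model.load_state_dict(torch.load("gaussian_denoiser.pt"))  # hypothetical checkpoint
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# fbp_batch: FBP reconstructions of low-dose sinograms; gt_batch: reference images (placeholders here).
fbp_batch = torch.randn(4, 1, 362, 362)
gt_batch = torch.randn(4, 1, 362, 362)

for _ in range(10):                        # illustrative fine-tuning steps
    opt.zero_grad()
    loss = loss_fn(model(fbp_batch), gt_batch)
    loss.backward()
    opt.step()
```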
arXiv Detail & Related papers (2024-03-06T08:51:09Z) - Optimizing Sampling Patterns for Compressed Sensing MRI with Diffusion Generative Models [75.52575380824051]
We present a learning method to optimize sub-sampling patterns for compressed sensing multi-coil MRI.
We use a single-step reconstruction based on the posterior mean estimate given by the diffusion model and the MRI measurement process.
Our method requires as few as five training images to learn effective sampling patterns.
arXiv Detail & Related papers (2023-06-05T22:09:06Z) - Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold an iterative MGDUN algorithm into a model-guided deep unfolding network by taking the MRI observation matrix into account.
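MGDUN itself is specific to multi-contrast MRI SR, but the deep-unfolding pattern it builds on can be sketched generically: alternate a data-consistency gradient step involving the observation operator with a learned refinement module. The module below is a generic placeholder, and A/At stand for an assumed observation operator and its adjoint, not the paper's architecture.

```python
import torch
import torch.nn as nn

class UnrolledSR(nn.Module):
    """Generic unrolled reconstruction: x_{k+1} = CNN_k(x_k - step * A^T(A x_k - y))."""
    def __init__(self, n_iters=4):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))
        self.refiners = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1))
            for _ in range(n_iters)
        ])

    def forward(self, y, A, At):
        # A, At: callables for the (assumed) observation operator and its adjoint;
        # images are (B, 1, H, W) tensors.
        x = At(y)                                # crude initialization from the measurements
        for refine in self.refiners:
            grad = At(A(x) - y)                  # data-consistency gradient
            x = refine(x - self.step * grad)     # learned refinement per unrolled iteration
        return x
```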
arXiv Detail & Related papers (2022-09-15T03:58:30Z) - Deep Learning for Material Decomposition in Photon-Counting CT [0.5801044612920815]
We present a novel deep-learning solution for material decomposition in PCCT, based on an unrolled/unfolded iterative network.
Our approach outperforms a maximum likelihood estimation, a variational method, as well as a fully-learned network.
arXiv Detail & Related papers (2022-08-05T19:05:16Z) - Multi-Channel Convolutional Analysis Operator Learning for Dual-Energy CT Reconstruction [108.06731611196291]
We develop a multi-channel convolutional analysis operator learning (MCAOL) method to exploit common spatial features within attenuation images at different energies.
We propose an optimization method which jointly reconstructs the attenuation images at low and high energies with a mixed norm regularization on the sparse features.
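As a hedged illustration of the joint mixed-norm idea (not the MCAOL algorithm itself), a group-sparsity (L2,1) penalty can couple the sparse feature maps of the low- and high-energy images so that features are encouraged to activate at the same spatial locations; the function and shapes below are illustrative assumptions.

```python
import numpy as np

def mixed_l21_norm(feat_low, feat_high):
    """Group-sparsity (L2,1) penalty coupling low- and high-energy feature maps.

    feat_low, feat_high: arrays of shape (num_filters, H, W), e.g. outputs of
    convolutional analysis operators applied to the two attenuation images.
    """
    stacked = np.stack([feat_low, feat_high], axis=0)   # (2, F, H, W)
    group_mag = np.sqrt((stacked ** 2).sum(axis=0))     # L2 over the energy axis
    return group_mag.sum()                              # L1 over filters and pixels

# Example with random feature maps.
f_lo = np.random.randn(8, 64, 64)
f_hi = np.random.randn(8, 64, 64)
print(mixed_l21_norm(f_lo, f_hi))
```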
arXiv Detail & Related papers (2022-03-10T14:22:54Z) - Incremental Cross-view Mutual Distillation for Self-supervised Medical CT Synthesis [88.39466012709205]
This paper builds a novel medical slice synthesis method to increase the between-slice resolution.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z) - Image Synthesis for Data Augmentation in Medical CT using Deep Reinforcement Learning [31.677682150726383]
We show that our method bears high promise for generating novel and anatomically accurate high-resolution CT images in large and diverse quantities.
Our approach is specifically designed to work even with small image datasets, which is desirable given the limited amount of image data available to many researchers.
arXiv Detail & Related papers (2021-03-18T19:47:11Z) - A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z) - Self-Supervised Training For Low Dose CT Reconstruction [0.0]
This study defines a training scheme to use low-dose sinograms as their own training targets.
We apply the self-supervision principle in the projection domain where the noise is element-wise independent.
We demonstrate that our method outperforms both conventional and compressed sensing based iterative reconstruction methods.
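A minimal sketch of one common way to realize this kind of projection-domain self-supervision (a blind-spot masking loss in the spirit of Noise2Self/Noise2Void; the paper's exact scheme may differ): mask a random subset of sinogram elements, predict them from the unmasked ones, and compute the loss only at the masked positions, which is valid when the noise is element-wise independent.

```python
import torch
import torch.nn as nn

denoiser = nn.Sequential(            # placeholder projection-domain network
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)

sino = torch.randn(4, 1, 180, 367)   # noisy low-dose sinograms (placeholder data)

for _ in range(10):                  # illustrative training steps
    mask = (torch.rand_like(sino) < 0.05).float()   # hide ~5% of sinogram elements
    inp = sino * (1 - mask)                         # masked input
    pred = denoiser(inp)
    # Loss only on masked elements: the network cannot simply copy the noise it
    # must predict, which works when the noise is element-wise independent.
    loss = ((pred - sino) ** 2 * mask).sum() / mask.sum().clamp(min=1)
    opt.zero_grad()
    loss.backward()
    opt.step()
```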
arXiv Detail & Related papers (2020-10-25T22:02:14Z) - Data Consistent CT Reconstruction from Insufficient Data with Learned Prior Images [70.13735569016752]
We investigate the robustness of deep learning in CT image reconstruction by showing false negative and false positive lesion cases.
We propose a data consistent reconstruction (DCR) method to improve their image quality, which combines the advantages of compressed sensing and deep learning.
The efficacy of the proposed method is demonstrated in cone-beam CT with truncated data, limited-angle data and sparse-view data, respectively.
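A hedged sketch of the data-consistency idea (the actual DCR formulation combines compressed-sensing regularization with the learned prior and is more elaborate): starting from a deep-learning reconstruction used as a prior image, take gradient steps on the measured-data fidelity while penalizing deviation from the prior, so the result fits the measurements without drifting far from the learned estimate. The operators A/At and parameters below are assumptions.

```python
import numpy as np

def data_consistent_refine(x_prior, y, A, At, lam=0.1, step=1e-3, n_iters=100):
    """Minimize 0.5*||A x - y||^2 + 0.5*lam*||x - x_prior||^2 by gradient descent.

    x_prior: deep-learning reconstruction used as the prior image (numpy array).
    y:       measured (possibly truncated / sparse-view) projection data.
    A, At:   forward projector and its adjoint (callables).
    """
    x = x_prior.copy()
    for _ in range(n_iters):
        grad = At(A(x) - y) + lam * (x - x_prior)   # data fidelity + prior proximity
        x -= step * grad
    return x
```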
arXiv Detail & Related papers (2020-05-20T13:30:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.