X-ray Photon-Counting Data Correction through Deep Learning
- URL: http://arxiv.org/abs/2007.03119v1
- Date: Mon, 6 Jul 2020 23:29:16 GMT
- Title: X-ray Photon-Counting Data Correction through Deep Learning
- Authors: Mengzhou Li, David S. Rundle and Ge Wang
- Abstract summary: We propose a deep neural network based PCD data correction approach.
In this work, we first establish a complete simulation model incorporating the charge splitting and pulse pile-up effects.
The simulated PCD data and the ground truth counterparts are then fed to a specially designed deep adversarial network for PCD data correction.
- Score: 3.535670189300134
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: X-ray photon-counting detectors (PCDs) have drawn increasing attention in
recent years due to their low noise and energy discrimination capabilities. The
energy/spectral dimension associated with PCDs potentially brings great benefits
such as material decomposition, beam-hardening and metal artifact reduction, and
low-dose CT imaging. However, X-ray PCDs are currently limited by several technical
issues, particularly charge splitting (including charge sharing and K-shell
fluorescence re-absorption or escape) and pulse pile-up effects, which distort the
energy spectrum and compromise data quality. Correcting raw PCD measurements through
hardware improvement and analytic modeling is rather expensive and complicated.
Hence, we propose a deep-neural-network-based PCD data correction approach that
directly maps imperfect data to ideal data in a supervised learning mode. In this
work, we first establish a complete simulation model incorporating the charge
splitting and pulse pile-up effects. The simulated PCD data and their ground-truth
counterparts are then fed to a specially designed deep adversarial network for PCD
data correction. Next, the trained network is used to correct separately generated
PCD data. The test results demonstrate that the trained network successfully
recovers the ideal spectrum from the distorted measurements within $\pm6\%$ relative
error. Significant data and image fidelity improvements are clearly observed in both
the projection and reconstruction domains.
Related papers
- End-to-End Model-based Deep Learning for Dual-Energy Computed Tomography Material Decomposition [53.14236375171593]
We propose a deep learning procedure called End-to-End Material Decomposition (E2E-DEcomp) for quantitative material decomposition.
We show the effectiveness of the proposed direct E2E-DEcomp method on the AAPM spectral CT dataset.
arXiv Detail & Related papers (2024-06-01T16:20:59Z)
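The E2E-DEcomp entry above concerns dual-energy material decomposition. As a point of reference, here is a minimal sketch of the classical linearised two-material inversion that such learned methods extend; the attenuation coefficients and measurements are illustrative numbers, not values from the paper or standard tables.

```python
import numpy as np

# Illustrative (not tabulated) mass attenuation coefficients [cm^2/g]:
# rows = energy channel (low, high), columns = basis material (water, bone).
A = np.array([[0.40, 2.00],
              [0.25, 0.70]])

# Measured attenuation line integrals for one detector pixel at the two energies.
p = np.array([1.10, 0.55])

# Linearised two-material decomposition: solve A @ x = p for the material
# path densities x [g/cm^2]. Learned, model-based methods such as E2E-DEcomp
# replace or regularise this noise-sensitive per-pixel inversion with a network.
x = np.linalg.solve(A, p)
print("water-equivalent: %.2f g/cm^2, bone-equivalent: %.2f g/cm^2" % (x[0], x[1]))
```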
- Partitioned Hankel-based Diffusion Models for Few-shot Low-dose CT Reconstruction [10.158713017984345]
We propose a few-shot low-dose CT reconstruction method using Partitioned Hankel-based Diffusion (PHD) models.
In the iterative reconstruction stage, an iterative differential equation solver is employed along with data consistency constraints to update the acquired projection data.
The results approximate those of normal-dose counterparts, validating the PHD model as an effective and practical approach for reducing artifacts and noise while preserving image quality.
arXiv Detail & Related papers (2024-05-27T13:44:53Z)
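The PHD summary mentions an iterative solver combined with data-consistency constraints. The sketch below shows only the generic pattern of alternating a placeholder prior update with a gradient step toward the measured data, using a random matrix instead of a CT projector and no actual diffusion model; it illustrates the mechanism, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear forward model as a stand-in for a sparse-view CT projector.
n, m = 64, 32                                   # unknowns, measurements
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = rng.random(n)
y = A @ x_true                                  # acquired projection data

def prior_step(x):
    """Placeholder for the learned generative/denoising update of a diffusion
    model (here just mild shrinkage toward the mean, purely illustrative)."""
    return 0.9 * x + 0.1 * x.mean()

step = 1.0 / np.linalg.norm(A, 2) ** 2          # safe gradient step size
x = np.zeros(n)
for _ in range(300):
    x = prior_step(x)                           # prior / generative update
    x = x - step * A.T @ (A @ x - y)            # data-consistency gradient step

print("relative data residual:",
      np.linalg.norm(A @ x - y) / np.linalg.norm(y))
```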
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicon aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- Deep Few-view High-resolution Photon-counting Extremity CT at Halved Dose for a Clinical Trial [8.393536317952085]
We propose a deep learning-based approach for PCCT image reconstruction at halved dose and doubled speed in a New Zealand clinical trial.
We present a patch-based volumetric refinement network to alleviate the GPU memory limitation, train the network with synthetic data, and use model-based iterative refinement to bridge the gap between synthetic and real-world data.
arXiv Detail & Related papers (2024-03-19T00:07:48Z)
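The entry above attributes its GPU memory savings to a patch-based volumetric refinement network. Below is a generic sketch of patch-wise volumetric processing with overlap blending; the refine function is an identity placeholder for a network, and the patch and overlap sizes are arbitrary choices, not the paper's configuration.

```python
import numpy as np

def refine(patch):
    """Stand-in for the volumetric refinement network (identity here)."""
    return patch

def patchwise_apply(volume, patch=32, overlap=8):
    """Run `refine` on overlapping 3-D patches and average the overlaps,
    so only one patch at a time would need to fit in GPU memory."""
    out = np.zeros_like(volume, dtype=float)
    weight = np.zeros_like(volume, dtype=float)
    step = patch - overlap
    for z in range(0, max(volume.shape[0] - overlap, 1), step):
        for y in range(0, max(volume.shape[1] - overlap, 1), step):
            for x in range(0, max(volume.shape[2] - overlap, 1), step):
                sl = (slice(z, z + patch), slice(y, y + patch), slice(x, x + patch))
                out[sl] += refine(volume[sl])
                weight[sl] += 1.0
    return out / weight

vol = np.random.rand(96, 96, 96).astype(np.float32)
restored = patchwise_apply(vol)
print(np.allclose(restored, vol))   # identity refine -> exact reconstruction
```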
- ArSDM: Colonoscopy Images Synthesis with Adaptive Refinement Semantic Diffusion Models [69.9178140563928]
Colonoscopy analysis is essential for assisting clinical diagnosis and treatment.
The scarcity of annotated data limits the effectiveness and generalization of existing methods.
We propose an Adaptive Refinement Semantic Diffusion Model (ArSDM) to generate colonoscopy images that benefit the downstream tasks.
arXiv Detail & Related papers (2023-09-03T07:55:46Z)
- Unsupervised denoising for sparse multi-spectral computed tomography [2.969056717104372]
We investigate the suitability of learning-based improvements to the challenging task of obtaining high-quality reconstructions from sparse measurements for a 64-channel PCD-CT.
We propose an unsupervised denoising and artefact removal approach by exploiting different filter functions in the reconstruction and an explicit coupling of spectral channels with the nuclear norm.
arXiv Detail & Related papers (2022-11-02T14:36:24Z)
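The denoising entry above couples spectral channels through the nuclear norm. A minimal sketch of the core operation, singular-value soft-thresholding (the proximal operator of the nuclear norm) applied to a channels-by-pixels matrix, is shown on synthetic low-rank data; the channel count, noise level and threshold are arbitrary, and the paper's reconstruction filters are not modelled.

```python
import numpy as np

rng = np.random.default_rng(0)

def nuclear_prox(X, lam):
    """Proximal operator of lam * ||X||_*: singular-value soft-thresholding.
    Rows = spectral channels, columns = flattened pixels, so shrinking the
    singular values couples the channels through a low-rank model."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - lam, 0.0)) @ Vt

channels, pixels = 64, 64 * 64
latent = rng.standard_normal((3, pixels))            # a few latent "materials"
clean = rng.standard_normal((channels, 3)) @ latent  # low-rank spectral data
noisy = clean + 0.5 * rng.standard_normal((channels, pixels))

denoised = nuclear_prox(noisy, lam=30.0)
rel = lambda X: np.linalg.norm(X - clean) / np.linalg.norm(clean)
print("relative error before: %.3f  after: %.3f" % (rel(noisy), rel(denoised)))
```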
- Self-Attention Generative Adversarial Network for Iterative Reconstruction of CT Images [0.9208007322096533]
The aim of this study is to train a single neural network to reconstruct high-quality CT images from noisy or incomplete data.
The network includes a self-attention block to model long-range dependencies in the data.
Our approach is shown to have comparable overall performance to CIRCLE GAN, while outperforming the other two approaches.
arXiv Detail & Related papers (2021-12-23T19:20:38Z)
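The summary above highlights a self-attention block for modelling long-range dependencies. Here is a bare-bones, NumPy-only single-head self-attention over a flattened feature map with a residual scaling factor, roughly in the style popularised by SAGAN; the projection weights are random placeholders and no GAN or reconstruction pipeline is included.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv, gamma=0.1):
    """Single-head self-attention over a flattened feature map: every spatial
    position attends to every other, which is how such blocks capture
    long-range dependencies. x: (N, C) with N = H*W positions, C = channels."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[1]))   # (N, N) attention map
    return x + gamma * (attn @ v)                   # residual connection

rng = np.random.default_rng(0)
H = W = 16
C = 8
x = rng.standard_normal((H * W, C))
Wq, Wk, Wv = (rng.standard_normal((C, C)) * 0.1 for _ in range(3))
y = self_attention(x, Wq, Wk, Wv)
print(y.shape)   # (256, 8)
```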
- Data augmentation for deep learning based accelerated MRI reconstruction with limited data [46.44703053411933]
Deep neural networks have emerged as very successful tools for image restoration and reconstruction tasks.
To achieve state-of-the-art performance, training on large and diverse sets of images is considered critical.
We propose a pipeline for data augmentation for accelerated MRI reconstruction and study its effectiveness at reducing the required training data.
arXiv Detail & Related papers (2021-06-28T19:08:46Z)
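The entry above proposes augmentation tailored to accelerated MRI reconstruction. The sketch below illustrates one common recipe for this setting, augmenting in the image domain and then re-simulating the undersampled k-space so input/target pairs stay physically consistent; the transforms, sampling mask and image are all placeholders, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_pair(image, mask):
    """Generate an extra (undersampled k-space, target image) training pair by
    augmenting in the image domain and re-simulating the measurement."""
    # Random horizontal flip and 90-degree rotation in image space.
    if rng.random() < 0.5:
        image = image[:, ::-1]
    image = np.rot90(image, k=int(rng.integers(4)))
    # Re-simulate acquisition: Fourier transform, then apply the sampling mask.
    kspace = np.fft.fft2(image)
    return kspace * mask, image

image = rng.random((128, 128))                      # stand-in fully sampled image
mask = (rng.random((1, 128)) < 0.3).astype(float)   # undersampled k-space columns
kspace_u, target = augment_pair(image, mask)
print(kspace_u.shape, target.shape)
```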
- Scatter Correction in X-ray CT by Physics-Inspired Deep Learning [26.549671705231145]
A fundamental problem in X-ray Computed Tomography (CT) is the scatter due to interaction of photons with the imaged object.
Scatter correction methods can be divided into two categories: hardware-based; and software-based.
In this work, two novel physics-inspired deep-learning-based methods, PhILSCAT and OV-PhILSCAT, are proposed.
arXiv Detail & Related papers (2021-03-21T22:51:20Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
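The entry above names K-nearest neighbor smoothing of predictions. Below is a generic, hypothetical version that blends each sample's multi-label probabilities with those of its nearest neighbours in feature space; the distance metric, k, blending weight and 14-class setup are assumptions, and the many-to-one distribution learning component is not modelled.

```python
import numpy as np

def knn_smooth(features, probs, k=5, alpha=0.5):
    """Blend each sample's predicted probabilities with the mean prediction
    of its k nearest neighbours in feature space (k and alpha are arbitrary)."""
    # Pairwise Euclidean distances between feature vectors.
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude the sample itself
    nn = np.argsort(d, axis=1)[:, :k]           # indices of k nearest neighbours
    neighbour_mean = probs[nn].mean(axis=1)
    return alpha * probs + (1 - alpha) * neighbour_mean

rng = np.random.default_rng(0)
n, d, classes = 200, 32, 14                     # e.g. 14 thoracic findings
features = rng.standard_normal((n, d))
logits = rng.standard_normal((n, classes))
probs = 1.0 / (1.0 + np.exp(-logits))           # multi-label sigmoids
smoothed = knn_smooth(features, probs)
print(smoothed.shape)                           # (200, 14)
```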
- Data Consistent CT Reconstruction from Insufficient Data with Learned Prior Images [70.13735569016752]
We investigate the robustness of deep learning in CT image reconstruction by showing false negative and false positive lesion cases.
We propose a data consistent reconstruction (DCR) method to improve their image quality, which combines the advantages of compressed sensing and deep learning.
The efficacy of the proposed method is demonstrated in cone-beam CT with truncated data, limited-angle data and sparse-view data, respectively.
arXiv Detail & Related papers (2020-05-20T13:30:49Z)
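The DCR entry combines a learned prior image with consistency to the measured data. A minimal sketch of that idea as a regularised least-squares problem follows, with a random matrix standing in for the incomplete CT system and a perturbed ground truth standing in for the deep-learning prior; it is not the paper's DCR algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a random matrix stands in for an insufficient-data CT system
# (far fewer measurements than unknowns), and a perturbed ground truth
# stands in for the deep-learning prior image.
n, m = 256, 64
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = rng.random(n)
y = A @ x_true                                   # measured data (noise-free)
x_prior = x_true + 0.1 * rng.standard_normal(n)  # imperfect learned prior

# Data-consistent reconstruction idea: stay close to the prior image while
# fitting the measurements, i.e. min ||A x - y||^2 + lam * ||x - x_prior||^2,
# solved here in closed form via the normal equations.
lam = 0.1
x_dcr = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y + lam * x_prior)

rel = lambda x: np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("prior error: %.3f  data-consistent error: %.3f" % (rel(x_prior), rel(x_dcr)))
```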
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.