SS-CTML: Self-Supervised Cross-Task Mutual Learning for CT Image Reconstruction
- URL: http://arxiv.org/abs/2501.01456v1
- Date: Tue, 31 Dec 2024 04:32:46 GMT
- Title: SS-CTML: Self-Supervised Cross-Task Mutual Learning for CT Image Reconstruction
- Authors: Gaofeng Chen, Yaoduo Zhang, Li Huang, Pengfei Wang, Wenyu Zhang, Dong Zeng, Jianhua Ma, Ji He
- Abstract summary: Supervised deep-learning (SDL) techniques with paired training datasets have been widely studied for X-ray computed tomography (CT) image reconstruction.
We propose a self-supervised cross-task mutual learning (SS-CTML) framework for CT image reconstruction.
- Score: 14.197733437855169
- Abstract: Supervised deep-learning (SDL) techniques with paired training datasets have been widely studied for X-ray computed tomography (CT) image reconstruction. However, because paired training datasets are difficult to obtain in clinical routine, SDL methods remain far from common use in clinical practice. In recent years, self-supervised deep-learning (SSDL) techniques have shown great potential for CT image reconstruction. In this work, we propose a self-supervised cross-task mutual learning (SS-CTML) framework for CT image reconstruction. Specifically, a sparse-view sinogram and a limited-view sinogram are first extracted from a full-view scanned sinogram, yielding three individual reconstruction tasks: full-view CT (FVCT) reconstruction, sparse-view CT (SVCT) reconstruction, and limited-view CT (LVCT) reconstruction. Three neural networks are then constructed, one for each task. Because the ultimate goal of all three tasks is to reconstruct high-quality CT images, we construct a set of cross-task mutual learning objectives through which the three networks can be optimized in a self-supervised manner by learning from each other. Clinical datasets are adopted to evaluate the effectiveness of the proposed framework. Experimental results demonstrate that the SS-CTML framework achieves promising CT image reconstruction performance in terms of both quantitative and qualitative measurements.
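The splitting step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the array shapes, the sparse-view downsampling factor, the limited-view angular range, and the toy pairwise-consistency loss standing in for the cross-task mutual learning objectives are all assumptions.

```python
import numpy as np

# Assumed full-view sinogram: 360 projection angles x 512 detector bins.
rng = np.random.default_rng(0)
full_view = rng.random((360, 512))

def extract_sparse_view(sino: np.ndarray, factor: int = 4) -> np.ndarray:
    """SVCT input: keep every `factor`-th projection angle."""
    return sino[::factor]

def extract_limited_view(sino: np.ndarray, num_angles: int = 120) -> np.ndarray:
    """LVCT input: keep only a contiguous angular range."""
    return sino[:num_angles]

def mutual_consistency_loss(recons: list) -> float:
    """Toy stand-in for the cross-task objectives: mean pairwise MSE
    between the images reconstructed by the three networks, encouraging
    them to agree on the final high-quality image."""
    losses = [
        float(np.mean((a - b) ** 2))
        for i, a in enumerate(recons)
        for b in recons[i + 1:]
    ]
    return sum(losses) / len(losses)

sv_sino = extract_sparse_view(full_view)   # shape (90, 512)
lv_sino = extract_limited_view(full_view)  # shape (120, 512)
```

In the actual framework each of the three sinograms feeds its own reconstruction network, and the mutual learning objectives couple the networks' outputs so that no paired ground-truth images are needed.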
Related papers
- Abnormality-Driven Representation Learning for Radiology Imaging [0.8321462983924758]
We introduce lesion-enhanced contrastive learning (LeCL), a novel approach to obtain visual representations driven by abnormalities in 2D axial slices across different locations of the CT scans.
We evaluate our approach across three clinical tasks: tumor lesion location, lung disease detection, and patient staging, benchmarking against four state-of-the-art foundation models.
arXiv Detail & Related papers (2024-11-25T13:53:26Z) - CoCPF: Coordinate-based Continuous Projection Field for Ill-Posed Inverse Problem in Imaging [78.734927709231]
Sparse-view computed tomography (SVCT) reconstruction aims to acquire CT images based on sparsely-sampled measurements.
Due to ill-posedness, implicit neural representation (INR) techniques may leave considerable "holes" (i.e., unmodeled spaces) in their fields, leading to sub-optimal results.
We propose the Coordinate-based Continuous Projection Field (CoCPF), which aims to build hole-free representation fields for SVCT reconstruction.
arXiv Detail & Related papers (2024-06-21T08:38:30Z) - C^2RV: Cross-Regional and Cross-View Learning for Sparse-View CBCT Reconstruction [17.54830070112685]
Cone beam computed tomography (CBCT) is an important imaging technology widely used in medical scenarios.
CBCT reconstruction is more challenging due to the increased dimensionality caused by the measurement process based on cone-shaped X-ray beams.
We propose C2RV by leveraging explicit multi-scale volumetric representations to enable cross-regional learning in the 3D space.
arXiv Detail & Related papers (2024-06-06T09:37:56Z) - Enhancing Low-dose CT Image Reconstruction by Integrating Supervised and Unsupervised Learning [13.17680480211064]
We propose a hybrid supervised-unsupervised learning framework for X-ray computed tomography (CT) image reconstruction.
Each trained block in the proposed framework consists of a deterministic MBIR solver and a neural network.
We demonstrate the efficacy of this learned hybrid model for low-dose CT image reconstruction with limited training data.
arXiv Detail & Related papers (2023-11-19T20:23:59Z) - Towards Head Computed Tomography Image Reconstruction Standardization with Deep Learning Assisted Automatic Detection [5.288684776927016]
Three-dimensional (3D) reconstruction of head Computed Tomography (CT) images elucidates the intricate spatial relationships of tissue structures.
However, securing an optimal head CT scan without deviation is challenging in clinical settings, owing to poor positioning by technicians, patients' physical constraints, or CT scanner tilt-angle restrictions.
We propose an efficient automatic 3D reconstruction method for head CT images that improves accuracy and repeatability while reducing manual intervention.
arXiv Detail & Related papers (2023-07-31T06:58:49Z) - Multi-View Vertebra Localization and Identification from CT Images [57.56509107412658]
We propose a multi-view vertebra localization and identification method for CT images.
We convert the 3D problem into a 2D localization and identification task on different views.
Our method naturally learns multi-view global information.
arXiv Detail & Related papers (2023-07-24T14:43:07Z) - Deep learning network to correct axial and coronal eye motion in 3D OCT retinal imaging [65.47834983591957]
We propose deep learning based neural networks to correct axial and coronal motion artifacts in OCT based on a single scan.
The experimental result shows that the proposed method can effectively correct motion artifacts and achieve smaller error than other methods.
arXiv Detail & Related papers (2023-05-27T03:55:19Z) - Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
We propose a novel Attentive Symmetric Auto-encoder based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks.
In the pre-training stage, the proposed auto-encoder pays more attention to reconstructing the informative patches according to gradient metrics.
Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models.
arXiv Detail & Related papers (2022-09-19T09:43:19Z) - 3D helical CT Reconstruction with a Memory Efficient Learned Primal-Dual Architecture [1.3518297878940662]
This paper modifies a domain adapted neural network architecture, the Learned Primal-Dual (LPD), so it can be trained and applied to reconstruction in this setting.
It is the first to apply an unrolled deep learning architecture for reconstruction on full-sized clinical data.
arXiv Detail & Related papers (2022-05-24T10:32:32Z) - InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge the prior observations into a prior network for our InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z) - A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.