Unsupervised Low-dose CT Reconstruction with One-way Conditional Normalizing Flows
- URL: http://arxiv.org/abs/2410.17543v1
- Date: Wed, 23 Oct 2024 03:47:29 GMT
- Title: Unsupervised Low-dose CT Reconstruction with One-way Conditional Normalizing Flows
- Authors: Ran An, Ke Chen, Hongwei Li
- Abstract summary: We propose a novel CNFs-based unsupervised LDCT iterative reconstruction algorithm.
It employs strict one-way transformation when performing alternating optimization in the dual spaces.
Experiments on different datasets suggest that the performance of the proposed algorithm could surpass some state-of-the-art unsupervised and even supervised methods.
- Score: 7.727981333251735
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep-learning methods have shown promising performance for low-dose computed tomography (LDCT) reconstruction. However, supervised methods face the problem of lacking labeled data in clinical scenarios, and the CNN-based unsupervised denoising methods would cause excessive smoothing in the reconstructed image. Recently, the normalizing flows (NFs) based methods have shown advantages in producing detail-rich images and avoiding over-smoothing, however, there are still issues: (1) Although the alternating optimization in the data and latent space can well utilize the regularization and generation capabilities of NFs, the current two-way transformation strategy of noisy images and latent variables would cause detail loss and secondary artifacts; and (2) Training NFs on high-resolution CT images is hard due to huge computation. Though using conditional normalizing flows (CNFs) to learn conditional probability can reduce the computational burden, current methods require labeled data for conditionalization, and the unsupervised CNFs-based LDCT reconstruction remains a problem. To tackle these problems, we propose a novel CNFs-based unsupervised LDCT iterative reconstruction algorithm. It employs strict one-way transformation when performing alternating optimization in the dual spaces, thus effectively avoiding the problems of detail loss and secondary artifacts. By proposing a novel unsupervised conditionalization strategy, we train CNFs on high-resolution CT images, thus achieving fast and high-quality unsupervised reconstruction. Experiments on different datasets suggest that the performance of the proposed algorithm could surpass some state-of-the-art unsupervised and even supervised methods.
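To make the "strict one-way transformation" concrete, here is a minimal sketch (not the authors' implementation) of a single conditional affine-coupling layer, the building block of CNFs, applied only in the forward direction. The weights, dimensions, and conditioning vector are illustrative assumptions.

```python
import numpy as np

# A conditional affine-coupling layer used strictly one-way (x -> z).
# The condition vector c (e.g. features of the noisy measurement)
# modulates the scale/shift applied to half of the input.

rng = np.random.default_rng(0)

D = 8                                            # toy flattened-patch dimension
W_s = rng.normal(0, 0.1, (D // 2 + D, D // 2))   # hypothetical log-scale weights
W_t = rng.normal(0, 0.1, (D // 2 + D, D // 2))   # hypothetical shift weights

def coupling_forward(x, c):
    """One-way transform x -> z under condition c; returns z and log|det J|."""
    x1, x2 = x[: D // 2], x[D // 2 :]
    h = np.concatenate([x1, c])      # conditioning: unchanged half + condition
    s = np.tanh(h @ W_s)             # bounded log-scale for numerical stability
    t = h @ W_t                      # shift
    z2 = x2 * np.exp(s) + t
    z = np.concatenate([x1, z2])
    return z, s.sum()                # log-det of the Jacobian is sum of s

x = rng.normal(size=D)
c = rng.normal(size=D)               # condition derived from the noisy image
z, logdet = coupling_forward(x, c)
```

In an actual CNF, many such layers are stacked; the point of the one-way strategy is that the inverse mapping is never applied to noisy inputs, which is what the paper credits with avoiding detail loss and secondary artifacts.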
Related papers
- Resolution-Independent Neural Operators for Multi-Rate Sparse-View CT [67.14700058302016]
Deep learning methods achieve high-fidelity reconstructions but often overfit to a fixed acquisition setup. We propose the Computed Tomography neural Operator (CTO), a unified CT reconstruction framework that extends to continuous function space. CTO enables consistent multi-sampling-rate and cross-resolution performance, with on average >4dB PSNR gain over CNNs.
arXiv Detail & Related papers (2025-12-13T08:31:46Z) - Tunable-Generalization Diffusion Powered by Self-Supervised Contextual Sub-Data for Low-Dose CT Reconstruction [5.107409624991683]
TUnable-geneRalizatioN Diffusion (TurnDiff) is powered by self-supervised contextual sub-data for low-dose CT reconstruction. TurnDiff consistently outperforms state-of-the-art methods in both reconstruction and generalization.
arXiv Detail & Related papers (2025-09-28T13:50:29Z) - Diffusion Models for Solving Inverse Problems via Posterior Sampling with Piecewise Guidance [52.705112811734566]
A novel diffusion-based framework is introduced for solving inverse problems using a piecewise guidance scheme. The proposed method is problem-agnostic and readily adaptable to a variety of inverse problems. The framework achieves a 25% reduction in inference time for inpainting with both random and center masks, and 23% and 24% for 4x and 8x super-resolution tasks.
arXiv Detail & Related papers (2025-07-22T19:35:14Z) - SING: Semantic Image Communications using Null-Space and INN-Guided Diffusion Models [52.40011613324083]
Joint source-channel coding systems (DeepJSCC) have recently demonstrated remarkable performance in wireless image transmission.
Existing methods focus on minimizing distortion between the transmitted image and the reconstructed version at the receiver, often overlooking perceptual quality.
We propose SING, a novel framework that formulates the recovery of high-quality images from corrupted reconstructions as an inverse problem.
arXiv Detail & Related papers (2025-03-16T12:32:11Z) - Unsupervised Self-Prior Embedding Neural Representation for Iterative Sparse-View CT Reconstruction [8.291709892315993]
We introduce Self-prior embedding neural representation (Spener) for sparse-view computed tomography (SVCT) inverse problems.
During each iteration, Spener extracts local image prior features from the previous iteration and embeds them to constrain the solution space.
Experimental results on multiple CT datasets show that our unsupervised Spener method achieves performance comparable to supervised state-of-the-art (SOTA) methods on in-domain data.
arXiv Detail & Related papers (2025-02-08T04:36:00Z) - A Low-dose CT Reconstruction Network Based on TV-regularized OSEM Algorithm [10.204918070701211]
Low-dose computed tomography (LDCT) offers significant advantages in reducing the potential harm to human bodies.
By utilizing the expectation-maximization (EM) algorithm, statistical priors can be combined with artificial priors to improve LDCT reconstruction quality.
In this paper, we propose to integrate TV regularization into the M-step of the EM algorithm, thus achieving effective and efficient regularization.
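The idea of regularizing the M-step can be sketched numerically. Below is a hedged 1D illustration, not the paper's method: a smoothed total-variation gradient is folded into an EM-style update; the step size, weight, and identity "EM update" are all illustrative assumptions.

```python
import numpy as np

def tv_grad(x, eps=1e-8):
    """Gradient of smoothed 1D total variation sum(sqrt(dx^2 + eps))."""
    dx = np.diff(x)
    w = dx / np.sqrt(dx**2 + eps)
    g = np.zeros_like(x)
    g[:-1] -= w          # each difference pulls its left endpoint
    g[1:] += w           # and pushes its right endpoint
    return g

def m_step_tv(x, em_update, lam=0.1, lr=0.5):
    """Blend an EM-style update with one TV gradient step (illustrative)."""
    x_em = em_update(x)                  # ordinary M-step estimate
    return x_em - lr * lam * tv_grad(x_em)

# toy: noisy piecewise-constant signal, identity stand-in for the EM update
rng = np.random.default_rng(1)
x0 = np.repeat([0.0, 1.0], 16) + 0.1 * rng.normal(size=32)
x1 = m_step_tv(x0, em_update=lambda x: x)
```

A real OSEM-based reconstruction would replace the identity `em_update` with the ordered-subsets statistical update over projection data; the TV term then discourages noise while preserving the step edge.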
arXiv Detail & Related papers (2024-08-25T13:31:53Z) - DPER: Diffusion Prior Driven Neural Representation for Limited Angle and Sparse View CT Reconstruction [45.00528216648563]
Diffusion Prior Driven Neural Representation (DPER) is an unsupervised framework designed to address the exceptionally ill-posed CT reconstruction inverse problems.
DPER adopts the Half Quadratic Splitting (HQS) algorithm to decompose the inverse problem into data fidelity and distribution prior sub-problems.
We conduct comprehensive experiments to evaluate the performance of DPER on LACT and ultra-SVCT reconstruction with two public datasets.
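The HQS decomposition mentioned above can be illustrated on a toy problem. This is a hedged sketch under simplifying assumptions: an l2 prior keeps both sub-problems closed-form, whereas DPER would use a learned diffusion prior in the z-step.

```python
import numpy as np

# Half Quadratic Splitting for min_x 0.5||Ax - y||^2 + lam * R(x),
# with auxiliary variable z and penalty weight mu (all values illustrative).

rng = np.random.default_rng(0)
n = 16
A = rng.normal(size=(n, n)) / np.sqrt(n)   # toy forward operator
x_true = rng.normal(size=n)
y = A @ x_true

lam, mu = 0.01, 1.0
x = np.zeros(n)
z = np.zeros(n)
AtA, Aty = A.T @ A, A.T @ y
for _ in range(50):
    # data-fidelity sub-problem: solve (A^T A + mu I) x = A^T y + mu z
    x = np.linalg.solve(AtA + mu * np.eye(n), Aty + mu * z)
    # prior sub-problem with R(z) = 0.5||z||^2: closed-form shrinkage
    z = mu * x / (lam + mu)
```

The alternation monotonically decreases the penalized objective; swapping the z-step for a denoiser or a diffusion-prior sampler recovers the plug-and-play / DPER-style structure.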
arXiv Detail & Related papers (2024-04-27T12:55:13Z) - Mitigating Data Consistency Induced Discrepancy in Cascaded Diffusion Models for Sparse-view CT Reconstruction [4.227116189483428]
This study introduces a novel Cascaded Diffusion with Discrepancy Mitigation framework.
It includes the low-quality image generation in latent space and the high-quality image generation in pixel space.
It minimizes computational costs by moving some inference steps from pixel space to latent space.
arXiv Detail & Related papers (2024-03-14T12:58:28Z) - Rotational Augmented Noise2Inverse for Low-dose Computed Tomography Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for LDCT, in which ground truth is not required for training the convolutional neural network (CNN).
Numerical and experimental results show that the reconstruction accuracy of N2I with sparse views degrades, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality across a range of sampling angles.
arXiv Detail & Related papers (2023-12-19T22:40:51Z) - Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z) - NF-ULA: Langevin Monte Carlo with Normalizing Flow Prior for Imaging Inverse Problems [7.38079566297881]
We introduce NF-ULA (Normalizing Flow-based Unadjusted Langevin algorithm), which involves learning a normalizing flow (NF) as the image prior.
NF-ULA is found to perform better than competing methods for severely ill-posed inverse problems.
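An unadjusted Langevin step of the kind NF-ULA builds on can be sketched as follows. This is a hedged toy, not the paper's algorithm: a real NF would supply the prior score via its learned density, while here a Gaussian prior score stands in, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def ula_step(x, grad_log_post, step):
    """One unadjusted Langevin step: gradient ascent on log-posterior + noise."""
    noise = rng.normal(size=x.shape)
    return x + step * grad_log_post(x) + np.sqrt(2 * step) * noise

# toy inverse problem: y = x + noise, Gaussian likelihood and prior
y = np.array([1.0, -2.0])
sigma2 = 0.5
# score of the posterior = likelihood score + prior score (stand-in for NF)
grad_log_post = lambda x: -(x - y) / sigma2 - x

x = np.zeros(2)
samples = []
for t in range(5000):
    x = ula_step(x, grad_log_post, step=0.01)
    if t >= 1000:                      # discard burn-in
        samples.append(x)
mean_est = np.mean(samples, axis=0)    # approximates the posterior mean 2y/3
```

Replacing the Gaussian prior score with the gradient of an NF's log-density is what lets the sampler carry a learned image prior into severely ill-posed problems.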
arXiv Detail & Related papers (2023-04-17T15:03:45Z) - LRIP-Net: Low-Resolution Image Prior based Network for Limited-Angle CT Reconstruction [5.796842150589423]
We build up a low-resolution reconstruction problem on the down-sampled projection data.
We use the reconstructed low-resolution image as prior knowledge for the original limited-angle CT problem.
arXiv Detail & Related papers (2022-07-30T13:03:20Z) - Denoising Diffusion Restoration Models [110.1244240726802]
Denoising Diffusion Restoration Models (DDRM) is an efficient, unsupervised posterior sampling method.
We demonstrate DDRM's versatility on several image datasets for super-resolution, deblurring, inpainting, and colorization.
arXiv Detail & Related papers (2022-01-27T20:19:07Z) - Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model these terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z) - Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method, aiming to integrate the advantages of both approaches.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over current state-of-the-art methods.
arXiv Detail & Related papers (2020-08-25T03:30:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.