Diffusion-based Generative Image Outpainting for Recovery of FOV-Truncated CT Images
- URL: http://arxiv.org/abs/2406.04769v1
- Date: Fri, 7 Jun 2024 09:15:29 GMT
- Title: Diffusion-based Generative Image Outpainting for Recovery of FOV-Truncated CT Images
- Authors: Michelle Espranita Liman, Daniel Rueckert, Florian J. Fintelmann, Philip Müller
- Abstract summary: Field-of-view (FOV) recovery of truncated chest CT scans is crucial for accurate body composition analysis.
We present a method for recovering truncated CT slices using generative image outpainting.
Our model reliably recovers the truncated anatomy and outperforms the previous state-of-the-art despite being trained on 87% less data.
- Score: 10.350643783811174
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Field-of-view (FOV) recovery of truncated chest CT scans is crucial for accurate body composition analysis, which involves quantifying skeletal muscle and subcutaneous adipose tissue (SAT) on CT slices. This, in turn, enables disease prognostication. Here, we present a method for recovering truncated CT slices using generative image outpainting. We train a diffusion model and apply it to truncated CT slices generated by simulating a small FOV. Our model reliably recovers the truncated anatomy and outperforms the previous state-of-the-art despite being trained on 87% less data.
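The abstract's training setup relies on simulating a small FOV to produce truncated/complete training pairs. A minimal sketch of such a simulation (not the authors' implementation; the circular mask, centered FOV, and the fill value of -1024 HU for air are assumptions) might look like:

```python
import numpy as np

def simulate_fov_truncation(ct_slice, fov_radius, fill_value=-1024.0):
    """Mask out pixels outside a circular field of view.

    Returns the truncated slice and a binary mask marking the region
    the outpainting model must recover.
    """
    h, w = ct_slice.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    inside = (yy - cy) ** 2 + (xx - cx) ** 2 <= fov_radius ** 2
    truncated = np.where(inside, ct_slice, fill_value)  # fill with air (HU)
    return truncated, ~inside  # mask is True where anatomy was removed
```

The truncated slice would serve as the conditioning input for the diffusion model, and the mask marks the region to be outpainted.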
Related papers
- Diffusion Models for Counterfactual Generation and Anomaly Detection in Brain Images [59.85702949046042]
We present a weakly supervised method to generate a healthy version of a diseased image and then use it to obtain a pixel-wise anomaly map.
We employ a diffusion model trained on healthy samples and combine the Denoising Diffusion Probabilistic Model (DDPM) and the Denoising Diffusion Implicit Model (DDIM) at each step of the sampling process.
We verify that when our method is applied to healthy samples, the input images are reconstructed without significant modifications.
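The final step described above, turning a healthy reconstruction into a pixel-wise anomaly map, can be illustrated with a minimal sketch (the comparison step only, not the diffusion sampling; the function name is an assumption):

```python
import numpy as np

def anomaly_map(image, healthy_recon):
    """Pixel-wise absolute difference between the input image and its
    model-generated healthy counterpart; large values flag anomalies."""
    return np.abs(image.astype(np.float64) - healthy_recon.astype(np.float64))
```

On healthy inputs the reconstruction closely matches the input, so the map stays near zero everywhere.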
arXiv Detail & Related papers (2023-08-03T21:56:50Z)
- Enhancing Super-Resolution Networks through Realistic Thick-Slice CT Simulation [4.43162303545687]
Deep learning-based Generative Models have the potential to convert low-resolution CT images into high-resolution counterparts without long acquisition times and increased radiation exposure in thin-slice CT imaging.
However, procuring appropriate training data for these Super-Resolution (SR) models is challenging.
Previous SR research has simulated thick-slice CT images from thin-slice CT images to create training pairs.
We introduce a simple yet realistic method to generate thick CT images from thin-slice CT images, facilitating the creation of training pairs for SR algorithms.
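A common baseline for this kind of simulation is partial-volume averaging of consecutive thin slices; the sketch below illustrates that baseline only (the paper's actual method may differ, and the grouping factor is an assumption):

```python
import numpy as np

def simulate_thick_slices(thin_volume, factor):
    """Average groups of `factor` consecutive thin slices along axis 0,
    approximating the partial-volume effect of a thick-slice acquisition."""
    n = (thin_volume.shape[0] // factor) * factor  # drop any remainder slices
    grouped = thin_volume[:n].reshape(-1, factor, *thin_volume.shape[1:])
    return grouped.mean(axis=1)
```

Pairing each thick output with its thin source slices yields the low/high-resolution training pairs an SR model needs.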
arXiv Detail & Related papers (2023-07-02T11:09:08Z)
- Deep learning network to correct axial and coronal eye motion in 3D OCT retinal imaging [65.47834983591957]
We propose deep learning-based neural networks to correct axial and coronal motion artifacts in OCT based on a single scan.
The experimental results show that the proposed method effectively corrects motion artifacts and achieves smaller errors than other methods.
arXiv Detail & Related papers (2023-05-27T03:55:19Z)
- Zero-shot CT Field-of-view Completion with Unconditional Generative Diffusion Prior [4.084687005614829]
Anatomically consistent field-of-view (FOV) completion to recover truncated body sections has important applications in quantitative analyses of computed tomography (CT) with limited FOV.
Existing solutions based on conditional generative models rely on the fidelity of synthetic truncation patterns at the training phase, which limits the generalizability of the method to potentially unknown types of truncation.
In this study, we evaluate a zero-shot method based on a pretrained unconditional generative diffusion prior, where truncation patterns of arbitrary form can be specified at the inference phase.
arXiv Detail & Related papers (2023-04-07T17:54:40Z)
- Body Composition Assessment with Limited Field-of-view Computed Tomography: A Semantic Image Extension Perspective [5.373119949253442]
Field-of-view (FOV) tissue truncation beyond the lungs is common in routine lung screening computed tomography (CT).
In this work, we formulate the problem from the semantic image extension perspective which only requires image data as inputs.
The proposed two-stage method identifies a new FOV border based on the estimated extent of the complete body and imputes missing tissues in the truncated region.
arXiv Detail & Related papers (2022-07-13T23:19:22Z)
- Weakly-supervised Biomechanically-constrained CT/MRI Registration of the Spine [72.85011943179894]
We propose a weakly-supervised deep learning framework that preserves the rigidity and the volume of each vertebra while maximizing the accuracy of the registration.
We specifically design these losses to depend only on the CT label maps, since automatic vertebra segmentation in CT gives more accurate results than in MRI.
Our results show that adding the anatomy-aware losses increases the plausibility of the inferred transformation without compromising registration accuracy.
arXiv Detail & Related papers (2022-05-16T10:59:55Z)
- Incremental Cross-view Mutual Distillation for Self-supervised Medical CT Synthesis [88.39466012709205]
This paper presents a novel method to synthesize intermediate medical slices, increasing the between-slice resolution.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z)
- DuDoTrans: Dual-Domain Transformer Provides More Attention for Sinogram Restoration in Sparse-View CT Reconstruction [13.358197688568463]
Ionizing radiation in the imaging process induces irreversible injury.
Iterative models have been proposed to alleviate the artifacts that appear in sparse-view CT images, but their computational cost is too high.
We propose the Dual-Domain Transformer (DuDoTrans) to reconstruct CT images with both the enhanced and raw sinograms.
arXiv Detail & Related papers (2021-11-21T10:41:07Z)
- CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran.
Our empirical results show that CyTran outperforms all competing methods.
arXiv Detail & Related papers (2021-10-12T23:25:03Z)
- Self-supervised Skull Reconstruction in Brain CT Images with Decompressive Craniectomy [13.695197074035928]
We propose a deep learning-based method to reconstruct the skull defect removed during craniectomy performed after TBI.
This reconstruction is useful in multiple scenarios, e.g. to support the creation of cranioplasty plates.
arXiv Detail & Related papers (2020-07-07T22:38:38Z)
- Synergistic Learning of Lung Lobe Segmentation and Hierarchical Multi-Instance Classification for Automated Severity Assessment of COVID-19 in CT Images [61.862364277007934]
We propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images.
A multi-task deep network (called M$^2$UNet) is then developed to assess the severity of COVID-19 patients.
Our M$^2$UNet consists of a patch-level encoder, a segmentation sub-network for lung lobe segmentation, and a classification sub-network for severity assessment.
arXiv Detail & Related papers (2020-05-08T03:16:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.