Body Composition Assessment with Limited Field-of-view Computed
Tomography: A Semantic Image Extension Perspective
- URL: http://arxiv.org/abs/2207.06551v2
- Date: Sun, 16 Apr 2023 00:40:40 GMT
- Title: Body Composition Assessment with Limited Field-of-view Computed
Tomography: A Semantic Image Extension Perspective
- Authors: Kaiwen Xu, Thomas Li, Mirza S. Khan, Riqiang Gao, Sanja L. Antic,
Yuankai Huo, Kim L. Sandler, Fabien Maldonado, Bennett A. Landman
- Abstract summary: Field-of-view (FOV) tissue truncation beyond the lungs is common in routine lung screening computed tomography (CT).
In this work, we formulate the problem from the semantic image extension perspective which only requires image data as inputs.
The proposed two-stage method identifies a new FOV border based on the estimated extent of the complete body and imputes missing tissues in the truncated region.
- Score: 5.373119949253442
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Field-of-view (FOV) tissue truncation beyond the lungs is common in routine
lung screening computed tomography (CT). This poses limitations for
opportunistic CT-based body composition (BC) assessment, as key anatomical
structures are missing. Traditionally, extending the FOV of CT is considered a
CT reconstruction problem using limited data. However, this approach relies on
projection-domain data, which may not be available in practice. In
this work, we formulate the problem from the semantic image extension
perspective which only requires image data as inputs. The proposed two-stage
method identifies a new FOV border based on the estimated extent of the
complete body and imputes missing tissues in the truncated region. The training
samples are simulated using CT slices with the complete body in the FOV, making the
model development self-supervised. We evaluate the validity of the proposed
method in automatic BC assessment using lung screening CT with limited FOV. The
proposed method effectively restores the missing tissues and reduces BC
assessment error introduced by FOV tissue truncation. In the BC assessment for
a large-scale lung screening CT dataset, this correction improves both the
intra-subject consistency and the correlation with anthropometric
approximations. The developed method is available at
https://github.com/MASILab/S-EFOV.
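As a rough illustration of the self-supervised training setup described above, the sketch below simulates FOV truncation on a CT slice that already contains the complete body, producing an (input, mask, target) triple for the extension model. The circular-mask geometry, the function and parameter names, and the HU fill value are illustrative assumptions, not the released S-EFOV implementation.

```python
import numpy as np

def simulate_fov_truncation(slice_hu, fov_radius_frac=0.8, air_hu=-1000.0):
    """slice_hu: 2D CT slice in Hounsfield units with the complete body in view."""
    h, w = slice_hu.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radius = fov_radius_frac * min(h, w) / 2.0
    fov_mask = ((yy - cy) ** 2 + (xx - cx) ** 2) <= radius ** 2
    # Tissue outside the synthetic FOV is replaced by air, mimicking truncation.
    truncated = np.where(fov_mask, slice_hu, air_hu)
    return truncated, fov_mask, slice_hu   # model input, FOV mask, ground-truth target

# Usage with a synthetic 512x512 slice (body approximated by a soft-tissue disk).
phantom = np.full((512, 512), -1000.0)
yy, xx = np.mgrid[0:512, 0:512]
phantom[((yy - 255.5) ** 2 + (xx - 255.5) ** 2) <= 230.0 ** 2] = 40.0
truncated, fov_mask, target = simulate_fov_truncation(phantom, fov_radius_frac=0.7)
```

In practice the truncation radius and center would be sampled randomly so the model sees a range of truncation severities during training.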
Related papers
- Diffusion-based Generative Image Outpainting for Recovery of FOV-Truncated CT Images [10.350643783811174]
Field-of-view (FOV) recovery of truncated chest CT scans is crucial for accurate body composition analysis.
We present a method for recovering truncated CT slices using generative image outpainting.
Our model reliably recovers the truncated anatomy and outperforms the previous state-of-the-art despite being trained on 87% less data.
arXiv Detail & Related papers (2024-06-07T09:15:29Z)
- Solving Low-Dose CT Reconstruction via GAN with Local Coherence [2.325977856241404]
We propose a novel approach using generative adversarial networks (GANs) with enhanced local coherence.
The proposed method captures the local coherence of adjacent images via optical flow, which yields significant improvements in the precision and stability of the reconstructed images.
arXiv Detail & Related papers (2023-09-24T08:55:42Z)
- Thoracic Cartilage Ultrasound-CT Registration using Dense Skeleton Graph [49.11220791279602]
It is challenging to accurately map planned paths from a generic atlas to individual patients, particularly for thoracic applications.
A graph-based non-rigid registration is proposed to enable transferring planned paths from the atlas to the current setup.
arXiv Detail & Related papers (2023-07-07T18:57:21Z)
- Zero-shot CT Field-of-view Completion with Unconditional Generative Diffusion Prior [4.084687005614829]
Anatomically consistent field-of-view (FOV) completion to recover truncated body sections has important applications in quantitative analyses of computed tomography (CT) with limited FOV.
Existing solutions based on conditional generative models rely on the fidelity of the synthetic truncation patterns seen at training time, which limits the generalizability of the method to unknown types of truncation.
In this study, we evaluate a zero-shot method based on a pretrained unconditional generative diffusion prior, where truncation patterns of arbitrary form can be specified at inference time (a minimal masked-sampling sketch appears after this list).
arXiv Detail & Related papers (2023-04-07T17:54:40Z)
- Anatomically constrained CT image translation for heterogeneous blood vessel segmentation [3.88838725116957]
Anatomical structures in contrast-enhanced CT (ceCT) images can be challenging to segment due to variability in contrast medium diffusion.
To limit the radiation dose, generative models could be used to synthesize one modality, instead of acquiring it.
CycleGAN has attracted particular attention because it alleviates the need for paired data.
We present an extension of CycleGAN that generates high-fidelity images with good structural consistency.
arXiv Detail & Related papers (2022-10-04T16:14:49Z)
- CNN Filter Learning from Drawn Markers for the Detection of Suggestive Signs of COVID-19 in CT Images [58.720142291102135]
We propose a method that does not require either large annotated datasets or backpropagation to estimate the filters of a convolutional neural network (CNN).
For a few CT images, the user draws markers at representative normal and abnormal regions.
The method generates a feature extractor composed of a sequence of convolutional layers, whose kernels are specialized in enhancing regions similar to the marked ones.
arXiv Detail & Related papers (2021-11-16T15:03:42Z)
- Fibrosis-Net: A Tailored Deep Convolutional Neural Network Design for Prediction of Pulmonary Fibrosis Progression from Chest CT Images [59.622239796473885]
Pulmonary fibrosis is a chronic lung disease that causes irreparable lung tissue scarring and damage, resulting in progressive loss of lung capacity, and has no known cure.
We introduce Fibrosis-Net, a deep convolutional neural network design tailored for the prediction of pulmonary fibrosis progression from chest CT images.
arXiv Detail & Related papers (2021-03-06T02:16:41Z)
- Deep Residual 3D U-Net for Joint Segmentation and Texture Classification of Nodules in Lung [91.3755431537592]
We present a method for lung nodule segmentation, texture classification, and subsequent follow-up recommendation from lung CT images.
Our method consists of a neural network model based on the popular U-Net architecture family, modified for joint nodule segmentation and texture classification, and an ensemble-based model for the follow-up recommendation.
arXiv Detail & Related papers (2020-06-25T07:20:41Z)
- Synergistic Learning of Lung Lobe Segmentation and Hierarchical Multi-Instance Classification for Automated Severity Assessment of COVID-19 in CT Images [61.862364277007934]
We propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images.
A multi-task deep network (called M$2$UNet) is then developed to assess the severity of COVID-19 patients.
Our M$2$UNet consists of a patch-level encoder, a segmentation sub-network for lung lobe segmentation, and a classification sub-network for severity assessment.
arXiv Detail & Related papers (2020-05-08T03:16:15Z)
- Inf-Net: Automatic COVID-19 Lung Infection Segmentation from CT Images [152.34988415258988]
Automated detection of lung infections from computed tomography (CT) images offers great potential to augment the traditional healthcare strategy for tackling COVID-19.
However, segmenting infected regions from CT slices faces several challenges, including high variation in infection characteristics and low intensity contrast between infections and normal tissues.
To address these challenges, a novel COVID-19 Deep Lung Infection Network (Inf-Net) is proposed to automatically identify infected regions from chest CT slices.
arXiv Detail & Related papers (2020-04-22T07:30:56Z)
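For the zero-shot FOV completion entry above, the following is a minimal sketch of how an unconditional diffusion prior can fill a truncated region whose mask is specified only at inference time. It assumes a RePaint-style masked sampling loop, a toy noise schedule, and a stand-in denoiser; none of this is taken from that paper's code.

```python
import numpy as np

T = 50                                  # number of diffusion steps (illustrative)
betas = np.linspace(1e-4, 2e-2, T)      # simple linear noise schedule
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def denoiser(x_t, t):
    """Stand-in for a trained unconditional noise-prediction network."""
    return np.zeros_like(x_t)           # a real model would return the estimated noise

def zero_shot_fov_complete(observed, fov_mask, rng=None):
    """observed: 2D slice (intensity-normalized); fov_mask: 1 inside the trusted FOV, 0 where truncated."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = rng.standard_normal(observed.shape)            # start from pure noise
    for t in reversed(range(T)):
        eps = denoiser(x, t)
        # Standard DDPM reverse-step mean from the predicted noise.
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(x.shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
        # Re-impose the observed slice, noised to the current level, inside the FOV,
        # so the unconditional prior only synthesizes the truncated region.
        ab_prev = alpha_bar[t - 1] if t > 0 else 1.0
        known = np.sqrt(ab_prev) * observed + np.sqrt(1.0 - ab_prev) * rng.standard_normal(x.shape)
        x = fov_mask * known + (1.0 - fov_mask) * x
    return x
```

Because the mask enters only in the sampling loop, any truncation shape can be handled without retraining, which is the property the zero-shot entry highlights.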