Comprehensive Validation of Automated Whole Body Skeletal Muscle,
Adipose Tissue, and Bone Segmentation from 3D CT images for Body Composition
Analysis: Towards Extended Body Composition
- URL: http://arxiv.org/abs/2106.00652v2
- Date: Thu, 3 Jun 2021 07:14:35 GMT
- Title: Comprehensive Validation of Automated Whole Body Skeletal Muscle,
Adipose Tissue, and Bone Segmentation from 3D CT images for Body Composition
Analysis: Towards Extended Body Composition
- Authors: Da Ma, Vincent Chow, Karteek Popuri, Mirza Faisal Beg
- Abstract summary: Powerful tools of artificial intelligence such as deep learning are making it feasible now to segment the entire 3D image and generate accurate measurements of all internal anatomy.
These tools overcome the severe bottleneck that previously existed, namely the need for manual segmentation.
Such measurements were hitherto unavailable, limiting the field to a small subset of measurements.
- Score: 0.6176955945418618
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The latest advances in computer-assisted precision medicine are making it
feasible to move from population-wide models that are useful to discover
aggregate patterns that hold for group-based analysis to patient-specific
models that can drive patient-specific decisions with regard to treatment
choices, and predictions of outcomes of treatment. Body Composition is
recognized as an important driver and risk factor for a wide variety of
diseases, as well as a predictor of individual patient-specific clinical
outcomes to treatment choices or surgical interventions. 3D CT images are
routinely acquired in oncological workflows and deliver an accurate rendering
of internal anatomy and therefore can be used opportunistically to assess the
amount of skeletal muscle and adipose tissue compartments. Powerful tools of
artificial intelligence such as deep learning are making it feasible now to
segment the entire 3D image and generate accurate measurements of all internal
anatomy. These tools overcome the severe bottleneck that existed previously,
namely the need for manual segmentation, which could not feasibly scale to the
hundreds of 2D axial slices that make up a 3D volumetric image.
Automated tools such as presented here will now enable harvesting whole-body
measurements from 3D CT or MRI images, leading to a new era of discovery of the
drivers of various diseases based on individual tissue, organ volume, shape,
and functional status. These measurements were hitherto unavailable, limiting
the field to a small subset of measurements. These discoveries and
the potential to perform individual image segmentation with high speed and
accuracy are likely to lead to the incorporation of these 3D measures into
individual specific treatment planning models related to nutrition, aging,
chemotoxicity, surgery and survival after the onset of a major disease such as
cancer.
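The abstract describes opportunistic estimation of skeletal muscle and adipose tissue from routinely acquired CT. As a minimal sketch of the underlying idea (not the paper's actual deep-learning method), tissue compartments can be coarsely estimated by thresholding voxels on standard Hounsfield unit (HU) ranges; the HU windows and voxel spacing below are illustrative assumptions.

```python
import numpy as np

# HU windows commonly used in body-composition studies (illustrative values)
MUSCLE_HU = (-29, 150)   # skeletal muscle
FAT_HU = (-190, -30)     # adipose tissue

def tissue_volumes_cm3(ct_hu, voxel_spacing_mm):
    """Estimate muscle and fat volumes from a 3D CT volume in Hounsfield units.

    ct_hu: 3D numpy array of HU values.
    voxel_spacing_mm: (z, y, x) voxel spacing in millimetres.
    Returns (muscle_cm3, fat_cm3).
    """
    voxel_cm3 = np.prod(voxel_spacing_mm) / 1000.0  # mm^3 -> cm^3
    muscle_mask = (ct_hu >= MUSCLE_HU[0]) & (ct_hu <= MUSCLE_HU[1])
    fat_mask = (ct_hu >= FAT_HU[0]) & (ct_hu <= FAT_HU[1])
    return muscle_mask.sum() * voxel_cm3, fat_mask.sum() * voxel_cm3

# Tiny synthetic volume: half "muscle" (50 HU), half "fat" (-100 HU)
vol = np.full((10, 10, 10), 50.0)
vol[5:] = -100.0
muscle, fat = tissue_volumes_cm3(vol, (1.0, 1.0, 1.0))  # 0.5 cm^3 each
```

In practice, pure HU thresholding cannot separate anatomically distinct compartments (e.g. visceral vs. subcutaneous fat), which is exactly the gap the learned whole-body segmentation in this paper addresses.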
Related papers
- 3D-CT-GPT: Generating 3D Radiology Reports through Integration of Large Vision-Language Models [51.855377054763345]
This paper introduces 3D-CT-GPT, a Visual Question Answering (VQA)-based medical visual language model for generating radiology reports from 3D CT scans.
Experiments on both public and private datasets demonstrate that 3D-CT-GPT significantly outperforms existing methods in terms of report accuracy and quality.
arXiv Detail & Related papers (2024-09-28T12:31:07Z)
- SAM3D: Zero-Shot Semi-Automatic Segmentation in 3D Medical Images with the Segment Anything Model [3.2554912675000818]
We introduce SAM3D, a new approach to semi-automatic zero-shot segmentation of 3D images building on the existing Segment Anything Model.
We achieve fast and accurate segmentations in 3D images with a four-step strategy involving: user prompting with 3D polylines, volume slicing along multiple axes, slice-wide inference with a pretrained model, and recomposition and refinement in 3D.
arXiv Detail & Related papers (2024-05-10T19:26:17Z)
- QUBIQ: Uncertainty Quantification for Biomedical Image Segmentation Challenge [93.61262892578067]
Uncertainty in medical image segmentation tasks, especially inter-rater variability, presents a significant challenge.
This variability directly impacts the development and evaluation of automated segmentation algorithms.
We report the set-up and summarize the benchmark results of the Quantification of Uncertainties in Biomedical Image Quantification Challenge (QUBIQ).
arXiv Detail & Related papers (2024-03-19T17:57:24Z)
- 3D Vertebrae Measurements: Assessing Vertebral Dimensions in Human Spine Mesh Models Using Local Anatomical Vertebral Axes [0.4499833362998489]
We introduce a novel, fully automated method for measuring vertebral morphology using 3D meshes of lumbar and thoracic spine models.
Our experimental results demonstrate the method's capability to accurately measure low-resolution patient-specific vertebral meshes with mean absolute error (MAE) of 1.09 mm.
Our qualitative analysis indicates that measurements obtained using our method on 3D spine models can be accurately reprojected back onto the original medical images if these images are available.
arXiv Detail & Related papers (2024-02-02T14:52:41Z)
- Weakly Supervised AI for Efficient Analysis of 3D Pathology Samples [6.381153836752796]
We present Modality-Agnostic Multiple instance learning for volumetric Block Analysis (MAMBA) for processing 3D tissue images.
With the 3D block-based approach, MAMBA achieves an area under the receiver operating characteristic curve (AUC) of 0.86 and 0.74, superior to 2D traditional single-slice-based prognostication.
Further analyses reveal that the incorporation of greater tissue volume improves prognostic performance and mitigates risk prediction variability from sampling bias.
arXiv Detail & Related papers (2023-07-27T14:48:02Z)
- 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the segment anything model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model can outperform domain state-of-the-art medical image segmentation models on 3 out of 4 tasks, specifically by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and colon cancer segmentation, respectively, and achieves similar performance for liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z)
- BOSS: Bones, Organs and Skin Shape Model [10.50175010474078]
We propose a deformable human shape and pose model that combines skin, internal organs, and bones, learned from CT images.
By modeling the statistical variations in a pose-normalized space using probabilistic PCA, our approach offers a holistic representation of the body.
arXiv Detail & Related papers (2023-03-08T22:31:24Z)
- Monitoring of Pigmented Skin Lesions Using 3D Whole Body Imaging [14.544274849288952]
We propose a 3D whole body imaging prototype to enable rapid evaluation and mapping of skin lesions.
A modular camera rig is designed to automatically capture synchronised images from multiple angles for entire body scanning.
We develop algorithms for 3D body image reconstruction, data processing and skin lesion detection based on deep convolutional neural networks.
arXiv Detail & Related papers (2022-05-14T15:24:06Z)
- A unified 3D framework for Organs at Risk Localization and Segmentation for Radiation Therapy Planning [56.52933974838905]
Current medical workflows require manual delineation of organs-at-risk (OAR).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
arXiv Detail & Related papers (2022-03-01T17:08:41Z)
- Explainable multiple abnormality classification of chest CT volumes with AxialNet and HiResCAM [89.2175350956813]
We introduce the challenging new task of explainable multiple abnormality classification in volumetric medical images.
We propose a multiple instance learning convolutional neural network, AxialNet, that allows identification of top slices for each abnormality.
We then aim to improve the model's learning through a novel mask loss that leverages HiResCAM and 3D allowed regions.
arXiv Detail & Related papers (2021-11-24T01:14:33Z)
- iPhantom: a framework for automated creation of individualized computational phantoms and its application to CT organ dosimetry [58.943644554192936]
This study aims to develop and validate a novel framework, iPhantom, for automated creation of patient-specific phantoms or digital-twins.
The framework is applied to assess radiation dose to radiosensitive organs in CT imaging of individual patients.
iPhantom precisely predicted all organ locations with good accuracy of Dice Similarity Coefficients (DSC) >0.6 for anchor organs and DSC of 0.3-0.9 for all other organs.
arXiv Detail & Related papers (2020-08-20T01:50:49Z)
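Several papers above (e.g. iPhantom) report segmentation accuracy as the Dice Similarity Coefficient, DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch on binary masks, with an illustrative synthetic example:

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient between two binary segmentation masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Synthetic 4x4 masks: prediction covers rows 0-1, ground truth rows 1-2,
# so 8 voxels each with an overlap of 4 -> DSC = 2*4 / (8+8) = 0.5
pred = np.zeros((4, 4), dtype=bool)
truth = np.zeros((4, 4), dtype=bool)
pred[:2] = True
truth[1:3] = True
score = dice(pred, truth)  # 0.5
```

A DSC of 1.0 means perfect voxel-wise overlap; thresholds such as the >0.6 reported for anchor organs in iPhantom are task-specific judgment calls, not universal cut-offs.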
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.