MASSM: An End-to-End Deep Learning Framework for Multi-Anatomy Statistical Shape Modeling Directly From Images
- URL: http://arxiv.org/abs/2403.11008v2
- Date: Mon, 8 Jul 2024 18:46:16 GMT
- Title: MASSM: An End-to-End Deep Learning Framework for Multi-Anatomy Statistical Shape Modeling Directly From Images
- Authors: Janmesh Ukey, Tushar Kataria, Shireen Y. Elhabian
- Abstract summary: We introduce MASSM, a novel end-to-end deep learning framework that simultaneously localizes multiple anatomies, estimates population-level statistical representations, and delineates shape representations directly in image space.
Our results show that MASSM, which delineates anatomy in image space and handles multiple anatomies through a multitask network, provides superior shape information compared to segmentation networks for medical imaging tasks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Statistical Shape Modeling (SSM) effectively analyzes anatomical variations within populations but is limited by the need for manual localization and segmentation, which relies on scarce medical expertise. Recent advances in deep learning have provided a promising approach that automatically generates statistical representations (as point distribution models or PDMs) from unsegmented images. Once trained, these deep learning-based models eliminate the need for manual segmentation for new subjects. Most deep learning methods still require manual pre-alignment of image volumes and bounding box specification around the target anatomy, leading to a partially manual inference process. Recent approaches facilitate anatomy localization but only estimate population-level statistical representations and cannot directly delineate anatomy in images. Additionally, they are limited to modeling a single anatomy. We introduce MASSM, a novel end-to-end deep learning framework that simultaneously localizes multiple anatomies, estimates population-level statistical representations, and delineates shape representations directly in image space. Our results show that MASSM, which delineates anatomy in image space and handles multiple anatomies through a multitask network, provides superior shape information compared to segmentation networks for medical imaging tasks. Estimating Statistical Shape Models (SSM) is a stronger task than segmentation, as it encodes a more robust statistical prior for the objects to be detected and delineated. MASSM allows for more accurate and comprehensive shape representations, surpassing the capabilities of traditional pixel-wise segmentation.
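No reference implementation accompanies this abstract, so the following PyTorch sketch is only a hedged illustration of the multitask design it describes: a shared image encoder feeding a localization head (one center per anatomy) and per-anatomy shape heads that regress correspondence points (a PDM) in image space. Every module name, layer size, and the point count here is an assumption for illustration, not the authors' architecture.
```python
# Minimal sketch of a MASSM-style multitask network (illustrative only):
# a shared 3D encoder, a localization head that regresses one center per
# anatomy, and per-anatomy shape heads that regress correspondence points.
import torch
import torch.nn as nn

class MultiAnatomyShapeNet(nn.Module):
    def __init__(self, num_anatomies=2, num_points=128):
        super().__init__()
        self.encoder = nn.Sequential(              # shared feature extractor
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        # localization head: one (x, y, z) center per anatomy
        self.localizer = nn.Linear(32, num_anatomies * 3)
        # shape heads: one set of correspondence points (the PDM) per anatomy
        self.shape_heads = nn.ModuleList(
            nn.Linear(32, num_points * 3) for _ in range(num_anatomies)
        )
        self.num_anatomies, self.num_points = num_anatomies, num_points

    def forward(self, volume):
        feat = self.encoder(volume)                               # (B, 32)
        centers = self.localizer(feat).view(-1, self.num_anatomies, 3)
        # predict points relative to each center, then shift into image space
        shapes = torch.stack(
            [h(feat).view(-1, self.num_points, 3) for h in self.shape_heads],
            dim=1,
        ) + centers.unsqueeze(2)
        return centers, shapes

net = MultiAnatomyShapeNet()
centers, shapes = net(torch.randn(1, 1, 64, 64, 64))
print(centers.shape, shapes.shape)  # torch.Size([1, 2, 3]) torch.Size([1, 2, 128, 3])
```
Predicting points relative to the estimated centers is one simple way to couple localization and delineation in a single forward pass, which is the spirit of the end-to-end design the abstract describes.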
Related papers
- Probabilistic 3D Correspondence Prediction from Sparse Unsegmented Images
We propose SPI-CorrNet, a unified model that predicts 3D correspondences from sparse imaging data.
Experiments on the LGE MRI left atrium dataset and the AbdomenCT-1K liver dataset demonstrate that our technique enhances the accuracy and robustness of sparse image-driven SSM.
arXiv Detail & Related papers (2024-07-02T03:56:20Z)
- MUSCLE: Multi-task Self-supervised Continual Learning to Pre-train Deep Models for X-ray Images of Multiple Body Parts
Multi-task Self-supervised Continual Learning (MUSCLE) is a novel self-supervised pre-training pipeline for medical imaging tasks.
MUSCLE aggregates X-rays collected from multiple body parts for representation learning, and adopts a well-designed continual learning procedure.
We evaluate MUSCLE using 9 real-world X-ray datasets with various tasks, including pneumonia classification, skeletal abnormality classification, lung segmentation, and tuberculosis (TB) detection.
arXiv Detail & Related papers (2023-10-03T12:19:19Z)
- ADASSM: Adversarial Data Augmentation in Statistical Shape Models From Images
This paper introduces a novel strategy for on-the-fly data augmentation for the Image-to-SSM framework by leveraging data-dependent noise generation or texture augmentation.
Our approach achieves improved accuracy by encouraging the model to focus on the underlying geometry rather than relying solely on pixel values.
arXiv Detail & Related papers (2023-07-06T20:21:12Z)
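As a hedged illustration of the on-the-fly, data-dependent noise augmentation the ADASSM entry above mentions, the sketch below scales Gaussian noise by local image intensity so texture is perturbed while geometry is preserved; the exact scaling rule is an assumption, not the paper's scheme.
```python
# Illustrative on-the-fly intensity augmentation in the spirit of ADASSM:
# noise whose magnitude depends on the image itself, nudging the network to
# rely on geometry rather than raw pixel values.
import torch

def data_dependent_noise(image: torch.Tensor, strength: float = 0.1) -> torch.Tensor:
    """Add zero-mean Gaussian noise whose scale follows the local intensity."""
    sigma = strength * image.abs()       # stronger noise where the signal is strong
    return image + sigma * torch.randn_like(image)

batch = torch.rand(4, 1, 128, 128)       # toy image batch in [0, 1]
augmented = data_dependent_noise(batch)  # re-drawn at every training step
```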
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Mesh2SSM: From Surface Meshes to Statistical Shape Models of Anatomy
We propose Mesh2SSM, a new approach that leverages unsupervised, permutation-invariant representation learning to estimate how to deform a template point cloud to subject-specific meshes.
Mesh2SSM can also learn a population-specific template, reducing any bias due to template selection.
arXiv Detail & Related papers (2023-05-13T00:03:59Z)
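The Mesh2SSM entry above describes deforming a shared template point cloud to each subject via permutation-invariant representation learning. The sketch below is a hedged, PointNet-style illustration of that idea; layer sizes and names are assumptions, not the paper's model.
```python
# Sketch of the Mesh2SSM idea: a permutation-invariant encoder summarizes a
# subject's surface points, and a decoder predicts a displacement for every
# point of a shared template, yielding corresponding points across subjects.
import torch
import torch.nn as nn

class TemplateDeformer(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                       nn.Linear(64, feat_dim))
        self.decoder = nn.Sequential(nn.Linear(feat_dim + 3, 64), nn.ReLU(),
                                     nn.Linear(64, 3))

    def forward(self, subject_pts, template_pts):
        # max over the point axis -> order-invariant shape descriptor
        feat = self.point_mlp(subject_pts).max(dim=1).values        # (B, F)
        feat = feat.unsqueeze(1).expand(-1, template_pts.size(1), -1)
        offsets = self.decoder(torch.cat([feat, template_pts], dim=-1))
        return template_pts + offsets    # template deformed to the subject

model = TemplateDeformer()
subject = torch.randn(2, 500, 3)                  # points from subject meshes
template = torch.randn(256, 3).expand(2, -1, -1)  # shared template, batched
corresponding = model(subject, template)          # (2, 256, 3)
```
Because the same template points are deformed for every subject, the outputs are corresponding by construction, which is what makes the result usable as an SSM.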
- S3M: Scalable Statistical Shape Modeling through Unsupervised Correspondences
We propose an unsupervised method to simultaneously learn local and global shape structures across population anatomies.
Our pipeline significantly improves unsupervised correspondence estimation for SSMs compared to baseline methods.
Our method is robust enough to learn from noisy neural network predictions, potentially enabling scaling SSMs to larger patient populations.
arXiv Detail & Related papers (2023-04-15T09:39:52Z)
- Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA).
First, prior work argues that every pixel matters equally to model training; we observe empirically that this alone is unlikely to define meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z)
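The MONA entry above mentions objectives that push the model to decompose images into anatomical features; a common way to express such an objective is a contrastive InfoNCE loss over feature embeddings. The sketch below is that generic formulation, not MONA's exact loss.
```python
# Generic InfoNCE contrastive loss: pull an anchor toward its positive
# (same anatomical class) and away from K negatives.
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature=0.1):
    """anchor, positive: (B, D); negatives: (B, K, D)."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos = (anchor * positive).sum(dim=-1, keepdim=True)        # (B, 1)
    neg = torch.einsum("bd,bkd->bk", anchor, negatives)        # (B, K)
    logits = torch.cat([pos, neg], dim=1) / temperature
    target = torch.zeros(anchor.size(0), dtype=torch.long)     # positive sits at index 0
    return F.cross_entropy(logits, target)

loss = info_nce(torch.randn(8, 64), torch.randn(8, 64), torch.randn(8, 16, 64))
```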
- Generalizable multi-task, multi-domain deep segmentation of sparse pediatric imaging datasets via multi-scale contrastive regularization and multi-joint anatomical priors
We propose a novel multi-task, multi-domain learning framework in which a single segmentation network is optimized over multiple datasets.
We evaluate our contributions on bone segmentation using three scarce pediatric imaging datasets of the ankle, knee, and shoulder joints.
arXiv Detail & Related papers (2022-07-27T12:59:16Z)
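For the multi-task, multi-domain entry above, one minimal way to optimize a single segmentation network over several datasets is to alternate batches from each domain and sum the losses, as sketched below. The plain cross-entropy loss and toy data are assumptions; the paper's multi-scale contrastive regularization and anatomical priors are omitted.
```python
# One network, several domains: each optimization step draws a batch from
# every dataset so the shared weights see all domains.
import itertools
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train_multi_domain(model, loaders, optimizer, steps=100):
    iters = [itertools.cycle(dl) for dl in loaders.values()]
    criterion = nn.CrossEntropyLoss()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = sum(criterion(model(x), y) for x, y in (next(it) for it in iters))
        loss.backward()
        optimizer.step()

# toy demo: three "domains" of 2-class segmentation on 16x16 images
model = nn.Conv2d(1, 2, 3, padding=1)      # stand-in segmentation network
loaders = {
    name: DataLoader(TensorDataset(torch.randn(8, 1, 16, 16),
                                   torch.randint(0, 2, (8, 16, 16))),
                     batch_size=4)
    for name in ("ankle", "knee", "shoulder")
}
train_multi_domain(model, loaders, torch.optim.SGD(model.parameters(), lr=0.1))
```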
- DeepSSM: A Blueprint for Image-to-Shape Deep Learning Models
Statistical shape modeling (SSM) characterizes anatomical variations in a population of shapes generated from medical images.
DeepSSM aims to provide a blueprint for deep learning-based image-to-shape models.
arXiv Detail & Related papers (2021-10-14T04:52:37Z)
- Automatic size and pose homogenization with spatial transformer network to improve and accelerate pediatric segmentation
We propose a new CNN architecture that is pose and scale invariant thanks to the use of a Spatial Transformer Network (STN).
Our architecture is composed of three sequential modules that are trained jointly.
We test the proposed method on kidney and renal tumor segmentation in abdominal pediatric CT scans.
arXiv Detail & Related papers (2021-07-06T14:50:03Z)
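As a hedged sketch of the STN-based normalization described in the entry above, the module below regresses a 2D affine transform and resamples the input with it before segmentation; layer sizes are illustrative assumptions, and the paper's full three-module pipeline is not reproduced.
```python
# Minimal 2D spatial transformer: a small localization network predicts an
# affine transform, which is used to resample the input image.
import torch
import torch.nn as nn
import torch.nn.functional as F

class STN(nn.Module):
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(1, 8, 7), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(8, 10, 5), nn.MaxPool2d(2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(32), nn.ReLU(), nn.Linear(32, 6),
        )
        self.loc[-1].weight.data.zero_()   # start at the identity transform
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)                  # per-image affine
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)  # resampled image

stn = STN()
normalized = stn(torch.randn(2, 1, 64, 64))   # size/pose-normalized input
```
Initializing the final layer to the identity transform is a standard STN trick: the network starts by passing images through unchanged and only learns to warp them when that reduces the downstream loss.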
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of features from the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
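The few-shot entry above trains a segmentor episodically and encourages same-class features to cluster; a common prototype-based formulation of that idea is sketched below (masked average pooling of support features, cosine similarity for query pixels). This is a generic sketch, not necessarily this paper's network.
```python
# Prototype-based few-shot segmentation: pool a class prototype from the
# support features under the support mask, then score query pixels by
# cosine similarity to that prototype.
import torch
import torch.nn.functional as F

def masked_prototype(support_feat, support_mask):
    """support_feat: (B, D, H, W); support_mask: (B, 1, H, W) in {0, 1}."""
    summed = (support_feat * support_mask).sum(dim=(2, 3))
    return summed / support_mask.sum(dim=(2, 3)).clamp(min=1e-6)   # (B, D)

def query_scores(query_feat, prototype):
    """Cosine similarity of every query pixel to the class prototype."""
    q = F.normalize(query_feat, dim=1)
    p = F.normalize(prototype, dim=1)[..., None, None]             # (B, D, 1, 1)
    return (q * p).sum(dim=1, keepdim=True)                        # (B, 1, H, W)

feats = torch.randn(2, 32, 24, 24)                 # episode's support features
mask = (torch.rand(2, 1, 24, 24) > 0.5).float()    # support annotation
scores = query_scores(torch.randn(2, 32, 24, 24), masked_prototype(feats, mask))
```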