Learning Population-level Shape Statistics and Anatomy Segmentation From Images: A Joint Deep Learning Model
- URL: http://arxiv.org/abs/2201.03481v1
- Date: Mon, 10 Jan 2022 17:24:35 GMT
- Title: Learning Population-level Shape Statistics and Anatomy Segmentation From Images: A Joint Deep Learning Model
- Authors: Wenzheng Tao, Riddhish Bhalodia, Shireen Elhabian
- Abstract summary: Point distribution models (PDMs) represent the anatomical surface via a dense set of correspondences.
We propose a deep-learning-based framework that simultaneously learns these two coordinate spaces directly from the volumetric images.
- The proposed joint model serves a dual purpose: the world correspondences can be used directly for shape analysis applications, circumventing the heavy pre-processing and segmentation involved in traditional PDM models.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Statistical shape modeling is an essential tool for the quantitative analysis
of anatomical populations. Point distribution models (PDMs) represent the
anatomical surface via a dense set of correspondences, an intuitive and
easy-to-use shape representation for subsequent applications. These
correspondences are exhibited in two coordinate spaces: the local coordinates
describing the geometrical features of each individual anatomical surface and
the world coordinates representing the population-level statistical shape
information after removing global alignment differences across samples in the
given cohort. We propose a deep-learning-based framework that simultaneously
learns these two coordinate spaces directly from the volumetric images. The
proposed joint model serves a dual purpose: the world correspondences can
be used directly for shape analysis applications, circumventing the heavy
pre-processing and segmentation involved in traditional PDM models.
Additionally, the local correspondences can be used for anatomy segmentation.
We demonstrate the efficacy of this joint model on two datasets, both for
shape modeling applications and for inferring the anatomical surface.
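As a rough illustration of the idea, a 3D CNN encoder can be given two heads: one regressing world-space correspondences (population-aligned, ready for shape statistics) and one regressing a per-sample transform that carries those points back into the image's local coordinates. The PyTorch sketch below is a minimal, hypothetical rendering under assumed choices: the JointPDMNet name, the 128-point model, the layer sizes, and the affine (rather than similarity) parameterization of the transform are all illustrative, not the authors' architecture.

```python
import torch
import torch.nn as nn

class JointPDMNet(nn.Module):
    """Sketch of a joint model: one encoder, two coordinate spaces."""

    def __init__(self, num_points: int = 128):
        super().__init__()
        self.num_points = num_points
        # Small 3D CNN encoder; layer sizes are placeholders.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        # World correspondences: globally aligned across the cohort.
        self.world_head = nn.Linear(64, num_points * 3)
        # Flattened 3x4 affine that undoes the global alignment per sample.
        self.xform_head = nn.Linear(64, 12)

    def forward(self, vol: torch.Tensor):
        feat = self.encoder(vol)
        world = self.world_head(feat).view(-1, self.num_points, 3)
        A = self.xform_head(feat).view(-1, 3, 4)
        R, t = A[:, :, :3], A[:, :, 3]
        # Local correspondences live on the individual anatomical surface.
        local = world @ R.transpose(1, 2) + t[:, None, :]
        return world, local

# world drives shape analysis; local drives segmentation/surface inference.
world, local = JointPDMNet()(torch.randn(2, 1, 64, 64, 64))
```

In this setup, stacking the world outputs across a cohort and running PCA would recover population-level modes of shape variation, the standard downstream use of PDM correspondences.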
Related papers
- An End-to-End Deep Learning Generative Framework for Refinable Shape Matching and Generation [45.820901263103806]
Generative modelling for shapes is a prerequisite for In-Silico Clinical Trials (ISCTs).
We develop a novel unsupervised geometric deep-learning model to establish refinable shape correspondences in a latent space.
We extend our proposed base model to a joint shape generative-clustering multi-atlas framework to incorporate further variability.
arXiv Detail & Related papers (2024-03-10T21:33:53Z)
- TailorMe: Self-Supervised Learning of an Anatomically Constrained Volumetric Human Shape Model [4.474107938692397]
Human shape spaces have been extensively studied, as they are a core element of human shape and pose inference tasks.
We fit an anatomical template, consisting of skeleton bones and soft tissue, to the surface scans of the CAESAR database.
This data is then used to learn an anatomically constrained volumetric human shape model in a self-supervised fashion.
arXiv Detail & Related papers (2023-11-03T07:42:19Z)
- Image2SSM: Reimagining Statistical Shape Models from Images with Radial Basis Functions [4.422330219605964]
We propose Image2SSM, a novel deep-learning-based approach for statistical shape modeling.
Image2SSM learns a radial-basis-function (RBF)-based representation of shapes directly from images.
It can characterize populations of biological structures of interest by constructing statistical landmark-based shape models from ensembles of anatomical shapes; a generic RBF surface sketch appears after this list.
arXiv Detail & Related papers (2023-05-19T18:08:10Z)
- Mesh2SSM: From Surface Meshes to Statistical Shape Models of Anatomy [0.0]
We propose Mesh2SSM, a new approach that leverages unsupervised, permutation-invariant representation learning to estimate how to deform a template point cloud to subject-specific meshes.
Mesh2SSM can also learn a population-specific template, reducing any bias due to template selection.
arXiv Detail & Related papers (2023-05-13T00:03:59Z)
- S3M: Scalable Statistical Shape Modeling through Unsupervised Correspondences [91.48841778012782]
We propose an unsupervised method to simultaneously learn local and global shape structures across population anatomies.
Our pipeline significantly improves unsupervised correspondence estimation for SSMs compared to baseline methods.
Our method is robust enough to learn from noisy neural network predictions, potentially enabling scaling SSMs to larger patient populations.
arXiv Detail & Related papers (2023-04-15T09:39:52Z)
- A Generative Shape Compositional Framework to Synthesise Populations of Virtual Chimaeras [52.33206865588584]
We introduce a generative shape model for complex anatomical structures, learnable from unpaired datasets.
We build virtual chimaeras from databases of whole-heart shape assemblies that each contribute samples for heart substructures.
Our approach significantly outperforms a PCA-based shape model (trained with complete data) in terms of generalisability and specificity.
arXiv Detail & Related papers (2022-10-04T13:36:52Z)
- Statistical Shape Modeling of Biventricular Anatomy with Shared Boundaries [16.287876512923084]
This paper presents a general and flexible data-driven approach for building statistical shape models of multi-organ anatomies with shared boundaries.
Shape changes within these shared boundaries of the heart can indicate potential pathological changes that lead to uncoordinated contraction and poor end-organ perfusion.
arXiv Detail & Related papers (2022-09-06T15:54:37Z)
- NeuroMorph: Unsupervised Shape Interpolation and Correspondence in One Go [109.88509362837475]
We present NeuroMorph, a new neural network architecture that takes as input two 3D shapes.
NeuroMorph produces smooth interpolations and point-to-point correspondences between them.
It works well for a large variety of input shapes, including non-isometric pairs from different object categories.
arXiv Detail & Related papers (2021-06-17T12:25:44Z)
- Learning Deep Features for Shape Correspondence with Domain Invariance [10.230933226423984]
Correspondence-based shape models are key to various medical imaging applications that rely on a statistical analysis of anatomies.
This paper proposes an automated feature learning approach, using deep convolutional neural networks to extract correspondence-friendly features from shape ensembles.
arXiv Detail & Related papers (2021-02-21T02:25:32Z)
- Benchmarking off-the-shelf statistical shape modeling tools in clinical applications [53.47202621511081]
We systematically assess the outcome of widely used, state-of-the-art SSM tools.
We propose validation frameworks for anatomical landmark/measurement inference and lesion screening.
ShapeWorks and Deformetrica shape models are found to capture clinically relevant population-level variability.
arXiv Detail & Related papers (2020-09-07T03:51:35Z)
- What and Where: Modeling Skeletons from Semantic and Spatial Perspectives for Action Recognition [46.836815779215456]
We propose to model skeletons from a novel spatial perspective, from which the model takes the spatial location as prior knowledge to group human joints.
From the semantic perspective, we propose a Transformer-like network that is expert in modeling joint correlations.
From the spatial perspective, we transform the skeleton data into the sparse format for efficient feature extraction.
arXiv Detail & Related papers (2020-04-07T10:53:45Z)
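As flagged in the Image2SSM entry above, that paper builds on an RBF-based shape representation. Purely as a generic illustration of the underlying technique (classic implicit RBF surface fitting, not that paper's actual pipeline), the sketch below fits a thin-plate-spline RBF f with f(x) = 0 on the surface to sparse samples of a unit sphere using SciPy; the sphere, sample count, and offset eps are arbitrary stand-ins.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# On-surface samples of a unit sphere (stand-in for an anatomical surface).
pts = rng.normal(size=(200, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

# Off-surface constraints along the outward normals keep the fit
# non-trivial: f = +eps outside, f = -eps inside, f = 0 on the surface.
eps = 0.1
centers = np.vstack([pts, pts * (1 + eps), pts * (1 - eps)])
values = np.concatenate([np.zeros(200), np.full(200, eps), np.full(200, -eps)])

f = RBFInterpolator(centers, values, kernel="thin_plate_spline")

# New points on the sphere should evaluate to approximately zero.
q = rng.normal(size=(5, 3))
q /= np.linalg.norm(q, axis=1, keepdims=True)
print(f(q))  # close to [0, 0, 0, 0, 0]
```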
This list is automatically generated from the titles and abstracts of the papers on this site.