Discriminative and Generative Models for Anatomical Shape Analysis on
Point Clouds with Deep Neural Networks
- URL: http://arxiv.org/abs/2010.00820v1
- Date: Fri, 2 Oct 2020 07:37:40 GMT
- Title: Discriminative and Generative Models for Anatomical Shape Analysis on
Point Clouds with Deep Neural Networks
- Authors: Benjamin Gutierrez Becker, Ignacio Sarasua, Christian Wachinger
- Abstract summary: We introduce deep neural networks for the analysis of anatomical shapes that learn a low-dimensional shape representation from the given task.
Our framework is modular and consists of several computing blocks that perform fundamental shape processing tasks.
We propose a discriminative model for disease classification and age regression, as well as a generative model for the accurate reconstruction of shapes.
- Score: 3.7814216736076434
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce deep neural networks for the analysis of anatomical shapes that
learn a low-dimensional shape representation from the given task, instead of
relying on hand-engineered representations. Our framework is modular and
consists of several computing blocks that perform fundamental shape processing
tasks. The networks operate on unordered point clouds and provide invariance to
similarity transformations, avoiding the need to identify point correspondences
between shapes. Based on the framework, we assemble a discriminative model for
disease classification and age regression, as well as a generative model for
the accurate reconstruction of shapes. In particular, we propose a conditional
generative model, where the condition vector provides a mechanism to control
the generative process. For instance, it enables assessing shape variations
specific to a particular diagnosis when it is passed as side information. Next
to working on single shapes, we introduce an extension for the joint analysis
of multiple anatomical structures, where the simultaneous modeling of multiple
structures can lead to a more compact encoding and a better understanding of
disorders. We demonstrate the advantages of our framework in comprehensive
experiments on real and synthetic data. The key insights are that (i) learning
a shape representation specific to the given task yields higher performance
than alternative shape descriptors, (ii) multi-structure analysis is both more
efficient and more accurate than single-structure analysis, and (iii) point
clouds generated by our model capture morphological differences associated with
Alzheimer's disease, to the point that they can be used to train a
discriminative model for disease classification. Our framework naturally scales
to the analysis of large datasets, giving it the potential to learn
characteristic variations in large populations.
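Note: the abstract describes a PointNet-style pipeline, i.e. a permutation-invariant encoder over unordered point clouds, a discriminative head for classification or regression, and a conditional decoder whose condition vector (e.g. diagnosis or age) steers generation. The PyTorch sketch below is only meant to illustrate that kind of architecture; the layer sizes and the names PointCloudEncoder and ConditionalShapeDecoder are illustrative assumptions rather than the authors' implementation, and the blocks providing invariance to similarity transformations are omitted.

import torch
import torch.nn as nn

class PointCloudEncoder(nn.Module):
    # Permutation-invariant encoder: shared per-point MLP followed by max pooling.
    def __init__(self, latent_dim=64):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, points):               # points: (batch, n_points, 3)
        features = self.point_mlp(points)    # (batch, n_points, latent_dim)
        return features.max(dim=1).values    # symmetric pooling -> order invariance

class ConditionalShapeDecoder(nn.Module):
    # Decodes a latent code concatenated with a condition vector
    # (e.g. diagnosis, age) into a fixed-size point cloud.
    def __init__(self, latent_dim=64, cond_dim=2, n_points=1024):
        super().__init__()
        self.n_points = n_points
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, n_points * 3),
        )

    def forward(self, latent, condition):
        z = torch.cat([latent, condition], dim=-1)
        return self.net(z).view(-1, self.n_points, 3)

# Discriminative head for disease classification on the shared encoding.
classifier = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))

encoder, decoder = PointCloudEncoder(), ConditionalShapeDecoder()
cloud = torch.rand(8, 1024, 3)   # hypothetical batch of anatomical point clouds
cond = torch.rand(8, 2)          # hypothetical condition vector, e.g. [diagnosis, age]
latent = encoder(cloud)          # low-dimensional shape representation
logits = classifier(latent)      # discriminative branch: classification/regression
recon = decoder(latent, cond)    # conditional generative branch: shape reconstruction

The generative branch would typically be trained with a correspondence-free loss such as the Chamfer distance, which matches the paper's stated goal of avoiding point correspondences between shapes.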
Related papers
- An End-to-End Deep Learning Generative Framework for Refinable Shape
Matching and Generation [45.820901263103806]
Generative modelling for shapes is a prerequisite for In-Silico Clinical Trials (ISCTs).
We develop a novel unsupervised geometric deep-learning model to establish refinable shape correspondences in a latent space.
We extend our proposed base model to a joint shape generative-clustering multi-atlas framework to incorporate further variability.
arXiv Detail & Related papers (2024-03-10T21:33:53Z) - ReshapeIT: Reliable Shape Interaction with Implicit Template for Anatomical Structure Reconstruction [59.971808117043366]
ReShapeIT represents an anatomical structure with an implicit template field shared within the same category.
It ensures the implicit template field generates valid templates by strengthening the constraint of the correspondence between the instance shape and the template shape.
A Template Interaction Module is introduced to reconstruct unseen shapes by interacting valid template shapes with instance-wise latent codes.
arXiv Detail & Related papers (2023-12-11T07:09:32Z) - Mesh2SSM: From Surface Meshes to Statistical Shape Models of Anatomy [0.0]
We propose Mesh2SSM, a new approach that leverages unsupervised, permutation-invariant representation learning to estimate how to deform a template point cloud to subject-specific meshes.
Mesh2SSM can also learn a population-specific template, reducing any bias due to template selection.
arXiv Detail & Related papers (2023-05-13T00:03:59Z) - A Generative Shape Compositional Framework to Synthesise Populations of
Virtual Chimaeras [52.33206865588584]
We introduce a generative shape model for complex anatomical structures, learnable from unpaired datasets.
We build virtual chimaeras from databases of whole-heart shape assemblies that each contribute samples for heart substructures.
Our approach significantly outperforms a PCA-based shape model (trained with complete data) in terms of generalisability and specificity.
arXiv Detail & Related papers (2022-10-04T13:36:52Z) - Landmark-free Statistical Shape Modeling via Neural Flow Deformations [0.5897108307012394]
We present FlowSSM, a novel shape modeling approach that learns shape variability without requiring dense correspondence between training instances.
Our model outperforms state-of-the-art methods in providing an expressive and robust shape prior for distal femur and liver.
arXiv Detail & Related papers (2022-09-14T18:17:19Z) - Amortized Inference for Causal Structure Learning [72.84105256353801]
Learning causal structure poses a search problem that typically involves evaluating structures using a score or independence test.
We train a variational inference model to predict the causal structure from observational/interventional data.
Our models exhibit robust generalization capabilities under substantial distribution shift.
arXiv Detail & Related papers (2022-05-25T17:37:08Z) - Generalized Shape Metrics on Neural Representations [26.78835065137714]
We provide a family of metric spaces that quantify representational dissimilarity.
We modify existing representational similarity measures based on canonical correlation analysis to satisfy the triangle inequality.
We identify relationships between neural representations that are interpretable in terms of anatomical features and model performance.
arXiv Detail & Related papers (2021-10-27T19:48:55Z) - Model-agnostic multi-objective approach for the evolutionary discovery
of mathematical models [55.41644538483948]
In modern data science, it is often more important to understand the properties of a model and which of its parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the algorithm's desired properties.
arXiv Detail & Related papers (2021-07-07T11:17:09Z) - Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z) - Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype
Prediction [55.94378672172967]
We focus on few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta-learning techniques to develop a new model that can extract common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, called Prototypical Network, which is a simple yet effective meta-learning machine for few-shot image classification.
arXiv Detail & Related papers (2020-09-02T02:50:30Z)