Can point cloud networks learn statistical shape models of anatomies?
- URL: http://arxiv.org/abs/2305.05610v2
- Date: Thu, 20 Jul 2023 16:46:36 GMT
- Title: Can point cloud networks learn statistical shape models of anatomies?
- Authors: Jadie Adams and Shireen Elhabian
- Abstract summary: We show that point cloud encoder-decoder-based completion networks offer untapped potential for Statistical Shape Modeling (SSM).
Our work paves the way for further exploration of point cloud deep learning for SSM.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Statistical Shape Modeling (SSM) is a valuable tool for investigating and
quantifying anatomical variations within populations of anatomies. However,
traditional correspondence-based SSM generation methods involve a prohibitive
inference process and require complete geometric proxies (e.g., high-resolution
binary volumes or surface meshes) as input shapes to construct the SSM.
Unordered 3D point cloud representations of shapes are more easily acquired
from various medical imaging practices (e.g., thresholded images and surface
scanning). Point cloud deep networks have recently achieved remarkable success
in learning permutation-invariant features for different point cloud tasks
(e.g., completion, semantic segmentation, classification), but their
application to learning SSM from point clouds is, to date, unexplored. In this
work, we demonstrate that existing point cloud encoder-decoder-based completion
networks offer untapped potential for SSM, capturing population-level
statistical representations of shapes while reducing the inference burden and
relaxing the input requirements. We discuss the limitations of these techniques
in the SSM setting and suggest future improvements. Our work paves the way
for further exploration of point cloud deep learning for SSM, a promising
avenue for advancing the shape analysis literature and broadening SSM to diverse
use cases.
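For readers unfamiliar with this class of models, the sketch below shows the basic structure of a point cloud encoder-decoder (completion-style) network of the kind referred to above. It assumes PyTorch; the class name, layer sizes, and toy usage are illustrative assumptions, not the specific architectures benchmarked in the paper.

```python
# A minimal sketch of a PointNet-style encoder-decoder for point cloud
# completion, assuming PyTorch. Layer sizes, names, and the toy usage are
# illustrative; they are not the specific networks evaluated in the paper.
import torch
import torch.nn as nn


class PointCloudAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 128, num_output_points: int = 1024):
        super().__init__()
        # Shared per-point MLP; permutation invariance comes from the
        # symmetric max-pool over points, not from any point ordering.
        self.encoder = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, latent_dim, 1),
        )
        # Fully connected decoder regresses a fixed-size output point set.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, num_output_points * 3),
        )
        self.num_output_points = num_output_points

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, num_input_points, 3), unordered and possibly partial
        features = self.encoder(points.transpose(1, 2))  # (batch, latent_dim, N)
        latent = torch.max(features, dim=2).values       # symmetric pooling
        output = self.decoder(latent)                     # (batch, M * 3)
        return output.view(-1, self.num_output_points, 3)


if __name__ == "__main__":
    model = PointCloudAutoencoder()
    partial = torch.rand(2, 512, 3)   # two toy partial input clouds
    completed = model(partial)
    print(completed.shape)            # torch.Size([2, 1024, 3])
```

The relevance to SSM is that the max-pooled latent code and the decoder's fixed-size, consistently ordered output point set can serve as a learned population-level shape representation that can be analyzed statistically across subjects (e.g., with PCA); the paper investigates how well off-the-shelf completion networks of this kind serve that purpose and where they fall short.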
Related papers
- Hierarchical Feature Learning for Medical Point Clouds via State Space Model [5.086862917025204]
This paper presents a state space model (SSM)-based hierarchical feature learning framework for medical point cloud understanding.
To assist SSM in processing point clouds, we introduce coordinate-order and inside-out scanning strategies.
To evaluate the proposed method, we build a large-scale medical point cloud dataset named MedPointS.
arXiv Detail & Related papers (2025-04-17T15:22:31Z) - Bridging Domain Gap of Point Cloud Representations via Self-Supervised Geometric Augmentation [15.881442863961531]
We introduce a novel scheme for induced geometric invariance of point cloud representations across domains.
On one hand, a novel pretext task of predicting translation of distances of augmented samples is proposed to alleviate centroid shift of point clouds.
On the other hand, we pioneer an integration of the relational self-supervised learning on geometrically-augmented point clouds.
arXiv Detail & Related papers (2024-09-11T02:39:19Z) - Training-Free Point Cloud Recognition Based on Geometric and Semantic Information Fusion [18.588413607753278]
We propose a training-free method that integrates both geometric and semantic features.
Our method outperforms existing state-of-the-art training-free approaches on mainstream benchmark datasets.
arXiv Detail & Related papers (2024-09-07T08:20:02Z) - Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z) - MASSM: An End-to-End Deep Learning Framework for Multi-Anatomy Statistical Shape Modeling Directly From Images [1.9029890402585894]
We introduce MASSM, a novel end-to-end deep learning framework that simultaneously localizes multiple anatomies, estimates population-level statistical representations, and delineates shape representations directly in image space.
Our results show that MASSM, which delineates anatomy in image space and handles multiple anatomies through a multitask network, provides superior shape information compared to segmentation networks for medical imaging tasks.
arXiv Detail & Related papers (2024-03-16T20:16:37Z) - Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective [64.04617968947697]
We introduce a novel data-model co-design perspective: to promote superior weight sparsity.
Specifically, customized Visual Prompts are mounted to upgrade neural Network sparsification in our proposed VPNs framework.
arXiv Detail & Related papers (2023-12-03T13:50:24Z) - Clustering based Point Cloud Representation Learning for 3D Analysis [80.88995099442374]
We propose a clustering based supervised learning scheme for point cloud analysis.
Unlike current de-facto, scene-wise training paradigm, our algorithm conducts within-class clustering on the point embedding space.
Our algorithm shows notable improvements on famous point cloud segmentation datasets.
arXiv Detail & Related papers (2023-07-27T03:42:12Z) - Point2SSM: Learning Morphological Variations of Anatomies from Point
Cloud [5.874142059884521]
We present Point2SSM, a novel unsupervised learning approach for constructing correspondence-based statistical shape models (SSMs) directly from raw point clouds.
SSM is crucial in clinical research, enabling population-level analysis of morphological variation in bones and organs.
arXiv Detail & Related papers (2023-05-23T19:36:24Z) - S3M: Scalable Statistical Shape Modeling through Unsupervised
Correspondences [91.48841778012782]
We propose an unsupervised method to simultaneously learn local and global shape structures across population anatomies.
Our pipeline significantly improves unsupervised correspondence estimation for SSMs compared to baseline methods.
Our method is robust enough to learn from noisy neural network predictions, potentially enabling scaling SSMs to larger patient populations.
arXiv Detail & Related papers (2023-04-15T09:39:52Z) - Unsupervised Point Cloud Representation Learning with Deep Neural
Networks: A Survey [104.71816962689296]
Unsupervised point cloud representation learning has attracted increasing attention due to the constraint in large-scale point cloud labelling.
This paper provides a comprehensive review of unsupervised point cloud representation learning using deep neural networks.
arXiv Detail & Related papers (2022-02-28T07:46:05Z) - DeepSSM: A Blueprint for Image-to-Shape Deep Learning Models [4.608133071225539]
Statistical shape modeling (SSM) characterizes anatomical variations in a population of shapes generated from medical images.
DeepSSM aims to provide a blueprint for deep learning-based image-to-shape models.
arXiv Detail & Related papers (2021-10-14T04:52:37Z) - Spatial-Temporal Multi-Cue Network for Continuous Sign Language
Recognition [141.24314054768922]
We propose a spatial-temporal multi-cue (STMC) network to solve the vision-based sequence learning problem.
To validate the effectiveness, we perform experiments on three large-scale CSLR benchmarks.
arXiv Detail & Related papers (2020-02-08T15:38:44Z)