Weakly SSM: On the Viability of Weakly Supervised Segmentations for Statistical Shape Modeling
- URL: http://arxiv.org/abs/2407.15260v1
- Date: Sun, 21 Jul 2024 20:24:21 GMT
- Title: Weakly SSM: On the Viability of Weakly Supervised Segmentations for Statistical Shape Modeling
- Authors: Janmesh Ukey, Tushar Kataria, Shireen Y. Elhabian
- Abstract summary: Statistical Shape Models (SSMs) excel at identifying population-level anatomical variations.
SSMs are often constrained by the necessity for expert-driven manual segmentation.
Recent deep learning approaches enable the direct estimation of SSMs from unsegmented images.
- Score: 1.9029890402585894
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Statistical Shape Models (SSMs) excel at identifying population-level anatomical variations, which is at the core of various clinical and biomedical applications, including morphology-based diagnostics and surgical planning. However, the effectiveness of SSMs is often constrained by the necessity for expert-driven manual segmentation, a process that is both time-intensive and expensive, thereby restricting their broader application and utility. Recent deep learning approaches enable the direct estimation of SSMs from unsegmented images. While these models can predict SSMs without segmentation during deployment, they do not address the challenge of acquiring the manual annotations needed for training, particularly in resource-limited settings. Semi-supervised and foundation models for anatomy segmentation can mitigate the annotation burden. Yet, despite the abundance of available approaches, there are no established guidelines to inform end-users on their effectiveness for the downstream task of constructing SSMs. In this study, we systematically evaluate the potential of weakly supervised methods as viable alternatives to manual segmentations for building SSMs. We establish a new performance benchmark by employing various semi-supervised and foundation model methods for anatomy segmentation under low-annotation settings, utilizing the predicted segmentations for the task of SSM construction. We compare the resulting modes of shape variation and use quantitative metrics to assess agreement with a shape model derived from a manually annotated dataset. Our results indicate that some methods produce noisy segmentations, which is very unfavorable for SSM tasks, while others can capture the correct modes of variation in the population cohort with a 60-80% reduction in required manual annotation.
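The abstract does not spell out how the modes of shape variation are compared, so the following is a minimal sketch of one plausible comparison: PCA on correspondence points from the manually segmented cohort and from the weakly supervised cohort, followed by per-mode agreement and explained variance. The array names (`manual_pts`, `weak_pts`) and the random placeholder data are hypothetical; this is not the paper's exact pipeline.

```python
import numpy as np

def shape_modes(points, n_modes=5):
    """PCA on correspondence points.

    points: (n_subjects, n_particles * 3) array, one flattened set of 3D
    correspondence points per subject. Returns the mean shape, the leading
    eigenvectors (modes), and the fraction of variance each mode explains.
    """
    mean = points.mean(axis=0)
    _, s, vt = np.linalg.svd(points - mean, full_matrices=False)
    var = (s ** 2) / (points.shape[0] - 1)
    return mean, vt[:n_modes], (var / var.sum())[:n_modes]

def mode_agreement(modes_a, modes_b):
    """Cosine similarity between matching modes of two shape models
    (sign-invariant, since PCA eigenvectors are defined only up to sign)."""
    return np.abs(np.sum(modes_a * modes_b, axis=1))

# Hypothetical inputs: correspondences from manual segmentations and from
# segmentations predicted by a weakly supervised model.
manual_pts = np.random.rand(40, 256 * 3)                        # placeholder
weak_pts = manual_pts + 0.01 * np.random.randn(*manual_pts.shape)

_, manual_modes, manual_ev = shape_modes(manual_pts)
_, weak_modes, weak_ev = shape_modes(weak_pts)

print("per-mode agreement:", mode_agreement(manual_modes, weak_modes))
print("explained variance (manual):", manual_ev)
print("explained variance (weak):  ", weak_ev)
```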
Related papers
- DiM: $f$-Divergence Minimization Guided Sharpness-Aware Optimization for Semi-supervised Medical Image Segmentation [8.70112307145508]
We propose a sharpness-aware optimization method based on $f$-divergence minimization.
This method enhances the model's stability by fine-tuning the sensitivity of model parameters.
It also improves the model's adaptability to different datasets through the introduction of $f$-divergence.
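The blurb above only names the ingredients (sharpness-aware optimization, an f-divergence term), so the following is a rough sketch, assuming a PyTorch segmentation model and weak/strong augmented views of an unlabeled batch, of how a sharpness-aware update around a KL-type f-divergence consistency loss could be wired up. `model`, `optimizer`, `weak_view`, `strong_view`, and `rho` are assumed placeholders; the actual DiM formulation is not reproduced here.

```python
import torch
import torch.nn.functional as F

def f_divergence_consistency(model, weak_view, strong_view):
    # KL divergence (one member of the f-divergence family) between the
    # model's predictions on two augmented views of the same unlabeled batch.
    with torch.no_grad():
        target = F.softmax(model(weak_view), dim=1)
    log_pred = F.log_softmax(model(strong_view), dim=1)
    return F.kl_div(log_pred, target, reduction="batchmean")

def sharpness_aware_step(model, optimizer, weak_view, strong_view, rho=0.05):
    # 1) gradient of the consistency loss at the current weights
    f_divergence_consistency(model, weak_view, strong_view).backward()

    # 2) perturb weights toward the approximate worst case within an
    #    L2 ball of radius rho (standard sharpness-aware minimization step)
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads])) + 1e-12
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / grad_norm
            p.add_(e)
            eps.append(e)
    optimizer.zero_grad()

    # 3) gradient at the perturbed weights, then undo the perturbation
    #    and take the optimizer step from the original weights
    f_divergence_consistency(model, weak_view, strong_view).backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
```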
arXiv Detail & Related papers (2024-11-19T09:07:26Z) - Revisiting SMoE Language Models by Evaluating Inefficiencies with Task Specific Expert Pruning [78.72226641279863]
Sparse Mixture of Expert (SMoE) models have emerged as a scalable alternative to dense models in language modeling.
Our research explores task-specific model pruning to inform decisions about designing SMoE architectures.
We introduce an adaptive task-aware pruning technique UNCURL to reduce the number of experts per MoE layer in an offline manner post-training.
arXiv Detail & Related papers (2024-09-02T22:35:03Z) - Weakly Supervised Bayesian Shape Modeling from Unsegmented Medical Images [4.424170214926035]
Correspondence-based statistical shape modeling (SSM) facilitates population-level morphometrics.
Recent advancements in deep learning have streamlined this process at inference time.
We introduce a weakly supervised deep learning approach to predict SSM from images using point cloud supervision.
arXiv Detail & Related papers (2024-05-15T20:47:59Z) - MASSM: An End-to-End Deep Learning Framework for Multi-Anatomy Statistical Shape Modeling Directly From Images [1.9029890402585894]
We introduce MASSM, a novel end-to-end deep learning framework that simultaneously localizes multiple anatomies, estimates population-level statistical representations, and delineates shape representations directly in image space.
Our results show that MASSM, which delineates anatomy in image space and handles multiple anatomies through a multitask network, provides superior shape information compared to segmentation networks for medical imaging tasks.
arXiv Detail & Related papers (2024-03-16T20:16:37Z) - S3M: Scalable Statistical Shape Modeling through Unsupervised Correspondences [91.48841778012782]
We propose an unsupervised method to simultaneously learn local and global shape structures across population anatomies.
Our pipeline significantly improves unsupervised correspondence estimation for SSMs compared to baseline methods.
Our method is robust enough to learn from noisy neural network predictions, potentially enabling scaling SSMs to larger patient populations.
arXiv Detail & Related papers (2023-04-15T09:39:52Z) - Instance-specific and Model-adaptive Supervision for Semi-supervised Semantic Segmentation [49.82432158155329]
We propose an instance-specific and model-adaptive supervision for semi-supervised semantic segmentation, named iMAS.
iMAS learns from unlabeled instances progressively by weighting their corresponding consistency losses based on the evaluated hardness.
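As a rough illustration of the hardness-weighted consistency idea described above, the sketch below weights each unlabeled instance's consistency loss by a simple hardness proxy (teacher uncertainty). The tensor names are placeholders, and iMAS's actual hardness estimate and weighting scheme may differ.

```python
import torch
import torch.nn.functional as F

def hardness_weighted_consistency(student_logits, teacher_logits):
    """Per-instance consistency loss, weighted by a hardness proxy.

    student_logits, teacher_logits: (batch, classes, H, W) segmentation
    logits. Hardness is approximated here by the teacher's mean uncertainty
    (1 - max softmax probability); iMAS's actual hardness measure differs.
    """
    teacher_prob = F.softmax(teacher_logits, dim=1)
    student_prob = F.softmax(student_logits, dim=1)

    # per-instance consistency: mean squared error between probability maps
    per_instance = ((student_prob - teacher_prob) ** 2).mean(dim=(1, 2, 3))

    # per-instance hardness proxy in [0, 1]
    confidence = teacher_prob.max(dim=1).values          # (batch, H, W)
    hardness = 1.0 - confidence.mean(dim=(1, 2))         # (batch,)

    # weight each instance's loss by its hardness, then average
    return (hardness.detach() * per_instance).mean()
```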
arXiv Detail & Related papers (2022-11-21T10:37:28Z) - Adversarial Sample Enhanced Domain Adaptation: A Case Study on Predictive Modeling with Electronic Health Records [57.75125067744978]
We propose a data augmentation method to facilitate domain adaptation.
Adversarially generated samples are used during domain adaptation.
Results confirm the effectiveness of our method and its generality across different tasks.
arXiv Detail & Related papers (2021-01-13T03:20:20Z) - Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our findings suggest a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z) - Benchmarking off-the-shelf statistical shape modeling tools in clinical applications [53.47202621511081]
We systematically assess the outcome of widely used, state-of-the-art SSM tools.
We propose validation frameworks for anatomical landmark/measurement inference and lesion screening.
ShapeWorks and Deformetrica shape models are found to capture clinically relevant population-level variability.
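The blurb does not list the quantitative criteria used in the benchmark; below is a brief sketch of two standard measures often used to evaluate correspondence-based shape models, generalization (reconstruction error on held-out shapes) and specificity (realism of shapes sampled from the model), assuming correspondence-point matrices as numpy arrays. These are illustrative standard metrics, not necessarily the benchmark's exact protocol.

```python
import numpy as np

def generalization(train_pts, test_pts, n_modes):
    """Mean reconstruction error of held-out shapes using n_modes PCA modes."""
    mean = train_pts.mean(axis=0)
    _, _, vt = np.linalg.svd(train_pts - mean, full_matrices=False)
    basis = vt[:n_modes]                        # (n_modes, D)
    coeffs = (test_pts - mean) @ basis.T        # project held-out shapes
    recon = mean + coeffs @ basis               # reconstruct from n_modes
    return np.mean(np.linalg.norm(test_pts - recon, axis=1))

def specificity(train_pts, n_modes, n_samples=100, seed=0):
    """Mean distance from shapes sampled from the model to the closest
    training shape; lower values mean samples resemble real anatomy."""
    rng = np.random.default_rng(seed)
    mean = train_pts.mean(axis=0)
    _, s, vt = np.linalg.svd(train_pts - mean, full_matrices=False)
    std = s[:n_modes] / np.sqrt(len(train_pts) - 1)
    samples = mean + (rng.standard_normal((n_samples, n_modes)) * std) @ vt[:n_modes]
    dists = np.linalg.norm(samples[:, None, :] - train_pts[None, :, :], axis=2)
    return dists.min(axis=1).mean()
```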
arXiv Detail & Related papers (2020-09-07T03:51:35Z) - Semi-supervised Pathology Segmentation with Disentangled Representations [10.834978793226444]
We propose the Anatomy-Pathology Disentanglement Network (APD-Net), a pathology segmentation model that, for the first time, jointly learns disentangled anatomy and pathology representations together with pathology segmentation.
APD-Net can perform pathology segmentation with few annotations, maintain performance with different amounts of supervision, and outperform related deep learning methods.
arXiv Detail & Related papers (2020-09-05T17:07:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.