DeepSSM: A Blueprint for Image-to-Shape Deep Learning Models
- URL: http://arxiv.org/abs/2110.07152v1
- Date: Thu, 14 Oct 2021 04:52:37 GMT
- Title: DeepSSM: A Blueprint for Image-to-Shape Deep Learning Models
- Authors: Riddhish Bhalodia, Shireen Elhabian, Jadie Adams, Wenzheng Tao,
Ladislav Kavan, Ross Whitaker
- Abstract summary: Statistical shape modeling (SSM) characterizes anatomical variations in a population of shapes generated from medical images.
DeepSSM aims to provide a blueprint for deep learning-based image-to-shape models.
- Score: 4.608133071225539
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Statistical shape modeling (SSM) characterizes anatomical variations in a
population of shapes generated from medical images. SSM requires a consistent
shape representation across samples in the shape cohort. Establishing this
representation entails a processing pipeline that includes anatomy
segmentation, re-sampling, registration, and non-linear optimization. These
shape representations are then used to extract low-dimensional shape
descriptors that facilitate subsequent analyses in different applications.
However, the current process of obtaining these shape descriptors from imaging
data relies on human and computational resources, requiring domain expertise
for segmenting anatomies of interest. Moreover, this same taxing pipeline needs
to be repeated to infer shape descriptors for new image data using a
pre-trained/existing shape model. Here, we propose DeepSSM, a deep
learning-based framework for learning the functional mapping from images to
low-dimensional shape descriptors and their associated shape representations,
thereby inferring statistical representation of anatomy directly from 3D
images. Once trained using an existing shape model, DeepSSM circumvents the
heavy manual pre-processing and segmentation and significantly reduces
computation time, making it a viable solution for fully end-to-end SSM
applications. In addition, we introduce a model-based data-augmentation
strategy to address data scarcity. Finally, this paper presents and analyzes
two different architectural variants of DeepSSM with different loss functions
using three medical datasets and their downstream clinical application.
Experiments show that DeepSSM performs comparably to, or better than,
state-of-the-art SSM both quantitatively and on application-driven downstream
tasks. Therefore, DeepSSM aims to provide a comprehensive blueprint for deep
learning-based image-to-shape models.
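The shape descriptors DeepSSM regresses are the low-dimensional coordinates of a linear point-distribution model. The following is an illustrative sketch only (random stand-in data, not the authors' code): it builds a PCA shape space from corresponding landmark sets, projects a shape to its descriptor, and draws a synthetic shape from the fitted Gaussian, which is the essence of the model-based augmentation strategy.

```python
# Illustrative PCA point-distribution model of the kind DeepSSM targets.
# Shapes are assumed to be given as corresponding 3D landmark sets; PCA
# yields low-dimensional shape descriptors, and sampling in PCA space
# gives model-based data augmentation.
import numpy as np

rng = np.random.default_rng(0)

n_shapes, n_points = 40, 128
# Hypothetical training cohort: each row is a flattened (n_points * 3) shape.
shapes = rng.normal(size=(n_shapes, n_points * 3))

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape

# PCA via SVD; keep the leading modes as the shape-descriptor basis.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
n_modes = 8
basis = Vt[:n_modes]                          # (n_modes, n_points * 3)
eigvals = (S[:n_modes] ** 2) / (n_shapes - 1)  # per-mode variance

def descriptor(shape):
    """Project a shape onto the PCA basis -> low-dimensional descriptor."""
    return basis @ (shape - mean_shape)

def reconstruct(z):
    """Map a descriptor back to a full correspondence-point shape."""
    return mean_shape + basis.T @ z

# Model-based augmentation: draw a descriptor from the fitted Gaussian
# and decode it into a plausible synthetic shape.
z_new = rng.normal(scale=np.sqrt(eigvals))
augmented_shape = reconstruct(z_new)
```

In DeepSSM itself, a network regresses the descriptor `z` directly from the raw 3D image, so reconstructing the shape requires no segmentation or optimization at inference time.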
Related papers
- ShapeMamba-EM: Fine-Tuning Foundation Model with Local Shape Descriptors and Mamba Blocks for 3D EM Image Segmentation [49.42525661521625]
This paper presents ShapeMamba-EM, a specialized fine-tuning method for 3D EM segmentation.
It is tested over a wide range of EM images, covering five segmentation tasks and 10 datasets.
arXiv Detail & Related papers (2024-08-26T08:59:22Z)
- Efficient Visual State Space Model for Image Deblurring [83.57239834238035]
Convolutional neural networks (CNNs) and Vision Transformers (ViTs) have achieved excellent performance in image restoration.
We propose a simple yet effective visual state space model (EVSSM) for image deblurring.
arXiv Detail & Related papers (2024-05-23T09:13:36Z)
- MASSM: An End-to-End Deep Learning Framework for Multi-Anatomy Statistical Shape Modeling Directly From Images [1.9029890402585894]
We introduce MASSM, a novel end-to-end deep learning framework that simultaneously localizes multiple anatomies, estimates population-level statistical representations, and delineates shape representations directly in image space.
Our results show that MASSM, which delineates anatomy in image space and handles multiple anatomies through a multitask network, provides superior shape information compared to segmentation networks for medical imaging tasks.
arXiv Detail & Related papers (2024-03-16T20:16:37Z)
- Progressive DeepSSM: Training Methodology for Image-To-Shape Deep Models [4.972323953932128]
We propose a new training strategy, progressive DeepSSM, to train image-to-shape deep learning models.
We leverage shape priors via segmentation-guided multi-task learning and employ deep supervision loss to ensure learning at each scale.
Experiments show the superiority of models trained by the proposed strategy from both quantitative and qualitative perspectives.
arXiv Detail & Related papers (2023-10-02T18:17:20Z)
- ADASSM: Adversarial Data Augmentation in Statistical Shape Models From Images [0.8192907805418583]
This paper introduces a novel strategy for on-the-fly data augmentation for the Image-to-SSM framework by leveraging data-dependent noise generation or texture augmentation.
Our approach achieves improved accuracy by encouraging the model to focus on the underlying geometry rather than relying solely on pixel values.
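As a rough illustration of what "data-dependent noise" can mean (this is an assumed toy scheme for intuition, not ADASSM's actual method), one can scale Gaussian noise by the local image gradient magnitude so that perturbations concentrate on textured regions rather than flat ones:

```python
# Toy data-dependent noise augmentation: noise amplitude follows the local
# gradient magnitude, leaving flat regions untouched. Purely illustrative.
import numpy as np

def gradient_scaled_noise(image, strength=0.1, rng=None):
    rng = rng or np.random.default_rng()
    gy, gx = np.gradient(image.astype(float))     # row- and column-gradients
    grad_mag = np.hypot(gx, gy)
    scale = grad_mag / (grad_mag.max() + 1e-8)    # in [0, 1], data-dependent
    noise = rng.normal(size=image.shape) * strength * scale
    return image + noise

img = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)
aug = gradient_scaled_noise(img, strength=0.05, rng=np.random.default_rng(1))
```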
arXiv Detail & Related papers (2023-07-06T20:21:12Z)
- Image2SSM: Reimagining Statistical Shape Models from Images with Radial Basis Functions [4.422330219605964]
We propose Image2SSM, a novel deep-learning-based approach for statistical shape modeling.
Image2SSM learns a radial-basis-function (RBF)-based representation of shapes directly from images.
It can characterize populations of biological structures of interest by constructing statistical landmark-based shape models of ensembles of anatomical shapes.
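For intuition on RBF-based shape representations (a generic sketch, not Image2SSM's implementation), a Gaussian radial-basis-function interpolant anchored at landmark points can be fit and evaluated as follows; the landmarks on a circle and the interpolated values are hypothetical:

```python
# Generic Gaussian RBF interpolant over landmark points, the kind of smooth
# implicit representation an RBF-based shape model builds. Illustrative only.
import numpy as np

def rbf_fit(centers, values, sigma=1.0):
    """Solve for weights so the interpolant matches `values` at `centers`."""
    d2 = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    # Tiny ridge term keeps the kernel system numerically stable.
    return np.linalg.solve(K + 1e-8 * np.eye(len(centers)), values)

def rbf_eval(points, centers, weights, sigma=1.0):
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)) @ weights

# Hypothetical landmarks on the unit circle with a scalar value per landmark.
theta = np.linspace(0, 2 * np.pi, 12, endpoint=False)
centers = np.stack([np.cos(theta), np.sin(theta)], axis=1)
values = centers[:, 0]                 # interpolate the x-coordinate
w = rbf_fit(centers, values, sigma=0.4)
```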
arXiv Detail & Related papers (2023-05-19T18:08:10Z)
- Leveraging Unsupervised Image Registration for Discovery of Landmark Shape Descriptor [5.40076482533193]
This paper proposes a self-supervised deep learning approach for discovering landmarks from images that can directly be used as a shape descriptor for subsequent analysis.
We use landmark-driven image registration as the primary task to force the neural network to discover landmarks that register the images well.
The proposed method circumvents segmentation and preprocessing and directly produces a usable shape descriptor using just 2D or 3D images.
arXiv Detail & Related papers (2021-11-13T01:02:10Z)
- Automatic size and pose homogenization with spatial transformer network to improve and accelerate pediatric segmentation [51.916106055115755]
We propose a new CNN architecture that is pose- and scale-invariant thanks to the use of a Spatial Transformer Network (STN).
Our architecture is composed of three sequential modules that are estimated together during training.
We test the proposed method on kidney and renal tumor segmentation in abdominal pediatric CT scans.
arXiv Detail & Related papers (2021-07-06T14:50:03Z)
- Shape My Face: Registering 3D Face Scans by Surface-to-Surface Translation [75.59415852802958]
Shape-My-Face (SMF) is a powerful encoder-decoder architecture based on an improved point cloud encoder, a novel visual attention mechanism, graph convolutional decoders with skip connections, and a specialized mouth model.
Our model provides topologically-sound meshes with minimal supervision, offers faster training time, has orders of magnitude fewer trainable parameters, is more robust to noise, and can generalize to previously unseen datasets.
arXiv Detail & Related papers (2020-12-16T20:02:36Z)
- Learning Deformable Image Registration from Optimization: Perspective, Modules, Bilevel Training and Beyond [62.730497582218284]
We develop a new deep learning based framework to optimize a diffeomorphic model via multi-scale propagation.
We conduct two groups of image registration experiments on 3D volume datasets including image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data.
arXiv Detail & Related papers (2020-04-30T03:23:45Z)
- Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.