Progressive DeepSSM: Training Methodology for Image-To-Shape Deep Models
- URL: http://arxiv.org/abs/2310.01529v1
- Date: Mon, 2 Oct 2023 18:17:20 GMT
- Title: Progressive DeepSSM: Training Methodology for Image-To-Shape Deep Models
- Authors: Abu Zahid Bin Aziz, Jadie Adams, Shireen Elhabian
- Abstract summary: We propose a new training strategy, progressive DeepSSM, to train image-to-shape deep learning models.
We leverage shape priors via segmentation-guided multi-task learning and employ deep supervision loss to ensure learning at each scale.
Experiments show the superiority of models trained by the proposed strategy from both quantitative and qualitative perspectives.
- Score: 4.972323953932128
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Statistical shape modeling (SSM) is an enabling quantitative tool to study
anatomical shapes in various medical applications. However, directly using 3D
images in these applications still has a long way to go. Recent deep learning
methods have paved the way for reducing the substantial preprocessing steps to
construct SSMs directly from unsegmented images. Nevertheless, the performance
of these models is not yet satisfactory. Inspired by multiscale/multiresolution
learning, we propose a new training strategy, progressive DeepSSM, to train
image-to-shape deep learning models. The training is performed in multiple
scales, and each scale utilizes the output from the previous scale. This
strategy enables the model to learn coarse shape features in the first scales
and gradually learn detailed fine shape features in the later scales. We
leverage shape priors via segmentation-guided multi-task learning and employ
deep supervision loss to ensure learning at each scale. Experiments show the
superiority of models trained by the proposed strategy from both quantitative
and qualitative perspectives. This training methodology can be employed to
improve the stability and accuracy of any deep learning method for inferring
statistical representations of anatomies from medical images and can be adopted
by existing deep learning methods to improve model accuracy and training
stability.
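The coarse-to-fine schedule described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' code: per-scale predictions are scored against correspondingly downsampled targets, and a deep-supervision loss sums weighted per-scale errors so that every scale keeps receiving a learning signal. The linear weighting and the function names are assumptions for illustration.

```python
def mse(pred, target):
    """Mean squared error between two equal-length lists of coordinates."""
    assert len(pred) == len(target)
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)


def deep_supervision_loss(preds_per_scale, targets_per_scale, weights=None):
    """Weighted sum of per-scale losses (scale 0 = coarsest).

    By default the weight increases linearly toward the finest scale, so
    coarse scales still contribute gradient while fine detail dominates
    later in training. The weighting scheme is an assumption.
    """
    n = len(preds_per_scale)
    if weights is None:
        weights = [(i + 1) / n for i in range(n)]
    return sum(w * mse(p, t)
               for w, p, t in zip(weights, preds_per_scale, targets_per_scale))


def progressive_schedule(num_scales, epochs_per_scale):
    """Yield (epoch, active_scales): each stage adds one finer scale,
    mirroring the abstract's note that each scale builds on the output
    of the previous one."""
    epoch = 0
    for scale in range(num_scales):
        for _ in range(epochs_per_scale):
            yield epoch, list(range(scale + 1))
            epoch += 1
```

For example, `progressive_schedule(3, 2)` trains two epochs with only the coarsest scale active, then two with scales 0-1, then two with all three, while `deep_supervision_loss` keeps every active scale in the objective.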
Related papers
- Exploring Learngene via Stage-wise Weight Sharing for Initializing Variable-sized Models [40.21274215353816]
We introduce the Learngene framework, which learns one compact part, termed the learngene, from a large well-trained model.
We then expand these learngene layers, which contain stage information, at their corresponding stages to initialize models of variable depths.
Experiments on ImageNet-1K demonstrate that SWS achieves consistently better performance than many models trained from scratch.
arXiv Detail & Related papers (2024-04-25T06:04:34Z)
- Learned Image resizing with efficient training (LRET) facilitates improved performance of large-scale digital histopathology image classification models [0.0]
Histologic examination plays a crucial role in oncology research and diagnostics.
Current approaches to training deep convolutional neural networks (DCNN) result in suboptimal model performance.
We introduce a novel approach that addresses the main limitations of traditional histopathology classification model training.
arXiv Detail & Related papers (2024-01-19T23:45:47Z)
- ADASSM: Adversarial Data Augmentation in Statistical Shape Models From Images [0.8192907805418583]
This paper introduces a novel strategy for on-the-fly data augmentation for the Image-to-SSM framework by leveraging data-dependent noise generation or texture augmentation.
Our approach achieves improved accuracy by encouraging the model to focus on the underlying geometry rather than relying solely on pixel values.
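A minimal sketch of what data-dependent noise augmentation could look like in practice (hypothetical; the function name and `sigma_scale` parameter are assumptions, not the ADASSM implementation): noise strength is tied to each image's own intensity range, nudging the model toward the underlying geometry rather than exact pixel values.

```python
import random


def augment_intensities(image, sigma_scale=0.05, rng=None):
    """Add zero-mean Gaussian noise whose standard deviation tracks the
    image's dynamic range, so augmentation stays proportionate across
    differently scaled inputs. `image` is a flat list of intensities."""
    rng = rng or random.Random()
    lo, hi = min(image), max(image)
    sigma = sigma_scale * (hi - lo)
    return [v + rng.gauss(0.0, sigma) for v in image]
```

Applied on the fly inside the training loop, this yields a fresh perturbation of each image at every epoch rather than a fixed augmented dataset.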
arXiv Detail & Related papers (2023-07-06T20:21:12Z)
- Delving Deeper into Data Scaling in Masked Image Modeling [145.36501330782357]
We conduct an empirical study on the scaling capability of masked image modeling (MIM) methods for visual recognition.
Specifically, we utilize the web-collected Coyo-700M dataset.
Our goal is to investigate how the performance changes on downstream tasks when scaling with different sizes of data and models.
arXiv Detail & Related papers (2023-05-24T15:33:46Z)
- Domain Generalization for Mammographic Image Analysis with Contrastive Learning [62.25104935889111]
Training an efficacious deep learning model requires large datasets with diverse styles and qualities.
A novel contrastive learning scheme is developed to equip deep learning models with better style generalization capability.
The proposed method has been evaluated extensively and rigorously with mammograms from various vendor style domains and several public datasets.
arXiv Detail & Related papers (2023-04-20T11:40:21Z)
- DINOv2: Learning Robust Visual Features without Supervision [75.42921276202522]
This work shows that existing pretraining methods, especially self-supervised methods, can produce such features if trained on enough curated data from diverse sources.
Most of the technical contributions aim at accelerating and stabilizing the training at scale.
In terms of data, we propose an automatic pipeline to build a dedicated, diverse, and curated image dataset instead of uncurated data, as typically done in the self-supervised literature.
arXiv Detail & Related papers (2023-04-14T15:12:19Z)
- Stochastic Planner-Actor-Critic for Unsupervised Deformable Image Registration [33.72954116727303]
We present a novel reinforcement learning-based framework that performs step-wise registration of medical images with large deformations.
We evaluate our method on several 2D and 3D medical image datasets, some of which contain large deformations.
arXiv Detail & Related papers (2021-12-14T14:08:56Z)
- DeepSSM: A Blueprint for Image-to-Shape Deep Learning Models [4.608133071225539]
Statistical shape modeling (SSM) characterizes anatomical variations in a population of shapes generated from medical images.
DeepSSM aims to provide a blueprint for deep learning-based image-to-shape models.
arXiv Detail & Related papers (2021-10-14T04:52:37Z)
- A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)
- Neural Descent for Visual 3D Human Pose and Shape [67.01050349629053]
We present a deep neural network methodology to reconstruct the 3D pose and shape of people from an input RGB image.
We rely on a recently introduced, expressive full-body statistical 3D human model, GHUM, trained end-to-end.
Central to our methodology is a learning-to-learn-and-optimize approach, referred to as HUman Neural Descent (HUND), which avoids second-order differentiation.
arXiv Detail & Related papers (2020-08-16T13:38:41Z)
- Learning Deformable Image Registration from Optimization: Perspective, Modules, Bilevel Training and Beyond [62.730497582218284]
We develop a new deep learning based framework to optimize a diffeomorphic model via multi-scale propagation.
We conduct two groups of image registration experiments on 3D volume datasets including image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data.
arXiv Detail & Related papers (2020-04-30T03:23:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.