ADASSM: Adversarial Data Augmentation in Statistical Shape Models From
Images
- URL: http://arxiv.org/abs/2307.03273v3
- Date: Mon, 21 Aug 2023 22:39:42 GMT
- Title: ADASSM: Adversarial Data Augmentation in Statistical Shape Models From
Images
- Authors: Mokshagna Sai Teja Karanam, Tushar Kataria, Krithika Iyer and Shireen
Elhabian
- Abstract summary: This paper introduces a novel strategy for on-the-fly data augmentation for the Image-to-SSM framework by leveraging data-dependent noise generation or texture augmentation.
Our approach achieves improved accuracy by encouraging the model to focus on the underlying geometry rather than relying solely on pixel values.
- Score: 0.8192907805418583
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Statistical shape models (SSM) have been well-established as an excellent
tool for identifying variations in the morphology of anatomy across the
underlying population. Shape models use consistent shape representation across
all the samples in a given cohort, which helps to compare shapes and identify
the variations that can detect pathologies and help in formulating treatment
plans. In medical imaging, computing these shape representations from CT/MRI
scans requires time-intensive preprocessing operations, including but not
limited to anatomy segmentation annotations, registration, and texture
denoising. Deep learning models have demonstrated exceptional capabilities in
learning shape representations directly from volumetric images, giving rise to
highly effective and efficient Image-to-SSM networks. Nevertheless, these
models are data-hungry and, due to the limited availability of medical data,
tend to overfit. Offline data augmentation techniques that use kernel density
estimation (KDE)-based methods to generate shape-augmented samples have
successfully aided Image-to-SSM networks in achieving accuracy comparable to
traditional SSM methods. However, these
augmentation methods focus on shape augmentation, whereas deep learning models
exhibit image-based texture bias resulting in sub-optimal models. This paper
introduces a novel strategy for on-the-fly data augmentation for the
Image-to-SSM framework by leveraging data-dependent noise generation or texture
augmentation. The proposed framework is trained as an adversary to the
Image-to-SSM network, generating diverse and challenging noisy samples. Our
approach achieves improved accuracy by encouraging the model to focus on the
underlying geometry rather than relying solely on pixel values.
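To make the adversarial on-the-fly augmentation idea concrete, below is a minimal PyTorch sketch of how such a scheme could be wired up. The tiny network architectures, the noise bound eps, and the alternating min-max updates are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch only (assumed design, not the ADASSM code): an augmenter
# produces bounded, image-conditioned noise and is trained to *increase* the
# Image-to-SSM correspondence error, while the SSM network is trained to
# *decrease* it, pushing the model toward geometry rather than texture cues.
import torch
import torch.nn as nn

class NoiseAugmenter(nn.Module):
    """Predicts a data-dependent noise field added to the input volume."""
    def __init__(self, channels=1, eps=0.1):
        super().__init__()
        self.eps = eps  # maximum noise magnitude (assumed hyperparameter)
        self.net = nn.Sequential(
            nn.Conv3d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, image):
        return image + self.eps * self.net(image)

class ImageToSSM(nn.Module):
    """Toy stand-in for an Image-to-SSM regressor: volume -> 3D correspondences."""
    def __init__(self, num_points=128):
        super().__init__()
        self.num_points = num_points
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.head = nn.Linear(8, num_points * 3)

    def forward(self, image):
        return self.head(self.encoder(image)).view(-1, self.num_points, 3)

ssm_net, augmenter = ImageToSSM(), NoiseAugmenter()
opt_ssm = torch.optim.Adam(ssm_net.parameters(), lr=1e-4)
opt_aug = torch.optim.Adam(augmenter.parameters(), lr=1e-4)
mse = nn.MSELoss()

def train_step(image, target_points):
    # 1) Adversary step: the augmenter maximizes the SSM loss on the noisy image.
    loss_aug = -mse(ssm_net(augmenter(image)), target_points)
    opt_aug.zero_grad(); loss_aug.backward(); opt_aug.step()

    # 2) SSM step: the network minimizes the loss on a fresh (detached) augmentation.
    noisy = augmenter(image).detach()
    loss_ssm = mse(ssm_net(noisy), target_points)
    opt_ssm.zero_grad(); loss_ssm.backward(); opt_ssm.step()
    return loss_ssm.item()

# Dummy usage: a batch of two single-channel 32^3 volumes and 128 target points each.
print(train_step(torch.randn(2, 1, 32, 32, 32), torch.randn(2, 128, 3)))
```

In practice such an augmenter could alternate with, or be combined with, texture-style augmentations, and the weighting of clean versus augmented batches is a design choice; the layer sizes and learning rates above are placeholders.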
Related papers
- Unifying Subsampling Pattern Variations for Compressed Sensing MRI with Neural Operators [72.79532467687427]
Compressed Sensing MRI reconstructs images of the body's internal anatomy from undersampled and compressed measurements.
Deep neural networks have shown great potential for reconstructing high-quality images from highly undersampled measurements.
We propose a unified model that is robust to different subsampling patterns and image resolutions in CS-MRI.
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
- ShapeMamba-EM: Fine-Tuning Foundation Model with Local Shape Descriptors and Mamba Blocks for 3D EM Image Segmentation [49.42525661521625]
This paper presents ShapeMamba-EM, a specialized fine-tuning method for 3D EM segmentation.
It is tested over a wide range of EM images, covering five segmentation tasks and 10 datasets.
arXiv Detail & Related papers (2024-08-26T08:59:22Z)
- Probabilistic 3D Correspondence Prediction from Sparse Unsegmented Images [1.2179682412409507]
We propose SPI-CorrNet, a unified model that predicts 3D correspondences from sparse imaging data.
Experiments on the LGE MRI left atrium dataset and Abdomen CT-1K liver datasets demonstrate that our technique enhances the accuracy and robustness of sparse image-driven SSM.
arXiv Detail & Related papers (2024-07-02T03:56:20Z)
- Efficient Visual State Space Model for Image Deblurring [83.57239834238035]
Convolutional neural networks (CNNs) and Vision Transformers (ViTs) have achieved excellent performance in image restoration.
We propose a simple yet effective visual state space model (EVSSM) for image deblurring.
arXiv Detail & Related papers (2024-05-23T09:13:36Z)
- Progressive DeepSSM: Training Methodology for Image-To-Shape Deep Models [4.972323953932128]
We propose a new training strategy, progressive DeepSSM, to train image-to-shape deep learning models.
We leverage shape priors via segmentation-guided multi-task learning and employ deep supervision loss to ensure learning at each scale.
Experiments show the superiority of models trained by the proposed strategy from both quantitative and qualitative perspectives.
arXiv Detail & Related papers (2023-10-02T18:17:20Z)
- Image2SSM: Reimagining Statistical Shape Models from Images with Radial Basis Functions [4.422330219605964]
We propose Image2SSM, a novel deep-learning-based approach for statistical shape modeling.
Image2SSM learns a radial-basis-function (RBF)-based representation of shapes directly from images.
It can characterize populations of biological structures of interest by constructing statistical landmark-based shape models of ensembles of anatomical shapes.
arXiv Detail & Related papers (2023-05-19T18:08:10Z)
- Mesh2SSM: From Surface Meshes to Statistical Shape Models of Anatomy [0.0]
We propose Mesh2SSM, a new approach that leverages unsupervised, permutation-invariant representation learning to estimate how to deform a template point cloud to subject-specific meshes.
Mesh2SSM can also learn a population-specific template, reducing any bias due to template selection.
arXiv Detail & Related papers (2023-05-13T00:03:59Z)
- DeepSSM: A Blueprint for Image-to-Shape Deep Learning Models [4.608133071225539]
Statistical shape modeling (SSM) characterizes anatomical variations in a population of shapes generated from medical images.
DeepSSM aims to provide a blueprint for deep learning-based image-to-shape models.
arXiv Detail & Related papers (2021-10-14T04:52:37Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Learning Deformable Image Registration from Optimization: Perspective, Modules, Bilevel Training and Beyond [62.730497582218284]
We develop a new deep learning based framework to optimize a diffeomorphic model via multi-scale propagation.
We conduct two groups of image registration experiments on 3D volume datasets including image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data.
arXiv Detail & Related papers (2020-04-30T03:23:45Z)
- Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.