NUMSnet: Nested-U Multi-class Segmentation network for 3D Medical Image
Stacks
- URL: http://arxiv.org/abs/2304.02713v1
- Date: Wed, 5 Apr 2023 19:16:29 GMT
- Title: NUMSnet: Nested-U Multi-class Segmentation network for 3D Medical Image
Stacks
- Authors: Sohini Roychowdhury
- Abstract summary: NUMSnet is a novel variant of the Unet model that transmits pixel neighborhood features across scans through nested layers.
We analyze the semantic segmentation performance of the NUMSnet model in comparison with several Unet model variants.
The proposed model can standardize multi-class semantic segmentation on a variety of volumetric image stacks with minimal training data.
- Score: 1.2335698325757494
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Semantic segmentation for medical 3D image stacks enables accurate volumetric
reconstructions, computer-aided diagnostics and follow up treatment planning.
In this work, we present a novel variant of the Unet model called the NUMSnet
that transmits pixel neighborhood features across scans through nested layers
to achieve accurate multi-class semantic segmentations with minimal training
data. We analyze the semantic segmentation performance of the NUMSnet model in
comparison with several Unet model variants to segment 3-7 regions of interest
using only 10% of the images in each Lung-CT and Heart-CT volumetric image
stack for training. The proposed NUMSnet model achieves up to 20% improvement in
segmentation recall with 4-9% improvement in Dice scores for Lung-CT stacks and
2.5-10% improvement in Dice scores for Heart-CT stacks when compared to the
Unet++ model. The NUMSnet model needs to be trained with ordered images around
the central scan of each volumetric stack. Propagation of image feature
information from the 6 nested layers of the Unet++ model is found to yield
better computation and segmentation performance than propagation of all
up-sampling layers in a Unet++ model. The NUMSnet model achieves segmentation
performance comparable to existing works while being trained on as little as
5% of the training images. Also, transfer learning allows faster convergence
of the NUMSnet model for multi-class semantic segmentation from pathology in
Lung-CT images to cardiac segmentation in Heart-CT stacks. Thus, the proposed
model can standardize multi-class semantic segmentation across a variety of
volumetric image stacks with minimal training data. This can significantly
reduce the cost, time and inter-observer variability associated with
computer-aided detection and treatment.
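The core idea in the abstract, nested dense-skip (Unet++-style) layers whose features are propagated from one ordered scan to the next, can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the layer widths, the use of a single nested node as the carried-over state, and the zero-initialized state for the first slice are simplifying assumptions made purely for illustration.

```python
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 conv + BN + ReLU layers, the usual Unet building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class TinyNestedUNet(nn.Module):
    """Toy two-level nested U. The single dense-skip node x0_1 also receives the
    x0_1 features computed on the previous slice of the stack ("carry"), standing
    in for the propagation of nested-layer features across scans described above."""

    def __init__(self, in_ch=1, n_classes=4, base=16):
        super().__init__()
        self.x0_0 = conv_block(in_ch, base)        # encoder, level 0
        self.x1_0 = conv_block(base, base * 2)     # encoder, level 1
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        # nested node: fuses level-0 features, upsampled level-1 features,
        # and the carried-over nested features from the previous slice
        self.x0_1 = conv_block(base + base * 2 + base, base)
        self.head = nn.Conv2d(base, n_classes, 1)  # multi-class logits

    def forward(self, x, carry=None):
        f0 = self.x0_0(x)
        f1 = self.x1_0(self.pool(f0))
        if carry is None:                          # first slice: nothing to propagate yet
            carry = torch.zeros_like(f0)
        n01 = self.x0_1(torch.cat([f0, self.up(f1), carry], dim=1))
        return self.head(n01), n01                 # logits + features for the next slice


if __name__ == "__main__":
    model = TinyNestedUNet()
    volume = torch.randn(8, 1, 64, 64)             # 8 ordered slices of a toy CT stack
    carry = None
    for slice_2d in volume:                        # walk the stack in slice order
        prev = carry.detach() if carry is not None else None
        logits, carry = model(slice_2d.unsqueeze(0), prev)
    print(logits.shape)                            # torch.Size([1, 4, 64, 64])
```

In a full training setup, the same threading of carried features would be applied to the ordered images around the central scan of each stack, matching the training procedure described in the abstract.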
Related papers
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- Continual Segment: Towards a Single, Unified and Accessible Continual Segmentation Model of 143 Whole-body Organs in CT Scans [31.388497540849297]
We propose a new architectural CSS learning framework to learn a single deep segmentation model for segmenting a total of 143 whole-body organs.
Trained and validated on 3D CT scans of 2500+ patients from four datasets, our single network can segment a total of 143 whole-body organs with very high accuracy.
arXiv Detail & Related papers (2023-02-01T00:49:21Z)
- Prompt Tuning for Parameter-efficient Medical Image Segmentation [79.09285179181225]
We propose and investigate several contributions to achieve a parameter-efficient but effective adaptation for semantic segmentation on two medical imaging datasets.
We pre-train this architecture with a dedicated dense self-supervision scheme based on assignments to online generated prototypes.
We demonstrate that the resulting neural network model is able to attenuate the gap between fully fine-tuned and parameter-efficiently adapted models.
arXiv Detail & Related papers (2022-11-16T21:55:05Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Unsupervised Domain Adaptation with Contrastive Learning for OCT Segmentation [49.59567529191423]
We propose a novel semi-supervised learning framework for segmentation of volumetric images from new unlabeled domains.
We jointly use supervised and contrastive learning, also introducing a contrastive pairing scheme that leverages similarity between nearby slices in 3D.
arXiv Detail & Related papers (2022-03-07T19:02:26Z)
- QU-net++: Image Quality Detection Framework for Segmentation of 3D Medical Image Stacks [0.9594432031144714]
We propose an automated two-step method that evaluates the quality of medical images from 3D image stacks using a U-net++ model.
The detected images can then be used to further fine-tune the U-net++ model for semantic segmentation.
arXiv Detail & Related papers (2021-10-27T05:28:02Z)
- A Multi-Task Cross-Task Learning Architecture for Ad-hoc Uncertainty Estimation in 3D Cardiac MRI Image Segmentation [0.0]
We present a Multi-task Cross-task learning consistency approach to enforce the correlation between the pixel-level (segmentation) and the geometric-level (distance map) tasks.
Our study further showcases the potential of our model to flag low-quality segmentation from a given model.
arXiv Detail & Related papers (2021-09-16T03:53:24Z)
- Automatic size and pose homogenization with spatial transformer network to improve and accelerate pediatric segmentation [51.916106055115755]
We propose a new CNN architecture that is pose and scale invariant thanks to the use of a Spatial Transformer Network (STN).
Our architecture is composed of three sequential modules that are estimated together during training.
We test the proposed method on kidney and renal tumor segmentation in abdominal pediatric CT scans.
arXiv Detail & Related papers (2021-07-06T14:50:03Z)
- Group-Wise Semantic Mining for Weakly Supervised Semantic Segmentation [49.90178055521207]
This work addresses weakly supervised semantic segmentation (WSSS), with the goal of bridging the gap between image-level annotations and pixel-level segmentation.
We formulate WSSS as a novel group-wise learning task that explicitly models semantic dependencies in a group of images to estimate more reliable pseudo ground-truths.
In particular, we devise a graph neural network (GNN) for group-wise semantic mining, wherein input images are represented as graph nodes.
arXiv Detail & Related papers (2020-12-09T12:40:13Z)
- Weakly Supervised 3D Classification of Chest CT using Aggregated Multi-Resolution Deep Segmentation Features [5.938730586521215]
Weakly supervised disease classification of CT imaging suffers from poor localization owing to case-level annotations.
We propose a medical classifier that leverages semantic structural concepts learned via multi-resolution segmentation feature maps.
arXiv Detail & Related papers (2020-10-31T00:16:53Z)