QU-net++: Image Quality Detection Framework for Segmentation of 3D
Medical Image Stacks
- URL: http://arxiv.org/abs/2110.14181v1
- Date: Wed, 27 Oct 2021 05:28:02 GMT
- Title: QU-net++: Image Quality Detection Framework for Segmentation of 3D
Medical Image Stacks
- Authors: Sohini Roychowdhury
- Abstract summary: We propose an automated two-step method that evaluates the quality of medical images from 3D image stacks using a U-net++ model.
Images thus detected can then be used to further fine-tune the U-net++ model for semantic segmentation.
- Score: 0.9594432031144714
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated segmentation of pathological regions of interest has been shown to
aid prognosis and follow-up treatment. However, accurate pathological
segmentations require high-quality annotated data that can be both costly and
time-intensive to generate. In this work, we propose an automated two-step
method that evaluates the quality of medical images from 3D image stacks using
a U-net++ model, such that images that can aid further training of the U-net++
model can be detected based on the disagreement in segmentations produced from
the final two layers. Images thus detected can then be used to further
fine-tune the U-net++ model for semantic segmentation. The proposed QU-net++ model
isolates around 10% of images per 3D stack and can scale across imaging
modalities to segment cysts in OCT images and ground glass opacity in lung CT
images with Dice scores in the range 0.56-0.72. Thus, the proposed method can be
applied for multi-modal binary segmentation of pathology.
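As an illustration of the disagreement-based detection step, the sketch below flags slices whose final two U-net++ output branches produce inconsistent segmentations. This is a minimal sketch rather than the authors' released code: the `model` callable, its list of per-branch probability maps, and the 0.9 agreement threshold are hypothetical stand-ins for the paper's actual implementation.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray, eps: float = 1e-7) -> float:
    """Dice overlap between two boolean masks."""
    intersection = np.logical_and(a, b).sum()
    return (2.0 * intersection + eps) / (a.sum() + b.sum() + eps)

def detect_low_quality_slices(stack, model, threshold=0.9):
    """Flag slices where the final two U-net++ output branches disagree.

    `model(image)` is assumed (hypothetically) to return a list of
    per-branch probability maps from deep supervision; only the last
    two branches are compared here.
    """
    flagged = []
    for idx, image in enumerate(stack):
        probs = model(image)                # list of 2D probability maps
        seg_penultimate = probs[-2] > 0.5   # binarize penultimate branch
        seg_final = probs[-1] > 0.5         # binarize final branch
        if dice(seg_penultimate, seg_final) < threshold:
            flagged.append(idx)             # low agreement -> candidate image
    return flagged
```

Slices flagged this way (roughly 10% of each 3D stack, per the abstract) would then be annotated and used to fine-tune the segmentation model.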
Related papers
- μ-Net: A Deep Learning-Based Architecture for μ-CT Segmentation [2.012378666405002]
X-ray computed microtomography (μ-CT) is a non-destructive technique that can generate high-resolution 3D images of the internal anatomy of medical and biological samples.
Extracting relevant information from 3D images requires semantic segmentation of the regions of interest.
We propose a novel framework that uses a convolutional neural network (CNN) to automatically segment the full morphology of the heart of Carassius auratus.
arXiv Detail & Related papers (2024-06-24T15:29:08Z) - ProMISe: Prompt-driven 3D Medical Image Segmentation Using Pretrained
Image Foundation Models [13.08275555017179]
We propose ProMISe, a prompt-driven 3D medical image segmentation model using only a single point prompt.
We evaluate our model on two public datasets for colon and pancreas tumor segmentations.
arXiv Detail & Related papers (2023-10-30T16:49:03Z) - On the Localization of Ultrasound Image Slices within Point Distribution
Models [84.27083443424408]
Thyroid disorders are most commonly diagnosed using high-resolution Ultrasound (US).
Longitudinal tracking is a pivotal diagnostic protocol for monitoring changes in pathological thyroid morphology.
We present a framework for automated US image slice localization within a 3D shape representation.
arXiv Detail & Related papers (2023-09-01T10:10:46Z) - Multi-View Vertebra Localization and Identification from CT Images [57.56509107412658]
We propose a multi-view method for vertebra localization and identification from CT images.
We convert the 3D problem into a 2D localization and identification task on different views.
Our method can learn the multi-view global information naturally.
arXiv Detail & Related papers (2023-07-24T14:43:07Z) - AttResDU-Net: Medical Image Segmentation Using Attention-based Residual
Double U-Net [0.0]
This paper proposes an attention-based residual Double U-Net architecture (AttResDU-Net) that improves on existing medical image segmentation networks.
We conducted experiments on three datasets: CVC Clinic-DB, ISIC 2018, and the 2018 Data Science Bowl, achieving Dice Coefficient scores of 94.35%, 91.68%, and 92.45%, respectively.
arXiv Detail & Related papers (2023-06-25T14:28:08Z) - NUMSnet: Nested-U Multi-class Segmentation network for 3D Medical Image
Stacks [1.2335698325757494]
NUMSnet is a novel variant of the Unet model that transmits pixel neighborhood features across scans through nested layers.
We analyze the semantic segmentation performance of the NUMSnet model in comparison with several Unet model variants.
The proposed model can standardize multi-class semantic segmentation on a variety of volumetric image stacks with minimal training data.
arXiv Detail & Related papers (2023-04-05T19:16:29Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image
Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - Modality Completion via Gaussian Process Prior Variational Autoencoders
for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to exploit correlations across subjects/patients and sub-modalities.
We show the applicability of MGP-VAE on brain tumor segmentation, where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z) - Automatic size and pose homogenization with spatial transformer network
to improve and accelerate pediatric segmentation [51.916106055115755]
We propose a new CNN architecture that is pose and scale invariant thanks to the use of a Spatial Transformer Network (STN).
Our architecture is composed of three sequential modules that are estimated together during training.
We test the proposed method in kidney and renal tumor segmentation on abdominal pediatric CT scans.
arXiv Detail & Related papers (2021-07-06T14:50:03Z) - Revisiting 3D Context Modeling with Supervised Pre-training for
Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z) - A$^3$DSegNet: Anatomy-aware artifact disentanglement and segmentation
network for unpaired segmentation, artifact reduction, and modality
translation [18.500206499468902]
CBCT images are of low quality and artifact-laden due to noise, poor tissue contrast, and the presence of metallic objects.
There exists a wealth of artifact-free, high-quality CT images with vertebra annotations.
This motivates us to build a CBCT vertebra segmentation model using unpaired CT images with annotations.
arXiv Detail & Related papers (2020-01-02T06:37:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.