CPNet: Cycle Prototype Network for Weakly-supervised 3D Renal
Compartments Segmentation on CT Images
- URL: http://arxiv.org/abs/2108.06669v1
- Date: Sun, 15 Aug 2021 06:54:38 GMT
- Title: CPNet: Cycle Prototype Network for Weakly-supervised 3D Renal
Compartments Segmentation on CT Images
- Authors: Song Wang, Yuting He, Youyong Kong, Xiaomei Zhu, Shaobo Zhang, Pengfei
Shao, Jean-Louis Dillenseger, Jean-Louis Coatrieux, Shuo Li, Guanyu Yang
- Abstract summary: Renal compartment segmentation on CT images aims to extract the 3D structure of renal compartments from abdominal CTA images.
We propose a novel weakly supervised learning framework, Cycle Prototype Network, for 3D renal compartment segmentation.
Our model achieves Dice scores of 79.1% and 78.7% with only four labeled images, about 20% higher than the typical prototype model PANet.
- Score: 14.756502627336305
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Renal compartment segmentation on CT images aims to extract the 3D
structure of renal compartments from abdominal CTA images and is of great
significance for the diagnosis and treatment of kidney diseases. However, due
to the unclear compartment boundary, thin compartment structure and large
anatomy variation of 3D kidney CT images, deep-learning based renal compartment
segmentation is a challenging task. We propose a novel weakly supervised
learning framework, Cycle Prototype Network, for 3D renal compartment
segmentation. It has three innovations: 1) A Cycle Prototype Learning (CPL) is
proposed to learn consistency for generalization. It learns from pseudo labels
through the forward process and learns consistency regularization through the
reverse process. The two processes make the model robust to noise and
label-efficient. 2) We propose a Bayes Weakly Supervised Module (BWSM) based on
cross-period prior knowledge. It learns prior knowledge from cross-period
unlabeled data and performs error correction automatically, thus generating
accurate pseudo labels. 3) We present a Fine Decoding Feature Extractor (FDFE)
for fine-grained feature extraction. It combines global morphology information
and local detail information to obtain feature maps with sharp detail, so the
model will achieve fine segmentation on thin structures. Our model achieves
Dice scores of 79.1% and 78.7% with only four labeled images, an improvement of
about 20% over the typical prototype model PANet.
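The prototype paradigm the abstract compares against (PANet-style few-shot segmentation) can be sketched briefly: a class prototype is computed from a labeled support image by masked average pooling over the feature map, and a query image is segmented by the cosine similarity of each pixel's features to that prototype. This is a minimal illustration of the general technique, not the authors' CPNet implementation; all function names, shapes, and the random toy data are assumptions.

```python
import numpy as np

def masked_average_pooling(features, mask):
    """Class prototype: mean feature vector over the labeled region.
    features: (C, H, W) feature map; mask: (H, W) binary label."""
    denom = mask.sum() + 1e-8  # guard against an empty mask
    return (features * mask).sum(axis=(1, 2)) / denom

def cosine_similarity_map(features, prototype):
    """Per-pixel cosine similarity between query features and a prototype.
    Returns an (H, W) map in [-1, 1]."""
    f = features / (np.linalg.norm(features, axis=0, keepdims=True) + 1e-8)
    p = prototype / (np.linalg.norm(prototype) + 1e-8)
    return np.tensordot(p, f, axes=([0], [0]))

# Toy example: 3-channel features on a 4x4 image (hypothetical data).
rng = np.random.default_rng(0)
support_feat = rng.standard_normal((3, 4, 4))
support_mask = np.zeros((4, 4))
support_mask[:2, :] = 1.0  # top half is the labeled compartment
proto = masked_average_pooling(support_feat, support_mask)
sim = cosine_similarity_map(support_feat, proto)
pred = (sim > 0).astype(int)  # threshold similarity into a binary mask
```

CPNet's cycle consistency then re-segments the support image using prototypes extracted from the query prediction and penalizes disagreement with the ground-truth support label, which is what regularizes learning from noisy pseudo labels.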
Related papers
- μ-Net: A Deep Learning-Based Architecture for μ-CT Segmentation [2.012378666405002]
X-ray computed microtomography (mu-CT) is a non-destructive technique that can generate high-resolution 3D images of the internal anatomy of medical and biological samples.
Extracting relevant information from these 3D images requires semantic segmentation of the regions of interest.
We propose a novel framework that uses a convolutional neural network (CNN) to automatically segment the full morphology of the heart of Carassius auratus.
arXiv Detail & Related papers (2024-06-24T15:29:08Z) - Multi-View Vertebra Localization and Identification from CT Images [57.56509107412658]
We propose a multi-view method for vertebra localization and identification from CT images.
We convert the 3D problem into a 2D localization and identification task on different views.
Our method can learn the multi-view global information naturally.
arXiv Detail & Related papers (2023-07-24T14:43:07Z) - Extremely weakly-supervised blood vessel segmentation with
physiologically based synthesis and domain adaptation [7.107236806113722]
Accurate analysis and modeling of renal functions require a precise segmentation of the renal blood vessels.
Deep-learning-based methods have shown state-of-the-art performance in automatic blood vessel segmentation.
We train a generative model on unlabeled scans and simulate synthetic renal vascular trees physiologically.
We demonstrate that the model can directly segment blood vessels on real scans and validate our method on both 3D micro-CT scans of rat kidneys and a proof-of-concept experiment on 2D retinal images.
arXiv Detail & Related papers (2023-05-26T16:01:49Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image
Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - Explainable multiple abnormality classification of chest CT volumes with
AxialNet and HiResCAM [89.2175350956813]
We introduce the challenging new task of explainable multiple abnormality classification in volumetric medical images.
We propose a multiple instance learning convolutional neural network, AxialNet, that allows identification of top slices for each abnormality.
We then aim to improve the model's learning through a novel mask loss that leverages HiResCAM and 3D allowed regions.
arXiv Detail & Related papers (2021-11-24T01:14:33Z) - Cascaded Robust Learning at Imperfect Labels for Chest X-ray
Segmentation [61.09321488002978]
We present a novel cascaded robust learning framework for chest X-ray segmentation with imperfect annotation.
Our model consists of three independent networks, each of which can effectively learn useful information from its peer networks.
Our method achieves a significant improvement in segmentation accuracy compared to previous methods.
arXiv Detail & Related papers (2021-04-05T15:50:16Z) - Bidirectional RNN-based Few Shot Learning for 3D Medical Image
Segmentation [11.873435088539459]
We propose a 3D few shot segmentation framework for accurate organ segmentation using limited training samples of the target organ annotation.
A U-Net like network is designed to predict segmentation by learning the relationship between 2D slices of support data and a query image.
We evaluate our proposed model using three 3D CT datasets with annotations of different organs.
arXiv Detail & Related papers (2020-11-19T01:44:55Z) - Weakly-supervised Learning For Catheter Segmentation in 3D Frustum
Ultrasound [74.22397862400177]
We propose a novel frustum ultrasound-based catheter segmentation method.
The proposed method achieved the state-of-the-art performance with an efficiency of 0.25 second per volume.
arXiv Detail & Related papers (2020-10-19T13:56:22Z) - Deep Q-Network-Driven Catheter Segmentation in 3D US by Hybrid
Constrained Semi-Supervised Learning and Dual-UNet [74.22397862400177]
We propose a novel catheter segmentation approach that requires fewer annotations than supervised learning methods.
Our scheme considers a deep Q learning as the pre-localization step, which avoids voxel-level annotation.
With the detected catheter, patch-based Dual-UNet is applied to segment the catheter in 3D volumetric data.
arXiv Detail & Related papers (2020-06-25T21:10:04Z) - Map3D: Registration Based Multi-Object Tracking on 3D Serial Whole Slide
Images [10.519063258650508]
We propose a novel Multi-object Association for Pathology in 3D (Map3D) method for automatically identifying and associating large-scale cross-sections of 3D objects.
Our proposed method Map3D achieved MOTA = 44.6, which is 12.1% higher than non-deep-learning benchmarks.
arXiv Detail & Related papers (2020-06-10T19:31:02Z) - A$^3$DSegNet: Anatomy-aware artifact disentanglement and segmentation
network for unpaired segmentation, artifact reduction, and modality
translation [18.500206499468902]
CBCT images are of low-quality and artifact-laden due to noise, poor tissue contrast, and the presence of metallic objects.
There exists a wealth of artifact-free, high quality CT images with vertebra annotations.
This motivates us to build a CBCT vertebra segmentation model using unpaired CT images with annotations.
arXiv Detail & Related papers (2020-01-02T06:37:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.