Omni-Seg: A Single Dynamic Network for Multi-label Renal Pathology Image
Segmentation using Partially Labeled Data
- URL: http://arxiv.org/abs/2112.12665v1
- Date: Thu, 23 Dec 2021 16:02:03 GMT
- Title: Omni-Seg: A Single Dynamic Network for Multi-label Renal Pathology Image
Segmentation using Partially Labeled Data
- Authors: Ruining Deng, Quan Liu, Can Cui, Zuhayr Asad, Haichun Yang, Yuankai
Huo
- Abstract summary: In non-cancer pathology, the learning algorithms can be asked to examine more comprehensive tissue types simultaneously.
Prior approaches needed to train multiple segmentation networks to match the domain-specific knowledge of each tissue type.
By learning from 150,000 patch-wise pathological images, the proposed Omni-Seg network achieved superior segmentation accuracy and less resource consumption.
- Score: 6.528287373027917
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computer-assisted quantitative analysis on Giga-pixel pathology images has
provided a new avenue in precision medicine. The innovations have been largely
focused on cancer pathology (i.e., tumor segmentation and characterization). In
non-cancer pathology, the learning algorithms can be asked to examine more
comprehensive tissue types simultaneously, as a multi-label setting. Prior
approaches typically required training multiple segmentation networks to match
the domain-specific knowledge of heterogeneous tissue types (e.g., glomerular
tuft, glomerular unit, proximal tubular, distal tubular, peritubular
capillaries, and arteries). In this paper, we propose a dynamic single
segmentation network (Omni-Seg) that learns to segment multiple tissue types
using partially labeled images (i.e., only one tissue type is labeled for each
training image) for renal pathology. By learning from ~150,000 patch-wise
pathological images from six tissue types, the proposed Omni-Seg network
achieved superior segmentation accuracy and lower resource consumption when
compared to previous multiple-network and multi-head designs. In the
testing stage, the proposed method obtains "completely labeled" tissue
segmentation results using only "partially labeled" training images. The source
code is available at https://github.com/ddrrnn123/Omni-Seg.
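
For intuition, the partial-label training described above can be sketched in a few lines of PyTorch. This is a hedged illustration rather than the authors' implementation: a single network is conditioned on a one-hot tissue-type code (here via a small controller that generates the weights of a 1x1 segmentation head), each training patch is supervised only on its single annotated tissue type, and inference enumerates all six codes to assemble a completely labeled result. The toy encoder, layer sizes, and helper names are illustrative assumptions.

import torch
import torch.nn as nn

N_TISSUE_TYPES = 6  # glomerular tuft, glomerular unit, proximal/distal tubular, peritubular capillaries, arteries

class DynamicSegNet(nn.Module):
    """Toy class-conditioned ("dynamic") segmentation network; not the published architecture."""
    def __init__(self, in_ch=3, feat=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        # the controller maps the one-hot tissue code to the parameters of a 1x1 segmentation head
        self.controller = nn.Linear(N_TISSUE_TYPES, feat + 1)

    def forward(self, x, class_code):
        f = self.encoder(x)                           # B x feat x H x W
        params = self.controller(class_code)          # B x (feat + 1)
        w, b = params[:, :-1], params[:, -1]
        logits = torch.einsum('bchw,bc->bhw', f, w) + b[:, None, None]
        return logits.unsqueeze(1)                    # B x 1 x H x W: binary map for the coded tissue

def partial_label_loss(logits, mask):
    # each patch is supervised only by the single tissue type that was annotated for it
    return nn.functional.binary_cross_entropy_with_logits(logits, mask)

model = DynamicSegNet()
x = torch.randn(2, 3, 64, 64)                                               # image patches
code = nn.functional.one_hot(torch.tensor([0, 3]), N_TISSUE_TYPES).float()  # the one labeled type per patch
mask = torch.randint(0, 2, (2, 1, 64, 64)).float()                          # its binary annotation
partial_label_loss(model(x, code), mask).backward()

# inference: run the same network once per tissue code and stack the six binary maps
with torch.no_grad():
    full = torch.cat([model(x, c.expand(x.shape[0], -1)) for c in torch.eye(N_TISSUE_TYPES)], dim=1)

The key design point is that the loss never needs more than one tissue annotation per patch, yet the same shared weights serve all six tissue types at test time.
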
Related papers
- Multi-scale Multi-site Renal Microvascular Structures Segmentation for
Whole Slide Imaging in Renal Pathology [4.743463035587953]
We present Omni-Seg, a novel single dynamic network method that capitalizes on multi-site, multi-scale training data.
We train a single deep network using images from two datasets, HuBMAP and NEPTUNE.
Our proposed method provides renal pathologists with a powerful computational tool for the quantitative analysis of renal microvascular structures.
arXiv Detail & Related papers (2023-08-10T16:26:03Z)
- Unsupervised Segmentation of Fetal Brain MRI using Deep Learning Cascaded Registration [2.494736313545503]
Traditional deep learning-based automatic segmentation requires extensive training data with ground-truth labels.
We propose a novel method based on multi-atlas segmentation that accurately segments multiple tissues without relying on labeled data for training.
Our method employs a cascaded deep learning network for 3D image registration, which computes small, incremental deformations to the moving image to align it precisely with the fixed image.
arXiv Detail & Related papers (2023-07-07T13:17:12Z)
- M$^{2}$SNet: Multi-scale in Multi-scale Subtraction Network for Medical Image Segmentation [73.10707675345253]
We propose a general multi-scale in multi-scale subtraction network (M$^2$SNet) to perform diverse segmentation tasks on medical images.
Our method performs favorably against most state-of-the-art methods under different evaluation metrics on eleven datasets of four different medical image segmentation tasks.
arXiv Detail & Related papers (2023-03-20T06:26:49Z)
- Omni-Seg+: A Scale-aware Dynamic Network for Pathological Image Segmentation [13.182646724406291]
The cross-sectional areas of glomeruli can be 64 times larger than those of peritubular capillaries.
We propose the Omni-Seg+ network, a scale-aware dynamic neural network that achieves multi-object (six tissue types) and multi-scale (5X to 40X scale) pathological image segmentation.
arXiv Detail & Related papers (2022-06-27T21:09:55Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Automatic Semantic Segmentation of the Lumbar Spine. Clinical Applicability in a Multi-parametric and Multi-centre MRI study [0.0]
This document describes the topologies and analyses the results of the neural network designs that obtained the most accurate segmentations.
Several of the proposed designs outperform the standard U-Net used as baseline, especially when used in ensembles where the output of multiple neural networks is combined according to different strategies.
arXiv Detail & Related papers (2021-11-16T17:33:05Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Comparisons among different stochastic selection of activation layers for convolutional neural networks for healthcare [77.99636165307996]
We classify biomedical images using ensembles of neural networks.
We select our activations from the following: ReLU, leaky ReLU, Parametric ReLU, ELU, Adaptive Piecewise Linear Unit, S-Shaped ReLU, Swish, Mish, Mexican Linear Unit, Parametric Deformable Linear Unit, Soft Root Sign.
arXiv Detail & Related papers (2020-11-24T01:53:39Z)
- W-Net: Dense Semantic Segmentation of Subcutaneous Tissue in Ultrasound Images by Expanding U-Net to Incorporate Ultrasound RF Waveform Data [2.9023633922848586]
We present W-Net, a novel Convolutional Neural Network (CNN) framework that employs raw ultrasound waveforms from each A-scan.
We seek to label every pixel in the image, without the use of a background class.
We present analysis as to why the Muscle fascia and Fat fascia/stroma are the most difficult tissues to label.
arXiv Detail & Related papers (2020-08-27T23:53:20Z)
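
As a rough, hypothetical illustration of the W-Net entry above (not the published architecture; the branch layouts, class count, and waveform length are assumptions), the sketch below fuses one raw RF waveform per A-scan with B-mode image features before assigning every pixel a tissue class, with no background class.

import torch
import torch.nn as nn

class RFImageFusionSeg(nn.Module):
    """Generic two-branch fusion of a B-mode image with one raw RF waveform per A-scan (illustrative only)."""
    def __init__(self, n_classes=7, feat=16):
        super().__init__()
        self.img_branch = nn.Sequential(nn.Conv2d(1, feat, 3, padding=1), nn.ReLU())
        # encode each column's raw waveform with a small 1D conv, pooled to one vector per A-scan
        self.rf_branch = nn.Sequential(nn.Conv1d(1, feat, 7, padding=3), nn.ReLU(), nn.AdaptiveAvgPool1d(1))
        self.head = nn.Conv2d(2 * feat, n_classes, 1)  # dense labels, no background class

    def forward(self, image, rf):
        # image: B x 1 x H x W (B-mode); rf: B x W x L (one raw waveform per image column / A-scan)
        B, W, L = rf.shape
        f_img = self.img_branch(image)                                # B x feat x H x W
        f_rf = self.rf_branch(rf.reshape(B * W, 1, L))                # (B*W) x feat x 1
        f_rf = f_rf.reshape(B, W, -1).permute(0, 2, 1)                # B x feat x W
        f_rf = f_rf.unsqueeze(2).expand(-1, -1, f_img.shape[2], -1)   # broadcast down each column
        return self.head(torch.cat([f_img, f_rf], dim=1))             # B x n_classes x H x W

logits = RFImageFusionSeg()(torch.randn(2, 1, 64, 64), torch.randn(2, 64, 256))

In this toy fusion the per-column waveform feature is simply broadcast down its A-scan before concatenation; the published W-Net instead expands a U-Net to incorporate the waveform data, as its title indicates.
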
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.