Universal Medical Image Segmentation using 3D Fabric Image
Representation Encoding Networks
- URL: http://arxiv.org/abs/2006.15578v3
- Date: Wed, 5 Oct 2022 04:24:49 GMT
- Title: Universal Medical Image Segmentation using 3D Fabric Image
Representation Encoding Networks
- Authors: Siyu Liu, Wei Dai, Craig Engstrom, Jurgen Fripp, Stuart Crozier, Jason
A. Dowling and Shekhar S. Chandra
- Abstract summary: This work proposes one such network, Fabric Image Representation Encoding Network (FIRENet), for simultaneous 3D multi-dataset segmentation.
In this study, FIRENet was first applied to 3D universal bone segmentation involving multiple datasets of the human knee, shoulder and hip joints.
- Score: 8.691611603448152
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data scarcity is a common issue for deep learning applied to medical image
segmentation. One way to address this problem is to combine multiple datasets
into a large training set and train a unified network that simultaneously
learns from these datasets. This work proposes one such network, Fabric Image
Representation Encoding Network (FIRENet), for simultaneous 3D multi-dataset
segmentation. As medical image datasets can be extremely diverse in size and
voxel spacing, FIRENet uses a 3D fabric latent module, which automatically
encapsulates many multi-scale sub-architectures. An optimal combination of
these sub-architectures is implicitly learnt to enhance the performance across
many datasets. To further promote diverse-scale 3D feature extraction, a 3D
extension of atrous spatial pyramid pooling is used within each fabric node to
provide a finer coverage of rich-scale image features. In this study, FIRENet
was first applied to 3D universal bone segmentation involving multiple
musculoskeletal datasets of the human knee, shoulder and hip joints. FIRENet
exhibited excellent universal bone segmentation performance across all the
different joint datasets. When transfer learning was used, FIRENet exhibited
both excellent single-dataset performance during pre-training (on a prostate
dataset) and significantly improved universal bone segmentation performance.
In a subsequent experiment involving the simultaneous segmentation of the 10
Medical Segmentation Decathlon (MSD) challenge datasets, FIRENet produced good
multi-dataset segmentation results and demonstrated excellent inter-dataset
adaptability despite highly diverse image sizes and features. Across these
experiments, FIRENet's versatile design streamlined multi-dataset segmentation
into one unified network, whereas similar tasks would traditionally require
multiple separately trained networks.
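The abstract describes a 3D fabric latent module whose nodes each contain a 3D extension of atrous spatial pyramid pooling (ASPP). As a rough illustration of that building block only, the following is a minimal PyTorch sketch of a 3D ASPP module; the `ASPP3D` name, dilation rates and channel widths are illustrative assumptions, not FIRENet's actual implementation.

```python
import torch
import torch.nn as nn


class ASPP3D(nn.Module):
    """Minimal 3D atrous spatial pyramid pooling block (illustrative sketch).

    Parallel 3D convolutions with different dilation rates capture features at
    multiple scales; their outputs are concatenated and fused by a 1x1x1
    convolution. Rates and channel widths are assumptions, not FIRENet's values.
    """

    def __init__(self, in_ch: int, out_ch: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv3d(in_ch, out_ch, kernel_size=3,
                          padding=r, dilation=r, bias=False),
                nn.BatchNorm3d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        # Fuse the multi-scale branches back into a single feature map.
        self.project = nn.Sequential(
            nn.Conv3d(out_ch * len(rates), out_ch, kernel_size=1, bias=False),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))


if __name__ == "__main__":
    # Toy volume: batch of 1, 16 channels, 32^3 voxels.
    x = torch.randn(1, 16, 32, 32, 32)
    y = ASPP3D(in_ch=16, out_ch=16)(x)
    print(y.shape)  # torch.Size([1, 16, 32, 32, 32])
```

The standard ASPP design also includes a 1x1 branch and an image-level pooling branch; both are omitted here for brevity.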
Related papers
- Few-Shot 3D Volumetric Segmentation with Multi-Surrogate Fusion [31.736235596070937]
We present MSFSeg, a novel few-shot 3D segmentation framework with a lightweight multi-surrogate fusion (MSF) module.
MSFSeg is able to automatically segment unseen 3D objects/organs (during training) provided with one or a few annotated 2D slices or 3D sequence segments.
Our proposed MSF module mines comprehensive and diversified correlations between the unlabeled and the few labeled slices/sequences through multiple designated surrogates.
arXiv Detail & Related papers (2024-08-26T17:15:37Z) - M$^{2}$SNet: Multi-scale in Multi-scale Subtraction Network for Medical
Image Segmentation [73.10707675345253]
We propose a general multi-scale in multi-scale subtraction network (M$^{2}$SNet) to perform diverse segmentation tasks on medical images.
Our method performs favorably against most state-of-the-art methods under different evaluation metrics on eleven datasets of four different medical image segmentation tasks.
arXiv Detail & Related papers (2023-03-20T06:26:49Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image
Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - Universal Segmentation of 33 Anatomies [19.194539991903593]
We present an approach for learning a single model that universally segments 33 anatomical structures.
We learn such a model from a union of multiple datasets, with each dataset containing partially labeled images.
We evaluate our model on multiple open-source datasets, showing that it generalizes well.
arXiv Detail & Related papers (2022-03-04T02:29:54Z) - Shape-consistent Generative Adversarial Networks for multi-modal Medical
segmentation maps [10.781866671930857]
We present a segmentation network using synthesised cardiac volumes for extremely limited datasets.
Our solution is based on a 3D cross-modality generative adversarial network to share information between modalities.
We show that improved segmentation can be achieved on small datasets when using spatial augmentations.
arXiv Detail & Related papers (2022-01-24T13:57:31Z) - Multi-dataset Pretraining: A Unified Model for Semantic Segmentation [97.61605021985062]
We propose a unified framework, termed Multi-Dataset Pretraining, to take full advantage of the fragmented annotations of different datasets.
This is achieved by first pretraining the network via the proposed pixel-to-prototype contrastive loss over multiple datasets.
In order to better model the relationship among images and classes from different datasets, we extend the pixel level embeddings via cross dataset mixing.
arXiv Detail & Related papers (2021-06-08T06:13:11Z) - MixSearch: Searching for Domain Generalized Medical Image Segmentation
Architectures [37.232192775864576]
We propose a novel approach to mix small-scale datasets from multiple domains and segmentation tasks to produce a large-scale dataset.
A novel encoder-decoder structure is designed to search for a generalized segmentation network at both the cell level and the network level.
The network produced by the proposed MixSearch framework achieves state-of-the-art results compared with advanced encoder-decoder networks.
arXiv Detail & Related papers (2021-02-26T02:55:28Z) - DoDNet: Learning to segment multi-organ and tumors from multiple
partially labeled datasets [102.55303521877933]
We propose a dynamic on-demand network (DoDNet) that learns to segment multiple organs and tumors on partially labelled datasets.
DoDNet consists of a shared encoder-decoder architecture, a task encoding module, a controller for generating dynamic convolution filters, and a single but dynamic segmentation head (a rough sketch of this dynamic-filter idea appears after this list).
arXiv Detail & Related papers (2020-11-20T04:56:39Z) - Multi-Domain Image Completion for Random Missing Input Data [17.53581223279953]
Multi-domain data are widely leveraged in vision applications taking advantage of complementary information from different modalities.
Due to possible data corruption and different imaging protocols, the availability of images for each domain could vary amongst multiple data sources.
We propose a general approach to complete the random missing domain(s) data in real applications.
arXiv Detail & Related papers (2020-07-10T16:38:48Z) - MS-Net: Multi-Site Network for Improving Prostate Segmentation with
Heterogeneous MRI Data [75.73881040581767]
We propose a novel multi-site network (MS-Net) for improving prostate segmentation by learning robust representations.
Our MS-Net improves the performance across all datasets consistently, and outperforms state-of-the-art methods for multi-site learning.
arXiv Detail & Related papers (2020-02-09T14:11:50Z) - Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters, by sharing all convolutional kernels across CT and MRI.
We have extensively validated our approach on two multi-class segmentation problems.
arXiv Detail & Related papers (2020-01-06T20:03:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.