Few-shot image segmentation for cross-institution male pelvic organs using registration-assisted prototypical learning
- URL: http://arxiv.org/abs/2201.06358v1
- Date: Mon, 17 Jan 2022 11:44:10 GMT
- Title: Few-shot image segmentation for cross-institution male pelvic organs using registration-assisted prototypical learning
- Authors: Yiwen Li, Yunguan Fu, Qianye Yang, Zhe Min, Wen Yan, Henkjan Huisman,
Dean Barratt, Victor Adrian Prisacariu, Yipeng Hu
- Abstract summary: This work presents the first 3D few-shot interclass segmentation network for medical images.
It uses a labelled multi-institution dataset from prostate cancer patients with eight regions of interest.
A built-in registration mechanism can effectively utilise the prior knowledge of consistent anatomy between subjects.
- Score: 13.567073992605797
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability to adapt medical image segmentation networks for a novel class
such as an unseen anatomical or pathological structure, when only a few
labelled examples of this class are available from local healthcare providers,
is sought-after. This potentially addresses two widely recognised limitations
in deploying modern deep learning models to clinical practice,
expertise-and-labour-intensive labelling and cross-institution generalisation.
This work presents the first 3D few-shot interclass segmentation network for
medical images, using a labelled multi-institution dataset from prostate cancer
patients with eight regions of interest. We propose an image alignment module
registering the predicted segmentation of both query and support data, in a
standard prototypical learning algorithm, to a reference atlas space. The
built-in registration mechanism can effectively utilise the prior knowledge of
consistent anatomy between subjects, regardless of whether they are from the same
institution or not. Experimental results demonstrated that the proposed
registration-assisted prototypical learning significantly improved segmentation
accuracy (p-values < 0.01) on query data from a holdout institution, with varying
availability of support data from multiple institutions. We also report the
additional benefits of the proposed 3D networks with 75% fewer parameters and
an arguably simpler implementation, compared with existing 2D few-shot
approaches that segment 2D slices of volumetric medical images.
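The abstract describes a standard prototypical learning algorithm, augmented with registration to an atlas space. The prototypical half is conventional: support features inside the labelled region are pooled into a class prototype, and query voxels are scored by similarity to that prototype. Below is a minimal NumPy sketch of that idea on toy 3D volumes; the feature shapes, masked average pooling, and cosine scoring are standard prototypical-segmentation conventions, not details taken from this paper, and the registration step is omitted.

```python
import numpy as np

def masked_average_prototype(feats, mask):
    """Masked average pooling: collapse support features inside the
    labelled region into a single class prototype vector.
    feats: (C, D, H, W) feature volume; mask: (D, H, W) binary label."""
    w = mask.astype(feats.dtype)
    return (feats * w).reshape(feats.shape[0], -1).sum(axis=1) / (w.sum() + 1e-8)

def cosine_similarity_map(feats, prototype):
    """Score every query voxel by cosine similarity to the prototype."""
    f = feats.reshape(feats.shape[0], -1)
    f = f / (np.linalg.norm(f, axis=0, keepdims=True) + 1e-8)
    p = prototype / (np.linalg.norm(prototype) + 1e-8)
    return (p @ f).reshape(feats.shape[1:])

# Toy 3D example: channel 0 of the feature volume encodes the class.
rng = np.random.default_rng(0)
C, D, H, W = 4, 2, 4, 4
support_mask = np.zeros((D, H, W))
support_mask[0, :2, :2] = 1.0
support_feats = rng.normal(0.0, 0.1, (C, D, H, W))
support_feats[0] += support_mask

proto = masked_average_prototype(support_feats, support_mask)

query_mask = np.zeros((D, H, W))
query_mask[1, 2:, 2:] = 1.0
query_feats = rng.normal(0.0, 0.1, (C, D, H, W))
query_feats[0] += query_mask
sim = cosine_similarity_map(query_feats, proto)  # high inside the organ region
```

In the paper's setting, both support and query volumes would additionally be warped to a common atlas space before pooling, so that the prototype and the query features are anatomically aligned across institutions.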
Related papers
- ProMISe: Prompt-driven 3D Medical Image Segmentation Using Pretrained Image Foundation Models [13.08275555017179]
We propose ProMISe, a prompt-driven 3D medical image segmentation model using only a single point prompt.
We evaluate our model on two public datasets for colon and pancreas tumor segmentations.
arXiv Detail & Related papers (2023-10-30T16:49:03Z)
- PCDAL: A Perturbation Consistency-Driven Active Learning Approach for Medical Image Segmentation and Classification [12.560273908522714]
Supervised learning deeply relies on large-scale annotated data, which is expensive, time-consuming, and impractical to acquire in medical imaging applications.
Active Learning (AL) methods have been widely applied in natural image classification tasks to reduce annotation costs.
We propose an AL-based method that can be simultaneously applied to 2D medical image classification, segmentation, and 3D medical image segmentation tasks.
arXiv Detail & Related papers (2023-06-29T13:11:46Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels [54.58539616385138]
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA)
First, prior work argues that every pixel equally matters to the model training; we observe empirically that this alone is unlikely to define meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z)
- Prototypical few-shot segmentation for cross-institution male pelvic structures with spatial registration [24.089382725904304]
This work describes a fully 3D few-shot segmentation algorithm.
The trained networks can be effectively adapted to clinically interesting structures that are absent in training.
Experiments are presented in an application of segmenting eight anatomical structures important for interventional planning.
arXiv Detail & Related papers (2022-09-12T11:34:57Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Multi-organ Segmentation Network with Adversarial Performance Validator [10.775440368500416]
This paper introduces an adversarial performance validation network into a 2D-to-3D segmentation framework.
The proposed network converts the 2D-coarse result to 3D high-quality segmentation masks in a coarse-to-fine manner, allowing joint optimization to improve segmentation accuracy.
Experiments on the NIH pancreas segmentation dataset demonstrate the proposed network achieves state-of-the-art accuracy on small organ segmentation and outperforms the previous best.
arXiv Detail & Related papers (2022-04-16T18:00:29Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
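The summary above mentions encouraging clustering of feature domains of the same class, without specifying the objective. One common way to express such a constraint is an intra-class compactness penalty; the NumPy sketch below is an illustrative stand-in under that assumption, not the paper's actual loss.

```python
import numpy as np

def compactness_loss(features, labels):
    """Illustrative intra-class compactness penalty: mean squared distance
    of each embedding to its class centroid. Lower values mean same-class
    features cluster more tightly in embedding space."""
    loss = 0.0
    for c in np.unique(labels):
        group = features[labels == c]         # (n_c, C) embeddings of class c
        loss += np.sum((group - group.mean(axis=0)) ** 2)
    return loss / len(features)

labels = np.array([0, 0, 1, 1])
tight = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.0, 1.1]])
loose = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.0], [3.0, 1.0]])
```

Minimising such a term alongside the segmentation loss pulls embeddings of the same class together, which is one way to make prototype-based matching more discriminative.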
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Bidirectional RNN-based Few Shot Learning for 3D Medical Image Segmentation [11.873435088539459]
We propose a 3D few shot segmentation framework for accurate organ segmentation using limited training samples of the target organ annotation.
A U-Net like network is designed to predict segmentation by learning the relationship between 2D slices of support data and a query image.
We evaluate our proposed model using three 3D CT datasets with annotations of different organs.
arXiv Detail & Related papers (2020-11-19T01:44:55Z)
- Deep Q-Network-Driven Catheter Segmentation in 3D US by Hybrid Constrained Semi-Supervised Learning and Dual-UNet [74.22397862400177]
We propose a novel catheter segmentation approach, which requests fewer annotations than the supervised learning method.
Our scheme considers a deep Q learning as the pre-localization step, which avoids voxel-level annotation.
With the detected catheter, patch-based Dual-UNet is applied to segment the catheter in 3D volumetric data.
arXiv Detail & Related papers (2020-06-25T21:10:04Z)
- Robust Medical Instrument Segmentation Challenge 2019 [56.148440125599905]
Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer and robotic-assisted interventions.
Our challenge was based on a surgical data set comprising 10,040 annotated images acquired from a total of 30 surgical procedures.
The results confirm the initial hypothesis, namely that algorithm performance degrades with an increasing domain gap.
arXiv Detail & Related papers (2020-03-23T14:35:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.