Prototypical few-shot segmentation for cross-institution male pelvic
structures with spatial registration
- URL: http://arxiv.org/abs/2209.05160v3
- Date: Fri, 25 Aug 2023 13:17:46 GMT
- Title: Prototypical few-shot segmentation for cross-institution male pelvic
structures with spatial registration
- Authors: Yiwen Li, Yunguan Fu, Iani Gayo, Qianye Yang, Zhe Min, Shaheer Saeed,
Wen Yan, Yipei Wang, J. Alison Noble, Mark Emberton, Matthew J. Clarkson,
Henkjan Huisman, Dean Barratt, Victor Adrian Prisacariu, Yipeng Hu
- Abstract summary: This work describes a fully 3D few-shot segmentation algorithm.
The trained networks can be effectively adapted to clinically interesting structures that are absent in training.
Experiments are presented in an application of segmenting eight anatomical structures important for interventional planning.
- Score: 24.089382725904304
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The prowess that makes few-shot learning desirable in medical image analysis
is the efficient use of the support image data, which are labelled to classify
or segment new classes, a task that otherwise requires substantially more
training images and expert annotations. This work describes a fully 3D
prototypical few-shot segmentation algorithm, such that the trained networks
can be effectively adapted to clinically interesting structures that are absent
in training, using only a few labelled images from a different institute.
First, to compensate for the widely recognised spatial variability between
institutions in episodic adaptation of novel classes, a novel spatial
registration mechanism is integrated into prototypical learning, consisting of
a segmentation head and a spatial alignment module. Second, to assist
training when the observed alignment is imperfect, a support mask conditioning
module is proposed to further utilise the annotation available from the support images.
Extensive experiments are presented in an application of segmenting eight
anatomical structures important for interventional planning, using a data set
of 589 pelvic T2-weighted MR images, acquired at seven institutes. The results
demonstrate the efficacy of each of the 3D formulation, the spatial
registration, and the support mask conditioning, all of which made positive
contributions independently or collectively. Compared with the previously
proposed 2D alternatives, the few-shot segmentation performance was improved
with statistical significance, regardless of whether the support data came from
the same or different institutes.
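To make the prototypical formulation concrete, the following is a minimal sketch in PyTorch, not the authors' released implementation: class prototypes are computed by masked average pooling over 3D support features, query voxels are scored by cosine similarity to those prototypes, and support mask conditioning is approximated here as channel concatenation of an already-aligned support mask. All function names, tensor shapes, and the temperature value are illustrative assumptions.

```python
# Minimal sketch (not the authors' released code) of the prototypical core:
# masked average pooling of 3D support features into class prototypes,
# cosine-similarity scoring of query voxels, and support-mask conditioning
# approximated as channel concatenation. Names and shapes are assumptions.
import torch
import torch.nn.functional as F


def masked_average_pooling(features, mask):
    """Pool a class prototype from 3D support features under a binary mask.

    features: (B, C, D, H, W) support feature volume
    mask:     (B, 1, D, H, W) binary support annotation for one class
    returns:  (B, C) prototype per support volume
    """
    mask = F.interpolate(mask, size=features.shape[2:], mode="trilinear",
                         align_corners=False)
    num = (features * mask).sum(dim=(2, 3, 4))
    den = mask.sum(dim=(2, 3, 4)).clamp(min=1e-6)
    return num / den


def prototype_segmentation(query_features, prototypes, temperature=20.0):
    """Score each query voxel by cosine similarity to the class prototypes.

    query_features: (B, C, D, H, W)
    prototypes:     (K, C), one row per class (background included)
    returns:        (B, K, D, H, W) soft segmentation
    """
    q = F.normalize(query_features, dim=1)
    p = F.normalize(prototypes, dim=1)
    sim = torch.einsum("bcdhw,kc->bkdhw", q, p)          # cosine similarity
    return torch.softmax(temperature * sim, dim=1)


def condition_on_support_mask(query_features, aligned_support_mask):
    """Support-mask conditioning, sketched as concatenating a (spatially
    aligned) support mask to the query features as an extra channel; a small
    3D convolution would typically fuse it back to the original width.
    """
    mask = F.interpolate(aligned_support_mask, size=query_features.shape[2:],
                         mode="trilinear", align_corners=False)
    return torch.cat([query_features, mask], dim=1)


if __name__ == "__main__":
    support_feat = torch.randn(1, 32, 12, 48, 48)        # toy encoder output
    query_feat = torch.randn(1, 32, 12, 48, 48)
    support_mask = (torch.rand(1, 1, 24, 96, 96) > 0.5).float()

    fg = masked_average_pooling(support_feat, support_mask)        # (1, 32)
    bg = masked_average_pooling(support_feat, 1.0 - support_mask)  # (1, 32)
    scores = prototype_segmentation(query_feat, torch.cat([bg, fg], dim=0))
    print(scores.shape)                                  # (1, 2, 12, 48, 48)
```

Cosine similarity with a fixed temperature is the usual choice in prototypical segmentation networks; the abstract's contribution is to run this in full 3D and, presumably, to align support and query anatomy before the pooling and conditioning steps shown above.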
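The spatial alignment step itself can be pictured as warping the support volume (or its mask) with a dense displacement field, as is standard in learning-based registration. The sketch below is again an illustrative assumption rather than the paper's architecture: it shows only the resampling step with torch.nn.functional.grid_sample, leaving the module that predicts the displacement field abstract.

```python
# Illustrative assumption: warp a 3D support volume or mask into the query
# space with a dense per-voxel displacement field, as in typical
# learning-based registration. Not the paper's exact alignment module.
import torch
import torch.nn.functional as F


def warp_volume(volume, displacement):
    """Resample `volume` with a dense displacement field.

    volume:       (B, C, D, H, W) support image, feature map, or mask
    displacement: (B, 3, D, H, W) per-voxel offsets, channels assumed (dx, dy, dz)
    returns:      (B, C, D, H, W) warped volume
    """
    B, _, D, H, W = volume.shape
    device = volume.device
    # Identity sampling grid in normalised [-1, 1] coordinates; grid_sample
    # expects the last dimension ordered (x, y, z) for 5D inputs.
    zs = torch.linspace(-1.0, 1.0, D, device=device)
    ys = torch.linspace(-1.0, 1.0, H, device=device)
    xs = torch.linspace(-1.0, 1.0, W, device=device)
    z, y, x = torch.meshgrid(zs, ys, xs, indexing="ij")
    grid = torch.stack([x, y, z], dim=-1)                  # (D, H, W, 3)
    grid = grid.unsqueeze(0).expand(B, -1, -1, -1, -1)     # (B, D, H, W, 3)
    # Convert voxel offsets to normalised offsets (align_corners=True scaling).
    scale = torch.tensor(
        [2.0 / max(W - 1, 1), 2.0 / max(H - 1, 1), 2.0 / max(D - 1, 1)],
        device=device,
    )
    offset = displacement.permute(0, 2, 3, 4, 1) * scale   # (B, D, H, W, 3)
    return F.grid_sample(volume, grid + offset, mode="bilinear",
                         padding_mode="border", align_corners=True)


if __name__ == "__main__":
    mask = (torch.rand(1, 1, 24, 96, 96) > 0.5).float()
    ddf = torch.zeros(1, 3, 24, 96, 96)     # zero field = identity warp
    assert torch.allclose(warp_volume(mask, ddf), mask, atol=1e-4)
```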
Related papers
- One registration is worth two segmentations [12.163299991979574]
The goal of image registration is to establish spatial correspondence between two or more images.
We propose an alternative but more intuitive correspondence representation: a set of corresponding regions-of-interest (ROI) pairs.
We experimentally show that the proposed SAMReg is capable of segmenting and matching multiple ROI pairs.
arXiv Detail & Related papers (2024-05-17T16:14:32Z) - OneSeg: Self-learning and One-shot Learning based Single-slice
Annotation for 3D Medical Image Segmentation [36.50258132379276]
We propose a self-learning and one-shot learning based framework for 3D medical image segmentation by annotating only one slice of each 3D image.
Our approach takes two steps: (1) self-learning of a reconstruction network to learn semantic correspondence among 2D slices within 3D images, and (2) representative selection of single slices for one-shot manual annotation.
Our new framework achieves comparable performance with less than 1% annotated data compared with fully supervised methods.
arXiv Detail & Related papers (2023-09-24T15:35:58Z) - 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the segment anything model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model can outperform domain state-of-the-art medical image segmentation models on 3 out of 4 tasks, specifically by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and colon cancer segmentation, and achieve similar performance for liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z) - Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels [54.58539616385138]
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA).
First, prior work argues that every pixel equally matters to the model training; we observe empirically that this alone is unlikely to define meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z) - Few-shot image segmentation for cross-institution male pelvic organs
using registration-assisted prototypical learning [13.567073992605797]
This work presents the first 3D few-shot interclass segmentation network for medical images.
It uses a labelled multi-institution dataset from prostate cancer patients with eight regions of interest.
A built-in registration mechanism can effectively utilise the prior knowledge of consistent anatomy between subjects.
arXiv Detail & Related papers (2022-01-17T11:44:10Z) - Few-shot Medical Image Segmentation using a Global Correlation Network
with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z) - Bidirectional RNN-based Few Shot Learning for 3D Medical Image
Segmentation [11.873435088539459]
We propose a 3D few-shot segmentation framework for accurate organ segmentation using limited training samples of the target organ annotation.
A U-Net like network is designed to predict segmentation by learning the relationship between 2D slices of support data and a query image.
We evaluate our proposed model using three 3D CT datasets with annotations of different organs.
arXiv Detail & Related papers (2020-11-19T01:44:55Z) - JSSR: A Joint Synthesis, Segmentation, and Registration System for 3D
Multi-Modal Image Alignment of Large-scale Pathological CT Scans [27.180136688977512]
We propose a novel multi-task learning system, JSSR, based on an end-to-end 3D convolutional neural network.
The system is optimized to satisfy the implicit constraints between different tasks in an unsupervised manner.
It consistently outperforms conventional state-of-the-art multi-modal registration methods.
arXiv Detail & Related papers (2020-05-25T16:30:02Z) - Learning Deformable Image Registration from Optimization: Perspective,
Modules, Bilevel Training and Beyond [62.730497582218284]
We develop a new deep learning based framework to optimize a diffeomorphic model via multi-scale propagation.
We conduct two groups of image registration experiments on 3D volume datasets including image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data.
arXiv Detail & Related papers (2020-04-30T03:23:45Z) - Pathological Retinal Region Segmentation From OCT Images Using Geometric
Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z) - Unsupervised Bidirectional Cross-Modality Adaptation via Deeply
Synergistic Image and Feature Alignment for Medical Image Segmentation [73.84166499988443]
We present a novel unsupervised domain adaptation framework, named Synergistic Image and Feature Alignment (SIFA).
Our proposed SIFA conducts synergistic alignment of domains from both image and feature perspectives.
Experimental results on two different tasks demonstrate that our SIFA method is effective in improving segmentation performance on unlabeled target images.
arXiv Detail & Related papers (2020-02-06T13:49:47Z)