Prototypical few-shot segmentation for cross-institution male pelvic
structures with spatial registration
- URL: http://arxiv.org/abs/2209.05160v3
- Date: Fri, 25 Aug 2023 13:17:46 GMT
- Title: Prototypical few-shot segmentation for cross-institution male pelvic
structures with spatial registration
- Authors: Yiwen Li, Yunguan Fu, Iani Gayo, Qianye Yang, Zhe Min, Shaheer Saeed,
Wen Yan, Yipei Wang, J. Alison Noble, Mark Emberton, Matthew J. Clarkson,
Henkjan Huisman, Dean Barratt, Victor Adrian Prisacariu, Yipeng Hu
- Abstract summary: This work describes a fully 3D few-shot segmentation algorithm.
The trained networks can be effectively adapted to clinically interesting structures that are absent in training.
Experiments are presented in an application of segmenting eight anatomical structures important for interventional planning.
- Score: 24.089382725904304
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The prowess that makes few-shot learning desirable in medical image analysis
is the efficient use of the support image data, which are labelled to classify
or segment new classes, a task that otherwise requires substantially more
training images and expert annotations. This work describes a fully 3D
prototypical few-shot segmentation algorithm, such that the trained networks
can be effectively adapted to clinically interesting structures that are absent
in training, using only a few labelled images from a different institute.
First, to compensate for the widely recognised spatial variability between
institutions in episodic adaptation of novel classes, a novel spatial
registration mechanism is integrated into prototypical learning, consisting of
a segmentation head and a spatial alignment module. Second, to assist the
training with observed imperfect alignment, a support mask conditioning module
is proposed to further utilise the annotation available from the support images.
Extensive experiments are presented in an application of segmenting eight
anatomical structures important for interventional planning, using a data set
of 589 pelvic T2-weighted MR images, acquired at seven institutes. The results
demonstrate the efficacy in each of the 3D formulation, the spatial
registration, and the support mask conditioning, all of which made positive
contributions independently or collectively. Compared with the previously
proposed 2D alternatives, the few-shot segmentation performance was improved
with statistical significance, regardless of whether the support data come from
the same or different institutes.
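The prototypical formulation summarised above is commonly implemented by masked average pooling: a class prototype is the mean of support feature vectors under the support annotation, and query voxels are labelled by their similarity to that prototype. The sketch below is illustrative only, assuming NumPy arrays and cosine similarity; the function names, tensor shapes, and threshold are hypothetical and do not reproduce the authors' implementation, which additionally includes the registration and support mask conditioning modules.

```python
import numpy as np

def masked_average_pooling(features, mask):
    """Compute a class prototype from a support volume.

    features: (C, D, H, W) support feature volume (hypothetical shape)
    mask:     (D, H, W) binary support annotation for the novel class
    Returns a (C,) prototype: the mean feature vector inside the mask.
    """
    mask = mask.astype(features.dtype)
    num = (features * mask[None]).sum(axis=(1, 2, 3))  # sum of masked features
    den = mask.sum() + 1e-8                            # number of masked voxels
    return num / den

def prototype_segmentation(query_features, prototype, threshold=0.5):
    """Label query voxels by cosine similarity to the prototype.

    query_features: (C, D, H, W) query feature volume
    Returns a (D, H, W) boolean mask (threshold is an illustrative choice).
    """
    c = query_features.shape[0]
    q = query_features.reshape(c, -1)                  # (C, D*H*W)
    q_norm = np.linalg.norm(q, axis=0) + 1e-8
    p_norm = np.linalg.norm(prototype) + 1e-8
    sim = (prototype @ q) / (q_norm * p_norm)          # cosine similarity per voxel
    return (sim >= threshold).reshape(query_features.shape[1:])

# Toy usage with random features standing in for a trained 3D encoder's output.
rng = np.random.default_rng(0)
support_feats = rng.normal(size=(16, 4, 8, 8))
support_mask = np.zeros((4, 8, 8))
support_mask[1:3, 2:6, 2:6] = 1.0
proto = masked_average_pooling(support_feats, support_mask)
query_feats = rng.normal(size=(16, 4, 8, 8))
pred = prototype_segmentation(query_feats, proto, threshold=0.0)
```

In the fully 3D setting described in the abstract, the pooling and similarity operate on whole volumes rather than per-slice features, which is what distinguishes this formulation from the 2D alternatives it is compared against.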
Related papers
- Medical Image Registration Meets Vision Foundation Model: Prototype Learning and Contour Awareness [11.671950446844356]
Existing deformable registration methods rely solely on intensity-based similarity metrics, lacking explicit anatomical knowledge.
We propose a novel SAM-assisted registration framework incorporating prototype learning and contour awareness.
Our framework significantly outperforms existing methods across multiple datasets.
arXiv Detail & Related papers (2025-02-17T04:54:47Z)
- Medical Semantic Segmentation with Diffusion Pretrain [1.9415817267757087]
Recent advances in deep learning have shown that learning robust feature representations is critical for the success of many computer vision tasks.
We propose a novel pretraining strategy using diffusion models with anatomical guidance, tailored to the intricacies of 3D medical image data.
We employ an additional model that predicts 3D universal body-part coordinates, providing guidance during the diffusion process.
arXiv Detail & Related papers (2025-01-31T16:25:49Z)
- OneSeg: Self-learning and One-shot Learning based Single-slice Annotation for 3D Medical Image Segmentation [36.50258132379276]
We propose a self-learning and one-shot learning based framework for 3D medical image segmentation by annotating only one slice of each 3D image.
Our approach takes two steps: (1) self-learning of a reconstruction network to learn semantic correspondence among 2D slices within 3D images, and (2) representative selection of single slices for one-shot manual annotation.
Our new framework achieves comparable performance with less than 1% annotated data compared with fully supervised methods.
arXiv Detail & Related papers (2023-09-24T15:35:58Z)
- 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the segment anything model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model outperforms state-of-the-art medical image segmentation models on 3 out of 4 tasks, improving by 8.25%, 29.87%, and 10.11% on kidney tumor, pancreas tumor, and colon cancer segmentation respectively, and achieves similar performance on liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z)
- Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels [54.58539616385138]
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA).
First, prior work argues that every pixel equally matters to the model training; we observe empirically that this alone is unlikely to define meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z)
- Few-shot image segmentation for cross-institution male pelvic organs using registration-assisted prototypical learning [13.567073992605797]
This work presents the first 3D few-shot interclass segmentation network for medical images.
It uses a labelled multi-institution dataset from prostate cancer patients with eight regions of interest.
A built-in registration mechanism can effectively utilise the prior knowledge of consistent anatomy between subjects.
arXiv Detail & Related papers (2022-01-17T11:44:10Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Bidirectional RNN-based Few Shot Learning for 3D Medical Image Segmentation [11.873435088539459]
We propose a 3D few shot segmentation framework for accurate organ segmentation using limited training samples of the target organ annotation.
A U-Net like network is designed to predict segmentation by learning the relationship between 2D slices of support data and a query image.
We evaluate our proposed model using three 3D CT datasets with annotations of different organs.
arXiv Detail & Related papers (2020-11-19T01:44:55Z)
- Learning Deformable Image Registration from Optimization: Perspective, Modules, Bilevel Training and Beyond [62.730497582218284]
We develop a new deep learning based framework to optimize a diffeomorphic model via multi-scale propagation.
We conduct two groups of image registration experiments on 3D volume datasets including image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data.
arXiv Detail & Related papers (2020-04-30T03:23:45Z)
- Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
- Unsupervised Bidirectional Cross-Modality Adaptation via Deeply Synergistic Image and Feature Alignment for Medical Image Segmentation [73.84166499988443]
We present a novel unsupervised domain adaptation framework, named Synergistic Image and Feature Alignment (SIFA).
Our proposed SIFA conducts synergistic alignment of domains from both image and feature perspectives.
Experimental results on two different tasks demonstrate that our SIFA method is effective in improving segmentation performance on unlabeled target images.
arXiv Detail & Related papers (2020-02-06T13:49:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.