LT-Net: Label Transfer by Learning Reversible Voxel-wise Correspondence
for One-shot Medical Image Segmentation
- URL: http://arxiv.org/abs/2003.07072v3
- Date: Fri, 20 Mar 2020 04:50:48 GMT
- Title: LT-Net: Label Transfer by Learning Reversible Voxel-wise Correspondence
for One-shot Medical Image Segmentation
- Authors: Shuxin Wang, Shilei Cao, Dong Wei, Renzhen Wang, Kai Ma, Liansheng
Wang, Deyu Meng, and Yefeng Zheng
- Abstract summary: We introduce a one-shot segmentation method to alleviate the burden of manual annotation for medical images.
The main idea is to treat one-shot segmentation as a classical atlas-based segmentation problem, where voxel-wise correspondence from the atlas to the unlabelled data is learned.
We demonstrate the superiority of our method over both deep learning-based one-shot segmentation methods and a classical multi-atlas segmentation method via thorough experiments.
- Score: 52.2074595581139
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a one-shot segmentation method to alleviate the burden of manual
annotation for medical images. The main idea is to treat one-shot segmentation
as a classical atlas-based segmentation problem, where voxel-wise
correspondence from the atlas to the unlabelled data is learned. Subsequently,
the segmentation label of the atlas can be transferred to the unlabelled data with
the learned correspondence. However, since ground truth correspondence between
images is usually unavailable, the learning system must be well-supervised to
avoid mode collapse and convergence failure. To overcome this difficulty, we
resort to the forward-backward consistency, which is widely used in
correspondence problems, and additionally learn the backward correspondences
from the warped atlases back to the original atlas. This cycle-correspondence
learning design enables a variety of extra, cycle-consistency-based supervision
signals that stabilize the training process while also boosting performance.
We demonstrate the superiority of our method over both deep learning-based
one-shot segmentation methods and a classical multi-atlas segmentation method
via thorough experiments.
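To make the label-transfer and forward-backward consistency ideas concrete, here is a minimal sketch, assuming the forward and backward correspondences are already available as dense voxel-wise displacement fields (in LT-Net they are predicted by the learned correspondence network). The helper names (`warp`, `transfer_label`, `cycle_consistency_penalty`) are illustrative, not the authors' code.

```python
# Minimal sketch of atlas-based label transfer with forward/backward displacement
# fields. An illustration of the idea only, not the authors' implementation.
import numpy as np
from scipy.ndimage import map_coordinates

def warp(volume, disp, order=1):
    """Warp a 3D volume with a dense displacement field of shape (3, D, H, W)."""
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in volume.shape], indexing="ij"))
    coords = grid + disp  # voxel-wise correspondence: x -> x + u(x)
    return map_coordinates(volume, coords, order=order, mode="nearest")

def transfer_label(atlas_label, fwd_disp):
    # Nearest-neighbour interpolation (order=0) keeps transferred labels integer-valued.
    return warp(atlas_label, fwd_disp, order=0)

def cycle_consistency_penalty(fwd_disp, bwd_disp):
    # Composing the forward and backward correspondences should bring every voxel
    # back to its starting position; the mean squared residual measures the violation.
    bwd_at_warped = np.stack([warp(bwd_disp[c], fwd_disp) for c in range(3)])
    residual = fwd_disp + bwd_at_warped
    return float(np.mean(residual ** 2))
```

In the paper itself, this kind of cycle consistency is imposed as extra supervision while training the correspondence network, rather than evaluated after the fact as in this sketch.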
Related papers
- Auxiliary Tasks Enhanced Dual-affinity Learning for Weakly Supervised
Semantic Segmentation [79.05949524349005]
We propose AuxSegNet+, a weakly supervised auxiliary learning framework to explore the rich information from saliency maps.
We also propose a cross-task affinity learning mechanism to learn pixel-level affinities from the saliency and segmentation feature maps.
arXiv Detail & Related papers (2024-03-02T10:03:21Z)
- Semantic Contrastive Bootstrapping for Single-positive Multi-label
Recognition [36.3636416735057]
We present a semantic contrastive bootstrapping (Scob) approach to gradually recover the cross-object relationships.
We then propose a recurrent semantic masked transformer to extract iconic object-level representations.
Extensive experimental results demonstrate that the proposed joint learning framework surpasses the state-of-the-art models.
arXiv Detail & Related papers (2023-07-15T01:59:53Z)
- Class Enhancement Losses with Pseudo Labels for Zero-shot Semantic
Segmentation [40.09476732999614]
Mask proposal models have significantly improved the performance of zero-shot semantic segmentation.
The use of a 'background' embedding during training in these methods is problematic, as the resulting model tends to over-learn and assign all unseen classes to the background class instead of their correct labels.
This paper proposes novel class enhancement losses to bypass the use of the background embedding during training, and simultaneously exploits the semantic relationship between text embeddings and mask proposals by ranking the similarity scores.
arXiv Detail & Related papers (2023-01-18T06:55:02Z)
- Robust One-shot Segmentation of Brain Tissues via Image-aligned Style
Transformation [13.430851964063534]
We propose a novel image-aligned style transformation to reinforce the dual-model iterative learning for one-shot segmentation of brain tissues.
Experimental results on two public datasets demonstrate 1) competitive segmentation performance of our method compared to the fully-supervised method, and 2) superior performance over other state-of-the-art methods, with an increase in average Dice of up to 4.67%.
arXiv Detail & Related papers (2022-11-26T09:14:01Z)
- POPCORN: Progressive Pseudo-labeling with Consistency Regularization and
Neighboring [3.4253416336476246]
Semi-supervised learning (SSL) uses unlabeled data to compensate for the scarcity of images and the lack of method generalization to unseen domains.
We propose POPCORN, a novel method combining consistency regularization and pseudo-labeling designed for image segmentation.
arXiv Detail & Related papers (2021-09-13T23:36:36Z)
- Flip Learning: Erase to Segment [65.84901344260277]
Weakly-supervised segmentation (WSS) can help reduce time-consuming and cumbersome manual annotation.
We propose a novel and general WSS framework called Flip Learning, which only needs the box annotation.
Our proposed approach achieves competitive performance and shows great potential to narrow the gap between fully-supervised and weakly-supervised learning.
arXiv Detail & Related papers (2021-08-02T09:56:10Z)
- Leveraging Auxiliary Tasks with Affinity Learning for Weakly Supervised
Semantic Segmentation [88.49669148290306]
We propose a novel weakly supervised multi-task framework called AuxSegNet to leverage saliency detection and multi-label image classification as auxiliary tasks.
Inspired by their similar structured semantics, we also propose to learn a cross-task global pixel-level affinity map from the saliency and segmentation representations.
The learned cross-task affinity can be used to refine saliency predictions and propagate CAM maps to provide improved pseudo labels for both tasks.
arXiv Detail & Related papers (2021-07-25T11:39:58Z)
- A Closer Look at Self-training for Zero-Label Semantic Segmentation [53.4488444382874]
Being able to segment unseen classes not observed during training is an important technical challenge in deep learning.
Prior zero-label semantic segmentation works approach this task by learning visual-semantic embeddings or generative models.
We propose a consistency regularizer to filter out noisy pseudo-labels by taking the intersections of the pseudo-labels generated from different augmentations of the same image (a minimal sketch of this intersection step is given after this list).
arXiv Detail & Related papers (2021-04-21T14:34:33Z)
- Uncertainty guided semi-supervised segmentation of retinal layers in OCT
images [4.046207281399144]
We propose a novel uncertainty-guided semi-supervised learning method based on a student-teacher approach for training the segmentation network.
The proposed framework is a key contribution and is applicable to biomedical image segmentation across various imaging modalities.
arXiv Detail & Related papers (2021-03-02T23:14:25Z)
- Grafit: Learning fine-grained image representations with coarse labels [114.17782143848315]
This paper tackles the problem of learning a finer representation than the one provided by training labels.
By jointly leveraging the coarse labels and the underlying fine-grained latent space, it significantly improves the accuracy of category-level retrieval methods.
arXiv Detail & Related papers (2020-11-25T19:06:26Z)
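Regarding the pseudo-label intersection idea mentioned in the self-training entry above, the following is a minimal sketch (my illustration, not that paper's code; the ignore index and function name are assumptions):

```python
# Keep a pseudo-label only where predictions from two augmented views of the same
# image agree (after mapping both back to the original geometry); mark the rest ignore.
import numpy as np

IGNORE_INDEX = 255  # hypothetical ignore label used during training

def intersect_pseudo_labels(pred_view_a, pred_view_b):
    """pred_view_*: (H, W) integer arrays of per-pixel class predictions."""
    agree = pred_view_a == pred_view_b
    return np.where(agree, pred_view_a, IGNORE_INDEX)
```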