Flip Learning: Erase to Segment
- URL: http://arxiv.org/abs/2108.00752v1
- Date: Mon, 2 Aug 2021 09:56:10 GMT
- Title: Flip Learning: Erase to Segment
- Authors: Yuhao Huang, Xin Yang, Yuxin Zou, Chaoyu Chen, Jian Wang, Haoran Dou,
Nishant Ravikumar, Alejandro F Frangi, Jianqiao Zhou, Dong Ni
- Abstract summary: Weakly-supervised segmentation (WSS) can help reduce time-consuming and cumbersome manual annotation.
We propose a novel and general WSS framework called Flip Learning, which only needs the box annotation.
Our proposed approach achieves competitive performance and shows great potential to narrow the gap between fully-supervised and weakly-supervised learning.
- Score: 65.84901344260277
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nodule segmentation from breast ultrasound images is challenging yet
essential for diagnosis. Weakly-supervised segmentation (WSS) can help reduce
time-consuming and cumbersome manual annotation. Unlike existing
weakly-supervised approaches, in this study we propose a novel and general WSS
framework called Flip Learning, which needs only the box annotation.
Specifically, the target inside the label box is gradually erased until the
classification tag flips, and the erased region is then taken as the
segmentation result. Our contribution is three-fold. First, our proposed
approach erases at the superpixel level using a Multi-agent Reinforcement
Learning framework to exploit prior boundary knowledge and accelerate the
learning process. Second, we design two rewards, a classification-score reward
and an intensity-distribution reward, to avoid under- and over-segmentation,
respectively. Third, we adopt a coarse-to-fine learning strategy to reduce
residual errors and improve the segmentation performance. Extensively validated
on a large dataset, our proposed approach achieves competitive performance and
shows great potential to narrow the gap between fully-supervised and
weakly-supervised learning.
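To make the erase-to-segment idea concrete, here is a deliberately simplified, single-agent greedy sketch: superpixels inside the label box are erased one at a time until the classifier no longer detects a nodule, and the erased area is returned as the mask. The paper instead uses a multi-agent reinforcement-learning erasing policy with classification-score and intensity-distribution rewards plus coarse-to-fine refinement; `nodule_score`, the superpixel map, and the median-fill erasing below are illustrative placeholders, not the authors' implementation.

```python
import numpy as np

def flip_learning_sketch(image, superpixels, nodule_score, max_iters=200, stop_score=0.05):
    """Greedy, single-agent sketch of 'erase to segment' (hypothetical simplification).

    image        : HxW float array, the region inside the annotation box.
    superpixels  : HxW int array of superpixel labels for the same region.
    nodule_score : callable(img) -> float in [0, 1], probability that a nodule is
                   still present (a stand-in for the paper's classifier).
    Returns a boolean HxW mask of erased pixels, used as the segmentation.
    """
    erased = np.zeros(image.shape, dtype=bool)
    work = image.copy()
    background = np.median(image)          # crude "erase" filler; the paper learns this behaviour
    remaining = list(np.unique(superpixels))

    for _ in range(max_iters):
        score_before = nodule_score(work)
        if score_before < stop_score:      # classification tag has flipped to "no nodule"
            break
        # Try erasing each remaining superpixel; keep the one that lowers the score most
        best_drop, best_sp = 0.0, None
        for sp in remaining:
            trial = work.copy()
            trial[superpixels == sp] = background
            drop = score_before - nodule_score(trial)
            if drop > best_drop:
                best_drop, best_sp = drop, sp
        if best_sp is None:                # no superpixel reduces the score further
            break
        work[superpixels == best_sp] = background
        erased |= (superpixels == best_sp)
        remaining.remove(best_sp)

    return erased
```

A greedy sweep like this ignores the boundary-aware cooperation and the intensity-distribution reward that keep the paper's agents from under- or over-erasing; it only illustrates the "flip the classification tag" stopping criterion.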
Related papers
- Class-Imbalanced Semi-Supervised Learning for Large-Scale Point Cloud
Semantic Segmentation via Decoupling Optimization [64.36097398869774]
Semi-supervised learning (SSL) has been an active research topic for large-scale 3D scene understanding.
The existing SSL-based methods suffer from severe training bias due to class imbalance and long-tail distributions of the point cloud data.
We introduce a new decoupling optimization framework, which disentangles feature representation learning and the classifier in an alternating optimization manner to shift the biased decision boundary effectively.
arXiv Detail & Related papers (2024-01-13T04:16:40Z)
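A minimal sketch of the alternating (decoupled) optimization idea described above, assuming a generic PyTorch encoder/classifier split; the paper's class re-balancing and pseudo-label handling are not reproduced, and all names here are placeholders.

```python
import torch

def alternating_step(encoder, classifier, batch, enc_opt, cls_opt, update_encoder):
    """One alternating-optimization step (generic sketch, not the paper's exact recipe).

    encoder/classifier : torch.nn.Module pieces of a point-cloud segmentation net.
    batch              : (points, labels) tensors; labels may be pseudo-labels.
    update_encoder     : if True, train the feature extractor with the classifier frozen,
                         otherwise train only the classifier head on frozen features.
    """
    points, labels = batch
    criterion = torch.nn.CrossEntropyLoss()

    if update_encoder:
        classifier.requires_grad_(False)
        logits = classifier(encoder(points))
        loss = criterion(logits, labels)
        enc_opt.zero_grad()
        loss.backward()
        enc_opt.step()
        classifier.requires_grad_(True)
    else:
        with torch.no_grad():
            feats = encoder(points)          # features frozen: only the decision boundary moves
        logits = classifier(feats)
        loss = criterion(logits, labels)
        cls_opt.zero_grad()
        loss.backward()
        cls_opt.step()
    return loss.item()
```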
- 2D Feature Distillation for Weakly- and Semi-Supervised 3D Semantic Segmentation [92.17700318483745]
We propose an image-guidance network (IGNet) which builds upon the idea of distilling high level feature information from a domain adapted synthetically trained 2D semantic segmentation network.
IGNet achieves state-of-the-art results for weakly-supervised LiDAR semantic segmentation on ScribbleKITTI, reaching up to 98% of the performance of fully supervised training while using only 8% labeled points.
arXiv Detail & Related papers (2023-11-27T07:57:29Z)
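A rough illustration of 2D-to-3D feature distillation, assuming 3D point features have already been matched to the pixels of a 2D teacher network (the projection step is omitted); the cosine form below is a common generic choice, not necessarily IGNet's exact loss.

```python
import torch
import torch.nn.functional as F

def feature_distillation_loss(feats_3d, feats_2d):
    """Distillation loss between 3D point features and matched 2D image features.

    feats_3d : (N, C) features from the LiDAR/3D segmentation student.
    feats_2d : (N, C) features sampled from a 2D semantic-segmentation teacher at the
               pixels the N points project to.
    """
    feats_2d = feats_2d.detach()                      # the teacher only provides targets
    cos = F.cosine_similarity(feats_3d, feats_2d, dim=-1)
    return (1.0 - cos).mean()
```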
- SegMatch: A semi-supervised learning method for surgical instrument segmentation [10.223709180135419]
We propose SegMatch, a semi-supervised learning method that reduces the need for expensive annotation of laparoscopic and robotic surgical images.
SegMatch builds on FixMatch, a widely used semi-supervised classification pipeline combining consistency regularization and pseudo-labelling.
Our results demonstrate that adding unlabelled data for training purposes allows us to surpass the performance of fully supervised approaches.
arXiv Detail & Related papers (2023-08-09T21:30:18Z)
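For reference, a minimal FixMatch-style unlabelled loss adapted naively to dense segmentation, since SegMatch builds on this consistency-plus-pseudo-labelling objective; the confidence threshold and the assumption of spatially aligned augmented views are illustrative simplifications, not SegMatch's exact implementation.

```python
import torch
import torch.nn.functional as F

def fixmatch_unsup_loss(model, weak_img, strong_img, threshold=0.95):
    """FixMatch-style unlabelled loss for segmentation (generic sketch).

    weak_img / strong_img : weakly and strongly augmented views of the same image,
                            assumed spatially aligned (geometric augmentations omitted).
    A pseudo-label is taken from the weak view and enforced on the strong view only
    where the model is confident.
    """
    with torch.no_grad():
        probs = torch.softmax(model(weak_img), dim=1)     # (B, C, H, W)
        conf, pseudo = probs.max(dim=1)                   # per-pixel confidence and label
        mask = (conf >= threshold).float()                # keep confident pixels only

    logits_strong = model(strong_img)
    loss = F.cross_entropy(logits_strong, pseudo, reduction="none")  # (B, H, W)
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)
```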
- Scribble-supervised Cell Segmentation Using Multiscale Contrastive Regularization [9.849498498869258]
Scribble2Label (S2L) demonstrated that using only a handful of scribbles with self-supervised learning can generate accurate segmentation results without full annotation.
In this work, we employ a novel multiscale contrastive regularization term for S2L.
The main idea is to extract features from intermediate layers of the neural network for contrastive loss so that structures at various scales can be effectively separated.
arXiv Detail & Related papers (2023-06-25T06:00:33Z)
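One plausible way to realise a multiscale contrastive regularizer over intermediate-layer features is sketched below (a pooled InfoNCE term per scale); this is an assumption-laden simplification, not necessarily the exact loss used in the paper.

```python
import torch
import torch.nn.functional as F

def multiscale_contrastive_loss(feats_view1, feats_view2, temperature=0.1):
    """Contrastive regularization applied at several intermediate scales (generic sketch).

    feats_view1 / feats_view2 : lists of (B, C_s, H_s, W_s) feature maps taken from
                                intermediate layers for two augmented views of the batch.
    At each scale, globally pooled embeddings of the same image are pulled together.
    """
    total = 0.0
    for f1, f2 in zip(feats_view1, feats_view2):
        z1 = F.normalize(f1.mean(dim=(2, 3)), dim=1)     # (B, C_s) pooled, L2-normalised
        z2 = F.normalize(f2.mean(dim=(2, 3)), dim=1)
        logits = z1 @ z2.t() / temperature               # (B, B) similarity matrix
        targets = torch.arange(z1.size(0), device=z1.device)
        total = total + F.cross_entropy(logits, targets)
    return total / len(feats_view1)
```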
- Few-Shot Point Cloud Semantic Segmentation via Contrastive Self-Supervision and Multi-Resolution Attention [6.350163959194903]
We propose a contrastive self-supervision framework for few-shot learning pre-training.
Specifically, we implement a novel contrastive learning approach with a learnable augmentor for a 3D point cloud.
We develop a multi-resolution attention module using both the nearest and farthest points to extract the local and global point information more effectively.
arXiv Detail & Related papers (2023-02-21T07:59:31Z)
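A small sketch of gathering each point's nearest and farthest neighbours, the two point groups a multi-resolution attention module could attend over for local and global context; the attention itself and the learnable augmentor are omitted, and this function is purely illustrative rather than the paper's module.

```python
import torch

def nearest_and_farthest(points, k=16):
    """Return indices of each point's k nearest and k farthest neighbours (sketch).

    points : (B, N, 3) point-cloud coordinates.
    Nearest neighbours capture local detail; farthest points give coarse global context.
    """
    dists = torch.cdist(points, points, p=2)                       # (B, N, N) pairwise distances
    # k+1 nearest, then drop column 0 (each point's zero distance to itself)
    near_idx = dists.topk(k + 1, dim=-1, largest=False).indices[..., 1:]
    far_idx = dists.topk(k, dim=-1, largest=True).indices
    return near_idx, far_idx
```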
- A Survey on Label-efficient Deep Segmentation: Bridging the Gap between Weak Supervision and Dense Prediction [115.9169213834476]
This paper offers a comprehensive review on label-efficient segmentation methods.
We first develop a taxonomy to organize these methods according to the supervision provided by different types of weak labels.
Next, we summarize the existing label-efficient segmentation methods from a unified perspective.
arXiv Detail & Related papers (2022-07-04T06:21:01Z)
- Hypernet-Ensemble Learning of Segmentation Probability for Medical Image Segmentation with Ambiguous Labels [8.841870931360585]
Deep learning approaches are notoriously overconfident in their predictions, producing highly polarized label probabilities.
This is often undesirable for applications where labels are inherently ambiguous, even in human annotations.
We propose novel methods to improve the segmentation probability estimation without sacrificing performance in a real-world scenario.
arXiv Detail & Related papers (2021-12-13T14:24:53Z)
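As a generic illustration of softening polarized segmentation outputs, the sketch below averages per-pixel softmax maps over an ensemble; the paper generates its ensemble with hypernetworks, which is not reproduced here, and plain independently trained models are used as a stand-in.

```python
import torch

def ensemble_probability_map(models, image):
    """Average per-pixel softmax maps over an ensemble of segmentation networks.

    models : iterable of segmentation nets with the same output classes.
    image  : (B, C, H, W) input batch.
    Returns a (B, K, H, W) probability map that is less polarized than any single
    member's output and can be read as a per-pixel uncertainty estimate.
    """
    probs = []
    with torch.no_grad():
        for m in models:
            m.eval()
            probs.append(torch.softmax(m(image), dim=1))
    return torch.stack(probs, dim=0).mean(dim=0)
```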
- A Simple Baseline for Semi-supervised Semantic Segmentation with Strong Data Augmentation [74.8791451327354]
We propose a simple yet effective semi-supervised learning framework for semantic segmentation.
A set of simple design and training techniques can collectively improve the performance of semi-supervised semantic segmentation significantly.
Our method achieves state-of-the-art results in the semi-supervised settings on the Cityscapes and Pascal VOC datasets.
arXiv Detail & Related papers (2021-04-15T06:01:39Z)
- Deep Semi-supervised Knowledge Distillation for Overlapping Cervical Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z)
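For context, the core of a Mean Teacher framework is an exponential-moving-average copy of the student that supplies targets on unlabelled images; a minimal EMA update is sketched below, while the mask guidance and perturbation-sensitive sample mining described in the paper are not shown.

```python
import torch

@torch.no_grad()
def update_teacher(student, teacher, ema_decay=0.99):
    """Exponential-moving-average teacher update used in Mean Teacher frameworks.

    After each student optimization step, the teacher's weights are moved toward the
    student's; the smoothed teacher then generates pseudo-targets on unlabelled data.
    """
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(ema_decay).add_(s_param, alpha=1.0 - ema_decay)
```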