ScribblePolyp: Scribble-Supervised Polyp Segmentation through Dual
Consistency Alignment
- URL: http://arxiv.org/abs/2311.05122v1
- Date: Thu, 9 Nov 2023 03:23:25 GMT
- Title: ScribblePolyp: Scribble-Supervised Polyp Segmentation through Dual
Consistency Alignment
- Authors: Zixun Zhang, Yuncheng Jiang, Jun Wei, Hannah Cui, Zhen Li
- Abstract summary: We introduce ScribblePolyp, a novel scribble-supervised polyp segmentation framework.
Unlike fully-supervised models, ScribblePolyp only requires the annotation of two lines (scribble labels) for each image.
Despite the coarse nature of scribble labels, which leave a substantial portion of pixels unlabeled, we propose a two-branch consistency alignment approach.
- Score: 9.488599217305625
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic polyp segmentation models play a pivotal role in the clinical
diagnosis of gastrointestinal diseases. In previous studies, most methods
relied on fully supervised approaches, necessitating pixel-level annotations
for model training. However, creating pixel-level annotations is both expensive
and time-consuming, which hinders the development of generalizable models.
In response to this challenge, we introduce ScribblePolyp, a novel
scribble-supervised polyp segmentation framework. Unlike fully-supervised
models, ScribblePolyp only requires the annotation of two lines (scribble
labels) for each image, significantly reducing the labeling cost. Despite the
coarse nature of scribble labels, which leave a substantial portion of pixels
unlabeled, we propose a two-branch consistency alignment approach to provide
supervision for these unlabeled pixels. The first branch employs transformation
consistency alignment to narrow the gap between predictions under different
transformations of the same input image. The second branch leverages affinity
propagation to refine predictions into a soft version, extending additional
supervision to unlabeled pixels. In summary, ScribblePolyp is an efficient
model that does not rely on teacher models or moving average pseudo labels
during training. Extensive experiments on the SUN-SEG dataset underscore the
effectiveness of ScribblePolyp, achieving a Dice score of 0.8155, with the
potential for a 1.8% improvement in the Dice score through a straightforward
self-training strategy.
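For intuition, the supervision scheme described above can be summarized in a short sketch. The snippet below is only an illustrative assumption of how the three signals could be combined: partial cross-entropy on the scribbled pixels, transformation consistency between two views of the same image, and an affinity-propagated soft target for unlabeled pixels. The specific transform (a horizontal flip), the local-average affinity step, and all names and loss weights are hypothetical placeholders, not the authors' released code.

```python
# Minimal sketch of scribble supervision with dual consistency alignment.
# All concrete choices below are assumptions made for illustration.
import torch
import torch.nn.functional as F

IGNORE = 255  # scribble labels leave most pixels unlabeled


def scribble_loss(logits, scribbles):
    """Partial cross-entropy computed only on the few scribbled pixels."""
    return F.cross_entropy(logits, scribbles, ignore_index=IGNORE)


def transformation_consistency(model, image):
    """Branch 1: predictions under different transformations of the same
    input should agree (a horizontal flip is assumed as the transform)."""
    p = F.softmax(model(image), dim=1)
    p_flip = F.softmax(model(torch.flip(image, dims=[-1])), dim=1)
    return F.mse_loss(p, torch.flip(p_flip, dims=[-1]))


def affinity_soft_labels(probs, kernel_size=3):
    """Branch 2: propagate predictions among neighbouring pixels to obtain a
    softened target (assumed here to be a simple local average)."""
    return F.avg_pool2d(probs, kernel_size, stride=1, padding=kernel_size // 2)


def training_step(model, image, scribbles, w1=1.0, w2=0.1, w3=0.1):
    logits = model(image)
    probs = F.softmax(logits, dim=1)
    loss = w1 * scribble_loss(logits, scribbles)
    loss = loss + w2 * transformation_consistency(model, image)
    soft = affinity_soft_labels(probs).detach()   # soft targets, no gradient
    loss = loss + w3 * F.mse_loss(probs, soft)    # align predictions with the soft version
    return loss
```

In this reading, only the scribbled pixels receive hard supervision, while the two consistency terms supply the remaining signal for unlabeled pixels without any teacher model or moving-average pseudo labels, consistent with the abstract's description.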
Related papers
- Semi-Supervised Coupled Thin-Plate Spline Model for Rotation Correction and Beyond [84.56978780892783]
We propose CoupledTPS, which iteratively couples multiple TPS with limited control points into a more flexible and powerful transformation.
In light of the laborious annotation cost, we develop a semi-supervised learning scheme to improve warping quality by exploiting unlabeled data.
Experiments demonstrate the superiority and universality of CoupledTPS over the existing state-of-the-art solutions for rotation correction.
arXiv Detail & Related papers (2024-01-24T13:03:28Z) - Lesion-aware Dynamic Kernel for Polyp Segmentation [49.63274623103663]
We propose a lesion-aware dynamic network (LDNet) for polyp segmentation.
It is a traditional U-shaped encoder-decoder structure that incorporates a dynamic kernel generation and updating scheme.
This simple but effective scheme endows our model with powerful segmentation performance and generalization capability.
arXiv Detail & Related papers (2023-01-12T09:53:57Z) - BoxPolyp: Boost Generalized Polyp Segmentation Using Extra Coarse
Bounding Box Annotations [79.17754846553866]
We propose a boosted BoxPolyp model to make full use of both accurate mask and extra coarse box annotations.
In practice, box annotations are applied to alleviate the over-fitting issue of previous polyp segmentation models.
Our proposed model outperforms previous state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2022-12-07T07:45:50Z) - Towards Automated Polyp Segmentation Using Weakly- and Semi-Supervised
Learning and Deformable Transformers [8.01814397869811]
Polyp segmentation is a crucial step towards computer-aided diagnosis of colorectal cancer.
Most of the polyp segmentation methods require pixel-wise annotated datasets.
We propose a novel framework that can be trained using only weakly annotated images along with exploiting unlabeled images.
arXiv Detail & Related papers (2022-11-21T20:44:12Z) - Pointly-Supervised Panoptic Segmentation [106.68888377104886]
We propose a new approach to applying point-level annotations for weakly-supervised panoptic segmentation.
Instead of the dense pixel-level labels used by fully supervised methods, point-level labels only provide a single point for each target as supervision.
We formulate the problem in an end-to-end framework by simultaneously generating panoptic pseudo-masks from point-level labels and learning from them.
arXiv Detail & Related papers (2022-10-25T12:03:51Z) - Mixed Supervision Learning for Whole Slide Image Classification [88.31842052998319]
We propose a mixed supervision learning framework for super high-resolution images.
During the patch training stage, this framework can make use of coarse image-level labels to refine self-supervised learning.
A comprehensive strategy is proposed to suppress pixel-level false positives and false negatives.
arXiv Detail & Related papers (2021-07-02T09:46:06Z) - Universal Weakly Supervised Segmentation by Pixel-to-Segment Contrastive
Learning [28.498782661888775]
We formulate weakly supervised segmentation as a semi-supervised metric learning problem.
We propose four types of contrastive relationships between pixels and segments in the feature space.
We deliver a universal weakly supervised segmenter with significant gains on Pascal VOC and DensePose.
arXiv Detail & Related papers (2021-05-03T15:49:01Z) - Semi-Supervised Domain Adaptation with Prototypical Alignment and
Consistency Learning [86.6929930921905]
This paper studies how much having a few labeled target samples can help address domain shifts.
To explore the full potential of landmarks, we incorporate a prototypical alignment (PA) module which calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
arXiv Detail & Related papers (2021-04-19T08:46:08Z) - Efficient Full Image Interactive Segmentation by Leveraging Within-image
Appearance Similarity [39.17599924322882]
We propose a new approach to interactive full-image semantic segmentation.
We leverage a key observation: propagation from labeled to unlabeled pixels does not necessarily require class-specific knowledge.
We build on this observation and propose an approach capable of jointly propagating pixel labels from multiple classes.
arXiv Detail & Related papers (2020-07-16T08:21:59Z)