BoxPolyp: Boost Generalized Polyp Segmentation Using Extra Coarse Bounding Box Annotations
- URL: http://arxiv.org/abs/2212.03498v1
- Date: Wed, 7 Dec 2022 07:45:50 GMT
- Title: BoxPolyp: Boost Generalized Polyp Segmentation Using Extra Coarse Bounding Box Annotations
- Authors: Jun Wei, Yiwen Hu, Guanbin Li, Shuguang Cui, S Kevin Zhou, Zhen Li
- Abstract summary: We propose a boosted BoxPolyp model to make full use of both accurate mask and extra coarse box annotations.
In practice, box annotations are applied to alleviate the over-fitting issue of previous polyp segmentation models.
Our proposed model outperforms previous state-of-the-art methods by a large margin.
- Score: 79.17754846553866
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate polyp segmentation is of great importance for colorectal cancer
diagnosis and treatment. However, due to the high cost of producing accurate
mask annotations, existing polyp segmentation methods suffer from severe data
shortage and impaired model generalization. Conversely, coarse polyp bounding
box annotations are more accessible. Thus, in this paper, we propose a boosted
BoxPolyp model to make full use of both accurate mask and extra coarse box
annotations. In practice, box annotations are applied to alleviate the
over-fitting issue of previous polyp segmentation models, generating
fine-grained polyp areas through an iteratively boosted segmentation model. To
achieve this goal, a fusion filter sampling (FFS) module is first proposed to
generate pixel-wise pseudo labels from box annotations with less noise, leading
to significant performance improvements. In addition, considering the appearance
consistency of the same polyp, an image consistency (IC) loss is designed. The
IC loss explicitly narrows the distance between features extracted by two
different networks, which improves the robustness of the model. Note that our
BoxPolyp is a plug-and-play model, which can be merged into any existing
backbone. Quantitative and qualitative experimental results on five challenging
benchmarks confirm that our proposed model outperforms previous
state-of-the-art methods by a large margin.
Related papers
- EPPS: Advanced Polyp Segmentation via Edge Information Injection and Selective Feature Decoupling [5.453850739960517]
We propose a novel model named Edge-Prioritized Polyp Segmentation (EPPS).
Specifically, we incorporate an Edge Mapping Engine (EME) aimed at accurately extracting the edges of polyps.
We also introduce a component called Selective Feature Decoupler (SFD) to suppress the influence of noise and extraneous features on the model.
arXiv Detail & Related papers (2024-05-20T07:41:04Z)
- ECC-PolypDet: Enhanced CenterNet with Contrastive Learning for Automatic Polyp Detection [88.4359020192429]
Existing methods either involve computationally expensive context aggregation or lack prior modeling of polyps, resulting in poor performance in challenging cases.
In this paper, we propose the Enhanced CenterNet with Contrastive Learning (ECC-PolypDet), a two-stage training & end-to-end inference framework.
It applies Box-assisted Contrastive Learning (BCL) during training to minimize the intra-class difference and maximize the inter-class difference between foreground polyps and backgrounds, enabling the model to capture concealed polyps.
In the fine-tuning stage, we introduce the IoU-guided Sample Re-weighting
arXiv Detail & Related papers (2024-01-10T07:03:41Z)
- ScribblePolyp: Scribble-Supervised Polyp Segmentation through Dual Consistency Alignment [9.488599217305625]
We introduce ScribblePolyp, a novel scribble-supervised polyp segmentation framework.
Unlike fully-supervised models, ScribblePolyp only requires the annotation of two lines (scribble labels) for each image.
Despite the coarse nature of scribble labels, which leave a substantial portion of pixels unlabeled, we propose a two-branch consistency alignment approach.
arXiv Detail & Related papers (2023-11-09T03:23:25Z)
- Lesion-aware Dynamic Kernel for Polyp Segmentation [49.63274623103663]
We propose a lesion-aware dynamic network (LDNet) for polyp segmentation.
It adopts a traditional U-shaped encoder-decoder structure incorporating a dynamic kernel generation and updating scheme.
This simple but effective scheme endows our model with powerful segmentation performance and generalization capability.
arXiv Detail & Related papers (2023-01-12T09:53:57Z)
- Stepwise Feature Fusion: Local Guides Global [14.394421688712052]
We propose a new state-of-the-art model for medical image segmentation, the SSFormer, which uses a pyramid Transformer encoder to improve the generalization ability of models.
Our proposed Progressive Locality Decoder can be adapted to the pyramid Transformer backbone to emphasize local features and attention dispersion.
arXiv Detail & Related papers (2022-03-07T10:36:38Z)
- Polyp-PVT: Polyp Segmentation with Pyramid Vision Transformers [124.01928050651466]
We propose a new type of polyp segmentation method, named Polyp-PVT.
The proposed model effectively suppresses noise in the features and significantly improves their expressive capabilities.
arXiv Detail & Related papers (2021-08-16T07:09:06Z)
- Automatic Polyp Segmentation via Multi-scale Subtraction Network [100.94922587360871]
In clinical practice, precise polyp segmentation provides important information in the early detection of colorectal cancer.
Most existing methods are based on a U-shaped structure and use element-wise addition or concatenation to progressively fuse different-level features in the decoder.
We propose a multi-scale subtraction network (MSNet) to segment polyps from colonoscopy images.
arXiv Detail & Related papers (2021-08-11T07:54:07Z)
- PraNet: Parallel Reverse Attention Network for Polyp Segmentation [155.93344756264824]
We propose a parallel reverse attention network (PraNet) for accurate polyp segmentation in colonoscopy images.
We first aggregate the features in high-level layers using a parallel partial decoder (PPD).
In addition, we mine the boundary cues using a reverse attention (RA) module, which is able to establish the relationship between areas and boundary cues.
arXiv Detail & Related papers (2020-06-13T08:13:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.