A Point-Neighborhood Learning Framework for Nasal Endoscope Image Segmentation
- URL: http://arxiv.org/abs/2405.20044v2
- Date: Wed, 04 Dec 2024 07:35:37 GMT
- Title: A Point-Neighborhood Learning Framework for Nasal Endoscope Image Segmentation
- Authors: Pengyu Jie, Wanquan Liu, Chenqiang Gao, Yihui Wen, Rui He, Weiping Wen, Pengcheng Li, Jintao Zhang, Deyu Meng
- Abstract summary: This paper proposes a simple yet efficient weakly semi-supervised method called the Point-Neighborhood Learning (PNL) framework. In PNL, we propose a point-neighborhood supervision loss and a pseudo-label scoring mechanism to explicitly guide the model's training. The proposed method significantly improves performance without increasing the parameters of the segmentation neural network.
- Score: 41.46663118326646
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lesion segmentation on nasal endoscopic images is challenging due to the complex features of lesions. Fully-supervised deep learning methods achieve promising performance with pixel-level annotations but impose a significant annotation burden on experts. Although weakly supervised or semi-supervised methods can reduce the labelling burden, their performance is still limited. Some weakly semi-supervised methods employ a novel annotation strategy that provides weak single-point annotations for the entire training set and pixel-level annotations for only a small subset of the data. However, existing weakly semi-supervised methods mine only the limited information of the point itself, ignoring its label property and the reliable information in its surroundings. This paper proposes a simple yet efficient weakly semi-supervised method called the Point-Neighborhood Learning (PNL) framework. PNL incorporates the surrounding area of the point, referred to as the point-neighborhood, into the learning process. In PNL, we propose a point-neighborhood supervision loss and a pseudo-label scoring mechanism to explicitly guide the model's training. We also propose a more reliable data augmentation scheme. The proposed method significantly improves performance without increasing the parameters of the segmentation neural network. Extensive experiments on the NPC-LES dataset demonstrate that PNL outperforms existing methods by a significant margin. Additional validation on colonoscopic polyp segmentation datasets confirms the generalizability of the proposed PNL.
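As a rough illustration only (not the authors' released code), a point-neighborhood supervision loss of the kind described in the abstract could look like the sketch below: pixels inside a disc around each annotated point inherit that point's label. The radius, tensor shapes, and function name are assumptions; the pseudo-label scoring mechanism is omitted.

```python
import torch
import torch.nn.functional as F

def point_neighborhood_loss(logits, points, labels, radius=5):
    """Toy sketch: every pixel inside a disc of `radius` around an
    annotated point is supervised with that point's 0/1 label.
    logits: (H, W) raw foreground scores; points: list of (y, x) tuples;
    labels: list of 0/1 point labels. All names are illustrative."""
    h, w = logits.shape
    ys = torch.arange(h, dtype=torch.float32).view(h, 1)
    xs = torch.arange(w, dtype=torch.float32).view(1, w)
    total = logits.new_zeros(())
    for (py, px), lab in zip(points, labels):
        # Binary mask of the point-neighborhood (a disc around (py, px)).
        mask = ((ys - py) ** 2 + (xs - px) ** 2) <= radius ** 2
        target = torch.full_like(logits, float(lab))
        bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
        total = total + bce[mask].mean()
    return total / max(len(points), 1)
```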
Related papers
- DEPT: Deep Extreme Point Tracing for Ultrasound Image Segmentation [4.840123311457789]
We introduce Deep Extreme Point Tracing (DEPT), integrated with a Feature-Guided Extreme Point Masking (FGEPM) algorithm, for ultrasound image segmentation.
Our method generates pseudo labels by identifying the lowest-cost path that connects all extreme points on a cost matrix derived from the feature map.
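For intuition, the lowest-cost-path step can be sketched with plain Dijkstra on a 2D cost grid, as below; DEPT connects all extreme points and derives the cost from network features, whereas this sketch only links one pair on a given array (the 4-connectivity and cost definition are assumptions).

```python
import heapq
import numpy as np

def lowest_cost_path(cost, start, goal):
    """Dijkstra on a 2D cost grid: minimum-cost 4-connected path from
    `start` to `goal`, both given as (row, col) tuples."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    heap = [(cost[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and d + cost[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + cost[nr, nc]
                prev[(nr, nc)] = (r, c)
                heapq.heappush(heap, (dist[nr, nc], (nr, nc)))
    # Walk back from the goal to recover the path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```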
arXiv Detail & Related papers (2025-03-19T14:32:14Z)
- EAUWSeg: Eliminating annotation uncertainty in weakly-supervised medical image segmentation [4.334357692599945]
Weakly-supervised medical image segmentation is gaining traction as it requires only rough annotations rather than accurate pixel-to-pixel labels.
We propose a novel weak annotation method coupled with its learning framework EAUWSeg to eliminate the annotation uncertainty.
We show that EAUWSeg outperforms existing weakly-supervised segmentation methods.
arXiv Detail & Related papers (2025-01-03T06:21:02Z)
- SP$^3$: Superpixel-propagated pseudo-label learning for weakly semi-supervised medical image segmentation [10.127428696255848]
A superpixel-propagated pseudo-label learning method is proposed to address the lack of supervisory information in weakly semi-supervised segmentation.
Our method achieves state-of-the-art performance on both tumor and organ segmentation datasets under the WSSS setting.
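The propagation idea can be sketched minimally as follows: given a precomputed superpixel map, every pixel in the superpixel containing an annotated point inherits that point's class (the ignore value and function name are assumptions, and the paper's full method adds further steps not shown here).

```python
import numpy as np

def propagate_points_to_superpixels(superpixels, points, point_labels, ignore=255):
    """Minimal sketch of superpixel-propagated pseudo-labels: pixels in the
    superpixel that contains an annotated point inherit that point's class;
    all other pixels stay `ignore`.
    superpixels: (H, W) int map (e.g. from SLIC); points: list of (y, x)."""
    pseudo = np.full(superpixels.shape, ignore, dtype=np.int64)
    for (py, px), cls in zip(points, point_labels):
        pseudo[superpixels == superpixels[py, px]] = cls
    return pseudo
```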
arXiv Detail & Related papers (2024-11-18T15:14:36Z)
- Annotation-Efficient Polyp Segmentation via Active Learning [45.59503015577479]
We propose a deep active learning framework for annotation-efficient polyp segmentation.
In practice, we measure the uncertainty of each sample by examining the similarity between the features masked by the predicted polyp region and those of the background area.
We show that our proposed method achieves state-of-the-art performance compared with other competitors on both a public dataset and a large-scale in-house dataset.
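One plausible reading of this uncertainty measure is sketched below: average the feature map over the predicted polyp region and over the background, then take their cosine similarity (high similarity suggesting a poorly separated, hence uncertain, prediction). The threshold and names are assumptions, not the paper's exact formulation.

```python
import numpy as np

def masked_similarity_uncertainty(features, prob, thresh=0.5, eps=1e-8):
    """Cosine similarity between the polyp-masked and background-masked
    mean features, used here as an uncertainty proxy.
    features: (C, H, W) feature map; prob: (H, W) predicted polyp probability."""
    fg = prob >= thresh
    bg = ~fg
    if fg.sum() == 0 or bg.sum() == 0:
        return 1.0  # degenerate prediction: treat as maximally uncertain
    f_fg = features[:, fg].mean(axis=1)
    f_bg = features[:, bg].mean(axis=1)
    cos = f_fg @ f_bg / (np.linalg.norm(f_fg) * np.linalg.norm(f_bg) + eps)
    return float(cos)
```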
arXiv Detail & Related papers (2024-03-21T12:25:17Z)
- Learning Semantic Segmentation with Query Points Supervision on Aerial Images [57.09251327650334]
We present a weakly supervised learning algorithm that trains semantic segmentation models from query point annotations.
Our proposed approach performs accurate semantic segmentation and improves efficiency by significantly reducing the cost and time required for manual annotation.
arXiv Detail & Related papers (2023-09-11T14:32:04Z)
- Weakly Supervised 3D Instance Segmentation without Instance-level Annotations [57.615325809883636]
3D semantic scene understanding tasks have achieved great success with the emergence of deep learning, but often require a huge amount of manually annotated training data.
We propose the first weakly-supervised 3D instance segmentation method that only requires categorical semantic labels as supervision.
By generating pseudo instance labels from categorical semantic labels, our designed approach can also assist existing methods for learning 3D instance segmentation at reduced annotation cost.
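A common way to illustrate turning semantic labels into pseudo instance labels is to cluster same-class points spatially; the sketch below uses DBSCAN for this, which is an assumption for illustration and not necessarily the authors' procedure.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def pseudo_instances_from_semantics(xyz, sem_labels, eps=0.3, min_samples=20):
    """Illustrative sketch: split each semantic class into spatial clusters
    and treat every cluster as one pseudo instance.
    xyz: (N, 3) point coordinates; sem_labels: (N,) class ids."""
    inst = np.full(len(xyz), -1, dtype=np.int64)
    next_id = 0
    for cls in np.unique(sem_labels):
        idx = np.where(sem_labels == cls)[0]
        clusters = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(xyz[idx])
        for c in np.unique(clusters):
            if c == -1:          # DBSCAN noise points get no instance id
                continue
            inst[idx[clusters == c]] = next_id
            next_id += 1
    return inst
```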
arXiv Detail & Related papers (2023-08-03T12:30:52Z)
- Cyclic Learning: Bridging Image-level Labels and Nuclei Instance Segmentation [19.526504045149895]
We propose a novel image-level weakly supervised method, called cyclic learning, to address nuclei instance segmentation with only image-level labels.
Cyclic learning comprises a front-end classification task and a back-end semi-supervised instance segmentation task.
Experiments on three datasets demonstrate the good generality of our method, which outperforms other image-level weakly supervised methods for nuclei instance segmentation.
arXiv Detail & Related papers (2023-06-05T08:32:12Z)
- Weakly Supervised Semantic Segmentation for Large-Scale Point Cloud [69.36717778451667]
Existing methods for large-scale point cloud semantic segmentation require expensive, tedious and error-prone manual point-wise annotations.
We propose an effective weakly supervised method containing two components to solve the problem.
The experimental results show large gains over existing weakly supervised methods and results comparable to fully supervised methods.
arXiv Detail & Related papers (2022-12-09T09:42:26Z)
- Pointly-Supervised Panoptic Segmentation [106.68888377104886]
We propose a new approach to applying point-level annotations for weakly-supervised panoptic segmentation.
Instead of the dense pixel-level labels used by fully supervised methods, point-level labels only provide a single point for each target as supervision.
We formulate the problem in an end-to-end framework by simultaneously generating panoptic pseudo-masks from point-level labels and learning from them.
arXiv Detail & Related papers (2022-10-25T12:03:51Z)
- PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information within each patch to obtain sufficient gradient feedback, which helps the discriminator converge to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z)
- Point-Teaching: Weakly Semi-Supervised Object Detection with Point Annotations [81.02347863372364]
We present Point-Teaching, a weakly semi-supervised object detection framework.
Specifically, we propose a Hungarian-based point matching method to generate pseudo labels for point annotated images.
We propose a simple-yet-effective data augmentation, termed point-guided copy-paste, to reduce the impact of the unmatched points.
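The Hungarian-based point matching step can be sketched roughly with scipy's linear sum assignment; the cost used here (distance from each annotated point to each predicted box centre) is an assumption chosen for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_points_to_boxes(points, boxes):
    """Hungarian matching between annotated points and predicted boxes;
    matched boxes could then serve as pseudo labels for the points.
    points: (P, 2) as (x, y); boxes: (B, 4) as (x1, y1, x2, y2)."""
    centres = np.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                        (boxes[:, 1] + boxes[:, 3]) / 2], axis=1)
    cost = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=-1)
    row, col = linear_sum_assignment(cost)   # minimises total point-box distance
    return list(zip(row.tolist(), col.tolist()))
```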
arXiv Detail & Related papers (2022-06-01T07:04:38Z)
- Guided Point Contrastive Learning for Semi-supervised Point Cloud Semantic Segmentation [90.2445084743881]
We present a method for semi-supervised point cloud semantic segmentation that exploits unlabeled point clouds during training to boost model performance.
Inspired by the recent contrastive loss in self-supervised tasks, we propose the guided point contrastive loss to enhance the feature representation and model generalization ability.
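The general shape of such a loss can be sketched as an InfoNCE-style objective in which point features sharing a (pseudo) label form positive pairs; reading "guided" this way is an assumption, and the paper's exact guidance may differ.

```python
import torch
import torch.nn.functional as F

def guided_point_contrastive_loss(feats, labels, temperature=0.1):
    """Rough sketch: point features with the same (pseudo) label are pulled
    together and other pairs pushed apart.
    feats: (N, D) point features; labels: (N,) int labels."""
    feats = F.normalize(feats, dim=1)
    sim = feats @ feats.t() / temperature                 # (N, N) similarities
    pos = (labels[:, None] == labels[None, :]).float()
    pos.fill_diagonal_(0)                                 # exclude self-pairs
    logits = sim - 1e9 * torch.eye(len(feats), device=feats.device)
    log_prob = F.log_softmax(logits, dim=1)
    # Average log-likelihood of the positive pairs for each anchor point.
    denom = pos.sum(dim=1).clamp(min=1)
    loss = -(pos * log_prob).sum(dim=1) / denom
    return loss.mean()
```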
arXiv Detail & Related papers (2021-10-15T16:38:54Z)
- WSSOD: A New Pipeline for Weakly- and Semi-Supervised Object Detection [75.80075054706079]
We propose a weakly- and semi-supervised object detection framework (WSSOD).
An agent detector is first trained on a joint dataset and then used to predict pseudo bounding boxes on weakly-annotated images.
The proposed framework demonstrates remarkable performance on the PASCAL-VOC and MSCOCO benchmarks, achieving results comparable to those obtained in fully-supervised settings.
arXiv Detail & Related papers (2021-05-21T11:58:50Z)
- Cyclic Label Propagation for Graph Semi-supervised Learning [52.102251202186025]
We introduce a novel framework for graph semi-supervised learning called CycProp.
CycProp integrates GNNs into the process of label propagation in a cyclic and mutually reinforcing manner.
In particular, the proposed CycProp updates the node embeddings learned by the GNN module with information augmented by label propagation.
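The label-propagation side of such a scheme is a standard iteration and can be sketched as below; how CycProp couples it with GNN training is only described at a high level in the summary, so that coupling is omitted and the mixing coefficient is an assumption.

```python
import numpy as np

def propagate_labels(adj, y_init, alpha=0.9, iters=20):
    """Standard label-propagation loop: repeatedly diffuse labels over the
    row-normalised graph and mix with the initial labels.
    adj: (N, N) adjacency; y_init: (N, C) one-hot labels (zeros if unknown)."""
    deg = adj.sum(axis=1, keepdims=True)
    p = adj / np.maximum(deg, 1e-12)      # row-stochastic transition matrix
    y = y_init.copy()
    for _ in range(iters):
        y = alpha * (p @ y) + (1 - alpha) * y_init
    return y
```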
arXiv Detail & Related papers (2020-11-24T02:55:40Z)
- Deep Semi-supervised Knowledge Distillation for Overlapping Cervical Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
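At the core of any mean-teacher framework is an exponential-moving-average update of the teacher weights, sketched below; the momentum value is an assumption, and the paper's mask guidance and perturbation-sensitive sample mining are not shown.

```python
import torch

@torch.no_grad()
def update_teacher(teacher, student, momentum=0.99):
    """EMA update used by mean-teacher methods: teacher weights drift
    slowly towards the student's after each training step."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1 - momentum)
```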
arXiv Detail & Related papers (2020-07-21T13:27:09Z)
- Few-Shot Semantic Segmentation Augmented with Image-Level Weak Annotations [23.02986307143718]
Recent progress in few-shot semantic segmentation tackles the task with only a few pixel-level annotated examples.
Our key idea is to learn a better prototype representation of the class by fusing the knowledge from the image-level labeled data.
We propose a new framework, called PAIA, to learn the class prototype representation in a metric space by integrating image-level annotations.
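A prototype built this way is often illustrated as masked average pooling over the pixel-labelled support image, blended with features from image-level labelled images; the blending rule and weight below are assumptions, not PAIA's exact formulation.

```python
import torch
import torch.nn.functional as F

def fused_prototype(support_feat, support_mask, weak_feats, beta=0.5):
    """Sketch of a fused class prototype: masked average pooling over the
    support image, mixed with pooled features of image-level labelled images.
    support_feat: (C, H, W); support_mask: (H, W) in {0, 1};
    weak_feats: (K, C) pooled features of K image-level labelled images."""
    mask = support_mask.float()
    proto_pixel = (support_feat * mask).sum(dim=(1, 2)) / mask.sum().clamp(min=1)
    proto_weak = weak_feats.mean(dim=0)
    return beta * proto_pixel + (1 - beta) * proto_weak

def predict_by_prototype(query_feat, prototype):
    """Cosine similarity between each query pixel feature and the prototype
    gives an (H, W) score map, the usual metric-space prediction."""
    q = F.normalize(query_feat, dim=0)                  # (C, H, W)
    p = F.normalize(prototype, dim=0)                   # (C,)
    return torch.einsum("chw,c->hw", q, p)
```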
arXiv Detail & Related papers (2020-07-03T04:58:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.