Enhancing Point Annotations with Superpixel and Confidence Learning
Guided for Improving Semi-Supervised OCT Fluid Segmentation
- URL: http://arxiv.org/abs/2306.02582v3
- Date: Thu, 30 Nov 2023 12:10:34 GMT
- Title: Enhancing Point Annotations with Superpixel and Confidence Learning
Guided for Improving Semi-Supervised OCT Fluid Segmentation
- Authors: Tengjin Weng, Yang Shen, Kai Jin, Zhiming Cheng, Yunxiang Li, Gewen
Zhang, Shuai Wang and Yaqi Wang
- Abstract summary: Superpixel and Confident Learning Guided Point Annotations Network (SCLGPA-Net) is based on the teacher-student architecture.
Superpixel-Guided Pseudo-Label Generation (SGPLG) module generates pseudo-labels and pixel-level label trust maps.
Confident Learning Guided Label Refinement (CLGLR) module identifies errors in the pseudo-labels and refines them further.
- Score: 17.85298271262749
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic segmentation of fluid in Optical Coherence Tomography (OCT) images
is beneficial for ophthalmologists to make an accurate diagnosis. Although
semi-supervised OCT fluid segmentation networks enhance their performance by
introducing additional unlabeled data, the performance enhancement is limited.
To address this, we propose the Superpixel and Confident Learning Guided Point
Annotations Network (SCLGPA-Net), based on the teacher-student architecture,
which can learn OCT fluid segmentation from limited fully-annotated data and
abundant point-annotated data. Specifically, we use points to annotate fluid
regions in unlabeled OCT images and the Superpixel-Guided Pseudo-Label
Generation (SGPLG) module generates pseudo-labels and pixel-level label trust
maps from the point annotations. The label trust maps provide an indication of
the reliability of the pseudo-labels. Furthermore, we propose the Confident
Learning Guided Label Refinement (CLGLR) module, which identifies errors in
the pseudo-labels and refines them further. Experiments on the RETOUCH
dataset show that we are able to reduce the need for fully-annotated data by
94.22%, closing the gap with the best fully supervised baselines to a mean IoU
of only 2%. Furthermore, we constructed a private 2D OCT fluid segmentation
dataset for evaluation. Compared with other methods, comprehensive experimental
results demonstrate that the proposed method can achieve excellent performance
in OCT fluid segmentation.
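The SGPLG idea described above can be sketched roughly as follows. This is an illustrative toy, not the authors' implementation: real pipelines would use a superpixel algorithm such as SLIC rather than a fixed grid, and the trust values (1.0 at annotated points, 0.5 elsewhere in a labeled superpixel, 0.0 in unlabeled regions) are arbitrary placeholders.

```python
import numpy as np

def grid_superpixels(h, w, cell=4):
    """Toy superpixels: a regular grid of cell x cell blocks."""
    rows = np.arange(h) // cell
    cols = np.arange(w) // cell
    n_cols = (w + cell - 1) // cell
    return rows[:, None] * n_cols + cols[None, :]

def propagate_point_labels(superpixels, points):
    """Spread each point's class label to its whole superpixel.

    points: list of (row, col, class_id). Pixels in unannotated
    superpixels stay 0 (background). The trust map marks annotated
    points as fully reliable, propagated pixels as partially
    reliable, and unlabeled regions as unreliable.
    """
    pseudo = np.zeros(superpixels.shape, dtype=np.int64)
    trust = np.zeros(superpixels.shape, dtype=np.float64)
    for r, c, cls in points:
        mask = superpixels == superpixels[r, c]
        pseudo[mask] = cls
        trust[mask] = 0.5
    for r, c, _ in points:
        trust[r, c] = 1.0  # the clicked pixels themselves are certain
    return pseudo, trust

# Two point annotations on an 8x8 image with 4x4 superpixels.
pseudo, trust = propagate_point_labels(grid_superpixels(8, 8, cell=4),
                                       [(1, 1, 1), (5, 6, 2)])
```

A pixel-level trust map of this kind can then down-weight unreliable pseudo-labeled pixels in the student's loss.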
Related papers
- Semi-supervised Medical Image Segmentation via Query Distribution
Consistency [3.733491537370078]
We propose a novel Dual KMax UX-Net framework that leverages labeled data to guide the extraction of information from unlabeled data.
Our approach is based on a mutual learning strategy that incorporates two modules: 3D UX-Net as our backbone and KMax decoder.
Our framework outperforms state-of-the-art semi-supervised learning methods on 10% and 20% labeled settings.
arXiv Detail & Related papers (2023-11-21T05:55:39Z)
- Pseudo Label-Guided Data Fusion and Output Consistency for
Semi-Supervised Medical Image Segmentation [9.93871075239635]
We propose the PLGDF framework, which builds upon the mean teacher network for segmenting medical images with less annotation.
We propose a novel pseudo-label utilization scheme, which combines labeled and unlabeled data to augment the dataset effectively.
Our framework yields superior performance compared to six state-of-the-art semi-supervised learning methods.
arXiv Detail & Related papers (2023-11-17T06:36:43Z)
- Dual-Decoder Consistency via Pseudo-Labels Guided Data Augmentation for
Semi-Supervised Medical Image Segmentation [13.707121013895929]
We present a novel semi-supervised learning method, Dual-Decoder Consistency via Pseudo-Labels Guided Data Augmentation.
We use distinct decoders for the student and teacher networks while maintaining the same encoder.
To learn from unlabeled data, we augment the training data with pseudo-labels generated by the teacher network.
arXiv Detail & Related papers (2023-08-31T09:13:34Z) - Dense FixMatch: a simple semi-supervised learning method for pixel-wise
prediction tasks [68.36996813591425]
We propose Dense FixMatch, a simple method for online semi-supervised learning of dense and structured prediction tasks.
We enable the application of FixMatch in semi-supervised learning problems beyond image classification by adding a matching operation on the pseudo-labels.
Dense FixMatch significantly improves results compared to supervised learning using only labeled data, approaching its performance with 1/4 of the labeled samples.
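The core FixMatch mechanism extended here can be sketched as below: dense pseudo-labels are kept only where the model is confident. This is a simplified illustration; the paper's actual matching operation (aligning pseudo-labels with the geometry of the strongly augmented view) is omitted, and the threshold value is a common default, not taken from the paper.

```python
import numpy as np

def dense_pseudo_labels(probs, tau=0.95):
    """FixMatch-style dense pseudo-labeling (illustrative only).

    probs: (C, H, W) per-pixel class probabilities from the
    weakly-augmented view. Returns per-pixel argmax labels and a
    boolean mask selecting confident pixels (max prob >= tau);
    the unsupervised loss is computed only where the mask is True.
    """
    labels = probs.argmax(axis=0)
    mask = probs.max(axis=0) >= tau
    return labels, mask

# 2 classes on a 2x2 image: only the top-left pixel is confident.
probs = np.array([[[0.97, 0.60], [0.55, 0.50]],
                  [[0.03, 0.40], [0.45, 0.50]]])
labels, mask = dense_pseudo_labels(probs, tau=0.95)
```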
arXiv Detail & Related papers (2022-10-18T15:02:51Z)
- Incorporating Semi-Supervised and Positive-Unlabeled Learning for
Boosting Full Reference Image Quality Assessment [73.61888777504377]
Full-reference (FR) image quality assessment (IQA) evaluates the visual quality of a distorted image by measuring its perceptual difference with pristine-quality reference.
Unlabeled data can be easily collected from an image degradation or restoration process, making it encouraging to exploit unlabeled training data to boost FR-IQA performance.
In this paper, we suggest to incorporate semi-supervised and positive-unlabeled (PU) learning for exploiting unlabeled data while mitigating the adverse effect of outliers.
arXiv Detail & Related papers (2022-04-19T09:10:06Z)
- Nuclei Segmentation with Point Annotations from Pathology Images via
Self-Supervised Learning and Co-Training [44.13451004973818]
We propose a weakly-supervised learning method for nuclei segmentation.
Coarse pixel-level labels are derived from the point annotations based on the Voronoi diagram.
A self-supervised visual representation learning method is tailored for nuclei segmentation of pathology images.
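The Voronoi-based label derivation can be illustrated with a brute-force nearest-point assignment (a minimal sketch: real pipelines typically also mark the boundary band between Voronoi cells as uncertain or background rather than labeling every pixel):

```python
import numpy as np

def voronoi_point_labels(h, w, points):
    """Coarse pixel labels from point annotations via a Voronoi
    partition: each pixel takes the class of its nearest point.

    points: list of (row, col, class_id).
    """
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1)  # (h*w, 2)
    pts = np.array([(r, c) for r, c, _ in points])       # (k, 2)
    cls = np.array([cl for _, _, cl in points])
    # Squared distance from every pixel to every annotated point.
    d2 = ((coords[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    return cls[d2.argmin(axis=1)].reshape(h, w)

# Two annotated nuclei centers on a 6x6 tile.
labels = voronoi_point_labels(6, 6, [(0, 0, 1), (5, 5, 2)])
```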
arXiv Detail & Related papers (2022-02-16T17:08:44Z)
- Improving Semi-Supervised and Domain-Adaptive Semantic Segmentation with
Self-Supervised Depth Estimation [94.16816278191477]
We present a framework for semi-supervised and domain-adaptive semantic segmentation.
It is enhanced by self-supervised monocular depth estimation trained only on unlabeled image sequences.
We validate the proposed model on the Cityscapes dataset.
arXiv Detail & Related papers (2021-08-28T01:33:38Z)
- Every Annotation Counts: Multi-label Deep Supervision for Medical Image
Segmentation [85.0078917060652]
We propose a semi-weakly supervised segmentation algorithm to overcome this barrier.
Our approach is based on a new formulation of deep supervision and student-teacher model.
With our novel training regime for segmentation that flexibly makes use of images that are either fully labeled, marked with bounding boxes, just global labels, or not at all, we are able to cut the requirement for expensive labels by 94.22%.
arXiv Detail & Related papers (2021-04-27T14:51:19Z)
- Deep Semi-supervised Knowledge Distillation for Overlapping Cervical
Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z)
- ATSO: Asynchronous Teacher-Student Optimization for Semi-Supervised
Medical Image Segmentation [99.90263375737362]
We propose ATSO, an asynchronous version of teacher-student optimization.
ATSO partitions the unlabeled data into two subsets and alternately uses one subset to fine-tune the model and updates the label on the other subset.
We evaluate ATSO on two popular medical image segmentation datasets and show its superior performance in various semi-supervised settings.
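The alternating schedule described here can be sketched as a simple control loop. This is a schematic, not the paper's training code: `model_fit` and `model_predict` stand in for a real segmentation trainer, and the key property shown is only the alternation itself.

```python
def atso_loop(model_fit, model_predict, labeled, unlabeled, rounds=4):
    """ATSO-style asynchronous loop (schematic).

    The unlabeled pool is split into two halves; each round
    fine-tunes on one half's current pseudo-labels while the other
    half's labels are refreshed, so a subset is never trained on
    labels it produced within the same round.
    """
    half = len(unlabeled) // 2
    a, b = unlabeled[:half], unlabeled[half:]
    pseudo = {x: model_predict(x) for x in unlabeled}
    for r in range(rounds):
        train_half, relabel_half = (a, b) if r % 2 == 0 else (b, a)
        model_fit(labeled + [(x, pseudo[x]) for x in train_half])
        for x in relabel_half:
            pseudo[x] = model_predict(x)
    return pseudo

# Toy stubs that record the schedule instead of training anything.
fits, predict_calls = [], {}
def fit(batch):
    fits.append([x for x, _ in batch])
def predict(x):
    predict_calls[x] = predict_calls.get(x, 0) + 1
    return 0

atso_loop(fit, predict, labeled=[("L0", 1)],
          unlabeled=["u1", "u2", "u3", "u4"], rounds=2)
```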
arXiv Detail & Related papers (2020-06-24T04:05:12Z)
- Learning to segment from misaligned and partial labels [0.0]
Many non-urban settings lack the ground-truth needed for accurate segmentation.
Open source infrastructure annotations like OpenStreetMaps (OSM) are representative of this issue.
We present a novel and generalizable two-stage framework that enables improved pixel-wise image segmentation given misaligned and missing annotations.
arXiv Detail & Related papers (2020-05-27T06:02:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.