Superpixel-Guided Label Softening for Medical Image Segmentation
- URL: http://arxiv.org/abs/2007.08897v1
- Date: Fri, 17 Jul 2020 10:55:59 GMT
- Title: Superpixel-Guided Label Softening for Medical Image Segmentation
- Authors: Hang Li, Dong Wei, Shilei Cao, Kai Ma, Liansheng Wang, and Yefeng
Zheng
- Abstract summary: We propose superpixel-based label softening for medical image segmentation.
We show that this method achieves segmentation performance superior overall to baseline and comparison methods on both 3D and 2D medical images.
- Score: 31.989873877526424
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Segmentation of objects of interest is one of the central tasks in medical
image analysis and is indispensable for quantitative analysis. When
developing machine-learning-based methods for automated segmentation, manual
annotations are usually used as the ground truth that the models learn
to mimic. While the bulk of a segmentation target is relatively easy
to label, the peripheral areas are often difficult to handle due to ambiguous
boundaries, the partial volume effect, and similar factors, and are therefore likely
to be labeled with uncertainty. This labeling uncertainty may, in turn, degrade
the performance of the trained models. In this paper, we propose
superpixel-based label softening to tackle the above issue. Generated by
unsupervised over-segmentation, each superpixel is expected to represent a
locally homogeneous area. If a superpixel intersects with the annotation
boundary, we consider a high probability of uncertain labeling within this
area. Driven by this intuition, we soften labels in this area based on signed
distances to the annotation boundary and assign probability values within [0,
1] to them, in comparison with the original "hard", binary labels of either 0
or 1. The softened labels are then used to train the segmentation models
together with the hard labels. Experimental results on a brain MRI dataset and
an optical coherence tomography dataset demonstrate that this conceptually
simple, easy-to-implement method achieves segmentation performance
superior overall to baseline and comparison methods on both 3D and 2D
medical images.
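The core idea above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it uses a regular grid partition as a stand-in for unsupervised over-segmentation (in practice one would use a method such as SLIC), and the sigmoid mapping with temperature `tau` is an assumed choice for converting signed distances into probabilities.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def soften_labels(hard_label, superpixels, tau=2.0):
    """Soften a binary mask into [0, 1] probabilities inside boundary-crossing superpixels."""
    # Signed distance to the annotation boundary: positive inside the object, negative outside.
    signed_dist = distance_transform_edt(hard_label) - distance_transform_edt(1 - hard_label)
    soft = hard_label.astype(float)
    for sp in np.unique(superpixels):
        region = superpixels == sp
        # A superpixel intersecting the annotation boundary contains both 0s and 1s,
        # so its labels are considered uncertain and are softened.
        if hard_label[region].min() != hard_label[region].max():
            soft[region] = 1.0 / (1.0 + np.exp(-signed_dist[region] / tau))
    return soft

# Toy demo: a 16x16 square mask, with a 4x4 block grid standing in for superpixels.
hard = np.zeros((16, 16), dtype=int)
hard[3:12, 3:12] = 1
yy, xx = np.mgrid[0:16, 0:16]
grid_superpixels = (yy // 4) * 4 + (xx // 4)
soft = soften_labels(hard, grid_superpixels)
```

Superpixels lying entirely inside or outside the annotation keep their hard 0/1 labels; only the uncertain boundary regions receive soft probabilities, which can then be used alongside the hard labels when training a segmentation model.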
Related papers
- Semantic Connectivity-Driven Pseudo-labeling for Cross-domain
Segmentation [89.41179071022121]
Self-training is a prevailing approach in cross-domain semantic segmentation.
We propose a novel approach called Semantic Connectivity-driven pseudo-labeling.
This approach formulates pseudo-labels at the connectivity level and thus can facilitate learning structured and low-noise semantics.
arXiv Detail & Related papers (2023-12-11T12:29:51Z) - Self-Supervised Correction Learning for Semi-Supervised Biomedical Image
Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z) - Pseudo-label Guided Cross-video Pixel Contrast for Robotic Surgical
Scene Segmentation with Limited Annotations [72.15956198507281]
We propose PGV-CL, a novel pseudo-label guided cross-video contrast learning method to boost scene segmentation.
We extensively evaluate our method on a public robotic surgery dataset EndoVis18 and a public cataract dataset CaDIS.
arXiv Detail & Related papers (2022-07-20T05:42:19Z) - Deep Spectral Methods: A Surprisingly Strong Baseline for Unsupervised
Semantic Segmentation and Localization [98.46318529630109]
We take inspiration from traditional spectral segmentation methods by reframing image decomposition as a graph partitioning problem.
We find that these eigenvectors already decompose an image into meaningful segments, and can be readily used to localize objects in a scene.
By clustering the features associated with these segments across a dataset, we can obtain well-delineated, nameable regions.
arXiv Detail & Related papers (2022-05-16T17:47:44Z) - Local contrastive loss with pseudo-label based self-training for
semi-supervised medical image segmentation [13.996217500923413]
Semi/self-supervised learning-based approaches exploit unlabeled data along with limited annotated data.
Recent self-supervised learning methods use contrastive loss to learn good global level representations from unlabeled images.
We propose a local contrastive loss to learn good pixel level features useful for segmentation by exploiting semantic label information.
arXiv Detail & Related papers (2021-12-17T17:38:56Z) - Reference-guided Pseudo-Label Generation for Medical Semantic
Segmentation [25.76014072179711]
We propose a novel approach to generate supervision for semi-supervised semantic segmentation.
We use a small number of labeled images as reference material and match pixels in an unlabeled image to the semantics of the best fitting pixel in a reference set.
We achieve the same performance as a standard fully supervised model on X-ray anatomy segmentation, albeit with 95% fewer labeled images.
arXiv Detail & Related papers (2021-12-01T12:21:24Z) - Semi-supervised Semantic Segmentation with Directional Context-aware
Consistency [66.49995436833667]
We focus on the semi-supervised segmentation problem where only a small set of labeled data is provided with a much larger collection of totally unlabeled images.
A preferred high-level representation should capture the contextual information while not losing self-awareness.
We present the Directional Contrastive Loss (DC Loss) to accomplish the consistency in a pixel-to-pixel manner.
arXiv Detail & Related papers (2021-06-27T03:42:40Z) - Semantic Segmentation with Generative Models: Semi-Supervised Learning
and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z) - Modeling the Probabilistic Distribution of Unlabeled Data for One-shot
Medical Image Segmentation [40.41161371507547]
We develop a data augmentation method for one-shot brain magnetic resonance imaging (MRI) image segmentation.
Our method exploits only one labeled MRI image (named atlas) and a few unlabeled images.
Our method outperforms the state-of-the-art one-shot medical segmentation methods.
arXiv Detail & Related papers (2021-02-03T12:28:04Z) - Weakly-Supervised Segmentation for Disease Localization in Chest X-Ray
Images [0.0]
We propose a novel approach to the semantic segmentation of medical chest X-ray images with only image-level class labels as supervision.
We show that this approach is applicable to chest X-rays for detecting an anomalous volume of air between the lung and the chest wall.
arXiv Detail & Related papers (2020-07-01T20:48:35Z) - Manifold-driven Attention Maps for Weakly Supervised Segmentation [9.289524646688244]
We propose a manifold-driven, attention-based network to enhance visually salient regions.
Our method generates superior attention maps directly during inference without extra computation.
arXiv Detail & Related papers (2020-04-07T00:03:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.