Scribble-Supervised Semantic Segmentation by Uncertainty Reduction on
Neural Representation and Self-Supervision on Neural Eigenspace
- URL: http://arxiv.org/abs/2102.09896v1
- Date: Fri, 19 Feb 2021 12:33:57 GMT
- Title: Scribble-Supervised Semantic Segmentation by Uncertainty Reduction on
Neural Representation and Self-Supervision on Neural Eigenspace
- Authors: Zhiyi Pan, Peng Jiang, Yunhai Wang, Changhe Tu, Anthony G. Cohn
- Abstract summary: Scribble-supervised semantic segmentation has gained much attention recently for its promising performance without high-quality annotations.
This work aims to achieve semantic segmentation by scribble annotations directly without extra information and other limitations.
We propose holistic operations, including entropy minimization and a network-embedded random walk on the neural representation, to reduce uncertainty.
- Score: 21.321005898976253
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Scribble-supervised semantic segmentation has gained much attention recently
for its promising performance without high-quality annotations. Due to the lack
of supervision, confident and consistent predictions are usually hard to
obtain. Typically, these problems are handled by either adopting an auxiliary
task with a well-labeled dataset or incorporating a graphical model with
additional requirements on scribble annotations. Instead, this work aims to
achieve semantic segmentation by scribble annotations directly without extra
information and other limitations. Specifically, we propose holistic
operations, including entropy minimization and a network-embedded random walk
on the neural representation, to reduce uncertainty. Given the probabilistic transition
matrix of a random walk, we further train the network with self-supervision on
its neural eigenspace to impose consistency on predictions between related
images. Comprehensive experiments and ablation studies verify the proposed
approach, which demonstrates superiority over others; it is even comparable to
some full-label supervised ones and works well when scribbles are randomly
shrunk or dropped.
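The abstract names three operations: entropy minimization on predictions, a random walk driven by a probabilistic transition matrix built from the neural representation, and a self-supervised consistency loss on the eigenspace of that transition matrix. The paper's own implementation is not reproduced here, so the following is a minimal NumPy sketch of these ideas as described above; the function names, the Gaussian affinity used to build the transition matrix, and all parameter choices are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def entropy_loss(probs, eps=1e-8):
    # Mean per-pixel entropy of softmax predictions (N, C); minimizing it
    # pushes weakly supervised pixels toward confident, near one-hot outputs.
    return float(-np.mean(np.sum(probs * np.log(probs + eps), axis=1)))

def transition_matrix(features, sigma=1.0):
    # Row-stochastic random-walk transition matrix built from pairwise
    # Gaussian affinities of the neural representation (N, D).
    d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    affinity = np.exp(-d2 / (2 * sigma ** 2))
    return affinity / affinity.sum(axis=1, keepdims=True)

def random_walk_refine(probs, transition, t=1):
    # Diffuse the predictions by t random-walk steps, so pixels with
    # similar representations receive more consistent labels.
    refined = probs
    for _ in range(t):
        refined = transition @ refined
    return refined

def eigenspace_consistency(t1, t2, k=2):
    # Self-supervised consistency between the leading eigenspaces of two
    # transition matrices (e.g. from two related images): compare the
    # projectors onto the k dominant eigenvectors, which is invariant to
    # the sign and ordering of individual eigenvectors.
    def leading_projector(tm):
        vals, vecs = np.linalg.eig(tm)
        idx = np.argsort(-np.abs(vals))[:k]
        q, _ = np.linalg.qr(np.real(vecs[:, idx]))
        return q @ q.T
    return float(np.sum((leading_projector(t1) - leading_projector(t2)) ** 2))
```

In this reading, the entropy and diffusion terms reduce uncertainty within one image, while the eigenspace term imposes consistency between related images, matching the two roles the abstract assigns them.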
Related papers
- Leveraging Unlabeled Data for 3D Medical Image Segmentation through
Self-Supervised Contrastive Learning [3.7395287262521717]
Current 3D semi-supervised segmentation methods face significant challenges such as limited consideration of contextual information.
We introduce two distinct networks designed to explore and exploit the discrepancies between them, ultimately correcting erroneous predictions.
We employ a self-supervised contrastive learning paradigm to distinguish between reliable and unreliable predictions.
arXiv Detail & Related papers (2023-11-21T14:03:16Z) - ZScribbleSeg: Zen and the Art of Scribble Supervised Medical Image
Segmentation [16.188681108101196]
We propose to utilize solely scribble annotations for weakly supervised segmentation.
Existing solutions mainly leverage selective losses computed solely on annotated areas.
We introduce regularization terms to encode the spatial relationship and shape prior.
We integrate the efficient scribble supervision with the prior into a unified framework, denoted as ZScribbleSeg.
arXiv Detail & Related papers (2023-01-12T09:00:40Z) - Neighbour Consistency Guided Pseudo-Label Refinement for Unsupervised
Person Re-Identification [80.98291772215154]
Unsupervised person re-identification (ReID) aims at learning discriminative identity features for person retrieval without any annotations.
Recent advances accomplish this task by leveraging clustering-based pseudo labels.
We propose a Neighbour Consistency guided Pseudo Label Refinement framework.
arXiv Detail & Related papers (2022-11-30T09:39:57Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations naturally lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z) - Hypernet-Ensemble Learning of Segmentation Probability for Medical Image
Segmentation with Ambiguous Labels [8.841870931360585]
Deep Learning approaches are notoriously overconfident in their predictions, with highly polarized label probabilities.
This is often not desirable for many applications with the inherent label ambiguity even in human annotations.
We propose novel methods to improve the segmentation probability estimation without sacrificing performance in a real-world scenario.
arXiv Detail & Related papers (2021-12-13T14:24:53Z) - Bayesian Attention Belief Networks [59.183311769616466]
Attention-based neural networks have achieved state-of-the-art results on a wide range of tasks.
This paper introduces Bayesian attention belief networks, which construct a decoder network by modeling unnormalized attention weights.
We show that our method outperforms deterministic attention and state-of-the-art attention in accuracy, uncertainty estimation, generalization across domains, and adversarial attacks.
arXiv Detail & Related papers (2021-06-09T17:46:22Z) - Affinity Attention Graph Neural Network for Weakly Supervised Semantic
Segmentation [86.44301443789763]
We propose Affinity Attention Graph Neural Network ($A^2$GNN) for weakly supervised semantic segmentation.
Our approach achieves new state-of-the-art performance on the Pascal VOC 2012 dataset.
arXiv Detail & Related papers (2021-06-08T02:19:21Z) - Self-Supervision by Prediction for Object Discovery in Videos [62.87145010885044]
In this paper, we use the prediction task as self-supervision and build a novel object-centric model for image sequence representation.
Our framework can be trained without the help of any manual annotation or pretrained network.
Initial experiments confirm that the proposed pipeline is a promising step towards object-centric video prediction.
arXiv Detail & Related papers (2021-03-09T19:14:33Z) - Self-supervised driven consistency training for annotation efficient
histopathology image analysis [13.005873872821066]
Training a neural network with a large labeled dataset is still a dominant paradigm in computational histopathology.
We propose a self-supervised pretext task that harnesses the underlying multi-resolution contextual cues in histology whole-slide images to learn a powerful supervisory signal for unsupervised representation learning.
We also propose a new teacher-student semi-supervised consistency paradigm that learns to effectively transfer the pretrained representations to downstream tasks based on prediction consistency with the task-specific unlabeled data.
arXiv Detail & Related papers (2021-02-07T19:46:21Z) - Scribble-Supervised Semantic Segmentation by Random Walk on Neural
Representation and Self-Supervision on Neural Eigenspace [10.603823180750446]
This work aims to achieve semantic segmentation supervised by scribble labels directly, without auxiliary information or other intermediate manipulation.
We impose diffusion on neural representation by random walk and consistency on neural eigenspace by self-supervision.
The results demonstrate the superiority of the proposed method and are even comparable to some full-label supervised ones.
arXiv Detail & Related papers (2020-11-11T08:22:25Z) - Semi-Supervised Crowd Counting via Self-Training on Surrogate Tasks [50.78037828213118]
This paper tackles the semi-supervised crowd counting problem from the perspective of feature learning.
We propose a novel semi-supervised crowd counting method which is built upon two innovative components.
arXiv Detail & Related papers (2020-07-07T05:30:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.