Point Cloud Semantic Segmentation with Sparse and Inhomogeneous Annotations
- URL: http://arxiv.org/abs/2312.06259v2
- Date: Thu, 19 Dec 2024 14:38:47 GMT
- Title: Point Cloud Semantic Segmentation with Sparse and Inhomogeneous Annotations
- Authors: Zhiyi Pan, Nan Zhang, Wei Gao, Shan Liu, Ge Li
- Abstract summary: We introduce the probability density function into the gradient sampling approximation method to analyze the impact of annotation sparsity and inhomogeneity under weakly supervised learning.
We propose an Adaptive Annotation Distribution Network (AADNet) capable of robust learning on arbitrarily distributed sparse annotations.
- Score: 38.678301947144305
- Abstract: Utilizing uniformly distributed sparse annotations, weakly supervised learning alleviates the heavy reliance on fine-grained annotations in point cloud semantic segmentation tasks. However, few works discuss the inhomogeneity of sparse annotations, although it is common in real-world scenarios. Therefore, this work introduces the probability density function into the gradient sampling approximation method to qualitatively analyze the impact of annotation sparsity and inhomogeneity under weakly supervised learning. Based on our analysis, we propose an Adaptive Annotation Distribution Network (AADNet) capable of robust learning on arbitrarily distributed sparse annotations. Specifically, we propose a label-aware point cloud downsampling strategy to increase the proportion of annotations involved in the training stage. Furthermore, we design the multiplicative dynamic entropy as the gradient calibration function to mitigate the gradient bias caused by non-uniformly distributed sparse annotations and explicitly reduce the epistemic uncertainty. Without any prior restrictions or additional information, our proposed method achieves comprehensive performance improvements at multiple label rates and different annotation distributions.
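To make the gradient calibration idea concrete, the following is a minimal illustrative sketch, not the paper's actual formulation: it scales each labeled point's cross-entropy by a multiplicative term derived from the predictive entropy, so points where the model is uncertain contribute proportionally larger gradients. The function name, the `1 + entropy` weighting, and the `ignore_index` convention for unlabeled points are all assumptions for illustration.

```python
import numpy as np

def entropy_weighted_ce(logits, labels, ignore_index=-1):
    """Illustrative sketch (not the paper's exact method): weight each
    labeled point's cross-entropy by (1 + predictive entropy), so
    high-uncertainty predictions receive amplified gradients.

    logits: (N, C) raw class scores per point
    labels: (N,) class indices; ignore_index marks unlabeled points
    """
    # Numerically stable softmax over classes.
    shifted = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)

    # Predictive (Shannon) entropy per point.
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)

    # Cross-entropy restricted to labeled points.
    mask = labels != ignore_index
    safe_labels = np.where(mask, labels, 0)  # placeholder index for unlabeled
    ce = -np.log(probs[np.arange(len(labels)), safe_labels] + 1e-12)

    # Multiplicative calibration weight applied only to labeled points.
    return float((ce[mask] * (1.0 + entropy[mask])).mean())
```

Under this weighting, a confident correct prediction (near-zero entropy and near-zero cross-entropy) contributes almost nothing, while an uncertain prediction is penalized more than plain cross-entropy would, which loosely mirrors the stated goal of reducing epistemic uncertainty under non-uniform annotations.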
Related papers
- EAUWSeg: Eliminating annotation uncertainty in weakly-supervised medical image segmentation [4.334357692599945]
Weakly-supervised medical image segmentation is gaining traction as it requires only rough annotations rather than accurate pixel-to-pixel labels.
We propose a novel weak annotation method coupled with its learning framework EAUWSeg to eliminate the annotation uncertainty.
We show that EAUWSeg outperforms existing weakly-supervised segmentation methods.
arXiv Detail & Related papers (2025-01-03T06:21:02Z) - Annotation-Efficient Polyp Segmentation via Active Learning [45.59503015577479]
We propose a deep active learning framework for annotation-efficient polyp segmentation.
In practice, we measure the uncertainty of each sample by examining the similarity between features masked by the prediction map of the polyp and the background area.
We show that our proposed method achieved state-of-the-art performance compared to other competitors on both a public dataset and a large-scale in-house dataset.
arXiv Detail & Related papers (2024-03-21T12:25:17Z) - Towards the Uncharted: Density-Descending Feature Perturbation for Semi-supervised Semantic Segmentation [51.66997548477913]
We propose a novel feature-level consistency learning framework named Density-Descending Feature Perturbation (DDFP)
Inspired by the low-density separation assumption in semi-supervised learning, our key insight is that feature density can shed light on the most promising direction for the segmentation classifier to explore.
The proposed DDFP outperforms other feature-level perturbation designs and shows state-of-the-art performance on both the Pascal VOC and Cityscapes datasets.
arXiv Detail & Related papers (2024-03-11T06:59:05Z) - Regressor-Segmenter Mutual Prompt Learning for Crowd Counting [70.49246560246736]
We propose mutual prompt learning (mPrompt) to solve bias and inaccuracy caused by annotation variance.
Experiments show that mPrompt significantly reduces the Mean Absolute Error (MAE)
arXiv Detail & Related papers (2023-12-04T07:53:59Z) - Multi-View Knowledge Distillation from Crowd Annotations for Out-of-Domain Generalization [53.24606510691877]
We propose new methods for acquiring soft-labels from crowd-annotations by aggregating the distributions produced by existing methods.
We demonstrate that these aggregation methods lead to the most consistent performance across four NLP tasks on out-of-domain test sets.
arXiv Detail & Related papers (2022-12-19T12:40:18Z) - End-to-End Label Uncertainty Modeling in Speech Emotion Recognition using Bayesian Neural Networks and Label Distribution Learning [0.0]
We propose an end-to-end Bayesian neural network capable of being trained on a distribution of annotations to capture the subjectivity-based label uncertainty.
We show that the proposed t-distribution based approach achieves state-of-the-art uncertainty modeling results in speech emotion recognition.
arXiv Detail & Related papers (2022-09-30T12:55:43Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning, showing that random augmentations naturally lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z) - Learning from Crowds with Sparse and Imbalanced Annotations [29.596070201105274]
Crowdsourcing has established itself as an efficient labeling solution by resorting to non-expert crowds.
One common practice is to distribute each instance to multiple workers, whereas each worker only annotates a subset of data, resulting in the sparse annotation phenomenon.
We propose a self-training based approach named Self-Crowd that progressively adds confident pseudo-annotations and rebalances the annotation distribution.
arXiv Detail & Related papers (2021-07-11T13:06:20Z) - Deep Semi-supervised Knowledge Distillation for Overlapping Cervical
Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.