HAVANA: Hard negAtiVe sAmples aware self-supervised coNtrastive leArning
for Airborne laser scanning point clouds semantic segmentation
- URL: http://arxiv.org/abs/2210.10626v1
- Date: Wed, 19 Oct 2022 15:05:17 GMT
- Title: HAVANA: Hard negAtiVe sAmples aware self-supervised coNtrastive leArning
for Airborne laser scanning point clouds semantic segmentation
- Authors: Yunsheng Zhang, Jianguo Yao, Ruixiang Zhang, Siyang Chen, Haifeng Li
- Abstract summary: This work proposes a hard-negative sample aware self-supervised contrastive learning method to pre-train the model for semantic segmentation.
The results obtained by the proposed HAVANA method still exceed 94% of the supervised paradigm's performance with the full training set.
- Score: 9.310873951428238
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Network (DNN) based point cloud semantic segmentation has
achieved significant results on large-scale labeled aerial laser point cloud
datasets. However, annotating such large-scale point clouds is time-consuming.
Moreover, due to density variations and spatial heterogeneity of Airborne Laser
Scanning (ALS) point clouds, DNNs lack generalization capability and thus yield
poor semantic segmentation: a DNN trained in one region underperforms when
directly applied to other regions. Self-Supervised Learning (SSL) is a
promising way to address this problem by pre-training a DNN model on unlabeled
samples and then fine-tuning it on a downstream task with very limited labels.
Hence, this work proposes a hard-negative-sample-aware self-supervised
contrastive learning method to pre-train the model for semantic segmentation.
Traditional contrastive learning for point clouds selects the hardest negative
samples based solely on the distance between embedded features derived from the
learning process, so some negative samples may actually belong to the same
class as the anchor, reducing the effectiveness of contrastive learning.
Therefore, we design an AbsPAN (Absolute Positive And Negative samples)
strategy based on k-means clustering to filter out such probable false
negatives. Experiments on two typical ALS benchmark datasets demonstrate that
the proposed method outperforms supervised training schemes without
pre-training. Even when labels are severely inadequate (10% of the ISPRS
training set), the results obtained by the proposed HAVANA method still exceed
94% of the performance of the supervised paradigm trained on the full training
set.
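The AbsPAN filtering step described in the abstract can be sketched roughly as follows. This is an illustrative toy, not the authors' code: the `kmeans` helper, the `abspan_info_nce` function, and all parameter names are our assumptions, and a real pipeline would run this on learned point embeddings inside a training loop.

```python
import numpy as np

def kmeans(feats, k, iters=10, seed=0):
    """Plain k-means on row vectors; returns one cluster id per row."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), size=k, replace=False)]
    for _ in range(iters):
        # Assign each feature to its nearest center.
        dists = np.linalg.norm(feats[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute centers from the current assignment.
        for c in range(k):
            if (labels == c).any():
                centers[c] = feats[labels == c].mean(axis=0)
    return labels

def abspan_info_nce(anchor, positive, negatives, cluster_ids,
                    anchor_cluster, tau=0.1):
    """InfoNCE-style loss in which candidate negatives that fall into the
    anchor's k-means cluster are treated as probable false negatives and
    dropped from the denominator (the AbsPAN idea, as we read it)."""
    keep = cluster_ids != anchor_cluster
    kept_negatives = negatives[keep]

    def sim(a, b):  # cosine similarity
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

    pos = np.exp(sim(anchor, positive) / tau)
    neg = sum(np.exp(sim(anchor, n) / tau) for n in kept_negatives)
    return -np.log(pos / (pos + neg + 1e-8))
```

The design choice to show: distance-based hard-negative mining alone cannot tell a genuinely hard negative from a same-class sample, whereas a cheap unsupervised clustering pass gives a proxy for class identity that lets probable same-class "negatives" be excluded.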
Related papers
- Adaptive-Labeling for Enhancing Remote Sensing Cloud Understanding [40.572147431473034]
We introduce an innovative model-agnostic Cloud Adaptive-Labeling (CAL) approach, which operates iteratively to enhance the quality of training data annotations.
Our methodology commences by training a cloud segmentation model using the original annotations.
It introduces a trainable pixel intensity threshold for adaptively labeling the cloud training images on the fly.
The newly generated labels are then employed to fine-tune the model.
arXiv Detail & Related papers (2023-11-09T08:23:45Z)
- KAKURENBO: Adaptively Hiding Samples in Deep Neural Network Training [2.8804804517897935]
We propose a method for hiding the least-important samples during the training of deep neural networks.
We adaptively find samples to exclude in a given epoch based on their contribution to the overall learning process.
Our method can reduce total training time by up to 22% while impacting accuracy by only 0.4% compared to the baseline.
arXiv Detail & Related papers (2023-10-16T06:19:29Z)
- LESS: Label-Efficient Semantic Segmentation for LiDAR Point Clouds [62.49198183539889]
We propose a label-efficient semantic segmentation pipeline for outdoor scenes with LiDAR point clouds.
Our method co-designs an efficient labeling process with semi/weakly supervised learning.
Our proposed method is even highly competitive compared to the fully supervised counterpart with 100% labels.
arXiv Detail & Related papers (2022-10-14T19:13:36Z)
- ScatterSample: Diversified Label Sampling for Data Efficient Graph Neural Network Learning [22.278779277115234]
In some graph neural network (GNN) applications where training is expensive, labeling new instances is also costly.
We develop a data-efficient active sampling framework, ScatterSample, to train GNNs under an active learning setting.
Our experiments on five datasets show that ScatterSample significantly outperforms the other GNN active learning baselines.
arXiv Detail & Related papers (2022-06-09T04:05:02Z)
- Open-Set Semi-Supervised Learning for 3D Point Cloud Understanding [62.17020485045456]
It is commonly assumed in semi-supervised learning (SSL) that the unlabeled data are drawn from the same distribution as that of the labeled ones.
We propose to selectively utilize unlabeled data through sample weighting, so that only conducive unlabeled data would be prioritized.
arXiv Detail & Related papers (2022-05-02T16:09:17Z)
- Active Learning for Deep Visual Tracking [51.5063680734122]
Convolutional neural networks (CNNs) have been successfully applied to the single target tracking task in recent years.
In this paper, we propose an active learning method for deep visual tracking, which selects and annotates the unlabeled samples to train the deep CNNs model.
Under the guidance of active learning, the tracker based on the trained deep CNNs model can achieve competitive tracking performance while reducing the labeling cost.
arXiv Detail & Related papers (2021-10-17T11:47:56Z)
- Guided Point Contrastive Learning for Semi-supervised Point Cloud Semantic Segmentation [90.2445084743881]
We present a method for semi-supervised point cloud semantic segmentation to adopt unlabeled point clouds in training to boost the model performance.
Inspired by the recent contrastive loss in self-supervised tasks, we propose the guided point contrastive loss to enhance the feature representation and model generalization ability.
arXiv Detail & Related papers (2021-10-15T16:38:54Z)
- A new weakly supervised approach for ALS point cloud semantic segmentation [1.4620086904601473]
We propose a deep-learning based weakly supervised framework for semantic segmentation of ALS point clouds.
We exploit potential information from unlabeled data subject to incomplete and sparse labels.
Our method achieves an overall accuracy of 83.0% and an average F1 score of 70.0%, improvements of 6.9% and 12.8%, respectively.
arXiv Detail & Related papers (2021-10-04T14:00:23Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z) - S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural
Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills knowledge from real-valued networks into binary networks via the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
arXiv Detail & Related papers (2021-02-17T18:59:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.