Learning with Weak Annotations for Robust Maritime Obstacle Detection
- URL: http://arxiv.org/abs/2206.13263v1
- Date: Mon, 27 Jun 2022 12:52:26 GMT
- Title: Learning with Weak Annotations for Robust Maritime Obstacle Detection
- Authors: Lojze Žust and Matej Kristan
- Abstract summary: Maritime obstacle detection is crucial for safe navigation of autonomous boats.
Per-pixel ground truth labeling of such datasets is labor-intensive and expensive.
We propose a new scaffolding learning regime (SLR) to train segmentation-based obstacle detection networks.
SLR trains an initial model from weak annotations, then alternates between re-estimating the segmentation pseudo labels and improving the network parameters.
- Score: 10.773819584718648
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robust maritime obstacle detection is crucial for safe navigation of
autonomous boats and timely collision avoidance. The current state-of-the-art
is based on deep segmentation networks trained on large datasets. Per-pixel
ground truth labeling of such datasets, however, is labor-intensive and
expensive. We propose a new scaffolding learning regime (SLR) that leverages
weak annotations consisting of water edge, horizon and obstacle bounding boxes
to train segmentation-based obstacle detection networks, and thus reduces the
required ground truth labeling effort twenty-fold. SLR trains an initial
model from weak annotations, then alternates between re-estimating the
segmentation pseudo labels and improving the network parameters. Experiments
show that maritime obstacle segmentation networks trained using SLR on weak
labels not only match, but outperform the same networks trained with dense
ground truth labels, which is a remarkable result. In addition to increased
accuracy, SLR also increases domain generalization and can be used for domain
adaptation with a low manual annotation load. The code and pre-trained models
are available at https://github.com/lojzezust/SLR .
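To make the training regime concrete, the sketch below shows the alternating loop the abstract describes, assuming a standard PyTorch segmentation model. The helpers masks_from_weak_annotations (turning water-edge, horizon and bounding-box annotations into coarse per-pixel masks) and refine_pseudo_labels (constraining the model's predictions with those annotations) are hypothetical placeholders for illustration, not the implementation from the linked repository.

```python
# Minimal sketch of the alternating SLR training loop described in the abstract.
# `masks_from_weak_annotations` and `refine_pseudo_labels` are hypothetical
# placeholders, not the API of the official repository.
import torch
import torch.nn.functional as F

IGNORE = 255  # label value for pixels the weak annotations leave undecided


def train_epoch(model, batches, optimizer):
    """One pass of supervised segmentation training on (image, target) pairs."""
    model.train()
    for images, targets in batches:
        logits = model(images)  # (B, C, H, W) per-pixel class scores
        loss = F.cross_entropy(logits, targets, ignore_index=IGNORE)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()


def scaffolding_learning(model, loader, optimizer,
                         masks_from_weak_annotations, refine_pseudo_labels,
                         warmup_epochs=20, rounds=4, epochs_per_round=10):
    # 1) Warm-up: train on coarse masks derived from the weak annotations
    #    (water edge, horizon and obstacle bounding boxes).
    for _ in range(warmup_epochs):
        batches = [(img, masks_from_weak_annotations(ann)) for img, ann in loader]
        train_epoch(model, batches, optimizer)

    # 2) Alternate: re-estimate per-pixel pseudo labels with the current model
    #    (constrained by the weak annotations), then refine the network on them.
    for _ in range(rounds):
        model.eval()
        with torch.no_grad():
            batches = [(img, refine_pseudo_labels(model(img), ann))
                       for img, ann in loader]
        for _ in range(epochs_per_round):
            train_epoch(model, batches, optimizer)
    return model
```

The pseudo-label refinement step is where the weak annotations do their work, which, per the abstract, is enough for SLR-trained networks to match and even outperform training on dense ground truth labels.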
Related papers
- Weakly Supervised LiDAR Semantic Segmentation via Scatter Image Annotation [38.715754110667916]
We implement LiDAR semantic segmentation using scatter image annotation.
We also propose ScatterNet, a network that includes three pivotal strategies to reduce the performance gap.
Our method requires less than 0.02% of the labeled points to achieve over 95% of the performance of fully-supervised methods.
arXiv Detail & Related papers (2024-04-19T13:01:30Z)
- SwiMDiff: Scene-wide Matching Contrastive Learning with Diffusion Constraint for Remote Sensing Image [21.596874679058327]
SwiMDiff is a novel self-supervised pre-training framework for remote sensing images.
It recalibrates labels to recognize data from the same scene as false negatives.
It seamlessly integrates contrastive learning (CL) with a diffusion model.
arXiv Detail & Related papers (2024-01-10T11:55:58Z)
- 2D Feature Distillation for Weakly- and Semi-Supervised 3D Semantic Segmentation [92.17700318483745]
We propose an image-guidance network (IGNet) which builds upon the idea of distilling high level feature information from a domain adapted synthetically trained 2D semantic segmentation network.
IGNet achieves state-of-the-art results for weakly-supervised LiDAR semantic segmentation on ScribbleKITTI, boasting up to 98% relative performance to fully supervised training with only 8% labeled points.
arXiv Detail & Related papers (2023-11-27T07:57:29Z)
- Histogram Layer Time Delay Neural Networks for Passive Sonar Classification [58.720142291102135]
A novel method combines a time delay neural network and histogram layer to incorporate statistical contexts for improved feature learning and underwater acoustic target classification.
The proposed method outperforms the baseline model, demonstrating the utility of incorporating statistical contexts for passive sonar target recognition.
arXiv Detail & Related papers (2023-07-25T19:47:26Z)
- SemHint-MD: Learning from Noisy Semantic Labels for Self-Supervised Monocular Depth Estimation [19.229255297016635]
Self-supervised depth estimation can be trapped in a local minimum due to the gradient-locality issue of the photometric loss.
We present a framework to enhance depth by leveraging semantic segmentation to guide the network to jump out of the local minimum.
arXiv Detail & Related papers (2023-03-31T17:20:27Z)
- Weakly Supervised Semantic Segmentation for Large-Scale Point Cloud [69.36717778451667]
Existing methods for large-scale point cloud semantic segmentation require expensive, tedious and error-prone manual point-wise annotations.
We propose an effective weakly supervised method containing two components to solve the problem.
Experimental results show a large gain over existing weakly supervised methods and results comparable to fully supervised methods.
arXiv Detail & Related papers (2022-12-09T09:42:26Z)
- STEdge: Self-training Edge Detection with Multi-layer Teaching and Regularization [15.579360385857129]
We study the problem of self-training edge detection, leveraging the untapped wealth of large-scale unlabeled image datasets.
We design a self-supervised framework with multi-layer regularization and self-teaching.
Our method attains 4.8% improvement for ODS and 5.8% for OIS when tested on the unseen BIPED dataset.
arXiv Detail & Related papers (2022-01-13T18:26:36Z)
- Learning Maritime Obstacle Detection from Weak Annotations by Scaffolding [15.80122710743313]
Autonomous boats operating in coastal waters rely on robust perception methods for obstacle detection and timely collision avoidance.
Per-pixel ground truth labeling of such datasets is labor-intensive and expensive.
We propose a new scaffolding learning regime that allows training obstacle detection segmentation networks only from such weak annotations.
arXiv Detail & Related papers (2021-08-01T23:37:57Z)
- Self-Damaging Contrastive Learning [92.34124578823977]
In practice, unlabeled data is commonly imbalanced and follows a long-tail distribution.
This paper proposes a principled framework called Self-Damaging Contrastive Learning to automatically balance the representation learning without knowing the classes.
Our experiments show that SDCLR significantly improves not only overall accuracies but also balancedness.
arXiv Detail & Related papers (2021-06-06T00:04:49Z)
- Dense Label Encoding for Boundary Discontinuity Free Rotation Detection [69.75559390700887]
This paper explores a relatively less-studied methodology based on classification.
We propose new techniques to push its frontier in two aspects.
Experiments and visual analysis on large-scale public datasets for aerial images show the effectiveness of our approach.
arXiv Detail & Related papers (2020-11-19T05:42:02Z)
- Object Tracking through Residual and Dense LSTMs [67.98948222599849]
Deep trackers based on LSTM (Long Short-Term Memory) recurrent neural networks have emerged as a powerful alternative.
DenseLSTMs outperform Residual and regular LSTM, and offer a higher resilience to nuisances.
Our case study supports the adoption of residual-based RNNs for enhancing the robustness of other trackers.
arXiv Detail & Related papers (2020-06-22T08:20:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.