Uncertainty Reduction for 3D Point Cloud Self-Supervised Traversability
Estimation
- URL: http://arxiv.org/abs/2211.11201v1
- Date: Mon, 21 Nov 2022 06:24:41 GMT
- Title: Uncertainty Reduction for 3D Point Cloud Self-Supervised Traversability
Estimation
- Authors: Jihwan Bae, Junwon Seo, Taekyung Kim, Hae-gon Jeon, Kiho Kwak and
Inwook Shim
- Abstract summary: Self-supervised traversability estimation suffers from inherent uncertainties that arise from the scarcity of negative information.
We introduce a method that incorporates unlabeled data in order to reduce this uncertainty.
We evaluate our approach on our own dataset, `Dtrail', which is composed of a wide variety of negative data.
- Score: 17.193700394066266
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Traversability estimation in off-road environments requires a robust
perception system. Recently, approaches that learn traversability estimation
from past vehicle experiences in a self-supervised manner have been emerging, as
they can greatly reduce human labeling costs and labeling errors. Nonetheless,
the learning setting of self-supervised traversability estimation suffers from
inherent uncertainties that arise from the scarcity of negative information:
negative data are rarely harvested, as the system can be severely damaged while
logging such data. To mitigate this uncertainty, we introduce a method that
incorporates unlabeled data.
First, we design a learning architecture that takes query and support data as input.
Second, unlabeled data are assigned based on their proximity in the metric space.
Third, a new metric for measuring uncertainty is introduced. We evaluated our
approach on our own dataset, `Dtrail', which is composed of a wide variety of
negative data.
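The second step, assigning unlabeled data by proximity in the metric space, could be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: the function name, the nearest-neighbor rule, and the `margin` threshold are all assumptions.

```python
import numpy as np

def assign_unlabeled(embed_unlabeled, embed_support, support_labels, margin=0.5):
    """Assign pseudo-labels to unlabeled embeddings by metric-space proximity.

    Each unlabeled sample takes the label of its nearest support embedding,
    but only when that distance falls below a confidence margin; otherwise
    it stays unassigned (-1).
    """
    # Pairwise Euclidean distances, shape (N_unlabeled, N_support).
    dists = np.linalg.norm(
        embed_unlabeled[:, None, :] - embed_support[None, :, :], axis=-1
    )
    nearest = dists.argmin(axis=1)          # index of closest support sample
    pseudo = support_labels[nearest].copy() # inherit its label
    pseudo[dists.min(axis=1) > margin] = -1 # too far: leave unlabeled
    return pseudo

# Toy usage: two support points with labels 0/1, three unlabeled points.
support = np.array([[0.0, 0.0], [10.0, 10.0]])
labels = np.array([0, 1])
unlabeled = np.array([[0.1, 0.0], [9.9, 10.0], [5.0, 5.0]])
print(assign_unlabeled(unlabeled, support, labels, margin=1.0))  # [ 0  1 -1]
```

The margin keeps ambiguous samples (far from all support data) out of the pseudo-labeled pool, which is one simple way to avoid propagating uncertain labels.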
Related papers
- Mitigating Distributional Shift in Semantic Segmentation via Uncertainty
Estimation from Unlabelled Data [19.000718685399935]
This work presents a segmentation network that can detect errors caused by challenging test domains without any additional annotation in a single forward pass.
We use easy-to-obtain, uncurated and unlabelled data to learn to perform uncertainty estimation selectively by enforcing consistency over data augmentation.
The proposed method, named Gamma-SSL, consistently outperforms uncertainty estimation and Out-of-Distribution (OoD) techniques on this difficult benchmark.
arXiv Detail & Related papers (2024-02-27T16:23:11Z) - Credible Teacher for Semi-Supervised Object Detection in Open Scene [106.25850299007674]
In Open Scene Semi-Supervised Object Detection (O-SSOD), unlabeled data may contain unknown objects not observed in the labeled data.
This is detrimental to current methods that rely mainly on self-training, as greater uncertainty leads to lower localization and classification precision of pseudo labels.
We propose Credible Teacher, an end-to-end framework to prevent uncertain pseudo labels from misleading the model.
arXiv Detail & Related papers (2024-01-01T08:19:21Z) - Uncertainty in Graph Contrastive Learning with Bayesian Neural Networks [101.56637264703058]
We show that a variational Bayesian neural network approach can be used to improve uncertainty estimates.
We propose a new measure of uncertainty for contrastive learning, that is based on the disagreement in likelihood due to different positive samples.
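One rough sketch of such a disagreement-based measure: score the anchor against each positive view and take the variance of the resulting likelihoods. The cosine normalization, softmax form, temperature, and use of variance here are assumptions for illustration, not the paper's actual definition.

```python
import numpy as np

def positive_disagreement(anchor, positives, temperature=0.1):
    """Uncertainty as disagreement across positive samples (a sketch).

    Computes a softmax-style likelihood of the anchor matching each
    positive view; high variance across those likelihoods means the
    positives disagree about the anchor, i.e. higher uncertainty.
    """
    # Cosine similarity between the anchor and each positive view.
    anchor = anchor / np.linalg.norm(anchor)
    positives = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    sims = positives @ anchor / temperature
    # Normalize over the positives, then measure their disagreement.
    likelihood = np.exp(sims) / np.exp(sims).sum()
    return likelihood.var()
```

When all positive views agree (identical similarities), the variance is zero; conflicting views push it up.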
arXiv Detail & Related papers (2023-11-30T22:32:24Z) - Estimating Uncertainty in Landslide Segmentation Models [7.537865319452023]
Landslides are a recurring, widespread hazard. Preparation and mitigation efforts can be aided by a high-quality, large-scale dataset that covers global at-risk areas.
Recent automated efforts focus on deep learning models for landslide segmentation from satellite imagery.
Accurate and robust uncertainty estimates can enable low-cost oversight of auto-generated landslide databases to resolve errors, identify hard negative examples, and increase the size of labeled training data.
arXiv Detail & Related papers (2023-11-18T18:18:33Z) - Binary Classification with Confidence Difference [100.08818204756093]
This paper delves into a novel weakly supervised binary classification problem called confidence-difference (ConfDiff) classification.
We propose a risk-consistent approach to tackle this problem and show that the estimation error bound achieves the optimal convergence rate.
We also introduce a risk correction approach to mitigate overfitting problems, whose consistency and convergence rate are also proven.
arXiv Detail & Related papers (2023-10-09T11:44:50Z) - Conservative Prediction via Data-Driven Confidence Minimization [70.93946578046003]
In safety-critical applications of machine learning, it is often desirable for a model to be conservative.
We propose the Data-Driven Confidence Minimization framework, which minimizes confidence on an uncertainty dataset.
arXiv Detail & Related papers (2023-06-08T07:05:36Z) - Adaptive Negative Evidential Deep Learning for Open-set Semi-supervised Learning [69.81438976273866]
Open-set semi-supervised learning (Open-set SSL) considers a more practical scenario, where unlabeled data and test data contain new categories (outliers) not observed in the labeled data (inliers).
We introduce evidential deep learning (EDL) as an outlier detector to quantify different types of uncertainty, and design different uncertainty metrics for self-training and inference.
We propose a novel adaptive negative optimization strategy, making EDL more tailored to the unlabeled dataset containing both inliers and outliers.
arXiv Detail & Related papers (2023-03-21T09:07:15Z) - Training Uncertainty-Aware Classifiers with Conformalized Deep Learning [7.837881800517111]
Deep neural networks are powerful tools to detect hidden patterns in data and leverage them to make predictions, but they are not designed to understand uncertainty.
We develop a novel training algorithm that can lead to more dependable uncertainty estimates, without sacrificing predictive power.
arXiv Detail & Related papers (2022-05-12T05:08:10Z) - CertainNet: Sampling-free Uncertainty Estimation for Object Detection [65.28989536741658]
Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings.
In this work, we propose a novel sampling-free uncertainty estimation method for object detection.
We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location and size.
arXiv Detail & Related papers (2021-10-04T17:59:31Z) - Can We Leverage Predictive Uncertainty to Detect Dataset Shift and
Adversarial Examples in Android Malware Detection? [20.96638126913256]
We re-design and build 24 Android malware detectors by transforming four off-the-shelf detectors with six calibration methods.
We quantify their uncertainties with nine metrics, including three metrics dealing with data imbalance.
It is an open problem to quantify the uncertainty associated with the predicted labels of adversarial examples.
arXiv Detail & Related papers (2021-09-20T16:16:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.