False Positive Detection and Prediction Quality Estimation for LiDAR
Point Cloud Segmentation
- URL: http://arxiv.org/abs/2110.15681v1
- Date: Fri, 29 Oct 2021 11:00:30 GMT
- Title: False Positive Detection and Prediction Quality Estimation for LiDAR
Point Cloud Segmentation
- Authors: Pascal Colling, Matthias Rottmann, Lutz Roese-Koerner, Hanno
Gottschalk
- Abstract summary: We present a novel post-processing tool for semantic segmentation of LiDAR point cloud data, called LidarMetaSeg.
We compute dispersion measures based on network probability outputs as well as feature measures based on point cloud input features and aggregate them on segment level.
These aggregated measures are used to train a meta classification model to predict whether a predicted segment is a false positive or not and a meta regression model to predict the segmentwise intersection over union.
- Score: 5.735035463793009
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel post-processing tool for semantic segmentation of LiDAR
point cloud data, called LidarMetaSeg, which estimates the prediction quality
segmentwise. For this purpose we compute dispersion measures based on network
probability outputs as well as feature measures based on point cloud input
features and aggregate them on segment level. These aggregated measures are
used to train a meta classification model to predict whether a predicted
segment is a false positive or not and a meta regression model to predict the
segmentwise intersection over union. Both models can then be applied to
semantic segmentation inferences without knowing the ground truth. In our
experiments we use different LiDAR segmentation models and datasets and analyze
the power of our method. We show that our results outperform other standard
approaches.
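The pipeline described in the abstract can be sketched in a few lines: compute a pointwise dispersion measure from the softmax output, then aggregate over each predicted segment to obtain features for the meta models. The specific feature choices below (mean/max entropy, probability margin, segment size) are illustrative assumptions, not the paper's exact feature set.

```python
import math

def entropy(probs):
    """Shannon entropy of one softmax output -- a pointwise dispersion measure."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def aggregate_segment(point_probs):
    """Aggregate pointwise dispersion measures over one predicted segment.

    Returns segment-level features that a meta classification model
    (false positive or not) or meta regression model (segmentwise IoU)
    could consume. Feature choices here are hypothetical.
    """
    ents = [entropy(p) for p in point_probs]
    margins = []
    for p in point_probs:
        top = sorted(p, reverse=True)
        margins.append(top[0] - top[1])  # top-1 vs top-2 probability gap
    return {
        "mean_entropy": sum(ents) / len(ents),
        "max_entropy": max(ents),
        "mean_margin": sum(margins) / len(margins),
        "size": len(point_probs),
    }

# Example: a small predicted segment of 3 LiDAR points, 4 classes each.
segment = [
    [0.7, 0.1, 0.1, 0.1],
    [0.6, 0.2, 0.1, 0.1],
    [0.4, 0.3, 0.2, 0.1],
]
features = aggregate_segment(segment)
```

The resulting feature dictionary would be one training row for the meta classifier or regressor; at test time the same features are computed without ground truth.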
Related papers
- Exploring Beyond Logits: Hierarchical Dynamic Labeling Based on Embeddings for Semi-Supervised Classification [49.09505771145326]
We propose a Hierarchical Dynamic Labeling (HDL) algorithm that does not depend on model predictions and utilizes image embeddings to generate sample labels.
Our approach has the potential to change the paradigm of pseudo-label generation in semi-supervised learning.
arXiv Detail & Related papers (2024-04-26T06:00:27Z)
- Concurrent Misclassification and Out-of-Distribution Detection for Semantic Segmentation via Energy-Based Normalizing Flow [0.0]
Recent semantic segmentation models accurately classify test-time examples that are similar to a training dataset distribution.
We propose a generative model for concurrent in-distribution misclassification (IDM) and OOD detection that relies on a normalizing flow framework.
FlowEneDet achieves promising results on Cityscapes, Cityscapes-C, FishyScapes and SegmentMeIfYouCan benchmarks in IDM/OOD detection when applied to pretrained DeepLabV3+ and SegFormer semantic segmentation models.
arXiv Detail & Related papers (2023-05-16T17:02:57Z)
- Pixel-wise Gradient Uncertainty for Convolutional Neural Networks applied to Out-of-Distribution Segmentation [0.43512163406552007]
We present a method for obtaining uncertainty scores from pixel-wise loss gradients which can be computed efficiently during inference.
Our experiments show the ability of our method to identify wrong pixel classifications and to estimate prediction quality at negligible computational overhead.
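For softmax/cross-entropy models, a loss-gradient uncertainty score of this kind has a cheap closed form: using the predicted class as pseudo-label, the gradient of the cross-entropy loss with respect to the logits is `p - onehot(argmax p)`, whose L1 norm is `2 * (1 - p_max)`. This is one minimal reading of the idea, not necessarily the paper's exact formulation.

```python
def gradient_uncertainty(probs):
    """Uncertainty score from the self-supervised cross-entropy gradient.

    With the predicted class as pseudo-label, the CE gradient w.r.t. the
    logits is p - onehot(argmax p); we return its L1 norm. No backward
    pass is needed, so the overhead at inference is negligible.
    """
    y_hat = max(range(len(probs)), key=probs.__getitem__)
    grad = [p - (1.0 if k == y_hat else 0.0) for k, p in enumerate(probs)]
    return sum(abs(g) for g in grad)
```

A confident pixel (e.g. `p_max = 0.9`) scores 0.2, while a borderline one (`p_max = 0.4`) scores 1.2, so thresholding the score flags likely misclassifications.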
arXiv Detail & Related papers (2023-03-13T08:37:59Z)
- Memory-Based Meta-Learning on Non-Stationary Distributions [29.443692147512742]
Memory-based meta-learning is a technique for approximating Bayes-optimal predictors.
We show that memory-based neural models, including Transformers, LSTMs, and RNNs, can learn to accurately approximate known Bayes-optimal algorithms.
arXiv Detail & Related papers (2023-02-06T19:08:59Z)
- Change-point Detection and Segmentation of Discrete Data using Bayesian Context Trees [7.090165638014331]
Building on the recently introduced Bayesian Context Trees (BCT) framework, the distributions of different segments in a discrete time series are described as variable-memory Markov chains.
Inference for the presence and location of change-points is then performed via Markov chain Monte Carlo sampling.
Results on both simulated and real-world data indicate that the proposed methodology performs better than or as well as state-of-the-art techniques.
arXiv Detail & Related papers (2022-03-08T19:03:21Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence, predicting accuracy as the fraction of unlabeled examples whose confidence exceeds the threshold.
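The ATC rule can be sketched in two functions (a minimal reading, not the authors' code): choose the threshold on labeled source data so that the fraction of confidences above it matches the source accuracy, then report that same fraction on unlabeled target data.

```python
def learn_threshold(source_confidences, source_correct):
    """Pick a threshold t on labeled source data so that the fraction of
    confidences at or above t matches the measured source accuracy."""
    acc = sum(source_correct) / len(source_correct)
    s = sorted(source_confidences)
    # the (1 - acc)-quantile: exactly an `acc` fraction lies at or above it
    idx = min(int((1 - acc) * len(s)), len(s) - 1)
    return s[idx]

def predict_accuracy(target_confidences, t):
    """Predicted target accuracy: fraction of unlabeled target examples
    whose confidence is at or above the learned threshold."""
    return sum(c >= t for c in target_confidences) / len(target_confidences)

# Toy example: 4 labeled source examples, 4 unlabeled target examples.
t = learn_threshold([0.9, 0.8, 0.7, 0.6], [1, 1, 0, 0])
est = predict_accuracy([0.85, 0.75, 0.5, 0.95], t)
```

No target labels are used at any point; only the model's confidences on the target set enter the estimate.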
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Meta Learning Low Rank Covariance Factors for Energy-Based Deterministic Uncertainty [58.144520501201995]
Bi-Lipschitz regularization of neural network layers preserves relative distances between data instances in the feature spaces of each layer.
With the use of an attentive set encoder, we propose to meta learn either diagonal or diagonal plus low-rank factors to efficiently construct task specific covariance matrices.
We also propose an inference procedure which utilizes scaled energy to achieve a final predictive distribution.
arXiv Detail & Related papers (2021-10-12T22:04:19Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- GELATO: Geometrically Enriched Latent Model for Offline Reinforcement Learning [54.291331971813364]
Offline reinforcement learning approaches can be divided into proximal and uncertainty-aware methods.
In this work, we demonstrate the benefit of combining the two in a latent variational model.
Our proposed metrics measure both the quality of out of distribution samples as well as the discrepancy of examples in the data.
arXiv Detail & Related papers (2021-02-22T19:42:40Z)
- MetaBox+: A new Region Based Active Learning Method for Semantic Segmentation using Priority Maps [4.396860522241306]
We present a novel active learning method for semantic image segmentation, called MetaBox+.
For acquisition, we train a meta regression model to estimate the segment-wise Intersection over Union (IoU) of each predicted segment of unlabeled images.
We compare our method to entropy based methods, where we consider the entropy as uncertainty of the prediction.
arXiv Detail & Related papers (2020-10-05T09:36:47Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The quality of this site (including all information) is not guaranteed, and the site is not responsible for any consequences of its use.