MUAD: Multiple Uncertainties for Autonomous Driving benchmark for
multiple uncertainty types and tasks
- URL: http://arxiv.org/abs/2203.01437v1
- Date: Wed, 2 Mar 2022 22:14:12 GMT
- Title: MUAD: Multiple Uncertainties for Autonomous Driving benchmark for
multiple uncertainty types and tasks
- Authors: Gianni Franchi, Xuanlong Yu, Andrei Bursuc, R\'emi Kazmierczak,
S\'everine Dubuisson, Emanuel Aldea, David Filliat
- Abstract summary: MUAD dataset consists of 8,500 realistic synthetic images with diverse adverse weather conditions.
This dataset allows a better assessment of the impact of different sources of uncertainty on model performance.
- Score: 10.624564056837835
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predictive uncertainty estimation is essential for deploying Deep Neural
Networks in real-world autonomous systems. However, disentangling the different
types and sources of uncertainty is non-trivial in most datasets, especially
since there is no ground truth for uncertainty. In addition, different degrees
of weather conditions can disrupt neural networks, resulting in inconsistent
training data quality. Thus, we introduce the MUAD dataset (Multiple
Uncertainties for Autonomous Driving), consisting of 8,500 realistic synthetic
images with diverse adverse weather conditions (night, fog, rain, snow),
out-of-distribution objects and annotations for semantic segmentation, depth
estimation, and object and instance detection. MUAD allows researchers to better
assess the impact of different sources of uncertainty on model performance. We propose a
study that shows the importance of having reliable Deep Neural Networks (DNNs)
in multiple experiments, and will release our dataset to allow researchers to
benchmark their algorithms methodically in adverse conditions. More information
and the download link for MUAD are available at https://muad-dataset.github.io/ .
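A benchmark like this is typically used by scoring how well a model's per-pixel uncertainty separates out-of-distribution from in-distribution pixels. The following is a minimal sketch of that evaluation (a generic AUROC computation; the function name and interface are illustrative, not the official MUAD evaluation code):

```python
# Hypothetical sketch: scoring OOD detection quality from per-pixel
# uncertainty maps, as datasets like MUAD are commonly used.
import numpy as np

def ood_auroc(uncertainty, ood_mask):
    """AUROC of an uncertainty map for separating OOD pixels (mask == 1)
    from in-distribution pixels (mask == 0). Ties are not rank-averaged."""
    scores = np.asarray(uncertainty, dtype=float).ravel()
    labels = np.asarray(ood_mask, dtype=int).ravel()
    order = np.argsort(scores)            # ranks via sorted positions
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    # Mann-Whitney U statistic normalised to [0, 1]
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

A score of 1.0 means uncertainty perfectly ranks OOD pixels above in-distribution ones; 0.5 is chance level.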
Related papers
- Perception Datasets for Anomaly Detection in Autonomous Driving: A
Survey [4.731404257629232]
Multiple perception datasets have been created for the evaluation of anomaly detection methods.
This survey provides a structured and, to the best of our knowledge, complete overview and comparison of perception datasets for anomaly detection in autonomous driving.
arXiv Detail & Related papers (2023-02-06T14:07:13Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory
Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural
Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
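The core idea of the summary above can be sketched roughly as follows: estimate the conditional label distribution p(y|x) with a Nadaraya-Watson kernel estimator in feature space and read uncertainty off that estimate. This is a simplification under assumed choices (Gaussian kernel, predictive entropy as the uncertainty score), not the authors' implementation:

```python
# Rough sketch of a Nadaraya-Watson estimate of p(y|x), with predictive
# entropy as the uncertainty score (a simplification, not NUQ itself).
import numpy as np

def nw_class_probs(x, train_x, train_y, n_classes, bandwidth=1.0):
    """Kernel-weighted class frequencies around the query point x."""
    d2 = ((train_x - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))      # Gaussian kernel weights
    probs = np.array([w[train_y == c].sum() for c in range(n_classes)])
    return probs / probs.sum()

def predictive_entropy(probs, eps=1e-12):
    """High entropy = the kernel estimate is uncertain about the label."""
    return -(probs * np.log(probs + eps)).sum()
```

A query point surrounded by one class yields a near-one-hot estimate (low entropy), while a point between classes, or far from all training data, yields a flatter estimate (high entropy).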
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Attribute-Guided Adversarial Training for Robustness to Natural
Perturbations [64.35805267250682]
We propose an adversarial training approach that learns to generate new samples so as to maximize the classifier's exposure to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
- Ramifications of Approximate Posterior Inference for Bayesian Deep
Learning in Adversarial and Out-of-Distribution Settings [7.476901945542385]
We show that Bayesian deep learning models marginally outperform conventional neural networks only on certain occasions.
Preliminary investigations indicate the potential inherent role of bias due to choices of initialisation, architecture or activation functions.
arXiv Detail & Related papers (2020-09-03T16:58:15Z)
- Contextual-Bandit Anomaly Detection for IoT Data in Distributed
Hierarchical Edge Computing [65.78881372074983]
IoT devices can hardly afford complex deep neural network (DNN) models, and offloading anomaly detection tasks to the cloud incurs long delays.
We propose and build a demo for an adaptive anomaly detection approach for distributed hierarchical edge computing (HEC) systems.
We show that our proposed approach significantly reduces detection delay without sacrificing accuracy, as compared to offloading detection tasks to the cloud.
arXiv Detail & Related papers (2020-04-15T06:13:33Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity-inducing adversarial loss for learning latent variables and thereby obtain the diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy, under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.
We scale training with a novel loss function and centroid updating scheme, and match the accuracy of softmax models.
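The single-pass rejection mechanism summarised above can be sketched as follows: measure an input's kernel similarity to learned class centroids and reject it when no centroid is close enough. This is illustrative only (centroid learning and the loss function are omitted; the RBF length scale and threshold are assumed values):

```python
# Illustrative sketch of distance-based OOD rejection with class centroids
# and an RBF kernel, in the spirit of single-pass deterministic methods.
# Training of the centroids and the loss function are omitted.
import numpy as np

def rbf_scores(features, centroids, length_scale=1.0):
    """Kernel similarity of one feature vector to each class centroid."""
    d2 = ((centroids - features) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * length_scale ** 2))

def predict_or_reject(features, centroids, threshold=0.5):
    """Return the closest class index, or -1 if no centroid is close enough."""
    scores = rbf_scores(features, centroids)
    best = int(scores.argmax())
    return best if scores[best] >= threshold else -1
```

Because both prediction and rejection read off the same kernel scores, uncertainty comes at the cost of a single forward pass, unlike ensemble or Monte Carlo approaches.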
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.