Does Redundancy in AI Perception Systems Help to Test for Super-Human
Automated Driving Performance?
- URL: http://arxiv.org/abs/2112.04758v1
- Date: Thu, 9 Dec 2021 08:40:31 GMT
- Authors: Hanno Gottschalk, Matthias Rottmann and Maida Saltagic
- Abstract summary: This work argues that it is nearly impossible to provide direct statistical evidence on the system level that automated driving actually achieves better-than-human performance.
A commonly used strategy is therefore the use of redundancy, along with proofs of sufficient subsystem performance.
- Score: 6.445605125467575
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While automated driving is often advertised with better-than-human driving
performance, this work shows that it is nearly impossible to provide direct
statistical evidence on the system level that this is actually the case. The
amount of labeled data needed would exceed present-day technical and economic
capabilities. A commonly used strategy is therefore the use of redundancy,
along with proofs of sufficient subsystem performance. This strategy is known
to be efficient especially when subsystems operate independently, i.e., when
errors occur statistically independently. Here, we give first considerations
and experimental evidence that this strategy is not a free ride: the errors of
neural networks fulfilling the same computer vision task show correlated
occurrences, at least in some cases. This remains true even if training data,
architecture, and training are kept separate, or if independence is encouraged
via special loss functions. Using data from different sensors (realized by up
to five 2D projections of the 3D MNIST data set) in our experiments reduces
correlations more efficiently, but not to an extent that realizes the
potential reduction of testing data attainable for redundant and statistically
independent subsystems.
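The redundancy argument can be made concrete with a small simulation (a hedged sketch with illustrative numbers, not taken from the paper): two subsystems that each fail with rate p and fail independently yield a system-level error rate of p², whereas correlated errors, modeled here via a Gaussian copula, inflate the joint rate and erode the testing-data savings.

```python
from statistics import NormalDist
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000   # number of simulated test samples
p = 0.01        # per-subsystem error rate (illustrative value)

# A subsystem errs when its latent Gaussian score exceeds this threshold,
# so each subsystem's marginal error rate is exactly p.
thr = NormalDist().inv_cdf(1 - p)

def joint_error_rate(rho: float) -> float:
    """Empirical error rate of a two-subsystem redundant setup whose
    latent error scores have Gaussian correlation rho. The redundant
    system fails only if BOTH subsystems err on the same sample."""
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=N)  # shape (N, 2)
    errors = z > thr
    return float(errors.all(axis=1).mean())

indep = joint_error_rate(0.0)  # close to p**2 = 1e-4
corr = joint_error_rate(0.5)   # much larger: correlation erodes the gain
```

Because the independent system-level rate is p² rather than p, demonstrating it statistically needs on the order of 1/p² samples per expected failure; correlated errors push the joint rate back toward p and with it the required amount of labeled test data.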
Related papers
- Overtake Detection in Trucks Using CAN Bus Signals: A Comparative Study of Machine Learning Methods [51.28632782308621]
We focus on overtake detection using Controller Area Network (CAN) bus data collected from five in-service trucks provided by the Volvo Group.
We evaluate three common classifiers for vehicle manoeuvre detection: Artificial Neural Networks (ANN), Random Forest (RF), and Support Vector Machines (SVM).
Our per-truck analysis also reveals that classification accuracy, especially for overtakes, depends on the amount of training data per vehicle.
arXiv Detail & Related papers (2025-07-01T09:20:41Z) - A Hybrid Framework for Real-Time Data Drift and Anomaly Identification Using Hierarchical Temporal Memory and Statistical Tests [14.37149160708975]
This paper proposes a novel hybrid framework combining Hierarchical Temporal Memory (HTM) and the Sequential Probability Ratio Test (SPRT) for real-time data drift detection and anomaly identification.
Experimental evaluations demonstrate that the proposed method outperforms conventional drift detection techniques like the Kolmogorov-Smirnov (KS) test, Wasserstein distance, and Population Stability Index (PSI) in terms of accuracy, adaptability, and computational efficiency.
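The SPRT component is a classical tool. As a hedged illustration (a generic Wald SPRT for a Bernoulli rate shift, not the paper's HTM-based framework), a minimal sequential drift detector over a stream of binary anomaly flags might look like:

```python
import math

def sprt(observations, p0=0.05, p1=0.20, alpha=0.01, beta=0.01):
    """Wald's Sequential Probability Ratio Test on a binary stream.
    H0: anomaly rate is p0 (no drift); H1: anomaly rate is p1 (drift).
    Returns ('H1', n) or ('H0', n) once a decision boundary is crossed
    after n observations, or ('undecided', n) if the stream ends first."""
    upper = math.log((1 - beta) / alpha)   # accept H1 above this
    lower = math.log(beta / (1 - alpha))   # accept H0 below this
    llr = 0.0                              # cumulative log-likelihood ratio
    for n, x in enumerate(observations, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(observations)
```

Fed a run of mostly-positive anomaly flags (as an upstream detector such as HTM might emit under drift), the test accepts H1 after only a handful of samples, which is the source of SPRT's efficiency relative to fixed-sample tests.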
arXiv Detail & Related papers (2025-04-24T18:23:18Z) - A Versatile Influence Function for Data Attribution with Non-Decomposable Loss [3.1615846013409925]
We propose a Versatile Influence Function (VIF) that can be straightforwardly applied to machine learning models trained with any non-decomposable loss.
VIF represents a significant advancement in data attribution, enabling efficient influence-function-based attribution across a wide range of machine learning paradigms.
arXiv Detail & Related papers (2024-12-02T09:59:01Z) - Distribution-Level Feature Distancing for Machine Unlearning: Towards a Better Trade-off Between Model Utility and Forgetting [4.220336689294245]
Recent studies have presented various machine unlearning algorithms to make a trained model unlearn the data to be forgotten.
We propose Distribution-Level Feature Distancing (DLFD), a novel method that efficiently forgets instances while preventing correlation collapse.
Our method synthesizes data samples so that the generated data distribution is far from the distribution of samples being forgotten in the feature space.
arXiv Detail & Related papers (2024-09-23T06:51:10Z) - Unsupervised Domain Adaptation for Self-Driving from Past Traversal
Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z) - SPOT: Scalable 3D Pre-training via Occupancy Prediction for Learning Transferable 3D Representations [76.45009891152178]
The pretraining-finetuning approach can alleviate the labeling burden by fine-tuning a pre-trained backbone across various downstream datasets and tasks.
We show, for the first time, that general representation learning can be achieved through the task of occupancy prediction.
Our findings will facilitate the understanding of LiDAR points and pave the way for future advancements in LiDAR pre-training.
arXiv Detail & Related papers (2023-09-19T11:13:01Z) - ST-GIN: An Uncertainty Quantification Approach in Traffic Data
Imputation with Spatio-temporal Graph Attention and Bidirectional Recurrent
United Neural Networks [18.66289473659838]
We propose an innovative deep learning approach for imputing missing data.
A graph attention architecture is employed to capture the spatial correlations present in traffic data.
A bidirectional neural network is utilized to learn temporal information.
arXiv Detail & Related papers (2023-05-10T22:15:40Z) - Self-Supervised Mental Disorder Classifiers via Time Reversal [0.0]
We demonstrate that a model trained on the time direction of functional neuro-imaging data could help in any downstream task.
We train a Deep Neural Network on independent components derived from fMRI data using the independent component analysis (ICA) technique.
We show that learning time direction helps a model learn some causal relation in fMRI data that helps in faster convergence.
arXiv Detail & Related papers (2022-11-29T17:24:43Z) - D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling
Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z) - Reduced Robust Random Cut Forest for Out-Of-Distribution detection in
machine learning models [0.799536002595393]
Most machine learning-based regressors extract information from data collected via past observations of limited length to make predictions in the future.
When input to these trained models is data with significantly different statistical properties from data used for training, there is no guarantee of accurate prediction.
We introduce a novel approach for this detection process using a Reduced Robust Random Cut Forest data structure.
arXiv Detail & Related papers (2022-06-18T17:01:40Z) - Multi-Domain Joint Training for Person Re-Identification [51.73921349603597]
Deep learning-based person Re-IDentification (ReID) often requires a large amount of training data to achieve good performance.
It appears that collecting more training data from diverse environments tends to improve the ReID performance.
We propose an approach called Domain-Camera-Sample Dynamic network (DCSD) whose parameters can be adaptive to various factors.
arXiv Detail & Related papers (2022-01-06T09:20:59Z) - AutoLoss: Automated Loss Function Search in Recommendations [34.27873944762912]
We propose an AutoLoss framework that can automatically and adaptively search for the appropriate loss function from a set of candidates.
Unlike existing algorithms, the proposed controller can adaptively generate the loss probabilities for different data examples according to their varied convergence behaviors.
arXiv Detail & Related papers (2021-06-12T08:15:00Z) - Unsupervised Domain Adaptation for Speech Recognition via Uncertainty
Driven Self-Training [55.824641135682725]
Domain adaptation experiments using WSJ as a source domain and TED-LIUM 3 as well as SWITCHBOARD show that up to 80% of the performance of a system trained on ground-truth data can be recovered.
arXiv Detail & Related papers (2020-11-26T18:51:26Z) - Provably Efficient Causal Reinforcement Learning with Confounded
Observational Data [135.64775986546505]
We study how to incorporate the dataset (observational data) collected offline, which is often abundantly available in practice, to improve the sample efficiency in the online setting.
We propose the deconfounded optimistic value iteration (DOVI) algorithm, which incorporates the confounded observational data in a provably efficient manner.
arXiv Detail & Related papers (2020-06-22T14:49:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.