Timeseries-aware Uncertainty Wrappers for Uncertainty Quantification of Information-Fusion-Enhanced AI Models based on Machine Learning
- URL: http://arxiv.org/abs/2305.14872v2
- Date: Wed, 31 May 2023 07:58:04 GMT
- Title: Timeseries-aware Uncertainty Wrappers for Uncertainty Quantification of Information-Fusion-Enhanced AI Models based on Machine Learning
- Authors: Janek Groß, Michael Kläs, Lisa Jöckel, Pascal Gerber
- Abstract summary: We present a timeseries-aware uncertainty wrapper for dependable uncertainty estimates on timeseries data.
We show that it is possible to increase model accuracy through information fusion and additionally increase the quality of uncertainty estimates.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As the use of Artificial Intelligence (AI) components in cyber-physical
systems is becoming more common, the need for reliable system architectures
arises. While data-driven models excel at perception tasks, model outcomes are
usually not dependable enough for safety-critical applications. In this work, we
present a timeseries-aware uncertainty wrapper for dependable uncertainty
estimates on timeseries data. The uncertainty wrapper is applied in combination
with information fusion over successive model predictions in time. The
application of the uncertainty wrapper is demonstrated with a traffic sign
recognition use case. We show that it is possible to increase model accuracy
through information fusion and additionally increase the quality of uncertainty
estimates through timeseries-aware input quality features.
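The abstract describes fusing successive model predictions in time and attaching an uncertainty estimate to the fused outcome. The paper's exact wrapper is not reproduced here; the following is a minimal sketch of the underlying idea, assuming per-frame softmax class probabilities (e.g., from a traffic sign classifier) fused by averaging over a sliding window, with uncertainty taken as one minus the fused top-class probability. The function name and window size are illustrative choices, not from the paper.

```python
import numpy as np

def fuse_predictions(prob_history, window=5):
    """Fuse successive per-frame class probabilities by averaging over a
    sliding window, and derive a simple uncertainty estimate
    (1 - fused probability of the predicted class)."""
    window_probs = np.asarray(prob_history[-window:], dtype=float)
    fused = window_probs.mean(axis=0)            # information fusion over time
    predicted_class = int(np.argmax(fused))
    uncertainty = 1.0 - float(fused[predicted_class])
    return predicted_class, uncertainty
```

For example, three successive frames voting mostly for class 0 yield a fused prediction of class 0 with a lower uncertainty than any single noisy frame would justify on its own.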
Related papers
- Uncertainty measurement for complex event prediction in safety-critical systems [0.36832029288386137]
Quantifying uncertainty in complex event processing (CEP) is critical for embedded and safety-critical systems.
This paper exemplifies how we can measure uncertainty for the perception and prediction of events.
We present and discuss our results, which are promising for this line of research.
arXiv Detail & Related papers (2024-11-02T15:51:37Z)
- Error-Driven Uncertainty Aware Training [7.702016079410588]
Error-Driven Uncertainty Aware Training aims to enhance the ability of neural classifiers to estimate their uncertainty correctly.
The EUAT approach operates during the model's training phase by selectively employing two loss functions depending on whether the training examples are correctly or incorrectly predicted.
We evaluate EUAT using diverse neural models and datasets in the image recognition domains considering both non-adversarial and adversarial settings.
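The summary above describes EUAT's core mechanism: choosing between two loss functions per training example depending on whether it is currently predicted correctly. This is a hedged sketch of that idea under assumed loss choices (plain cross-entropy for correct examples, divergence toward the uniform distribution for incorrect ones, to encourage high reported uncertainty on errors); the actual EUAT losses may differ.

```python
import numpy as np

def euat_style_loss(probs, labels):
    """Error-driven selective loss (sketch): correctly classified examples
    get standard cross-entropy (rewarding confidence), misclassified ones
    are pushed toward a uniform distribution (rewarding uncertainty)."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels)
    n, k = probs.shape
    correct = probs.argmax(axis=1) == labels
    eps = 1e-12
    # cross-entropy with the true label, used for correct predictions
    ce = -np.log(probs[np.arange(n), labels] + eps)
    # KL divergence to the uniform distribution, used for errors
    kl = np.sum(probs * (np.log(probs + eps) - np.log(1.0 / k)), axis=1)
    per_example = np.where(correct, ce, kl)
    return float(per_example.mean())
```

In a training loop, this per-batch scalar would replace the usual single cross-entropy objective.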
arXiv Detail & Related papers (2024-05-02T11:48:14Z)
- Uncertainty-aware Language Modeling for Selective Question Answering [107.47864420630923]
We present an automatic large language model (LLM) conversion approach that produces uncertainty-aware LLMs.
Our approach is model- and data-agnostic, is computationally efficient, and does not rely on external models or systems.
arXiv Detail & Related papers (2023-11-26T22:47:54Z)
- ALUM: Adversarial Data Uncertainty Modeling from Latent Model Uncertainty Compensation [25.67258563807856]
We propose a novel method called ALUM to handle the model uncertainty and data uncertainty in a unified scheme.
Our proposed ALUM is model-agnostic which can be easily implemented into any existing deep model with little extra overhead.
arXiv Detail & Related papers (2023-03-29T17:24:12Z)
- Uncertainty in Real-Time Semantic Segmentation on Embedded Systems [22.018605089162204]
Applications of semantic segmentation models in areas such as autonomous vehicles and human-computer interaction require real-time predictive capabilities.
These challenges are amplified by the need to operate on resource-constrained hardware.
This paper addresses this by combining deep feature extraction from pre-trained models with Bayesian regression and moment propagation for uncertainty aware predictions.
arXiv Detail & Related papers (2022-12-20T07:32:12Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- BayesCap: Bayesian Identity Cap for Calibrated Uncertainty in Frozen Neural Networks [50.15201777970128]
We propose BayesCap that learns a Bayesian identity mapping for the frozen model, allowing uncertainty estimation.
BayesCap is a memory-efficient method that can be trained on a small fraction of the original dataset.
We show the efficacy of our method on a wide variety of tasks with a diverse set of architectures.
arXiv Detail & Related papers (2022-07-14T12:50:09Z)
- Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z)
- Uncertainty-Aware Multiple Instance Learning from Large-Scale Long Time Series Data [20.2087807816461]
This paper proposes an uncertainty-aware multiple instance learning (MIL) framework to identify the most relevant period automatically.
We further incorporate another modality to accommodate unreliable predictions by training a separate model and conducting uncertainty-aware fusion.
Empirical results demonstrate that the proposed method can effectively detect the types of vessels based on the trajectory.
arXiv Detail & Related papers (2021-11-16T17:09:02Z)
- Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various use cases in manufacturing.
Most research focused on maximising predictive accuracy without addressing the uncertainty associated with it.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z)
- Approaching Neural Network Uncertainty Realism [53.308409014122816]
Quantifying or at least upper-bounding uncertainties is vital for safety-critical systems such as autonomous vehicles.
We evaluate uncertainty realism -- a strict quality criterion -- with a Mahalanobis distance-based statistical test.
We adapt it to the automotive domain and show that it significantly improves uncertainty realism compared to a plain encoder-decoder model.
arXiv Detail & Related papers (2021-01-08T11:56:12Z)
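The last summary mentions a Mahalanobis distance-based test of uncertainty realism. The paper's full statistical test is not reproduced here; this sketch illustrates the core idea under a Gaussian assumption: if predicted covariances are realistic, the squared Mahalanobis distances of the residuals follow a chi-squared distribution whose mean equals the output dimension, so the ratio of the empirical mean to that target indicates miscalibration. The function name and score convention are illustrative.

```python
import numpy as np

def mahalanobis_realism_score(errors, covariances):
    """Compare squared Mahalanobis distances of residuals against their
    chi-squared target mean (= output dimension). Returns the ratio:
    ~1.0 means realistic uncertainty, >1 overconfident (errors larger
    than predicted), <1 underconfident."""
    d2 = []
    for e, cov in zip(errors, covariances):
        e = np.asarray(e, dtype=float)
        cov = np.asarray(cov, dtype=float)
        d2.append(e @ np.linalg.inv(cov) @ e)   # squared Mahalanobis distance
    dim = len(errors[0])
    return float(np.mean(d2)) / dim
```

A full test would compare the whole empirical distribution of distances to the chi-squared distribution (e.g., via a goodness-of-fit test) rather than just the mean.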
This list is automatically generated from the titles and abstracts of the papers in this site.