Trustworthiness for an Ultra-Wideband Localization Service
- URL: http://arxiv.org/abs/2408.05527v1
- Date: Sat, 10 Aug 2024 11:57:10 GMT
- Title: Trustworthiness for an Ultra-Wideband Localization Service
- Authors: Philipp Peterseil, Bernhard Etzlinger, Jan Horáček, Roya Khanzadeh, Andreas Springer
- Abstract summary: This paper proposes a holistic trustworthiness assessment framework for ultra-wideband self-localization.
Our goal is to provide guidance for evaluating a system's trustworthiness based on objective evidence.
Our approach guarantees that the resulting trustworthiness indicators correspond to chosen real-world threats.
- Score: 2.4979362117484714
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Trustworthiness assessment is an essential step to assure that interdependent systems perform critical functions as anticipated, even under adverse conditions. In this paper, a holistic trustworthiness assessment framework for ultra-wideband self-localization is proposed, including attributes of reliability, security, privacy, and resilience. Our goal is to provide guidance for evaluating a system's trustworthiness based on objective evidence, so-called trustworthiness indicators. These indicators are carefully selected through the threat analysis of the particular system. Our approach guarantees that the resulting trustworthiness indicators correspond to chosen real-world threats. Moreover, experimental evaluations are conducted to demonstrate the effectiveness of the proposed method. While the framework is tailored for this specific use case, the process itself serves as a versatile template, which can be used in other applications in the domains of the Internet of Things or cyber-physical systems.
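To make the threat-analysis step concrete, here is a minimal, hypothetical Python sketch of a threat-to-indicator mapping in the spirit of the abstract. The threat names, indicator names, and thresholds below are invented for illustration and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str               # measurable quantity serving as objective evidence
    attribute: str          # reliability | security | privacy | resilience
    threshold: float        # acceptance criterion for the evidence
    higher_is_better: bool  # direction of the criterion

# Hypothetical threat-to-indicator map for a UWB self-localization service.
THREAT_MAP = {
    "distance spoofing": Indicator("ranging_residual_m", "security", 0.30, False),
    "multipath bias": Indicator("position_error_m", "reliability", 0.50, False),
    "anchor dropout": Indicator("availability_ratio", "resilience", 0.95, True),
}

def assess(evidence: dict) -> dict:
    """Check each trustworthiness indicator against measured evidence."""
    result = {}
    for threat, ind in THREAT_MAP.items():
        value = evidence[ind.name]
        result[threat] = (value >= ind.threshold if ind.higher_is_better
                          else value <= ind.threshold)
    return result

print(assess({"ranging_residual_m": 0.12,
              "position_error_m": 0.41,
              "availability_ratio": 0.97}))
# -> {'distance spoofing': True, 'multipath bias': True, 'anchor dropout': True}
```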
Related papers
- Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation.
We propose methods tailored to the unique properties of perception and decision-making.
We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
arXiv Detail & Related papers (2024-11-03T17:32:00Z)
- Quantifying calibration error in modern neural networks through evidence based theory [0.0]
This paper introduces a novel framework for quantifying the trustworthiness of neural networks by incorporating subjective logic into the evaluation of Expected Calibration Error (ECE).
We demonstrate the effectiveness of this approach through experiments on MNIST and CIFAR-10 datasets where post-calibration results indicate improved trustworthiness.
The proposed framework offers a more interpretable and nuanced assessment of AI models, with potential applications in sensitive domains such as healthcare and autonomous systems.
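As context for this entry, below is a minimal NumPy sketch of the standard binned Expected Calibration Error estimator; the paper's subjective-logic extension goes beyond this baseline.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Standard binned ECE: weight each bin's |accuracy - mean confidence|
    gap by the fraction of samples falling into that bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

# Toy example: systematically overconfident predictions give a nonzero ECE.
print(expected_calibration_error([0.95, 0.90, 0.85, 0.99, 0.80], [1, 0, 1, 1, 0]))
```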
arXiv Detail & Related papers (2024-10-31T23:54:21Z)
- ASSERT: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models [65.79770974145983]
ASSERT, Automated Safety Scenario Red Teaming, consists of three methods -- semantically aligned augmentation, target bootstrapping, and adversarial knowledge injection.
We partition our prompts into four safety domains for a fine-grained analysis of how the domain affects model performance.
We find statistically significant performance differences of up to 11% in absolute classification accuracy among semantically related scenarios and error rates of up to 19% absolute error in zero-shot adversarial settings.
arXiv Detail & Related papers (2023-10-14T17:10:28Z)
- Boosting Adversarial Robustness using Feature Level Stochastic Smoothing [46.86097477465267]
Adversarial defenses have led to a significant improvement in the robustness of Deep Neural Networks.
In this work, we propose a generic method for introducing stochasticity in the network predictions.
We also utilize this for smoothed decision making, rejecting low-confidence predictions, as in the sketch below.
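A loose illustrative sketch of the general idea in this entry: average class probabilities over random perturbations and reject low-confidence predictions. For simplicity the sketch perturbs the input rather than intermediate features, so it is not the authors' exact method; `model` is a placeholder assumed to return a probability vector.

```python
import numpy as np

def smoothed_predict(model, x, n_samples=32, noise_std=0.1,
                     reject_below=0.7, seed=0):
    """Average predictions over noise-perturbed inputs; reject the prediction
    (return label None) if the smoothed confidence is below the threshold."""
    rng = np.random.default_rng(seed)
    probs = np.mean([model(x + rng.normal(0.0, noise_std, size=np.shape(x)))
                     for _ in range(n_samples)], axis=0)
    label, conf = int(np.argmax(probs)), float(np.max(probs))
    return (label, conf) if conf >= reject_below else (None, conf)
```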
arXiv Detail & Related papers (2023-06-10T15:11:24Z)
- A Survey on Fairness-aware Recommender Systems [59.23208133653637]
We present concepts of fairness in different recommendation scenarios, comprehensively categorize current advances, and introduce typical methods to promote fairness in different stages of recommender systems.
Next, we delve into the significant influence that fairness-aware recommender systems exert on real-world industrial applications.
arXiv Detail & Related papers (2023-06-01T07:08:22Z)
- Assessing Trustworthiness of Autonomous Systems [0.0]
As Autonomous Systems (AS) become more ubiquitous in society, take on more responsibility for our safety, and interact with us more frequently, it is essential that they are trustworthy.
Assessing the trustworthiness of AS is a mandatory challenge for the verification and development community.
This will require appropriate standards and suitable metrics that may serve to objectively and comparatively judge trustworthiness of AS across the broad range of current and future applications.
arXiv Detail & Related papers (2023-05-05T10:26:16Z)
- Demonstrating Software Reliability using Possibly Correlated Tests: Insights from a Conservative Bayesian Approach [2.152298082788376]
We formalise informal notions of "doubting" that the executions are independent.
We develop techniques that reveal the extent to which independence assumptions can undermine conservatism in assessments.
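For background, a minimal sketch of the textbook calculation that the independence assumption licenses: with a uniform prior on the per-demand failure probability p and n independent failure-free tests, the posterior is Beta(1, n+1). The paper's point is that when executions may be correlated, the confidence computed this way can be optimistic.

```python
def confidence_in_reliability(n_tests: int, p0: float) -> float:
    """Posterior probability that the per-demand failure probability p is
    below p0 after n_tests independent failure-free tests (uniform prior):
    P(p <= p0) = 1 - (1 - p0) ** (n_tests + 1)."""
    return 1.0 - (1.0 - p0) ** (n_tests + 1)

# Roughly 4600 independent failure-free tests give ~99% confidence
# that the failure probability per demand is below 1e-3.
print(confidence_in_reliability(4602, 1e-3))  # ~0.99
```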
arXiv Detail & Related papers (2022-08-16T20:27:47Z)
- An evaluation of word-level confidence estimation for end-to-end automatic speech recognition [70.61280174637913]
We investigate confidence estimation for end-to-end automatic speech recognition (ASR).
We provide an extensive benchmark of popular confidence methods on four well-known speech datasets.
Our results suggest a strong baseline can be obtained by scaling the logits by a learnt temperature.
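A minimal sketch of that baseline: divide the logits by a learnt temperature T before the softmax and take the top probability as the confidence score. How T is learnt (e.g., fit on held-out data) is an assumption here, not a detail given in the entry.

```python
import numpy as np

def temperature_scaled_confidence(logits, T=1.0):
    """Top softmax probability after scaling logits by temperature T."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return float(p.max())

print(temperature_scaled_confidence([4.1, 1.2, 0.3], T=1.0))  # sharp, overconfident
print(temperature_scaled_confidence([4.1, 1.2, 0.3], T=2.5))  # softened confidence
```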
arXiv Detail & Related papers (2021-01-14T09:51:59Z)
- Reliable Off-policy Evaluation for Reinforcement Learning [53.486680020852724]
In a sequential decision-making problem, off-policy evaluation estimates the expected cumulative reward of a target policy.
We propose a novel framework that provides robust and optimistic cumulative reward estimates using one or multiple logged datasets.
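For context, a minimal sketch of the standard per-trajectory importance-sampling estimator that this line of work builds on; it is a common baseline, not the paper's robust/optimistic framework, and the policy interfaces are assumptions for illustration.

```python
import numpy as np

def is_ope_estimate(trajectories, target_policy, behavior_policy, gamma=0.99):
    """Per-trajectory importance-sampling estimate of the target policy's
    expected discounted return from data logged under the behavior policy.
    Each trajectory is a list of (state, action, reward) tuples; each policy
    maps (state, action) to the probability of taking that action."""
    estimates = []
    for traj in trajectories:
        weight, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            weight *= target_policy(s, a) / behavior_policy(s, a)
            ret += (gamma ** t) * r
        estimates.append(weight * ret)
    return float(np.mean(estimates))
```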
arXiv Detail & Related papers (2020-11-08T23:16:19Z)
- Quantifying Assurance in Learning-enabled Systems [3.0938904602244355]
Dependability assurance of systems embedding machine learning components is a key step for their use in safety-critical applications.
This paper develops a quantitative notion of assurance that a learning-enabled system (LES) is dependable, as a core component of its assurance case.
We illustrate the utility of assurance measures by application to a real world autonomous aviation system.
arXiv Detail & Related papers (2020-06-18T08:11:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.