On Specifying for Trustworthiness
- URL: http://arxiv.org/abs/2206.11421v2
- Date: Sun, 20 Aug 2023 16:16:20 GMT
- Title: On Specifying for Trustworthiness
- Authors: Dhaminda B. Abeywickrama, Amel Bennaceur, Greg Chance, Yiannis
Demiris, Anastasia Kordoni, Mark Levine, Luke Moffat, Luc Moreau, Mohammad
Reza Mousavi, Bashar Nuseibeh, Subramanian Ramamoorthy, Jan Oliver Ringert,
James Wilson, Shane Windsor, Kerstin Eder
- Abstract summary: We look across a range of AS domains with consideration of the resilience, trust, functionality, verifiability, security, and governance and regulation of AS.
We highlight the intellectual challenges that are involved with specifying for trustworthiness in AS that cut across domains and are exacerbated by the inherent uncertainty involved with the environments in which AS need to operate.
- Score: 39.845582350253515
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As autonomous systems (AS) increasingly become part of our daily lives,
ensuring their trustworthiness is crucial. In order to demonstrate the
trustworthiness of an AS, we first need to specify what is required for an AS
to be considered trustworthy. This roadmap paper identifies key challenges for
specifying for trustworthiness in AS, as identified during the "Specifying for
Trustworthiness" workshop held as part of the UK Research and Innovation (UKRI)
Trustworthy Autonomous Systems (TAS) programme. We look across a range of AS
domains with consideration of the resilience, trust, functionality,
verifiability, security, and governance and regulation of AS and identify some
of the key specification challenges in these domains. We then highlight the
intellectual challenges that are involved with specifying for trustworthiness
in AS that cut across domains and are exacerbated by the inherent uncertainty
involved with the environments in which AS need to operate.
Related papers
- Trustworthiness for an Ultra-Wideband Localization Service [2.4979362117484714]
This paper proposes a holistic trustworthiness assessment framework for ultra-wideband self-localization.
Our goal is to provide guidance for evaluating a system's trustworthiness based on objective evidence.
Our approach guarantees that the resulting trustworthiness indicators correspond to chosen real-world threats.
arXiv Detail & Related papers (2024-08-10T11:57:10Z) - When to Trust LLMs: Aligning Confidence with Response Quality [49.371218210305656]
We propose the CONfidence-Quality-ORDer-preserving alignment approach (CONQORD).
It integrates quality reward and order-preserving alignment reward functions.
Experiments demonstrate that CONQORD significantly improves the alignment performance between confidence and response accuracy.
arXiv Detail & Related papers (2024-04-26T09:42:46Z) - TrustLLM: Trustworthiness in Large Language Models [446.5640421311468]
This paper introduces TrustLLM, a comprehensive study of trustworthiness in large language models (LLMs).
We first propose a set of principles for trustworthy LLMs that span eight different dimensions.
Based on these principles, we establish a benchmark across six dimensions including truthfulness, safety, fairness, robustness, privacy, and machine ethics.
arXiv Detail & Related papers (2024-01-10T22:07:21Z) - U-Trustworthy Models. Reliability, Competence, and Confidence in
Decision-Making [0.21756081703275998]
We present a precise mathematical definition of trustworthiness, termed $\mathcal{U}$-trustworthiness.
Within the context of $\mathcal{U}$-trustworthiness, we prove that properly-ranked models are inherently $\mathcal{U}$-trustworthy.
We advocate for the adoption of the AUC metric as the preferred measure of trustworthiness.
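The entry above advocates the AUC metric as a measure of trustworthiness. As an illustrative aside (not from the paper itself), AUC can be computed without any libraries via the Mann-Whitney pairwise formulation; the function name and the O(n²) approach below are our own sketch.

```python
def auc_score(labels, scores):
    """Area under the ROC curve via pairwise comparison.

    labels: iterable of 0/1 ground-truth labels.
    scores: iterable of model scores (higher = more positive).
    A perfectly ranked model (all positives scored above all
    negatives) attains AUC = 1.0; random ranking gives ~0.5.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative example")
    # Count positive-over-negative "wins"; ties count as half a win.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For large datasets one would normally use an O(n log n) rank-based implementation (e.g. `sklearn.metrics.roc_auc_score`); the quadratic version here is only meant to make the definition transparent.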
arXiv Detail & Related papers (2024-01-04T04:58:02Z) - A Survey on Trustworthy Edge Intelligence: From Security and Reliability
To Transparency and Sustainability [32.959723590246384]
Edge Intelligence (EI) integrates Edge Computing (EC) and Artificial Intelligence (AI) to push the capabilities of AI to the network edge.
This survey comprehensively summarizes the characteristics, architecture, technologies, and solutions of trustworthy EI.
arXiv Detail & Related papers (2023-10-27T07:39:54Z) - Trustworthy Federated Learning: A Survey [0.5089078998562185]
Federated Learning (FL) has emerged as a significant advancement in the field of Artificial Intelligence (AI).
We provide an extensive overview of the current state of Trustworthy FL, exploring existing solutions and well-defined pillars relevant to Trustworthy FL.
We propose a taxonomy that encompasses three main pillars: Interpretability, Fairness, and Security & Privacy.
arXiv Detail & Related papers (2023-05-19T09:11:26Z) - Assessing Trustworthiness of Autonomous Systems [0.0]
As Autonomous Systems (AS) become more ubiquitous in society, more responsible for our safety, and our interactions with them more frequent, it is essential that they be trustworthy.
Assessing the trustworthiness of AS is a mandatory challenge for the verification and development community.
This will require appropriate standards and suitable metrics that may serve to objectively and comparatively judge trustworthiness of AS across the broad range of current and future applications.
arXiv Detail & Related papers (2023-05-05T10:26:16Z) - Reliable Federated Disentangling Network for Non-IID Domain Feature [62.73267904147804]
In this paper, we propose a novel reliable federated disentangling network, termed RFedDis.
To the best of our knowledge, our proposed RFedDis is the first work to develop an FL approach based on evidential uncertainty combined with feature disentangling.
Our proposed RFedDis provides outstanding performance with a high degree of reliability as compared to other state-of-the-art FL approaches.
arXiv Detail & Related papers (2023-01-30T11:46:34Z) - Designing for Responsible Trust in AI Systems: A Communication
Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z) - Where Does Trust Break Down? A Quantitative Trust Analysis of Deep
Neural Networks via Trust Matrix and Conditional Trust Densities [94.65749466106664]
We introduce the concept of trust matrix, a novel trust quantification strategy.
A trust matrix defines the expected question-answer trust for a given actor-oracle answer scenario.
We further extend the concept of trust densities with the notion of conditional trust densities.
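The trust-matrix entry above describes a per-scenario aggregation of trust. A minimal sketch of such a structure, assuming a simple interpretation in which each (true class, predicted class) cell holds the mean model confidence (the record format and aggregation choice are hypothetical, not taken from the paper):

```python
from collections import defaultdict

def trust_matrix(records, classes):
    """Build a trust-matrix-like table of mean confidences.

    records: iterable of (true_label, predicted_label, confidence) triples.
    classes: ordered list of class labels indexing rows (true) and
             columns (predicted).
    Returns a nested list; cells with no observations default to 0.0.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for true, pred, conf in records:
        sums[(true, pred)] += conf
        counts[(true, pred)] += 1
    return [[sums[(t, p)] / counts[(t, p)] if counts[(t, p)] else 0.0
             for p in classes]
            for t in classes]
```

Off-diagonal cells with high mean confidence then flag scenarios where the model is confidently wrong, which is the kind of trust breakdown the paper's conditional trust densities aim to localize.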
arXiv Detail & Related papers (2020-09-30T14:33:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.