Distributed Trust Through the Lens of Software Architecture
- URL: http://arxiv.org/abs/2306.08056v1
- Date: Thu, 25 May 2023 06:53:18 GMT
- Title: Distributed Trust Through the Lens of Software Architecture
- Authors: Sin Kit Lo, Yue Liu, Guangsheng Yu, Qinghua Lu, Xiwei Xu, and Liming Zhu
- Abstract summary: This paper surveys the concept of distributed trust across multiple disciplines.
It takes a system/software architecture point of view to examine trust redistribution/shift and the associated tradeoffs in systems and applications enabled by distributed trust technologies.
- Score: 13.732161898452377
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Distributed trust is a nebulous concept that has evolved from different
perspectives in recent years. While one can attribute its current prominence to
blockchain and cryptocurrency, the distributed trust concept has been driving progress in federated learning, trustworthy and responsible AI in an ecosystem setting, cross-organizational data sharing and privacy, and zero trust cybersecurity. This paper will survey the concept of
distributed trust in multiple disciplines. It will take a system/software
architecture point of view to look at trust redistribution/shift and the
associated tradeoffs in systems and applications enabled by distributed trust
technologies.
Related papers
- Fostering Trust and Quantifying Value of AI and ML [0.0]
Much has been discussed about trusting AI and ML inferences, but little has been done to define what that means.
Producing ever more trustworthy machine learning inferences is a path to increasing the value of products.
arXiv Detail & Related papers (2024-07-08T13:25:28Z)
- Enhancing Trust and Privacy in Distributed Networks: A Comprehensive Survey on Blockchain-based Federated Learning [51.13534069758711]
Decentralized approaches like blockchain offer a compelling solution by implementing a consensus mechanism among multiple entities.
Federated Learning (FL) enables participants to collaboratively train models while safeguarding data privacy.
This paper investigates the synergy between blockchain's security features and FL's privacy-preserving model training capabilities.
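As a concrete anchor for the FL side of this synergy, here is a minimal sketch of the federated averaging step that most FL schemes build on; this is a generic illustration, not the surveyed paper's method, and the function name and size-proportional weighting are assumptions:
```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """One FedAvg-style aggregation step: combine client model parameters,
    weighted by local dataset size. Only parameters are shared, never raw
    data; a blockchain could record or validate these aggregation rounds."""
    stacked = np.stack(client_weights)                    # (n_clients, n_params)
    coeffs = np.array(client_sizes, float)
    coeffs /= coeffs.sum()                                # size-proportional weights
    return coeffs @ stacked                               # (n_params,)

# Three clients submit locally trained parameter vectors.
clients = [np.array([0.2, 0.4]), np.array([0.1, 0.5]), np.array([0.3, 0.3])]
sizes = [100, 50, 150]
print(federated_average(clients, sizes))  # [0.23333333 0.36666667]
```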
arXiv Detail & Related papers (2024-03-28T07:08:26Z)
- The Ecosystem of Trust (EoT): Enabling effective deployment of autonomous systems through collaborative and trusted ecosystems [0.0]
We propose an ecosystem of trust approach to support the deployment of technology.
We argue that assurance, defined as grounds for justified confidence, is a prerequisite to enable the approach.
arXiv Detail & Related papers (2023-12-01T14:47:36Z)
- A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z)
- Who to Trust, How and Why: Untangling AI Ethics Principles, Trustworthiness and Trust [0.0]
We argue for the need to distinguish these concepts more clearly.
We discuss that trust in AI involves not only reliance on the system itself, but also trust in the developers of the AI system.
arXiv Detail & Related papers (2023-09-19T05:00:34Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- On the Importance of Trust in Next-Generation Networked CPS Systems: An AI Perspective [2.1055643409860734]
We propose trust as a measure to evaluate the status of network agents and improve the decision-making process.
Trust relations are based on evidence created by the interactions of entities within a protocol.
We show how utilizing the trust evidence can improve the performance and the security of Federated Learning.
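One plausible realization of trust-evidence-based aggregation is sketched below, under the assumption that each client carries a scalar trust score; the function name, threshold, and weighting are illustrative, not the authors' protocol:
```python
import numpy as np

def trust_weighted_average(updates, trust_scores, min_trust=0.5):
    """Aggregate client updates weighted by trust evidence; clients
    below min_trust are excluded from the round entirely."""
    updates = np.stack(updates)
    trust = np.array(trust_scores, float)
    mask = trust >= min_trust
    if not mask.any():
        raise ValueError("no client meets the trust threshold")
    weights = trust[mask] / trust[mask].sum()
    return weights @ updates[mask]

# The middle client pushes an outlier update but has low trust, so it is dropped.
updates = [np.array([0.1, 0.1]), np.array([5.0, -5.0]), np.array([0.2, 0.0])]
trust = [0.9, 0.2, 0.8]
print(trust_weighted_average(updates, trust))  # ~[0.147 0.053]
```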
arXiv Detail & Related papers (2021-04-16T02:12:13Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
- Where Does Trust Break Down? A Quantitative Trust Analysis of Deep Neural Networks via Trust Matrix and Conditional Trust Densities [94.65749466106664]
We introduce the concept of a trust matrix, a novel trust quantification strategy.
A trust matrix defines the expected question-answer trust for a given actor-oracle answer scenario.
We further extend the concept of trust densities with the notion of conditional trust densities.
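The trust matrix is only named in this summary, not defined; the sketch below follows the general idea of question-answer trust from this line of work (reward confidence on correct answers, penalize it on wrong ones), with the exponents, binning, and function names being assumptions rather than the paper's exact formulation:
```python
import numpy as np

def question_answer_trust(conf, correct, alpha=1.0, beta=1.0):
    """Per-question trust: reward confidence on a correct answer,
    penalize overconfidence on a wrong one."""
    return conf ** alpha if correct else (1.0 - conf) ** beta

def trust_matrix(y_true, y_pred, conf, n_classes):
    """Cell (i, j) holds the mean question-answer trust over samples
    whose oracle (true) class is i and whose predicted class is j."""
    sums = np.zeros((n_classes, n_classes))
    counts = np.zeros((n_classes, n_classes))
    for t, p, c in zip(y_true, y_pred, conf):
        sums[t, p] += question_answer_trust(c, t == p)
        counts[t, p] += 1
    # Cells with no observations are left as NaN.
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)

y_true = [0, 0, 1, 1]
y_pred = [0, 1, 1, 1]
conf   = [0.9, 0.8, 0.7, 0.6]
print(trust_matrix(y_true, y_pred, conf, n_classes=2))
# [[0.9   0.2 ]
#  [ nan  0.65]]
```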
arXiv Detail & Related papers (2020-09-30T14:33:43Z)
- How Much Can We Really Trust You? Towards Simple, Interpretable Trust Quantification Metrics for Deep Neural Networks [94.65749466106664]
We conduct a thought experiment and explore two key questions about trust in relation to confidence.
We introduce a suite of metrics for assessing the overall trustworthiness of deep neural networks based on their behaviour when answering a set of questions.
The proposed metrics are by no means perfect, but the hope is to push the conversation towards better metrics.
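A hedged sketch of one such behaviour-based metric, averaging per-question trust over a question set in the spirit of the paper's proposal; alpha, beta, and the mean aggregation are assumptions, not necessarily the authors' exact metric:
```python
import numpy as np

def overall_trust(confidences, correct, alpha=1.0, beta=1.0):
    """Average per-question trust over a question set: confident correct
    answers raise the score, confident mistakes lower it sharply."""
    conf = np.asarray(confidences, float)
    right = np.asarray(correct, bool)
    per_question = np.where(right, conf ** alpha, (1.0 - conf) ** beta)
    return float(per_question.mean())

# Mostly right and well calibrated, but one confident mistake hurts.
print(overall_trust([0.9, 0.8, 0.95, 0.99], [True, True, True, False]))  # 0.665
```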
arXiv Detail & Related papers (2020-09-12T17:37:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.