Trust Management in the Internet of Everything
- URL: http://arxiv.org/abs/2212.14688v2
- Date: Sun, 26 Mar 2023 21:43:09 GMT
- Title: Trust Management in the Internet of Everything
- Authors: Barbora Buhnova
- Abstract summary: This tutorial paper discusses the essential elements of trust management in complex digital ecosystems.
It explains how trust-building can be leveraged to support people in safe interaction with other (possibly autonomous) digital agents.
- Score: 6.24907186790431
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Digitalization is leading us towards a future where people, processes, data
and things are not only interacting with each other, but might start forming
societies on their own. In these dynamic systems enhanced by artificial
intelligence, trust management on the level of human-to-machine as well as
machine-to-machine interaction becomes an essential ingredient in supervising
safe and secure progress of our digitalized future. This tutorial paper
discusses the essential elements of trust management in complex digital
ecosystems, guiding the reader through the definitions and core concepts of
trust management. Furthermore, it explains how trust-building can be leveraged
to support people in safe interaction with other (possibly autonomous) digital
agents, as trust governance may allow the ecosystem to trigger an auto-immune
response towards untrusted digital agents, protecting human safety.
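The "auto-immune response" described above can be pictured as an ecosystem that tracks a trust score per agent and excludes agents whose score drops below a threshold. The following is a minimal, hypothetical sketch (not code from the paper; the class, update rule, and threshold are illustrative assumptions):

```python
# Hypothetical sketch of trust governance with an "auto-immune response":
# each agent carries a trust score in [0, 1], updated from observed
# interaction outcomes; agents falling below a threshold are quarantined.

class TrustEcosystem:
    def __init__(self, threshold=0.3, initial_trust=0.5):
        self.threshold = threshold        # trust level below which agents are excluded
        self.initial_trust = initial_trust
        self.trust = {}                   # agent id -> current trust score
        self.quarantined = set()          # agents excluded from interaction

    def report_interaction(self, agent, outcome_ok, weight=0.1):
        """Update an agent's trust from one observed interaction outcome
        via exponential smoothing toward 1.0 (good) or 0.0 (bad)."""
        score = self.trust.get(agent, self.initial_trust)
        target = 1.0 if outcome_ok else 0.0
        score = (1 - weight) * score + weight * target
        self.trust[agent] = score
        if score < self.threshold:
            self.quarantined.add(agent)   # the "auto-immune response"

    def is_trusted(self, agent):
        return agent not in self.quarantined


eco = TrustEcosystem()
for _ in range(10):
    eco.report_interaction("rogue-bot", outcome_ok=False)
print(eco.is_trusted("rogue-bot"))  # False: repeated bad outcomes trigger quarantine
```

The exponential-smoothing update and the fixed threshold are design choices for the sketch only; real trust-management schemes surveyed in the paper may weigh evidence, context, and recommendations very differently.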
Related papers
- Trust in AI: Progress, Challenges, and Future Directions [6.724854390957174]
The increasing use of artificial intelligence (AI) systems in our daily lives underscores the significance of trust/distrust in AI from a user perspective.
Trust/distrust in AI acts as a regulator and can significantly control the pace of this diffusion.
arXiv Detail & Related papers (2024-03-12T20:26:49Z)
- PsySafe: A Comprehensive Framework for Psychological-based Attack, Defense, and Evaluation of Multi-agent System Safety [73.51336434996931]
Multi-agent systems, when enhanced with Large Language Models (LLMs), exhibit profound capabilities in collective intelligence.
However, the potential misuse of this intelligence for malicious purposes presents significant risks.
We propose a framework (PsySafe) grounded in agent psychology, focusing on identifying how dark personality traits in agents can lead to risky behaviors.
Our experiments reveal several intriguing phenomena, such as the collective dangerous behaviors among agents, agents' self-reflection when engaging in dangerous behavior, and the correlation between agents' psychological assessments and dangerous behaviors.
arXiv Detail & Related papers (2024-01-22T12:11:55Z)
- The Ecosystem of Trust (EoT): Enabling effective deployment of autonomous systems through collaborative and trusted ecosystems [0.0]
We propose an ecosystem of trust approach to support deployment of technology.
We argue that assurance, defined as grounds for justified confidence, is a prerequisite to enable the approach.
arXiv Detail & Related papers (2023-12-01T14:47:36Z)
- Common (good) practices measuring trust in HRI [55.2480439325792]
Trust in robots is widely believed to be imperative for the adoption of robots into people's daily lives.
Researchers have been exploring how people trust robots in different ways.
Most roboticists agree that insufficient levels of trust lead to a risk of disengagement.
arXiv Detail & Related papers (2023-11-20T20:52:10Z)
- A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z)
- Distributed Trust Through the Lens of Software Architecture [13.732161898452377]
This paper surveys the concept of distributed trust across multiple disciplines.
It takes a system/software architecture point of view on trust redistribution/shift and the associated tradeoffs in systems and applications enabled by distributed trust technologies.
arXiv Detail & Related papers (2023-05-25T06:53:18Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Towards a trustful digital world: exploring self-sovereign identity ecosystems [4.266530973611429]
Self-sovereign identity (SSI) solutions rely on distributed ledger technologies and verifiable credentials.
This paper builds on observations gathered in a field study to identify the building blocks, antecedents and possible outcomes of SSI ecosystems.
arXiv Detail & Related papers (2021-05-26T08:56:22Z)
- Trust as Extended Control: Active Inference and User Feedback During Human-Robot Collaboration [2.6381163133447836]
Despite its crucial role, it is largely unknown how trust emerges, develops, and supports human interactions with nonhuman artefacts.
We introduce a model of trust as an agent's best explanation for reliable sensory exchange with an extended motor plant or partner.
We examine the role of shared behavior in the genesis of trust, especially in the context of dyadic collaboration.
arXiv Detail & Related papers (2021-04-22T16:11:22Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
- Where Does Trust Break Down? A Quantitative Trust Analysis of Deep Neural Networks via Trust Matrix and Conditional Trust Densities [94.65749466106664]
We introduce the concept of a trust matrix, a novel trust quantification strategy.
A trust matrix defines the expected question-answer trust for a given actor-oracle answer scenario.
We further extend the concept of trust densities with the notion of conditional trust densities.
arXiv Detail & Related papers (2020-09-30T14:33:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.