Trust Management in the Internet of Everything
- URL: http://arxiv.org/abs/2212.14688v2
- Date: Sun, 26 Mar 2023 21:43:09 GMT
- Title: Trust Management in the Internet of Everything
- Authors: Barbora Buhnova
- Abstract summary: This tutorial paper discusses the essential elements of trust management in complex digital ecosystems.
It explains how trust-building can be leveraged to support people in safe interaction with other (possibly autonomous) digital agents.
- Score: 6.24907186790431
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Digitalization is leading us towards a future where people, processes, data
and things are not only interacting with each other, but might start forming
societies on their own. In these dynamic systems enhanced by artificial
intelligence, trust management on the level of human-to-machine as well as
machine-to-machine interaction becomes an essential ingredient in supervising
safe and secure progress of our digitalized future. This tutorial paper
discusses the essential elements of trust management in complex digital
ecosystems, guiding the reader through the definitions and core concepts of
trust management. Furthermore, it explains how trust-building can be leveraged
to support people in safe interaction with other (possibly autonomous) digital
agents, as trust governance may allow the ecosystem to trigger an auto-immune
response towards untrusted digital agents, protecting human safety.
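The abstract describes trust governance in which an ecosystem isolates untrusted agents, likened to an auto-immune response. As a rough illustration of that idea only (the paper itself is conceptual; the agent names, scoring rule, and threshold below are invented for this sketch), such a mechanism might track per-agent trust scores and quarantine agents whose score falls below a floor:

```python
from dataclasses import dataclass

# Illustrative sketch only: the tutorial describes trust governance conceptually.
# The scoring rule, threshold, and agent names here are assumptions for the example.

@dataclass
class Agent:
    name: str
    trust: float = 0.5  # neutral prior trust in [0, 1]

class Ecosystem:
    def __init__(self, quarantine_threshold: float = 0.2):
        self.agents: dict[str, Agent] = {}
        self.quarantined: set[str] = set()
        self.threshold = quarantine_threshold

    def register(self, name: str) -> None:
        self.agents[name] = Agent(name)

    def report_interaction(self, name: str, positive: bool) -> None:
        """Nudge an agent's trust score after each observed interaction."""
        agent = self.agents[name]
        delta = 0.1 if positive else -0.2  # distrust grows faster than trust
        agent.trust = min(1.0, max(0.0, agent.trust + delta))
        if agent.trust < self.threshold:
            # The "auto-immune response": isolate the untrusted agent.
            self.quarantined.add(name)

eco = Ecosystem()
eco.register("drone-7")
for outcome in [False, False]:
    eco.report_interaction("drone-7", outcome)
print(eco.quarantined)  # {'drone-7'}
```

The asymmetric update (trust is lost faster than it is gained) mirrors a common assumption in the trust-management literature, but the specific values are arbitrary.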
Related papers
- Agentic Web: Weaving the Next Web with AI Agents [109.13815627467514]
The emergence of AI agents powered by large language models (LLMs) marks a pivotal shift toward the Agentic Web. In this paradigm, agents interact directly with one another to plan, coordinate, and execute complex tasks on behalf of users. We present a structured framework for understanding and building the Agentic Web.
arXiv Detail & Related papers (2025-07-28T17:58:12Z)
- LLM Agents Should Employ Security Principles [60.03651084139836]
This paper argues that the well-established design principles in information security should be employed when deploying Large Language Model (LLM) agents at scale. We introduce AgentSandbox, a conceptual framework embedding these security principles to provide safeguards throughout an agent's life-cycle.
arXiv Detail & Related papers (2025-05-29T21:39:08Z)
- Zero-Trust Foundation Models: A New Paradigm for Secure and Collaborative Artificial Intelligence for Internet of Things [61.43014629640404]
Zero-Trust Foundation Models (ZTFMs) embed zero-trust security principles into the lifecycle of foundation models (FMs) for Internet of Things (IoT) systems. ZTFMs can enable secure, privacy-preserving AI across distributed, heterogeneous, and potentially adversarial IoT environments.
arXiv Detail & Related papers (2025-05-26T06:44:31Z)
- VizTrust: A Visual Analytics Tool for Capturing User Trust Dynamics in Human-AI Communication [4.839478919041786]
VizTrust is a real-time visual analytics tool that captures user trust dynamics in human-agent communication.
It enables stakeholders to observe trust formation as it happens, identify patterns in trust development, and pinpoint specific interaction elements that influence trust.
arXiv Detail & Related papers (2025-03-10T13:00:41Z)
- On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective [333.9220561243189]
Generative Foundation Models (GenFMs) have emerged as transformative tools.
Their widespread adoption raises critical concerns regarding trustworthiness across dimensions.
This paper presents a comprehensive framework to address these challenges through three key contributions.
arXiv Detail & Related papers (2025-02-20T06:20:36Z)
- Conceptualizing Trustworthiness and Trust in Communications [17.69113057959175]
We present a novel holistic approach to tackling trustworthiness systematically in the context of communications.
We propose a first attempt to incorporate objective system properties and subjective beliefs to establish trustworthiness-based trust, in particular in the context of the future Tactile Internet connecting robotic devices.
arXiv Detail & Related papers (2024-07-23T06:11:13Z)
- PsySafe: A Comprehensive Framework for Psychological-based Attack, Defense, and Evaluation of Multi-agent System Safety [70.84902425123406]
Multi-agent systems, when enhanced with Large Language Models (LLMs), exhibit profound capabilities in collective intelligence.
However, the potential misuse of this intelligence for malicious purposes presents significant risks.
We propose a framework (PsySafe) grounded in agent psychology, focusing on identifying how dark personality traits in agents can lead to risky behaviors.
Our experiments reveal several intriguing phenomena, such as the collective dangerous behaviors among agents, agents' self-reflection when engaging in dangerous behavior, and the correlation between agents' psychological assessments and dangerous behaviors.
arXiv Detail & Related papers (2024-01-22T12:11:55Z)
- The Ecosystem of Trust (EoT): Enabling effective deployment of autonomous systems through collaborative and trusted ecosystems [0.0]
We propose an ecosystem of trust approach to support deployment of technology.
We argue that assurance, defined as grounds for justified confidence, is a prerequisite to enable the approach.
arXiv Detail & Related papers (2023-12-01T14:47:36Z)
- Common (good) practices measuring trust in HRI [55.2480439325792]
Trust in robots is widely believed to be imperative for the adoption of robots into people's daily lives.
Researchers have been exploring how people trust robots in different ways.
Most roboticists agree that insufficient levels of trust lead to a risk of disengagement.
arXiv Detail & Related papers (2023-11-20T20:52:10Z)
- A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z)
- Distributed Trust Through the Lens of Software Architecture [13.732161898452377]
This paper will survey the concept of distributed trust in multiple disciplines.
It will take a system/software architecture point of view to look at trust redistribution/shift and the associated tradeoffs in systems and applications enabled by distributed trust technologies.
arXiv Detail & Related papers (2023-05-25T06:53:18Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Towards a trustful digital world: exploring self-sovereign identity ecosystems [4.266530973611429]
Self-sovereign identity (SSI) solutions rely on distributed ledger technologies and verifiable credentials.
This paper builds on observations gathered in a field study to identify the building blocks, antecedents and possible outcomes of SSI ecosystems.
arXiv Detail & Related papers (2021-05-26T08:56:22Z)
- Trust as Extended Control: Active Inference and User Feedback During Human-Robot Collaboration [2.6381163133447836]
Despite its crucial role, it is largely unknown how trust emerges, develops, and supports human interactions with nonhuman artefacts.
We introduce a model of trust as an agent's best explanation for reliable sensory exchange with an extended motor plant or partner.
We examine the role of shared behavior in the genesis of trust, especially in the context of dyadic collaboration.
arXiv Detail & Related papers (2021-04-22T16:11:22Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
- Where Does Trust Break Down? A Quantitative Trust Analysis of Deep Neural Networks via Trust Matrix and Conditional Trust Densities [94.65749466106664]
We introduce the concept of trust matrix, a novel trust quantification strategy.
A trust matrix defines the expected question-answer trust for a given actor-oracle answer scenario.
We further extend the concept of trust densities with the notion of conditional trust densities.
arXiv Detail & Related papers (2020-09-30T14:33:43Z)
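The trust matrix entry above describes aggregating question-answer trust per actor-oracle answer scenario. The exact quantification is defined in that paper and is not reproduced here; as a loose sketch under the assumption that each sample carries a trust score in [0, 1], one might average scores over (true class, predicted class) pairs:

```python
import numpy as np

# Illustrative sketch only: assumes per-sample question-answer trust scores in
# [0, 1]; the actual formulation is given in the cited paper.

def trust_matrix(y_true, y_pred, trust_scores, n_classes):
    """Average question-answer trust per (actual class, predicted class) pair."""
    total = np.zeros((n_classes, n_classes))
    count = np.zeros((n_classes, n_classes))
    for t, p, s in zip(y_true, y_pred, trust_scores):
        total[t, p] += s
        count[t, p] += 1
    # Cells with no samples are left as NaN rather than zero trust.
    return np.where(count > 0, total / np.maximum(count, 1), np.nan)

y_true = [0, 0, 1, 1]
y_pred = [0, 1, 1, 1]
scores = [0.9, 0.2, 0.8, 0.6]
M = trust_matrix(y_true, y_pred, scores, n_classes=2)
# Diagonal cells hold trust on correct answers; off-diagonal cells hold
# trust (here, overconfidence) on wrong answers.
```

Conditional trust densities, mentioned in the same abstract, would then characterize the distribution of scores within each such cell rather than only its mean.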
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.