On the Importance of Trust in Next-Generation Networked CPS Systems: An
AI Perspective
- URL: http://arxiv.org/abs/2104.07853v1
- Date: Fri, 16 Apr 2021 02:12:13 GMT
- Title: On the Importance of Trust in Next-Generation Networked CPS Systems: An
AI Perspective
- Authors: Anousheh Gholami, Nariman Torkzaban, John S. Baras
- Abstract summary: We propose trust as a measure to evaluate the status of network agents and improve the decision-making process.
Trust relations are based on evidence created by the interactions of entities within a protocol.
We show how utilizing the trust evidence can improve the performance and the security of Federated Learning.
- Score: 2.1055643409860734
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increasing scale, complexity, and heterogeneity of
next-generation networked systems, seamless control, management, and security
of such systems become increasingly challenging. Many diverse applications
have driven interest in networked systems, including large-scale distributed
learning, multi-agent optimization, 5G service provisioning, and network
slicing. In this paper, we propose trust as a measure to evaluate the
status of network agents and improve the decision-making process. We interpret
trust as a relation among entities that participate in various protocols. Trust
relations are based on evidence created by the interactions of entities within
a protocol and may be a composite of multiple metrics such as availability,
reliability, and resilience, depending on the application context. We first
elaborate on the importance of trust as a metric and then present a
mathematical framework for trust computation and aggregation within a network.
Then we show, in practice, how trust can be integrated into network
decision-making processes by presenting two examples. In the first example, we
show how utilizing the trust evidence can improve the performance and the
security of Federated Learning. Second, we show how a 5G network resource
provisioning framework can be improved when augmented with a trust-aware
decision-making scheme. We verify the validity of our trust-based approach
through simulations. Finally, we explain the challenges associated with
aggregating the trust evidence and briefly explain our ideas to tackle them.
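The abstract's first example, trust-aware Federated Learning, can be illustrated with a minimal sketch. The function names, the equal-weight composite trust score, and the trust-weighted averaging rule below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def trust_score(evidence, weights=None):
    """Composite trust from per-metric evidence in [0, 1].

    `evidence` maps metric names (e.g. availability, reliability,
    resilience) to scores; `weights` sets their relative importance
    (equal by default). This composite rule is an assumption for
    illustration only.
    """
    if weights is None:
        weights = {m: 1.0 for m in evidence}
    total = sum(weights.values())
    return sum(weights[m] * evidence[m] for m in evidence) / total

def trust_weighted_average(updates, trust_scores):
    """Aggregate client model updates weighted by trust.

    Low-trust (possibly compromised) clients contribute less to the
    global model than they would under plain FedAvg.
    """
    t = np.asarray(trust_scores, dtype=float)
    t = t / t.sum()              # normalise trust into a distribution
    updates = np.stack(updates)  # shape: (num_clients, num_params)
    return (t[:, None] * updates).sum(axis=0)

# Three clients; the third shows poor availability and reliability,
# consistent with a compromised or unreliable agent.
ev = [
    {"availability": 0.95, "reliability": 0.90, "resilience": 0.85},
    {"availability": 0.90, "reliability": 0.95, "resilience": 0.80},
    {"availability": 0.20, "reliability": 0.10, "resilience": 0.30},
]
scores = [trust_score(e) for e in ev]
updates = [np.array([1.0, 2.0]), np.array([1.2, 1.8]), np.array([9.0, -9.0])]
global_update = trust_weighted_average(updates, scores)
```

Under this scheme, the outlier update from the low-trust client is down-weighted, so the aggregate lands closer to the honest clients' updates than an unweighted average would.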
Related papers
- Fostering Trust and Quantifying Value of AI and ML [0.0]
Much has been discussed about trusting AI and ML inferences, but little has been done to define what that means.
Producing ever more trustworthy machine learning inferences is a path to increase the value of products.
arXiv Detail & Related papers (2024-07-08T13:25:28Z)
- Bayesian Methods for Trust in Collaborative Multi-Agent Autonomy [11.246557832016238]
In safety-critical and contested environments, adversaries may infiltrate and compromise a number of agents.
We analyze state of the art multi-target tracking algorithms under this compromised agent threat model.
We design a trust estimation framework using hierarchical Bayesian updating.
arXiv Detail & Related papers (2024-03-25T17:17:35Z)
- Networked Communication for Decentralised Agents in Mean-Field Games [59.01527054553122]
We introduce networked communication to the mean-field game framework.
We prove that our architecture has sample guarantees bounded between those of the centralised- and independent-learning cases.
arXiv Detail & Related papers (2023-06-05T10:45:39Z)
- Distributed Trust Through the Lens of Software Architecture [13.732161898452377]
This paper will survey the concept of distributed trust in multiple disciplines.
It will take a system/software architecture point of view to look at trust redistribution/shift and the associated tradeoffs in systems and applications enabled by distributed trust technologies.
arXiv Detail & Related papers (2023-05-25T06:53:18Z)
- KGTrust: Evaluating Trustworthiness of SIoT via Knowledge Enhanced Graph Neural Networks [63.531790269009704]
Social Internet of Things (SIoT) is a promising and emerging paradigm that injects the notion of social networking into smart objects (i.e., things).
Due to the risks and uncertainty, a crucial and urgent problem to be settled is establishing reliable relationships within SIoT, that is, trust evaluation.
We propose a novel knowledge-enhanced graph neural network (KGTrust) for better trust evaluation in SIoT.
arXiv Detail & Related papers (2023-02-22T14:24:45Z)
- TrustGNN: Graph Neural Network based Trust Evaluation via Learnable Propagative and Composable Nature [63.78619502896071]
Trust evaluation is critical for many applications such as cyber security, social communication and recommender systems.
We propose a new GNN based trust evaluation method named TrustGNN, which integrates smartly the propagative and composable nature of trust graphs.
Specifically, TrustGNN designs specific propagative patterns for different propagative processes of trust, and distinguishes the contribution of different propagative processes to create new trust.
arXiv Detail & Related papers (2022-05-25T13:57:03Z)
- Personalized multi-faceted trust modeling to determine trust links in social media and its potential for misinformation management [61.88858330222619]
We present an approach for predicting trust links between peers in social media.
We propose a data-driven multi-faceted trust modeling which incorporates many distinct features for a comprehensive analysis.
Illustrated in a trust-aware item recommendation task, we evaluate the proposed framework in the context of a large Yelp dataset.
arXiv Detail & Related papers (2021-11-11T19:40:51Z)
- Where Does Trust Break Down? A Quantitative Trust Analysis of Deep Neural Networks via Trust Matrix and Conditional Trust Densities [94.65749466106664]
We introduce the concept of trust matrix, a novel trust quantification strategy.
A trust matrix defines the expected question-answer trust for a given actor-oracle answer scenario.
We further extend the concept of trust densities with the notion of conditional trust densities.
arXiv Detail & Related papers (2020-09-30T14:33:43Z)
- How Much Can We Really Trust You? Towards Simple, Interpretable Trust Quantification Metrics for Deep Neural Networks [94.65749466106664]
We conduct a thought experiment and explore two key questions about trust in relation to confidence.
We introduce a suite of metrics for assessing the overall trustworthiness of deep neural networks based on their behaviour when answering a set of questions.
The proposed metrics are by no means perfect, but the hope is to push the conversation towards better metrics.
arXiv Detail & Related papers (2020-09-12T17:37:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.