A Robust Model for Trust Evaluation during Interactions between Agents
in a Sociable Environment
- URL: http://arxiv.org/abs/2104.08555v1
- Date: Sat, 17 Apr 2021 14:38:02 GMT
- Title: A Robust Model for Trust Evaluation during Interactions between Agents
in a Sociable Environment
- Authors: Qin Liang, Minjie Zhang, Fenghui Ren, Takayuki Ito
- Abstract summary: Trust evaluation is an important topic in both research and applications in sociable environments.
This paper presents a model for trust evaluation between agents that combines direct trust, indirect trust through neighbouring links, and an agent's reputation in the environment.
- Score: 9.520158869896395
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Trust evaluation is an important topic in both research and applications in
sociable environments. This paper presents a model for trust evaluation between
agents that combines direct trust, indirect trust through neighbouring links,
and an agent's reputation in the environment (i.e. the social network) to
provide a robust evaluation. Our approach is independent of the social network
topology and operates in a decentralized manner without a central controller,
so it can be used in broad domains.
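The combination described in the abstract can be sketched as a weighted aggregation of the three trust sources. This is a minimal illustration, not the paper's actual formulas: the function name, the equal-weight averaging of neighbour reports, and the default weights are all assumptions made for the example.

```python
def evaluate_trust(direct, neighbour_reports, reputation,
                   w_direct=0.5, w_indirect=0.3, w_reputation=0.2):
    """Combine direct trust, indirect trust via neighbours, and reputation.

    All inputs are assumed to be scores in [0, 1]; the weights are
    illustrative and should sum to 1 so the result stays in [0, 1].
    """
    # Indirect trust: average of trust reports received from neighbours.
    # With no neighbours, the indirect component contributes nothing.
    if neighbour_reports:
        indirect = sum(neighbour_reports) / len(neighbour_reports)
    else:
        indirect = 0.0
    return w_direct * direct + w_indirect * indirect + w_reputation * reputation
```

For example, an agent with high direct trust (0.8), mixed neighbour reports ([0.6, 1.0]), and middling reputation (0.5) would score 0.74 under these weights. The decentralized property in the abstract corresponds to each agent computing this locally from its own observations and its neighbours' reports, with no central controller.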
Related papers
- Distributed Online Life-Long Learning (DOL3) for Multi-agent Trust and Reputation Assessment in E-commerce [4.060281689561971]
Trust and Reputation Assessment of service providers in citizen-focused environments like e-commerce is vital to maintain the integrity of the interactions among agents.
We propose a novel Distributed Online Life-Long Learning (DOL3) algorithm that involves real-time rapid learning of trust and reputation scores of providers.
arXiv Detail & Related papers (2024-10-21T21:37:55Z) - Using Deep Q-Learning to Dynamically Toggle between Push/Pull Actions in Computational Trust Mechanisms [0.0]
In previous work, we compared CA to FIRE, a well-known trust and reputation model, and found that CA is superior when the trustor population changes.
We frame this problem as a machine learning problem in a partially observable environment, where the presence of several dynamic factors is not known to the trustor.
We show that an adaptable agent is indeed capable of learning when to use each model and, thus, perform consistently in dynamic environments.
arXiv Detail & Related papers (2024-04-28T19:44:56Z) - A Factor Graph Model of Trust for a Collaborative Multi-Agent System [8.286807697708113]
Trust is the reliance and confidence an agent has in the information, behaviors, intentions, truthfulness, and capabilities of others within the system.
This paper introduces a new graphical approach that utilizes factor graphs to represent the interdependent behaviors and trustworthiness among agents.
Our method for evaluating trust is decentralized and considers key interdependent sub-factors such as proximity safety, consistency, and cooperation.
arXiv Detail & Related papers (2024-02-10T21:44:28Z) - TrustGuard: GNN-based Robust and Explainable Trust Evaluation with
Dynamicity Support [59.41529066449414]
We propose TrustGuard, a GNN-based accurate trust evaluation model that supports trust dynamicity.
TrustGuard is designed with a layered architecture that contains a snapshot input layer, a spatial aggregation layer, a temporal aggregation layer, and a prediction layer.
Experiments show that TrustGuard outperforms state-of-the-art GNN-based trust evaluation models with respect to trust prediction in both single-timeslot and multi-timeslot settings.
arXiv Detail & Related papers (2023-06-23T07:39:12Z) - GREAT Score: Global Robustness Evaluation of Adversarial Perturbation using Generative Models [60.48306899271866]
We present a new framework, called GREAT Score, for global robustness evaluation of adversarial perturbation using generative models.
We show high correlation and significantly reduced cost of GREAT Score when compared to the attack-based model ranking on RobustBench.
GREAT Score can be used for remote auditing of privacy-sensitive black-box models.
arXiv Detail & Related papers (2023-04-19T14:58:27Z) - KGTrust: Evaluating Trustworthiness of SIoT via Knowledge Enhanced Graph
Neural Networks [63.531790269009704]
The Social Internet of Things (SIoT) is a promising and emerging paradigm that injects the notion of social networking into smart objects (i.e., things).
Due to the risks and uncertainty involved, a crucial and urgent problem is establishing reliable relationships within SIoT, that is, trust evaluation.
We propose a novel knowledge-enhanced graph neural network (KGTrust) for better trust evaluation in SIoT.
arXiv Detail & Related papers (2023-02-22T14:24:45Z) - Generalizability of Adversarial Robustness Under Distribution Shifts [57.767152566761304]
We take a first step towards investigating the interplay between empirical and certified adversarial robustness on the one hand and domain generalization on the other.
We train robust models on multiple domains and evaluate their accuracy and robustness on an unseen domain.
We extend our study to cover a real-world medical application, in which adversarial augmentation significantly boosts the generalization of robustness with minimal effect on clean data accuracy.
arXiv Detail & Related papers (2022-09-29T18:25:48Z) - Trust-based Consensus in Multi-Agent Reinforcement Learning Systems [5.778852464898369]
This paper investigates the problem of unreliable agents in multi-agent reinforcement learning (MARL).
We propose Reinforcement Learning-based Trusted Consensus (RLTC), a decentralized trust mechanism.
We empirically demonstrate that our trust mechanism is able to handle unreliable agents effectively, as evidenced by higher consensus success rates.
arXiv Detail & Related papers (2022-05-25T15:58:34Z) - TrustGNN: Graph Neural Network based Trust Evaluation via Learnable
Propagative and Composable Nature [63.78619502896071]
Trust evaluation is critical for many applications such as cyber security, social communication and recommender systems.
We propose a new GNN based trust evaluation method named TrustGNN, which smartly integrates the propagative and composable nature of trust graphs.
Specifically, TrustGNN designs specific propagative patterns for different propagative processes of trust, and distinguishes the contribution of different propagative processes to create new trust.
arXiv Detail & Related papers (2022-05-25T13:57:03Z) - Personalized multi-faceted trust modeling to determine trust links in
social media and its potential for misinformation management [61.88858330222619]
We present an approach for predicting trust links between peers in social media.
We propose a data-driven multi-faceted trust modeling which incorporates many distinct features for a comprehensive analysis.
Illustrated in a trust-aware item recommendation task, we evaluate the proposed framework in the context of a large Yelp dataset.
arXiv Detail & Related papers (2021-11-11T19:40:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.