A biologically Inspired Trust Model for Open Multi-Agent Systems that is Resilient to Rapid Performance Fluctuations
- URL: http://arxiv.org/abs/2504.15301v1
- Date: Thu, 17 Apr 2025 08:21:54 GMT
- Title: A biologically Inspired Trust Model for Open Multi-Agent Systems that is Resilient to Rapid Performance Fluctuations
- Authors: Zoi Lygizou, Dimitris Kalles
- Abstract summary: Existing trust models face challenges related to agent mobility, changing behaviors, and the cold start problem. We introduce a biologically inspired trust model in which trustees assess their own capabilities and store trust data locally. This design improves mobility support, reduces communication overhead, resists disinformation, and preserves privacy.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Trust management provides an alternative solution for securing open, dynamic, and distributed multi-agent systems, where conventional cryptographic methods prove impractical. However, existing trust models face challenges related to agent mobility, changing behaviors, and the cold start problem. To address these issues, we introduced a biologically inspired trust model in which trustees assess their own capabilities and store trust data locally. This design improves mobility support, reduces communication overhead, resists disinformation, and preserves privacy. Despite these advantages, prior evaluations revealed limitations of our model in adapting to provider population changes and continuous performance fluctuations. This study proposes a novel algorithm incorporating a self-classification mechanism that lets providers detect performance drops potentially harmful to service consumers. Simulation results demonstrate that the new algorithm outperforms its original version and FIRE, a well-known trust and reputation model, particularly in handling dynamic trustee behavior. While FIRE remains competitive under extreme environmental changes, the proposed algorithm demonstrates greater adaptability across various conditions. In contrast to existing trust modeling research, this study conducts a comprehensive evaluation of our model using widely recognized trust model criteria, assessing its resilience against common trust-related attacks while identifying strengths, weaknesses, and potential countermeasures. Finally, several key directions for future research are proposed.
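The abstract describes the self-classification mechanism only at a high level. The following Python sketch is a minimal, hypothetical illustration of how a provider (trustee) might track its own recent performance locally and flag a drop that could harm service consumers; the class name `ProviderSelfClassifier`, the window size, and the drop threshold are assumptions made for illustration, not details taken from the paper.

```python
from collections import deque

class ProviderSelfClassifier:
    """Hypothetical sketch: a trustee monitors its own task outcomes and
    flags a performance drop. Parameters are illustrative, not from the paper."""

    def __init__(self, window: int = 20, drop_threshold: float = 0.25):
        self.drop_threshold = drop_threshold
        self.recent = deque(maxlen=window)    # most recent self-assessed outcomes in [0, 1]
        self.baseline = deque(maxlen=window)  # older outcomes used as a reference level

    def record(self, outcome: float) -> None:
        """Store a new self-assessed task outcome (e.g. quality of service in [0, 1])."""
        if len(self.recent) == self.recent.maxlen:
            # The oldest recent outcome is about to be evicted; keep it as baseline history.
            self.baseline.append(self.recent[0])
        self.recent.append(outcome)

    def performance_dropped(self) -> bool:
        """Return True if average recent performance fell well below the baseline."""
        if not self.recent or not self.baseline:
            return False  # not enough history to classify
        recent_avg = sum(self.recent) / len(self.recent)
        baseline_avg = sum(self.baseline) / len(self.baseline)
        return (baseline_avg - recent_avg) >= self.drop_threshold


# Example: a provider whose quality degrades over time flags itself.
clf = ProviderSelfClassifier(window=5, drop_threshold=0.2)
for q in [0.9, 0.9, 0.85, 0.9, 0.9, 0.5, 0.4, 0.45, 0.5, 0.4]:
    clf.record(q)
print(clf.performance_dropped())  # True once the recent window is clearly worse
```

In keeping with the design goals stated in the abstract (local trust data, reduced communication overhead, privacy), such a flag would presumably be acted on by the trustee itself rather than broadcast, but the exact use of the classification is defined in the paper, not here.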
Related papers
- Quantifying calibration error in modern neural networks through evidence based theory [0.0]
This paper introduces a novel framework for quantifying the trustworthiness of neural networks by incorporating subjective logic into the evaluation of Expected Calibration Error (ECE); a standard binned ECE computation is sketched after this list.
We demonstrate the effectiveness of this approach through experiments on the MNIST and CIFAR-10 datasets, where post-calibration results indicate improved trustworthiness.
The proposed framework offers a more interpretable and nuanced assessment of AI models, with potential applications in sensitive domains such as healthcare and autonomous systems.
arXiv Detail & Related papers (2024-10-31T23:54:21Z) - Using Deep Q-Learning to Dynamically Toggle between Push/Pull Actions in Computational Trust Mechanisms [0.0]
In previous work, we compared CA to FIRE, a well-known trust and reputation model, and found that CA is superior when the trustor population changes.
We frame this problem as a machine learning problem in a partially observable environment, where the presence of several dynamic factors is not known to the trustor.
We show that an adaptable agent is indeed capable of learning when to use each model and, thus, perform consistently in dynamic environments.
arXiv Detail & Related papers (2024-04-28T19:44:56Z) - A biologically inspired computational trust model for open multi-agent systems which is resilient to trustor population changes [0.0]
The study is based on CA, a proposed decentralized computational trust model inspired by synaptic plasticity and the formation of assemblies in the human brain.
We ran a series of simulations to compare the CA model to FIRE, a well-established, decentralized trust and reputation model for open MAS.
The main finding is that FIRE is more resilient to changes in the trustee population, whereas CA is more resilient to changes in the trustor population.
arXiv Detail & Related papers (2024-04-13T10:56:32Z) - JAB: Joint Adversarial Prompting and Belief Augmentation [81.39548637776365]
We introduce a joint framework in which we probe and improve the robustness of a black-box target model via adversarial prompting and belief augmentation.
This framework utilizes an automated red teaming approach to probe the target model, along with a belief augmenter to generate instructions for the target model to improve its robustness to those adversarial probes.
arXiv Detail & Related papers (2023-11-16T00:35:54Z) - TrustGuard: GNN-based Robust and Explainable Trust Evaluation with Dynamicity Support [59.41529066449414]
We propose TrustGuard, an accurate GNN-based trust evaluation model that supports trust dynamicity.
TrustGuard is designed with a layered architecture that contains a snapshot input layer, a spatial aggregation layer, a temporal aggregation layer, and a prediction layer.
Experiments show that TrustGuard outperforms state-of-the-art GNN-based trust evaluation models with respect to trust prediction in both single-timeslot and multi-timeslot settings.
arXiv Detail & Related papers (2023-06-23T07:39:12Z) - Reliability in Semantic Segmentation: Are We on the Right Track? [15.0189654919665]
We analyze a broad variety of models, spanning from older ResNet-based architectures to novel transformers.
We find that while recent models are significantly more robust, they are not overall more reliable in terms of uncertainty estimation.
This is the first study on modern segmentation models focused on both robustness and uncertainty estimation.
arXiv Detail & Related papers (2023-03-20T17:38:24Z) - Understanding and Enhancing Robustness of Concept-based Models [41.20004311158688]
We study the robustness of concept-based models to adversarial perturbations.
In this paper, we first propose and analyze different malicious attacks to evaluate the security vulnerability of concept-based models.
We then propose a potential general adversarial-training-based defense mechanism to increase the robustness of these systems to the proposed malicious attacks.
arXiv Detail & Related papers (2022-11-29T10:43:51Z) - SafeAMC: Adversarial training for robust modulation recognition models [53.391095789289736]
In communication systems, many tasks, such as modulation recognition, rely on Deep Neural Network (DNN) models.
These models have been shown to be susceptible to adversarial perturbations, namely imperceptible additive noise crafted to induce misclassification.
We propose to use adversarial training, which consists of fine-tuning the model with adversarial perturbations, to increase the robustness of automatic modulation recognition models.
arXiv Detail & Related papers (2021-05-28T11:29:04Z) - Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z) - Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z) - Providing reliability in Recommender Systems through Bernoulli Matrix Factorization [63.732639864601914]
This paper proposes Bernoulli Matrix Factorization (BeMF) to provide both prediction values and reliability values.
BeMF acts on model-based collaborative filtering rather than on memory-based filtering.
The more reliable a prediction is, the less liable it is to be wrong.
arXiv Detail & Related papers (2020-06-05T14:24:27Z)
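As referenced above, the first related paper builds on Expected Calibration Error (ECE). For orientation only, the following is a minimal sketch of the standard binned ECE computation that such work extends; the subjective-logic extension proposed in that paper is not shown, and the function name, binning choices, and example values are illustrative assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins: int = 10) -> float:
    """Standard binned ECE: sum over bins of (bin size / N) * |accuracy - mean confidence|."""
    confidences = np.asarray(confidences, dtype=float)  # predicted top-class probabilities
    correct = np.asarray(correct, dtype=float)          # 1 if the prediction was right, else 0
    n = len(confidences)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        acc = correct[in_bin].mean()       # empirical accuracy within the bin
        conf = confidences[in_bin].mean()  # mean predicted confidence within the bin
        ece += (in_bin.sum() / n) * abs(acc - conf)
    return ece

# Example: a systematically over-confident model yields a noticeably non-zero ECE.
print(expected_calibration_error([0.9, 0.9, 0.8, 0.95], [1, 0, 0, 1]))
```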