Chain-of-Trust: A Progressive Trust Evaluation Framework Enabled by Generative AI
- URL: http://arxiv.org/abs/2506.17130v1
- Date: Fri, 20 Jun 2025 16:33:03 GMT
- Title: Chain-of-Trust: A Progressive Trust Evaluation Framework Enabled by Generative AI
- Authors: Botao Zhu, Xianbin Wang, Lei Zhang, Xuemin Shen
- Abstract summary: The chain-of-trust framework is proposed to make better use of device attribute data. The framework divides the trust evaluation process into chained stages based on task decomposition. Generative AI is employed to analyze and interpret the collected data to produce correct evaluation results.
- Score: 20.02079841777494
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In collaborative systems with complex tasks relying on distributed resources, trust evaluation of potential collaborators has emerged as an effective mechanism for task completion. However, due to the network dynamics and varying information gathering latencies, it is extremely challenging to observe and collect all trust attributes of a collaborating device concurrently for a comprehensive trust assessment. In this paper, a novel progressive trust evaluation framework, namely chain-of-trust, is proposed to make better use of misaligned device attribute data. This framework, designed for effective task completion, divides the trust evaluation process into multiple chained stages based on task decomposition. At each stage, based on the task completion process, the framework only gathers the latest device attribute data relevant to that stage, leading to reduced trust evaluation complexity and overhead. By leveraging advanced in-context learning, few-shot learning, and reasoning capabilities, generative AI is then employed to analyze and interpret the collected data to produce correct evaluation results quickly. Only devices deemed trustworthy at this stage proceed to the next round of trust evaluation. The framework ultimately determines devices that remain trustworthy across all stages. Experimental results demonstrate that the proposed framework achieves high accuracy in trust evaluation.
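The stage-wise filtering the abstract describes can be sketched as follows. This is an illustrative outline only: the attribute names, the averaging, and the 0.5 threshold are assumptions, and in the paper the per-stage judgment is produced by generative AI rather than a fixed scoring rule.

```python
# Hypothetical sketch of chained, stage-wise trust evaluation: each stage
# gathers only the attributes relevant to that stage, and only devices
# judged trustworthy proceed to the next stage.

def stage_trustworthy(device, attributes, threshold=0.5):
    """Judge a device on only the attributes relevant to one stage."""
    scores = [device[a] for a in attributes if a in device]
    return bool(scores) and sum(scores) / len(scores) >= threshold

def chain_of_trust(devices, stages):
    """Filter devices stage by stage; survivors of every stage are trusted."""
    candidates = list(devices)
    for attributes in stages:
        # Gather only this stage's attributes and drop untrusted devices.
        candidates = [d for d in candidates if stage_trustworthy(d, attributes)]
    return candidates

devices = [
    {"id": "dev-A", "latency": 0.9, "uptime": 0.8, "reputation": 0.7},
    {"id": "dev-B", "latency": 0.1, "uptime": 0.3, "reputation": 0.9},
]
stages = [["latency", "uptime"], ["reputation"]]  # one attribute set per stage
trusted = chain_of_trust(devices, stages)  # dev-B fails the first stage
```

Because each stage inspects only its own attribute subset, a device can be rejected early without ever collecting the later-stage attributes, which is the source of the reduced evaluation overhead.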
Related papers
- Semantic Chain-of-Trust: Autonomous Trust Orchestration for Collaborator Selection via Hypergraph-Aided Agentic AI [57.58120823855315]
This paper proposes an autonomous trust orchestration method based on a new concept of semantic chain-of-trust. Our technique employs agentic AI and a hypergraph to establish and maintain trust relationships among devices. Experimental results demonstrate that the proposed method achieves resource-efficient trust evaluation.
arXiv Detail & Related papers (2025-07-31T13:53:25Z) - Rapid and Continuous Trust Evaluation for Effective Task Collaboration Through Siamese Model [9.467463634233177]
This paper proposes a Siamese-enabled rapid and continuous trust evaluation framework (SRCTE) to facilitate effective task collaboration. A real system is built using two Dell EMC 5200 servers and a Google Pixel 8 to test the effectiveness of the proposed SRCTE framework. Experimental results demonstrate that SRCTE converges rapidly with only a small amount of data and achieves a high anomaly trust detection rate.
arXiv Detail & Related papers (2025-06-20T16:30:59Z) - Aurora: Are Android Malware Classifiers Reliable and Stable under Distribution Shift? [51.12297424766236]
AURORA is a framework to evaluate malware classifiers based on their confidence quality and operational resilience. AURORA is complemented by a set of metrics designed to go beyond point-in-time performance. The fragility of SOTA frameworks across datasets of varying drift suggests the need for a return to the whiteboard.
arXiv Detail & Related papers (2025-05-28T20:22:43Z) - ClaimTrust: Propagation Trust Scoring for RAG Systems [7.7690689135107425]
ClaimTrust is a propagation-based trust scoring framework that dynamically evaluates the reliability of documents in a RAG system. We preprocess and analyze 814 political news articles to extract 2,173 unique claims and classify 965 meaningful relationships. ClaimTrust iteratively updates trust scores until convergence, effectively differentiating trustworthy articles from unreliable ones.
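The iterate-until-convergence step can be illustrated with a small propagation sketch. The +1/-1 relation weights, the damping factor, and the blend-with-prior update rule below are assumptions for illustration, not ClaimTrust's actual formulation.

```python
# Illustrative propagation-based trust scoring: each document's score is
# repeatedly updated from the scores of documents related to it until the
# values converge.

def propagate_trust(initial, edges, damping=0.5, tol=1e-6, max_iter=100):
    """initial: doc -> prior trust in [0, 1].
    edges: (src, dst, weight) triples; weight +1 = supports, -1 = contradicts."""
    scores = dict(initial)
    for _ in range(max_iter):
        updated = {}
        for doc in scores:
            # Aggregate weighted trust flowing in from related documents.
            incoming = [scores[s] * w for s, d, w in edges if d == doc]
            neighbor = sum(incoming) / len(incoming) if incoming else scores[doc]
            neighbor = max(0.0, min(1.0, neighbor))  # keep scores in [0, 1]
            # Blend the document's own prior with the propagated signal.
            updated[doc] = (1 - damping) * initial[doc] + damping * neighbor
        converged = max(abs(updated[d] - scores[d]) for d in scores) < tol
        scores = updated
        if converged:
            break
    return scores

priors = {"a": 0.9, "b": 0.5, "c": 0.5}
relations = [("a", "b", +1), ("a", "c", -1)]  # a supports b, contradicts c
scores = propagate_trust(priors, relations)
# b is pulled toward a's high trust; c is pushed down
```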
arXiv Detail & Related papers (2025-03-12T07:52:24Z) - TrustGuard: GNN-based Robust and Explainable Trust Evaluation with Dynamicity Support [59.41529066449414]
We propose TrustGuard, an accurate GNN-based trust evaluation model that supports trust dynamicity.
TrustGuard is designed with a layered architecture that contains a snapshot input layer, a spatial aggregation layer, a temporal aggregation layer, and a prediction layer.
Experiments show that TrustGuard outperforms state-of-the-art GNN-based trust evaluation models with respect to trust prediction across single-timeslot and multi-timeslot settings.
arXiv Detail & Related papers (2023-06-23T07:39:12Z) - A Confidence-based Partial Label Learning Model for Crowd-Annotated Named Entity Recognition [74.79785063365289]
Existing models for named entity recognition (NER) are mainly based on large-scale labeled datasets.
We propose a Confidence-based Partial Label Learning (CPLL) method to integrate the prior confidence (given by annotators) and posterior confidences (learned by models) for crowd-annotated NER.
arXiv Detail & Related papers (2023-05-21T15:31:23Z) - Trust, but Verify: Using Self-Supervised Probing to Improve Trustworthiness [29.320691367586004]
We introduce a new approach of self-supervised probing, which enables us to check and mitigate the overconfidence issue for a trained model.
We provide a simple yet effective framework, which can be flexibly applied to existing trustworthiness-related methods in a plug-and-play manner.
arXiv Detail & Related papers (2023-02-06T08:57:20Z) - A Sentiment Analysis Dataset for Trustworthiness Evaluation [22.734197353027632]
Deep learning models are often criticized to be untrustworthy due to the black-box problem.
We release a novel and well-annotated sentiment analysis dataset to evaluate robustness and interpretability.
arXiv Detail & Related papers (2021-08-30T11:58:16Z) - An evaluation of word-level confidence estimation for end-to-end automatic speech recognition [70.61280174637913]
We investigate confidence estimation for end-to-end automatic speech recognition (ASR).
We provide an extensive benchmark of popular confidence methods on four well-known speech datasets.
Our results suggest a strong baseline can be obtained by scaling the logits by a learnt temperature.
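The learnt-temperature baseline mentioned above can be sketched in a few lines: logits are divided by a temperature T fitted on held-out data, and the top softmax probability serves as the confidence. The logit values and temperatures here are illustrative assumptions.

```python
import math

def top_confidence(logits, temperature=1.0):
    """Softmax with temperature; return the top-class probability."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    return max(exps) / sum(exps)

logits = [4.0, 1.0, 0.5]
overconfident = top_confidence(logits, temperature=1.0)
calibrated = top_confidence(logits, temperature=2.0)
# T > 1 softens the distribution, so the reported confidence drops
```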
arXiv Detail & Related papers (2021-01-14T09:51:59Z) - Robust Learning Through Cross-Task Consistency [92.42534246652062]
We propose a broadly applicable and fully computational method for augmenting learning with Cross-Task Consistency.
We observe that learning with cross-task consistency leads to more accurate predictions and better generalization to out-of-distribution inputs.
arXiv Detail & Related papers (2020-06-07T09:24:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.