A Survey of Security Threats and Trust Management in Vehicular Ad Hoc Networks
- URL: http://arxiv.org/abs/2602.06608v1
- Date: Fri, 06 Feb 2026 11:12:21 GMT
- Title: A Survey of Security Threats and Trust Management in Vehicular Ad Hoc Networks
- Authors: Rezvi Shahariar, Chris Phillips
- Abstract summary: Trust management plays an essential role in isolating malicious insider attacks in VANETs. This paper first reviews, classifies, and summarizes state-of-the-art trust models, and then compares their achievements.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a survey of state-of-the-art trust models for Vehicular Ad Hoc Networks (VANETs). Trust management plays an essential role in isolating malicious insider attacks in VANETs, which traditional security approaches fail to thwart. To this end, many trust models have been presented; some address only trust management, while others address security and privacy aspects alongside trust management. This paper first reviews, classifies, and summarizes state-of-the-art trust models, and then compares their achievements. From this literature survey, the reader can readily identify two broad classes of trust models in the literature, differing primarily in their point of evaluation. Most trust models follow receiver-side trust evaluation; to the best of our knowledge, only one trust model for VANETs evaluates trust at the sender side unless a dispute arises. In the presence of a dispute, a Roadside Unit (RSU) rules on the validity of an event. In receiver-side trust models, each receiver must compute the trust of a sender and its messages upon the messages' arrival. Conversely, in the sender-side class, receivers are free from any computation, as trust is verified at the time the message is announced. Vehicles can therefore act quickly on the information, such as taking a detour to an alternate route, as this approach supports fast decision-making. We provide a comparison between these two evaluation techniques using a sequence diagram. We then conclude the survey by suggesting future work on sender-side evaluation of trust in VANETs. Additionally, the challenges (real-time constraints and efficiency) of deploying a trust model in VANETs are emphasized.
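The receiver-side vs. sender-side distinction above can be illustrated with a minimal sketch. All names, scores, and thresholds here are hypothetical placeholders, not taken from any specific model in the survey; the point is only where the trust check happens.

```python
# Hypothetical trust registry and threshold (illustrative values only).
TRUST_SCORES = {"vehicle_A": 0.9, "vehicle_B": 0.2}
TRUST_THRESHOLD = 0.5

def receiver_side(sender_id: str, message: str) -> bool:
    """Receiver-side: every receiver evaluates the sender's trust on
    message arrival, so the same check is repeated at each receiver."""
    score = TRUST_SCORES.get(sender_id, 0.0)
    return score >= TRUST_THRESHOLD  # accept or discard the message

def sender_side(sender_id: str, message: str) -> tuple:
    """Sender-side: trust is verified once, when the message is announced
    (e.g. by a tamper-proof device, with an RSU ruling on disputes), and
    the message carries the verdict, so receivers can act immediately."""
    verified = TRUST_SCORES.get(sender_id, 0.0) >= TRUST_THRESHOLD
    return (message, verified)

# Receiver-side: each of N receivers repeats the check on arrival.
accept = receiver_side("vehicle_A", "accident ahead")
# Sender-side: the check happens once; receivers just read the flag.
msg, ok = sender_side("vehicle_A", "accident ahead")
```

The sketch shows why sender-side evaluation supports faster decisions: the per-message verification cost is paid once at announcement time rather than once per receiver.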
Related papers
- Eliciting Trustworthiness Priors of Large Language Models via Economic Games [2.2940141855172036]
We propose a novel elicitation method based on iterated in-context learning. We find that GPT-4.1's trustworthiness priors closely track those observed in humans. We show that variation in elicited trustworthiness can be well predicted by a stereotype-based model.
arXiv Detail & Related papers (2026-01-31T15:23:03Z) - Attention Knows Whom to Trust: Attention-based Trust Management for LLM Multi-Agent Systems [52.57826440085856]
Large Language Model-based Multi-Agent Systems (LLM-MAS) have demonstrated strong capabilities in solving complex tasks but remain vulnerable when agents receive unreliable messages. This vulnerability stems from a fundamental gap: LLM agents treat all incoming messages equally without evaluating their trustworthiness. We propose Attention Trust Score (A-Trust), a lightweight, attention-based method for evaluating message trustworthiness.
arXiv Detail & Related papers (2025-06-03T07:32:57Z) - AutoTrust: Benchmarking Trustworthiness in Large Vision Language Models for Autonomous Driving [106.0319745724181]
We introduce AutoTrust, a comprehensive trustworthiness benchmark for large vision-language models in autonomous driving (DriveVLMs). We constructed the largest visual question-answering dataset for investigating trustworthiness issues in driving scenarios. Our evaluations have unveiled previously undiscovered vulnerabilities of DriveVLMs to trustworthiness threats.
arXiv Detail & Related papers (2024-12-19T18:59:33Z) - Fostering Trust and Quantifying Value of AI and ML [0.0]
Much has been discussed about trusting AI and ML inferences, but little has been done to define what that means.
Producing ever more trustworthy machine learning inferences is a path to increasing the value of products.
arXiv Detail & Related papers (2024-07-08T13:25:28Z) - A fuzzy reward and punishment scheme for vehicular ad hoc networks [0.0]
Trust models evaluate messages to assign reward or punishment.
This can be used to influence a driver's future behaviour.
A new fuzzy RSU controller considers the severity of the incident, the driver's past behaviour, and RSU confidence to determine the reward or punishment.
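The three inputs named above can be combined with a small Mamdani-style fuzzy sketch. The rules, membership shapes, and output range here are illustrative assumptions, not the paper's actual controller.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_reward(severity, past_behaviour, rsu_confidence):
    """Two illustrative rules (min for AND, max for OR), defuzzified
    into a crisp reward in [-1, 1]. All inputs lie in [0, 1]."""
    # Rule 1: serious incident AND good history AND confident RSU -> reward.
    r1 = min(tri(severity, 0.4, 1.0, 1.01),
             tri(past_behaviour, 0.4, 1.0, 1.01),
             tri(rsu_confidence, 0.4, 1.0, 1.01))
    # Rule 2: poor history OR low RSU confidence -> punishment.
    r2 = max(tri(past_behaviour, -0.01, 0.0, 0.6),
             tri(rsu_confidence, -0.01, 0.0, 0.6))
    if r1 + r2 == 0:
        return 0.0  # no rule fires: neither reward nor punishment
    # Weighted-average defuzzification over the rule consequents (+1 / -1).
    return (r1 * 1.0 + r2 * -1.0) / (r1 + r2)
```

A trusted reporter of a severe incident receives a positive reward, while a driver with a poor history or a low-confidence RSU verdict is pushed toward punishment.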
arXiv Detail & Related papers (2024-05-08T08:55:39Z) - A trust management framework for vehicular ad hoc networks [0.0]
Trust management is used to address attacks from authorized users in accordance with their trust score.
We propose a new Tamper-Proof Device (TPD) based trust management framework for controlling trust at the sender side vehicle.
arXiv Detail & Related papers (2024-05-08T08:35:48Z) - A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z) - TrustGuard: GNN-based Robust and Explainable Trust Evaluation with Dynamicity Support [59.41529066449414]
We propose TrustGuard, a GNN-based accurate trust evaluation model that supports trust dynamicity.
TrustGuard is designed with a layered architecture that contains a snapshot input layer, a spatial aggregation layer, a temporal aggregation layer, and a prediction layer.
Experiments show that TrustGuard outperforms state-of-the-art GNN-based trust evaluation models with respect to trust prediction in both single-timeslot and multi-timeslot settings.
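The four-layer ordering described in the TrustGuard abstract can be sketched as a pipeline. The layer names come from the abstract, but every computation below is a simplified stand-in (means and exponential decay), not the paper's actual GNN operators.

```python
from statistics import mean

def snapshot_input_layer(interactions, num_slots):
    """Group (slot, trust_value) observations into time-slot snapshots."""
    slots = [[] for _ in range(num_slots)]
    for slot, value in interactions:
        slots[slot].append(value)
    return slots

def spatial_aggregation_layer(slot):
    """Aggregate within one snapshot (stand-in for GNN message passing)."""
    return mean(slot) if slot else 0.5  # neutral prior when no data

def temporal_aggregation_layer(embeddings, decay=0.7):
    """Weight recent snapshots more heavily via exponential decay."""
    score = total = 0.0
    weight = 1.0
    for e in reversed(embeddings):  # most recent snapshot first
        score += weight * e
        total += weight
        weight *= decay
    return score / total

def prediction_layer(score, threshold=0.5):
    """Map the aggregated score to a trust verdict."""
    return "trustworthy" if score >= threshold else "untrustworthy"

# Toy data: (time_slot, observed_trust_value) pairs.
interactions = [(0, 0.9), (0, 0.8), (1, 0.7), (2, 0.6)]
slots = snapshot_input_layer(interactions, num_slots=3)
embeddings = [spatial_aggregation_layer(s) for s in slots]
verdict = prediction_layer(temporal_aggregation_layer(embeddings))
```

The value of the layered design is that dynamicity is isolated in the temporal layer: the per-snapshot aggregation stays unchanged while the temporal weighting decides how much history matters.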
arXiv Detail & Related papers (2023-06-23T07:39:12Z) - Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z) - Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z) - How Much Can We Really Trust You? Towards Simple, Interpretable Trust Quantification Metrics for Deep Neural Networks [94.65749466106664]
We conduct a thought experiment and explore two key questions about trust in relation to confidence.
We introduce a suite of metrics for assessing the overall trustworthiness of deep neural networks based on their behaviour when answering a set of questions.
The proposed metrics are by no means perfect, but the hope is to push the conversation towards better metrics.
arXiv Detail & Related papers (2020-09-12T17:37:36Z) - Towards Time-Aware Context-Aware Deep Trust Prediction in Online Social Networks [0.4061135251278187]
Trust can be defined as a measure to determine which source of information is reliable and with whom we should share or from whom we should accept information.
There are several applications for trust in Online Social Networks (OSNs), including social spammer detection, fake news detection, retweet behaviour detection and recommender systems.
Trust prediction is the process of predicting a new trust relation between two users who are not currently connected.
arXiv Detail & Related papers (2020-03-21T01:00:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.