A trust management framework for vehicular ad hoc networks
- URL: http://arxiv.org/abs/2405.04885v1
- Date: Wed, 8 May 2024 08:35:48 GMT
- Title: A trust management framework for vehicular ad hoc networks
- Authors: Rezvi Shahariar, Chris Phillips
- Abstract summary: Trust management is used to address attacks from authorized users in accordance with their trust score.
We propose a new Tamper-Proof Device (TPD) based trust management framework for controlling trust at the sender-side vehicle.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vehicular Ad Hoc Networks (VANETs) enable road users and public infrastructure to share information that improves the operation of roads and the driver experience. However, these networks are vulnerable to poorly behaved authorized users. Trust management is used to address attacks from authorized users in accordance with their trust score. By removing the dissemination of trust metrics from the validation process, communication overhead and response time are lowered. In this paper, we propose a new Tamper-Proof Device (TPD) based trust management framework for controlling trust at the sender-side vehicle that regulates driver behaviour. Moreover, the dissemination of feedback is only required when there is conflicting information in the VANET. If a conflict arises, the Road-Side Unit (RSU) decides, using a weighted voting system, whether the originator is to be believed or not. The framework is evaluated against a centralized reputation approach, and the results demonstrate that it outperforms the latter.
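To make the conflict-resolution step concrete, here is a minimal Python sketch of trust-weighted voting at an RSU. The report structure, function names, and 0.5 threshold are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Report:
    """A vehicle's feedback on a disputed event (illustrative structure)."""
    vehicle_id: str
    trust: float          # trust score in [0, 1], e.g. read from the vehicle's TPD
    supports_event: bool  # True if the vehicle confirms the originator's message

def rsu_weighted_vote(reports: list[Report], threshold: float = 0.5) -> bool:
    """Decide whether the originator is to be believed.

    Each report is weighted by the reporting vehicle's trust score; the
    originator is believed if the supporting fraction of total trust weight
    exceeds the threshold. The 0.5 threshold is an assumption.
    """
    total = sum(r.trust for r in reports)
    if total == 0:
        return False  # no trustworthy evidence either way
    support = sum(r.trust for r in reports if r.supports_event)
    return support / total > threshold

# Example: two high-trust vehicles confirm the event, one low-trust vehicle denies it.
reports = [
    Report("V1", trust=0.9, supports_event=True),
    Report("V2", trust=0.8, supports_event=True),
    Report("V3", trust=0.2, supports_event=False),
]
print(rsu_weighted_vote(reports))  # True: the originator is believed
```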
Related papers
- A Survey of Security Threats and Trust Management in Vehicular Ad Hoc Networks [0.0]
Trust management plays an essential role in isolating malicious insider attacks in VANETs. This paper first reviews, classifies, and summarizes state-of-the-art trust models, and then compares their achievements.
arXiv Detail & Related papers (2026-02-06T11:12:21Z)
- Critical or Compliant? The Double-Edged Sword of Reasoning in Chain-of-Thought Explanations [60.27156500679296]
We study the role of Chain-of-Thought (CoT) explanations in moral scenarios by systematically perturbing reasoning chains and manipulating delivery tones. Our findings reveal two key effects: (1) users often align their trust with outcome agreement, sustaining reliance even when the reasoning is flawed. These results highlight how CoT explanations can simultaneously clarify and mislead, underscoring the need for NLP systems to provide explanations that encourage scrutiny and critical thinking rather than blind trust.
arXiv Detail & Related papers (2025-11-15T02:38:49Z)
- Zero Trust-based Decentralized Identity Management System for Autonomous Vehicles [0.6131727058785479]
This paper presents a novel Zero Trust-based Decentralized Identity Management (D-IM) protocol for AVs. By integrating the core principle of Zero Trust Architecture, "never trust, always verify", with the tamper-resistant and decentralized nature of a blockchain network, our framework eliminates reliance on centralized authorities. A comprehensive experimental evaluation, conducted across both urban and highway scenarios, validates the protocol's practicality.
arXiv Detail & Related papers (2025-09-29T22:42:51Z)
- GOLIATH: A Decentralized Framework for Data Collection in Intelligent Transportation Systems [9.535698390424669]
GOLIATH is a decentralized framework that runs on the In-Vehicle Infotainment (IVI) system to collect real-time information exchanged between the network's participants. We design the consensus mechanism to be resilient against a realistic set of adversaries that aim to tamper with or disable the communication.
arXiv Detail & Related papers (2025-06-12T12:55:25Z)
- TrustConnect: An In-Vehicle Anomaly Detection Framework through Topology-Based Trust Rating [0.0]
We propose TrustConnect, a framework designed to assess the trustworthiness of a vehicle's in-vehicle network. The proposed framework leverages the interdependency of all the vehicle's components, along with the correlation of their values and their vulnerability to remote injection. The effectiveness of the proposed framework has been validated through programming simulations conducted across various scenarios.
arXiv Detail & Related papers (2025-06-07T03:06:41Z)
- AutoTrust: Benchmarking Trustworthiness in Large Vision Language Models for Autonomous Driving [106.0319745724181]
We introduce AutoTrust, a comprehensive trustworthiness benchmark for large vision-language models in autonomous driving (DriveVLMs).
We constructed the largest visual question-answering dataset for investigating trustworthiness issues in driving scenarios.
Our evaluations have unveiled previously undiscovered vulnerabilities of DriveVLMs to trustworthiness threats.
arXiv Detail & Related papers (2024-12-19T18:59:33Z)
- Trust-Oriented Adaptive Guardrails for Large Language Models [9.719986610417441]
Guardrails are designed to ensure that large language models (LLMs) align with human values by moderating harmful or toxic responses.
This paper addresses a critical issue: existing guardrails lack a well-founded methodology to accommodate the diverse needs of different user groups.
We introduce an adaptive guardrail mechanism to dynamically moderate access to sensitive content based on user trust metrics.
arXiv Detail & Related papers (2024-08-16T18:07:48Z)
- Fostering Trust and Quantifying Value of AI and ML [0.0]
Much has been discussed about trusting AI and ML inferences, but little has been done to define what that means.
Producing ever more trustworthy machine learning inferences is a path to increasing the value of products.
arXiv Detail & Related papers (2024-07-08T13:25:28Z)
- A fuzzy reward and punishment scheme for vehicular ad hoc networks [0.0]
Trust models evaluate messages to assign reward or punishment.
This can be used to influence a driver's future behaviour.
A new fuzzy RSU controller considers the severity of the incident, the driver's past behaviour, and RSU confidence to determine the reward or punishment.
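As a rough illustration of such a controller, the sketch below uses hand-rolled triangular membership functions and an invented two-rule base; the memberships, rules, and output levels are assumptions, not the paper's actual rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_penalty(severity, past_behaviour, rsu_confidence):
    """Illustrative fuzzy inference: returns a penalty in [0, 1].

    Inputs are normalised to [0, 1]. The two rules below are invented
    for illustration; the paper's rule base is not reproduced here.
    """
    # Fuzzify the inputs (membership shapes are assumptions).
    high_severity = tri(severity, 0.4, 1.0, 1.6)
    bad_history = tri(1.0 - past_behaviour, 0.4, 1.0, 1.6)
    confident = tri(rsu_confidence, 0.4, 1.0, 1.6)

    # Rule 1: severe incident AND confident RSU -> strong penalty (0.9).
    r1 = min(high_severity, confident)
    # Rule 2: bad driving history -> moderate penalty (0.6).
    r2 = bad_history

    # Defuzzify with a weighted average of the rule outputs.
    if r1 + r2 == 0:
        return 0.0
    return (r1 * 0.9 + r2 * 0.6) / (r1 + r2)

print(round(fuzzy_penalty(severity=0.8, past_behaviour=0.3, rsu_confidence=0.9), 2))
```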
arXiv Detail & Related papers (2024-05-08T08:55:39Z)
- Bayesian Methods for Trust in Collaborative Multi-Agent Autonomy [11.246557832016238]
In safety-critical and contested environments, adversaries may infiltrate and compromise a number of agents.
We analyze state of the art multi-target tracking algorithms under this compromised agent threat model.
We design a trust estimation framework using hierarchical Bayesian updating.
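The summary does not give the update equations, but a single-level Beta-Bernoulli update, a common building block for hierarchical Bayesian trust schemes and an assumption here, conveys the idea:

```python
class BetaTrust:
    """Beta-Bernoulli trust estimate: trust = E[Beta(alpha, beta)].

    A single-level sketch; a hierarchical model would compose updates
    like this across levels of evidence.
    """
    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        self.alpha = alpha  # pseudo-count of consistent observations
        self.beta = beta    # pseudo-count of inconsistent observations

    def update(self, consistent: bool) -> None:
        if consistent:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def mean(self) -> float:
        return self.alpha / (self.alpha + self.beta)

agent = BetaTrust()
for obs in [True, True, False, True]:  # agreement of an agent's reports with others
    agent.update(obs)
print(f"estimated trust: {agent.mean:.2f}")  # 0.67 after 3 consistent, 1 inconsistent
```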
arXiv Detail & Related papers (2024-03-25T17:17:35Z)
- A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z)
- TrustGuard: GNN-based Robust and Explainable Trust Evaluation with Dynamicity Support [59.41529066449414]
We propose TrustGuard, a GNN-based accurate trust evaluation model that supports trust dynamicity.
TrustGuard is designed with a layered architecture that contains a snapshot input layer, a spatial aggregation layer, a temporal aggregation layer, and a prediction layer.
Experiments show that TrustGuard outperforms state-of-the-art GNN-based trust evaluation models with respect to trust prediction in both single-timeslot and multi-timeslot settings.
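TrustGuard's real layers are learned GNN components; the toy NumPy sketch below only maps the four-layer structure (snapshot input, spatial aggregation, temporal aggregation, prediction) onto fixed mean and exponential-decay aggregators, so every weight and constant is an assumption.

```python
import numpy as np

def spatial_aggregate(adj, feats):
    """Average each node's neighbour features within one snapshot (mean aggregator)."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    return adj @ feats / deg

def temporal_aggregate(snapshots, decay=0.7):
    """Weight recent snapshots more heavily (the decay factor is an assumption)."""
    n = len(snapshots)
    weights = np.array([decay ** (n - 1 - t) for t in range(n)])
    weights /= weights.sum()
    return sum(w * s for w, s in zip(weights, snapshots))

def predict_trust(embedding, w):
    """Prediction layer: logistic trust score per node."""
    return 1.0 / (1.0 + np.exp(-embedding @ w))

# Two snapshots of a 3-node trust graph (adjacency and node features are toy data).
adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
snaps = [spatial_aggregate(adj, np.random.rand(3, 4)) for _ in range(2)]
scores = predict_trust(temporal_aggregate(snaps), w=np.random.rand(4))
print(scores)  # one trust score per node
```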
arXiv Detail & Related papers (2023-06-23T07:39:12Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Personalized multi-faceted trust modeling to determine trust links in social media and its potential for misinformation management [61.88858330222619]
We present an approach for predicting trust links between peers in social media.
We propose a data-driven multi-faceted trust modeling approach that incorporates many distinct features for a comprehensive analysis.
We evaluate the proposed framework on a trust-aware item recommendation task in the context of a large Yelp dataset.
arXiv Detail & Related papers (2021-11-11T19:40:51Z)
- Trust-aware Control for Intelligent Transportation Systems [0.20415910628419062]
We propose a framework for using the quantified trustworthiness of agents to enable trust-aware coordination and control.
We show how to synthesize trust-aware controllers using an approach based on reinforcement learning.
We develop a trust-aware version called AIM-Trust that leads to lower accident rates in scenarios consisting of a mixture of trusted and untrusted agents.
arXiv Detail & Related papers (2021-11-08T03:02:25Z)
- On the Importance of Trust in Next-Generation Networked CPS Systems: An AI Perspective [2.1055643409860734]
We propose trust as a measure to evaluate the status of network agents and improve the decision-making process.
Trust relations are based on evidence created by the interactions of entities within a protocol.
We show how utilizing the trust evidence can improve the performance and the security of Federated Learning.
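One plausible way to use such trust evidence in Federated Learning is to weight each client's model update by its trust score; the sketch below shows a trust-weighted variant of federated averaging, where the cutoff and linear weighting are assumptions, not necessarily the paper's scheme.

```python
import numpy as np

def trust_weighted_average(updates, trust_scores, min_trust=0.2):
    """Aggregate client updates, weighting by trust and dropping low-trust clients.

    The min_trust cutoff and linear weighting are illustrative assumptions.
    """
    kept = [(u, t) for u, t in zip(updates, trust_scores) if t >= min_trust]
    if not kept:
        raise ValueError("no sufficiently trusted clients")
    total = sum(t for _, t in kept)
    return sum(t * u for u, t in kept) / total

# Three clients; the third (possibly compromised) has low trust and is excluded.
updates = [np.array([1.0, 2.0]), np.array([1.2, 1.8]), np.array([9.0, -9.0])]
print(trust_weighted_average(updates, trust_scores=[0.9, 0.8, 0.1]))
```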
arXiv Detail & Related papers (2021-04-16T02:12:13Z)
- Learning to Communicate and Correct Pose Errors [75.03747122616605]
We study the setting proposed in V2VNet, where nearby self-driving vehicles jointly perform object detection and motion forecasting in a cooperative manner.
We propose a novel neural reasoning framework that learns to communicate, to estimate potential errors, and to reach a consensus about those errors.
arXiv Detail & Related papers (2020-11-10T18:19:40Z)
- How Much Can We Really Trust You? Towards Simple, Interpretable Trust Quantification Metrics for Deep Neural Networks [94.65749466106664]
We conduct a thought experiment and explore two key questions about trust in relation to confidence.
We introduce a suite of metrics for assessing the overall trustworthiness of deep neural networks based on their behaviour when answering a set of questions.
The proposed metrics are by no means perfect, but the hope is to push the conversation towards better metrics.
arXiv Detail & Related papers (2020-09-12T17:37:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.