A fuzzy reward and punishment scheme for vehicular ad hoc networks
- URL: http://arxiv.org/abs/2405.04892v1
- Date: Wed, 8 May 2024 08:55:39 GMT
- Title: A fuzzy reward and punishment scheme for vehicular ad hoc networks
- Authors: Rezvi Shahariar, Chris Phillips
- Abstract summary: Trust models evaluate messages to assign reward or punishment.
This can be used to influence a driver's future behaviour.
A new fuzzy RSU controller considers the severity of the incident, the driver's past behaviour, and RSU confidence to determine the reward or punishment.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Trust management is an important security approach for the successful implementation of Vehicular Ad Hoc Networks (VANETs). Trust models evaluate messages to assign reward or punishment, which can be used to influence a driver's future behaviour. In the authors' previous work, a sender-side trust management framework was developed that avoids receiver-side evaluation of messages. However, this does not guarantee that a trusted driver will not lie. These "untrue attacks" are resolved by the RSUs, which collaborate to rule on a dispute and issue a fixed amount of reward or punishment. This paper addresses that lack of sophistication with a novel fuzzy RSU controller that considers the severity of the incident, the driver's past behaviour, and the RSU's confidence to determine the reward or punishment for the drivers in conflict. Although any driver can lie in any situation, trustworthy drivers are expected to be more likely to remain trustworthy, and vice versa. This behaviour is captured in a Markov chain model for sender and reporter drivers, whose lying characteristics depend on their trust score and trust state; each trust state defines the driver's likelihood of lying using a different probability distribution. An extensive simulation in the Veins simulator evaluates the performance of the fuzzy assessment and examines the Markov chain driver behaviour model while varying the initial trust score of all or some drivers. Comparing the fuzzy and fixed RSU assessment schemes shows that the fuzzy scheme can encourage drivers to improve their behaviour.
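To make the abstract's two mechanisms concrete, below is a minimal Python sketch of (a) a fuzzy reward/punishment assessment over the three inputs named above and (b) a small Markov chain in which a driver's probability of lying depends on a discrete trust state. All membership functions, rules, state names, and probabilities here are illustrative assumptions for exposition; the paper's actual controller and distributions may differ.

```python
import random

def tri(x, a, b, c):
    """Triangular membership function rising from a to a peak at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_assessment(severity, past_behaviour, rsu_confidence):
    """Map three inputs in [0, 1] to a signed score in [-1, 1]
    (negative = punishment, positive = reward).

    A Mamdani controller would defuzzify aggregated output sets (e.g. by
    centroid); a Sugeno-style weighted average is used here for brevity.
    """
    # Antecedent memberships (assumed partitions of [0, 1]).
    sev_high  = tri(severity, 0.4, 1.0, 1.6)
    sev_low   = tri(severity, -0.6, 0.0, 0.6)
    beh_good  = tri(past_behaviour, 0.4, 1.0, 1.6)
    beh_bad   = tri(past_behaviour, -0.6, 0.0, 0.6)
    conf_high = tri(rsu_confidence, 0.4, 1.0, 1.6)

    # Illustrative rule base: (firing strength, crisp consequent level).
    rules = [
        (min(sev_high, beh_bad, conf_high), -1.0),  # severe lie, bad history: punish hard
        (min(sev_high, beh_good),           -0.4),  # severe lie, good history: punish lightly
        (min(sev_low,  beh_bad),            -0.2),  # minor lie, bad history: small punishment
        (min(sev_low,  beh_good, conf_high), 0.5),  # confirmed truthful report: reward
    ]
    total = sum(w for w, _ in rules)
    return sum(w * z for w, z in rules) / total if total else 0.0

# Assumed trust states with state-dependent lying probabilities; a driver
# moves down the ladder after a lie and up after truthful behaviour,
# echoing "trustworthy drivers are more likely to remain so".
STATES = ["distrusted", "neutral", "trusted"]
LIE_PROB = {"distrusted": 0.6, "neutral": 0.3, "trusted": 0.05}

def markov_step(state, rng=random):
    """One step of the driver-behaviour Markov chain."""
    lied = rng.random() < LIE_PROB[state]
    i = STATES.index(state)
    i = max(i - 1, 0) if lied else min(i + 1, len(STATES) - 1)
    return STATES[i], lied

if __name__ == "__main__":
    print(fuzzy_assessment(severity=0.9, past_behaviour=0.2, rsu_confidence=0.8))
    state = "neutral"
    for _ in range(5):
        state, lied = markov_step(state)
        print(state, lied)
```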
Related papers
- A Survey of Security Threats and Trust Management in Vehicular Ad Hoc Networks [0.0]
Trust management plays an essential role in isolating malicious insider attacks in VANETs.
This paper first reviews, classifies, and summarizes state-of-the-art trust models, and then compares their achievements.
arXiv Detail & Related papers (2026-02-06T11:12:21Z) - CoReVLA: A Dual-Stage End-to-End Autonomous Driving Framework for Long-Tail Scenarios via Collect-and-Refine [73.74077186298523]
CoReVLA is a continual learning framework for autonomous driving.
It improves the performance in long-tail scenarios through a dual-stage process of data Collection and behavior Refinement.
CoReVLA achieves a Driving Score (DS) of 72.18 and a Success Rate (SR) of 50%, outperforming state-of-the-art methods by 7.96 DS and 15% SR under long-tail, safety-critical scenarios.
arXiv Detail & Related papers (2025-09-19T13:25:56Z) - How Much Is Too Much? Adaptive, Context-Aware Risk Detection in Naturalistic Driving [0.6299766708197883]
We propose a unified, context-aware framework that adapts labels and models over time and across drivers.
The framework is tested using two safety indicators, speed-weighted headway and harsh driving events, and three models: Random Forest, XGBoost, and a Deep Neural Network (DNN).
Overall, the framework shows promise for adaptive, context-aware risk detection that can enhance real-time safety feedback and support driver-focused interventions in intelligent transportation systems.
arXiv Detail & Related papers (2025-07-26T16:24:25Z) - AutoTrust: Benchmarking Trustworthiness in Large Vision Language Models for Autonomous Driving [106.0319745724181]
We introduce AutoTrust, a comprehensive trustworthiness benchmark for large vision-language models in autonomous driving (DriveVLMs).
We constructed the largest visual question-answering dataset for investigating trustworthiness issues in driving scenarios.
Our evaluations have unveiled previously undiscovered vulnerabilities of DriveVLMs to trustworthiness threats.
arXiv Detail & Related papers (2024-12-19T18:59:33Z) - A trust management framework for vehicular ad hoc networks [0.0]
Trust management is used to address attacks from authorized users in accordance with their trust score.
We propose a new Tamper-Proof Device (TPD) based trust management framework for controlling trust at the sender side vehicle.
arXiv Detail & Related papers (2024-05-08T08:35:48Z) - A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z) - Infrastructure-based End-to-End Learning and Prevention of Driver Failure [68.0478623315416]
FailureNet is a recurrent neural network trained end-to-end on trajectories of both nominal and reckless drivers in a scaled miniature city.
It can accurately identify control failures, upstream perception errors, and speeding drivers, distinguishing them from nominal driving.
Compared to speed or frequency-based predictors, FailureNet's recurrent neural network structure provides improved predictive power, yielding upwards of 84% accuracy when deployed on hardware.
arXiv Detail & Related papers (2023-03-21T22:55:51Z) - Vehicle lateral control using Machine Learning for automated vehicle guidance [0.0]
Quantifying uncertainty in decision-making is crucial for machine learning models used in safety-critical systems.
In this work, we design a vehicle's lateral controller using a machine-learning model.
arXiv Detail & Related papers (2023-03-14T19:14:24Z) - Is my Driver Observation Model Overconfident? Input-guided Calibration Networks for Reliable and Interpretable Confidence Estimates [23.449073032842076]
Driver observation models are rarely deployed under perfect conditions.
We show that raw neural network-based approaches tend to significantly overestimate their prediction quality.
We introduce Calibrated Action Recognition with Input Guidance (CARING), a novel approach leveraging an additional neural network to learn to scale the confidences depending on the video representation.
arXiv Detail & Related papers (2022-04-10T12:43:58Z) - Sample-Efficient Safety Assurances using Conformal Prediction [57.92013073974406]
Early warning systems can provide alerts when an unsafe situation is imminent.
To reliably improve safety, these warning systems should have a provable false negative rate.
We present a framework that combines a statistical inference technique known as conformal prediction with a simulator of robot/environment dynamics.
arXiv Detail & Related papers (2021-09-28T23:00:30Z) - Unsupervised Driver Behavior Profiling leveraging Recurrent Neural Networks [6.8438089867929905]
We propose a novel approach to driver behavior profiling leveraging an unsupervised learning paradigm.
First, we cast the driver behavior profiling problem as anomaly detection.
Second, we established recurrent neural networks that predict the next feature vector given a sequence of feature vectors.
Third, we analyzed the optimal level of sequence length for identifying each aggressive driver behavior.
arXiv Detail & Related papers (2021-08-11T07:48:27Z) - Contingencies from Observations: Tractable Contingency Planning with Learned Behavior Models [82.34305824719101]
Humans have a remarkable ability to make decisions by accurately reasoning about future events.
We develop a general-purpose contingency planner that is learned end-to-end using high-dimensional scene observations.
We show how this model can tractably learn contingencies from behavioral observations.
arXiv Detail & Related papers (2021-04-21T14:30:20Z) - DeepTake: Prediction of Driver Takeover Behavior using Multimodal Data [17.156611944404883]
We present DeepTake, a novel deep neural network-based framework that predicts multiple aspects of takeover behavior.
Using features from vehicle data, driver biometrics, and subjective measurements, DeepTake predicts the driver's intention, time, and quality of takeover.
Results show that DeepTake reliably predicts the takeover intention, time, and quality, with an accuracy of 96%, 93%, and 83%, respectively.
arXiv Detail & Related papers (2020-12-31T04:24:46Z) - Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z) - Maximizing Information Gain in Partially Observable Environments via Prediction Reward [64.24528565312463]
This paper tackles the challenge of using belief-based rewards for a deep RL agent.
We derive the exact error between negative entropy and the expected prediction reward.
This insight provides theoretical motivation for several fields using prediction rewards.
arXiv Detail & Related papers (2020-05-11T08:13:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.