VSRQ: Quantitative Assessment Method for Safety Risk of Vehicle
Intelligent Connected System
- URL: http://arxiv.org/abs/2305.01898v1
- Date: Wed, 3 May 2023 05:08:56 GMT
- Title: VSRQ: Quantitative Assessment Method for Safety Risk of Vehicle
Intelligent Connected System
- Authors: Tian Zhang, Wenshan Guan, Hao Miao, Xiujie Huang, Zhiquan Liu, Chaonan
Wang, Quanlong Guan, Liangda Fang, Zhifei Duan
- Abstract summary: We develop a new model for vehicle risk assessment by combining I-FAHP with FCA clustering: VSRQ model.
We evaluate the model on OpenPilot and experimentally demonstrate the effectiveness of the VSRQ model in identifying the safety risks of vehicle intelligent connected systems.
- Score: 6.499974038759507
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The field of intelligent connectivity in modern vehicles continues to
expand, and vehicle functions are becoming increasingly complex. This has led
to a growing number of vehicle vulnerabilities and safety issues. It is
therefore particularly important to identify high-risk vehicle intelligent
connected systems, because doing so tells security personnel which systems are
most vulnerable to attack and lets them conduct more thorough inspections and
tests. In this paper, we
develop a new model for vehicle risk assessment, the VSRQ model, by combining
I-FAHP with FCA clustering. We extract important indicators related to vehicle
safety and use fuzzy cluster analysis (FCA) combined with the fuzzy analytic
hierarchy process (FAHP) to mine the vulnerable components of the vehicle
intelligent connected system, then test those vulnerable components with
priority to reduce risks and ensure vehicle safety. We evaluate the model on
OpenPilot and experimentally demonstrate the effectiveness of the VSRQ model in
identifying the safety risks of vehicle intelligent connected systems. The experiment fully
complies with ISO 26262 and ISO/SAE 21434 standards, and our model has a higher
accuracy rate than other models. These results provide a promising new research
direction for predicting the security risks of vehicle intelligent connected
systems and provide typical application tasks for VSRQ. The experimental
results show an accuracy rate of 94.36% and a recall rate of 73.43%, the
latter at least 14.63% higher than other known methods.
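To make the two building blocks named in the abstract concrete, the sketch below pairs a standard AHP weighting step with plain fuzzy c-means clustering. It is a minimal illustration under assumed data, not the authors' I-FAHP/VSRQ implementation; the indicator values, the pairwise judgement matrix, and the two-cluster choice are all hypothetical.

```python
# Minimal sketch: AHP-style weights from a pairwise comparison matrix plus
# fuzzy c-means clustering. All numbers below are hypothetical; this is an
# illustration of the general technique, not the paper's I-FAHP/VSRQ model.
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Priority weights from a reciprocal pairwise comparison matrix,
    via the common geometric-mean approximation."""
    gm = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[1])
    return gm / gm.sum()

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: returns membership matrix U (n x c) and centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        U = 1.0 / dist ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Hypothetical safety indicators (rows: components, columns: indicators).
X = np.array([[0.90, 0.80], [0.20, 0.10], [0.85, 0.70], [0.30, 0.20]])
# Hypothetical judgement: indicator 1 is three times as important as 2.
w = ahp_weights(np.array([[1.0, 3.0], [1.0 / 3.0, 1.0]]))
U, _ = fuzzy_c_means(X, c=2)
print("weighted risk per component:", (X @ w).round(3))
print("fuzzy memberships:\n", U.round(2))
```

Components with high weighted scores and strong membership in the high-risk cluster would be the ones queued for priority testing.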
Related papers
- The Hidden Risks of Large Reasoning Models: A Safety Assessment of R1 [70.94607997570729]
We present a comprehensive safety assessment of OpenAI-o3 and DeepSeek-R1 reasoning models.
We investigate their susceptibility to adversarial attacks, such as jailbreaking and prompt injection, to assess their robustness in real-world applications.
arXiv Detail & Related papers (2025-02-18T09:06:07Z)
- An Anomaly Detection System Based on Generative Classifiers for Controller Area Network [7.537220883022467]
Modern vehicles are susceptible to various types of attacks, enabling attackers to gain control and compromise safety-critical systems.
Several Intrusion Detection Systems (IDSs) have been proposed in the literature to detect such cyber-attacks on vehicles.
This paper introduces a novel generative classifier-based IDS for anomaly detection in automotive networks.
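As a rough illustration of that idea (the summary does not specify the paper's model), the sketch below uses Gaussian Naive Bayes, a simple generative classifier, on synthetic CAN-style features; the features and traffic statistics are invented for the example.

```python
# Sketch of a generative-classifier IDS: fit class-conditional Gaussians over
# CAN-style features (arbitration ID, inter-arrival time). The data and the
# choice of Gaussian Naive Bayes are hypothetical stand-ins, not the paper's.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Normal traffic: ~10 ms frame spacing; injected attack frames arrive faster.
normal = np.column_stack([rng.normal(0x100, 5, 500), rng.normal(0.010, 0.001, 500)])
attack = np.column_stack([rng.normal(0x100, 5, 500), rng.normal(0.001, 0.0005, 500)])
X = np.vstack([normal, attack])
y = np.array([0] * 500 + [1] * 500)        # 0 = benign, 1 = attack

clf = GaussianNB().fit(X, y)               # learns p(features | class) + priors
frame = np.array([[0x101, 0.0012]])        # a suspiciously fast frame
print(clf.predict(frame), clf.predict_proba(frame).round(3))
```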
arXiv Detail & Related papers (2024-12-28T19:59:33Z)
- Agent-SafetyBench: Evaluating the Safety of LLM Agents [72.92604341646691]
We introduce Agent-SafetyBench, a comprehensive benchmark for evaluating the safety of large language model (LLM) agents.
Agent-SafetyBench encompasses 349 interaction environments and 2,000 test cases, evaluating 8 categories of safety risks and covering 10 common failure modes frequently encountered in unsafe interactions.
Our evaluation of 16 popular LLM agents reveals a concerning result: none of the agents achieves a safety score above 60%.
arXiv Detail & Related papers (2024-12-19T02:35:15Z)
- VARS: Vision-based Assessment of Risk in Security Systems [1.433758865948252]
In this study, we perform a comparative analysis of various machine learning and deep learning models to predict danger ratings in a custom dataset of 100 videos.
The danger ratings are classified into three categories: no alert (less than 7) and high alert (greater than or equal to 7).
arXiv Detail & Related papers (2024-10-25T15:47:13Z)
- Passenger hazard perception based on EEG signals for highly automated driving vehicles [23.322910031715583]
This study explores neural mechanisms in passenger-vehicle interactions, leading to the development of a Passenger Cognitive Model (PCM) and the Passenger EEG Decoding Strategy (PEDS).
Central to PEDS is a novel Convolutional Recurrent Neural Network (CRNN) that captures spatial and temporal EEG data patterns.
Our findings highlight the predictive power of pre-event EEG data, enhancing the detection of hazardous scenarios and offering a network-driven framework for safer autonomous vehicles.
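A CRNN of that general shape can be stated compactly; the layer sizes and the 32-channel, 128-sample input window below are assumptions for illustration, not the PEDS architecture.

```python
# Minimal CRNN sketch: convolutions extract spatial patterns across EEG
# channels, a GRU models their temporal evolution. Sizes are hypothetical.
import torch
import torch.nn as nn

class TinyCRNN(nn.Module):
    def __init__(self, channels=32, hidden=64, classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.gru = nn.GRU(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, classes)   # hazard vs. no hazard

    def forward(self, x):                        # x: (batch, channels, time)
        f = self.conv(x).transpose(1, 2)         # -> (batch, time, features)
        _, h = self.gru(f)                       # final hidden state
        return self.head(h[-1])

logits = TinyCRNN()(torch.randn(8, 32, 128))     # 8 pre-event EEG windows
print(logits.shape)                              # torch.Size([8, 2])
```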
arXiv Detail & Related papers (2024-08-29T07:32:30Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- ASSERT: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models [65.79770974145983]
ASSERT, Automated Safety Scenario Red Teaming, consists of three methods -- semantically aligned augmentation, target bootstrapping, and adversarial knowledge injection.
We partition our prompts into four safety domains for a fine-grained analysis of how the domain affects model performance.
We find statistically significant performance differences of up to 11% in absolute classification accuracy among semantically related scenarios and error rates of up to 19% absolute error in zero-shot adversarial settings.
arXiv Detail & Related papers (2023-10-14T17:10:28Z)
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
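The notion can be illustrated with a toy one-dimensional car-following model: perturb the nominal plan by growing amounts and record the smallest perturbation that produces a collision. The dynamics, parameter values, and the choice of added reaction delay as the deviation are illustrative assumptions, not the paper's framework.

```python
# Toy counterfactual safety margin: smallest deviation from nominal braking
# (modeled as added reaction delay) that leads to a collision. Illustrative.
import numpy as np

def collides(brake_delay, gap0=20.0, v_ego=15.0, v_lead=10.0, decel=6.0, dt=0.1):
    """Ego follows a constant-speed lead car and brakes after brake_delay s."""
    gap, v, t = gap0, v_ego, 0.0
    while v > 0.0:
        a = -decel if t >= brake_delay else 0.0
        v = max(0.0, v + a * dt)
        gap += (v_lead - v) * dt
        t += dt
        if gap <= 0.0:
            return True
    return False

deviations = np.arange(0.0, 5.0, 0.05)
margin = next(d for d in deviations if collides(d))
print(f"counterfactual safety margin ~ {margin:.2f} s of added delay")
```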
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
- Co-Design of Out-of-Distribution Detectors for Autonomous Emergency Braking Systems [4.406331747636832]
Learning-enabled components (LECs) make incorrect decisions when presented with samples outside of their training distributions.
Out-of-distribution (OOD) detectors have been proposed to detect such samples, thereby acting as a safety monitor.
We formulate a co-design methodology that uses a risk model to find design parameters for an OOD detector and LEC that decrease risk below that of the baseline system.
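A skeletal version of such a safety monitor, using a Mahalanobis-distance OOD score with a percentile threshold as the tunable design parameter, is sketched below; the score, threshold rule, and data are simplifications of mine, not the paper's risk model.

```python
# Skeletal OOD safety monitor: flag inputs far (in Mahalanobis distance) from
# the training distribution. The threshold is the tunable design "knob" here.
import numpy as np

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, size=(1000, 4))      # in-distribution features
mu = train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train, rowvar=False))

def ood_score(x):
    d = x - mu
    return float(d @ cov_inv @ d)                 # squared Mahalanobis distance

scores = np.array([ood_score(x) for x in train])
tau = np.percentile(scores, 99)                   # ~1% false-alarm budget

sample = rng.normal(4.0, 1.0, size=4)             # shifted input, likely OOD
print(ood_score(sample) > tau)                    # True -> trigger safe fallback
```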
arXiv Detail & Related papers (2023-07-25T11:38:40Z)
- Evolving Testing Scenario Generation Method and Intelligence Evaluation Framework for Automated Vehicles [12.670180834651912]
This paper proposes an evolving scenario generation method that utilizes deep reinforcement learning (DRL) to create human-like background vehicles (BVs) for testing and intelligence evaluation of automated vehicles (AVs).
The results demonstrate that the proposed evolving scenario exhibits the highest level of complexity compared to other baseline scenarios and has more than 85% similarity to naturalistic driving data.
arXiv Detail & Related papers (2023-06-12T14:26:12Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
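The "safety projection" half of that combination reduces, in its simplest form, to projecting a proposed action onto the set of actions whose one-step outcome satisfies a state-wise constraint. The linear velocity model and limits below are illustrative assumptions, not USL or SafeRL-Kit code.

```python
# Simplest possible safety projection: clip the policy's acceleration so the
# one-step velocity stays within a hard limit. Model and numbers are assumed.
import numpy as np

V_MAX, DT = 20.0, 0.1   # hypothetical speed limit (m/s) and timestep (s)

def project_action(v, a_proposed, a_bounds=(-5.0, 5.0)):
    """Closest feasible action: next velocity v + a*DT must not exceed V_MAX."""
    a_safe_max = (V_MAX - v) / DT        # any larger acceleration violates
    return float(np.clip(a_proposed, a_bounds[0], min(a_bounds[1], a_safe_max)))

print(project_action(v=19.8, a_proposed=4.0))   # 4.0 is clipped down to 2.0
```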
arXiv Detail & Related papers (2022-12-12T06:30:17Z)