VSRQ: Quantitative Assessment Method for Safety Risk of Vehicle
Intelligent Connected System
- URL: http://arxiv.org/abs/2305.01898v1
- Date: Wed, 3 May 2023 05:08:56 GMT
- Title: VSRQ: Quantitative Assessment Method for Safety Risk of Vehicle
Intelligent Connected System
- Authors: Tian Zhang, Wenshan Guan, Hao Miao, Xiujie Huang, Zhiquan Liu, Chaonan
Wang, Quanlong Guan, Liangda Fang, Zhifei Duan
- Abstract summary: We develop a new model for vehicle risk assessment by combining I-FAHP with FCA clustering: VSRQ model.
We evaluate the model on OpenPilot and experimentally demonstrate the effectiveness of the VSRQ model in identifying the safety of vehicle intelligent connected systems.
- Score: 6.499974038759507
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The field of intelligent connected technology in modern vehicles
continues to expand, and vehicle functions grow ever more complex as the
technology develops. This has led to an increasing number of vehicle
vulnerabilities and many safety issues. It is therefore particularly important
to identify high-risk vehicle intelligent connected systems, because doing so
tells security personnel which systems are most vulnerable to attack, allowing
them to conduct more thorough inspections and tests. In this paper, we develop
a new model for vehicle risk assessment, the VSRQ model, by combining I-FAHP
with FCA clustering. We extract important indicators related to vehicle
safety, use fuzzy cluster analysis (FCA) combined with the fuzzy analytic
hierarchy process (FAHP) to mine the vulnerable components of the vehicle
intelligent connected system, and prioritize testing of those vulnerable
components to reduce risk and ensure vehicle safety. We evaluate the model on
OpenPilot and experimentally demonstrate the effectiveness of the VSRQ model
in identifying the safety of vehicle intelligent connected systems. The
experiments fully comply with the ISO 26262 and ISO/SAE 21434 standards, and
our model achieves a higher accuracy rate than other models. These results
provide a promising new research direction for predicting the security risks
of vehicle intelligent connected systems and provide typical application
tasks for VSRQ. The experimental results show an accuracy rate of 94.36% and
a recall rate of 73.43%, at least 14.63% higher than all other known
indicators.
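As a rough illustration of how hierarchy-process weighting and fuzzy clustering can be combined to rank components by risk, the sketch below substitutes a crisp geometric-mean AHP weighting and a textbook fuzzy c-means for the paper's I-FAHP and FCA; the indicator names, pairwise comparison values, component names, and scores are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: rank intelligent connected system components by risk
# using AHP-style criteria weights plus fuzzy c-means clustering.
# All numbers and names below are illustrative, not taken from the paper.
import numpy as np

def ahp_weights(pairwise):
    """Criteria weights from a pairwise comparison matrix
    (geometric-mean method, a crisp stand-in for I-FAHP)."""
    gm = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[1])
    return gm / gm.sum()

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    """Basic fuzzy c-means; returns the membership matrix U (n x c) and the centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Illustrative pairwise comparison of three safety indicators
# (e.g. attack surface, impact severity, exposure), Saaty-style 1-9 scale.
pairwise = np.array([[1.0, 3.0, 5.0],
                     [1/3, 1.0, 2.0],
                     [1/5, 1/2, 1.0]])
w = ahp_weights(pairwise)

# Illustrative indicator scores (rows: components, columns: indicators).
components = ["CAN gateway", "camera driver", "planner", "telematics unit"]
X = np.array([[0.9, 0.8, 0.7],
              [0.4, 0.5, 0.3],
              [0.6, 0.9, 0.5],
              [0.8, 0.6, 0.9]])

U, centers = fuzzy_cmeans(X, c=2)
high_risk = np.argmax(centers @ w)        # cluster whose center scores highest
risk = (X @ w) * U[:, high_risk]          # weighted score scaled by membership

for name, score in sorted(zip(components, risk), key=lambda t: -t[1]):
    print(f"{name}: risk score {score:.3f}")
```

In this toy version the cluster whose center has the highest weighted indicator score is treated as the high-risk cluster, and each component's weighted score is scaled by its membership in that cluster; the paper's actual fusion of I-FAHP weights with FCA may combine these signals differently.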
Related papers
- VARS: Vision-based Assessment of Risk in Security Systems [1.433758865948252]
In this study, we perform a comparative analysis of various machine learning and deep learning models to predict danger ratings on a custom dataset of 100 videos.
The danger ratings are classified into three categories: no alert (less than 7) and high alert (greater than or equal to 7).
arXiv Detail & Related papers (2024-10-25T15:47:13Z) - On the Role of Attention Heads in Large Language Model Safety [64.51534137177491]
Large language models (LLMs) achieve state-of-the-art performance on multiple language tasks, yet their safety guardrails can be circumvented.
We propose a novel metric tailored for multi-head attention, the Safety Head ImPortant Score (Ships), to assess individual heads' contributions to model safety.
arXiv Detail & Related papers (2024-10-17T16:08:06Z) - Passenger hazard perception based on EEG signals for highly automated driving vehicles [23.322910031715583]
This study explores neural mechanisms in passenger-vehicle interactions, leading to the development of a Passenger Cognitive Model (PCM) and the Passenger EEG Decoding Strategy (PEDS).
Central to PEDS is a novel Convolutional Recurrent Neural Network (CRNN) that captures spatial and temporal EEG data patterns.
Our findings highlight the predictive power of pre-event EEG data, enhancing the detection of hazardous scenarios and offering a network-driven framework for safer autonomous vehicles.
arXiv Detail & Related papers (2024-08-29T07:32:30Z) - What Makes and Breaks Safety Fine-tuning? A Mechanistic Study [64.9691741899956]
Safety fine-tuning helps align Large Language Models (LLMs) with human preferences for their safe deployment.
We design a synthetic data generation framework that captures salient aspects of an unsafe input.
Using this, we investigate three well-known safety fine-tuning methods.
arXiv Detail & Related papers (2024-07-14T16:12:57Z) - ASSERT: Automated Safety Scenario Red Teaming for Evaluating the
Robustness of Large Language Models [65.79770974145983]
ASSERT, Automated Safety Scenario Red Teaming, consists of three methods -- semantically aligned augmentation, target bootstrapping, and adversarial knowledge injection.
We partition our prompts into four safety domains for a fine-grained analysis of how the domain affects model performance.
We find statistically significant performance differences of up to 11% in absolute classification accuracy among semantically related scenarios, and error rates of up to 19% absolute in zero-shot adversarial settings.
arXiv Detail & Related papers (2023-10-14T17:10:28Z) - Identifying the Risks of LM Agents with an LM-Emulated Sandbox [68.26587052548287]
Language Model (LM) agents and tools enable a rich set of capabilities but also amplify potential risks.
The high cost of testing these agents will make it increasingly difficult to find high-stakes, long-tailed risks.
We introduce ToolEmu: a framework that uses an LM to emulate tool execution and enables the testing of LM agents against a diverse range of tools and scenarios.
arXiv Detail & Related papers (2023-09-25T17:08:02Z) - A Counterfactual Safety Margin Perspective on the Scoring of Autonomous
Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision (a minimal numeric sketch of this notion appears after this list).
arXiv Detail & Related papers (2023-08-02T09:48:08Z) - Co-Design of Out-of-Distribution Detectors for Autonomous Emergency
Braking Systems [4.406331747636832]
Learning-enabled components (LECs) make incorrect decisions when presented with samples outside of their training distributions.
Out-of-distribution (OOD) detectors have been proposed to detect such samples, thereby acting as a safety monitor.
We formulate a co-design methodology that uses this risk model to find the design parameters for an OOD detector and LEC that decrease risk below that of the baseline system.
arXiv Detail & Related papers (2023-07-25T11:38:40Z) - Evolving Testing Scenario Generation Method and Intelligence Evaluation
Framework for Automated Vehicles [12.670180834651912]
This paper proposes an evolving scenario generation method that utilizes deep reinforcement learning (DRL) to create human-like background vehicles (BVs) for testing and intelligence evaluation of automated vehicles (AVs).
The results demonstrate that the proposed evolving scenario exhibits the highest level of complexity compared to other baseline scenarios and has more than 85% similarity to naturalistic driving data.
arXiv Detail & Related papers (2023-06-12T14:26:12Z) - Evaluating Model-free Reinforcement Learning toward Safety-critical
Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z) - MTH-IDS: A Multi-Tiered Hybrid Intrusion Detection System for Internet
of Vehicles [12.280524044112708]
A hybrid intrusion detection system (IDS) is proposed to detect both known and unknown attacks on vehicular networks.
The proposed system can detect various types of known attacks with 99.99% accuracy on the CAN-intrusion-dataset.
The average processing time of each data packet on a vehicle-level machine is less than 0.6 ms.
arXiv Detail & Related papers (2021-05-26T17:36:35Z)
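Returning to the counterfactual safety margin mentioned in the entry above: the notion can be made concrete with a short sketch that searches, by bisection, for the smallest deviation from nominal behavior that produces a collision. The rollout function, trajectory representation, and collision test below are hypothetical placeholders, not that paper's implementation.

```python
# Hypothetical illustration of a counterfactual safety margin: the smallest
# deviation from the nominal behavior that leads to a collision, found by
# bisection over the deviation magnitude. `rollout_collides` stands in for a
# real scenario simulator.
import numpy as np

def rollout_collides(nominal, deviation, direction):
    """Placeholder: True if perturbing the nominal behavior by `deviation`
    along `direction` closes the headway gap below a collision threshold."""
    perturbed = nominal + deviation * direction
    return np.min(perturbed) < 0.5

def counterfactual_safety_margin(nominal, direction, hi=10.0, tol=1e-3):
    """Bisection for the minimum collision-causing deviation magnitude."""
    lo = 0.0
    if not rollout_collides(nominal, hi, direction):
        return float("inf")   # no collision even at the largest deviation tried
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rollout_collides(nominal, mid, direction):
            hi = mid
        else:
            lo = mid
    return hi

nominal_gaps = np.array([3.0, 2.5, 2.0, 2.2])   # toy headway profile (metres)
direction = -np.ones_like(nominal_gaps)          # deviation that shrinks every gap
print(counterfactual_safety_margin(nominal_gaps, direction))  # ~1.5 in this toy case
```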