Automating Function-Level TARA for Automotive Full-Lifecycle Security
- URL: http://arxiv.org/abs/2504.18083v1
- Date: Fri, 25 Apr 2025 05:19:02 GMT
- Title: Automating Function-Level TARA for Automotive Full-Lifecycle Security
- Authors: Yuqiao Yang, Yongzhao Zhang, Wenhao Liu, Jun Li, Pengtao Shi, DingYu Zhong, Jie Yang, Ting Chen, Sheng Cao, Yuntao Ren, Yongyue Wu, Xiaosong Zhang
- Abstract summary: This paper introduces DefenseWeaver, the first system that automates function-level TARA using component-specific details and large language models (LLMs). DefenseWeaver dynamically generates attack trees and risk evaluations from system configurations described in an extended OpenXSAM++ format. We validated DefenseWeaver through deployment in four automotive security projects, where it identified 11 critical attack paths. In comparison to human experts, DefenseWeaver outperformed manual attack tree generation across six assessment scenarios.
- Score: 21.31114134291877
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As modern vehicles evolve into intelligent and connected systems, their growing complexity introduces significant cybersecurity risks. Threat Analysis and Risk Assessment (TARA) has therefore become essential for managing these risks under mandatory regulations. However, existing TARA automation methods rely on static threat libraries, limiting their utility in the detailed, function-level analyses demanded by industry. This paper introduces DefenseWeaver, the first system that automates function-level TARA using component-specific details and large language models (LLMs). DefenseWeaver dynamically generates attack trees and risk evaluations from system configurations described in an extended OpenXSAM++ format, then employs a multi-agent framework to coordinate specialized LLM roles for more robust analysis. To further adapt to evolving threats and diverse standards, DefenseWeaver incorporates Low-Rank Adaptation (LoRA) fine-tuning and Retrieval-Augmented Generation (RAG) with expert-curated TARA reports. We validated DefenseWeaver through deployment in four automotive security projects, where it identified 11 critical attack paths that were verified through penetration testing and subsequently reported to, and remediated by, the relevant automakers and suppliers. DefenseWeaver also demonstrated cross-domain adaptability, transferring successfully to unmanned aerial vehicles (UAVs) and marine navigation systems. In comparison with human experts, DefenseWeaver outperformed manual attack tree generation across six assessment scenarios. Integrated into commercial cybersecurity platforms at companies such as UAES and Xiaomi, DefenseWeaver has generated over 8,200 attack trees. These results highlight its ability to significantly reduce processing time, its scalability, and its transformative impact on cybersecurity across industries.
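The abstract describes a concrete pipeline: component-specific configuration in, attack tree and risk evaluation out, produced by cooperating LLM roles. Since the paper's code is not included here, the following minimal Python sketch only illustrates that shape; the function names, prompts, JSON schema, and two-role split are assumptions of mine, not DefenseWeaver's implementation, and `llm` stands in for whatever chat-completion client is available.

```python
# Hypothetical sketch of a function-level TARA pipeline in the spirit of the
# abstract: component config in, attack tree + feasibility ratings out, with
# two cooperating LLM "roles". None of these names are DefenseWeaver's.
import json
from dataclasses import dataclass, field

@dataclass
class AttackNode:
    """One attack-tree node: a goal refined into sub-steps."""
    goal: str
    children: list["AttackNode"] = field(default_factory=list)
    feasibility: str = "unrated"  # e.g. an ISO/SAE 21434 feasibility level

def llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for any chat-completion call (hosted API, local model)."""
    raise NotImplementedError("plug in an LLM client here")

def _from_json(d: dict) -> AttackNode:
    return AttackNode(
        goal=d["goal"],
        children=[_from_json(c) for c in d.get("children", [])],
        feasibility=d.get("feasibility", "unrated"),
    )

def _to_json(n: AttackNode) -> dict:
    return {"goal": n.goal, "feasibility": n.feasibility,
            "children": [_to_json(c) for c in n.children]}

def generate_attack_tree(component: dict) -> AttackNode:
    """'Analyst' role: propose an attack tree for one component function."""
    reply = llm(
        "You are an automotive security analyst. Given a component "
        "description, return an attack tree as nested JSON with the schema "
        '{"goal": str, "children": [...]}.',
        json.dumps(component),
    )
    return _from_json(json.loads(reply))

def review_attack_tree(component: dict, tree: AttackNode) -> AttackNode:
    """'Reviewer' role: prune implausible branches, rate feasibility."""
    reply = llm(
        "You are a TARA reviewer. Remove implausible branches and add a "
        '"feasibility" rating ("high"/"medium"/"low") to every node. '
        "Return the same JSON schema.",
        json.dumps({"component": component, "tree": _to_json(tree)}),
    )
    return _from_json(json.loads(reply))

# Usage: one function of one component, described as a plain dict (an
# OpenXSAM++-style configuration would be parsed into something like this).
component = {"name": "telematics_unit",
             "function": "remote door unlock",
             "interfaces": ["cellular", "CAN"]}
# tree = review_attack_tree(component, generate_attack_tree(component))
```

A production system would add the pieces the abstract names but this sketch omits: LoRA-fine-tuned models, RAG over expert-curated TARA reports, and validation of the LLM's JSON output against the extended OpenXSAM++ schema.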
Related papers
- Safety and Security Risk Mitigation in Satellite Missions via Attack-Fault-Defense Trees [2.252059459291148]
This work presents a case study from Ascentio Technologies, a mission-critical system company in Argentina specializing in aerospace.
The main focus is the Ground Segment of the satellite project currently being developed by the company.
This paper showcases the application of the Attack-Fault-Defense Tree framework, which integrates attack trees, fault trees, and defense mechanisms into a unified model (a minimal sketch of such a unified tree appears after this list).
arXiv Detail & Related papers (2025-04-01T17:24:43Z)
- LLM-Assisted Proactive Threat Intelligence for Automated Reasoning [2.0427650128177]
This research presents a novel approach to enhance real-time cybersecurity threat detection and response.
We integrate large language models (LLMs) and Retrieval-Augmented Generation (RAG) systems with continuous threat intelligence feeds (a minimal retrieval sketch in this spirit appears after this list).
arXiv Detail & Related papers (2025-04-01T05:19:33Z)
- Robust Intrusion Detection System with Explainable Artificial Intelligence [0.0]
Adversarial input can exploit machine learning (ML) models through standard interfaces.
Conventional defenses such as adversarial training are costly in computational terms and often fail to provide real-time detection.
We suggest a novel strategy for detecting and mitigating adversarial attacks using eXplainable Artificial Intelligence (XAI).
arXiv Detail & Related papers (2025-03-07T10:31:59Z)
- Guardians of the Agentic System: Preventing Many Shots Jailbreak with Agentic System [0.8136541584281987]
This work uses three examination methods: it detects rogue agents through a Reverse Turing Test and analyzes deceptive alignment through multi-agent simulations.
We develop an anti-jailbreaking system and test it with the GEMINI 1.5 pro, llama-3.3-70B, and deepseek r1 models.
Detection is strong (94% accuracy for GEMINI 1.5 pro), yet the system shows persistent vulnerabilities under prolonged attacks.
arXiv Detail & Related papers (2025-02-23T23:35:15Z)
- SPADE: Enhancing Adaptive Cyber Deception Strategies with Generative AI and Structured Prompt Engineering [0.17999333451993949]
This study leverages Generative AI (GenAI) models to automate the creation of adaptive cyber deception ploys.
We introduce a systematic framework (SPADE) to address the inherent challenges that large language models pose for adaptive deception.
arXiv Detail & Related papers (2025-01-01T19:44:30Z)
- Exploring the Adversarial Vulnerabilities of Vision-Language-Action Models in Robotics [68.36528819227641]
This paper systematically quantifies the robustness of VLA-based robotic systems.
We introduce two untargeted attack objectives that leverage spatial foundations to destabilize robotic actions, and a targeted attack objective that manipulates the robotic trajectory.
We design an adversarial patch generation approach that places a small, colorful patch within the camera's view, effectively executing the attack in both digital and physical environments.
arXiv Detail & Related papers (2024-11-18T01:52:20Z)
- Defining and Evaluating Physical Safety for Large Language Models [62.4971588282174]
Large Language Models (LLMs) are increasingly used to control robotic systems such as drones.
The risks they pose of causing physical threats and harm in real-world applications remain unexplored.
We classify the physical safety risks of drones into four categories: (1) human-targeted threats, (2) object-targeted threats, (3) infrastructure attacks, and (4) regulatory violations.
arXiv Detail & Related papers (2024-11-04T17:41:25Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- From Sands to Mansions: Towards Automated Cyberattack Emulation with Classical Planning and Large Language Models [10.557417449327868]
There is a pressing need for a comprehensive and up-to-date cyberattack dataset to support threat-informed defense.
We propose Aurora, a system that autonomously emulates cyberattacks using third-party attack tools and threat intelligence reports.
We utilize Aurora to create a dataset containing over 1,000 attack chains.
arXiv Detail & Related papers (2024-07-24T01:33:57Z)
- Physical Backdoor Attack can Jeopardize Driving with Vision-Large-Language Models [53.701148276912406]
Vision-Large-Language models (VLMs) hold great promise for application in autonomous driving.
BadVLMDriver is the first backdoor attack against VLMs for autonomous driving that can be launched in practice using physical objects.
BadVLMDriver achieves a 92% attack success rate in inducing sudden acceleration when the vehicle encounters a pedestrian holding a red balloon.
arXiv Detail & Related papers (2024-04-19T14:40:38Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to attack these systems, developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study black-box adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
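Referring back to the Attack-Fault-Defense Tree entry above: one way to picture such a unified model is a single recursive node type that carries attack steps, fault events, AND/OR gates, and attached defenses. The sketch below is my own illustrative simplification, not the cited paper's formalism.

```python
# Illustrative sketch of a unified Attack-Fault-Defense tree. The node
# structure and the coverage rule are assumptions for illustration only.
from dataclasses import dataclass, field
from enum import Enum

class Kind(Enum):
    ATTACK = "attack"    # deliberate adversarial step
    FAULT = "fault"      # accidental failure event
    DEFENSE = "defense"  # countermeasure attached to another node

class Gate(Enum):
    AND = "and"  # all children must occur
    OR = "or"    # any one child suffices

@dataclass
class AFDNode:
    label: str
    kind: Kind
    gate: Gate = Gate.OR
    children: list["AFDNode"] = field(default_factory=list)
    defenses: list["AFDNode"] = field(default_factory=list)

def uncovered(node: AFDNode) -> bool:
    """True if this node can still occur despite attached defenses.

    Strong simplification: any attached defense fully mitigates a node;
    otherwise an OR node needs one viable child, an AND node needs all.
    """
    if node.defenses:
        return False
    if not node.children:
        return True
    viable = [uncovered(c) for c in node.children]
    return all(viable) if node.gate is Gate.AND else any(viable)

# Usage: a ground-segment compromise reachable via either a spoofed
# telecommand (unmitigated) or an operator-console fault (defended).
root = AFDNode("ground segment compromised", Kind.ATTACK, Gate.OR, [
    AFDNode("spoofed telecommand accepted", Kind.ATTACK),
    AFDNode("operator console failure", Kind.FAULT,
            defenses=[AFDNode("redundant console", Kind.DEFENSE)]),
])
print(uncovered(root))  # True: the spoofed-telecommand path remains open
```

The `uncovered` check deliberately treats any attached defense as fully effective; a real analysis would weigh defense effectiveness and combine safety (fault) and security (attack) likelihoods.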
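Similarly, the LLM-assisted threat-intelligence entry pairs an LLM with RAG over continuous feeds. A minimal retrieval-and-prompt step might look like the sketch below; `embed`, the in-memory corpus, and the prompt wording are placeholders (any embedding model and vector index would do), not the paper's actual stack.

```python
# Minimal RAG sketch over threat-intelligence snippets. `embed` is a
# placeholder for any sentence-embedding model; this is not the cited
# paper's pipeline, only the general retrieve-then-prompt pattern.
import math

def embed(text: str) -> list[float]:
    """Placeholder: return a fixed-size embedding for `text`."""
    raise NotImplementedError("plug in an embedding model here")

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)) + 1e-12)

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Return the k feed snippets most similar to the analyst's query."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Ground the LLM's answer in retrieved threat intelligence."""
    context = "\n".join(f"- {snippet}" for snippet in retrieve(query, corpus))
    return (f"Using only the threat intelligence below, answer: {query}\n"
            f"{context}")
```

In a live system, the corpus would be refreshed continuously from the intelligence feeds and the built prompt passed to the LLM for detection and response decisions.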
This list is automatically generated from the titles and abstracts of the papers on this site.