A Large Language Model-Supported Threat Modeling Framework for Transportation Cyber-Physical Systems
- URL: http://arxiv.org/abs/2506.00831v1
- Date: Sun, 01 Jun 2025 04:33:34 GMT
- Title: A Large Language Model-Supported Threat Modeling Framework for Transportation Cyber-Physical Systems
- Authors: M Sabbir Salek, Mashrur Chowdhury, Muhaimin Bin Munir, Yuchen Cai, Mohammad Imtiaz Hasan, Jean-Michel Tine, Latifur Khan, Mizanur Rahman
- Abstract summary: TraCR-TMF (Transportation Cybersecurity and Resiliency Threat Modeling Framework) is a large language model (LLM)-based framework that minimizes expert intervention. It identifies threats, potential attack techniques, and corresponding countermeasures by leveraging the MITRE ATT&CK matrix. TraCR-TMF also maps attack paths to critical assets by analyzing vulnerabilities using a customized LLM.
- Score: 11.872361272221244
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Modern transportation systems rely on cyber-physical systems (CPS), where cyber systems interact seamlessly with physical systems like transportation-related sensors and actuators to enhance safety, mobility, and energy efficiency. However, growing automation and connectivity increase exposure to cyber vulnerabilities. Existing threat modeling frameworks for transportation CPS are often limited in scope, resource-intensive, and dependent on significant cybersecurity expertise. To address these gaps, we present TraCR-TMF (Transportation Cybersecurity and Resiliency Threat Modeling Framework), a large language model (LLM)-based framework that minimizes expert intervention. TraCR-TMF identifies threats, potential attack techniques, and corresponding countermeasures by leveraging the MITRE ATT&CK matrix through three LLM-based approaches: (i) a retrieval-augmented generation (RAG) method requiring no expert input, (ii) an in-context learning approach requiring low expert input, and (iii) a supervised fine-tuning method requiring moderate expert input. TraCR-TMF also maps attack paths to critical assets by analyzing vulnerabilities using a customized LLM. The framework was evaluated in two scenarios. First, it identified relevant attack techniques across transportation CPS applications, with 90% precision as validated by experts. Second, using a fine-tuned LLM, it successfully predicted multiple exploitations including lateral movement, data exfiltration, and ransomware-related encryption that occurred during a major real-world cyberattack incident. These results demonstrate TraCR-TMF's effectiveness in CPS threat modeling, its reduced reliance on cybersecurity expertise, and its adaptability across CPS domains.
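The abstract's first approach, RAG-based technique identification with no expert input, amounts to retrieving the MITRE ATT&CK techniques most relevant to a given CPS asset description before an LLM reasons over them. The following is a minimal, self-contained sketch of that retrieval step only; the technique summaries, the Jaccard-overlap scorer, and the function names are illustrative stand-ins, not the paper's actual corpus or retriever.

```python
# Toy knowledge base: a few real MITRE ATT&CK technique IDs paired with
# short, hand-written summaries (illustrative, not the official text).
ATTACK_TECHNIQUES = {
    "T1557": "adversary-in-the-middle interception of network traffic",
    "T1486": "data encrypted for impact ransomware encryption of files",
    "T1041": "exfiltration over command and control channel",
    "T1021": "remote services used for lateral movement between hosts",
}

def tokenize(text: str) -> set[str]:
    """Lowercase whitespace tokenization -- a stand-in for a real embedder."""
    return set(text.lower().split())

def rank_techniques(asset_description: str, top_k: int = 2) -> list[str]:
    """Rank technique IDs by Jaccard overlap with the asset description.

    In a full RAG pipeline, the top-k technique texts would be placed in
    the LLM prompt as retrieved context; here we only return their IDs.
    """
    query = tokenize(asset_description)
    scores = {}
    for tid, summary in ATTACK_TECHNIQUES.items():
        doc = tokenize(summary)
        scores[tid] = len(query & doc) / len(query | doc)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

if __name__ == "__main__":
    asset = "roadside unit streaming traffic data over a network channel"
    print(rank_techniques(asset))
```

A production version would swap the lexical overlap for dense embeddings over the full ATT&CK corpus, but the control flow (score candidates against the asset description, pass the top-k to the model) is the same.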
Related papers
- A Systematization of Security Vulnerabilities in Computer Use Agents [1.3560089220432787]
We conduct a systematic threat analysis and testing of real-world CUAs under adversarial conditions. We identify seven classes of risks unique to the CUA paradigm and analyze three concrete exploit scenarios in depth. These case studies reveal deeper architectural flaws across current CUA implementations.
arXiv Detail & Related papers (2025-07-07T19:50:21Z) - Exploring Traffic Simulation and Cybersecurity Strategies Using Large Language Models [5.757331432762268]
This study presents a novel multi-agent framework to enhance traffic simulation and cybersecurity testing. The framework automates the creation of traffic scenarios, the design of cyberattack strategies, and the development of defense mechanisms. Results show a 10.2 percent increase in travel time during an attack, which is reduced by 3.3 percent with the defense strategy.
arXiv Detail & Related papers (2025-06-20T02:41:23Z) - Towards Secure MLOps: Surveying Attacks, Mitigation Strategies, and Research Challenges [4.6592774515395465]
We present a systematic application of the MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) framework to assess attacks across different phases of the MLOps ecosystem. We then present a structured taxonomy of attack techniques explicitly mapped to corresponding phases of the MLOps ecosystem. This is followed by a taxonomy of mitigation strategies aligned with these attack categories, offering actionable early-stage defenses to strengthen the security of the MLOps ecosystem.
arXiv Detail & Related papers (2025-05-30T17:45:31Z) - A Proposal for Evaluating the Operational Risk for ChatBots based on Large Language Models [39.58317527488534]
We propose a novel, instrumented risk-assessment metric that simultaneously evaluates potential threats to three key stakeholders. To validate our metric, we leverage Garak, an open-source framework for vulnerability testing. Results underscore the importance of multi-dimensional risk assessments in operationalizing secure, reliable AI-driven conversational systems.
arXiv Detail & Related papers (2025-05-07T20:26:45Z) - Llama-3.1-FoundationAI-SecurityLLM-Base-8B Technical Report [50.268821168513654]
We present Foundation-Sec-8B, a cybersecurity-focused large language model (LLM) built on the Llama 3.1 architecture. We evaluate it across both established and new cybersecurity benchmarks, showing that it matches Llama 3.1-70B and GPT-4o-mini in certain cybersecurity-specific tasks. By releasing our model to the public, we aim to accelerate progress and adoption of AI-driven tools in both public and private cybersecurity contexts.
arXiv Detail & Related papers (2025-04-28T08:41:12Z) - Enhancing Network Security Management in Water Systems using FM-based Attack Attribution [43.48086726793515]
We propose a novel model-agnostic Factorization Machines (FM)-based approach that capitalizes on water system sensor-actuator interactions to provide granular explanations and attributions for cyber attacks. In multi-feature cyber attack scenarios involving intricate sensor-actuator interactions, our FM-based attack attribution method effectively ranks attack root causes, achieving approximately 20% average improvement over SHAP and LEMNA.
arXiv Detail & Related papers (2025-03-03T06:52:00Z) - Cyber Defense Reinvented: Large Language Models as Threat Intelligence Copilots [36.809323735351825]
CYLENS is a cyber threat intelligence copilot powered by large language models (LLMs). It is designed to assist security professionals throughout the entire threat management lifecycle, supporting threat attribution, contextualization, detection, correlation, prioritization, and remediation.
arXiv Detail & Related papers (2025-02-28T07:16:09Z) - Simulation of Multi-Stage Attack and Defense Mechanisms in Smart Grids [2.0766068042442174]
We introduce a simulation environment that replicates the power grid's infrastructure and communication dynamics. The framework generates diverse, realistic attack data to train machine learning algorithms for detecting and mitigating cyber threats. It also provides a controlled, flexible platform to evaluate emerging security technologies, including advanced decision support systems.
arXiv Detail & Related papers (2024-12-09T07:07:17Z) - Global Challenge for Safe and Secure LLMs Track 1 [57.08717321907755]
This paper introduces the Global Challenge for Safe and Secure Large Language Models (LLMs), a pioneering initiative organized by AI Singapore (AISG) and the CyberSG R&D Programme Office (CRPO) to foster the development of advanced defense mechanisms against automated jailbreaking attacks.
arXiv Detail & Related papers (2024-11-21T08:20:31Z) - Countering Autonomous Cyber Threats [40.00865970939829]
Foundation Models present dual-use concerns broadly and within the cyber domain specifically.
Recent research has shown the potential for these advanced models to inform or independently execute offensive cyberspace operations.
This work evaluates several state-of-the-art FMs on their ability to compromise machines in an isolated network and investigates defensive mechanisms to defeat such AI-powered attacks.
arXiv Detail & Related papers (2024-10-23T22:46:44Z) - Proactive security defense: cyber threat intelligence modeling for connected autonomous vehicles [18.956583063203738]
We propose an automotive CTI modeling framework, Actim, to extract and analyse the relationships among cyber threat elements.
Experiment results show that the proposed BERT-DocHiatt-BiLSTM-LSTM model exceeds the performance of existing methods.
arXiv Detail & Related papers (2024-10-21T13:49:35Z) - Toward Mixture-of-Experts Enabled Trustworthy Semantic Communication for 6G Networks [82.3753728955968]
We introduce a novel Mixture-of-Experts (MoE)-based SemCom system.
This system comprises a gating network and multiple experts, each specializing in different security challenges.
The gating network adaptively selects suitable experts to counter heterogeneous attacks based on user-defined security requirements.
A case study in vehicular networks demonstrates the efficacy of the MoE-based SemCom system.
arXiv Detail & Related papers (2024-09-24T03:17:51Z) - Security Modelling for Cyber-Physical Systems: A Systematic Literature Review [7.3347982474177185]
Cyber-physical systems (CPS) are at the intersection of digital technology and engineering domains.
Prominent cybersecurity attacks on CPS have brought attention to the vulnerability of these systems.
This literature review delves into state-of-the-art research in CPS security modelling, encompassing both threat and attack modelling.
arXiv Detail & Related papers (2024-04-11T07:41:36Z) - Simulating Malicious Attacks on VANETs for Connected and Autonomous Vehicle Cybersecurity: A Machine Learning Dataset [0.4129225533930965]
Connected and Autonomous Vehicles (CAVs) rely on Vehicular Adhoc Networks with wireless communication between vehicles and roadside infrastructure to support safe operation.
However, cybersecurity attacks pose a threat to VANETs and the safe operation of CAVs.
This study proposes the use of simulation for modelling typical communication scenarios which may be subject to malicious attacks.
arXiv Detail & Related papers (2022-02-15T20:08:58Z) - A Framework for Evaluating the Cybersecurity Risk of Real World, Machine Learning Production Systems [41.470634460215564]
We develop an extension to the MulVAL attack graph generation and analysis framework to incorporate cyberattacks on ML production systems.
Using the proposed extension, security practitioners can apply attack graph analysis methods in environments that include ML components.
arXiv Detail & Related papers (2021-07-05T05:58:11Z) - Reinforcement Learning for Feedback-Enabled Cyber Resilience [24.92055101652206]
Cyber resilience provides a new security paradigm that complements inadequate protection with resilience mechanisms.
A Cyber-Resilient Mechanism (CRM) adapts to known or zero-day threats and uncertainties in real time.
We review the literature on RL for cyber resiliency and discuss the cyber-resilient defenses against three major types of vulnerabilities.
arXiv Detail & Related papers (2021-07-02T01:08:45Z) - Constraints Satisfiability Driven Reinforcement Learning for Autonomous Cyber Defense [7.321728608775741]
We present a new hybrid autonomous agent architecture that aims to optimize and verify defense policies of reinforcement learning (RL).
We use constraints verification (using satisfiability modulo theory (SMT)) to steer the RL decision-making toward safe and effective actions.
Our evaluation of the presented approach in a simulated CPS environment shows that the agent learns the optimal policy fast and defeats diversified attack strategies in 99% of cases.
arXiv Detail & Related papers (2021-04-19T01:08:30Z) - A System for Automated Open-Source Threat Intelligence Gathering and Management [53.65687495231605]
SecurityKG is a system for automated OSCTI gathering and management.
It uses a combination of AI and NLP techniques to extract high-fidelity knowledge about threat behaviors.
arXiv Detail & Related papers (2021-01-19T18:31:35Z) - A System for Efficiently Hunting for Cyber Threats in Computer Systems Using Threat Intelligence [78.23170229258162]
We build ThreatRaptor, a system that facilitates cyber threat hunting in computer systems using OSCTI.
ThreatRaptor provides (1) an unsupervised, light-weight, and accurate NLP pipeline that extracts structured threat behaviors from unstructured OSCTI text, (2) a concise and expressive domain-specific query language, TBQL, to hunt for malicious system activities, and (3) a query synthesis mechanism that automatically synthesizes a TBQL query from the extracted threat behaviors.
arXiv Detail & Related papers (2021-01-17T19:44:09Z) - Enabling Efficient Cyber Threat Hunting With Cyber Threat Intelligence [94.94833077653998]
ThreatRaptor is a system that facilitates threat hunting in computer systems using open-source Cyber Threat Intelligence (OSCTI).
It extracts structured threat behaviors from unstructured OSCTI text and uses a concise and expressive domain-specific query language, TBQL, to hunt for malicious system activities.
Evaluations on a broad set of attack cases demonstrate the accuracy and efficiency of ThreatRaptor in practical threat hunting.
arXiv Detail & Related papers (2020-10-26T14:54:01Z)