ACTISM: Threat-informed Dynamic Security Modelling for Automotive Systems
- URL: http://arxiv.org/abs/2412.00416v3
- Date: Wed, 05 Feb 2025 11:44:10 GMT
- Title: ACTISM: Threat-informed Dynamic Security Modelling for Automotive Systems
- Authors: Shaofei Huang, Christopher M. Poskitt, Lwin Khin Shar,
- Abstract summary: ACTISM (Automotive Consequence-Driven and Threat-Informed Security Modelling) is an integrated security modelling framework. It enhances the resilience of automotive systems by dynamically updating their cybersecurity posture. We demonstrate the effectiveness of ACTISM by applying it to a real-world example of the Tesla Electric Vehicle's In-Vehicle Infotainment system. We report the results of a practitioners' survey on the usefulness of ACTISM and its future directions.
- Score: 7.3347982474177185
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Evolving cybersecurity threats in complex cyber-physical systems pose significant risks to system functionality and safety. This experience report introduces ACTISM (Automotive Consequence-Driven and Threat-Informed Security Modelling), an integrated security modelling framework that enhances the resilience of automotive systems by dynamically updating their cybersecurity posture in response to prevailing and evolving threats, attacker tactics, and their impact on system functionality and safety. ACTISM addresses the existing knowledge gap in static security assessment methodologies by providing a dynamic and iterative framework. We demonstrate the effectiveness of ACTISM by applying it to a real-world example of the Tesla Electric Vehicle's In-Vehicle Infotainment system, illustrating how the security model can be adapted as new threats emerge. We also report the results of a practitioners' survey on the usefulness of ACTISM and its future directions. The survey highlights avenues for future research and development in this area, including automated vulnerability management workflows for automotive systems.
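To make the abstract's idea of a dynamically updated security posture concrete, the sketch below shows one way a consequence-driven, threat-informed re-scoring step could look in Python. The class names, the 1-to-5 likelihood and consequence scales, and the worst-case-likelihood-times-impact scoring rule are illustrative assumptions; the paper does not publish a reference implementation.

```python
# Illustrative sketch of threat-informed, dynamic risk re-scoring in the
# spirit of ACTISM. All names, scales, and the scoring formula are assumptions.
from dataclasses import dataclass, field

@dataclass
class Threat:
    name: str            # e.g. a TTP such as "CAN bus injection via IVI"
    likelihood: int      # 1 (rare) .. 5 (almost certain)

@dataclass
class Asset:
    name: str            # e.g. "In-Vehicle Infotainment head unit"
    consequence: int     # 1 (negligible) .. 5 (safety-critical)
    threats: list[Threat] = field(default_factory=list)

    def risk_score(self) -> int:
        """Consequence-driven risk: worst-case threat likelihood times impact."""
        worst = max((t.likelihood for t in self.threats), default=0)
        return worst * self.consequence

def update_posture(assets: list[Asset], new_threat: Threat, affected: set[str]) -> list[Asset]:
    """Attach newly observed threat intelligence and re-rank assets by risk."""
    for asset in assets:
        if asset.name in affected:
            asset.threats.append(new_threat)
    return sorted(assets, key=Asset.risk_score, reverse=True)

if __name__ == "__main__":
    ivi = Asset("IVI head unit", consequence=4,
                threats=[Threat("Bluetooth stack exploit", likelihood=2)])
    brakes = Asset("Brake ECU", consequence=5)
    ranked = update_posture([ivi, brakes], Threat("CAN injection via IVI", 3), {"IVI head unit"})
    for a in ranked:
        print(a.name, a.risk_score())
```

Re-running such an update whenever a new tactic or vulnerability is reported is the kind of iterative loop that ACTISM formalises at the process level.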
Related papers
- LLM-Assisted Proactive Threat Intelligence for Automated Reasoning [2.0427650128177]
This research presents a novel approach to enhance real-time cybersecurity threat detection and response.
We integrate large language models (LLMs) and Retrieval-Augmented Generation (RAG) systems with continuous threat intelligence feeds.
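A minimal sketch of the retrieval half of such a pipeline, assuming a TF-IDF index over threat-feed entries; the feed contents, alert text, and prompt assembly are illustrative, not the authors' implementation:

```python
# Retrieval step of a RAG-style threat-intelligence pipeline: index feed
# entries and pull the most relevant ones as context for a downstream LLM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

threat_feed = [
    "New ransomware strain abusing SMB shares observed in manufacturing networks",
    "CAN bus injection proof-of-concept released for aftermarket IVI units",
    "Phishing campaign targeting OT engineers with trojanised firmware tools",
]

vectorizer = TfidfVectorizer()
feed_matrix = vectorizer.fit_transform(threat_feed)

def retrieve(alert: str, k: int = 2) -> list[str]:
    """Return the k feed entries most similar to the incoming alert."""
    scores = cosine_similarity(vectorizer.transform([alert]), feed_matrix)[0]
    top = scores.argsort()[::-1][:k]
    return [threat_feed[i] for i in top]

alert = "suspicious CAN frames seen after infotainment update"
context = retrieve(alert)
prompt = "Assess this alert given the intelligence below:\n" + "\n".join(context)
print(prompt)  # this assembled context would be passed to an LLM for reasoning
```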
arXiv Detail & Related papers (2025-04-01T05:19:33Z) - A Survey of Model Extraction Attacks and Defenses in Distributed Computing Environments [55.60375624503877]
Model Extraction Attacks (MEAs) threaten modern machine learning systems by enabling adversaries to steal models, exposing intellectual property and training data.
This survey is motivated by the urgent need to understand how the unique characteristics of cloud, edge, and federated deployments shape attack vectors and defense requirements.
We systematically examine the evolution of attack methodologies and defense mechanisms across these environments, demonstrating how environmental factors influence security strategies in critical sectors such as autonomous vehicles, healthcare, and financial services.
arXiv Detail & Related papers (2025-02-22T03:46:50Z) - Safety at Scale: A Comprehensive Survey of Large Model Safety [298.05093528230753]
We present a comprehensive taxonomy of safety threats to large models, including adversarial attacks, data poisoning, backdoor attacks, jailbreak and prompt injection attacks, energy-latency attacks, data and model extraction attacks, and emerging agent-specific threats.
We identify and discuss the open challenges in large model safety, emphasizing the need for comprehensive safety evaluations, scalable and effective defense mechanisms, and sustainable data practices.
arXiv Detail & Related papers (2025-02-02T05:14:22Z) - Adaptive Cybersecurity: Dynamically Retrainable Firewalls for Real-Time Network Protection [4.169915659794567]
This research introduces "Dynamically Retrainable Firewalls".
Unlike traditional firewalls that rely on static rules to inspect traffic, these advanced systems leverage machine learning algorithms to analyze network traffic patterns dynamically and identify threats.
It also discusses strategies to improve performance, reduce latency, optimize resource utilization, and address integration issues with present-day concepts such as Zero Trust and mixed environments.
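A minimal sketch of what "dynamically retrainable" can mean in practice, assuming an incrementally updated scikit-learn classifier over toy flow features (not the paper's model, features, or data):

```python
# A traffic classifier that is updated in place as freshly labelled flow
# records arrive, instead of relying on static rules.
import numpy as np
from sklearn.linear_model import SGDClassifier

# toy flow features: [bytes_per_sec, packets_per_sec, distinct_dst_ports]
X_init = np.array([[1e3, 10, 2], [5e5, 900, 60], [2e3, 15, 3], [8e5, 1200, 80]])
y_init = np.array([0, 1, 0, 1])           # 0 = benign, 1 = malicious

clf = SGDClassifier(loss="log_loss", random_state=0)
clf.partial_fit(X_init, y_init, classes=np.array([0, 1]))

def retrain_on_batch(X_new: np.ndarray, y_new: np.ndarray) -> None:
    """Fold a newly labelled batch of flows into the live model."""
    clf.partial_fit(X_new, y_new)

# later, as analysts label new traffic, the firewall model is updated in place
retrain_on_batch(np.array([[3e5, 700, 45]]), np.array([1]))
print(clf.predict(np.array([[4e5, 800, 50]])))   # classify an unseen flow with the updated model
```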
arXiv Detail & Related papers (2025-01-14T00:04:35Z) - SoK: Unifying Cybersecurity and Cybersafety of Multimodal Foundation Models with an Information Theory Approach [58.93030774141753]
Multimodal foundation models (MFMs) represent a significant advancement in artificial intelligence.
This paper conceptualizes cybersafety and cybersecurity in the context of multimodal learning.
We present a comprehensive Systematization of Knowledge (SoK) to unify these concepts in MFMs, identifying key threats.
arXiv Detail & Related papers (2024-11-17T23:06:20Z) - A Formal Framework for Assessing and Mitigating Emergent Security Risks in Generative AI Models: Bridging Theory and Dynamic Risk Mitigation [0.3413711585591077]
As generative AI systems, including large language models (LLMs) and diffusion models, advance rapidly, their growing adoption has led to new and complex security risks.
This paper introduces a novel formal framework for categorizing and mitigating these emergent security risks.
We identify previously under-explored risks, including latent space exploitation, multi-modal cross-attack vectors, and feedback-loop-induced model degradation.
arXiv Detail & Related papers (2024-10-15T02:51:32Z) - EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction. Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results. However, the deployment of these agents in physical environments presents significant safety challenges. This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z) - Mapping LLM Security Landscapes: A Comprehensive Stakeholder Risk Assessment Proposal [0.0]
We propose a risk assessment process using established tools such as the risk rating methodology used for traditional systems.
We conduct scenario analysis to identify potential threat agents and map the dependent system components against vulnerability factors.
We also map threats against three key stakeholder groups.
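A minimal sketch of an OWASP-style risk rating computation of the kind such a process could reuse; the factor names, scores, and severity bands are illustrative assumptions:

```python
# Simplified OWASP-style risk rating for one threat scenario: average the
# likelihood and impact factors, combine them, and map to a severity band.
def rating(factors: dict[str, int]) -> float:
    return sum(factors.values()) / len(factors)

def severity(score: float) -> str:
    if score < 3:
        return "LOW"
    return "MEDIUM" if score < 6 else "HIGH"

# factors scored 0-9, as in traditional risk rating worksheets (illustrative values)
likelihood = {"skill_required": 5, "opportunity": 7, "ease_of_discovery": 6}
impact     = {"loss_of_confidentiality": 7, "reputation_damage": 5, "compliance": 4}

risk = rating(likelihood) * rating(impact) / 9   # normalise the product back to 0-9
print(f"risk={risk:.1f} severity={severity(risk)}")
```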
arXiv Detail & Related papers (2024-03-20T05:17:22Z) - Asset-centric Threat Modeling for AI-based Systems [7.696807063718328]
This paper presents ThreatFinderAI, an approach and tool to model AI-related assets, threats, countermeasures, and quantify residual risks.
To evaluate the practicality of the approach, participants were tasked to recreate a threat model developed by cybersecurity experts of an AI-based healthcare platform.
Overall, the solution's usability was well received, and it effectively supported threat identification and risk discussion.
arXiv Detail & Related papers (2024-03-11T08:40:01Z) - REACT: Autonomous Intrusion Response System for Intelligent Vehicles [1.5862483908050367]
This paper proposes a dynamic intrusion response system integrated within the vehicle.
The system offers a comprehensive list of potential responses, a methodology for response evaluation, and various response selection methods.
The evaluation highlights the system's adaptability, its ability to respond swiftly, its minimal memory footprint, and its capacity for dynamic system parameter adjustments.
arXiv Detail & Related papers (2024-01-09T19:34:59Z) - A Cybersecurity Risk Analysis Framework for Systems with Artificial Intelligence Components [0.0]
The introduction of the European Union Artificial Intelligence Act, the NIST Artificial Intelligence Risk Management Framework, and related norms demands a better understanding and implementation of novel risk analysis approaches to evaluate systems with Artificial Intelligence components.
This paper provides a cybersecurity risk analysis framework that can help assess such systems.
arXiv Detail & Related papers (2024-01-03T09:06:39Z) - Empowering Autonomous Driving with Large Language Models: A Safety Perspective [82.90376711290808]
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs serve as intelligent decision-makers in behavioral planning, augmented with a safety verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
arXiv Detail & Related papers (2023-11-28T03:13:09Z) - Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z) - When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
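A minimal sketch of a behavioural driver-authentication check in this spirit, assuming a Random Forest over synthetic per-window driving features (not the authors' feature set, data, or attack setting):

```python
# Behavioural driver authentication: train a Random Forest on per-window
# driving features and accept a session only if it matches the enrolled driver.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# toy features per window: [mean_speed, throttle_variance, mean_brake_pressure]
owner    = rng.normal([60, 0.10, 0.30], [5.0, 0.03, 0.05], size=(50, 3))
impostor = rng.normal([80, 0.30, 0.50], [5.0, 0.03, 0.05], size=(50, 3))

X = np.vstack([owner, impostor])
y = np.array([1] * 50 + [0] * 50)          # 1 = enrolled driver, 0 = other

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def authenticate(window: np.ndarray, threshold: float = 0.8) -> bool:
    """Accept the session only if the owner-class probability is high enough."""
    return model.predict_proba(window.reshape(1, -1))[0, 1] >= threshold

print(authenticate(np.array([61.0, 0.11, 0.31])))   # telemetry close to the enrolled driver
```

Evasion attacks such as the SMARTCAN and GANCAN attacks proposed in the paper target exactly this kind of decision boundary, which is why the authors pair the attacks with guidance for safe adoption.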
arXiv Detail & Related papers (2023-06-09T14:33:26Z) - ANALYSE -- Learning to Attack Cyber-Physical Energy Systems With Intelligent Agents [0.0]
ANALYSE is a machine-learning-based software suite to let learning agents autonomously find attacks in cyber-physical energy systems.
It is designed to find yet unknown attack types and to reproduce many known attack strategies in cyber-physical energy systems from the scientific literature.
arXiv Detail & Related papers (2023-04-21T11:36:18Z) - Towards Safer Generative Language Models: A Survey on Safety Risks, Evaluations, and Improvements [76.80453043969209]
This survey presents a framework for safety research pertaining to large models.
We begin by introducing safety issues of wide concern, then delve into safety evaluation methods for large models.
We explore the strategies for enhancing large model safety from training to deployment.
arXiv Detail & Related papers (2023-02-18T09:32:55Z) - Safety Analysis of Autonomous Driving Systems Based on Model Learning [16.38592243376647]
We present a practical verification method for safety analysis of the autonomous driving system (ADS).
The main idea is to build a surrogate model that quantitatively depicts the behaviour of an ADS in the specified traffic scenario.
We demonstrate the utility of the proposed approach by evaluating safety properties on a state-of-the-art ADS from the literature.
arXiv Detail & Related papers (2022-11-23T06:52:40Z) - Constraints Satisfiability Driven Reinforcement Learning for Autonomous Cyber Defense [7.321728608775741]
We present a new hybrid autonomous agent architecture that aims to optimize and verify reinforcement learning (RL) defense policies.
We use constraints verification (using satisfiability modulo theory (SMT)) to steer the RL decision-making toward safe and effective actions.
Our evaluation of the presented approach in a simulated CPS environment shows that the agent learns the optimal policy fast and defeats diversified attack strategies in 99% of cases.
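A minimal sketch of SMT-based action shielding in this spirit, using Z3 (the z3-solver package) to check an RL-proposed load-shedding action against safety bounds before it is executed; the plant constraints and action encoding are illustrative assumptions:

```python
# SMT-based action shielding for an RL defender: check the effect of a
# proposed action against safety constraints and fall back if it is unsafe.
from z3 import Real, Solver, sat

def is_safe(current_load: float, shed_amount: float) -> bool:
    """Check that shedding `shed_amount` keeps line load within [0.2, 0.9]."""
    load = Real("load")
    s = Solver()
    s.add(load == current_load - shed_amount)
    s.add(load >= 0.2, load <= 0.9)
    return s.check() == sat

def shielded_action(rl_action: float, current_load: float, fallback: float = 0.0) -> float:
    """Execute the RL action only if the SMT check passes; otherwise fall back."""
    return rl_action if is_safe(current_load, rl_action) else fallback

print(shielded_action(rl_action=0.5, current_load=0.95))   # load becomes 0.45, allowed
print(shielded_action(rl_action=0.9, current_load=0.95))   # load would drop to 0.05, blocked
```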
arXiv Detail & Related papers (2021-04-19T01:08:30Z)