ACTISM: Threat-informed Dynamic Security Modelling for Automotive Systems
- URL: http://arxiv.org/abs/2412.00416v3
- Date: Wed, 05 Feb 2025 11:44:10 GMT
- Title: ACTISM: Threat-informed Dynamic Security Modelling for Automotive Systems
- Authors: Shaofei Huang, Christopher M. Poskitt, Lwin Khin Shar
- Abstract summary: ACTISM (Automotive Consequence-Driven and Threat-Informed Security Modelling) is an integrated security modelling framework.
It enhances the resilience of automotive systems by dynamically updating their cybersecurity posture.
We demonstrate the effectiveness of ACTISM by applying it to a real-world example of the Tesla Electric Vehicle's In-Vehicle Infotainment system.
We report the results of a practitioners' survey on the usefulness of ACTISM and its future directions.
- Abstract: Evolving cybersecurity threats in complex cyber-physical systems pose significant risks to system functionality and safety. This experience report introduces ACTISM (Automotive Consequence-Driven and Threat-Informed Security Modelling), an integrated security modelling framework that enhances the resilience of automotive systems by dynamically updating their cybersecurity posture in response to prevailing and evolving threats, attacker tactics, and their impact on system functionality and safety. ACTISM addresses the existing knowledge gap in static security assessment methodologies by providing a dynamic and iterative framework. We demonstrate the effectiveness of ACTISM by applying it to a real-world example of the Tesla Electric Vehicle's In-Vehicle Infotainment system, illustrating how the security model can be adapted as new threats emerge. We also report the results of a practitioners' survey on the usefulness of ACTISM and its future directions. The survey highlights avenues for future research and development in this area, including automated vulnerability management workflows for automotive systems.
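The paper itself does not ship reference code. As a purely illustrative sketch of the iterative, consequence-driven update loop the abstract describes, the fragment below models assets whose risk posture is recomputed whenever new threat intelligence arrives; every class, field, and number is a hypothetical stand-in, not ACTISM's actual data model.

```python
# Hypothetical sketch of a threat-informed update loop in the spirit of
# ACTISM; names and the risk formula are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Threat:
    name: str
    tactic: str          # e.g. an ATT&CK-style tactic label
    likelihood: float    # 0.0 - 1.0
    impact: float        # 0.0 - 1.0, consequence for functionality/safety


@dataclass
class Asset:
    name: str
    threats: list[Threat] = field(default_factory=list)

    def risk(self) -> float:
        # Worst-case view: the highest likelihood x impact product dominates.
        return max((t.likelihood * t.impact for t in self.threats), default=0.0)


class SecurityModel:
    def __init__(self, assets: list[Asset]) -> None:
        self.assets = {a.name: a for a in assets}

    def ingest(self, asset_name: str, threat: Threat) -> None:
        # Dynamic step: newly reported threats immediately change the posture.
        self.assets[asset_name].threats.append(threat)

    def posture(self) -> dict[str, float]:
        return {name: a.risk() for name, a in self.assets.items()}


if __name__ == "__main__":
    model = SecurityModel([Asset("IVI")])  # In-Vehicle Infotainment unit
    model.ingest("IVI", Threat("browser RCE", "initial-access", 0.5, 0.8))
    print(model.posture())  # {'IVI': 0.4}
```

The point of the sketch is the shape of the workflow, not the arithmetic: the posture is a function of the current threat list, so re-running the assessment after each `ingest` is what makes the model dynamic rather than static.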
Related papers
- Safety at Scale: A Comprehensive Survey of Large Model Safety
We present a comprehensive taxonomy of safety threats to large models, including adversarial attacks, data poisoning, backdoor attacks, jailbreak and prompt injection attacks, energy-latency attacks, data and model extraction attacks, and emerging agent-specific threats.
We identify and discuss the open challenges in large model safety, emphasizing the need for comprehensive safety evaluations, scalable and effective defense mechanisms, and sustainable data practices.
arXiv Detail & Related papers (2025-02-02T05:14:22Z)
- Adaptive Cybersecurity: Dynamically Retrainable Firewalls for Real-Time Network Protection
This research introduces "Dynamically Retrainable Firewalls".
Unlike traditional firewalls that rely on static rules to inspect traffic, these advanced systems leverage machine learning algorithms to analyze network traffic patterns dynamically and identify threats.
It also discusses strategies to improve performance, reduce latency, optimize resource utilization, and address integration issues with present-day concepts such as Zero Trust and mixed environments.
arXiv Detail & Related papers (2025-01-14T00:04:35Z)
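To make the retraining idea concrete, here is a minimal, hedged sketch of a classifier-backed filter that refits itself once enough newly labelled traffic accumulates; the feature handling, threshold, and fail-open policy are assumptions, not the paper's design.

```python
# Sketch of a dynamically retrainable traffic classifier (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier


class RetrainableFirewall:
    def __init__(self, retrain_every: int = 1000) -> None:
        self.clf = RandomForestClassifier(n_estimators=50)
        self.buffer_X: list[np.ndarray] = []
        self.buffer_y: list[int] = []
        self.retrain_every = retrain_every
        self.trained = False

    def observe(self, features: np.ndarray, label: int) -> None:
        # Accumulate labelled flows; refit once enough new evidence arrives.
        self.buffer_X.append(features)
        self.buffer_y.append(label)
        if len(self.buffer_X) >= self.retrain_every:
            self.clf.fit(np.vstack(self.buffer_X), np.array(self.buffer_y))
            self.trained = True
            self.buffer_X.clear()   # start a fresh window for the next refit
            self.buffer_y.clear()

    def allow(self, features: np.ndarray) -> bool:
        if not self.trained:
            return True  # fail-open until a first model exists (policy choice)
        return int(self.clf.predict(features.reshape(1, -1))[0]) == 0  # 0 = benign
```

A production system would retrain asynchronously and validate each new model before swapping it in; the sketch only shows the retrain-on-fresh-data loop itself.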
- SoK: Unifying Cybersecurity and Cybersafety of Multimodal Foundation Models with an Information Theory Approach
Multimodal foundation models (MFMs) represent a significant advancement in artificial intelligence.
This paper conceptualizes cybersafety and cybersecurity in the context of multimodal learning.
We present a comprehensive Systematization of Knowledge (SoK) to unify these concepts in MFMs, identifying key threats.
arXiv Detail & Related papers (2024-11-17T23:06:20Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Siren -- Advancing Cybersecurity through Deception and Adaptive Analysis
This project employs sophisticated methods to lure potential threats into controlled environments.
The architectural framework includes a link monitoring proxy and a purpose-built machine learning model for dynamic link analysis.
The incorporation of simulated user activity extends the system's capacity to capture and learn from potential attackers.
arXiv Detail & Related papers (2024-06-10T12:47:49Z)
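The abstract's link-monitoring component can be pictured with a toy decoy endpoint that logs hits and scores how automated the visitor looks; the 60-second window and the 10-hit threshold are invented for illustration.

```python
# Illustrative decoy-link monitor: repeated hits on a lure look automated.
import time
from collections import defaultdict

hits: dict[str, list[float]] = defaultdict(list)


def record_hit(source_ip: str, now: float | None = None) -> float:
    """Record a hit on the decoy link; return a suspicion score in [0, 1]."""
    t = time.time() if now is None else now
    hits[source_ip].append(t)
    recent = [h for h in hits[source_ip] if t - h < 60.0]
    return min(1.0, len(recent) / 10.0)


if __name__ == "__main__":
    score = 0.0
    for i in range(12):  # a burst of automated requests within one minute
        score = record_hit("203.0.113.7", now=float(i))
    print(f"suspicion: {score:.2f}")  # 1.00
```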
- REACT: Autonomous Intrusion Response System for Intelligent Vehicles
This paper proposes a dynamic intrusion response system integrated within the vehicle.
The system offers a comprehensive list of potential responses, a methodology for response evaluation, and various response selection methods.
The evaluation highlights the system's adaptability, its ability to respond swiftly, its minimal memory footprint, and its capacity for dynamic system parameter adjustments.
arXiv Detail & Related papers (2024-01-09T19:34:59Z)
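The summary's response-evaluation and response-selection steps can be illustrated with a toy scoring rule that trades containment against disruption to vehicle functionality; the weights and numbers are stand-ins, not REACT's methodology.

```python
# Toy response-selection step for an in-vehicle intrusion response system.
from dataclasses import dataclass


@dataclass
class Response:
    name: str
    effectiveness: float  # how well it contains the intrusion, 0-1
    disruption: float     # impact on vehicle functionality, 0-1


def select_response(candidates: list[Response], w: float = 0.7) -> Response:
    # Weighted trade-off between containment and operational disruption.
    return max(candidates, key=lambda r: w * r.effectiveness - (1 - w) * r.disruption)


if __name__ == "__main__":
    options = [
        Response("isolate ECU segment", 0.9, 0.6),
        Response("rate-limit diagnostics", 0.5, 0.1),
        Response("full limp-home mode", 1.0, 0.9),
    ]
    print(select_response(options).name)  # isolate ECU segment
```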
- Empowering Autonomous Driving with Large Language Models: A Safety Perspective
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs serve as intelligent decision-makers in behavioral planning, augmented with a safety verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
arXiv Detail & Related papers (2023-11-28T03:13:09Z)
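A safety verifier shield of the kind the summary mentions can be sketched as a wrapper that vets each LLM-proposed maneuver against hard invariants before it reaches the controller; the action names, the gap constraint, and the fallback below are hypothetical.

```python
# Hedged sketch of a safety shield around an LLM behavior planner.
from typing import Callable

SAFE_FALLBACK = "keep_lane_and_decelerate"


def is_safe(action: str, state: dict) -> bool:
    # Example invariant: never command a lane change into a small gap.
    if action == "change_lane_left" and state.get("gap_left_m", 0.0) < 10.0:
        return False
    return True


def shielded_plan(llm_propose: Callable[[dict], str], state: dict) -> str:
    action = llm_propose(state)
    return action if is_safe(action, state) else SAFE_FALLBACK


if __name__ == "__main__":
    def mock_llm(state: dict) -> str:
        return "change_lane_left"  # stand-in for the real LLM call

    print(shielded_plan(mock_llm, {"gap_left_m": 4.2}))  # keep_lane_and_decelerate
```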
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
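The core mechanism here, following trace links from a changed artifact to the safety models it touches, reduces to reachability over a link graph; the artifact names below are invented for illustration.

```python
# Minimal traceability-driven impact analysis: artifacts are nodes, trace
# links are edges, and a change surfaces everything reachable from it.
from collections import defaultdict, deque

links: dict[str, set[str]] = defaultdict(set)


def add_trace(src: str, dst: str) -> None:
    links[src].add(dst)


def impacted(changed: str) -> set[str]:
    """All artifacts transitively traceable from the changed artifact."""
    seen: set[str] = set()
    queue = deque([changed])
    while queue:
        for nxt in links[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen


if __name__ == "__main__":
    add_trace("brake_controller.c", "REQ-042")
    add_trace("REQ-042", "FTA:loss_of_braking")
    print(impacted("brake_controller.c"))  # REQ-042 and FTA:loss_of_braking
```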
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
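In the same spirit as the paper's Random Forest variant (though with invented, synthetic features rather than real CAN data), behavioral authentication reduces to classifying a driver from telemetry statistics:

```python
# Sketch of a behavioral driver-authentication classifier on synthetic
# telemetry; features and distributions are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in features: [mean speed, brake-pressure variance, steering jitter]
driver_a = rng.normal([60.0, 0.2, 0.05], 0.05, size=(100, 3))
driver_b = rng.normal([45.0, 0.5, 0.12], 0.05, size=(100, 3))

X = np.vstack([driver_a, driver_b])
y = np.array([0] * 100 + [1] * 100)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
sample = rng.normal([60.0, 0.2, 0.05], 0.05, size=(1, 3))
print("authenticated as driver", clf.predict(sample)[0])  # expected: 0
```

The evasion attacks the paper proposes (SMARTCAN and GANCAN) target exactly this kind of model by crafting telemetry that imitates the legitimate driver's feature distribution.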
- ANALYSE -- Learning to Attack Cyber-Physical Energy Systems With Intelligent Agents
ANALYSE is a machine-learning-based software suite to let learning agents autonomously find attacks in cyber-physical energy systems.
It is designed to find previously unknown attack types and to reproduce many known attack strategies in cyber-physical energy systems from the scientific literature.
arXiv Detail & Related papers (2023-04-21T11:36:18Z)
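At its simplest, learning-based attack discovery means letting an agent probe a simulated system and reinforce whichever actions do the most damage; the toy environment and values below are unrelated to ANALYSE's actual simulators.

```python
# Toy epsilon-greedy agent discovering the most damaging action by trial.
import random

random.seed(1)
ACTIONS = ["spoof_meter", "delay_control", "drop_packets", "noop"]


def grid_damage(action: str) -> float:
    # Hidden "ground truth" the agent only sees through interaction.
    return {"spoof_meter": 0.2, "delay_control": 0.9,
            "drop_packets": 0.5, "noop": 0.0}[action]


q = {a: 0.0 for a in ACTIONS}
for _ in range(200):
    a = random.choice(ACTIONS) if random.random() < 0.2 else max(q, key=q.get)
    q[a] += 0.1 * (grid_damage(a) - q[a])  # incremental value estimate

print(max(q, key=q.get))  # typically converges to: delay_control
```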
- Constraints Satisfiability Driven Reinforcement Learning for Autonomous Cyber Defense
We present a new hybrid autonomous agent architecture that aims to optimize and verify defense policies of reinforcement learning (RL).
We use constraints verification (using satisfiability modulo theory (SMT)) to steer the RL decision-making toward safe and effective actions.
Our evaluation of the presented approach in a simulated CPS environment shows that the agent learns the optimal policy quickly and defeats diversified attack strategies in 99% of cases.
arXiv Detail & Related papers (2021-04-19T01:08:30Z)
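The SMT-steering idea can be pictured as a filter between the RL policy's ranked actions and the environment: each candidate is checked against safety constraints with a solver, and unsafe ones are pruned. The grid-reserve constraint below is a toy example, not the paper's model; the solver calls use the standard z3 Python API.

```python
# Hedged sketch: Z3 verifies safety constraints before an RL action executes.
from z3 import Int, Solver, sat


def action_is_safe(load_shed_mw: int, reserve_mw: int) -> bool:
    post = Int("post_reserve")
    s = Solver()
    # Invariant: shedding load must never drop the reserve margin below 5 MW.
    s.add(post == reserve_mw - load_shed_mw, post >= 5)
    return s.check() == sat


def steer(ranked_actions: list[int], reserve_mw: int) -> int | None:
    # Walk the policy's ranking; return the first verifiably safe action.
    for a in ranked_actions:
        if action_is_safe(a, reserve_mw):
            return a
    return None  # no safe action: fall back or escalate


if __name__ == "__main__":
    print(steer([20, 12, 3], reserve_mw=15))  # 20 and 12 pruned, prints 3
```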