Alleviating Attack Data Scarcity: SCANIA's Experience Towards Enhancing In-Vehicle Cyber Security Measures
- URL: http://arxiv.org/abs/2507.02607v2
- Date: Sun, 06 Jul 2025 10:24:30 GMT
- Title: Alleviating Attack Data Scarcity: SCANIA's Experience Towards Enhancing In-Vehicle Cyber Security Measures
- Authors: Frida Sundfeldt, Bianca Widstam, Mahshid Helali Moghadam, Kuo-Yun Liang, Anders Vesterberg
- Abstract summary: This paper presents a context-aware attack data generator that produces attack inputs and the corresponding in-vehicle network logs. It utilizes parameterized attack models augmented with CAN message decoding and attack intensity adjustments to configure the attack scenarios. We develop and perform an empirical evaluation of two deep neural network IDS models using the generated data.
- Score: 0.1631115063641726
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The digital evolution of connected vehicles and the accompanying security risks emphasize the critical need for in-vehicle cyber security measures such as intrusion detection and response systems. The continuous advancement of attack scenarios further highlights the need for adaptive detection mechanisms that can detect evolving, unknown, and complex threats. The effective use of ML-driven techniques can help address this challenge. However, constraints on implementing diverse attack scenarios on test vehicles due to safety, cost, and ethical considerations result in a scarcity of data representing attack scenarios. This limitation necessitates alternative efficient and effective methods for generating high-quality data that represents attack scenarios. This paper presents a context-aware attack data generator that generates attack inputs and corresponding in-vehicle network logs, i.e., controller area network (CAN) logs, representing various types of attacks, including denial of service (DoS), fuzzy, spoofing, suspension, and replay attacks. It utilizes parameterized attack models augmented with CAN message decoding and attack intensity adjustments to configure attack scenarios with high similarity to real-world scenarios while promoting variability. We evaluate the practicality of the generated attack data within an intrusion detection system (IDS) case study, in which we develop and empirically evaluate two deep neural network IDS models using the generated data. In addition to demonstrating the efficiency and scalability of the approach, the performance of the IDS models, which show high detection and classification capabilities, validates the consistency and effectiveness of the generated data. In this experience study, we also elaborate on the aspects influencing the fidelity of the data to real-world scenarios and provide insights into its application.
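To make the notion of a parameterized attack model concrete, below is a minimal sketch of a DoS injection step over a benign CAN log. This is not the authors' implementation: the CanFrame record, the inject_dos helper, and the intensity parameter (injected frames per second) are illustrative assumptions.

```python
# Hedged sketch: a parameterized DoS attack model that injects high-priority
# CAN frames into a benign log. All names and defaults are illustrative.
from dataclasses import dataclass
import random

@dataclass
class CanFrame:
    timestamp: float      # seconds since log start
    can_id: int           # 11-bit arbitration ID (lower = higher priority)
    data: bytes           # up to 8 payload bytes
    label: str = "benign"

def inject_dos(log, start, duration, intensity=2000.0, dos_id=0x000):
    """Insert flood frames with a dominant (low) arbitration ID.

    intensity: injected frames per second; higher values model a
    more aggressive bus flood.
    """
    t, attack = start, []
    while t < start + duration:
        attack.append(CanFrame(t, dos_id, bytes(8), label="dos"))
        t += random.expovariate(intensity)  # jittered inter-frame gap
    return sorted(log + attack, key=lambda f: f.timestamp)

# Usage: flood a 1-second window of a synthetic benign log.
benign = [CanFrame(i * 0.01, 0x1A0, bytes([i % 256] * 8)) for i in range(500)]
mixed = inject_dos(benign, start=1.0, duration=1.0)
print(sum(f.label == "dos" for f in mixed), "injected frames")
```

The other attack types the paper covers (fuzzy, spoofing, suspension, replay) would follow the same pattern with different frame-selection and payload rules.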
Related papers
- Preliminary Investigation into Uncertainty-Aware Attack Stage Classification [81.28215542218724]
This work addresses the problem of attack stage inference under uncertainty. We propose a classification approach based on Evidential Deep Learning (EDL), which models predictive uncertainty by outputting the parameters of a Dirichlet distribution over possible stages. Preliminary experiments in a simulated environment demonstrate that the proposed model can accurately infer the stage of an attack with confidence. (A sketch of the Dirichlet-based uncertainty computation follows this entry.)
arXiv Detail & Related papers (2025-08-01T06:58:00Z)
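As a rough illustration of the EDL formulation referenced above, the snippet below maps per-stage evidence to Dirichlet parameters and a vacuity-style uncertainty score; the variable names and evidence values are illustrative, not taken from the paper.

```python
# Hedged sketch of Dirichlet-based uncertainty in Evidential Deep Learning:
# the network emits non-negative evidence per attack stage, the Dirichlet
# parameters are alpha = evidence + 1, and total uncertainty is K / sum(alpha).
import numpy as np

def edl_prediction(evidence):
    """Map per-stage evidence to class probabilities and uncertainty."""
    evidence = np.asarray(evidence, dtype=float)
    k = evidence.size                 # number of attack stages
    alpha = evidence + 1.0            # Dirichlet concentration parameters
    strength = alpha.sum()            # Dirichlet strength S
    probs = alpha / strength          # expected class probabilities
    uncertainty = k / strength        # vacuity: 1 when there is no evidence
    return probs, uncertainty

# A confident sample (strong evidence for stage 2) vs. an ambiguous one.
print(edl_prediction([0.1, 0.2, 40.0, 0.3]))
print(edl_prediction([0.5, 0.4, 0.6, 0.5]))
```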
- A Survey on Model Extraction Attacks and Defenses for Large Language Models [55.60375624503877]
Model extraction attacks pose significant security threats to deployed language models. This survey provides a comprehensive taxonomy of extraction attacks and defenses, categorizing attacks into functionality extraction, training data extraction, and prompt-targeted attacks. We examine defense mechanisms organized into model protection, data privacy protection, and prompt-targeted strategies, evaluating their effectiveness across different deployment scenarios.
arXiv Detail & Related papers (2025-06-26T22:02:01Z)
- Toward Realistic Adversarial Attacks in IDS: A Novel Feasibility Metric for Transferability [0.0]
Transferability-based adversarial attacks exploit the ability of adversarial examples crafted against a specific source Intrusion Detection System (IDS) model to deceive other models as well. These attacks exploit common vulnerabilities in machine learning models to bypass security measures and compromise systems. This paper analyzes the core factors that contribute to transferability, including feature alignment, model architectural similarity, and overlap in the data distributions that each IDS examines. (A toy feasibility score combining these factors is sketched after this entry.)
arXiv Detail & Related papers (2025-04-11T12:15:03Z)
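As a toy illustration only, the snippet below combines the three named factors into a single feasibility score; the weights, the cosine measure of feature alignment, and the histogram-intersection proxy for distribution overlap are all assumptions, not the paper's actual metric.

```python
# Hedged sketch: a toy transferability "feasibility" score. Everything here
# (weights, proxies, names) is an assumption for illustration.
import numpy as np

def feature_alignment(imp_src, imp_tgt):
    """Cosine similarity between per-feature importance vectors."""
    a, b = np.asarray(imp_src, float), np.asarray(imp_tgt, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def distribution_overlap(x_src, x_tgt, bins=20):
    """Histogram-intersection proxy for data distribution overlap."""
    lo, hi = min(x_src.min(), x_tgt.min()), max(x_src.max(), x_tgt.max())
    h1, _ = np.histogram(x_src, bins=bins, range=(lo, hi), density=True)
    h2, _ = np.histogram(x_tgt, bins=bins, range=(lo, hi), density=True)
    return float(np.minimum(h1, h2).sum() / max(h1.sum(), 1e-12))

def transfer_feasibility(imp_src, imp_tgt, x_src, x_tgt, same_family):
    """Weighted blend of alignment, architecture match, and overlap."""
    return (0.4 * feature_alignment(imp_src, imp_tgt)
            + 0.2 * float(same_family)
            + 0.4 * distribution_overlap(x_src, x_tgt))

rng = np.random.default_rng(1)
print(transfer_feasibility(rng.random(10), rng.random(10),
                           rng.normal(size=500), rng.normal(0.2, 1, 500),
                           same_family=True))
```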
- Learning in Multiple Spaces: Few-Shot Network Attack Detection with Metric-Fused Prototypical Networks [47.18575262588692]
We propose a novel Multi-Space Prototypical Learning (MSPL) framework tailored for few-shot attack detection. By leveraging Polyak-averaged prototype generation, the framework stabilizes the learning process and effectively adapts to rare and zero-day attacks. Experimental results on benchmark datasets demonstrate that MSPL outperforms traditional approaches in detecting low-profile and novel attack types. (A sketch of Polyak-averaged prototype updates follows this entry.)
arXiv Detail & Related papers (2024-12-28T00:09:46Z)
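The following is a minimal sketch of Polyak-averaged prototype generation as the summary describes it: each class prototype is an exponential moving average of per-episode support-set means, which smooths estimates for rare classes. The momentum value, embedding sizes, and function names are illustrative assumptions.

```python
# Hedged sketch of Polyak-averaged prototypes for few-shot detection.
import numpy as np

def update_prototype(prototype, support_embeddings, momentum=0.9):
    """Blend the running prototype with this episode's support mean."""
    episode_mean = support_embeddings.mean(axis=0)
    if prototype is None:             # first episode for this class
        return episode_mean
    return momentum * prototype + (1.0 - momentum) * episode_mean

def classify(query, prototypes):
    """Nearest-prototype assignment by Euclidean distance."""
    dists = {c: np.linalg.norm(query - p) for c, p in prototypes.items()}
    return min(dists, key=dists.get)

rng = np.random.default_rng(0)
proto = None
for _ in range(5):                    # five training episodes
    proto = update_prototype(proto, rng.normal(size=(8, 16)))
print(classify(rng.normal(size=16), {"dos": proto, "benign": -proto}))
```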
- Simulation of Multi-Stage Attack and Defense Mechanisms in Smart Grids [2.0766068042442174]
We introduce a simulation environment that replicates the power grid's infrastructure and communication dynamics. The framework generates diverse, realistic attack data to train machine learning algorithms for detecting and mitigating cyber threats. It also provides a controlled, flexible platform to evaluate emerging security technologies, including advanced decision support systems.
arXiv Detail & Related papers (2024-12-09T07:07:17Z)
- A Practical Trigger-Free Backdoor Attack on Neural Networks [33.426207982772226]
We propose a trigger-free backdoor attack that does not require access to any training data.
Specifically, we design a novel fine-tuning approach that incorporates the concept represented by malicious data into the model's notion of the attacker-specified class.
The effectiveness, practicality, and stealthiness of the proposed attack are evaluated on three real-world datasets.
arXiv Detail & Related papers (2024-08-21T08:53:36Z)
- Investigation of Multi-stage Attack and Defense Simulation for Data Synthesis [2.479074862022315]
This study proposes a model for generating synthetic data of multi-stage cyber attacks in the power grid.
It uses attack trees to model the attacker's sequence of steps and a game-theoretic approach to incorporate the defender's actions.
arXiv Detail & Related papers (2023-12-21T09:54:18Z)
- Model Stealing Attack against Graph Classification with Authenticity, Uncertainty and Diversity [80.16488817177182]
GNNs are vulnerable to model stealing attacks: nefarious endeavors geared towards duplicating the target model via query permissions.
We introduce three model stealing attacks to adapt to different actual scenarios.
arXiv Detail & Related papers (2023-12-18T05:42:31Z)
- Adaptive Attack Detection in Text Classification: Leveraging Space Exploration Features for Text Sentiment Classification [44.99833362998488]
Adversarial example detection plays a vital role in adaptive cyber defense, especially in the face of rapidly evolving attacks.
We propose a novel approach that leverages the power of BERT (Bidirectional Encoder Representations from Transformers) and introduces the concept of Space Exploration Features.
arXiv Detail & Related papers (2023-08-29T23:02:26Z)
- Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving [18.72382517467458]
We propose a novel adversarial backdoor attack against trajectory prediction models.
Our attack affects the victim at training time via naturalistic, hence stealthy, poisoned samples crafted using a novel two-step approach.
We show that the proposed attack is highly effective, as it can significantly hinder the performance of prediction models.
arXiv Detail & Related papers (2023-06-27T19:15:06Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks. (A generic sketch of one adversarial training step follows this entry.)
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
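The snippet below sketches one FGSM-style adversarial training step, the generic form of this mitigation; the toy model, epsilon, and data are placeholders, not the jet-tagging setup used in the paper.

```python
# Hedged sketch: train on inputs perturbed along the loss gradient sign (FGSM).
import torch
import torch.nn.functional as F

def adversarial_step(model, optimizer, x, y, eps=0.01):
    """One training step on FGSM-perturbed inputs."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)       # gradient w.r.t. the inputs
    x_adv = (x + eps * grad.sign()).detach()   # FGSM perturbation
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()

# Usage with a toy tagger on random features.
model = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
print(adversarial_step(model, opt, torch.randn(64, 16),
                       torch.randint(0, 2, (64,))))
```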
- Deep Learning based Covert Attack Identification for Industrial Control Systems [5.299113288020827]
We develop a data-driven framework that can be used to detect, diagnose, and localize a type of cyberattack called covert attacks on smart grids.
The framework has a hybrid design that combines an autoencoder, a recurrent neural network (RNN) with a long short-term memory (LSTM) layer, and a deep neural network (DNN); a structural sketch follows this entry.
arXiv Detail & Related papers (2020-09-25T17:48:43Z)
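As a rough structural sketch of the hybrid design just described, the module below combines an autoencoder reconstruction-error score, an LSTM temporal summary, and a DNN classification head. All layer sizes, and the exact fusion of the anomaly score with the LSTM state, are assumptions rather than the authors' configuration.

```python
# Hedged sketch: autoencoder + LSTM + DNN hybrid detector.
import torch
import torch.nn as nn

class CovertAttackDetector(nn.Module):
    def __init__(self, n_features=32, hidden=64, latent=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden),
                                     nn.ReLU(), nn.Linear(hidden, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, hidden),
                                     nn.ReLU(), nn.Linear(hidden, n_features))
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden + 1, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 2))   # attack / normal

    def forward(self, x):                       # x: (batch, time, features)
        recon = self.decoder(self.encoder(x))
        err = ((x - recon) ** 2).mean(dim=(1, 2))   # per-sample anomaly score
        _, (h, _) = self.lstm(x)                    # temporal summary
        return self.head(torch.cat([h[-1], err.unsqueeze(1)], dim=1))

print(CovertAttackDetector()(torch.randn(4, 20, 32)).shape)  # torch.Size([4, 2])
```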