Evaluating False Alarm and Missing Attacks in CAN IDS
- URL: http://arxiv.org/abs/2602.02781v1
- Date: Mon, 02 Feb 2026 20:38:01 GMT
- Title: Evaluating False Alarm and Missing Attacks in CAN IDS
- Authors: Nirab Hossain, Pablo Moriano
- Abstract summary: We present a systematic adversarial evaluation of CAN IDS using the ROAD dataset. We compare four shallow learning models with a deep neural network-based detector. Our results demonstrate that adversarial manipulation can simultaneously trigger false alarms and evade detection.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern vehicles rely on electronic control units (ECUs) interconnected through the Controller Area Network (CAN), making in-vehicle communication a critical security concern. Machine learning (ML)-based intrusion detection systems (IDS) are increasingly deployed to protect CAN traffic, yet their robustness against adversarial manipulation remains largely unexplored. We present a systematic adversarial evaluation of CAN IDS using the ROAD dataset, comparing four shallow learning models with a deep neural network-based detector. Using protocol-compliant, payload-level perturbations generated via the Fast Gradient Sign Method (FGSM), the Basic Iterative Method (BIM), and Projected Gradient Descent (PGD), we evaluate adversarial effects on both benign and malicious CAN frames. While all models achieve strong baseline performance under benign conditions, adversarial perturbations reveal substantial vulnerabilities. Although shallow and deep models are robust to false-alarm induction, with the deep neural network (DNN) performing best on benign traffic, all architectures suffer significant increases in missed attacks. Notably, under gradient-based attacks, the shallow extra trees (ET) model demonstrates improved robustness to missed-attack induction compared to the other models. Our results demonstrate that adversarial manipulation can simultaneously trigger false alarms and evade detection, underscoring the need for adversarial robustness evaluation in safety-critical automotive IDS.
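To make the attack-and-evaluation pipeline concrete, here is a minimal sketch of one FGSM step on payload bytes together with the two error rates the paper studies. The detector interface, feature normalization, and epsilon value are illustrative assumptions, not the authors' implementation; gradient attacks also presuppose a differentiable detector, so the shallow models would need a surrogate.

```python
# Illustrative sketch only: one-step FGSM on CAN payload bytes
# (normalized to [0, 1]) plus false-alarm and missed-attack rates.
# Model interface, normalization, and epsilon are assumptions.
import torch
import torch.nn as nn

def fgsm_perturb(model, frames, labels, epsilon=0.03):
    """One FGSM step; `model` is assumed to output attack probabilities."""
    frames = frames.clone().detach().requires_grad_(True)
    loss = nn.functional.binary_cross_entropy(model(frames), labels)
    loss.backward()
    # Move each byte in the direction that increases the detector's loss,
    # then clip back to the valid normalized range (protocol compliance).
    adv = frames + epsilon * frames.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

def alarm_rates(preds, labels):
    """False-alarm rate on benign frames; missed-attack rate on attacks."""
    benign, attack = labels == 0, labels == 1
    false_alarm = (preds[benign] == 1).float().mean().item()
    missed_attack = (preds[attack] == 0).float().mean().item()
    return false_alarm, missed_attack
```

BIM repeats this step with a small step size; PGD does the same but additionally projects each iterate back into the epsilon-ball around the original frame.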
Related papers
- Explainable and Resilient ML-Based Physical-Layer Attack Detectors
We analyze the inner workings of various classifiers trained to alert about physical-layer intrusions. We evaluate the detectors' resilience to malicious parameter noising. This work serves as a design guideline for developing fast and robust detectors trained on available network monitoring data.
arXiv Detail & Related papers (2025-09-30T17:05:33Z)
- Adversarial Attacks on Deep Learning-Based False Data Injection Detection in Differential Relays
This paper demonstrates that carefully crafted false data injection attacks (FDIAs) can evade existing deep learning-based schemes (DLSs) used to detect FDIAs in smart grids. We propose a novel adversarial attack framework, utilizing the Fast Gradient Sign Method, which exploits DLS vulnerabilities. Our results highlight the significant threat posed by adversarial attacks to DLS-based FDIA detection, underscore the necessity for robust cybersecurity measures in smart grids, and demonstrate the effectiveness of adversarial training in enhancing model robustness against adversarial FDIAs.
arXiv Detail & Related papers (2025-06-24T04:22:26Z)
- Robust Anomaly Detection in Network Traffic: Evaluating Machine Learning Models on CICIDS2017
We present a comparison of four representative models on the CICIDS 2017 dataset. The supervised models and the CNN achieve near-perfect accuracy on familiar attacks but suffer drastic recall drops on novel attacks. The unsupervised local outlier factor (LOF) detector attains moderate overall accuracy and high recall on unknown threats, at the cost of elevated false alarms.
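As a rough sketch of that unsupervised setting, the snippet below fits scikit-learn's LOF on benign-only data and flags unseen points as alarms; the synthetic features stand in for real CICIDS2017 flows and are purely an assumption.

```python
# Minimal novelty-detection sketch with scikit-learn's LOF; the
# synthetic features below stand in for CICIDS2017 flows (assumption).
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X_benign = rng.normal(0.0, 1.0, size=(1000, 8))            # benign-only training
X_test = np.vstack([rng.normal(0.0, 1.0, size=(50, 8)),    # benign traffic
                    rng.normal(4.0, 1.0, size=(50, 8))])   # "unknown" attacks

# novelty=True fits on benign data only and scores unseen points.
lof = LocalOutlierFactor(n_neighbors=20, novelty=True)
lof.fit(X_benign)
pred = lof.predict(X_test)                      # +1 = inlier, -1 = alarm
print("false-alarm rate:", np.mean(pred[:50] == -1))
print("attack recall:   ", np.mean(pred[50:] == -1))
```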
arXiv Detail & Related papers (2025-06-23T15:31:10Z)
- Assessing the Resilience of Automotive Intrusion Detection Systems to Adversarial Manipulation
Adversarial attacks, particularly evasion attacks, can manipulate inputs to bypass detection by IDSs. We consider three scenarios: white-box (attacker with full system knowledge), grey-box (partial system knowledge), and the more realistic black-box (no system knowledge). We evaluate the effectiveness of the proposed attacks against state-of-the-art IDSs on two publicly available datasets.
arXiv Detail & Related papers (2025-06-12T12:06:05Z)
- Evaluating the Adversarial Robustness of Detection Transformers
Despite the advancements in object detection transformers (DETRs), their robustness against adversarial attacks remains underexplored. This paper presents a comprehensive evaluation of the DETR model and its variants under both white-box and black-box adversarial attacks. Our analysis reveals high intra-network transferability among DETR variants, but limited cross-network transferability to CNN-based models.
arXiv Detail & Related papers (2024-12-25T00:31:10Z)
- CANEDERLI: On The Impact of Adversarial Training and Transferability on CAN Intrusion Detection Systems
The growing integration of vehicles with external networks has led to a surge in attacks targeting their internal Controller Area Network (CAN) bus.
As a countermeasure, various Intrusion Detection Systems (IDSs) have been suggested in the literature to prevent and mitigate these threats.
Most of these systems rely on data-driven approaches such as Machine Learning (ML) and Deep Learning (DL) models.
In this paper, we present CANEDERLI, a novel framework for securing CAN-based IDSs.
arXiv Detail & Related papers (2024-04-06T14:54:11Z)
- On the Robustness of Quality Measures for GANs
This work evaluates the robustness of quality measures of generative models such as the Inception Score (IS) and the Fréchet Inception Distance (FID).
We show that such metrics can also be manipulated by additive pixel perturbations.
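For reference, FID is a distance between Gaussian fits of real and generated feature statistics (means mu_r, mu_g and covariances Sigma_r, Sigma_g computed from an Inception embedding), which is why targeted pixel perturbations that shift those statistics can manipulate the score. The standard definition:

```latex
\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2
             + \operatorname{Tr}\!\left(\Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2}\right)
```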
arXiv Detail & Related papers (2022-01-31T06:43:09Z)
- On the benefits of robust models in modulation recognition
Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in many tasks in communications.
In other domains, like image classification, DNNs have been shown to be vulnerable to adversarial perturbations.
We propose a novel framework to test the robustness of current state-of-the-art models.
arXiv Detail & Related papers (2021-03-27T19:58:06Z)
- Graph Backdoor
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
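A toy illustration of the subgraph-trigger idea, assuming a dense NumPy adjacency matrix and node-feature array; the actual GTA triggers are optimized and input-adaptive, which this sketch does not attempt.

```python
# Toy sketch of a subgraph trigger: wire a small clique into a dense
# NumPy adjacency matrix and overwrite the chosen nodes' features.
# Real GTA triggers are optimized and input-adaptive; this is not that.
import numpy as np

def implant_trigger(adj, feat, nodes, trigger_feat):
    """Return copies of (adj, feat) with the trigger implanted."""
    adj, feat = adj.copy(), feat.copy()
    for i in nodes:
        for j in nodes:
            if i != j:
                adj[i, j] = 1          # trigger topology: a clique
    feat[list(nodes)] = trigger_feat   # trigger's descriptive features
    return adj, feat
```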
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness
We use mode connectivity to study the adversarial robustness of deep neural networks.
Our experiments cover various types of adversarial attacks applied to different network architectures and datasets.
Our results suggest that mode connectivity offers a holistic tool and practical means for evaluating and improving adversarial robustness.
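As a simplified sketch of probing paths between checkpoints, the snippet below evaluates a model along the straight line between two trained weight sets; the paper itself learns low-loss curves rather than linear paths, so treat this only as an illustration.

```python
# Simplified sketch: score a model along the straight-line path between
# two checkpoints. Assumes all state-dict entries are float tensors;
# the paper learns curved low-loss paths, not linear interpolations.
import torch

def interpolate_state(sd_a: dict, sd_b: dict, t: float) -> dict:
    """Blend two float state dicts elementwise: (1 - t) * a + t * b."""
    return {k: (1 - t) * sd_a[k] + t * sd_b[k] for k in sd_a}

def scan_path(model: torch.nn.Module, sd_a, sd_b, eval_fn, steps=11):
    """Load each blended checkpoint and score it with `eval_fn`."""
    scores = []
    for i in range(steps):
        t = i / (steps - 1)
        model.load_state_dict(interpolate_state(sd_a, sd_b, t))
        scores.append(eval_fn(model))  # e.g., accuracy under a PGD attack
    return scores
```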
arXiv Detail & Related papers (2020-04-30T19:12:50Z)
- Enhanced Adversarial Strategically-Timed Attacks against Deep Reinforcement Learning
We introduce timing-based adversarial strategies against a DRL-based navigation system by jamming physical noise patterns into selected time frames.
Our experimental results show that the adversarial timing attacks can lead to a significant performance drop.
arXiv Detail & Related papers (2020-02-20T21:39:25Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to user and entity behaviour analytics (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate such attacks by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.