Explainable and Resilient ML-Based Physical-Layer Attack Detectors
- URL: http://arxiv.org/abs/2509.26530v1
- Date: Tue, 30 Sep 2025 17:05:33 GMT
- Title: Explainable and Resilient ML-Based Physical-Layer Attack Detectors
- Authors: Aleksandra Knapińska, Marija Furdek
- Abstract summary: We analyze the inner workings of various classifiers trained to alert about physical-layer intrusions. We evaluate the detectors' resilience to malicious parameter noising. This work serves as a design guideline for developing fast and robust detectors trained on available network monitoring data.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detection of emerging attacks on network infrastructure is a critical aspect of security management. To meet the growing scale and complexity of modern threats, machine learning (ML) techniques offer valuable tools for automating the detection of malicious activities. However, as these techniques become more complex, their internal operations grow increasingly opaque. In this context, we address the need for explainable physical-layer attack detection methods. First, we analyze the inner workings of various classifiers trained to alert about physical layer intrusions, examining how the influence of different monitored parameters varies depending on the type of attack being detected. This analysis not only improves the interpretability of the models but also suggests ways to enhance their design for increased speed. In the second part, we evaluate the detectors' resilience to malicious parameter noising. The results highlight a key trade-off between model speed and resilience. This work serves as a design guideline for developing fast and robust detectors trained on available network monitoring data.
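The two analyses the abstract describes, per-parameter influence on the trained classifier and resilience to malicious parameter noising, can be sketched with standard tooling. The snippet below is a minimal illustration on synthetic data, not the paper's actual setup: the monitored parameter names, the random-forest model, and the noising scheme are all assumptions chosen for the example.

```python
# Illustrative sketch: explainability via permutation importance, plus a
# resilience check under parameter noising. Feature names are hypothetical
# physical-layer monitoring parameters, not the paper's real telemetry.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
features = ["osnr", "ber", "rx_power", "chromatic_dispersion"]
X = rng.normal(size=(n, len(features)))
# Synthetic labels that depend mostly on the first two parameters,
# mimicking an attack signature visible in OSNR and BER.
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Explainability: how much does each monitored parameter influence detection?
imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
for name, score in zip(features, imp.importances_mean):
    print(f"{name}: {score:.3f}")

# Resilience: accuracy drop when an adversary noises one parameter.
X_noised = X_te.copy()
X_noised[:, 0] += rng.normal(scale=2.0, size=len(X_te))  # noise the OSNR channel
print("clean accuracy: ", clf.score(X_te, y_te))
print("noised accuracy:", clf.score(X_noised, y_te))
```

Dropping low-importance parameters from the input is one way such an analysis can "enhance design for increased speed", at the cost of giving an adversary fewer channels that matter, which is the speed-versus-resilience trade-off the abstract highlights.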
Related papers
- Multi-Agent Collaborative Intrusion Detection for Low-Altitude Economy IoT: An LLM-Enhanced Agentic AI Framework [60.72591149679355]
The rapid expansion of low-altitude economy Internet of Things (LAE-IoT) networks has created unprecedented security challenges. Traditional intrusion detection systems fail to tackle the unique characteristics of aerial IoT environments. We introduce a large language model (LLM)-enabled agentic AI framework for enhancing intrusion detection in LAE-IoT networks.
arXiv Detail & Related papers (2026-01-25T12:47:25Z) - Techniques of Modern Attacks [51.56484100374058]
Advanced Persistent Threats (APTs) represent a complex method of attack aimed at specific targets. I investigate both the attack life cycle and cutting-edge detection and defense strategies proposed in recent academic research. I aim to highlight the strengths and limitations of each approach and propose more adaptive APT mitigation strategies.
arXiv Detail & Related papers (2026-01-19T22:15:25Z) - MirGuard: Towards a Robust Provenance-based Intrusion Detection System Against Graph Manipulation Attacks [13.92935628832727]
MirGuard is an anomaly detection framework that combines logic-aware multi-view augmentation with contrastive representation learning. MirGuard significantly outperforms state-of-the-art detectors in robustness against various graph manipulation attacks.
arXiv Detail & Related papers (2025-08-14T13:35:51Z) - Beyond Vulnerabilities: A Survey of Adversarial Attacks as Both Threats and Defenses in Computer Vision Systems [5.787505062263962]
Adversarial attacks against computer vision systems have emerged as a critical research area that challenges the fundamental assumptions about neural network robustness and security. This comprehensive survey examines the evolving landscape of adversarial techniques, revealing their dual nature as both sophisticated security threats and valuable defensive tools.
arXiv Detail & Related papers (2025-08-03T17:02:05Z) - Exploiting Edge Features for Transferable Adversarial Attacks in Distributed Machine Learning [54.26807397329468]
This work explores a previously overlooked vulnerability in distributed deep learning systems. An adversary who intercepts the intermediate features transmitted between distributed components can still pose a serious threat. We propose an exploitation strategy specifically designed for distributed settings.
arXiv Detail & Related papers (2025-07-09T20:09:00Z) - Feature Selection via GANs (GANFS): Enhancing Machine Learning Models for DDoS Mitigation [0.0]
We introduce a novel Generative Adversarial Network-based Feature Selection (GANFS) method for detecting Distributed Denial of Service (DDoS) attacks. By training a GAN exclusively on attack traffic, GANFS effectively ranks feature importance without relying on full supervision. Results point to the potential of integrating generative learning models into cybersecurity pipelines to build more adaptive and scalable detection systems.
arXiv Detail & Related papers (2025-04-21T20:27:33Z) - Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z) - A Comprehensive Study of the Robustness for LiDAR-based 3D Object Detectors against Adversarial Attacks [84.10546708708554]
3D object detectors are increasingly crucial for security-critical tasks.
It is imperative to understand their robustness against adversarial attacks.
This paper presents the first comprehensive evaluation and analysis of the robustness of LiDAR-based 3D detectors under adversarial attacks.
arXiv Detail & Related papers (2022-12-20T13:09:58Z) - Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z) - Learning-Based Vulnerability Analysis of Cyber-Physical Systems [10.066594071800337]
This work focuses on the use of deep learning for vulnerability analysis of cyber-physical systems.
We consider a control architecture widely used in CPS (e.g., robotics), where the low-level control is based on the extended Kalman filter (EKF) and an anomaly detector.
To facilitate analysis of the impact that potential sensing attacks could have, our objective is to develop learning-enabled attack generators.
arXiv Detail & Related papers (2021-03-10T06:52:26Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to harden the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
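The coverage-based monitoring idea in the last entry can be sketched very simply: record the activation ranges a layer exhibits on trusted data, then flag inputs whose activations leave those ranges at run time. The toy one-layer "network", the range-based coverage criterion, and the margin below are illustrative assumptions, not the paper's actual monitor.

```python
# Minimal sketch of a coverage-style runtime monitor: calibrate per-neuron
# activation ranges on in-distribution inputs, then flag out-of-range inputs.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 8))  # stand-in weights for one hidden layer

def activations(x):
    """ReLU activations of the monitored layer for input x."""
    return np.maximum(W.T @ x, 0.0)

# Calibration: record per-neuron activation ranges on trusted inputs.
calib = rng.normal(size=(500, 4))
acts = np.array([activations(x) for x in calib])
lo, hi = acts.min(axis=0), acts.max(axis=0)

def is_suspicious(x, margin=0.1):
    """Flag inputs whose activations leave the calibrated ranges."""
    a = activations(x)
    return bool(np.any(a < lo - margin) or np.any(a > hi + margin))

print(is_suspicious(calib[0]))  # False: calibration samples lie in range by construction
# An input aligned with neuron 0's weight vector drives that activation far above `hi`.
outlier = 1000.0 * W[:, 0]
print(is_suspicious(outlier))
```

Such a monitor is cheap enough to run alongside the detector itself, which is in the spirit of the lightweight architecture the entry describes.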
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.