Deep Anomaly Detection for Active Attacks on the Receiver in Quantum Key Distribution
- URL: http://arxiv.org/abs/2508.12749v2
- Date: Thu, 16 Oct 2025 15:34:29 GMT
- Title: Deep Anomaly Detection for Active Attacks on the Receiver in Quantum Key Distribution
- Authors: Junxuan Liu, Bingcheng Huang, Jialei Su, Qingquan Peng, Anqi Huang, et al.
- Abstract summary: We propose an anomaly detection model based on one-class machine learning to address active attacks targeting the receiver. Compared to traditional approaches, our model can be deployed with minimal cost in existing QKD networks without requiring additional optical or electrical components. Unlike multi-class machine learning algorithms, our approach does not rely on prior knowledge of specific attack types and is potentially able to detect unknown active attacks.
- Score: 1.196987515934005
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional countermeasures against attacks targeting the receiver in quantum key distribution (QKD) systems often suffer from poor compatibility with deployed infrastructure, the risk of introducing new vulnerabilities, and limited applicability to specific types of active attacks. In this work, we propose an anomaly detection (AD) model based on one-class machine learning to address active attacks targeting the receiver. By constructing a dataset from the QKD system's operational states, the AD model learns the characteristics of normal behavior under secure conditions. When an active attack occurs, the system's state deviates from the learned normal patterns and is identified as anomalous by the model. Experimental results show that the AD model achieves an area under the curve (AUC) exceeding 99%, effectively safeguarding the receiver of the QKD system. Compared to traditional approaches, our model can be deployed with minimal cost in existing QKD networks without requiring additional optical or electrical components, thus avoiding the introduction of new side channels. Furthermore, unlike multi-class machine learning algorithms, our approach does not rely on prior knowledge of specific attack types and is potentially able to detect unknown active attacks. These advantages (generality, ease of deployment, low cost, and high accuracy) make our model a practical and effective tool for protecting the receiver of QKD systems against active attacks.
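The one-class idea in the abstract (learn the receiver's normal operating states, then flag deviations) can be illustrated with a minimal sketch. This is not the authors' model: the three monitored features, the Gaussian z-score detector, and the threshold below are illustrative assumptions standing in for the paper's one-class machine-learning model.

```python
# Minimal sketch of one-class anomaly detection on QKD receiver states.
# Assumptions (not from the paper): the feature set and the z-score
# detector are stand-ins for the authors' one-class ML model.
import statistics

def fit_normal_profile(samples):
    """Learn per-feature (mean, stdev) from attack-free operation data."""
    profile = []
    for i in range(len(samples[0])):
        vals = [s[i] for s in samples]
        profile.append((statistics.fmean(vals), statistics.stdev(vals)))
    return profile

def anomaly_score(profile, state):
    """Largest absolute z-score across features; higher = more anomalous."""
    return max(abs(x - mu) / sd for (mu, sd), x in zip(profile, state))

# Hypothetical features: [detector count rate, QBER, monitor photocurrent]
normal = [[1000 + i % 7, 0.020 + 0.001 * (i % 3), 0.500 + 0.002 * (i % 5)]
          for i in range(50)]
profile = fit_normal_profile(normal)

THRESHOLD = 4.0  # flag states that deviate far from the learned profile
print(anomaly_score(profile, [1003, 0.021, 0.504]) < THRESHOLD)  # normal state
print(anomaly_score(profile, [5000, 0.150, 0.900]) > THRESHOLD)  # attack-like state
```

Because the model is trained only on attack-free data, any sufficiently large deviation is flagged, which is what gives one-class detectors their potential to catch attack types never seen during training.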
Related papers
- Cybersecurity of Quantum Key Distribution Implementations [3.1498833540989413]
We present new analysis tools and methodologies for quantum cybersecurity. We adapt the concepts of vulnerabilities, attack surfaces, and exploits from classical cybersecurity to QKD implementation attacks. This work begins to bridge the gap between current analysis methods for experimental attacks on QKD implementations and the decades-long research in the field of classical cybersecurity.
arXiv Detail & Related papers (2025-08-06T17:37:04Z)
- Preliminary Investigation into Uncertainty-Aware Attack Stage Classification [81.28215542218724]
This work addresses the problem of attack stage inference under uncertainty. We propose a classification approach based on Evidential Deep Learning (EDL), which models predictive uncertainty by outputting parameters of a Dirichlet distribution over possible stages. Preliminary experiments in a simulated environment demonstrate that the proposed model can accurately infer the stage of an attack with confidence.
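As a hedged sketch of the EDL idea summarized above: an evidential classifier outputs Dirichlet parameters over the K attack stages, from which both expected stage probabilities and a scalar uncertainty can be read off (here in the common subjective-logic form, uncertainty = K / S). The alpha vectors below are invented examples, not outputs of the paper's model.

```python
# Sketch of Dirichlet-based uncertainty as used in Evidential Deep Learning.
# The alpha vectors below are made-up examples, not the paper's model outputs.
def edl_prediction(alpha):
    """Expected stage probabilities and total uncertainty from Dirichlet params."""
    K = len(alpha)
    S = sum(alpha)                   # Dirichlet strength (total evidence mass)
    probs = [a / S for a in alpha]   # expected class probabilities
    uncertainty = K / S              # high when little evidence was collected
    return probs, uncertainty

probs, u_conf = edl_prediction([1.0, 1.0, 20.0, 1.0])  # strong evidence for stage 3
_, u_flat = edl_prediction([1.0, 1.0, 1.0, 1.0])       # no evidence: max uncertainty
```

Low evidence inflates the uncertainty term, which is what lets such a classifier abstain instead of guessing a stage confidently.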
arXiv Detail & Related papers (2025-08-01T06:58:00Z)
- Entangled Threats: A Unified Kill Chain Model for Quantum Machine Learning Security [32.73124984242397]
Quantum Machine Learning (QML) systems inherit vulnerabilities from classical machine learning. We present a detailed taxonomy of QML attack vectors mapped to corresponding stages in a quantum-aware kill chain framework. This work provides a foundation for more realistic threat modeling and proactive security-in-depth design in the emerging field of quantum machine learning.
arXiv Detail & Related papers (2025-07-11T14:25:36Z)
- CANTXSec: A Deterministic Intrusion Detection and Prevention System for CAN Bus Monitoring ECU Activations [53.036288487863786]
We propose CANTXSec, the first deterministic Intrusion Detection and Prevention system based on physical ECU activations. It detects and prevents classical attacks in the CAN bus, while detecting advanced attacks that have been less investigated in the literature. We prove the effectiveness of our solution on a physical testbed, where we achieve 100% detection accuracy in both classes of attacks while preventing 100% of FIAs.
arXiv Detail & Related papers (2025-05-14T13:37:07Z)
- Intelligent Attacks and Defense Methods in Federated Learning-enabled Energy-Efficient Wireless Networks [16.816730878868373]
Federated learning (FL) is a promising technique for learning-based functions in wireless networks. FL may increase the risk of exposure to malicious attacks, where attacks on a local model may spread to other models. It is critical to evaluate the effect of malicious attacks and develop advanced defense techniques for FL-enabled wireless networks.
arXiv Detail & Related papers (2025-04-25T17:40:35Z)
- Learning in Multiple Spaces: Few-Shot Network Attack Detection with Metric-Fused Prototypical Networks [47.18575262588692]
We propose a novel Multi-Space Prototypical Learning (MSPL) framework tailored for few-shot attack detection. By leveraging Polyak-averaged prototype generation, the framework stabilizes the learning process and effectively adapts to rare and zero-day attacks. Experimental results on benchmark datasets demonstrate that MSPL outperforms traditional approaches in detecting low-profile and novel attack types.
arXiv Detail & Related papers (2024-12-28T00:09:46Z)
- Deep-learning-based continuous attacks on quantum key distribution protocols [0.0]
In this paper, we design a new individual attack scheme that exploits continuous measurement together with the powerful pattern recognition capacities of deep recurrent neural networks. Our attack only slightly increases the Quantum Bit Error Rate (QBER) of a noisy channel and allows the eavesdropper to infer a significant part of the sifted key.
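The QBER mentioned in this summary is simply the fraction of mismatched bits in the sifted key. A one-function sketch (the bit strings below are made-up examples, not data from the paper):

```python
def qber(alice_bits, bob_bits):
    """Quantum Bit Error Rate: fraction of mismatched bits in the sifted key."""
    errors = sum(a != b for a, b in zip(alice_bits, bob_bits))
    return errors / len(alice_bits)

# Made-up sifted keys with 2 mismatches out of 8 bits -> QBER = 0.25
print(qber([0, 1, 1, 0, 1, 0, 0, 1], [0, 1, 0, 0, 1, 1, 0, 1]))
```

A stealthy attack of the kind described keeps this ratio close to the channel's natural noise floor, which is why QBER monitoring alone may not reveal it.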
arXiv Detail & Related papers (2024-08-22T17:39:26Z)
- Multi-agent Reinforcement Learning-based Network Intrusion Detection System [3.4636217357968904]
Intrusion Detection Systems (IDS) play a crucial role in ensuring the security of computer networks.
We propose a novel multi-agent reinforcement learning (RL) architecture, enabling automatic, efficient, and robust network intrusion detection.
Our solution introduces a resilient architecture designed to accommodate the addition of new attacks and effectively adapt to changes in existing attack patterns.
arXiv Detail & Related papers (2024-07-08T09:18:59Z)
- Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning [49.242828934501986]
Multimodal contrastive learning has emerged as a powerful paradigm for building high-quality features.
Backdoor attacks subtly embed malicious behaviors within the model during training.
We introduce an innovative token-based localized forgetting training regime.
arXiv Detail & Related papers (2024-03-24T18:33:15Z)
- usfAD Based Effective Unknown Attack Detection Focused IDS Framework [3.560574387648533]
Internet of Things (IoT) and Industrial Internet of Things (IIoT) have led to an increasing range of cyber threats.
For more than a decade, researchers have delved into supervised machine learning techniques to develop Intrusion Detection Systems (IDS).
An IDS trained and tested on known datasets fails to detect zero-day or unknown attacks.
We propose two strategies for semi-supervised learning based IDS where training samples of attacks are not required.
arXiv Detail & Related papers (2024-03-17T11:49:57Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Autonomous Recognition of Erroneous Raw Key Bit Bias in Quantum Key Distribution [0.0]
A type of error that can occur with regard to the ratio of bit values in the raw key is presented. A mechanism by which errors of this type can be autonomously recognised is given. A two-part countermeasure that can be put in place to mitigate against errors of this type is also given.
arXiv Detail & Related papers (2023-05-29T10:43:57Z)
- CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks [58.29502185344086]
In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks.
It is important to provide provable guarantees for deep learning models against semantically meaningful input transformations.
We propose a new universal probabilistic certification approach based on Chernoff-Cramer bounds.
arXiv Detail & Related papers (2021-09-22T12:46:04Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose the adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce DAAIN, a novel technique to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- Learning-Based Vulnerability Analysis of Cyber-Physical Systems [10.066594071800337]
This work focuses on the use of deep learning for vulnerability analysis of cyber-physical systems.
We consider a control architecture widely used in CPS (e.g., robotics), where the low-level control is based on, e.g., an extended Kalman filter (EKF) and an anomaly detector.
To facilitate analyzing the impact potential sensing attacks could have, our objective is to develop learning-enabled attack generators.
arXiv Detail & Related papers (2021-03-10T06:52:26Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to user and entity behavioural analytics (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate this attack by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.