Mitigation of Channel Tampering Attacks in Continuous-Variable Quantum Key Distribution
- URL: http://arxiv.org/abs/2401.15898v2
- Date: Wed, 12 Jun 2024 04:02:55 GMT
- Title: Mitigation of Channel Tampering Attacks in Continuous-Variable Quantum Key Distribution
- Authors: Sebastian P. Kish, Chandra Thapa, Mikhael Sayat, Hajime Suzuki, Josef Pieprzyk, Seyit Camtepe
- Abstract summary: Because CV-QKD relies on a public quantum channel, it remains vulnerable to communication disruption by adversaries employing Denial-of-Service (DoS) attacks.
Inspired by DoS attacks, this paper introduces a novel threat in CV-QKD called the Channel Amplification (CA) attack.
To counter this threat, we propose a detection and mitigation strategy.
- Score: 8.840486611542584
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite significant advancements in continuous-variable quantum key distribution (CV-QKD), practical CV-QKD systems can be compromised by various attacks. Consequently, identifying new attack vectors and countermeasures for CV-QKD implementations is important for the continued robustness of CV-QKD. In particular, because CV-QKD relies on a public quantum channel, it remains vulnerable to communication disruption by adversaries employing Denial-of-Service (DoS) attacks. Inspired by DoS attacks, this paper introduces a novel threat in CV-QKD called the Channel Amplification (CA) attack, wherein Eve manipulates the communication channel through amplification. We specifically model this attack in a CV-QKD optical fiber setup. To counter this threat, we propose a detection and mitigation strategy. Detection involves a machine learning (ML) model based on a decision tree classifier that classifies various channel tampering attacks, including CA and DoS attacks. For mitigation, Bob post-selects quadrature data according to the classified attack type and frequency. Our ML model exhibits high accuracy in distinguishing and categorizing these attacks. The CA attack's impact on the secret key rate (SKR) is explored with respect to Eve's location and the relative intensity noise of the local oscillator (LO). The proposed mitigation strategy improves the attacked SKR for CA attacks and, in some cases, for hybrid CA-DoS attacks. Our study marks a novel application of both ML classification and post-selection in this context. These findings are important for enhancing the robustness of CV-QKD systems against emerging threats on the channel.
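The paper itself ships no code, but the pipeline the abstract describes can be illustrated with a minimal sketch: a scikit-learn decision tree classifies channel-tampering attacks from per-block channel features, and Bob then post-selects only the blocks classified as unattacked. The attack classes, feature choices (transmittance, excess noise), and all numbers below are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch, NOT the authors' implementation: synthetic per-block
# channel statistics stand in for measured quadrature features, and the
# attack classes and feature values are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
LABELS = ["no_attack", "CA", "DoS", "hybrid_CA_DoS"]  # assumed attack classes

def synth_features(label, n):
    """Hypothetical per-block features: [mean transmittance, excess noise]."""
    loc = {"no_attack": (0.50, 0.010), "CA": (0.70, 0.030),
           "DoS": (0.20, 0.010), "hybrid_CA_DoS": (0.45, 0.040)}[label]
    return rng.normal(loc, (0.05, 0.005), size=(n, 2))

# Train the decision tree to classify channel-tampering attacks.
X = np.vstack([synth_features(l, 500) for l in LABELS])
y = np.repeat(np.arange(len(LABELS)), 500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print("attack-classification accuracy:", clf.score(X_te, y_te))

# Mitigation sketch: Bob post-selects quadrature blocks, keeping only those
# the classifier labels as unattacked before key extraction.
blocks = np.vstack([synth_features("no_attack", 50), synth_features("CA", 50)])
keep = clf.predict(blocks) == LABELS.index("no_attack")
print(f"kept {keep.sum()} of {len(blocks)} quadrature blocks")
```

A shallow tree suffices here only because the assumed attack signatures are low-dimensional and axis-aligned; the paper's classifier additionally distinguishes attack frequency.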
Related papers
- Composable free-space continuous-variable quantum key distribution using discrete modulation [3.864405940022529]
Continuous-variable (CV) quantum key distribution (QKD) enables quantum-secure communication.
We present a CV QKD system using discrete modulation that is especially designed for urban atmospheric channels.
This will allow CV QKD networks to be expanded beyond the existing fiber backbone.
arXiv Detail & Related papers (2024-10-16T18:02:53Z)
- Deep-learning-based continuous attacks on quantum key distribution protocols [0.0]
We design a new attack scheme that exploits continuous measurement together with the powerful pattern recognition capacities of deep recurrent neural networks.
We show that, when applied to the BB84 protocol, our attack can be difficult to notice while still allowing the spy to extract significant information about the states of the qubits sent in the quantum communication channel.
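For intuition, a minimal sketch of the recurrent-network ingredient, under stated assumptions: an LSTM that infers a hidden label from a continuously measured signal trace. The architecture, synthetic traces, and training setup are placeholders, not the attack from the paper.

```python
# Hedged sketch: an LSTM classifying synthetic measurement traces; real
# homodyne/weak-measurement records and the paper's architecture are assumed away.
import torch
import torch.nn as nn

class TraceClassifier(nn.Module):
    def __init__(self, hidden=32, n_states=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_states)

    def forward(self, x):            # x: (batch, time, 1)
        _, (h, _) = self.lstm(x)     # final hidden state summarizes the trace
        return self.head(h[-1])      # logits over candidate states

# Synthetic traces: the two states differ only in mean signal level plus noise.
torch.manual_seed(0)
y = torch.randint(0, 2, (256,))
x = 0.2 * (2 * y.float() - 1).view(-1, 1, 1) + torch.randn(256, 100, 1)

model = TraceClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(50):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
print("train accuracy:", (model(x).argmax(1) == y).float().mean().item())
```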
arXiv Detail & Related papers (2024-08-22T17:39:26Z)
- AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with a >99% detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks.
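A hedged sketch of the underlying detection idea only (not the actual CLIP fine-tuning): query-based attacks issue many near-duplicate queries, so a client whose successive query embeddings are overly similar can be flagged. The random-projection encoder and threshold below are placeholder assumptions.

```python
# Sketch of similarity-based detection of query-based attacks; the encoder is
# a placeholder random projection, not a contrastively tuned CLIP encoder.
import numpy as np

def embed(images):
    """Placeholder image encoder: fixed random projection + L2 normalization."""
    rng = np.random.default_rng(0)
    W = rng.normal(size=(images.shape[1], 64))
    z = images @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def flag_attack(query_stream, threshold=0.95):
    """Flag a client whose consecutive queries embed too close together."""
    z = embed(query_stream)
    sims = np.sum(z[1:] * z[:-1], axis=1)   # cosine similarity of neighbors
    return bool(np.any(sims > threshold))

rng = np.random.default_rng(1)
benign = rng.normal(size=(10, 3072))                  # unrelated images
base = rng.normal(size=3072)
attack = base + 0.01 * rng.normal(size=(10, 3072))    # tiny query perturbations
print("benign flagged:", flag_attack(benign))         # expected: False
print("attack flagged:", flag_attack(attack))         # expected: True
```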
arXiv Detail & Related papers (2024-08-04T09:53:50Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
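A simplified sketch of the frequency-domain idea, under assumptions: low-frequency DCT coefficients serve as a fingerprint of each client update, and a median-distance filter stands in for the paper's clustering step.

```python
# Hedged sketch of frequency-domain filtering of model updates; the DCT
# fingerprint and median-distance filter are simplifying assumptions.
import numpy as np
from scipy.fft import dct

def fingerprint(update, k=8):
    """Low-frequency DCT coefficients of a flattened model update."""
    return dct(update, norm="ortho")[:k]

def aggregate(updates, keep_frac=0.6):
    fps = np.array([fingerprint(u) for u in updates])
    dist = np.linalg.norm(fps - np.median(fps, axis=0), axis=1)
    keep = np.argsort(dist)[: int(len(updates) * keep_frac)]  # drop outliers
    return np.mean([updates[i] for i in keep], axis=0), keep

rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.1, 1000) for _ in range(8)]
poisoned = [rng.normal(2.0, 0.1, 1000) for _ in range(2)]  # shifted updates
agg, kept = aggregate(honest + poisoned)
print("kept clients:", sorted(kept))  # poisoned indices 8, 9 should be excluded
```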
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- State-Blocking Side-Channel Attacks and Autonomous Fault Detection in Quantum Key Distribution [0.0]
Side-channel attacks allow an eavesdropper to exploit insecurities in the practical implementation of QKD systems.
We discuss a scheme to autonomously detect such an attack during an ongoing QKD session.
We present how Alice and Bob can put a countermeasure in place to continue using the QKD system once a detection is made.
arXiv Detail & Related papers (2023-05-29T10:43:57Z)
- Attacking Important Pixels for Anchor-free Detectors [47.524554948433995]
Existing adversarial attacks on object detection focus on attacking anchor-based detectors.
We propose the first adversarial attack dedicated to anchor-free detectors.
Our proposed methods achieve state-of-the-art attack performance and transferability on both object detection and human pose estimation tasks.
arXiv Detail & Related papers (2023-01-26T23:03:03Z)
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
- Experimental vulnerability analysis of QKD based on attack ratings [0.8902959815221527]
We consider the use of attack ratings in the context of QKD security evaluation.
We conduct an experimental vulnerability assessment of CV-QKD against saturation attacks.
arXiv Detail & Related papers (2020-10-15T15:08:31Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
- Defense for Black-box Attacks on Anti-spoofing Models by Self-Supervised Learning [71.17774313301753]
We explore the robustness of self-supervised learned high-level representations by using them in the defense against adversarial attacks.
Experimental results on the ASVspoof 2019 dataset demonstrate that high-level representations extracted by Mockingjay can prevent the transferability of adversarial examples.
arXiv Detail & Related papers (2020-06-05T03:03:06Z)