Experimental vulnerability analysis of QKD based on attack ratings
- URL: http://arxiv.org/abs/2010.07815v2
- Date: Fri, 18 Dec 2020 16:26:14 GMT
- Title: Experimental vulnerability analysis of QKD based on attack ratings
- Authors: Rupesh Kumar, Francesco Mazzoncini, Hao Qin and Romain Alléaume
- Abstract summary: We consider the use of attack ratings in the context of QKD security evaluation.
We conduct an experimental vulnerability assessment of CV-QKD against saturation attacks.
- Score: 0.8902959815221527
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inspired by the methodology used for classical cryptographic hardware, we
consider the use of attack ratings in the context of QKD security evaluation.
To illustrate the relevance of this approach, we conduct an experimental
vulnerability assessment of CV-QKD against saturation attacks, for two
different attack strategies. The first strategy relies on inducing detector
saturation by performing a large coherent displacement. This strategy is
experimentally challenging and therefore translates into a high attack rating.
We also propose and experimentally demonstrate a second attack strategy that
simply consists in saturating the detector with an external laser. The low
rating we obtain indicates that this attack constitutes a primary threat for
practical CV-QKD systems. These results highlight the benefits of combining
theoretical security considerations with vulnerability analysis based on attack
ratings, in order to guide the design and engineering of practical QKD systems
towards the highest possible security standards.
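The saturation mechanism described in the abstract can be illustrated with a minimal numerical sketch: a homodyne detector with a finite linear range clips large quadrature values, and a large displacement (e.g. from an external laser) biases the measured statistics. The detector model, the `SATURATION_LIMIT` value, and the displacement amplitude below are hypothetical illustration choices, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear-range limit of the homodyne detector (arbitrary units).
SATURATION_LIMIT = 3.0

def homodyne_measure(quadratures, displacement=0.0):
    """Measured quadratures clip once they leave the detector's linear range."""
    raw = quadratures + displacement
    return np.clip(raw, -SATURATION_LIMIT, SATURATION_LIMIT)

# Honest signal: zero-mean Gaussian quadratures well inside the linear range.
signal = rng.normal(0.0, 1.0, 100_000)

# Attack: a large displacement pushes the working point toward the saturation
# limit, so a sizeable fraction of outcomes clip at the boundary.
honest = homodyne_measure(signal)
attacked = homodyne_measure(signal, displacement=2.5)

# Saturation shrinks the measured variance, which can bias the receiver's
# excess-noise estimation downward.
print(np.var(honest), np.var(attacked))
```

In this toy model the attacked variance drops well below the honest one, which is the kind of statistical bias a saturation attack exploits.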
Related papers
- Golden Ratio Search: A Low-Power Adversarial Attack for Deep Learning based Modulation Classification [8.187445866881637]
We propose a minimal-power white-box adversarial attack for Deep Learning based Automatic Modulation Classification (AMC).
We evaluate the efficacy of the proposed method by comparing it with existing adversarial attack approaches.
Experimental results demonstrate that the proposed attack is powerful, requires minimal power, and can be generated in less time.
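A golden-ratio search for the smallest perturbation power that flips a classifier can be sketched as follows. The `causes_misclassification` predicate and the 0.37 threshold are hypothetical stand-ins for a real model query, and the assumption that misclassification is monotone in power is ours, not the paper's:

```python
import math

PHI = (math.sqrt(5) - 1) / 2  # inverse golden ratio, about 0.618

def golden_min_power(causes_misclassification, lo=0.0, hi=1.0, tol=1e-4):
    """Shrink [lo, hi] by golden-ratio steps to find the smallest power
    that flips the classifier, assuming success is monotone in power."""
    while hi - lo > tol:
        mid = hi - PHI * (hi - lo)
        if causes_misclassification(mid):
            hi = mid  # success at mid: a smaller power may still suffice
        else:
            lo = mid  # failure at mid: more power is needed
    return hi

# Toy stand-in: suppose any power above 0.37 flips the (hypothetical) model.
threshold = 0.37
found = golden_min_power(lambda p: p >= threshold)
print(round(found, 3))
```

Each iteration costs one model query, which is why a low-query search like this keeps the attack cheap compared to fixed-step scans.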
arXiv Detail & Related papers (2024-09-17T17:17:54Z)
- AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with a >99% detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks.
arXiv Detail & Related papers (2024-08-04T09:53:50Z)
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defense mainly focuses on the known attacks, but the adversarial robustness to the unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Mitigation of Channel Tampering Attacks in Continuous-Variable Quantum Key Distribution [8.840486611542584]
In CV-QKD, vulnerability to communication disruption persists from potential adversaries employing Denial-of-Service (DoS) attacks.
Inspired by DoS attacks, this paper introduces a novel threat in CV-QKD called the Channel Amplification (CA) attack.
To counter this threat, we propose a detection and mitigation strategy.
arXiv Detail & Related papers (2024-01-29T05:48:51Z)
- Empirical Risk-aware Machine Learning on Trojan-Horse Detection for Trusted Quantum Key Distribution Networks [31.857236131842843]
Quantum key distribution (QKD) is a cryptographic technique that offers high levels of data security during transmission.
The existence of a gap between theoretical concepts and practical implementation has raised concerns about the trustworthiness of QKD networks.
We propose the implementation of risk-aware machine learning techniques that present risk analysis for Trojan-horse attacks over the time-variant quantum channel.
arXiv Detail & Related papers (2024-01-26T03:36:13Z)
- Confidence-driven Sampling for Backdoor Attacks [49.72680157684523]
Backdoor attacks aim to surreptitiously insert malicious triggers into DNN models, granting unauthorized control during testing scenarios.
Existing methods lack robustness against defense strategies and predominantly focus on enhancing trigger stealthiness while randomly selecting poisoned samples.
We introduce a straightforward yet highly effective sampling methodology that leverages confidence scores. Specifically, it selects samples with lower confidence scores, significantly increasing the challenge for defenders in identifying and countering these attacks.
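The low-confidence selection idea can be sketched in a few lines: instead of poisoning uniformly at random, pick the samples a surrogate model is least confident about. The confidence scores and the 5% poison rate below are hypothetical illustration values:

```python
import numpy as np

rng = np.random.default_rng(1)

def select_poison_candidates(confidences, poison_rate=0.05):
    """Pick the lowest-confidence samples as poisoning targets, rather than
    choosing them uniformly at random (the idea sketched in the abstract)."""
    n_poison = max(1, int(len(confidences) * poison_rate))
    # argsort ascending: the lowest-confidence indices come first
    return np.argsort(confidences)[:n_poison]

# Hypothetical per-sample confidence scores from a surrogate model.
conf = rng.uniform(0.5, 1.0, size=1000)
idx = select_poison_candidates(conf, poison_rate=0.05)
print(len(idx))
```

Low-confidence samples sit near the decision boundary, so a trigger planted there is harder for confidence-based defenses to single out.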
arXiv Detail & Related papers (2023-10-08T18:57:36Z)
- Attacking Important Pixels for Anchor-free Detectors [47.524554948433995]
Existing adversarial attacks on object detection focus on attacking anchor-based detectors.
We propose the first adversarial attack dedicated to anchor-free detectors.
Our proposed methods achieve state-of-the-art attack performance and transferability on both object detection and human pose estimation tasks.
arXiv Detail & Related papers (2023-01-26T23:03:03Z)
- Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
arXiv Detail & Related papers (2022-12-30T18:45:23Z)
- Deep-Attack over the Deep Reinforcement Learning [26.272161868927004]
Adversarial attack developments have made reinforcement learning more vulnerable.
We propose a reinforcement learning-based attacking framework that jointly considers attack effectiveness and stealthiness.
We also propose a new metric to evaluate the performance of the attack model in these two aspects.
arXiv Detail & Related papers (2022-05-02T10:58:19Z)
- Balancing detectability and performance of attacks on the control channel of Markov Decision Processes [77.66954176188426]
We investigate the problem of designing optimal stealthy poisoning attacks on the control channel of Markov decision processes (MDPs).
This research is motivated by the recent interest of the research community for adversarial and poisoning attacks applied to MDPs, and reinforcement learning (RL) methods.
arXiv Detail & Related papers (2021-09-15T09:13:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.