Large-Scale FPGA-Based Privacy Amplification Exceeding $10^8$ Bits for Quantum Key Distribution
- URL: http://arxiv.org/abs/2503.09331v1
- Date: Wed, 12 Mar 2025 12:25:13 GMT
- Title: Large-Scale FPGA-Based Privacy Amplification Exceeding $10^8$ Bits for Quantum Key Distribution
- Authors: Xi Cheng, Hao-kun Mao, Hong-wei Xu, Qiong Li
- Abstract summary: Privacy Amplification (PA) is indispensable in Quantum Key Distribution (QKD) post-processing. Due to limited resources, input and output sizes remain the primary bottleneck in FPGA-based PA schemes. We present a large-scale FPGA-based PA scheme that supports both input block sizes and output key sizes exceeding $10^8$ bits.
- Score: 7.547771404171612
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Privacy Amplification (PA) is indispensable in Quantum Key Distribution (QKD) post-processing, as it eliminates information leakage to eavesdroppers. Field-programmable gate arrays (FPGAs) are highly attractive for QKD systems due to their flexibility and high integration. However, due to limited resources, input and output sizes remain the primary bottleneck in FPGA-based PA schemes for Discrete Variable (DV)-QKD systems. In this paper, we present a large-scale FPGA-based PA scheme that supports both input block sizes and output key sizes exceeding $10^8$ bits, effectively addressing the challenges posed by the finite-size effect. To accommodate the large input and output sizes, we propose a novel PA algorithm and prove its security. We implement and evaluate this scheme on a Xilinx XCKU095 FPGA platform. Experimental results demonstrate that our PA implementation can handle an input block size of $10^8$ bits with flexible output sizes up to the input size. For DV-QKD systems, our PA scheme supports an input block size nearly two orders of magnitude larger than current FPGA-based PA schemes, significantly mitigating the impact of the finite-size effect on the final secure key rate.
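The paper's own large-scale PA algorithm and its security proof are in the paper itself; what can be sketched here is the standard baseline such schemes build on: compressing the error-corrected key with a randomly seeded universal hash, typically a Toeplitz matrix, so that the eavesdropper's residual information about the output is negligible. The NumPy sketch below illustrates only that baseline; the function name, toy block sizes, and PRNG seeding (a stand-in for the shared public random string) are illustrative assumptions, and at $10^8$-bit blocks the direct convolution would be replaced by an FFT/NTT-based product in fixed-point hardware arithmetic.

```python
import numpy as np

def toeplitz_pa(raw_key: np.ndarray, out_len: int, seed: int = 0) -> np.ndarray:
    """Compress a partially secret bit string with a random Toeplitz hash.

    raw_key : 0/1 array of length n (the sifted, error-corrected key)
    out_len : m, the final key length chosen from the leakage estimate
    Returns an m-bit final key.
    """
    n, m = raw_key.size, out_len
    rng = np.random.default_rng(seed)      # stand-in for the shared public random string
    s = rng.integers(0, 2, n + m - 1)      # n+m-1 bits fully define the Toeplitz matrix
    # A Toeplitz matrix-vector product over GF(2) is a 1-D convolution mod 2:
    # (T x)[i] = sum_j s[i - j + n - 1] * x[j]  (mod 2)
    full = np.convolve(s, raw_key)         # length 2n + m - 2
    return full[n - 1 : n - 1 + m] % 2

# Toy usage: hash a 32-bit raw key down to an 8-bit final key.
raw = np.random.default_rng(1).integers(0, 2, 32)
final = toeplitz_pa(raw, out_len=8)
```

The output length $m$ is set from the finite-size leakage estimate, which is exactly why large input blocks matter: the finite-size penalty per bit shrinks as $n$ grows.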
Related papers
- Joint Transmit and Pinching Beamforming for PASS: Optimization-Based or Learning-Based? [89.05848771674773]
A novel pinching-antenna system (PASS)-enabled downlink multi-user multiple-input single-output (MISO) framework is proposed. It consists of multiple waveguides equipped with numerous low-cost pinching antennas (PAs). The positions of the PAs can be reconfigured to adjust both the large-scale path losses and the spatial configuration of the signals.
arXiv Detail & Related papers (2025-02-12T18:54:10Z) - AdaLog: Post-Training Quantization for Vision Transformers with Adaptive Logarithm Quantizer [54.713778961605115]
Vision Transformer (ViT) has become one of the most prevailing fundamental backbone networks in the computer vision community.
We propose a novel non-uniform quantizer, dubbed the Adaptive Logarithm (AdaLog) quantizer.
arXiv Detail & Related papers (2024-07-17T18:38:48Z) - Enhancing Dropout-based Bayesian Neural Networks with Multi-Exit on FPGA [20.629635991749808]
This paper proposes an algorithm and hardware co-design framework that can generate field-programmable gate array (FPGA)-based accelerators for efficient BayesNNs.
At the algorithm level, we propose novel multi-exit dropout-based BayesNNs with reduced computational and memory overheads.
At the hardware level, this paper introduces a transformation framework that can generate FPGA-based accelerators for the proposed efficient BayesNNs.
arXiv Detail & Related papers (2024-06-20T17:08:42Z) - SWAT: Scalable and Efficient Window Attention-based Transformers Acceleration on FPGAs [3.302913401404089]
Sliding window-based static sparse attention mitigates the problem by limiting the attention scope of the input tokens.
We propose a dataflow-aware FPGA-based accelerator design, SWAT, that efficiently leverages the sparsity to achieve scalable performance for long input.
arXiv Detail & Related papers (2024-05-27T10:25:08Z) - Accurate Block Quantization in LLMs with Outliers [0.6138671548064355]
The demand for inference on extremely large-scale LLMs has grown enormously in recent months.
The problem is aggravated by the explosive rise in the lengths of the sequences being processed.
Various quantization techniques have been proposed that allow accurate quantization for both weights and activations.
arXiv Detail & Related papers (2024-03-29T12:15:06Z) - Broadband parametric amplification in DARTWARS [64.98268713737]
Traveling-Wave Parametric Amplifiers (TWPAs) may be especially suitable for practical applications due to their multi-Gigahertz amplification bandwidth.
The DARTWARS project aims to develop a KITWPA capable of achieving $20\,$dB of amplification.
The measurements revealed an average amplification of approximately $9\,$dB across a $2\,$GHz bandwidth for a KITWPA spanning $17\,$mm in length.
arXiv Detail & Related papers (2024-02-19T10:57:37Z) - Deterministic identification over channels with finite output: a dimensional perspective on superlinear rates [49.126395046088014]
We consider the problem in its generality for memoryless channels with finite output, but arbitrary input alphabets. Our main findings are that the maximum length of messages thus identifiable scales superlinearly as $R\,n\log n$ with the block length $n$. We show that it is sufficient to ensure pairwise reliable distinguishability of the output distributions to construct a DI code.
arXiv Detail & Related papers (2024-02-14T11:59:30Z) - A2Q: Accumulator-Aware Quantization with Guaranteed Overflow Avoidance [49.1574468325115]
Accumulator-aware quantization (A2Q) is a novel weight quantization method designed to train quantized neural networks (QNNs) to avoid overflow during inference.
A2Q introduces a unique formulation inspired by weight normalization that constrains the L1-norm of model weights according to accumulator bit width bounds.
We show A2Q can train QNNs for low-precision accumulators while maintaining model accuracy competitive with a floating-point baseline.
arXiv Detail & Related papers (2023-08-25T17:28:58Z) - End-to-end resource analysis for quantum interior point methods and portfolio optimization [63.4863637315163]
We provide a complete quantum circuit-level description of the algorithm from problem input to problem output.
We report the number of logical qubits and the quantity/depth of non-Clifford T-gates needed to run the algorithm.
arXiv Detail & Related papers (2022-11-22T18:54:48Z) - Large-scale and High-speed Privacy Amplification for FPGA-based Quantum Key Distribution [0.0]
FPGA-based quantum key distribution (QKD) systems are an important trend in QKD development.
This paper designs a new PA scheme for FPGA-based QKD that combines multilinear modular hash-modular arithmetic hash (MMH-MH) PA with the number theoretic transform (NTT) algorithm (a minimal NTT sketch appears after this list).
arXiv Detail & Related papers (2021-07-02T12:35:55Z) - An efficient hybrid hash based privacy amplification algorithm for quantum key distribution [0.0]
A novel privacy amplification algorithm is proposed in this paper.
It is implemented on a mobile CPU platform instead of a desktop CPU or a server CPU.
arXiv Detail & Related papers (2021-05-28T08:57:06Z) - Mix and Match: A Novel FPGA-Centric Deep Neural Network Quantization Framework [39.981546951333556]
This paper focuses on weight quantization, a hardware-friendly model compression approach.
It is motivated by (1) the distribution of the weights in different rows is not the same; and (2) the potential for better utilization of FPGA hardware resources.
arXiv Detail & Related papers (2020-12-08T06:25:07Z)
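The 2021 MMH-MH PA entry above mentions the number theoretic transform (NTT), which is the usual trick for making hashing at such block sizes tractable: it turns the $O(n^2)$ hash matrix-vector product into $O(n \log n)$ modular convolutions. The sketch below is a textbook radix-2 NTT in Python over the common prime $p = 998244353$ with primitive root $g = 3$; the modulus, word size, and pipelined structure of an actual FPGA implementation would differ, so treat this as an illustration of the primitive, not any paper's implementation.

```python
def ntt(a: list[int], invert: bool = False, p: int = 998244353, g: int = 3) -> list[int]:
    """Iterative radix-2 number theoretic transform over Z_p (len(a) must be a power of two)."""
    a = list(a)  # work on a copy
    n = len(a)
    # Bit-reversal permutation.
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    # Cooley-Tukey butterflies with modular roots of unity.
    length = 2
    while length <= n:
        w = pow(g, (p - 1) // length, p)       # primitive length-th root of unity mod p
        if invert:
            w = pow(w, p - 2, p)               # inverse root for the inverse transform
        half = length // 2
        for start in range(0, n, length):
            wn = 1
            for k in range(start, start + half):
                u = a[k]
                v = a[k + half] * wn % p
                a[k] = (u + v) % p
                a[k + half] = (u - v) % p
                wn = wn * w % p
        length <<= 1
    if invert:
        inv_n = pow(n, p - 2, p)               # divide by n via Fermat inverse
        a = [x * inv_n % p for x in a]
    return a

# Usage: multiply (1 + 2t + 3t^2)(4 + 5t) mod p via pointwise products in the NTT domain.
x = [1, 2, 3, 0, 0, 0, 0, 0]
y = [4, 5, 0, 0, 0, 0, 0, 0]
X, Y = ntt(x), ntt(y)
prod = ntt([a * b % 998244353 for a, b in zip(X, Y)], invert=True)
# prod == [4, 13, 22, 15, 0, 0, 0, 0]
```

The same convolution identity is what accelerates large Toeplitz or modular-hash products: hash the key by transforming once, multiplying pointwise, and transforming back.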
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.