Bit-flipping Decoder Failure Rate Estimation for (v,w)-regular Codes
- URL: http://arxiv.org/abs/2401.16919v2
- Date: Wed, 7 Feb 2024 17:08:46 GMT
- Title: Bit-flipping Decoder Failure Rate Estimation for (v,w)-regular Codes
- Authors: Alessandro Annechini, Alessandro Barenghi, Gerardo Pelosi
- Abstract summary: We propose a new technique to provide accurate estimates of the DFR of a two-iteration (parallel) bit-flipping decoder.
We validate our results, providing comparisons of the modeled and simulated weight of the syndrome, the incorrectly guessed error-bit distribution at the end of the first iteration, and the two-iteration Decoding Failure Rate (DFR).
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Providing closed-form estimates of the decoding failure rate of iterative decoders for low- and moderate-density parity-check codes has attracted significant interest in the research community over the years. This interest has risen recently due to the use of iterative decoders in post-quantum cryptosystems, where the desired decoding failure rates are impossible to estimate via Monte Carlo simulations. In this work, we propose a new technique to provide accurate estimates of the DFR of a two-iteration (parallel) bit-flipping decoder, which is also employable for cryptographic purposes. In doing so, we successfully tackle the estimation of the bit-flipping probabilities at the second decoder iteration, and provide a fitting estimate for the syndrome weight distribution at the first iteration. We numerically validate our results, providing comparisons of the modeled and simulated weight of the syndrome, the incorrectly guessed error-bit distribution at the end of the first iteration, and two-iteration Decoding Failure Rates (DFR), both in the floor and waterfall regimes for simulatable codes. Finally, we apply our method to estimate the DFR of LEDAcrypt parameters, showing improvements by factors larger than $2^{70}$ (for NIST category $1$) with respect to previous estimation techniques. This allows for a $\approx 20$% shortening in public key and ciphertext sizes, at no security loss, making the smallest ciphertext for NIST category $1$ only $6$% larger than that of BIKE. We note that the analyzed two-iteration decoder is applicable in BIKE, where swapping it with the current black-gray decoder (and adjusting the parameters) would provide strong IND-CCA$2$ guarantees.
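To make the abstract's object of study concrete, the following is a minimal sketch of a generic two-iteration parallel bit-flipping decoder for a (v,w)-regular parity-check matrix. It is a textbook-style illustration, not the paper's exact decoder: the threshold values, the toy circulant code, and all function and variable names (`parallel_bit_flip_decode`, `upc`, `thresholds`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def parallel_bit_flip_decode(H, s, thresholds):
    """Generic parallel bit-flipping decoder sketch (one pass per threshold).

    H          : binary parity-check matrix, shape (r, n), rows = checks
    s          : syndrome of the error to correct, shape (r,)
    thresholds : per-iteration flip thresholds (illustrative values)

    Returns (e_hat, success): the estimated error vector and whether the
    residual syndrome reached zero.
    """
    e_hat = np.zeros(H.shape[1], dtype=np.uint8)
    syndrome = s.copy()
    for t in thresholds:
        # Unsatisfied parity-check counter for each bit: how many currently
        # unsatisfied checks involve that bit.
        upc = H.T @ syndrome
        # Flip, in parallel, every bit whose counter reaches the threshold.
        e_hat ^= (upc >= t).astype(np.uint8)
        # Recompute the residual syndrome for the updated error estimate.
        syndrome = (s + H @ e_hat) % 2
        if not syndrome.any():
            break
    return e_hat, not bool(syndrome.any())

# Toy (3,3)-regular cyclic code: a 15x15 circulant of a weight-3 first row.
h0 = np.zeros(15, dtype=np.uint8)
h0[[0, 1, 4]] = 1
H = np.array([np.roll(h0, i) for i in range(15)])

e_true = np.zeros(15, dtype=np.uint8)
e_true[3] = 1                      # single-bit error
s = (H @ e_true) % 2               # its syndrome
e_hat, ok = parallel_bit_flip_decode(H, s, thresholds=[2, 2])
```

The decoder is "parallel" because all bits above the threshold are flipped simultaneously before the syndrome is recomputed; the paper's contribution is estimating the failure probability of exactly two such passes, which this sketch only mimics structurally.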
Related papers
- Reduction from sparse LPN to LPN, Dual Attack 3.0 [1.9106435311144374]
A new algorithm called RLPN-decoding, which relies on a completely different approach, was introduced.
It significantly outperforms ISD and RLPN for code rates smaller than 0.42.
This algorithm can be viewed as the code-based cryptography cousin of recent dual attacks in lattice-based cryptography.
arXiv Detail & Related papers (2023-12-01T17:35:29Z) - Fault-Tolerant Quantum Memory using Low-Depth Random Circuit Codes [0.24578723416255752]
Low-depth random circuit codes possess many desirable properties for quantum error correction.
We design a fault-tolerant distillation protocol for preparing encoded states of one-dimensional random circuit codes.
We show through numerical simulations that our protocol can correct erasure errors up to an error rate of $2\%$.
arXiv Detail & Related papers (2023-11-29T19:00:00Z) - Testing the Accuracy of Surface Code Decoders [55.616364225463066]
Large-scale, fault-tolerant quantum computations will be enabled by quantum error-correcting codes (QECCs).
This work presents the first systematic technique to test the accuracy and effectiveness of different QECC decoding schemes.
arXiv Detail & Related papers (2023-11-21T10:22:08Z) - Machine Learning-Aided Efficient Decoding of Reed-Muller Subcodes [59.55193427277134]
Reed-Muller (RM) codes achieve the capacity of general binary-input memoryless symmetric channels.
RM codes only admit limited sets of rates.
Efficient decoders are available for RM codes at finite lengths.
arXiv Detail & Related papers (2023-01-16T04:11:14Z) - Biased Gottesman-Kitaev-Preskill repetition code [0.0]
Continuous-variable quantum computing architectures based upon the Gottesman-Kitaev-Preskill (GKP) encoding have emerged as a promising candidate.
We study the code-capacity behaviour of a rectangular-lattice GKP encoding with a repetition code under an isotropic Gaussian displacement channel.
arXiv Detail & Related papers (2022-12-21T22:56:05Z) - Efficient Nearest Neighbor Search for Cross-Encoder Models using Matrix Factorization [60.91600465922932]
We present an approach that avoids the use of a dual-encoder for retrieval, relying solely on the cross-encoder.
Our approach provides test-time recall-vs-computational cost trade-offs superior to the current widely-used methods.
arXiv Detail & Related papers (2022-10-23T00:32:04Z) - Revisiting Code Search in a Two-Stage Paradigm [67.02322603435628]
TOSS is a two-stage fusion code search framework.
It first uses IR-based and bi-encoder models to efficiently recall a small number of top-k code candidates.
It then uses fine-grained cross-encoders for finer ranking.
arXiv Detail & Related papers (2022-08-24T02:34:27Z) - Correcting spanning errors with a fractal code [7.6146285961466]
We propose an efficient decoder for the 'Fibonacci code', a two-dimensional classical code that mimics the fractal nature of the cubic code.
We perform numerical experiments that show our decoder is robust to one-dimensional, correlated errors.
arXiv Detail & Related papers (2020-02-26T19:00:06Z) - Pruning Neural Belief Propagation Decoders [77.237958592189]
We introduce a method to tailor an overcomplete parity-check matrix to (neural) BP decoding using machine learning.
We achieve performance within 0.27 dB and 1.5 dB of the ML performance while reducing the complexity of the decoder.
arXiv Detail & Related papers (2020-01-21T12:05:46Z) - Deep Q-learning decoder for depolarizing noise on the toric code [0.0]
We present an AI-based decoding agent for quantum error correction of depolarizing noise on the toric code.
The agent is trained using deep reinforcement learning (DRL), where an artificial neural network encodes the state-action Q-values of error-correcting $X$, $Y$, and $Z$ Pauli operations.
We argue that the DRL-type decoder provides a promising framework for future practical error correction of topological codes.
arXiv Detail & Related papers (2019-12-30T13:27:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.