Efficient Transformer-based Decoder for Varshamov-Tenengolts Codes
- URL: http://arxiv.org/abs/2502.21060v1
- Date: Fri, 28 Feb 2025 13:59:14 GMT
- Title: Efficient Transformer-based Decoder for Varshamov-Tenengolts Codes
- Authors: Yali Wei, Alan J. X. Guo, Zihui Yan, Yufan Dai
- Abstract summary: Varshamov-Tenengolts (VT) codes, primarily designed for single-error correction, have emerged as a central research focus. While existing decoding methods achieve high accuracy in correcting a single error, they often fail to correct multiple IDS errors. In this work, we observe that VT codes retain some capability for addressing multiple errors by introducing a transformer-based VT decoder.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, the rise of DNA data storage technology has brought significant attention to the challenge of correcting insertion, deletion, and substitution (IDS) errors. Among various coding methods for IDS correction, Varshamov-Tenengolts (VT) codes, primarily designed for single-error correction, have emerged as a central research focus. While existing decoding methods achieve high accuracy in correcting a single error, they often fail to correct multiple IDS errors. In this work, we observe that VT codes retain some capability for addressing multiple errors by introducing a transformer-based VT decoder (TVTD) along with symbol- and statistic-based codeword embedding. Experimental results demonstrate that the proposed TVTD achieves perfect correction of a single error. Furthermore, when decoding multiple errors across various codeword lengths, the bit error rate and frame error rate are significantly improved compared to existing hard decision and soft-in soft-out algorithms. Additionally, through model architecture optimization, the proposed method reduces time consumption by an order of magnitude compared to other soft decoders.
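The single-error guarantee discussed in the abstract rests on Levenshtein's classical decoding rule for VT codes. As an illustrative sketch only (this is the textbook algorithm, not the paper's TVTD), the following Python code assumes the standard binary definition VT_a(n) = { x : Σ i·x_i ≡ a (mod n+1), 1-indexed } and corrects a single deletion:

```python
def vt_syndrome(x):
    """VT syndrome of binary word x: sum of i * x_i (1-indexed) mod (n + 1)."""
    n = len(x)
    return sum((i + 1) * b for i, b in enumerate(x)) % (n + 1)

def correct_single_deletion(y, n, a):
    """Levenshtein's decoder: recover x in VT_a(n) from y = x with one bit deleted."""
    assert len(y) == n - 1
    w = sum(y)  # Hamming weight of the received word
    # Syndrome deficit caused by the deletion; lies in [0, n] modulo (n + 1).
    d = (a - sum((i + 1) * b for i, b in enumerate(y))) % (n + 1)
    if d <= w:
        # A 0 was deleted; reinsert it with exactly d ones to its right.
        i, ones = len(y), 0
        while ones < d:
            i -= 1
            ones += y[i]
        return y[:i] + [0] + y[i:]
    # A 1 was deleted; reinsert it with exactly d - w - 1 zeros to its left.
    i, zeros = 0, 0
    while zeros < d - w - 1:
        zeros += 1 - y[i]
        i += 1
    return y[:i] + [1] + y[i:]
```

For example, 10110 lies in VT_2(5), and deleting any single bit from it is recovered by the rule above. This single-deletion guarantee is exactly what the TVTD aims to go beyond when multiple IDS errors occur.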
Related papers
- Fault-tolerant correction-ready encoding of the [[7,1,3]] Steane code on a 2D grid [0.0]
We investigate various correction-ready encoding methods to fault-tolerantly prepare the zero-logical state of the Steane code on a 2D grid.
We show that parity-check encoding with a few Flag-Bridge qubits outperforms verification-based encoding by achieving lower error rates.
Surprisingly, compared to the resource-intensive Steane error correction, this low-overhead method still offers a practical advantage in noisy settings.
arXiv Detail & Related papers (2025-04-01T18:00:30Z) - Decoding for Punctured Convolutional and Turbo Codes: A Deep Learning Solution for Protocols Compliance [22.85778198575678]
This paper presents a unified Long Short-Term Memory (LSTM)-based decoding architecture. The proposed method unifies punctured convolutional and Turbo codes. A puncture embedding mechanism integrates puncturing patterns directly into the network, enabling seamless adaptation to varying code rates.
arXiv Detail & Related papers (2025-02-21T14:00:14Z) - Threshold Selection for Iterative Decoding of $(v,w)$-regular Binary Codes [84.0257274213152]
Iterative bit flipping decoders are an efficient choice for sparse $(v,w)$-regular codes. We propose concrete criteria for threshold determination, backed by a closed form model.
arXiv Detail & Related papers (2025-01-23T17:38:22Z) - Accelerating Error Correction Code Transformers [56.75773430667148]
We introduce a novel acceleration method for transformer-based decoders.
We achieve a 90% compression ratio and reduce arithmetic operation energy consumption by at least 224 times on modern hardware.
arXiv Detail & Related papers (2024-10-08T11:07:55Z) - Error Correction Code Transformer: From Non-Unified to Unified [20.902351179839282]
Traditional decoders were typically designed as fixed hardware circuits tailored to specific decoding algorithms.
This paper proposes a unified, code-agnostic Transformer-based decoding architecture capable of handling multiple linear block codes.
arXiv Detail & Related papers (2024-10-04T12:30:42Z) - Breadth-first graph traversal union-find decoder [0.0]
We develop variants of the union-find decoder that simplify its implementation and provide potential decoding speed advantages.
We show how these methods can be adapted to decode non-topological quantum low-density-parity-check codes.
arXiv Detail & Related papers (2024-07-22T18:54:45Z) - Learning Linear Block Error Correction Codes [62.25533750469467]
We propose for the first time a unified encoder-decoder training of binary linear block codes.
We also propose a novel Transformer model in which the self-attention masking is performed in a differentiable fashion for the efficient backpropagation of the code gradient.
arXiv Detail & Related papers (2024-05-07T06:47:12Z) - Testing the Accuracy of Surface Code Decoders [55.616364225463066]
Large-scale, fault-tolerant quantum computations will be enabled by quantum error-correcting codes (QECC).
This work presents the first systematic technique to test the accuracy and effectiveness of different QECC decoding schemes.
arXiv Detail & Related papers (2023-11-21T10:22:08Z) - Denoising Diffusion Error Correction Codes [92.10654749898927]
Recently, neural decoders have demonstrated their advantage over classical decoding techniques.
Recent state-of-the-art neural decoders suffer from high complexity and lack the important iterative scheme characteristic of many legacy decoders.
We propose to employ denoising diffusion models for the soft decoding of linear codes at arbitrary block lengths.
arXiv Detail & Related papers (2022-09-16T11:00:50Z) - Error Correction Code Transformer [92.10654749898927]
We propose to extend for the first time the Transformer architecture to the soft decoding of linear codes at arbitrary block lengths.
We encode each channel's output dimension to high dimension for better representation of the bits information to be processed separately.
The proposed approach demonstrates the extreme power and flexibility of Transformers and outperforms existing state-of-the-art neural decoders by large margins at a fraction of their time complexity.
arXiv Detail & Related papers (2022-03-27T15:25:58Z) - FAID Diversity via Neural Networks [23.394836086114413]
We propose a new approach to design the decoder diversity of finite alphabet iterative decoders (FAIDs) for Low-Density Parity Check (LDPC) codes.
The proposed decoder diversity is achieved by training a recurrent quantized neural network (RQNN) to learn/design FAIDs.
arXiv Detail & Related papers (2021-05-10T05:14:42Z)