Friendly Attacks to Improve Channel Coding Reliability
- URL: http://arxiv.org/abs/2401.14184v1
- Date: Thu, 25 Jan 2024 13:46:21 GMT
- Title: Friendly Attacks to Improve Channel Coding Reliability
- Authors: Anastasiia Kurmukova and Deniz Gunduz
- Abstract summary: The "friendly attack" aims to enhance the performance of error correction channel codes.
Inspired by the concept of adversarial attacks, our method leverages the idea of introducing slight perturbations to the neural network input.
We demonstrate that the proposed friendly attack method can improve the reliability across different channels, modulations, codes, and decoders.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper introduces a novel approach called "friendly attack" aimed at
enhancing the performance of error correction channel codes. Inspired by the
concept of adversarial attacks, our method leverages the idea of introducing
slight perturbations to the neural network input, resulting in a substantial
impact on the network's performance. By introducing small perturbations to
fixed-point modulated codewords before transmission, we effectively improve the
decoder's performance without violating the input power constraint. The
perturbation design is accomplished by a modified iterative fast gradient
method. This study investigates various decoder architectures suitable for
computing gradients to obtain the desired perturbations. Specifically, we
consider belief propagation (BP) for LDPC codes; the error correcting code
transformer, BP and neural BP (NBP) for polar codes, and neural BCJR for
convolutional codes. We demonstrate that the proposed friendly attack method
can improve the reliability across different channels, modulations, codes, and
decoders. This method allows us to increase the reliability of communication
with a legacy receiver by simply modifying the transmitted codeword
appropriately.
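To make the method concrete, here is a minimal sketch of the modified iterative fast-gradient loop, assuming a differentiable PyTorch decoder that stands in for the BP, NBP, ECC transformer, or neural BCJR decoders named above; the function name `friendly_perturbation`, the step size `alpha`, and the averaging over `n_noise` channel draws are illustrative assumptions, not the authors' exact implementation.

```python
import torch

def friendly_perturbation(x, bits, decoder, snr_db=2.0, steps=10,
                          alpha=0.05, n_noise=32):
    """Perturb a BPSK-modulated codeword x (entries +/-1) so that a *fixed*
    decoder recovers `bits` more reliably, while keeping the transmit power
    ||x'||^2 at the original budget n."""
    n = x.numel()
    sigma = 10.0 ** (-snr_db / 20.0)                 # AWGN std for unit signal power
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        noise = sigma * torch.randn(n_noise, n)      # average over channel draws
        logits = decoder(x_adv.unsqueeze(0) + noise)  # (n_noise, n) bit logits
        loss = torch.nn.functional.binary_cross_entropy_with_logits(
            logits, bits.float().expand_as(logits))
        loss.backward()
        with torch.no_grad():
            # Descent step: a "friendly" (loss-decreasing) fast-gradient move.
            x_adv = x_adv - alpha * x_adv.grad.sign()
            # Project back onto the power constraint ||x'||^2 = n.
            x_adv = x_adv * (n ** 0.5 / x_adv.norm())
    return x_adv.detach()
```

The receiver is untouched: only the transmitted codeword is modified, so a legacy decoder benefits without any change on its side.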
Related papers
- Accelerating Error Correction Code Transformers [56.75773430667148]
We introduce a novel acceleration method for transformer-based decoders.
We achieve a 90% compression ratio and reduce the energy consumption of arithmetic operations by a factor of at least 224 on modern hardware.
arXiv Detail & Related papers (2024-10-08T11:07:55Z)
- Error Correction Code Transformer: From Non-Unified to Unified [20.902351179839282]
Traditional decoders were typically designed as fixed hardware circuits tailored to specific decoding algorithms.
This paper proposes a unified, code-agnostic Transformer-based decoding architecture capable of handling multiple linear block codes.
arXiv Detail & Related papers (2024-10-04T12:30:42Z)
- Factor Graph Optimization of Error-Correcting Codes for Belief Propagation Decoding [62.25533750469467]
Low-Density Parity-Check (LDPC) codes possess several advantages over other families of codes.
The proposed approach is shown to improve on the decoding performance of existing popular codes by orders of magnitude.
arXiv Detail & Related papers (2024-06-09T12:08:56Z)
- Learning Linear Block Error Correction Codes [62.25533750469467]
We propose, for the first time, a unified encoder-decoder training of binary linear block codes.
We also propose a novel Transformer model in which the self-attention masking is performed in a differentiable fashion for efficient backpropagation of the code gradient (a soft-masking sketch follows this entry).
arXiv Detail & Related papers (2024-05-07T06:47:12Z)
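The differentiable-masking idea above can be illustrated with a soft attention mask: instead of adding -inf at masked positions, a learnable bias enters the attention scores, so the mask itself receives gradients. The module below is a hypothetical single-head sketch; `SoftMaskedSelfAttention` and its initialization are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SoftMaskedSelfAttention(nn.Module):
    """Single-head self-attention with a learnable, differentiable mask bias
    in place of a hard -inf mask (hypothetical sketch)."""
    def __init__(self, n_tokens, d_model):
        super().__init__()
        self.qkv = nn.Linear(d_model, 3 * d_model)
        # Learnable mask logits; these could be initialized from the code's
        # parity-check connectivity (an assumption, not the paper's scheme).
        self.mask_logits = nn.Parameter(torch.zeros(n_tokens, n_tokens))
        self.scale = d_model ** -0.5

    def forward(self, x):                       # x: (batch, n_tokens, d_model)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = q @ k.transpose(-2, -1) * self.scale
        # Soft mask: the log-sigmoid bias is near 0 for "keep" and very
        # negative for "drop", yet stays differentiable w.r.t. mask_logits.
        scores = scores + torch.nn.functional.logsigmoid(self.mask_logits)
        return torch.softmax(scores, dim=-1) @ v
```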
- Graph Neural Networks for Enhanced Decoding of Quantum LDPC Codes [6.175503577352742]
We propose a differentiable iterative decoder for quantum low-density parity-check (LDPC) codes.
The proposed algorithm is composed of classical belief propagation (BP) decoding stages and intermediate graph neural network (GNN) layers (a simplified classical sketch follows this entry).
arXiv Detail & Related papers (2023-10-26T19:56:25Z)
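A simplified sketch of the BP-stage/GNN-layer interleaving: a few dense sum-product BP iterations alternate with a small learned refinement over the Tanner graph. Everything here, the function names, layer sizes, and the classical rather than quantum/stabilizer setting, is an illustrative assumption.

```python
import torch
import torch.nn as nn

def sum_product_bp(llr, H, iters=2):
    """A few dense sum-product BP iterations on the Tanner graph of the
    m x n float 0/1 parity-check matrix H."""
    m, n = H.shape
    v2c = llr.expand(m, n) * H                       # variable-to-check messages
    post = llr
    for _ in range(iters):
        # Zero-LLR edges would need an epsilon guard; ignored in this sketch.
        t = torch.where(H.bool(), torch.tanh(v2c / 2).clamp(-0.999, 0.999),
                        torch.ones_like(v2c))
        excl = t.prod(dim=1, keepdim=True) / t       # leave-one-out product
        c2v = 2 * torch.atanh(excl.clamp(-0.999, 0.999)) * H
        post = llr + c2v.sum(dim=0)                  # posterior LLRs
        v2c = (post.expand(m, n) - c2v) * H          # extrinsic update
    return post

class GNNRefine(nn.Module):
    """Toy GNN layer: a residual LLR correction computed from each variable
    node and its Tanner-graph neighbours (stand-in for the paper's layers)."""
    def __init__(self, hidden=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
    def forward(self, post, A):
        agg = (A @ post) / A.sum(dim=1).clamp(min=1.0)   # mean over neighbours
        feats = torch.stack([post, agg], dim=-1)
        return post + self.mlp(feats).squeeze(-1)

def hybrid_decode(llr, H, gnn, rounds=3):
    n = H.shape[1]
    # Variable-node adjacency: variables sharing a check, without self-loops.
    A = (((H.t() @ H) > 0).float() - torch.eye(n)).clamp(min=0.0)
    post = llr
    for _ in range(rounds):
        post = sum_product_bp(post, H, iters=2)      # classical BP stage
        post = gnn(post, A)                          # learned GNN refinement
    return post                                      # sign() gives hard bits
```

Training would backpropagate a bit-wise loss through both the BP stages and the GNN layers, which is what makes the overall decoder differentiable.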
- Denoising Diffusion Error Correction Codes [92.10654749898927]
Recently, neural decoders have demonstrated their advantage over classical decoding techniques.
Recent state-of-the-art neural decoders suffer from high complexity and lack the important iterative scheme characteristic of many legacy decoders.
We propose to employ denoising diffusion models for the soft decoding of linear codes at arbitrary block lengths.
arXiv Detail & Related papers (2022-09-16T11:00:50Z)
- Graph Neural Networks for Channel Decoding [71.15576353630667]
We showcase competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes.
The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph.
We benchmark our proposed decoder against state-of-the-art in conventional channel decoding as well as against recent deep learning-based results.
arXiv Detail & Related papers (2022-07-29T15:29:18Z)
- Error Correction Code Transformer [92.10654749898927]
We propose, for the first time, extending the Transformer architecture to the soft decoding of linear codes at arbitrary block lengths.
Each channel output is embedded into a high-dimensional representation so that the information carried by each bit can be processed separately (a minimal sketch follows this entry).
The proposed approach demonstrates the extreme power and flexibility of Transformers and outperforms existing state-of-the-art neural decoders by large margins at a fraction of their time complexity.
arXiv Detail & Related papers (2022-03-27T15:25:58Z)
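A hypothetical minimal version of the per-bit embedding idea above: each scalar channel output becomes one token, is lifted to a d_model-dimensional vector, and is mapped back to a bit logit. The layer sizes are illustrative, and the real ECC transformer additionally uses syndrome information and code-aware attention masking, which this sketch omits.

```python
import torch
import torch.nn as nn

class TransformerBitDecoder(nn.Module):
    """Minimal decoder in the spirit of the entry above (hypothetical)."""
    def __init__(self, n_bits, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.lift = nn.Linear(1, d_model)              # scalar -> token
        self.pos = nn.Parameter(torch.zeros(n_bits, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)              # token -> bit logit

    def forward(self, y):                              # y: (batch, n_bits)
        tok = self.lift(y.unsqueeze(-1)) + self.pos    # (batch, n_bits, d_model)
        return self.head(self.encoder(tok)).squeeze(-1)
```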
- Infomax Neural Joint Source-Channel Coding via Adversarial Bit Flip [41.28049430114734]
We propose a novel regularization method called Infomax Adversarial-Bit-Flip (IABF) to improve the stability and robustness of the neural joint source-channel coding scheme.
IABF achieves state-of-the-art performance on both compression and error correction benchmarks, outperforming the baselines by a significant margin.
arXiv Detail & Related papers (2020-04-03T10:00:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.