Hybrid HMM Decoder For Convolutional Codes By Joint Trellis-Like
Structure and Channel Prior
- URL: http://arxiv.org/abs/2210.14749v1
- Date: Wed, 26 Oct 2022 14:30:17 GMT
- Title: Hybrid HMM Decoder For Convolutional Codes By Joint Trellis-Like
Structure and Channel Prior
- Authors: Haoyu Li, Xuan Wang, Tong Liu, Dingyi Fang, Baoying Liu
- Abstract summary: We propose the use of a Hidden Markov Model (HMM) for the reconstruction of convolutional codes and decoding by the Viterbi algorithm.
Our method offers greater error-correction potential than the standard method because the model parameters contain channel state information (CSI).
In the multipath channel, the hybrid HMM decoder can achieve a performance gain of 4.7 dB and 2 dB when using hard-decision and soft-decision decoding, respectively.
- Score: 17.239378478086163
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The anti-interference capability of wireless links is a physical layer
problem for edge computing. Although convolutional codes have inherent error
correction potential due to the redundancy introduced in the data, the
performance of the convolutional code is drastically degraded due to multipath
effects on the channel. In this paper, we propose the use of a Hidden Markov
Model (HMM) for the reconstruction of convolutional codes and decoding by the
Viterbi algorithm. Furthermore, to implement soft-decision decoding, the
observation model of the HMM is replaced by a Gaussian mixture model (GMM). Our
method offers greater error-correction potential than the standard method
because the model parameters contain channel state information (CSI). We
evaluated the
performance of the method compared to standard Viterbi decoding by numerical
simulation. In the multipath channel, the hybrid HMM decoder can achieve a
performance gain of 4.7 dB and 2 dB when using hard-decision and soft-decision
decoding, respectively. The HMM decoder also achieves significant performance
gains for the RSC code, suggesting that the method could be extended to turbo
codes.
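To make the baseline concrete, the sketch below implements standard hard-decision Viterbi decoding of a rate-1/2 convolutional code (constraint length 3, generators 7 and 5 in octal). This is the textbook decoder the paper compares against, not the proposed hybrid HMM decoder; the code layout, helper names, and the choice of generators are our own illustrative assumptions.

```python
# Textbook hard-decision Viterbi decoding for a rate-1/2 convolutional code.
# Illustrative baseline only; generators (7, 5 octal) and names are assumptions.

G = [0b111, 0b101]        # generator polynomials (octal 7, 5)
K = 3                     # constraint length
N_STATES = 1 << (K - 1)   # trellis states = previous K-1 input bits

def conv_encode(bits):
    """Encode a bit list, appending K-1 zero tail bits to flush the register."""
    state, out = 0, []
    for b in bits + [0] * (K - 1):
        reg = (b << (K - 1)) | state          # newest bit at the MSB
        for g in G:
            out.append(bin(reg & g).count("1") % 2)
        state = reg >> 1
    return out

def viterbi_decode(received):
    """Hard-decision Viterbi: minimize Hamming distance over trellis paths."""
    n_steps = len(received) // len(G)
    INF = float("inf")
    metric = [0.0] + [INF] * (N_STATES - 1)   # start in the all-zero state
    paths = [[] for _ in range(N_STATES)]
    for t in range(n_steps):
        r = received[t * len(G):(t + 1) * len(G)]
        new_metric = [INF] * N_STATES
        new_paths = [None] * N_STATES
        for state in range(N_STATES):
            if metric[state] == INF:
                continue
            for b in (0, 1):                  # hypothesize the next input bit
                reg = (b << (K - 1)) | state
                expect = [bin(reg & g).count("1") % 2 for g in G]
                dist = sum(x != y for x, y in zip(r, expect))
                nxt = reg >> 1
                cand = metric[state] + dist
                if cand < new_metric[nxt]:    # keep the survivor path
                    new_metric[nxt] = cand
                    new_paths[nxt] = paths[state] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(N_STATES), key=lambda s: metric[s])
    return paths[best][:n_steps - (K - 1)]    # drop the tail bits

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = conv_encode(msg)
coded[3] ^= 1                                 # simulate one channel bit error
assert viterbi_decode(coded) == msg
```

The hybrid approach in the paper would replace the fixed branch metrics above with transition and emission probabilities learned by an HMM (and, for soft decision, GMM observation densities), so that the metric reflects channel state rather than pure Hamming distance.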
Related papers
- Error Correction Code Transformer: From Non-Unified to Unified [20.902351179839282]
Traditional decoders were typically designed as fixed hardware circuits tailored to specific decoding algorithms.
This paper proposes a unified, code-agnostic Transformer-based decoding architecture capable of handling multiple linear block codes.
arXiv Detail & Related papers (2024-10-04T12:30:42Z)
- Learning Linear Block Error Correction Codes [62.25533750469467]
We propose for the first time a unified encoder-decoder training of binary linear block codes.
We also propose a novel Transformer model in which the self-attention masking is performed in a differentiable fashion for the efficient backpropagation of the code gradient.
arXiv Detail & Related papers (2024-05-07T06:47:12Z)
- Coding for Gaussian Two-Way Channels: Linear and Learning-Based Approaches [28.98777190628006]
We propose two different two-way coding strategies: linear coding and learning-based coding.
For learning-based coding, we introduce a novel recurrent neural network (RNN)-based coding architecture.
Our two-way coding methodologies outperform conventional channel coding schemes significantly in sum-error performance.
arXiv Detail & Related papers (2023-12-31T12:40:18Z)
- Testing the Accuracy of Surface Code Decoders [55.616364225463066]
Large-scale, fault-tolerant quantum computations will be enabled by quantum error-correcting codes (QECC).
This work presents the first systematic technique to test the accuracy and effectiveness of different QECC decoding schemes.
arXiv Detail & Related papers (2023-11-21T10:22:08Z)
- Joint Channel Estimation and Feedback with Masked Token Transformers in Massive MIMO Systems [74.52117784544758]
This paper proposes an encoder-decoder based network that unveils the intrinsic frequency-domain correlation within the CSI matrix.
The entire encoder-decoder network is utilized for channel compression.
Our method outperforms state-of-the-art channel estimation and feedback techniques in joint tasks.
arXiv Detail & Related papers (2023-06-08T06:15:17Z)
- Denoising Diffusion Error Correction Codes [92.10654749898927]
Recently, neural decoders have demonstrated their advantage over classical decoding techniques.
Recent state-of-the-art neural decoders suffer from high complexity and lack the important iterative scheme characteristic of many legacy decoders.
We propose to employ denoising diffusion models for the soft decoding of linear codes at arbitrary block lengths.
arXiv Detail & Related papers (2022-09-16T11:00:50Z)
- Graph Neural Networks for Channel Decoding [71.15576353630667]
We showcase competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes.
The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph.
We benchmark our proposed decoder against state-of-the-art in conventional channel decoding as well as against recent deep learning-based results.
arXiv Detail & Related papers (2022-07-29T15:29:18Z)
- Performance of teleportation-based error correction circuits for bosonic codes with noisy measurements [58.720142291102135]
We analyze the error-correction capabilities of rotation-symmetric codes using a teleportation-based error-correction circuit.
We find that with the currently achievable measurement efficiencies in microwave optics, bosonic rotation codes undergo a substantial decrease in their break-even potential.
arXiv Detail & Related papers (2021-08-02T16:12:13Z)
- Deep-Learning Based Blind Recognition of Channel Code Parameters over Candidate Sets under AWGN and Multi-Path Fading Conditions [13.202747831999414]
We consider the problem of recovering channel code parameters over a candidate set by merely analyzing the received encoded signals.
We propose a deep learning-based solution that is capable of identifying the channel code parameters for any coding scheme.
arXiv Detail & Related papers (2020-09-16T16:10:39Z)
- ADMM-based Decoder for Binary Linear Codes Aided by Deep Learning [40.25456611849273]
This work presents a deep neural network aided decoding algorithm for binary linear codes.
Based on the concept of deep unfolding, we design a decoding network by unfolding the alternating direction method of multipliers (ADMM)-penalized decoder.
Numerical results show that the resulting DL-aided decoders outperform the original ADMM-penalized decoder.
arXiv Detail & Related papers (2020-02-14T03:32:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.