A Learning-Based Approach to Address Complexity-Reliability Tradeoff in OS Decoders
- URL: http://arxiv.org/abs/2103.03860v1
- Date: Fri, 5 Mar 2021 18:22:20 GMT
- Title: A Learning-Based Approach to Address Complexity-Reliability Tradeoff in OS Decoders
- Authors: Baptiste Cavarec, Hasan Basri Celebi, Mats Bengtsson, Mikael Skoglund
- Abstract summary: We show that using artificial neural networks to predict the required order of an ordered statistics based decoder helps in reducing the average complexity and hence the latency of the decoder.
- Score: 32.35297363281744
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we study the tradeoffs between complexity and reliability for
decoding large linear block codes. We show that using artificial neural
networks to predict the required order of an ordered statistics based decoder
helps in reducing the average complexity and hence the latency of the decoder.
We numerically validate the approach through Monte Carlo simulations.
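The idea can be illustrated with a small sketch (not the authors' exact architecture): a lightweight classifier looks at per-bit reliability features of the received word and predicts how high an OSD reprocessing order is needed, so that well-received words are decoded cheaply and only difficult ones trigger high-order reprocessing. The feature choice, network size, and weights below are placeholder assumptions made only for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    max_order = 3                               # highest OSD reprocessing order considered

    def features(llr):
        # Reliability features: a few quantiles of the sorted |LLR| magnitudes.
        mag = np.sort(np.abs(llr))
        return np.quantile(mag, [0.05, 0.25, 0.5, 0.75, 0.95])

    # Hypothetical pre-trained weights (random here only so the sketch runs).
    W1, b1 = rng.normal(size=(5, 16)), np.zeros(16)
    W2, b2 = rng.normal(size=(16, max_order + 1)), np.zeros(max_order + 1)

    def predict_order(llr):
        # Tiny MLP forward pass; returns a reprocessing order in {0, ..., max_order}.
        h = np.maximum(features(llr) @ W1 + b1, 0.0)    # ReLU hidden layer
        return int(np.argmax(h @ W2 + b2))

    def decode(llr, osd):
        # Run the caller-supplied OSD routine only up to the predicted order.
        return osd(llr, order=predict_order(llr))

Under such a scheme, the average complexity drops because the expensive high-order reprocessing is invoked only for the small fraction of received words that actually need it.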
Related papers
- Erasure Coded Neural Network Inference via Fisher Averaging [28.243239815823205]
Erasure-coded computing has been successfully used in cloud systems to reduce tail latency caused by factors such as straggling servers and heterogeneous traffic variations.
We design a method to code over neural networks: given two or more neural network models, we construct a coded model whose output is a linear combination of the outputs of the given networks.
We conduct experiments performing erasure coding over neural networks trained on real-world vision datasets and show that the accuracy of the decoded outputs using COIN is significantly higher than that of other baselines.
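As a rough illustration of the erasure-coding idea (not COIN's Fisher-averaging construction, which combines model parameters rather than outputs), the sketch below uses two toy linear models and a "parity" model equal to the sum of their outputs; if one worker's output is erased, it can be recovered from the other output and the parity.

    import numpy as np

    rng = np.random.default_rng(1)
    W_a, W_b = rng.normal(size=(4, 3)), rng.normal(size=(4, 3))   # two toy linear "networks"

    def model_a(x): return x @ W_a
    def model_b(x): return x @ W_b
    def coded(x):   return model_a(x) + model_b(x)                # "parity" model

    x = rng.normal(size=(2, 4))
    out_a, out_parity = model_a(x), coded(x)       # suppose model_b's worker straggles or fails
    recovered_b = out_parity - out_a               # decode the erased output from the parity
    assert np.allclose(recovered_b, model_b(x))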
arXiv Detail & Related papers (2024-09-02T18:46:26Z) - Learned layered coding for Successive Refinement in the Wyner-Ziv Problem [18.134147308944446]
We propose a data-driven approach to explicitly learn the progressive encoding of a continuous source.
This setup refers to the successive refinement of the Wyner-Ziv coding problem.
We demonstrate that RNNs can explicitly retrieve layered binning solutions akin to scalable nested quantization.
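A minimal numerical sketch of the layered-binning idea the RNNs are reported to recover: a coarse base-layer quantizer followed by a finer quantizer of the residual, so each added layer refines the reconstruction. The step sizes are arbitrary assumptions, and the decoder-side information of the Wyner-Ziv setting is omitted.

    import numpy as np

    x = np.random.default_rng(4).normal(size=8)     # source samples
    coarse_step, fine_step = 1.0, 0.25              # layer 1 and layer 2 step sizes

    layer1 = np.round(x / coarse_step) * coarse_step            # base-layer description
    layer2 = np.round((x - layer1) / fine_step) * fine_step     # refinement of the residual
    print(np.abs(x - layer1).max())                 # error with the base layer only
    print(np.abs(x - layer1 - layer2).max())        # smaller error after refinement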
arXiv Detail & Related papers (2023-11-06T12:45:32Z) - Iterative Sketching for Secure Coded Regression [66.53950020718021]
We propose methods for speeding up distributed linear regression.
Specifically, we randomly rotate the basis of the system of equations and then subsample blocks, to simultaneously secure the information and reduce the dimension of the regression problem.
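A minimal sketch of one rotate-then-subsample step for least squares, under assumed details (the paper's exact sketching matrices and iteration are not reproduced): an orthonormal rotation obscures the raw equations without changing the least-squares solution, and solving on a subsampled block approximates the full solve at reduced dimension.

    import numpy as np

    rng = np.random.default_rng(2)
    m, d = 200, 5
    A = rng.normal(size=(m, d))
    x_true = rng.normal(size=d)
    b = A @ x_true + 0.1 * rng.normal(size=m)

    Q, _ = np.linalg.qr(rng.normal(size=(m, m)))      # random orthonormal rotation of the equations
    A_rot, b_rot = Q @ A, Q @ b                       # hides the raw data, leaves the LS solution unchanged

    block = rng.choice(m, size=m // 4, replace=False)  # subsample a block of the rotated equations
    x_sketch, *_ = np.linalg.lstsq(A_rot[block], b_rot[block], rcond=None)
    x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(np.linalg.norm(x_sketch - x_full))           # small: the sketched solve approximates the full one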
arXiv Detail & Related papers (2023-08-08T11:10:42Z) - The END: An Equivariant Neural Decoder for Quantum Error Correction [73.4384623973809]
We introduce a data-efficient neural decoder that exploits the symmetries of the problem.
We propose a novel equivariant architecture that achieves state-of-the-art accuracy compared to previous neural decoders.
arXiv Detail & Related papers (2023-04-14T19:46:39Z) - Modular decoding: parallelizable real-time decoding for quantum computers [55.41644538483948]
Real-time quantum computation will require decoding algorithms capable of extracting logical outcomes from a stream of data generated by noisy quantum hardware.
We propose modular decoding, an approach capable of addressing this challenge with minimal additional communication and without sacrificing decoding accuracy.
We introduce the edge-vertex decomposition, a concrete instance of modular decoding for lattice-surgery style fault-tolerant blocks.
arXiv Detail & Related papers (2023-03-08T19:26:10Z) - Machine Learning-Aided Efficient Decoding of Reed-Muller Subcodes [59.55193427277134]
Reed-Muller (RM) codes achieve the capacity of general binary-input memoryless symmetric channels.
However, RM codes only admit limited sets of rates.
Efficient decoders are available for RM codes at finite lengths.
arXiv Detail & Related papers (2023-01-16T04:11:14Z) - A Scalable Graph Neural Network Decoder for Short Block Codes [49.25571364253986]
We propose a novel decoding algorithm for short block codes based on an edge-weighted graph neural network (EW-GNN).
The EW-GNN decoder operates on the Tanner graph with an iterative message-passing structure.
We show that the EW-GNN decoder outperforms belief propagation (BP) and deep-learning-based BP methods in terms of the decoding error rate.
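For intuition, the sketch below runs weighted min-sum message passing on a toy Tanner graph, a simplified stand-in for the EW-GNN: each edge carries a multiplicative weight on its check-to-variable message (random placeholders here, whereas the EW-GNN computes its messages with a trained neural network).

    import numpy as np

    H = np.array([[1, 1, 0, 1, 0, 0],     # toy parity-check matrix: 3 checks, 6 bits
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 0, 0, 1, 1]])
    rng = np.random.default_rng(3)
    edge_w = rng.uniform(0.5, 1.0, size=H.shape)    # "learned" per-edge weights (placeholders)

    def decode(llr, iters=10):
        m, n = H.shape
        v2c = np.tile(llr, (m, 1)) * H              # variable-to-check messages
        for _ in range(iters):
            c2v = np.zeros((m, n))
            for i in range(m):
                nbrs = np.flatnonzero(H[i])
                for j in nbrs:
                    others = [k for k in nbrs if k != j]
                    sign = np.prod(np.sign(v2c[i, others]))
                    mag = np.min(np.abs(v2c[i, others]))
                    c2v[i, j] = edge_w[i, j] * sign * mag   # edge weight scales the min-sum message
            total = llr + c2v.sum(axis=0)                   # posterior LLR per bit
            v2c = (np.tile(total, (m, 1)) - c2v) * H        # extrinsic update for the next round
        return (total < 0).astype(int)                      # hard decision (positive LLR -> bit 0)

    print(decode(np.array([2.0, -1.5, 3.0, 0.5, -2.0, 1.0])))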
arXiv Detail & Related papers (2022-11-13T17:13:12Z) - Multifidelity data fusion in convolutional encoder/decoder networks [0.0]
We analyze the regression accuracy of convolutional neural networks assembled from encoders, decoders and skip connections.
We demonstrate their accuracy when trained on a few high-fidelity samples together with many low-fidelity samples.
arXiv Detail & Related papers (2022-05-10T21:51:22Z) - Boost decoding performance of finite geometry LDPC codes with deep learning tactics [3.1519370595822274]
We seek a low-complexity and high-performance decoder for a class of finite geometry LDPC codes.
We elaborate on how to generate high-quality training data effectively.
arXiv Detail & Related papers (2022-05-01T14:41:16Z) - Deep-Learning Based Linear Precoding for MIMO Channels with Finite-Alphabet Signaling [0.5076419064097732]
This paper studies the problem of linear precoding for multiple-input multiple-output (MIMO) communication channels.
Existing solutions typically suffer from high computational complexity due to costly computations of the constellation-constrained mutual information.
A data-driven approach, based on deep learning, is proposed to tackle the problem.
arXiv Detail & Related papers (2021-11-05T13:48:45Z) - A Compressive Sensing Approach for Federated Learning over Massive MIMO Communication Systems [82.2513703281725]
Federated learning is a privacy-preserving approach in which a central server trains a global model in collaboration with wireless devices.
We present a compressive sensing approach for federated learning over massive multiple-input multiple-output communication systems.
arXiv Detail & Related papers (2020-03-18T05:56:27Z)