Sparsifying Parity-Check Matrices
- URL: http://arxiv.org/abs/2005.05051v1
- Date: Fri, 8 May 2020 05:51:40 GMT
- Title: Sparsifying Parity-Check Matrices
- Authors: Luís M. S. Russo, Tobias Dietz, José Rui Figueira, Alexandre P.
Francisco, Stefan Ruzika
- Abstract summary: We consider the problem of minimizing the number of one-entries in parity-check matrices.
In the maximum-likelihood (ML) decoding method, the number of ones in PCMs is directly related to the time required to decode messages.
We propose a simple matrix row manipulation which alters the PCM, but not the code itself.
- Score: 60.28601275219819
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Parity check matrices (PCMs) are used to define linear error correcting codes
and ensure reliable information transmission over noisy channels. The set of
codewords of such a code is the null space of this binary matrix.
We consider the problem of minimizing the number of one-entries in
parity-check matrices. In the maximum-likelihood (ML) decoding method, the
number of ones in PCMs is directly related to the time required to decode
messages. We propose a simple matrix row manipulation heuristic which alters
the PCM, but not the code itself. We apply simulated annealing and greedy local
searches to obtain PCMs with a small number of one entries quickly, i.e. in a
couple of minutes or hours when using mainstream hardware. The resulting
matrices provide faster ML decoding procedures, especially for large codes.
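The row manipulation described above can be sketched as a greedy local search over GF(2) row additions: replacing a row with its XOR against another row is an invertible row operation, so the null space (and hence the code) is unchanged while the number of one-entries may drop. This is a minimal illustration of the idea, not the paper's exact heuristic (which also uses simulated annealing):

```python
import numpy as np

def greedy_sparsify(H):
    """Greedily reduce the number of one-entries in a binary
    parity-check matrix H by replacing a row with its XOR with
    another row. Row additions over GF(2) are invertible, so the
    null space -- the code itself -- is preserved."""
    H = H.copy() % 2
    improved = True
    while improved:
        improved = False
        for i in range(H.shape[0]):
            for j in range(H.shape[0]):
                if i == j:
                    continue
                candidate = H[i] ^ H[j]
                # Accept only strict improvements; never zero out a row,
                # which would reduce the rank of the matrix.
                if 0 < candidate.sum() < H[i].sum():
                    H[i] = candidate
                    improved = True
    return H

# Toy example: a dense PCM with 6 ones is reduced to 5 ones.
H = np.array([[1, 1, 1, 0],
              [1, 1, 0, 1]], dtype=np.uint8)
H_sparse = greedy_sparsify(H)
```

Since sparser PCMs mean fewer terms per check, the ML decoding loop over one-entries runs correspondingly faster.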
Related papers
- Optimal Quantization for Matrix Multiplication [35.007966885532724]
We present a universal quantizer based on nested lattices with an explicit guarantee of approximation error for any (non-random) pair of matrices $A$, $B$ in terms of only the Frobenius norms $\|A\|_F$, $\|B\|_F$ and $\|A^\top B\|_F$.
arXiv Detail & Related papers (2024-10-17T17:19:48Z) - Let the Code LLM Edit Itself When You Edit the Code [50.46536185784169]
Positional Integrity Encoding (PIE)
Results demonstrate that PIE reduces computational overhead by over 85% compared to the standard full recomputation approach.
arXiv Detail & Related papers (2024-07-03T14:34:03Z) - Efficient Minimum Bayes Risk Decoding using Low-Rank Matrix Completion Algorithms [19.543681023903456]
We formulate Minimum Bayes Risk (MBR) decoding as a matrix completion problem.
We exploit this by only computing a random subset of the scores and efficiently recover the missing entries in the matrix.
Our experimental results on machine translation tasks demonstrate that the proposed method requires only 1/16 of the utility metric computations.
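The matrix-completion idea can be illustrated with a toy low-rank example: score only a random subset of the utility matrix, impute the rest, then pick the hypothesis with the highest mean recovered utility. This is a simple truncated-SVD imputation sketch of the general technique, not the paper's algorithm, and all names here are hypothetical:

```python
import numpy as np

def complete_matrix(observed, mask, rank=1, iters=100):
    """Fill in unobserved entries of an approximately low-rank matrix
    by iterative rank-`rank` truncated-SVD imputation, keeping the
    observed entries fixed at every step."""
    X = np.where(mask, observed, observed[mask].mean())
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X = np.where(mask, observed, X_low)  # re-pin known scores
    return X

rng = np.random.default_rng(0)
true = np.outer(rng.random(8), rng.random(8))  # rank-1 "utility" matrix
mask = rng.random(true.shape) < 0.5            # score only ~half the entries
recovered = complete_matrix(true * mask, mask)
best = recovered.mean(axis=1).argmax()         # MBR pick: max mean utility
```

The savings come from evaluating the expensive utility metric only where `mask` is true; the completion step is cheap linear algebra by comparison.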
arXiv Detail & Related papers (2024-06-05T00:54:03Z) - Neural Belief Propagation Decoding of Quantum LDPC Codes Using
Overcomplete Check Matrices [60.02503434201552]
We propose to decode QLDPC codes based on a check matrix with redundant rows, generated from linear combinations of the rows in the original check matrix.
This approach yields a significant improvement in decoding performance with the additional advantage of very low decoding latency.
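The redundant-row construction can be sketched directly: rows formed as GF(2) sums of existing rows lie in the row space of the check matrix, so the null space (the code) is unchanged while a BP decoder gains extra checks to pass messages over. A minimal sketch of the idea, using classical binary checks rather than the paper's quantum setting:

```python
import numpy as np

def overcomplete(H):
    """Append redundant checks: the XOR of every pair of rows of H.
    Each new row is a GF(2) linear combination of existing rows, so
    the null space of the augmented matrix equals that of H."""
    extra = [H[i] ^ H[j] for i in range(len(H)) for j in range(i + 1, len(H))]
    return np.vstack([H] + extra)

H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]], dtype=np.uint8)
H_oc = overcomplete(H)  # 3 rows: the 2 originals plus their XOR
```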
arXiv Detail & Related papers (2022-12-20T13:41:27Z) - Non-PSD Matrix Sketching with Applications to Regression and
Optimization [56.730993511802865]
We present dimensionality reduction methods for non-PSD and "square-root" matrices.
We show how these techniques can be used for multiple downstream tasks.
arXiv Detail & Related papers (2021-06-16T04:07:48Z) - Quantum error-correcting codes from matrix-product codes related to
quasi-orthogonal and quasi-unitary matrices [18.763290930749235]
Matrix-product codes over finite fields are an important class of long linear codes.
The construction of matrix-product codes with certain self-orthogonality over finite fields is an effective way to obtain good $q$-ary quantum codes of large length.
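For context, the standard definition of a matrix-product code (a textbook construction, not specific to this paper) combines codewords of constituent codes through a fixed defining matrix:

```latex
% Given linear codes C_1, \dots, C_k of length n over \mathbb{F}_q
% and a full-rank k \times s matrix A over \mathbb{F}_q,
% the matrix-product code is
[C_1, \dots, C_k] \cdot A
  = \bigl\{ (c_1, \dots, c_k) \cdot A : c_i \in C_i \bigr\},
% a linear code of length ns, where each c_i is written as a column
% vector and the resulting n \times s matrix is read off as a codeword.
```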
arXiv Detail & Related papers (2020-12-31T16:17:37Z) - Sketching Transformed Matrices with Applications to Natural Language
Processing [76.6222695417524]
We propose a space-efficient sketching algorithm for computing the product of a given small matrix with the transformed matrix.
We show that our approach obtains small error and is efficient in both space and time.
arXiv Detail & Related papers (2020-02-23T03:07:31Z) - Pruning Neural Belief Propagation Decoders [77.237958592189]
We introduce a method to tailor an overcomplete parity-check matrix to (neural) BP decoding using machine learning.
We achieve performance within 0.27 dB and 1.5 dB of the ML performance while reducing the complexity of the decoder.
arXiv Detail & Related papers (2020-01-21T12:05:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.