Learned layered coding for Successive Refinement in the Wyner-Ziv Problem
- URL: http://arxiv.org/abs/2311.03061v1
- Date: Mon, 6 Nov 2023 12:45:32 GMT
- Title: Learned layered coding for Successive Refinement in the Wyner-Ziv Problem
- Authors: Boris Joukovsky and Brent De Weerdt and Nikos Deligiannis
- Abstract summary: We propose a data-driven approach to explicitly learn the progressive encoding of a continuous source.
This setup refers to the successive refinement of the Wyner-Ziv coding problem.
We demonstrate that RNNs can explicitly retrieve layered binning solutions akin to scalable nested quantization.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We propose a data-driven approach to explicitly learn the progressive
encoding of a continuous source, which is successively decoded with increasing
levels of quality and with the aid of correlated side information. This setup
refers to the successive refinement of the Wyner-Ziv coding problem. Assuming
ideal Slepian-Wolf coding, our approach employs recurrent neural networks
(RNNs) to learn layered encoders and decoders for the quadratic Gaussian case.
The models are trained by minimizing a variational bound on the rate-distortion
function of the successively refined Wyner-Ziv coding problem. We demonstrate
that RNNs can explicitly retrieve layered binning solutions akin to scalable
nested quantization. Moreover, the rate-distortion performance of the scheme is
on par with the corresponding monolithic Wyner-Ziv coding approach and is close
to the rate-distortion bound.
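For the quadratic Gaussian case, the rate-distortion bound referenced above is the Wyner-Ziv function D_WZ(R) = σ²_{X|Y} · 2^{−2R}, where σ²_{X|Y} = σ²_X(1 − ρ²) is the conditional variance of the source given the side information; for jointly Gaussian sources under mean-squared error, Wyner-Ziv coding incurs no rate loss relative to conditional coding.
Below is a minimal, hypothetical PyTorch sketch of the kind of training loop the abstract describes. All module names, dimensions, the uniform-noise quantization proxy, and the factorized Gaussian prior are illustrative assumptions, not the paper's exact construction.
```python
# Hypothetical sketch (not the paper's exact model): layered Wyner-Ziv
# coding with RNNs. A GRU encoder emits one latent per refinement layer;
# quantization is approximated by additive uniform noise; the rate term is
# a variational bound under a learned factorized Gaussian prior.
import torch
import torch.nn as nn

LAYERS, DIM, SIGMA_N, LAM = 4, 8, 0.5, 0.1

class LayeredEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, DIM)

    def forward(self, x):                            # x: (B, 1)
        steps = x.unsqueeze(1).repeat(1, LAYERS, 1)  # feed x at every layer
        h, _ = self.rnn(steps)                       # (B, LAYERS, 32)
        return self.head(h)                          # one latent per layer

class LayeredDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(input_size=DIM + 1, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, 1)

    def forward(self, z, y):                         # z: (B, LAYERS, DIM)
        y_rep = y.unsqueeze(1).expand(-1, LAYERS, -1)
        h, _ = self.rnn(torch.cat([z, y_rep], dim=-1))
        return self.head(h).squeeze(-1)              # (B, LAYERS) estimates

enc, dec = LayeredEncoder(), LayeredDecoder()
log_scale = nn.Parameter(torch.zeros(LAYERS, DIM))   # learned prior scales
params = list(enc.parameters()) + list(dec.parameters()) + [log_scale]
opt = torch.optim.Adam(params, lr=1e-3)

for step in range(1000):
    x = torch.randn(256, 1)                          # unit Gaussian source
    y = x + SIGMA_N * torch.randn_like(x)            # correlated side info
    z = enc(x)
    z_tilde = z + (torch.rand_like(z) - 0.5)         # quantization proxy
    x_hat = dec(z_tilde, y)                          # refined layer by layer
    # The GRU hidden state accumulates earlier layers, so the estimate at
    # layer t implicitly uses latents 1..t: a successive-refinement decoder.
    distortion = ((x_hat - x) ** 2).mean(dim=0).sum()
    prior = torch.distributions.Normal(0.0, log_scale.exp())
    rate = -prior.log_prob(z_tilde).mean(dim=0).sum()  # variational rate bound
    loss = distortion + LAM * rate
    opt.zero_grad(); loss.backward(); opt.step()
```
The key structural point is that the decoder's recurrent state accumulates the latents of earlier layers, so each successive output refines the previous estimate, mirroring successive refinement with side information.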
Related papers
- Robust Stochastically-Descending Unrolled Networks
Deep unrolling is an emerging learning-to-optimize method that unrolls a truncated iterative algorithm in the layers of a trainable neural network.
However, the convergence guarantees and generalizability of unrolled networks remain open theoretical problems.
We numerically assess unrolled architectures trained under the proposed constraints in two different applications.
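As a flavor of the technique, here is a minimal, hypothetical sketch of deep unrolling: a fixed number of gradient-descent iterations for a least-squares objective is unrolled into layers, each with its own learnable step size (names and sizes are illustrative, not from the paper).
```python
# Hypothetical deep-unrolling sketch: T truncated gradient-descent steps for
# 0.5 * ||Ax - b||^2 become T layers with learnable per-layer step sizes.
import torch
import torch.nn as nn

class UnrolledGD(nn.Module):
    def __init__(self, num_layers=10):
        super().__init__()
        self.steps = nn.Parameter(0.01 * torch.ones(num_layers))

    def forward(self, A, b):
        x = torch.zeros(A.shape[1])
        for alpha in self.steps:
            x = x - alpha * (A.T @ (A @ x - b))   # one unrolled iteration
        return x

A, b = torch.randn(30, 10), torch.randn(30)
x_hat = UnrolledGD()(A, b)
((A @ x_hat - b) ** 2).mean().backward()          # trains the step sizes
```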
arXiv Detail & Related papers (2023-12-25T18:51:23Z)
- Graph Neural Networks for Enhanced Decoding of Quantum LDPC Codes
We propose a differentiable iterative decoder for quantum low-density parity-check (LDPC) codes.
The proposed algorithm is composed of classical belief propagation (BP) decoding stages and intermediate graph neural network (GNN) layers.
arXiv Detail & Related papers (2023-10-26T19:56:25Z)
- The END: An Equivariant Neural Decoder for Quantum Error Correction
We introduce a data-efficient neural decoder that exploits the symmetries of the problem.
We propose a novel equivariant architecture that achieves state-of-the-art accuracy compared to previous neural decoders.
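As a generic illustration of building symmetry into a decoder (a hypothetical DeepSets-style layer, not the paper's architecture, which is tailored to the code's symmetry group), a permutation-equivariant linear layer can be written as:
```python
# Permutation-equivariant layer: permuting the input set permutes the output
# the same way, because the pooled summary is permutation-invariant.
import torch
import torch.nn as nn

class EquivariantLinear(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.w_self = nn.Linear(dim, dim)        # per-element transform
        self.w_pool = nn.Linear(dim, dim)        # transform of the mean

    def forward(self, x):                        # x: (batch, set_size, dim)
        return self.w_self(x) + self.w_pool(x.mean(dim=1, keepdim=True))

layer = EquivariantLinear(8)
x, perm = torch.randn(4, 10, 8), torch.randperm(10)
assert torch.allclose(layer(x)[:, perm], layer(x[:, perm]), atol=1e-5)
```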
arXiv Detail & Related papers (2023-04-14T19:46:39Z)
- Denoising Diffusion Error Correction Codes
Neural decoders have recently demonstrated their advantage over classical decoding techniques.
However, state-of-the-art neural decoders suffer from high complexity and lack the important iterative scheme characteristic of many legacy decoders.
We propose to employ denoising diffusion models for the soft decoding of linear codes at arbitrary block lengths.
arXiv Detail & Related papers (2022-09-16T11:00:50Z)
- Graph Neural Networks for Channel Decoding
The idea is to let a neural network (NN) learn a generalized message-passing algorithm over a given graph.
We showcase competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes.
We benchmark our proposed decoder against state-of-the-art in conventional channel decoding as well as against recent deep learning-based results.
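For flavor, a minimal, hypothetical sketch of learned message passing on a Tanner graph: variable and check updates are tiny MLPs, and edge messages are stored as dense matrices masked by the parity-check matrix (real GNN decoders, including this one, use extrinsic messages and richer edge features).
```python
# Toy learned message passing on a Tanner graph defined by H (3 checks, 6 bits).
import torch
import torch.nn as nn

H = torch.tensor([[1., 1., 0., 1., 0., 0.],
                  [0., 1., 1., 0., 1., 0.],
                  [1., 0., 0., 0., 1., 1.]])
m, n = H.shape
v2c_net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
c2v_net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

def decode(llr, iters=5):
    c2v = torch.zeros(m, n)                       # check-to-variable messages
    for _ in range(iters):
        v_in = torch.stack([llr.expand(m, n), c2v], dim=-1)
        v2c = v2c_net(v_in).squeeze(-1) * H       # variable-to-check update
        agg = v2c.sum(dim=1, keepdim=True).expand(m, n)
        c2v = c2v_net(torch.stack([v2c, agg], dim=-1)).squeeze(-1) * H
    return llr + c2v.sum(dim=0)                   # refined LLR per code bit

print(decode(torch.randn(n)))
```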
arXiv Detail & Related papers (2022-07-29T15:29:18Z)
- Boost decoding performance of finite geometry LDPC codes with deep learning tactics
We seek a low-complexity and high-performance decoder for a class of finite geometry LDPC codes.
We elaborate on how to generate high-quality training data effectively.
arXiv Detail & Related papers (2022-05-01T14:41:16Z)
- Reducing Redundancy in the Bottleneck Representation of the Autoencoders
Autoencoders are a type of unsupervised neural network that can be used to solve various tasks.
We propose a scheme to explicitly penalize feature redundancies in the bottleneck representation.
We tested our approach across different tasks: dimensionality reduction using three different datasets, image compression using the MNIST dataset, and image denoising using Fashion-MNIST.
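One common way to realize such a penalty (a hypothetical sketch; the paper's exact formulation may differ) is to add the squared off-diagonal entries of the bottleneck covariance to the reconstruction loss, pushing the bottleneck features toward being decorrelated:
```python
# Autoencoder step with a feature-decorrelation penalty on the bottleneck.
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))
dec = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

x = torch.rand(64, 784)                          # stand-in MNIST-sized batch
z = enc(x)
zc = z - z.mean(dim=0, keepdim=True)             # center over the batch
cov = zc.T @ zc / (z.shape[0] - 1)               # (32, 32) feature covariance
redundancy = (cov - torch.diag(torch.diagonal(cov))).pow(2).sum()
loss = (dec(z) - x).pow(2).mean() + 1e-3 * redundancy
opt.zero_grad(); loss.backward(); opt.step()
```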
arXiv Detail & Related papers (2022-02-09T18:48:02Z)
- FAID Diversity via Neural Networks
We propose a new approach to design the decoder diversity of finite alphabet iterative decoders (FAIDs) for Low-Density Parity Check (LDPC) codes.
The proposed decoder diversity is achieved by training a recurrent quantized neural network (RQNN) to learn/design FAIDs.
arXiv Detail & Related papers (2021-05-10T05:14:42Z)
- A Learning-Based Approach to Address Complexity-Reliability Tradeoff in OS Decoders
We show that using artificial neural networks to predict the required order of an ordered-statistics-based decoder reduces its average complexity and hence its latency.
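A hypothetical sketch of the idea: a small classifier maps sorted channel reliabilities to a predicted reprocessing order, so easy received words are decoded with a cheap order (features, sizes, and labels here are illustrative stand-ins).
```python
# Toy order predictor for an ordered-statistics decoder (OSD).
import torch
import torch.nn as nn

n = 128                                           # code length (illustrative)
predictor = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, 4))

llr = torch.randn(32, n)                          # batch of channel LLRs
feats = llr.abs().sort(dim=1).values              # sorted reliabilities
labels = torch.randint(0, 4, (32,))               # stand-in "required order"
loss = nn.functional.cross_entropy(predictor(feats), labels)
loss.backward()
order = predictor(feats).argmax(dim=1)            # order to run OSD with
```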
arXiv Detail & Related papers (2021-03-05T18:22:20Z)
- Short-Term Memory Optimization in Recurrent Neural Networks by Autoencoder-based Initialization
We explore an alternative solution based on explicit memorization using linear autoencoders for sequences.
We show how such pretraining can better support solving hard classification tasks with long sequences.
We show that the proposed approach achieves a much lower reconstruction error for long sequences and better gradient propagation during the fine-tuning phase.
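A minimal, hypothetical sketch of the recipe: pretrain a linear state-space autoencoder to memorize its inputs, then copy its matrices into an Elman RNN's input and recurrent weights before fine-tuning (the paper's exact linear-autoencoder construction differs).
```python
# Pretrain linear maps A (input-to-state) and B (state-to-state) so the state
# retains recent inputs, then warm-start an nn.RNN with them.
import torch
import torch.nn as nn

in_dim, hid = 8, 32
A = nn.Parameter(0.1 * torch.randn(hid, in_dim))
B = nn.Parameter(0.1 * torch.randn(hid, hid))
C = nn.Parameter(0.1 * torch.randn(in_dim, hid))   # decodes input from state
opt = torch.optim.Adam([A, B, C], lr=1e-3)

for _ in range(200):
    x = torch.randn(16, 20, in_dim)                # (batch, time, features)
    h, loss = torch.zeros(16, hid), 0.0
    for t in range(x.shape[1]):
        h = x[:, t] @ A.T + h @ B.T                # linear state update
        loss = loss + ((h @ C.T - x[:, t]) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

rnn = nn.RNN(in_dim, hid, batch_first=True)        # network to fine-tune
with torch.no_grad():
    rnn.weight_ih_l0.copy_(A)                      # warm-start input weights
    rnn.weight_hh_l0.copy_(B)                      # warm-start recurrence
```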
arXiv Detail & Related papers (2020-11-05T14:57:16Z)
- Infomax Neural Joint Source-Channel Coding via Adversarial Bit Flip
We propose a novel regularization method called Infomax Adversarial-Bit-Flip (IABF) to improve the stability and robustness of the neural joint source-channel coding scheme.
Our IABF achieves state-of-the-art performance on both compression and error-correction benchmarks, outperforming the baselines by a significant margin.
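For flavor only, a hypothetical sketch of the bit-flip regularization: a binary-latent autoencoder is trained with a random single-bit flip per sample so the decoder stays robust to bit errors (the actual IABF selects flips adversarially and adds an information-maximization objective).
```python
# Binary-latent autoencoder trained under random single-bit latent flips.
import torch
import torch.nn as nn

enc, dec = nn.Linear(784, 64), nn.Linear(64, 784)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

x = torch.rand(32, 784)
logits = enc(x)
soft = torch.sigmoid(logits)
bits = (logits > 0).float() + soft - soft.detach()   # straight-through binarize
flip = nn.functional.one_hot(torch.randint(0, 64, (32,)), 64).float()
bits_flipped = bits * (1 - flip) + (1 - bits) * flip # flip one bit per sample
loss = (dec(bits_flipped) - x).pow(2).mean()
opt.zero_grad(); loss.backward(); opt.step()
```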
arXiv Detail & Related papers (2020-04-03T10:00:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.