Infomax Neural Joint Source-Channel Coding via Adversarial Bit Flip
- URL: http://arxiv.org/abs/2004.01454v1
- Date: Fri, 3 Apr 2020 10:00:02 GMT
- Title: Infomax Neural Joint Source-Channel Coding via Adversarial Bit Flip
- Authors: Yuxuan Song, Minkai Xu, Lantao Yu, Hao Zhou, Shuo Shao, Yong Yu
- Abstract summary: We propose a novel regularization method called Infomax Adversarial-Bit-Flip (IABF) to improve the stability and robustness of the neural joint source-channel coding scheme.
Our IABF achieves state-of-the-art performance on both compression and error correction benchmarks and outperforms the baselines by a significant margin.
- Score: 41.28049430114734
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although Shannon theory states that it is asymptotically optimal to separate
the source and channel coding as two independent processes, in many practical
communication scenarios this decomposition is limited by the finite bit-length
and computational power for decoding. Recently, neural joint source-channel
coding (NECST) was proposed to sidestep this problem. While it leverages the
advancements of amortized inference and deep learning to improve the encoding
and decoding process, it still cannot always achieve compelling results in
terms of compression and error correction performance due to the limited
robustness of its learned coding networks. In this paper, motivated by the
inherent connections between neural joint source-channel coding and discrete
representation learning, we propose a novel regularization method called
Infomax Adversarial-Bit-Flip (IABF) to improve the stability and robustness of
the neural joint source-channel coding scheme. More specifically, on the
encoder side, we propose to explicitly maximize the mutual information between
the codeword and data; while on the decoder side, the amortized reconstruction
is regularized within an adversarial framework. Extensive experiments
conducted on various real-world datasets demonstrate that our IABF achieves
state-of-the-art performance on both compression and error correction
benchmarks and outperforms the baselines by a significant margin.
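To make the two ingredients concrete, here is a minimal PyTorch sketch of the general recipe the abstract describes, not the authors' implementation: a straight-through binary encoder trained through a binary symmetric channel, plus an adversarial bit-flip step that perturbs the codeword in the first-order worst-case direction before the decoder reconstructs. All names, dimensions, and the loss weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryJSCC(nn.Module):
    """Toy joint source-channel autoencoder over a binary symmetric channel."""
    def __init__(self, data_dim=784, code_bits=100):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(),
                                 nn.Linear(256, code_bits))
        self.dec = nn.Sequential(nn.Linear(code_bits, 256), nn.ReLU(),
                                 nn.Linear(256, data_dim))

    def encode(self, x):
        probs = torch.sigmoid(self.enc(x))
        bits = (torch.rand_like(probs) < probs).float()
        # Straight-through estimator: hard bits forward, Bernoulli probs backward.
        return bits + probs - probs.detach()

def adversarial_bit_flip(model, x, word, k=3):
    """Flip the k bits whose flip most increases the reconstruction loss,
    using a first-order estimate from the gradient w.r.t. the word."""
    word = word.detach().requires_grad_(True)
    loss = F.binary_cross_entropy_with_logits(model.dec(word), x)
    (grad,) = torch.autograd.grad(loss, word)
    gain = grad * (1.0 - 2.0 * word)            # flipping bit b changes it by 1 - 2b
    idx = gain.topk(k, dim=1).indices
    flipped = word.detach().clone()
    flipped.scatter_(1, idx, 1.0 - flipped.gather(1, idx))
    return flipped

model = BinaryJSCC()
x = torch.rand(8, 784)                          # stand-in data batch in [0, 1]
bits = model.encode(x)
flip = (torch.rand_like(bits) < 0.1).float()    # BSC with crossover p = 0.1
noisy = bits * (1 - flip) + (1 - bits) * flip
# Reconstruction term: a variational lower bound on I(codeword; data),
# so minimizing it realizes the infomax objective on the encoder side.
recon = F.binary_cross_entropy_with_logits(model.dec(noisy), x)
# Decoder-side regularizer: reconstruct under adversarial bit flips too.
adv = adversarial_bit_flip(model, x, noisy)
loss = recon + F.binary_cross_entropy_with_logits(model.dec(adv), x)
loss.backward()
```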
Related papers
- Learning Linear Block Error Correction Codes [62.25533750469467]
We propose for the first time a unified encoder-decoder training of binary linear block codes.
We also propose a novel Transformer model in which the self-attention masking is performed in a differentiable fashion for efficient backpropagation of the code gradient; a toy version of such soft masking is sketched after this entry.
arXiv Detail & Related papers (2024-05-07T06:47:12Z)
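As a rough illustration of what differentiable attention masking can look like, the toy layer below replaces a hard 0/1 mask with a learned sigmoid gate so the masking stays trainable end-to-end. This is an assumption-laden sketch, not the paper's model; `SoftMaskedAttention` and all dimensions are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftMaskedAttention(nn.Module):
    """Single-head self-attention with a learned, differentiable mask.
    A hard 0/1 mask would block gradients; a sigmoid gate keeps the
    masking decision trainable end-to-end (toy sketch)."""
    def __init__(self, dim, seq_len):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.mask_logits = nn.Parameter(torch.zeros(seq_len, seq_len))

    def forward(self, x):                        # x: (batch, seq_len, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = q @ k.transpose(-2, -1) / x.shape[-1] ** 0.5
        gate = torch.sigmoid(self.mask_logits)   # soft mask in (0, 1)
        attn = F.softmax(scores, dim=-1) * gate
        attn = attn / attn.sum(dim=-1, keepdim=True).clamp_min(1e-9)
        return attn @ v

layer = SoftMaskedAttention(dim=32, seq_len=16)
out = layer(torch.randn(4, 16, 32))              # -> (4, 16, 32)
```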
- Coding for Gaussian Two-Way Channels: Linear and Learning-Based Approaches [28.98777190628006]
We propose two different two-way coding strategies: linear coding and learning-based coding.
For learning-based coding, we introduce a novel recurrent neural network (RNN)-based coding architecture; a toy recurrent exchange loop appears after this entry.
Our two-way coding methodologies outperform conventional channel coding schemes significantly in sum-error performance.
arXiv Detail & Related papers (2023-12-31T12:40:18Z)
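A toy version of recurrent two-way coding (hypothetical, not the paper's architecture): each terminal's GRU consumes its own message bits together with the noisy symbol just received from the peer, so every new transmission can adapt to feedback.

```python
import torch
import torch.nn as nn

class TwoWayRNNEncoder(nn.Module):
    """Toy recurrent encoder for a two-way channel: at each round the GRU
    sees (own message bits, last noisy symbol received from the peer) and
    emits the next power-limited channel symbol."""
    def __init__(self, msg_bits=8, hidden=64):
        super().__init__()
        self.cell = nn.GRUCell(msg_bits + 1, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, msg, received, h):
        h = self.cell(torch.cat([msg, received], dim=1), h)
        return torch.tanh(self.out(h)), h

enc_a, enc_b = TwoWayRNNEncoder(), TwoWayRNNEncoder()
msg_a = torch.randint(0, 2, (4, 8)).float()
msg_b = torch.randint(0, 2, (4, 8)).float()
h_a, h_b = torch.zeros(4, 64), torch.zeros(4, 64)
rx_a, rx_b = torch.zeros(4, 1), torch.zeros(4, 1)
for _ in range(10):                              # 10 interaction rounds
    tx_a, h_a = enc_a(msg_a, rx_a, h_a)
    tx_b, h_b = enc_b(msg_b, rx_b, h_b)
    rx_b = tx_a + 0.1 * torch.randn_like(tx_a)   # AWGN, A -> B
    rx_a = tx_b + 0.1 * torch.randn_like(tx_b)   # AWGN, B -> A
```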
- Joint Hierarchical Priors and Adaptive Spatial Resolution for Efficient Neural Image Compression [11.25130799452367]
We propose an absolute image compression transformer (ICT) for neural image compression (NIC).
ICT captures both global and local contexts from the latent representations and better parameterizes the distribution of the quantized latents.
Our framework significantly improves the trade-off between coding efficiency and decoder complexity over the versatile video coding (VVC) reference encoder (VTM-18.0) and the neural SwinT-ChARM.
arXiv Detail & Related papers (2023-07-05T13:17:14Z)
- Deep Joint Source-Channel Coding with Iterative Source Error Correction [11.41076729592696]
We propose an iterative source error correction (ISEC) decoding scheme for deep-learning-based joint source-channel coding (DeepJSCC).
Given a noisy codeword received through the channel, we use a DeepJSCC encoder and decoder pair to update the codeword iteratively; a toy refinement loop is sketched after this entry.
The proposed scheme produces more reliable source reconstruction results compared to the baseline when the channel noise characteristics do not match the ones used during training.
arXiv Detail & Related papers (2023-02-17T22:50:58Z)
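One way to picture an iterative update of this kind (a sketch of the general idea, not the exact ISEC algorithm) is gradient refinement of the received word under an encode-decode consistency loss:

```python
import torch
import torch.nn as nn

# Stand-in encoder/decoder pair (in practice a pretrained DeepJSCC model).
enc = nn.Sequential(nn.Linear(784, 64), nn.Tanh())
dec = nn.Sequential(nn.Linear(64, 784), nn.Sigmoid())
for p in list(enc.parameters()) + list(dec.parameters()):
    p.requires_grad_(False)                      # decode-time: networks frozen

def iterative_refine(y, steps=20, lr=0.1):
    """Refine the received word y by gradient steps on a consistency loss:
    the decoded source should re-encode close to the current word."""
    z = y.clone().requires_grad_(True)
    opt = torch.optim.SGD([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((enc(dec(z)) - z) ** 2).mean()   # encode-decode consistency
        loss.backward()
        opt.step()
    return dec(z).detach()

y = torch.randn(4, 64)                           # noisy 64-symbol channel output
x_hat = iterative_refine(y)                      # refined reconstruction (4, 784)
```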
- Denoising Diffusion Error Correction Codes [92.10654749898927]
Recently, neural decoders have demonstrated their advantage over classical decoding techniques.
Recent state-of-the-art neural decoders suffer from high complexity and lack the important iterative scheme characteristic of many legacy decoders.
We propose to employ denoising diffusion models for the soft decoding of linear codes at arbitrary block lengths; a generic denoising loop is sketched after this entry.
arXiv Detail & Related papers (2022-09-16T11:00:50Z)
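Generically, diffusion-style decoding treats channel noise as the forward process and learns a network that strips a small amount of noise per reverse step. The loop below captures only that generic flavor; the uniform schedule, conditioning, and network are illustrative assumptions, not the paper's parameterization.

```python
import torch
import torch.nn as nn

# Denoiser conditioned on a scalar timestep (architecture is illustrative).
denoiser = nn.Sequential(nn.Linear(64 + 1, 128), nn.ReLU(),
                         nn.Linear(128, 64))

@torch.no_grad()
def diffusion_decode(y, steps=10):
    """Iteratively denoise the received soft word y toward a clean codeword;
    the update rule is a stand-in for a trained reverse process."""
    z = y.clone()
    for t in reversed(range(1, steps + 1)):
        t_embed = torch.full((z.shape[0], 1), t / steps)
        eps_hat = denoiser(torch.cat([z, t_embed], dim=1))   # predicted noise
        z = z - (1.0 / steps) * eps_hat          # strip a small noise fraction
    return z.sign()                              # hard bipolar decision

y = torch.randn(4, 64)                           # soft channel output
codeword_hat = diffusion_decode(y)
```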
- Graph Neural Networks for Channel Decoding [71.15576353630667]
The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph; a toy message-passing round is sketched after this entry.
We showcase competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes.
We benchmark our proposed decoder against state-of-the-art in conventional channel decoding as well as against recent deep learning-based results.
arXiv Detail & Related papers (2022-07-29T15:29:18Z)
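A learned message-passing decoder can be sketched as a Tanner-graph round in which the variable-node update is a small neural network; the toy code below uses a min-sum-style check update and omits extrinsic-message bookkeeping, so it is a simplified assumption-based sketch rather than the paper's layer.

```python
import torch
import torch.nn as nn

# Tiny parity-check matrix H (3 checks x 7 variables), Hamming(7,4)-style.
H = torch.tensor([[1., 1., 0., 1., 1., 0., 0.],
                  [1., 0., 1., 1., 0., 1., 0.],
                  [0., 1., 1., 1., 0., 0., 1.]])

var_update = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

def learned_mp_round(llr):                       # llr: (batch, 7) channel LLRs
    """One simplified round: min-sum-style check aggregation, then a learned
    variable update (flooding schedule, no extrinsic messages)."""
    mask = H.bool()
    x = llr.unsqueeze(1).expand(-1, 3, 7)        # beliefs seen by each check
    sgn = torch.where(mask, torch.sign(x), torch.ones_like(x)).prod(dim=2)
    mag = torch.where(mask, x.abs(),
                      torch.full_like(x, float("inf"))).amin(dim=2)
    chk_msg = sgn * mag                          # (batch, 3) check messages
    agg = chk_msg @ H                            # per-variable aggregation
    inp = torch.stack([llr, agg], dim=2).reshape(-1, 2)
    return var_update(inp).reshape(-1, 7)

llr = torch.randn(4, 7)
for _ in range(5):                               # 5 unrolled decoding iterations
    llr = learned_mp_round(llr)
bits_hat = (llr < 0).long()                      # hard decision
```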
- Error Correction Code Transformer [92.10654749898927]
We propose to extend, for the first time, the Transformer architecture to the soft decoding of linear codes at arbitrary block lengths.
We encode each channel output dimension to a high dimension for a better representation of the bit information, which is then processed separately; a hypothetical miniature of this embedding is sketched after this entry.
The proposed approach demonstrates the extreme power and flexibility of Transformers and outperforms existing state-of-the-art neural decoders by large margins at a fraction of their time complexity.
arXiv Detail & Related papers (2022-03-27T15:25:58Z)
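The high-dimensional per-bit encoding can be sketched as giving each received soft value its own learned embedding, scaled by the channel observation, before standard Transformer layers; the model below is a hypothetical miniature, not the paper's architecture.

```python
import torch
import torch.nn as nn

class BitEmbeddingDecoder(nn.Module):
    """Toy decoder: each of the n received soft values is lifted to a d-dim
    token (its own learned embedding scaled by the observation), then run
    through a standard Transformer encoder."""
    def __init__(self, n_bits=32, d=64):
        super().__init__()
        self.bit_embed = nn.Parameter(torch.randn(n_bits, d))
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d, 1)

    def forward(self, y):                        # y: (batch, n_bits) soft values
        tokens = y.unsqueeze(-1) * self.bit_embed  # (batch, n_bits, d)
        return self.head(self.transformer(tokens)).squeeze(-1)  # per-bit logits

dec = BitEmbeddingDecoder()
logits = dec(torch.randn(8, 32))                 # -> (8, 32)
```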
- Adversarial Neural Networks for Error Correcting Codes [76.70040964453638]
We introduce a general framework to boost the performance and applicability of machine learning (ML) models.
We propose to combine ML decoders with a competing discriminator network that tries to distinguish between codewords and noisy words; a minimal GAN-style sketch follows this entry.
Our framework is game-theoretic, motivated by generative adversarial networks (GANs).
arXiv Detail & Related papers (2021-12-21T19:14:44Z)
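The GAN-flavored setup can be sketched with the decoder in the generator role: a discriminator learns to separate valid codewords from decoded noisy words, and the decoder is additionally trained to fool it. Everything below (random stand-in codewords, loss weighting, dimensions) is an illustrative assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n = 32
decoder = nn.Sequential(nn.Linear(n, 128), nn.ReLU(), nn.Linear(128, n))
discrim = nn.Sequential(nn.Linear(n, 128), nn.ReLU(), nn.Linear(128, 1))
opt_dec = torch.optim.Adam(decoder.parameters(), lr=1e-3)
opt_dis = torch.optim.Adam(discrim.parameters(), lr=1e-3)

codewords = torch.randint(0, 2, (16, n)).float() * 2 - 1  # stand-in +/-1 words
noisy = codewords + 0.5 * torch.randn_like(codewords)     # AWGN channel output

# Discriminator step: separate real codewords from decoded noisy words.
fake = torch.tanh(decoder(noisy)).detach()
d_loss = (F.binary_cross_entropy_with_logits(discrim(codewords), torch.ones(16, 1))
          + F.binary_cross_entropy_with_logits(discrim(fake), torch.zeros(16, 1)))
opt_dis.zero_grad(); d_loss.backward(); opt_dis.step()

# Decoder step: reconstruct the codeword *and* fool the discriminator.
out = torch.tanh(decoder(noisy))
g_loss = (F.mse_loss(out, codewords)
          + F.binary_cross_entropy_with_logits(discrim(out), torch.ones(16, 1)))
opt_dec.zero_grad(); g_loss.backward(); opt_dec.step()
```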
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.