Capacity-Approaching Autoencoders for Communications
- URL: http://arxiv.org/abs/2009.05273v1
- Date: Fri, 11 Sep 2020 08:19:06 GMT
- Title: Capacity-Approaching Autoencoders for Communications
- Authors: Nunzio A. Letizia, Andrea M. Tonello
- Abstract summary: The current approach to train an autoencoder relies on the use of the cross-entropy loss function.
We propose a methodology that computes an estimate of the channel capacity and constructs an optimal coded signal approaching it.
- Score: 4.86067125387358
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The autoencoder concept has fostered the reinterpretation and the design of
modern communication systems. It consists of an encoder, a channel, and a
decoder block which modify their internal neural structure in an end-to-end
learning fashion. However, the current approach to train an autoencoder relies
on the use of the cross-entropy loss function. This approach can be prone to
overfitting issues and often fails to learn an optimal system and signal
representation (code). In addition, less is known about the autoencoder's ability
to design channel capacity-approaching codes, i.e., codes that maximize the
input-output mutual information under a certain power constraint. The task is even
more formidable for an unknown channel, whose capacity is itself unknown and must
therefore be learned.
In this paper, we address the challenge of designing capacity-approaching
codes by incorporating the presence of the communication channel into a novel
loss function for the autoencoder training. In particular, we exploit the
mutual information between the transmitted and received signals as a
regularization term in the cross-entropy loss function, with the aim of
controlling the amount of information stored. By jointly maximizing the mutual
information and minimizing the cross-entropy, we propose a methodology that a)
computes an estimate of the channel capacity and b) constructs an optimal coded
signal approaching it. Several simulation results demonstrate the potential of
the proposed method.
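The joint objective described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the critic function, the weight `beta`, and the choice of a Donsker-Varadhan lower bound as the mutual-information estimator are all assumptions for the sake of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_entropy(probs, labels):
    # Categorical cross-entropy between the decoder's predicted message
    # probabilities and the indices of the transmitted messages.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def mi_lower_bound_dv(critic, x, y):
    # Donsker-Varadhan lower bound on I(X; Y):
    #   E_{p(x,y)}[f(x, y)] - log E_{p(x)p(y)}[exp f(x, y)]
    # Shuffling y breaks the pairing, giving samples from the product
    # of the marginals.
    joint = np.mean([critic(a, b) for a, b in zip(x, y)])
    y_shuffled = rng.permutation(y)
    marginal = np.log(np.mean([np.exp(critic(a, b))
                               for a, b in zip(x, y_shuffled)]))
    return joint - marginal

def capacity_autoencoder_loss(probs, labels, critic, x, y, beta=0.5):
    # Minimize cross-entropy while maximizing the mutual-information
    # estimate between channel input x and output y, hence the subtraction.
    return cross_entropy(probs, labels) - beta * mi_lower_bound_dv(critic, x, y)
```

In a full training loop the critic would itself be a neural network trained to tighten the bound, so the MI term doubles as the channel-capacity estimate once the input distribution is optimized.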
Related papers
- Learning Linear Block Error Correction Codes [62.25533750469467]
We propose for the first time a unified encoder-decoder training of binary linear block codes.
We also propose a novel Transformer model in which the self-attention masking is performed in a differentiable fashion for the efficient backpropagation of the code gradient.
arXiv Detail & Related papers (2024-05-07T06:47:12Z)
- Coding for Gaussian Two-Way Channels: Linear and Learning-Based Approaches [28.98777190628006]
We propose two different two-way coding strategies: linear coding and learning-based coding.
For learning-based coding, we introduce a novel recurrent neural network (RNN)-based coding architecture.
Our two-way coding methodologies outperform conventional channel coding schemes significantly in sum-error performance.
arXiv Detail & Related papers (2023-12-31T12:40:18Z)
- Knowledge Distillation Based Semantic Communications For Multiple Users [10.770552656390038]
We consider the semantic communication (SemCom) system with multiple users, where there is a limited number of training samples and unexpected interference.
We propose a knowledge distillation (KD) based system where Transformer based encoder-decoder is implemented as the semantic encoder-decoder and fully connected neural networks are implemented as the channel encoder-decoder.
Numerical results demonstrate that KD significantly improves the robustness and the generalization ability when applied to unexpected interference, and it reduces the performance loss when compressing the model size.
arXiv Detail & Related papers (2023-11-23T03:28:14Z)
- Is Semantic Communications Secure? A Tale of Multi-Domain Adversarial Attacks [70.51799606279883]
We introduce test-time adversarial attacks on deep neural networks (DNNs) for semantic communications.
We show that it is possible to change the semantics of the transferred information even when the reconstruction loss remains low.
arXiv Detail & Related papers (2022-12-20T17:13:22Z)
- Fault-tolerant Coding for Entanglement-Assisted Communication [46.0607942851373]
This paper studies fault-tolerant channel coding for quantum channels.
We use techniques from fault-tolerant quantum computing to establish coding theorems for sending classical and quantum information in this scenario.
We extend these methods to the case of entanglement-assisted communication, in particular proving that the fault-tolerant capacity approaches the usual capacity when the gate error approaches zero.
arXiv Detail & Related papers (2022-10-06T14:09:16Z)
- Denoising Diffusion Error Correction Codes [92.10654749898927]
Recently, neural decoders have demonstrated their advantage over classical decoding techniques.
Recent state-of-the-art neural decoders suffer from high complexity and lack the important iterative scheme characteristic of many legacy decoders.
We propose to employ denoising diffusion models for the soft decoding of linear codes at arbitrary block lengths.
arXiv Detail & Related papers (2022-09-16T11:00:50Z)
- Error Correction Code Transformer [92.10654749898927]
We propose to extend for the first time the Transformer architecture to the soft decoding of linear codes at arbitrary block lengths.
We encode each channel's output dimension to high dimension for better representation of the bits information to be processed separately.
The proposed approach demonstrates the extreme power and flexibility of Transformers and outperforms existing state-of-the-art neural decoders by large margins at a fraction of their time complexity.
arXiv Detail & Related papers (2022-03-27T15:25:58Z)
- Adversarial Neural Networks for Error Correcting Codes [76.70040964453638]
We introduce a general framework to boost the performance and applicability of machine learning (ML) models.
We propose to combine ML decoders with a competing discriminator network that tries to distinguish between codewords and noisy words.
Our framework is game-theoretic, motivated by generative adversarial networks (GANs).
arXiv Detail & Related papers (2021-12-21T19:14:44Z)
- Neural Communication Systems with Bandwidth-limited Channel [9.332315420944836]
Reliably transmitting messages despite information loss is a core problem of information theory.
In this study we consider learning coding with the bandwidth-limited channel (BWLC).
arXiv Detail & Related papers (2020-03-30T11:58:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.