Learning Robust Representations for Communications over Noisy Channels
- URL: http://arxiv.org/abs/2409.01129v2
- Date: Sat, 7 Sep 2024 12:42:27 GMT
- Title: Learning Robust Representations for Communications over Noisy Channels
- Authors: Sudharsan Senthil, Shubham Paul, Nambi Seshadri, R. David Koilpillai
- Abstract summary: This work relies solely on the tools of information theory and machine learning.
We investigate the impact of using various cost functions based on mutual information to generate robust representations for transmission under strict power constraints.
- Score: 0.44998333629984877
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We explore the use of FCNNs (Fully Connected Neural Networks) for designing end-to-end communication systems without taking any inspiration from existing classical communications models or error control coding. This work relies solely on the tools of information theory and machine learning. We investigate the impact of using various cost functions based on mutual information and pairwise distances between codewords to generate robust representations for transmission under strict power constraints. Additionally, we introduce a novel encoder structure inspired by the Barlow Twins framework. Our results show that iterative training with randomly chosen noise power levels while minimizing block error rate provides the best error performance.
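As a concrete illustration of the training loop the abstract describes (a power-normalized FCNN encoder and decoder, cross-entropy over messages as a proxy for block error rate, and a fresh noise power drawn per batch), here is a minimal PyTorch sketch. The message count M, block length n, layer widths, SNR range, and optimizer settings are illustrative assumptions, not the paper's configuration, and the Barlow Twins-inspired encoder is not shown.

```python
import torch
import torch.nn as nn

M, n = 16, 7   # number of messages and real channel uses (assumed values)

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(M, 64), nn.ReLU(), nn.Linear(64, n))
    def forward(self, one_hot):
        x = self.net(one_hot)
        # Strict power constraint: normalize each codeword to unit average power.
        return x * (n ** 0.5) / x.norm(dim=1, keepdim=True)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, M))
    def forward(self, y):
        return self.net(y)   # logits over the M candidate messages

enc, dec = Encoder(), Decoder()
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

for step in range(1000):
    msgs = torch.randint(0, M, (256,))
    x = enc(nn.functional.one_hot(msgs, M).float())
    # Iterative training with a randomly chosen noise power per batch.
    snr_db = torch.empty(1).uniform_(0.0, 10.0)
    y = x + 10 ** (-snr_db / 20) * torch.randn_like(x)
    loss = nn.functional.cross_entropy(dec(y), msgs)  # block-error proxy
    opt.zero_grad(); loss.backward(); opt.step()
```

Minimizing the cross-entropy over whole messages directly penalizes choosing the wrong message, which is why it serves as a differentiable stand-in for block error rate.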
Related papers
- Knowledge Distillation Based Semantic Communications For Multiple Users [10.770552656390038]
We consider the semantic communication (SemCom) system with multiple users, where there is a limited number of training samples and unexpected interference.
We propose a knowledge distillation (KD) based system where Transformer based encoder-decoder is implemented as the semantic encoder-decoder and fully connected neural networks are implemented as the channel encoder-decoder.
Numerical results demonstrate that KD significantly improves the robustness and the generalization ability when applied to unexpected interference, and it reduces the performance loss when compressing the model size.
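As a hedged illustration of the distillation objective this summary points to, the sketch below shows a standard Hinton-style KD loss that a compact student codec could be trained with; the temperature, weighting, and logit-matching form are illustrative assumptions, not the paper's exact recipe.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    """Hinton-style distillation: soft teacher targets plus hard labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients for the temperature
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard
```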
arXiv Detail & Related papers (2023-11-23T03:28:14Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many of the predictive signals in the data can instead stem from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
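The hybrid discriminative-generative training mentioned above can be sketched, under assumptions, as a representation trained jointly to predict the label and to reconstruct the input; the toy dimensions and the weight beta below are placeholders, and the paper's actual nuisance-extended objective is more elaborate.

```python
import torch
import torch.nn as nn

# Toy dimensions (e.g. flattened 28x28 inputs, 10 classes) -- assumptions.
encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64))
decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784))
classifier = nn.Linear(64, 10)
beta = 0.1  # reconstruction weight (assumed)

def hybrid_loss(x, y):
    z = encoder(x)
    # Discriminative term: the representation must predict the label.
    cls = nn.functional.cross_entropy(classifier(z), y)
    # Generative term: it must also reconstruct the input, so nuisance
    # information is retained rather than discarded.
    rec = nn.functional.mse_loss(decoder(z), x)
    return cls + beta * rec
```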
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- Spiking Neural Network Decision Feedback Equalization [70.3497683558609]
We propose an SNN-based equalizer with a feedback structure akin to the decision feedback equalizer (DFE).
We show that our approach clearly outperforms conventional linear equalizers for three different exemplary channels.
The proposed SNN with a decision feedback structure enables the path to competitive energy-efficient transceivers.
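For readers unfamiliar with the feedback structure being referenced, here is a minimal NumPy sketch of a classical hard-decision DFE for BPSK; the paper's contribution replaces the linear filters in this skeleton with a spiking neural network, which is not shown here.

```python
import numpy as np

def dfe(received, ff_taps, fb_taps):
    """Hard-decision BPSK DFE with fixed tap weights (NumPy arrays),
    operating at one sample per symbol."""
    nff, nfb = len(ff_taps), len(fb_taps)
    padded = np.concatenate([np.zeros(nff - 1), received])
    past = np.zeros(nfb)                          # previous symbol decisions
    decisions = []
    for k in range(len(received)):
        window = padded[k:k + nff][::-1]          # most recent sample first
        y = ff_taps @ window - fb_taps @ past     # cancel post-cursor ISI
        d = 1.0 if y >= 0 else -1.0               # BPSK slicer
        decisions.append(d)
        past = np.concatenate([[d], past[:-1]])   # shift the decision history
    return np.array(decisions)
```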
arXiv Detail & Related papers (2022-11-09T09:19:15Z)
- Bit Error and Block Error Rate Training for ML-Assisted Communication [18.341320787581836]
We show that the commonly used binary cross-entropy (BCE) loss is a sensible choice in uncoded systems.
We propose new loss functions targeted at minimizing the block error rate, along with SNR de-weighting, a technique for training over a range of SNRs.
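A hedged sketch of the contrast the summary draws: per-bit BCE versus a differentiable block-error surrogate that scores a block as wrong if any bit is wrong. Whether the paper's surrogate takes exactly this product form is an assumption.

```python
import torch
import torch.nn.functional as F

def bce_loss(logits, bits):
    """Per-bit binary cross-entropy; bits is a {0, 1} float tensor."""
    return F.binary_cross_entropy_with_logits(logits, bits)

def bler_surrogate(logits, bits):
    """Differentiable block-error surrogate: a block is correct only if
    every bit is decided correctly (product of per-bit probabilities)."""
    p_correct = torch.sigmoid((2 * bits - 1) * logits)  # P(bit decided right)
    p_block_ok = torch.clamp(p_correct, 1e-7, 1.0).log().sum(dim=1).exp()
    return (1.0 - p_block_ok).mean()
```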
arXiv Detail & Related papers (2022-10-25T15:41:21Z)
- Graph Neural Networks for Channel Decoding [71.15576353630667]
We showcase competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes.
The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph.
We benchmark our proposed decoder against state-of-the-art in conventional channel decoding as well as against recent deep learning-based results.
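One concrete instance of learned message passing over a Tanner graph is the weighted ("neural") min-sum sketch below; the parity-check matrix H is a toy example, and the per-edge weights w stand in for the richer learned updates of the paper's GNN.

```python
import torch

# Toy parity-check matrix: rows are check nodes, columns are variable nodes.
H = torch.tensor([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]])
edges = [(c, v) for c in range(H.shape[0]) for v in range(H.shape[1]) if H[c, v]]
w = torch.nn.Parameter(torch.ones(len(edges)))  # one learnable weight per edge

def decode(llr, iters=5):
    """llr: 1-D tensor of channel LLRs; returns posterior LLRs."""
    m_cv = {e: torch.tensor(0.0) for e in edges}  # check->variable messages
    for _ in range(iters):
        # Variable->check: channel LLR plus messages from the *other* checks.
        m_vc = {(c, v): llr[v] + sum(m_cv[(c2, v2)] for (c2, v2) in edges
                                     if v2 == v and c2 != c)
                for (c, v) in edges}
        # Check->variable: weighted min-sum over the check's other variables.
        for i, (c, v) in enumerate(edges):
            others = torch.stack([m_vc[(c2, v2)] for (c2, v2) in edges
                                  if c2 == c and v2 != v])
            m_cv[(c, v)] = w[i] * torch.sign(others).prod() * others.abs().min()
    return torch.stack([llr[v] + sum(m_cv[(c2, v2)] for (c2, v2) in edges
                                     if v2 == v) for v in range(H.shape[1])])
```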
arXiv Detail & Related papers (2022-07-29T15:29:18Z)
- Error Correction Code Transformer [92.10654749898927]
We extend, for the first time, the Transformer architecture to the soft decoding of linear codes at arbitrary block lengths.
We embed each channel output dimension into a high-dimensional space so that the bits' information can be better represented and processed separately.
The proposed approach demonstrates the extreme power and flexibility of Transformers and outperforms existing state-of-the-art neural decoders by large margins at a fraction of their time complexity.
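A minimal sketch of the embedding idea as summarized: each scalar channel output is lifted to a high-dimensional token, and the resulting length-n sequence is processed by a Transformer encoder. The block length, model width, head count, and per-position bit head below are assumed values, not the paper's architecture.

```python
import torch
import torch.nn as nn

n, d_model = 31, 64                       # block length, embedding width (assumed)
embed = nn.Linear(1, d_model)             # lift each scalar channel output
layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=4)
head = nn.Linear(d_model, 1)              # per-position bit logit

def decode(y):                            # y: (batch, n) soft channel outputs
    tokens = embed(y.unsqueeze(-1))       # (batch, n, d_model) token sequence
    return head(encoder(tokens)).squeeze(-1)  # (batch, n) bit logits
```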
arXiv Detail & Related papers (2022-03-27T15:25:58Z)
- SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
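Under stated assumptions (a uniform mid-rise quantizer front end and two task heads), the following sketch shows a SignalNet-style setup that classifies the sinusoid count and regresses parameters from quantized I/Q samples; none of these shapes are taken from the paper.

```python
import torch
import torch.nn as nn

def quantize(x, bits=3):
    """Uniform mid-rise quantizer on [-1, 1] (assumed front end)."""
    levels = 2 ** bits
    idx = torch.clamp((x * levels / 2).floor(), -levels // 2, levels // 2 - 1)
    return idx * (2 / levels) + 1 / levels

N, K_max = 64, 3                           # samples per snapshot, max sinusoids
trunk = nn.Sequential(nn.Linear(2 * N, 256), nn.ReLU(),
                      nn.Linear(256, 128), nn.ReLU())
count_head = nn.Linear(128, K_max + 1)     # classify 0..K_max sinusoids
param_head = nn.Linear(128, 2 * K_max)     # (frequency, amplitude) per sinusoid

def forward(iq):                           # iq: (batch, N) complex samples
    x = torch.cat([quantize(iq.real), quantize(iq.imag)], dim=-1)
    h = trunk(x)
    return count_head(h), param_head(h)
```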
arXiv Detail & Related papers (2021-06-10T04:21:20Z)
- Capacity-Approaching Autoencoders for Communications [4.86067125387358]
The current approach to training an autoencoder relies on the cross-entropy loss function.
We propose a methodology that computes an estimate of the channel capacity and constructs an optimal coded signal approaching it.
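Estimating channel capacity from samples typically requires a neural mutual-information estimator; the sketch below uses a MINE-style Donsker-Varadhan lower bound on I(X; Y) between channel input x and output y. That the paper uses exactly this estimator is an assumption.

```python
import math
import torch
import torch.nn as nn

# Statistics network for the Donsker-Varadhan bound; x and y are assumed
# to be (batch, 1) tensors of channel inputs and outputs.
T = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))

def mi_lower_bound(x, y):
    joint = T(torch.cat([x, y], dim=1)).mean()
    y_shuffled = y[torch.randperm(y.shape[0])]  # break the x-y dependence
    marg = torch.logsumexp(T(torch.cat([x, y_shuffled], dim=1)), dim=0)
    # I(X; Y) >= E_joint[T] - log E_marginal[exp(T)]  (in nats)
    return (joint - (marg - math.log(x.shape[0]))).squeeze()
```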
arXiv Detail & Related papers (2020-09-11T08:19:06Z)
- Suppress and Balance: A Simple Gated Network for Salient Object Detection [89.88222217065858]
We propose a simple gated network (GateNet) to solve both issues at once.
With the help of multilevel gate units, the valuable context information from the encoder can be optimally transmitted to the decoder.
In addition, we adopt the atrous spatial pyramid pooling based on the proposed "Fold" operation (Fold-ASPP) to accurately localize salient objects of various scales.
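A hedged sketch of what a multilevel gate unit can look like: a learned sigmoid gate modulates encoder features on the skip connection to the decoder. The 1x1-convolution design and channel counts here are illustrative guesses, and Fold-ASPP is not reproduced.

```python
import torch
import torch.nn as nn

class GateUnit(nn.Module):
    """Gated skip connection: suppress or pass encoder features based on
    both the encoder and decoder feature maps (illustrative design)."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1), nn.Sigmoid())
    def forward(self, enc_feat, dec_feat):
        g = self.gate(torch.cat([enc_feat, dec_feat], dim=1))
        return g * enc_feat          # gated features passed to the decoder
```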
arXiv Detail & Related papers (2020-07-16T02:00:53Z)
- Infomax Neural Joint Source-Channel Coding via Adversarial Bit Flip [41.28049430114734]
We propose a novel regularization method called Infomax Adversarial-Bit-Flip (IABF) to improve the stability and robustness of the neural joint source-channel coding scheme.
Our IABF can achieve state-of-the-art performances on both compression and error correction benchmarks and outperform the baselines by a significant margin.
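In the spirit of the adversarial bit flip described above, the sketch below brute-forces, per sample, the single latent-bit flip that maximizes the decoder's loss, so training can proceed on that worst case; the selection rule and single-flip scope are assumptions, not the paper's exact procedure.

```python
import torch

def adversarial_bit_flip(bits, decoder, target, loss_fn):
    """bits: (batch, n) tensor in {0, 1}. loss_fn must return one loss per
    sample (i.e. reduce over everything except the batch dimension)."""
    worst = bits.clone()
    best_loss = torch.full((bits.shape[0],), -float("inf"))
    for j in range(bits.shape[1]):
        flipped = bits.clone()
        flipped[:, j] = 1 - flipped[:, j]            # flip latent bit j
        loss = loss_fn(decoder(flipped), target)     # per-sample loss
        better = loss > best_loss                    # keep the harshest flip
        worst[better] = flipped[better]
        best_loss = torch.where(better, loss, best_loss)
    return worst
```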
arXiv Detail & Related papers (2020-04-03T10:00:02Z)
- Neural Communication Systems with Bandwidth-limited Channel [9.332315420944836]
Reliably transmitting messages despite information loss is a core problem of information theory.
In this study, we consider learning coding with the bandwidth-limited channel (BWLC).
arXiv Detail & Related papers (2020-03-30T11:58:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.