Joint Learning of Probabilistic and Geometric Shaping for Coded
Modulation Systems
- URL: http://arxiv.org/abs/2004.05062v2
- Date: Tue, 14 Apr 2020 14:16:26 GMT
- Title: Joint Learning of Probabilistic and Geometric Shaping for Coded
Modulation Systems
- Authors: Fayçal Ait Aoudia and Jakob Hoydis
- Abstract summary: We introduce a trainable coded modulation scheme that enables joint optimization of the bit-wise mutual information (BMI).
The proposed approach is not restricted to symmetric probability distributions, can be optimized for any channel model, and works with any code rate $k/m$.
- Score: 12.325545487629297
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a trainable coded modulation scheme that enables joint
optimization of the bit-wise mutual information (BMI) through probabilistic
shaping, geometric shaping, bit labeling, and demapping for a specific channel
model and for a wide range of signal-to-noise ratios (SNRs). Compared to
probabilistic amplitude shaping (PAS), the proposed approach is not restricted
to symmetric probability distributions, can be optimized for any channel model,
and works with any code rate $k/m$, $m$ being the number of bits per channel
use and $k$ an integer within the range from $1$ to $m-1$. The proposed scheme
enables learning of a continuum of constellation geometries and probability
distributions determined by the SNR. Additionally, the PAS architecture with a
Maxwell-Boltzmann (MB) shaping distribution was extended with a neural
network (NN) that controls the MB shaping of a quadrature amplitude modulation
(QAM) constellation according to the SNR, enabling learning of a continuum of
MB distributions for QAM. Simulations were performed to benchmark the
performance of the proposed joint probabilistic and geometric shaping scheme on
additive white Gaussian noise (AWGN) and mismatched Rayleigh block fading (RBF)
channels.
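To make the training loop concrete, below is a minimal PyTorch sketch of the core idea, written under stated assumptions rather than taken from the paper's implementation: constellation point locations are trained for geometric shaping, a categorical distribution over the points is trained for probabilistic shaping (sampled with a straight-through Gumbel-softmax so the logits stay differentiable), and an NN demapper produces per-bit LLRs. The objective is the usual BMI surrogate $H(X) - \sum_i \mathrm{BCE}_i$; the network sizes, SNR range, and fixed natural bit labeling are illustrative choices (the paper additionally optimizes the labeling and conditions the shaping itself on the SNR).

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

M, m, B = 16, 4, 1024       # constellation size, bits per channel use, batch

# Fixed natural bit labeling: point index i <-> its m-bit pattern.
labels = torch.tensor([[(i >> b) & 1 for b in range(m)] for i in range(M)],
                      dtype=torch.float32)

points = nn.Parameter(0.1 * torch.randn(M, 2))   # geometric shaping (I/Q)
logits = nn.Parameter(torch.zeros(M))            # probabilistic shaping
demapper = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, m))       # one LLR per bit
opt = torch.optim.Adam([points, logits, *demapper.parameters()], lr=1e-3)

for step in range(5000):
    snr_db = torch.empty(B, 1).uniform_(5.0, 20.0)  # train over an SNR range
    sigma2 = 10.0 ** (-snr_db / 10.0)

    # Differentiable symbol sampling via straight-through Gumbel-softmax.
    onehot = F.gumbel_softmax(logits.expand(B, M), tau=1.0, hard=True)
    p = F.softmax(logits, dim=0)
    power = (p * points.pow(2).sum(dim=1)).sum()    # E[|x|^2] under p
    x = (onehot @ points) / power.sqrt()            # unit average power
    bits = (onehot @ labels).detach()               # transmitted bit pattern

    y = x + (sigma2 / 2.0).sqrt() * torch.randn_like(x)  # AWGN channel
    llr = demapper(torch.cat([y, snr_db / 20.0], dim=1))

    # BMI surrogate (bits per channel use): H(X) minus total bit-wise BCE.
    bce = F.binary_cross_entropy_with_logits(llr, bits) * m / math.log(2.0)
    entropy = -(p * (p + 1e-12).log2()).sum()
    loss = bce - entropy                            # minimize -> maximize BMI
    opt.zero_grad(); loss.backward(); opt.step()
```

After training, the learned distribution should concentrate probability mass on low-energy points at low SNR, qualitatively reproducing the MB-like shaping that the paper learns end-to-end.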
Related papers
- Sampling from Bayesian Neural Network Posteriors with Symmetric Minibatch Splitting Langevin Dynamics [0.8749675983608172]
We propose a scalable kinetic Langevin dynamics algorithm for sampling parameter spaces of big data and AI applications.
We show that the resulting Symmetric Minibatch Splitting-UBU (SMS-UBU) integrator has bias $O(h^2 d^{1/2})$ in dimension $d>0$ with stepsize $h>0$.
We apply the algorithm to explore local modes of the posterior distribution of Bayesian neural networks (BNNs) and evaluate the calibration performance of the posterior predictive probabilities for neural networks with convolutional neural network architectures.
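For orientation, a minimal NumPy sketch of one minibatch kinetic Langevin step follows; it uses the common BAOAB splitting purely for illustration, not the paper's SMS-UBU integrator (which uses a UBU splitting and a symmetric minibatch schedule), and `grad_U`, `batch`, and the unit-temperature assumption are mine.

```python
import numpy as np

def kinetic_langevin_step(x, v, grad_U, batch, h, gamma, rng):
    """One BAOAB-style step of kinetic (underdamped) Langevin dynamics.

    grad_U(x, batch) is a minibatch estimate of the potential's gradient;
    the same minibatch is reused for both symmetric half-kicks.
    """
    v = v - 0.5 * h * grad_U(x, batch)            # B: half kick
    x = x + 0.5 * h * v                           # A: half drift
    c = np.exp(-gamma * h)                        # O: exact OU solve
    v = c * v + np.sqrt(1.0 - c * c) * rng.standard_normal(x.shape)
    x = x + 0.5 * h * v                           # A: half drift
    v = v - 0.5 * h * grad_U(x, batch)            # B: half kick
    return x, v
```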
arXiv Detail & Related papers (2024-10-14T13:47:02Z)
- Over-the-Air Split Machine Learning in Wireless MIMO Networks [56.27831295707334]
In split machine learning (ML), different partitions of a neural network (NN) are executed by different computing nodes.
To ease the communication burden, over-the-air computation (OAC) can efficiently implement all or part of the computation at the same time as the communication.
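The core trick of OAC is that a multiple-access channel already computes a sum: if each node pre-scales its value by the inverse of its own channel, the superimposed received signal is the desired aggregate. A toy NumPy illustration with zero-forcing pre-scaling (the setup and names are assumptions; real systems truncate the inversion to respect power constraints):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 8                                            # transmitting nodes
u = rng.standard_normal(K)                       # local values to aggregate
h = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)

tx = u / h                                       # each node inverts its channel
noise = np.sqrt(1e-3 / 2) * (rng.standard_normal() + 1j * rng.standard_normal())
y = np.sum(h * tx) + noise                       # channel superimposes signals
print(y.real, u.sum())                           # receiver reads off the sum
```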
arXiv Detail & Related papers (2022-10-07T15:39:11Z)
- Interference-Limited Ultra-Reliable and Low-Latency Communications: Graph Neural Networks or Stochastic Geometry? [45.776476161876204]
We build a cascaded Random Edge Graph Neural Network (REGNN) to represent the repetition scheme and train it.
We analyze the violation probability using stochastic geometry in a symmetric scenario and apply a model-based Exhaustive Search (ES) method to find the optimal solution.
arXiv Detail & Related papers (2022-07-11T05:49:41Z)
- A new perspective on probabilistic image modeling [92.89846887298852]
We present the Deep Convolutional Gaussian Mixture Model (DCGMM), a new probabilistic approach for image modeling capable of density estimation, sampling and tractable inference.
DCGMMs can be trained end-to-end by SGD from random initial conditions, much like CNNs.
We show that DCGMMs compare favorably to several recent PC and SPN models in terms of inference, classification and sampling.
arXiv Detail & Related papers (2022-03-21T14:53:57Z)
- Information Theoretic Structured Generative Modeling [13.117829542251188]
A novel generative model framework called the structured generative model (SGM) is proposed that makes straightforward optimization possible.
The implementation employs a single neural network, driven by an orthonormal input and a single white noise source, adapted to learn an infinite Gaussian mixture model.
Preliminary results show that SGM significantly improves on MINE estimation in terms of data efficiency and variance, on conventional and variational Gaussian mixture models, and on the training of adversarial networks.
arXiv Detail & Related papers (2021-10-12T07:44:18Z)
- Neural Calibration for Scalable Beamforming in FDD Massive MIMO with Implicit Channel Estimation [10.775558382613077]
Channel estimation and beamforming play critical roles in frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems.
We propose a deep learning-based approach that directly optimizes the beamformers at the base station according to the received uplink pilots.
A neural calibration method is proposed to improve the scalability of the end-to-end design.
arXiv Detail & Related papers (2021-08-03T14:26:14Z)
- Learning to Estimate RIS-Aided mmWave Channels [50.15279409856091]
We focus on uplink cascaded channel estimation, where known and fixed base station combining and RIS phase control matrices are considered for collecting observations.
To boost the estimation performance and reduce the training overhead, the inherent channel sparsity of mmWave channels is leveraged in the deep unfolding method.
It is verified that the proposed deep unfolding network architecture can outperform the least squares (LS) method with a relatively smaller training overhead and online computational complexity.
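For context on the LS baseline being outperformed: if the stacked pilot observations obey $y = Ah + n$ for a known measurement matrix $A$, LS simply applies the pseudo-inverse and ignores the channel's sparsity. A toy NumPy sketch with assumed dimensions (deep unfolding replaces this one-shot inverse with a few learned, sparsity-aware iterations):

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_coef = 256, 64                 # pilot observations, channel coefficients
A = rng.standard_normal((n_obs, n_coef))            # known measurement matrix
h = np.zeros(n_coef)                                # sparse (mmWave-like) channel
h[rng.choice(n_coef, 6, replace=False)] = rng.standard_normal(6)
y = A @ h + 0.05 * rng.standard_normal(n_obs)       # noisy pilot observations

h_ls = np.linalg.pinv(A) @ y            # LS estimate: ignores the sparsity
print(np.linalg.norm(h_ls - h) / np.linalg.norm(h)) # normalized error
```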
arXiv Detail & Related papers (2021-07-27T06:57:56Z)
- Asynchronous Distributed Reinforcement Learning for LQR Control via Zeroth-Order Block Coordinate Descent [7.6860514640178]
We propose a novel zeroth-order optimization algorithm for distributed reinforcement learning.
It allows each agent to estimate its local gradient independently from cost evaluations, without the use of any consensus protocol.
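A two-point zeroth-order gradient estimator shows why no consensus is needed: each agent only queries the scalar cost. A hedged NumPy sketch (the paper's block coordinate variant perturbs one agent's block at a time; the names here are illustrative):

```python
import numpy as np

def zo_gradient(cost, x, mu, rng, n_dirs=8):
    """Two-point zeroth-order gradient estimate from cost evaluations only.

    Averages (cost(x + mu*u) - cost(x - mu*u)) / (2*mu) * u over random
    directions u; no analytic gradient of `cost` is ever required.
    """
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.standard_normal(x.shape)
        g += (cost(x + mu * u) - cost(x - mu * u)) / (2.0 * mu) * u
    return g / n_dirs
```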
arXiv Detail & Related papers (2021-07-26T18:11:07Z)
- Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
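For reference, the classic AMP recursion that LAMP-style methods learn to improve looks as follows; the sketch uses a plain soft-threshold denoiser as a sparse-prior baseline, and the L-GM-AMP idea is precisely to swap that denoising line for a learned Gaussian-mixture denoiser.

```python
import numpy as np

def amp_soft(y, A, alpha=1.5, iters=30):
    """Plain AMP with a soft-threshold denoiser (illustrative baseline)."""
    n, N = A.shape
    x, z = np.zeros(N), y.copy()
    for _ in range(iters):
        r = x + A.T @ z                                # pseudo-data
        tau = alpha * np.linalg.norm(z) / np.sqrt(n)   # noise-level estimate
        x = np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)  # denoiser step
        z = y - A @ x + z * (np.count_nonzero(x) / n)      # Onsager correction
    return x
```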
arXiv Detail & Related papers (2020-11-18T16:40:45Z)
- Probabilistic Circuits for Variational Inference in Discrete Graphical Models [101.28528515775842]
Inference in discrete graphical models with variational methods is difficult.
Many sampling-based methods have been proposed for estimating the Evidence Lower Bound (ELBO).
We propose a new approach that leverages the tractability of probabilistic circuit models, such as Sum-Product Networks (SPNs).
We show that selective-SPNs are suitable as an expressive variational distribution, and prove that when the log-density of the target model is a polynomial the corresponding ELBO can be computed analytically.
arXiv Detail & Related papers (2020-10-22T05:04:38Z)
- Identification of Probability weighted ARX models with arbitrary domains [75.91002178647165]
PieceWise Affine models guarantee universal approximation, local linearity and equivalence to other classes of hybrid systems.
In this work, we focus on the identification of PieceWise Auto Regressive with eXogenous input models with arbitrary regions (NPWARX).
The architecture is conceived following the Mixture of Experts concept, developed within the machine learning field.
arXiv Detail & Related papers (2020-09-29T12:50:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.