Discriminative Mutual Information Estimators for Channel Capacity Learning
- URL: http://arxiv.org/abs/2107.03084v1
- Date: Wed, 7 Jul 2021 09:03:40 GMT
- Title: Discriminative Mutual Information Estimators for Channel Capacity Learning
- Authors: Nunzio A. Letizia and Andrea M. Tonello
- Abstract summary: We propose a novel framework to automatically learn the channel capacity, for any type of memory-less channel.
We include the discriminator in a cooperative channel capacity learning framework, referred to as CORTICAL.
We prove that a particular choice of cooperative value function solves the channel capacity estimation problem.
- Score: 1.8275108630751837
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Channel capacity plays a crucial role in the development of modern
communication systems as it represents the maximum rate at which information
can be reliably transmitted over a communication channel. Nevertheless, for the
majority of channels, finding a closed-form capacity expression remains an open
challenge. This is because it requires carrying out two formidable tasks: a) the
computation of the mutual information between the channel input and output, and
b) its maximization with respect to the signal distribution at the channel
input. In this paper, we address both tasks. Inspired by implicit generative
models, we propose a novel cooperative framework to automatically learn the
channel capacity, for any type of memory-less channel. In particular, we
firstly develop a new methodology to estimate the mutual information directly
from a discriminator typically deployed to train adversarial networks, referred
to as discriminative mutual information estimator (DIME). Secondly, we include
the discriminator in a cooperative channel capacity learning framework,
referred to as CORTICAL, where a discriminator learns to distinguish between
dependent and independent channel input-output samples while a generator learns
to produce the optimal channel input distribution for which the discriminator
exhibits the best performance. Lastly, we prove that a particular choice of the
cooperative value function solves the channel capacity estimation problem.
Simulation results demonstrate that the proposed method offers high accuracy.
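As a concrete (unofficial) illustration of the DIME idea, the sketch below estimates the mutual information of a toy Gaussian channel y = x + n from a discriminator trained to tell dependent input-output pairs from shuffled (independent) ones. Instead of the paper's deep adversarial networks it uses plain logistic regression on quadratic features, which can realize the optimal discriminator exactly in the jointly Gaussian case; at the optimum the discriminator's log-odds approximate log p(x,y)/(p(x)p(y)), whose average over joint samples is the mutual information. All names below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(x, y):
    # Quadratic features: for jointly Gaussian (x, y) the true log-ratio
    # log p(x, y) / (p(x) p(y)) is a quadratic form, so logistic
    # regression on these features can realize the optimal discriminator.
    return np.stack([x, y, x * y, x ** 2, y ** 2, np.ones_like(x)], axis=1)

def dime_estimate(x, y, steps=5000, lr=0.05):
    """Train a discriminator between joint pairs (x_i, y_i) and shuffled
    (independent) pairs, then average its log-odds over the joint pairs:
    at the optimum the log-odds equal log p(x,y)/(p(x)p(y)), so this
    average is a mutual information estimate in nats."""
    f_joint = features(x, y)
    f_indep = features(x, rng.permutation(y))  # product-of-marginals samples
    w = np.zeros(f_joint.shape[1])
    for _ in range(steps):                     # full-batch logistic regression
        p_j = 1.0 / (1.0 + np.exp(-f_joint @ w))
        p_i = 1.0 / (1.0 + np.exp(-f_indep @ w))
        grad = f_joint.T @ (1.0 - p_j) - f_indep.T @ p_i
        w += lr * grad / len(x)                # gradient ascent on likelihood
    return float(np.mean(f_joint @ w))

# Toy Gaussian channel y = x + n, unit-power input and noise (SNR = 1).
n_samples = 20000
x = rng.standard_normal(n_samples)
y = x + rng.standard_normal(n_samples)
mi_hat = dime_estimate(x, y)   # true value is 0.5 * ln(1 + SNR) ≈ 0.347 nats
print(mi_hat)
```

A learned generator (the CORTICAL step) would additionally reshape the input distribution so as to maximize this estimate, which is what turns mutual information estimation into capacity learning.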
Related papers
- Maximal-Capacity Discrete Memoryless Channel Identification [37.598696937684245]
The problem of identifying the channel with the highest capacity among several discrete memoryless channels (DMCs) is considered.
A capacity estimator is proposed and tight confidence bounds on the estimator error are derived.
A gap-elimination algorithm termed BestChanID is proposed, which is oblivious to the capacity-achieving input distribution.
Two additional algorithms, NaiveChanSel and MedianChanEl, which output with certain confidence a DMC with capacity close to the maximal one, are introduced.
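BestChanID is sample-based and, as noted, oblivious to the capacity-achieving input distribution. As a point of contrast, when the transition matrices are known the selection objective can be computed directly; a minimal sketch using the classical Blahut-Arimoto iteration (not the paper's algorithm):

```python
import numpy as np

def blahut_arimoto(W, iters=500):
    """Capacity in bits of a DMC with row-stochastic transition matrix
    W[x, y] = P(y | x), via the classical Blahut-Arimoto iteration."""
    p = np.full(W.shape[0], 1.0 / W.shape[0])    # start from uniform input
    for _ in range(iters):
        q = p @ W                                # induced output marginal
        # d[x] = D(W_x || q); the maximum(...) guard keeps log2 finite
        # when a row of W contains zeros (the 0 * log term then vanishes).
        d = np.sum(W * np.log2(np.maximum(W, 1e-300) / q), axis=1)
        p = p * np.exp2(d)                       # multiplicative update
        p /= p.sum()
    q = p @ W
    d = np.sum(W * np.log2(np.maximum(W, 1e-300) / q), axis=1)
    return float(np.sum(p * d))

def bsc(eps):
    """Binary symmetric channel with crossover probability eps."""
    return np.array([[1 - eps, eps], [eps, 1 - eps]])

# Capacity of BSC(eps) is 1 - h2(eps), so the smallest crossover wins.
channels = [bsc(0.3), bsc(0.1), bsc(0.2)]
caps = [blahut_arimoto(W) for W in channels]
best = int(np.argmax(caps))                      # index 1, i.e. BSC(0.1)
```

The identification problem the paper studies is harder precisely because only channel samples, not the matrices W, are available.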
arXiv Detail & Related papers (2024-01-18T18:44:10Z)
- Channel Simulation: Finite Blocklengths and Broadcast Channels [13.561997774592667]
We study channel simulation under common randomness assistance in the finite-blocklength regime.
We identify the smooth channel max-information as a linear program one-shot converse on the minimal simulation cost for fixed error tolerance.
arXiv Detail & Related papers (2022-12-22T13:08:55Z)
- Data-Driven Upper Bounds on Channel Capacity [4.974890682815778]
We consider the problem of estimating an upper bound on the capacity of a memoryless channel with unknown output alphabet.
A novel data-driven algorithm is proposed that exploits a dual representation in which the maximization over the input distribution is replaced by a minimization over a reference distribution on the channel output.
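The dual bound being exploited can be made concrete on a toy discrete channel (the paper itself targets continuous channels with a neural, sample-based estimator). For any reference output distribution r, the capacity satisfies C <= max_x D(W_x || r), with equality when r is the capacity-achieving output distribution; a sketch:

```python
import numpy as np

def dual_upper_bound(W, r):
    """Dual upper bound on DMC capacity: for ANY output distribution r,
    C <= max_x D(W_x || r); minimizing over r attains C exactly.
    The maximum(...) guard keeps log2 finite for zero entries of W."""
    d = np.sum(W * np.log2(np.maximum(W, 1e-300) / r), axis=1)
    return float(d.max())

W = np.array([[0.9, 0.1], [0.1, 0.9]])   # BSC with crossover 0.1
r = np.array([0.5, 0.5])                 # uniform reference on the output
bound = dual_upper_bound(W, r)
# For the BSC the uniform output distribution is the optimal reference,
# so the bound is tight: bound == 1 - h2(0.1) ≈ 0.531 bits.
```

The data-driven algorithm effectively searches for a good r from channel samples instead of assuming it, trading tightness for generality.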
arXiv Detail & Related papers (2022-05-13T06:59:31Z)
- Learning to Perform Downlink Channel Estimation in Massive MIMO Systems [72.76968022465469]
We study downlink (DL) channel estimation in a Massive multiple-input multiple-output (MIMO) system.
A common approach is to use the mean value as the estimate, motivated by channel hardening.
We propose two novel estimation methods.
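The channel-hardening argument behind the mean-value baseline is easy to verify numerically: for an i.i.d. Rayleigh channel the normalized gain ||h||^2 / M concentrates around its mean as the antenna count M grows. A minimal sketch (illustrative, not one of the paper's proposed estimators):

```python
import numpy as np

rng = np.random.default_rng(1)

def gain_spread(M, trials=2000):
    """Standard deviation of the normalized channel gain ||h||^2 / M over
    fading realizations, for h ~ CN(0, I_M) (i.i.d. Rayleigh fading)."""
    h = (rng.standard_normal((trials, M))
         + 1j * rng.standard_normal((trials, M))) / np.sqrt(2)
    g = np.sum(np.abs(h) ** 2, axis=1) / M   # mean gain is 1
    return g.std()

# Channel hardening: the spread decays like 1 / sqrt(M), so for large
# antenna counts the (known) mean is already a good channel estimate.
spread_small, spread_large = gain_spread(8), gain_spread(512)
```

With 8 antennas the gain still fluctuates by roughly ±35%, while at 512 antennas the fluctuation is a few percent, which is what motivates the mean-value baseline the paper's learned estimators improve upon.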
arXiv Detail & Related papers (2021-09-06T13:42:32Z)
- FedRec: Federated Learning of Universal Receivers over Fading Channels [92.15358738530037]
We propose a neural network-based symbol detection technique for downlink fading channels.
Multiple users collaborate to jointly learn a universal data-driven detector, hence the name FedRec.
The performance of the resulting receiver is shown to approach the MAP performance in diverse channel conditions without requiring knowledge of the fading statistics.
arXiv Detail & Related papers (2020-11-14T11:29:55Z)
- Learning from Heterogeneous EEG Signals with Differentiable Channel Reordering [51.633889765162685]
CHARM is a method for training a single neural network across inconsistent input channels.
We perform experiments on four EEG classification datasets and demonstrate the efficacy of CHARM.
arXiv Detail & Related papers (2020-10-21T12:32:34Z)
- Operation-Aware Soft Channel Pruning using Differentiable Masks [51.04085547997066]
We propose a data-driven algorithm, which compresses deep neural networks in a differentiable way by exploiting the characteristics of operations.
We perform extensive experiments and achieve outstanding performance in terms of the accuracy of output networks.
arXiv Detail & Related papers (2020-07-08T07:44:00Z)
- Focus of Attention Improves Information Transfer in Visual Features [80.22965663534556]
This paper focuses on unsupervised learning for transferring visual information in a truly online setting.
The computation of the entropy terms is carried out by a temporal process which yields online estimation of the entropy terms.
In order to better structure the input probability distribution, we use a human-like focus of attention model.
arXiv Detail & Related papers (2020-06-16T15:07:25Z)
- Capacity of Continuous Channels with Memory via Directed Information Neural Estimator [15.372626012233736]
This work proposes a novel capacity estimation algorithm that treats the channel as a 'black box'.
The algorithm has two main ingredients: (i) a neural distribution transformer (NDT) model that shapes a noise variable into the channel input distribution, and (ii) the neural DI estimator (DINE) that estimates the communication rate of the current NDT model.
arXiv Detail & Related papers (2020-03-09T14:53:56Z)
- Data-Driven Symbol Detection via Model-Based Machine Learning [117.58188185409904]
We review a data-driven framework for symbol detection design which combines machine learning (ML) and model-based algorithms.
In this hybrid approach, well-known channel-model-based algorithms are augmented with ML-based algorithms to remove their channel-model-dependence.
Our results demonstrate that these techniques can yield near-optimal performance of model-based algorithms without knowing the exact channel input-output statistical relationship.
arXiv Detail & Related papers (2020-02-14T06:58:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.