Low PAPR MIMO-OFDM Design Based on Convolutional Autoencoder
- URL: http://arxiv.org/abs/2301.05017v1
- Date: Wed, 11 Jan 2023 11:35:10 GMT
- Title: Low PAPR MIMO-OFDM Design Based on Convolutional Autoencoder
- Authors: Yara Huleihel and Haim H. Permuter
- Abstract summary: A new framework for peak-to-average power ratio ($\mathsf{PAPR}$) reduction and waveform design is presented.
A convolutional-autoencoder ($\mathsf{CAE}$) architecture is presented.
We show that a single trained model covers the tasks of $\mathsf{PAPR}$ reduction, spectrum design, and $\mathsf{MIMO}$ detection together over a wide range of SNR levels.
- Score: 20.544993155126967
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An enhanced framework for peak-to-average power ratio ($\mathsf{PAPR}$)
reduction and waveform design for Multiple-Input-Multiple-Output
($\mathsf{MIMO}$) orthogonal frequency-division multiplexing ($\mathsf{OFDM}$)
systems, based on a convolutional-autoencoder ($\mathsf{CAE}$) architecture, is
presented. The end-to-end learning-based autoencoder ($\mathsf{AE}$) for
communication networks represents the network as an encoder and a decoder,
between which the learned latent representation passes through a physical
communication channel. We introduce a joint learning scheme based on projected
gradient descent iteration to optimize the spectral mask behavior and MIMO
detection under the influence of a non-linear high power amplifier
($\mathsf{HPA}$) and a multipath fading channel. The proposed waveform design
technique admits an efficient implementation, utilizing only a single
$\mathsf{PAPR}$ reduction block for all antennas. It is throughput-lossless, as
no side information is required at the decoder. Performance is analyzed by
examining the bit error rate ($\mathsf{BER}$), the $\mathsf{PAPR}$, and the
spectral response, compared with classical $\mathsf{PAPR}$ reduction and
$\mathsf{MIMO}$ detection methods on simulated 5G data. The suggested system
exhibits competitive performance when considering all optimization criteria
simultaneously. We apply gradual loss learning for multi-objective optimization
and show empirically that a single trained model covers the tasks of
$\mathsf{PAPR}$ reduction, spectrum design, and $\mathsf{MIMO}$ detection
together over a wide range of SNR levels.
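
For reference, the $\mathsf{PAPR}$ criterion optimized above is $\mathsf{PAPR}(x) = \max_n |x_n|^2 / \mathbb{E}[|x_n|^2]$ for time-domain samples $x_n$. Below is a minimal NumPy sketch of how PAPR is typically measured on an oversampled OFDM symbol; the QPSK mapping, subcarrier count, and oversampling factor are illustrative assumptions, not details from the paper:

```python
import numpy as np

def papr_db(x: np.ndarray) -> float:
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

# Illustrative setup (not from the paper): 64 QPSK subcarriers, 4x oversampling.
rng = np.random.default_rng(0)
n_sc, os_factor = 64, 4
sym = (rng.choice([-1.0, 1.0], n_sc) + 1j * rng.choice([-1.0, 1.0], n_sc)) / np.sqrt(2)

# Zero-pad the spectrum in the middle (standard FFT ordering), then IFFT to
# obtain the oversampled time-domain OFDM waveform whose peaks PAPR measures.
spec = np.concatenate([sym[: n_sc // 2],
                       np.zeros((os_factor - 1) * n_sc, dtype=complex),
                       sym[n_sc // 2:]])
x = np.fft.ifft(spec) * np.sqrt(os_factor * n_sc)

print(f"PAPR = {papr_db(x):.2f} dB")  # unshaped OFDM typically lands near 8-11 dB
```

In the paper's framework, the learned $\mathsf{CAE}$ encoder would be trained so that its time-domain output keeps this metric low, while gradual loss learning progressively adds the BER and spectral-mask objectives to the same model.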
Related papers
- Pruning is Optimal for Learning Sparse Features in High-Dimensions [15.967123173054535]
We show that a class of statistical models can be optimally learned using pruned neural networks trained with gradient descent.
We show that pruning neural networks proportional to the sparsity level of $\boldsymbol{V}$ improves their sample complexity compared to unpruned networks.
arXiv Detail & Related papers (2024-06-12T21:43:12Z)
- Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z)
- Filter Pruning for Efficient CNNs via Knowledge-driven Differential Filter Sampler [103.97487121678276]
Filter pruning simultaneously accelerates the computation and reduces the memory overhead of CNNs.
We propose a novel Knowledge-driven Differential Filter Sampler (KDFS) with Masked Filter Modeling (MFM) framework for filter pruning.
arXiv Detail & Related papers (2023-07-01T02:28:41Z)
- FeDXL: Provable Federated Learning for Deep X-Risk Optimization [105.17383135458897]
We tackle a novel federated learning (FL) problem for optimizing a family of X-risks, to which no existing algorithms are applicable.
The challenges for designing an FL algorithm for X-risks lie in the non-decomposability of the objective over multiple machines and the interdependency between different machines.
arXiv Detail & Related papers (2022-10-26T00:23:36Z)
- Communication-Efficient Adam-Type Algorithms for Distributed Data Mining [93.50424502011626]
We propose a class of novel distributed Adam-type algorithms (i.e., SketchedAMSGrad) utilizing sketching; a generic compression sketch in this spirit appears after this list.
Our new algorithm achieves a fast convergence rate of $O(\frac{1}{\sqrt{nT}} + \frac{1}{(k/d)^2 T})$ with a communication cost of $O(k \log(d))$ at each iteration.
arXiv Detail & Related papers (2022-10-14T01:42:05Z)
- Training Overparametrized Neural Networks in Sublinear Time [14.918404733024332]
Deep learning comes at a tremendous computational and energy cost.
We present a new view of neural networks as a subset of binary search trees, where each iteration corresponds to modifying a small subset of the nodes.
We believe this view would have further applications in the analysis of deep networks.
arXiv Detail & Related papers (2022-08-09T02:29:42Z)
- Matching Pursuit Based Scheduling for Over-the-Air Federated Learning [67.59503935237676]
This paper develops a class of low-complexity device scheduling algorithms for over-the-air federated learning via the method of matching pursuit.
Compared to the state-of-the-art scheme, the proposed scheme poses drastically lower computational complexity.
The efficiency of the proposed scheme is confirmed via experiments on the CIFAR dataset.
arXiv Detail & Related papers (2022-06-14T08:14:14Z)
- A Communication-Efficient Distributed Gradient Clipping Algorithm for Training Deep Neural Networks [11.461878019780597]
Gradient Descent might converge slowly in some deep neural networks.
It remains mysterious whether the gradient clipping scheme can take advantage of multiple machines to enjoy parallel speedup.
arXiv Detail & Related papers (2022-05-10T16:55:33Z)
- ConvMath: A Convolutional Sequence Network for Mathematical Expression Recognition [11.645568743440087]
The performance of ConvMath is evaluated on an open dataset named IM2LATEX-100K, comprising 103,556 samples.
The proposed network achieves state-of-the-art accuracy and much better efficiency than previous methods.
arXiv Detail & Related papers (2020-12-23T12:08:18Z)
- Determination of the Semion Code Threshold using Neural Decoders [0.0]
We compute the error threshold for the semion code, the companion of the Kitaev toric code with the same gauge symmetry group $\mathbb{Z}_2$.
We take advantage of the near-optimal performance of some neural network decoders: multilayer perceptrons and convolutional neural networks.
arXiv Detail & Related papers (2020-02-20T10:56:47Z)
- Backward Feature Correction: How Deep Learning Performs Deep (Hierarchical) Learning [66.05472746340142]
This paper analyzes how multi-layer neural networks can perform hierarchical learning _efficiently_ and _automatically_ by SGD on the training objective.
We establish a new principle called "backward feature correction", where the errors in the lower-level features can be automatically corrected when training together with the higher-level layers.
arXiv Detail & Related papers (2020-01-13T17:28:29Z)
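
The SketchedAMSGrad entry above quotes a per-iteration communication cost of $O(k \log(d))$. As a generic, illustrative stand-in for gradient sketching with that communication scaling (not necessarily that paper's actual sketch operator; all function names here are hypothetical), a top-$k$ compressor can be written in NumPy:

```python
import numpy as np

def topk_compress(grad: np.ndarray, k: int):
    """Keep the k largest-magnitude coordinates; send (indices, values).

    Transmitting k indices (about log2(d) bits each) plus k floats gives
    the O(k log d) per-iteration communication scaling quoted above.
    """
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def topk_decompress(idx: np.ndarray, vals: np.ndarray, d: int) -> np.ndarray:
    """Server-side reconstruction of the sparse gradient estimate."""
    out = np.zeros(d)
    out[idx] = vals
    return out

d, k = 1000, 32
g = np.random.default_rng(1).normal(size=d)
g_hat = topk_decompress(*topk_compress(g, k), d)
print(np.linalg.norm(g - g_hat) / np.linalg.norm(g))  # relative compression error
```

In a distributed Adam-type method, each worker would transmit only the compressed pairs, and the server would aggregate the reconstructed sparse gradients before applying the moment updates.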
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.