Residual-Aided End-to-End Learning of Communication System without Known Channel
- URL: http://arxiv.org/abs/2102.10786v1
- Date: Mon, 22 Feb 2021 05:47:49 GMT
- Title: Residual-Aided End-to-End Learning of Communication System without Known Channel
- Authors: Hao Jiang, Shuangkaisheng Bi, and Linglong Dai
- Abstract summary: A generative adversarial network (GAN) based training scheme has recently been proposed to imitate the real channel.
We propose a residual-aided GAN (RA-GAN) based training scheme in this paper.
We show that the proposed RA-GAN based training scheme can achieve near-optimal block error rate (BLER) performance.
- Score: 12.66262880667583
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Leveraging powerful deep learning techniques, the end-to-end (E2E) learning of communication systems can outperform classical communication systems. Unfortunately, such a system cannot be trained by deep learning without a known channel model. To deal with this problem, a generative adversarial network (GAN) based training scheme has recently been proposed to imitate the real channel. However, the gradient vanishing and overfitting problems of GAN result in serious performance degradation of the E2E learning of communication systems. To mitigate these two problems, we propose a residual-aided GAN (RA-GAN) based training scheme in this paper. Particularly, inspired by the idea of residual learning, we propose a residual generator that mitigates the gradient vanishing problem by realizing a more robust gradient backpropagation. Moreover, to cope with the overfitting problem, we reconstruct the training loss function by adding a regularizer, which limits the representation ability of RA-GAN. Simulation results show that the trained residual generator has better generation performance than the conventional generator, and that the proposed RA-GAN based training scheme achieves near-optimal block error rate (BLER) performance with a negligible increase in computational complexity, in both the theoretical channel model and the ray-tracing based channel dataset.
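The two fixes described in the abstract can be sketched in a few lines: a residual generator whose output is the input plus a learned correction (the identity skip path keeps gradients flowing back to the transmitter), and a training loss with an L2 regularizer that limits the generator's representation ability. This is a minimal, hypothetical sketch using a toy two-layer MLP and made-up dimensions, not the paper's actual architecture or adversarial training loop (the discriminator is omitted).

```python
import math
import random

random.seed(0)

# Toy, hypothetical dimensions: N channel symbols, K latent-noise inputs,
# H hidden units in the residual branch.
N, K, H = 4, 2, 8

# Small two-layer MLP f(x, z) acting as the residual branch (weights are
# illustrative random initializations, not trained values).
W1 = [[random.gauss(0, 0.1) for _ in range(H)] for _ in range(N + K)]
W2 = [[random.gauss(0, 0.1) for _ in range(N)] for _ in range(H)]

def matvec(W, v):
    # y[j] = sum_i v[i] * W[i][j]
    return [sum(v[i] * W[i][j] for i in range(len(v))) for j in range(len(W[0]))]

def residual_generator(x, z):
    # Skip connection: output = x + f(x, z). The gradient reaching the
    # transmitter always includes the identity path, which is the idea
    # behind using residual learning to mitigate gradient vanishing.
    h = [math.tanh(v) for v in matvec(W1, x + z)]
    return [xi + fi for xi, fi in zip(x, matvec(W2, h))]

def regularized_loss(y_fake, y_real, lam=1e-3):
    # Fitting term plus an L2 penalty on the generator weights; the penalty
    # limits the generator's representation ability to curb overfitting.
    fit = sum((a - b) ** 2 for a, b in zip(y_fake, y_real)) / len(y_real)
    reg = lam * (sum(w * w for row in W1 for w in row)
                 + sum(w * w for row in W2 for w in row))
    return fit + reg

x = [random.gauss(0, 1) for _ in range(N)]   # encoded transmit signal
z = [random.gauss(0, 1) for _ in range(K)]   # latent noise fed to the generator
y = residual_generator(x, z)                 # imitated channel output
```

With small branch weights the output stays close to the input, so early in training the surrogate channel behaves almost like an identity map and backpropagation through it is well conditioned.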
Related papers
- Sparse Training for Federated Learning with Regularized Error Correction [9.852567834643292]
Federated Learning (FL) has attracted much interest due to the significant advantages it brings to training deep neural network (DNN) models.
FLARE presents a novel sparse training approach via accumulated pulling of the updated models with regularization on the embeddings in the FL process.
The performance of FLARE is validated through extensive experiments on diverse and complex models, achieving a remarkable sparsity level (10 times and more beyond the current state-of-the-art) along with significantly improved accuracy.
arXiv Detail & Related papers (2023-12-21T12:36:53Z)
- Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For different types of local updates that can be transmitted by edge devices (i.e., model, gradient, model difference), we reveal that transmitting in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
arXiv Detail & Related papers (2023-10-16T05:49:28Z)
- Deep Deterministic Policy Gradient for End-to-End Communication Systems without Prior Channel Knowledge [8.48741007380969]
The End-to-End (E2E) learning-based concept has recently been introduced to jointly optimize both the transmitter and the receiver in wireless communication systems.
This paper aims to solve the problem of training without prior channel knowledge by developing a deep deterministic policy gradient (DDPG)-based framework.
arXiv Detail & Related papers (2023-05-12T13:05:32Z)
- Learning to Precode for Integrated Sensing and Communications Systems [11.689567114100514]
We present an unsupervised learning neural model to design transmit precoders for ISAC systems.
We show that the proposed method outperforms traditional optimization-based methods in the presence of channel estimation errors.
arXiv Detail & Related papers (2023-03-11T11:24:18Z)
- Magnitude Matters: Fixing SIGNSGD Through Magnitude-Aware Sparsification in the Presence of Data Heterogeneity [60.791736094073]
Communication overhead has become one of the major bottlenecks in the distributed training of deep neural networks.
We propose a magnitude-driven sparsification scheme, which addresses the non-convergence issue of SIGNSGD.
The proposed scheme is validated through experiments on Fashion-MNIST, CIFAR-10, and CIFAR-100 datasets.
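The summary above does not spell out the sparsification rule, but the general idea behind magnitude-driven sign compression can be sketched as follows: keep the sign only for the top-k gradient entries by magnitude and zero out the rest, so the transmitted update carries the directions that matter most. The function name `sparsified_sign` and the top-k selection rule are illustrative assumptions, not the paper's exact scheme.

```python
def sparsified_sign(grad, k):
    """Magnitude-aware sign compression (hypothetical sketch):
    transmit +/-1 only for the k largest-magnitude entries, 0 elsewhere."""
    top = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    keep = set(top)
    return [(1 if g > 0 else -1) if i in keep else 0
            for i, g in enumerate(grad)]

g = [0.9, -0.05, 0.4, -0.7, 0.01]
print(sparsified_sign(g, 3))  # -> [1, 0, 1, -1, 0]
```

Compared with plain SIGNSGD, which transmits a sign for every coordinate, masking the small-magnitude coordinates removes the noisy sign flips that contribute to non-convergence under heterogeneous data.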
arXiv Detail & Related papers (2023-02-19T17:42:35Z)
- Model-based Deep Learning Receiver Design for Rate-Splitting Multiple Access [65.21117658030235]
This work proposes a novel design for a practical RSMA receiver based on model-based deep learning (MBDL) methods.
The MBDL receiver is evaluated in terms of uncoded Symbol Error Rate (SER), throughput performance through Link-Level Simulations (LLS) and average training overhead.
Results reveal that the MBDL outperforms by a significant margin the SIC receiver with imperfect CSIR.
arXiv Detail & Related papers (2022-05-02T12:23:55Z)
- Over-the-Air Federated Learning with Retransmissions (Extended Version) [21.37147806100865]
We study the impact of estimation errors on the convergence of Federated Learning (FL) over resource-constrained wireless networks.
We propose retransmissions as a method to improve FL convergence over resource-constrained wireless networks.
arXiv Detail & Related papers (2021-11-19T15:17:15Z)
- A Meta-Learning Approach to the Optimal Power Flow Problem Under Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
arXiv Detail & Related papers (2020-12-21T17:39:51Z)
- Step-Ahead Error Feedback for Distributed Training with Compressed Gradient [99.42912552638168]
We show that a new "gradient mismatch" problem is raised by the local error feedback in centralized distributed training.
We propose two novel techniques, 1) step ahead and 2) error averaging, with rigorous theoretical analysis.
arXiv Detail & Related papers (2020-08-13T11:21:07Z)
- Deep Multi-Task Learning for Cooperative NOMA: System Design and Principles [52.79089414630366]
We develop a novel deep cooperative NOMA scheme, drawing upon the recent advances in deep learning (DL).
We develop a novel hybrid-cascaded deep neural network (DNN) architecture such that the entire system can be optimized in a holistic manner.
arXiv Detail & Related papers (2020-07-27T12:38:37Z)
- Accumulated Polar Feature-based Deep Learning for Efficient and Lightweight Automatic Modulation Classification with Channel Compensation Mechanism [6.915743897443897]
In next-generation communications, massive machine-type communications (mMTC) place a severe burden on base stations.
Deep learning (DL) techniques store intelligence in the network, resulting in superior performance over traditional approaches.
In this work, an accumulated polar feature-based DL scheme with a channel compensation mechanism is proposed to cope with the aforementioned issues.
arXiv Detail & Related papers (2020-01-06T04:56:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.