Deep Multi-Task Learning for Cooperative NOMA: System Design and
Principles
- URL: http://arxiv.org/abs/2007.13495v1
- Date: Mon, 27 Jul 2020 12:38:37 GMT
- Title: Deep Multi-Task Learning for Cooperative NOMA: System Design and
Principles
- Authors: Yuxin Lu, Peng Cheng, Zhuo Chen, Wai Ho Mow, Yonghui Li, and Branka
Vucetic
- Abstract summary: We develop a novel deep cooperative NOMA scheme, drawing upon the recent advances in deep learning (DL).
We develop a novel hybrid-cascaded deep neural network (DNN) architecture such that the entire system can be optimized in a holistic manner.
- Score: 52.79089414630366
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Envisioned as a promising component of the future wireless Internet-of-Things
(IoT) networks, the non-orthogonal multiple access (NOMA) technique can support
massive connectivity with a significantly increased spectral efficiency.
Cooperative NOMA is able to further improve the communication reliability of
users under poor channel conditions. However, the conventional system design
suffers from several inherent limitations and is not optimized from the bit
error rate (BER) perspective. In this paper, we develop a novel deep
cooperative NOMA scheme, drawing upon the recent advances in deep learning
(DL). We develop a novel hybrid-cascaded deep neural network (DNN) architecture
such that the entire system can be optimized in a holistic manner. On this
basis, we construct multiple loss functions to quantify the BER performance and
propose a novel multi-task oriented two-stage training method to solve the
end-to-end training problem in a self-supervised manner. The learning mechanism
of each DNN module is then analyzed based on information theory, offering
insights into the proposed DNN architecture and its corresponding training
method. We also adapt the proposed scheme to handle the power allocation (PA)
mismatch between training and inference and incorporate it with channel coding
to combat signal deterioration. Simulation results verify its advantages over
orthogonal multiple access (OMA) and the conventional cooperative NOMA scheme
in various scenarios.
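For context, the conventional power-domain NOMA baseline that schemes like this are measured against (from the BER perspective) can be sketched in a few lines: superposition coding at the transmitter with a power-allocation (PA) factor, and successive interference cancellation (SIC) at the stronger user. A minimal sketch; all parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Power-domain NOMA downlink with two users: the base station superimposes
# BPSK symbols, giving the far (weak) user the larger power share a.
a = 0.8                               # PA factor for the far user (a > 0.5)
n = 10_000                            # number of symbols
s_far = rng.choice([-1.0, 1.0], n)    # far-user BPSK symbols
s_near = rng.choice([-1.0, 1.0], n)   # near-user BPSK symbols

# Superposition coding: x = sqrt(a)*s_far + sqrt(1-a)*s_near
x = np.sqrt(a) * s_far + np.sqrt(1 - a) * s_near

# AWGN channel as seen by the near (strong) user.
snr_db = 20.0
sigma = np.sqrt(10 ** (-snr_db / 10))
y = x + sigma * rng.standard_normal(n)

# SIC at the near user: (1) detect the far user's dominant symbol,
# (2) subtract its contribution, (3) detect the near user's own symbol.
s_far_hat = np.sign(y)
residual = y - np.sqrt(a) * s_far_hat
s_near_hat = np.sign(residual)

ber_near = np.mean(s_near_hat != s_near)
print(f"near-user BER after SIC: {ber_near:.4f}")
```

One inherent limitation the abstract alludes to is visible here: if step (1) fails, the error propagates into step (3), which is one motivation for optimizing the whole chain holistically with a DNN instead.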
Related papers
- Deep Learning Based Joint Multi-User MISO Power Allocation and Beamforming Design [29.295165146832097]
We propose a novel unsupervised deep learning based joint power allocation and beamforming design for multi-user multiple-input single-output (MU-MISO) systems.
We conduct experiments across diverse settings to compare the performance of NNBF-P with zero-forcing beamforming (ZFBF), minimum mean square error (MMSE) beamforming, and NNBF, our deep learning based beamforming design without the joint power allocation scheme.
arXiv Detail & Related papers (2024-06-12T16:21:11Z)
- Deep Learning Based Uplink Multi-User SIMO Beamforming Design [32.00286337259923]
5G wireless communication networks offer high data rates, extensive coverage, minimal latency and energy-efficient performance.
Traditional approaches have shortcomings when it comes to computational complexity and their ability to adapt to dynamic conditions.
We propose a novel unsupervised deep learning framework, which is called NNBF, for the design of uplink receive multi-user single input multiple output (MU-SIMO) beamforming.
arXiv Detail & Related papers (2023-09-28T17:04:41Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z)
- Applications of Deep Learning to the Design of Enhanced Wireless Communication Systems [0.0]
Deep learning (DL)-based systems are able to handle increasingly complex tasks for which no tractable models are available.
This thesis aims at comparing different approaches to unlock the full potential of DL in the physical layer.
arXiv Detail & Related papers (2022-05-02T21:02:14Z)
- A Differential Game Theoretic Neural Optimizer for Training Residual Networks [29.82841891919951]
We propose a generalized Differential Dynamic Programming (DDP) neural architecture that accepts both residual connections and convolution layers.
The resulting optimal control representation admits a game-theoretic perspective, in which training residual networks can be interpreted as cooperative trajectory optimization on state-augmented systems.
arXiv Detail & Related papers (2020-07-17T10:19:17Z)
- An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the group O(d).
This nested system of two flows provides stable and effective training and provably solves the gradient vanishing/explosion problem.
arXiv Detail & Related papers (2020-06-19T22:05:19Z)
- Iterative Algorithm Induced Deep-Unfolding Neural Networks: Precoding Design for Multiuser MIMO Systems [59.804810122136345]
We propose a framework for deep-unfolding, where a general form of iterative algorithm induced deep-unfolding neural network (IAIDNN) is developed.
An efficient IAIDNN based on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm is developed.
We show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.
arXiv Detail & Related papers (2020-06-15T02:57:57Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization for deep neural networks at large scale.
Our method provably requires far fewer communication rounds than naive distributed training.
Experiments on several datasets confirm the theoretical results and the effectiveness of the approach.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
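Several entries above share one pattern: the main paper's multi-task oriented two-stage training, and MEMTL's shared backbone with multiple prediction heads, both optimize a shared representation against several per-task losses. A minimal numpy sketch of that pattern, with all shapes, weights, and the two-stage schedule chosen here purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: one input feeding two regression tasks that share structure.
X = rng.standard_normal((256, 4))
H = X @ rng.standard_normal((4, 3))    # shared latent structure
y1 = H @ rng.standard_normal((3, 1))   # task-1 target
y2 = H @ rng.standard_normal((3, 1))   # task-2 target

# Shared backbone + two prediction heads (all linear, so gradients are exact).
W_b = rng.standard_normal((4, 3)) * 0.1
W_1 = rng.standard_normal((3, 1)) * 0.1
W_2 = rng.standard_normal((3, 1)) * 0.1
lam1, lam2 = 1.0, 1.0                  # per-task loss weights
lr = 0.01

def losses():
    Z = X @ W_b
    return (np.mean((Z @ W_1 - y1) ** 2),
            np.mean((Z @ W_2 - y2) ** 2))

def step(train_backbone):
    """One gradient step on the combined loss lam1*L1 + lam2*L2."""
    global W_b, W_1, W_2
    n = X.shape[0]
    Z = X @ W_b
    e1 = Z @ W_1 - y1
    e2 = Z @ W_2 - y2
    if train_backbone:
        gb = 2 * X.T @ (lam1 * e1 @ W_1.T + lam2 * e2 @ W_2.T) / n
        W_b -= lr * gb
    W_1 -= lr * (lam1 * 2 * Z.T @ e1 / n)
    W_2 -= lr * (lam2 * 2 * Z.T @ e2 / n)

L0 = sum(losses())
for _ in range(200):   # stage 1: train the heads with the backbone frozen
    step(train_backbone=False)
for _ in range(200):   # stage 2: fine-tune everything end to end
    step(train_backbone=True)
print(f"combined loss: {L0:.3f} -> {sum(losses()):.3f}")
```

The two-stage schedule mirrors the idea in the main paper's training method at a very coarse level: first fit the task-specific modules, then jointly refine the whole cascade; the actual DNN architecture and loss construction there are considerably richer.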
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.