Deep Learning Based Sphere Decoding
- URL: http://arxiv.org/abs/1807.03162v2
- Date: Mon, 25 Mar 2024 14:13:42 GMT
- Title: Deep Learning Based Sphere Decoding
- Authors: Mostafa Mohammadkarimi, Mehrtash Mehrabi, Masoud Ardakani, Yindi Jing
- Abstract summary: A deep learning (DL)-based sphere decoding algorithm is proposed, where the radius of the decoding hypersphere is learned by a deep neural network (DNN).
The performance achieved by the proposed algorithm is very close to that of optimal maximum likelihood decoding (MLD) over a wide range of signal-to-noise ratios (SNRs).
The computational complexity, compared to existing sphere decoding variants, is significantly reduced.
- Score: 15.810396655155975
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In this paper, a deep learning (DL)-based sphere decoding algorithm is proposed, where the radius of the decoding hypersphere is learned by a deep neural network (DNN). The performance achieved by the proposed algorithm is very close to that of optimal maximum likelihood decoding (MLD) over a wide range of signal-to-noise ratios (SNRs), while the computational complexity, compared to existing sphere decoding variants, is significantly reduced. This improvement is attributed to the DNN's ability to intelligently learn the radius of the hypersphere used in decoding. The expected complexity of the proposed DL-based algorithm is analytically derived and compared with existing ones. It is shown that the number of lattice points inside the decoding hypersphere is drastically reduced in the DL-based algorithm, in both the average and worst-case senses. The effectiveness of the proposed algorithm is shown through simulations of high-dimensional multiple-input multiple-output (MIMO) systems using high-order modulations.
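For intuition, here is a minimal Python sketch of the idea: a sphere decoder that only examines lattice points within a radius supplied by a predictor. The `learned_radius` heuristic below is a placeholder for the paper's DNN, and the brute-force enumeration stands in for the usual tree search; both are illustrative assumptions, not the authors' implementation.

```python
import itertools
import numpy as np

def sphere_decode(y, H, constellation, radius):
    """Return the constellation vector x minimizing ||y - Hx|| among the
    candidates inside the decoding hypersphere; None if the sphere is empty."""
    best_x, best_d = None, radius
    # Brute-force enumeration for clarity; practical sphere decoders prune a
    # search tree built from the QR decomposition of H instead.
    for x in itertools.product(constellation, repeat=H.shape[1]):
        d = np.linalg.norm(y - H @ np.asarray(x))
        if d <= best_d:
            best_x, best_d = np.asarray(x), d
    return best_x

def learned_radius(H, snr_db):
    # Stand-in for the paper's DNN: a crude noise-power heuristic
    # (an assumption, not the trained model).
    sigma2 = 10.0 ** (-snr_db / 10.0)
    return np.sqrt(4.0 * H.shape[0] * sigma2)

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4))
x_true = rng.choice([-3, -1, 1, 3], size=4)          # 4-PAM per dimension
snr_db = 20.0
y = H @ x_true + 10.0 ** (-snr_db / 20.0) * rng.standard_normal(4)

x_hat = sphere_decode(y, H, (-3, -1, 1, 3), learned_radius(H, snr_db))
print("decoded:", x_hat, "transmitted:", x_true)
```

The point of learning the radius is visible here: a smaller radius shrinks the candidate set (and hence the complexity), but an overly small one risks an empty sphere, which is the trade-off the DNN is trained to balance.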
Related papers
- Deep Convolutional Neural Networks Meet Variational Shape Compactness Priors for Image Segmentation [7.314877483509877]
Shape compactness is a key geometrical property for describing regions of interest in many image segmentation tasks.
We propose two novel algorithms to solve an image segmentation problem that incorporates a shape-compactness prior.
The proposed algorithms improve IoU by 20% when trained on a highly noisy image dataset.
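For reference, a minimal sketch of one classical shape-compactness measure on a binary segmentation mask, the isoperimetric ratio perimeter^2 / (4*pi*area); the paper's prior may be defined differently, so this is only an illustration of the concept.

```python
import numpy as np

def compactness(mask):
    """Isoperimetric ratio of a binary mask (True = region)."""
    p = np.pad(mask, 1).astype(int)
    # 4-neighbour boundary length: count inside/outside transitions
    perimeter = np.abs(np.diff(p, axis=0)).sum() + np.abs(np.diff(p, axis=1)).sum()
    return perimeter ** 2 / (4.0 * np.pi * mask.sum())

yy, xx = np.mgrid[:64, :64]
disk = (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2
square = np.zeros((64, 64), bool); square[16:48, 16:48] = True
# Note: the 4-neighbour perimeter overestimates curved boundaries, so the
# discrete disk scores somewhat worse than the continuum value of 1.
print("disk:", compactness(disk), "square:", compactness(square))
```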
arXiv Detail & Related papers (2024-05-23T11:05:35Z)
- Deep Unrolling for Nonconvex Robust Principal Component Analysis [75.32013242448151]
We design algorithms for Robust Principal Component Analysis (RPCA).
It consists in decomposing a matrix into the sum of a low-rank matrix and a sparse matrix.
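For context, a minimal sketch of the classical convex RPCA baseline, Principal Component Pursuit solved with ADMM and singular-value thresholding; the paper unrolls a nonconvex variant into a trainable network, which this sketch does not attempt.

```python
import numpy as np

def shrink(X, tau):
    """Elementwise soft-thresholding (prox of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular-value thresholding (prox of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(M, n_iter=200):
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))              # standard PCP weight
    mu = 0.25 * m * n / np.abs(M).sum()         # common step-size heuristic
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)       # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)    # sparse update
        Y = Y + mu * (M - L - S)                # dual ascent on M = L + S
    return L, S

rng = np.random.default_rng(1)
L0 = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))   # rank 5
S0 = (rng.random((50, 50)) < 0.05) * rng.standard_normal((50, 50)) * 10
L_hat, S_hat = rpca(L0 + S0)
print("low-rank error:", np.linalg.norm(L_hat - L0) / np.linalg.norm(L0))
```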
arXiv Detail & Related papers (2023-07-12T03:48:26Z)
- Deep learning numerical methods for high-dimensional fully nonlinear PIDEs and coupled FBSDEs with jumps [26.28912742740653]
We propose a deep learning algorithm for solving high-dimensional parabolic integro-differential equations (PIDEs).
The jump-diffusion process is driven by a Brownian motion and an independent compensated Poisson random measure.
To derive the error estimates for this deep learning algorithm, the convergence of the Markovian iteration, the error bound of the Euler time discretization, and the simulation error of the deep learning algorithm are investigated.
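As a small illustration of the forward dynamics just mentioned, here is a sketch of an Euler discretization of a one-dimensional jump-diffusion driven by a Brownian motion and a compensated compound-Poisson term; the drift, volatility, and jump parameters are arbitrary choices, and this is my own illustration, not the paper's solver.

```python
import numpy as np

def euler_jump_diffusion(x0, T, n, b, sigma, lam, jump_mean, jump_std, rng):
    """One path of dX = b(X)dt + sigma(X)dW + dJ - lam*E[jump]*dt, where J is
    a compound Poisson process (the subtraction compensates the jump term)."""
    dt = T / n
    x = np.empty(n + 1); x[0] = x0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))
        n_jumps = rng.poisson(lam * dt)
        jumps = rng.normal(jump_mean, jump_std, size=n_jumps).sum()
        compensator = lam * jump_mean * dt   # makes the jump part a martingale
        x[k + 1] = x[k] + b(x[k]) * dt + sigma(x[k]) * dW + jumps - compensator
    return x

rng = np.random.default_rng(2)
path = euler_jump_diffusion(
    x0=1.0, T=1.0, n=250,
    b=lambda x: 0.1 * x, sigma=lambda x: 0.2 * x,
    lam=3.0, jump_mean=-0.05, jump_std=0.1, rng=rng)
print("X_T =", path[-1])
```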
arXiv Detail & Related papers (2023-01-30T13:55:42Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, based on minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Partitioning sparse deep neural networks for scalable training and inference [8.282177703075453]
State-of-the-art deep neural networks (DNNs) have significant computational and data management requirements.
Sparsification and pruning methods are shown to be effective in removing a large fraction of connections in DNNs.
The resulting sparse networks present unique challenges to further improve the computational efficiency of training and inference in deep learning.
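As a concrete example of the sparsification methods referred to above, here is a sketch of global magnitude pruning applied to raw weight matrices; this is a generic technique, not the partitioning scheme proposed in the paper.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of smallest-magnitude entries,
    pooled across all layers; return the binary keep-masks per layer."""
    all_mags = np.concatenate([np.abs(W).ravel() for W in weights])
    threshold = np.quantile(all_mags, sparsity)
    masks = [np.abs(W) > threshold for W in weights]
    for W, m in zip(weights, masks):
        W *= m                                   # prune in place
    return masks

rng = np.random.default_rng(3)
layers = [rng.standard_normal((64, 128)), rng.standard_normal((128, 10))]
masks = magnitude_prune(layers, sparsity=0.9)
kept = sum(m.sum() for m in masks) / sum(m.size for m in masks)
print(f"fraction of weights kept: {kept:.2%}")   # ~10%
```

The resulting irregular sparsity pattern is exactly what makes efficient parallel training and inference hard, which is the challenge the paper addresses.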
arXiv Detail & Related papers (2021-04-23T20:05:52Z)
- Joint Deep Reinforcement Learning and Unfolding: Beam Selection and Precoding for mmWave Multiuser MIMO with Lens Arrays [54.43962058166702]
Millimeter wave (mmWave) multiuser multiple-input multiple-output (MU-MIMO) systems with discrete lens arrays (DLA) have received great attention.
In this work, we investigate the joint design of beam selection and the precoding matrix for mmWave MU-MIMO systems with DLA.
arXiv Detail & Related papers (2021-01-05T03:55:04Z)
- Solving Sparse Linear Inverse Problems in Communication Systems: A Deep Learning Approach With Adaptive Depth [51.40441097625201]
We propose an end-to-end trainable deep learning architecture for sparse signal recovery problems.
The proposed method learns how many layers to execute to emit an output, and the network depth is dynamically adjusted for each task in the inference phase.
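A minimal sketch of the adaptive-depth idea using plain ISTA iterations (the kind of update that unrolled networks learn), with a simple tolerance test standing in for the paper's learned halting policy; the function name and thresholds are illustrative assumptions.

```python
import numpy as np

def ista_adaptive(A, y, lam=0.1, max_layers=100, tol=1e-4):
    """Solve min 0.5||Ax - y||^2 + lam*||x||_1, stopping as soon as the
    iterate stops changing; returns the estimate and the depth used."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L for the gradient step
    x = np.zeros(A.shape[1])
    for depth in range(1, max_layers + 1):
        grad = A.T @ (A @ x - y)
        x_new = x - step * grad
        x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - lam * step, 0.0)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, depth                  # early exit: depth adapts
        x = x_new
    return x, max_layers

rng = np.random.default_rng(4)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x0 = np.zeros(100)
x0[rng.choice(100, size=5, replace=False)] = rng.standard_normal(5)
y = A @ x0
x_hat, depth = ista_adaptive(A, y)
print(f"stopped after {depth} layers; error {np.linalg.norm(x_hat - x0):.3f}")
```

Easy inputs trigger the exit condition after few iterations while hard ones run deeper, which is the per-task depth adaptation the paper pursues with a learned policy.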
arXiv Detail & Related papers (2020-10-29T06:32:53Z)
- Iterative Algorithm Induced Deep-Unfolding Neural Networks: Precoding Design for Multiuser MIMO Systems [59.804810122136345]
We propose a framework for deep-unfolding, where a general form of iterative algorithm induced deep-unfolding neural network (IAIDNN) is developed.
An efficient IAIDNN based on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm is developed.
We show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.
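To illustrate deep unfolding in general, here is a toy PyTorch sketch that unrolls gradient descent on a least-squares problem with one trainable step size per layer; the paper's IAIDNN unfolds the WMMSE precoding iteration with trainable matrices instead, which this sketch does not reproduce.

```python
import torch

class UnfoldedGD(torch.nn.Module):
    def __init__(self, n_layers=10):
        super().__init__()
        # one trainable step size per unfolded iteration
        self.steps = torch.nn.Parameter(0.1 * torch.ones(n_layers))

    def forward(self, A, y):
        x = torch.zeros(A.shape[1])
        for alpha in self.steps:                 # each loop turn = one "layer"
            x = x - alpha * (A.T @ (A @ x - y))
        return x

torch.manual_seed(0)
A = torch.randn(20, 10) / 20 ** 0.5
x_true = torch.randn(10)
y = A @ x_true

net = UnfoldedGD()
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(500):                             # train step sizes end to end
    opt.zero_grad()
    loss = torch.mean((net(A, y) - x_true) ** 2)
    loss.backward()
    opt.step()
print("final loss:", loss.item())
```

The appeal of unfolding is visible even in this toy: a handful of trained layers can match many iterations of the hand-tuned algorithm, which is how the IAIDNN reaches WMMSE-level performance at lower complexity.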
arXiv Detail & Related papers (2020-06-15T02:57:57Z)
- A PDD Decoder for Binary Linear Codes With Neural Check Polytope Projection [43.97522161614078]
We propose a PDD algorithm to address the fundamental polytope based maximum likelihood (ML) decoding problem.
We also propose to integrate machine learning techniques into the most time-consuming part of the PDD decoding algorithm.
We present a specially designed neural CPP (NCPP) algorithm to decrease the decoding latency.
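For reference, the exact ML decoding problem that the PDD algorithm relaxes, solved here by brute force over a toy (6,3) code; the generator matrix is an arbitrary example, and practical decoders work over the fundamental polytope rather than enumerating codewords.

```python
import itertools
import numpy as np

G = np.array([[1, 0, 0, 1, 1, 0],      # toy (6,3) generator matrix
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])

def ml_decode(y):
    """Exhaustive ML decoding over all 2^k codewords (tiny codes only)."""
    best, best_d = None, np.inf
    for msg in itertools.product([0, 1], repeat=G.shape[0]):
        c = (np.asarray(msg) @ G) % 2
        d = np.linalg.norm(y - (1 - 2 * c))   # BPSK: bit b -> 1 - 2b
        if d < best_d:
            best, best_d = c, d
    return best

rng = np.random.default_rng(6)
c0 = (np.array([1, 0, 1]) @ G) % 2
y = (1 - 2 * c0) + 0.5 * rng.standard_normal(6)
print("decoded:", ml_decode(y), "sent:", c0)
```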
arXiv Detail & Related papers (2020-06-11T07:57:15Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study a distributed algorithm for large-scale AUC maximization with a deep neural network as the predictive model.
Our method requires a much smaller number of communication rounds while retaining its theoretical guarantees.
Experiments on several benchmark datasets demonstrate the effectiveness of our method and confirm our theory.
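For reference, a sketch of the AUC objective itself, computed as the fraction of correctly ranked (positive, negative) score pairs; the paper optimizes a min-max surrogate of this quantity in a distributed fashion, which this sketch does not show.

```python
import numpy as np

def auc(scores, labels):
    """Probability that a random positive outscores a random negative."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # pairwise comparison over all (positive, negative) pairs; ties count half
    return ((pos[:, None] > neg[None, :]).mean()
            + 0.5 * (pos[:, None] == neg[None, :]).mean())

rng = np.random.default_rng(5)
labels = rng.integers(0, 2, 200)
scores = labels + 0.8 * rng.standard_normal(200)   # noisy but informative
print("AUC:", auc(scores, labels))
```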
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.