Deep Learning-based List Sphere Decoding for Faster-than-Nyquist (FTN)
Signaling Detection
- URL: http://arxiv.org/abs/2204.07569v1
- Date: Fri, 15 Apr 2022 17:46:03 GMT
- Title: Deep Learning-based List Sphere Decoding for Faster-than-Nyquist (FTN)
Signaling Detection
- Authors: Sina Abbasi and Ebrahim Bedeer
- Abstract summary: Faster-than-Nyquist (FTN) signaling is a candidate non-orthonormal transmission technique.
In this paper, we investigate the use of deep learning (DL) to reduce the detection complexity of FTN signaling.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Faster-than-Nyquist (FTN) signaling is a candidate non-orthonormal
transmission technique to improve the spectral efficiency (SE) of future
communication systems. However, such improvements of the SE are at the cost of
additional computational complexity to remove the intentionally introduced
intersymbol interference. In this paper, we investigate the use of deep
learning (DL) to reduce the detection complexity of FTN signaling. To eliminate
the need for a noise-whitening filter at the receiver, we first present
an equivalent FTN signaling model based on using a set of orthonormal basis
functions and identify its operation region. Second, we propose a DL-based list
sphere decoding (DL-LSD) algorithm that selects and updates the initial radius
of the original LSD to guarantee a pre-defined number $N_{\text{L}}$ of lattice
points inside the hypersphere. This is achieved by training a neural network to
output an approximate initial radius that includes $N_{\text{L}}$ lattice
points. At the testing phase, if the hypersphere has more than $N_{\text{L}}$
lattice points, we keep the $N_{\text{L}}$ closest points to the point
corresponding to the received FTN signal; however, if the hypersphere has fewer
than $N_{\text{L}}$ points, we increase the approximate initial radius by a
value that depends on the standard deviation of the distribution of the output
radii from the training phase. Then, the approximate value of the
log-likelihood ratio (LLR) is calculated based on the obtained $N_{\text{L}}$
points. Simulation results show that the computational complexity of the
proposed DL-LSD is lower than its counterpart of the original LSD by orders of
magnitude.
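A minimal sketch (Python/NumPy) of the test-time procedure described above is given below. It assumes a trained network radius_net that maps the received vector to an approximate initial radius, a generic sphere-enumeration routine enumerate_lattice_points, a candidate-to-bit mapping bit_maps, and a radius increment equal to one standard deviation of the training radii; these names and choices are illustrative assumptions, not the authors' implementation.

import numpy as np

def dl_lsd_candidates(y, radius_net, enumerate_lattice_points,
                      train_radius_std, N_L, max_expansions=5):
    # Start from the NN-predicted radius; keep the N_L closest lattice
    # points if the hypersphere contains too many, or widen the radius
    # (here by one standard deviation of the training radii, an
    # illustrative choice) if it contains too few.
    radius = float(radius_net(y))
    points = []
    for _ in range(max_expansions):
        points = enumerate_lattice_points(y, radius)
        if len(points) >= N_L:
            dists = np.array([np.linalg.norm(y - p) for p in points])
            keep = np.argsort(dists)[:N_L]
            return [points[i] for i in keep]
        radius += train_radius_std
    return points  # fall back to whatever was found

def max_log_llr(y, candidates, bit_maps, noise_var):
    # Max-log-style approximation of the per-bit LLRs from the candidate
    # list; bit_maps[i][k] (bit k of candidate i) is an assumed mapping
    # used only for illustration.
    metrics = np.array([np.linalg.norm(y - c) ** 2 for c in candidates])
    n_bits = len(bit_maps[0])
    llrs = np.zeros(n_bits)
    for k in range(n_bits):
        m0 = min((metrics[i] for i in range(len(candidates)) if bit_maps[i][k] == 0),
                 default=np.inf)
        m1 = min((metrics[i] for i in range(len(candidates)) if bit_maps[i][k] == 1),
                 default=np.inf)
        llrs[k] = (m1 - m0) / noise_var
    return llrs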
Related papers
- Sharper Guarantees for Learning Neural Network Classifiers with Gradient Methods [43.32546195968771]
We study the data-dependent convergence and generalization behavior of gradient methods for neural networks with smooth activation.
Our results improve upon the shortcomings of the well-established Rademacher complexity-based bounds.
We show that a large step-size significantly improves upon the NTK regime's results in classifying the XOR distribution.
arXiv Detail & Related papers (2024-10-13T21:49:29Z) - Preconditioned Gradient Descent Finds Over-Parameterized Neural Networks with Sharp Generalization for Nonparametric Regression [8.130817534654089]
We consider nonparametric regression by a two-layer neural network trained by gradient descent (GD) or its variant in this paper.
We show that, if the neural network is trained with a novel Preconditioned Gradient Descent (PGD) with early stopping and the target function has spectral bias widely studied in the deep learning literature, the trained network renders a particularly sharp generalization bound with a minimax optimal rate of $\mathcal{O}(1/n^{4\alpha/(4\alpha+1)})$.
arXiv Detail & Related papers (2024-07-16T03:38:34Z) - Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z) - A Specialized Semismooth Newton Method for Kernel-Based Optimal
Transport [92.96250725599958]
Kernel-based optimal transport (OT) estimators offer an alternative, functional estimation procedure to address OT problems from samples.
We show that our SSN method achieves a global convergence rate of $O(1/\sqrt{k})$, and a local quadratic convergence rate under standard regularity conditions.
arXiv Detail & Related papers (2023-10-21T18:48:45Z) - Gauss-Newton Temporal Difference Learning with Nonlinear Function Approximation [11.925232472331494]
A Gauss-Newton Temporal Difference (GNTD) learning method is proposed to solve the Q-learning problem with nonlinear function approximation.
In each iteration, our method takes one Gauss-Newton (GN) step to optimize a variant of the Mean-Squared Bellman Error (MSBE).
We validate our method via extensive experiments in several RL benchmarks, where GNTD exhibits both higher rewards and faster convergence than TD-type methods.
arXiv Detail & Related papers (2023-02-25T14:14:01Z) - Low Complexity Classification Approach for Faster-than-Nyquist (FTN)
Signalling Detection [0.0]
Faster-than-Nyquist (FTN) signaling can improve the spectral efficiency (SE), but at the expense of high computational complexity.
Motivated by the recent success of ML in physical layer (PHY) problems, we investigate the use of ML in reducing the detection complexity of FTN signaling.
We propose a low-complexity classifier (LCC) that exploits the ISI structure of FTN signaling to perform the classification task in an $N_p \ll N$-dimensional space.
arXiv Detail & Related papers (2022-08-22T22:20:16Z) - Bounding the Width of Neural Networks via Coupled Initialization -- A
Worst Case Analysis [121.9821494461427]
We show how to significantly reduce the number of neurons required for two-layer ReLU networks.
We also prove new lower bounds that improve upon prior work, and that under certain assumptions, are best possible.
arXiv Detail & Related papers (2022-06-26T06:51:31Z) - Towards Sample-Optimal Compressive Phase Retrieval with Sparse and
Generative Priors [59.33977545294148]
We show that $O(k \log L)$ samples suffice to guarantee that the signal is close to any vector that minimizes an amplitude-based empirical loss function.
We adapt this result to sparse phase retrieval, and show that $O(s \log n)$ samples are sufficient for a similar guarantee when the underlying signal is $s$-sparse and $n$-dimensional.
arXiv Detail & Related papers (2021-06-29T12:49:54Z) - Lattice partition recovery with dyadic CART [79.96359947166592]
We study piece-wise constant signals corrupted by additive Gaussian noise over a $d$-dimensional lattice.
Data of this form naturally arise in a host of applications, and the tasks of signal detection or testing, de-noising and estimation have been studied extensively in the statistical and signal processing literature.
In this paper we consider instead the problem of partition recovery, i.e., of estimating the partition of the lattice induced by the constancy regions of the unknown signal.
We prove that a DCART-based procedure consistently estimates the underlying partition at a rate of order $\sigma^2 k^*$
arXiv Detail & Related papers (2021-05-27T23:41:01Z) - Sample Complexity and Overparameterization Bounds for Projection-Free
Neural TD Learning [38.730333068555275]
Existing analysis of neural TD learning relies on either infinite-width analysis or constraining the network parameters in a (random) compact set.
We show that the projection-free TD learning equipped with a two-layer ReLU network of any width exceeding $\text{poly}(\overline{\nu}, 1/\epsilon)$ converges to the true value function with error $\epsilon$ given $\text{poly}(\overline{\nu}, 1/\epsilon)$ iterations or samples.
arXiv Detail & Related papers (2021-03-02T01:05:19Z) - Learning Frequency Domain Approximation for Binary Neural Networks [68.79904499480025]
We propose to estimate the gradient of the sign function in the Fourier frequency domain using a combination of sine functions for training BNNs.
The experiments on several benchmark datasets and neural architectures illustrate that the binary network learned using our method achieves the state-of-the-art accuracy.
arXiv Detail & Related papers (2021-03-01T08:25:26Z)