Low Complexity Classification Approach for Faster-than-Nyquist (FTN)
Signalling Detection
- URL: http://arxiv.org/abs/2208.10637v1
- Date: Mon, 22 Aug 2022 22:20:16 GMT
- Title: Low Complexity Classification Approach for Faster-than-Nyquist (FTN)
Signalling Detection
- Authors: Sina Abbasi and Ebrahim Bedeer
- Abstract summary: Faster-than-Nyquist (FTN) signaling can improve the spectral efficiency (SE), but at the expense of high computational complexity.
Motivated by the recent success of ML in physical layer (PHY) problems, we investigate the use of ML in reducing the detection complexity of FTN signaling.
We propose a low-complexity classifier (LCC) that exploits the ISI structure of FTN signaling to perform the classification task in an $N_p \ll N$-dimensional space.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Faster-than-Nyquist (FTN) signaling can improve the spectral efficiency (SE), albeit at the expense of the high computational complexity required to remove the introduced intersymbol interference (ISI). Motivated by the recent success of
ML in physical layer (PHY) problems, in this paper we investigate the use of ML
in reducing the detection complexity of FTN signaling. In particular, we view
the FTN signaling detection problem as a classification task, where the
received signal is treated as an unlabeled sample that belongs to the set of
all possible class samples. If we use an off-the-shelf classifier, this set of
class samples lives in an $N$-dimensional space, where $N$ is the transmission
block length, making the classification computationally prohibitive. We
propose a low-complexity classifier (LCC) that exploits the ISI structure of
FTN signaling to perform the classification task in an $N_p \ll
N$-dimensional space. The proposed LCC consists of two stages: 1) an offline
pre-classification stage that constructs the labeled class samples in the
$N_p$-dimensional space, and 2) an online classification stage in which the
received samples are detected. The proposed LCC is also extended to produce
soft outputs. Simulation results show the effectiveness of the proposed LCC in
balancing performance and complexity.
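
To make the two-stage idea concrete, here is a minimal NumPy sketch of nearest-class-sample detection for BPSK under a truncated ISI model. The taps `g`, the window size `Np`, the zero-padding at the block edges, and the noise level are illustrative assumptions rather than the paper's exact setup; the real FTN ISI taps depend on the pulse shape and the time-acceleration factor.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Illustrative truncated, symmetric ISI taps (hypothetical stand-in for FTN ISI).
g = np.array([0.3, 1.0, 0.3])
L = len(g)
Np = 3                      # dimension of the classification window (Np << N)
M = Np + L - 1              # number of symbols that influence one Np-sample window

# --- Stage 1: offline pre-classification ---
# Enumerate all 2^M BPSK patterns and precompute the noiseless Np-sample
# window (the labeled class sample) that each pattern produces.
patterns = np.array(list(itertools.product([-1.0, 1.0], repeat=M)))   # (2^M, M)
class_samples = np.array([np.convolve(p, g, mode="valid") for p in patterns])
centre_symbol = patterns[:, M // 2]   # label: the symbol at the window centre
is_plus = centre_symbol > 0

# --- Stage 2: online classification ---
def lcc_detect(y, noise_var):
    """Nearest-class-sample detection with max-log soft outputs."""
    N = len(y)
    ypad = np.pad(y, Np // 2)         # zero-pad so every symbol has a full window
    hard = np.empty(N)
    llr = np.empty(N)
    for n in range(N):
        w = ypad[n:n + Np]            # Np-dimensional received sample
        d2 = np.sum((w - class_samples) ** 2, axis=1)
        hard[n] = centre_symbol[np.argmin(d2)]
        # Max-log LLR for the centre symbol.
        llr[n] = (d2[~is_plus].min() - d2[is_plus].min()) / (2 * noise_var)
    return hard, llr

# Toy end-to-end check on a block of N symbols.
N = 2000
a = rng.choice([-1.0, 1.0], size=N)
noise_var = 0.05
y = np.convolve(a, g, mode="same") + np.sqrt(noise_var) * rng.standard_normal(N)
hard, llr = lcc_detect(y, noise_var)
print("BER:", np.mean(hard != a))
```

The point of the window restriction is visible in the sizes: the offline stage enumerates only $2^M$ class samples in the $N_p$-dimensional space, once per channel, and the online cost per symbol is a nearest-neighbour search independent of $N$, whereas a block-wise off-the-shelf classifier would face $2^N$ classes in an $N$-dimensional space.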
Related papers
- Renormalized Connection for Scale-preferred Object Detection in Satellite Imagery [51.83786195178233]
We design a Knowledge Discovery Network (KDN) to implement the renormalization group theory in terms of efficient feature extraction.
Renormalized connection (RC) on the KDN enables "synergistic focusing" of multi-scale features.
RCs extend the "divide-and-conquer" mechanism of multi-level features in FPN-based detectors to a wide range of scale-preferred tasks.
arXiv Detail & Related papers (2024-09-09T13:56:22Z) - Computational-Statistical Gaps in Gaussian Single-Index Models [77.1473134227844]
Single-Index Models are high-dimensional regression problems with planted structure.
We show that computationally efficient algorithms, both within the Statistical Query (SQ) and the Low-Degree Polynomial (LDP) framework, necessarily require $\Omega(d^{k^\star/2})$ samples.
arXiv Detail & Related papers (2024-03-08T18:50:19Z) - LSTM and CNN application for core-collapse supernova search in
gravitational wave real data [0.0]
Core-collapse supernovae (CCSNe) are expected to emit gravitational wave signals that could be detected by interferometers within the Milky Way and nearby galaxies.
We show the potential of machine learning (ML) for multi-label classification of simulated CCSNe signals and noise transients using real data.
arXiv Detail & Related papers (2023-01-23T12:12:33Z) - Deep Learning-based List Sphere Decoding for Faster-than-Nyquist (FTN)
Signaling Detection [0.0]
Faster-than-Nyquist (FTN) signaling is a candidate non-orthonormal transmission technique.
In this paper, we investigate the use of deep learning (DL) to reduce the detection complexity of FTN signaling.
arXiv Detail & Related papers (2022-04-15T17:46:03Z) - Towards Sample-Optimal Compressive Phase Retrieval with Sparse and
Generative Priors [59.33977545294148]
We show that $O(k \log L)$ samples suffice to guarantee that the signal is close to any vector that minimizes an amplitude-based empirical loss function.
We adapt this result to sparse phase retrieval, and show that $O(s \log n)$ samples are sufficient for a similar guarantee when the underlying signal is $s$-sparse and $n$-dimensional.
arXiv Detail & Related papers (2021-06-29T12:49:54Z) - iNNformant: Boundary Samples as Telltale Watermarks [68.8204255655161]
We show that it is possible to generate sets of boundary samples which can identify any of four tested microarchitectures.
These sets can be built so that they contain no sample with a peak signal-to-noise ratio worse than 70 dB.
arXiv Detail & Related papers (2021-06-14T11:18:32Z) - Object Detection Made Simpler by Eliminating Heuristic NMS [70.93004137521946]
We show a simple NMS-free, end-to-end object detection framework.
We attain detection accuracy on par with, or even better than, the original one-stage detector.
arXiv Detail & Related papers (2021-01-28T02:38:29Z) - Low Complexity Neural Network Structures for Self-Interference
Cancellation in Full-Duplex Radio [21.402093766480746]
Two novel low-complexity neural networks (NNs) are proposed for modeling the SI signal with reduced computational complexity.
The two structures are referred to as the ladder-wise grid structure (LWGS) and the moving-window grid structure (MWGS).
The simulation results reveal that the LWGS- and MWGS-based cancelers attain the same cancellation performance as conventional NN-based cancelers.
arXiv Detail & Related papers (2020-09-23T20:10:08Z) - Hadamard Wirtinger Flow for Sparse Phase Retrieval [24.17778927729799]
We consider the problem of reconstructing an $n$-dimensional $k$-sparse signal from a set of noiseless magnitude-only measurements.
Formulating the problem as an unregularized empirical risk minimization task, we study the sample complexity performance of gradient descent with Hadamard parametrization.
We numerically investigate the performance of HWF at convergence and show that, while not requiring any explicit form of regularization nor knowledge of $k$, HWF adapts to the signal sparsity and reconstructs sparse signals with fewer measurements than existing gradient based methods.
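
The Hadamard parametrization lends itself to a compact sketch. Below is a minimal NumPy version, assuming the common $x = u \odot u - v \odot v$ parametrization with an intensity-based least-squares loss; the data-driven seed, step size, and iteration count are simplified stand-ins for the paper's exact choices, and recovery on a given draw is not guaranteed without tuning.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 100, 80, 3             # dimension, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
x_true /= np.linalg.norm(x_true)

A = rng.standard_normal((m, n))
y = (A @ x_true) ** 2            # noiseless magnitude-only (intensity) data

# Hadamard parametrization x = u*u - v*v; the small init scale alpha is what
# biases plain gradient descent toward sparse solutions.
alpha = 1e-4
u = alpha * np.ones(n)
v = alpha * np.ones(n)

# Simplified data-driven seed: pick the coordinate with the largest weighted
# energy B_j = mean(y_i * A_ij^2) and give it an O(||x_true||) starting value.
B = (y[:, None] * A**2).mean(axis=0)
j0 = int(np.argmax(B))
u[j0] = np.sqrt(np.sqrt(y.mean()))   # so x[j0] starts near ||x_true||

eta = 0.01                           # illustrative step size; may need tuning
for _ in range(10000):
    x = u * u - v * v
    r = (A @ x) ** 2 - y             # intensity residuals
    grad_x = (4.0 / m) * (A.T @ (r * (A @ x)))
    # Chain rule through the parametrization: dx/du = 2u, dx/dv = -2v.
    u, v = u - eta * 2 * u * grad_x, v + eta * 2 * v * grad_x

x_hat = u * u - v * v
err = min(np.linalg.norm(x_hat - x_true), np.linalg.norm(x_hat + x_true))
print("relative error (up to global sign):", err)
```

Note that no explicit sparsity regularizer or knowledge of $k$ appears anywhere; the small initialization alone supplies the implicit bias toward sparse solutions described in the summary above.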
arXiv Detail & Related papers (2020-06-01T16:41:27Z) - OSLNet: Deep Small-Sample Classification with an Orthogonal Softmax
Layer [77.90012156266324]
This paper aims to find a subspace of neural networks that can facilitate a large decision margin.
We propose the Orthogonal Softmax Layer (OSL), which makes the weight vectors in the classification layer remain orthogonal during both the training and test processes.
Experimental results demonstrate that the proposed OSL has better performance than the methods used for comparison on four small-sample benchmark datasets.
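
One way to keep classification weight vectors orthogonal throughout training is to fix a mask that gives each class a disjoint slice of the feature vector, so orthogonality holds by construction no matter how the weights are updated. The NumPy sketch below illustrates that reading of the idea; the sizes and the block-diagonal masking scheme are illustrative assumptions, not necessarily the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

d, C = 64, 4                  # feature dimension, number of classes
block = d // C                # assumes d is divisible by C for simplicity

# Fixed binary mask giving each class a disjoint slice of the features.
mask = np.zeros((C, d))
for c in range(C):
    mask[c, c * block:(c + 1) * block] = 1.0

W = 0.01 * rng.standard_normal((C, d))   # trainable weights (any values work)

def osl_logits(x):
    # The mask zeroes out overlapping coordinates, so the effective weight
    # vectors have disjoint supports and stay mutually orthogonal no matter
    # how W changes during training.
    return x @ (W * mask).T

# Orthogonality check: the Gram matrix of the effective weights is diagonal.
W_eff = W * mask
gram = W_eff @ W_eff.T
print(np.allclose(gram, np.diag(np.diag(gram))))   # True

# Example forward pass on a batch of features.
x = rng.standard_normal((8, d))
print(osl_logits(x).shape)                         # (8, 4)
```

Because distinct class weight vectors can never point in similar directions under this constraint, the angular margin between classes is enlarged, which is the "large decision margin" effect the summary refers to.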
arXiv Detail & Related papers (2020-04-20T02:41:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.