Low Complexity Neural Network Structures for Self-Interference
Cancellation in Full-Duplex Radio
- URL: http://arxiv.org/abs/2009.11361v1
- Date: Wed, 23 Sep 2020 20:10:08 GMT
- Title: Low Complexity Neural Network Structures for Self-Interference
Cancellation in Full-Duplex Radio
- Authors: Mohamed Elsayed, Ahmad A. Aziz El-Banna, Octavia A. Dobre, Wanyi Shiu,
and Peiwei Wang
- Abstract summary: Two novel low complexity neural networks (NNs) are proposed for modeling the SI signal with reduced computational complexity.
The two structures are referred to as the ladder-wise grid structure (LWGS) and the moving-window grid structure (MWGS).
The simulation results reveal that the LWGS- and MWGS-based cancelers attain the same cancellation performance as the polynomial-based canceler.
- Score: 21.402093766480746
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Self-interference (SI) is considered a main challenge in full-duplex (FD)
systems. Therefore, efficient SI cancelers are required for the effective
deployment of FD systems in beyond fifth-generation wireless networks. Existing
methods for SI cancellation have mostly considered the polynomial
representation of the SI signal at the receiver. These methods are shown to
operate well in practice, but they require high computational complexity.
Alternatively, neural networks (NNs) are envisioned as promising candidates for
modeling the SI signal with reduced computational complexity. Consequently, in
this paper, two novel low complexity NN structures, referred to as the
ladder-wise grid structure (LWGS) and moving-window grid structure (MWGS), are
proposed. The core idea of these two structures is to mimic the non-linearity
and memory effect introduced to the SI signal in order to achieve proper SI
cancellation while exhibiting low computational complexity. The simulation
results reveal that the LWGS and MWGS NN-based cancelers attain the same
cancellation performance as the polynomial-based canceler while providing
49.87% and 34.19% complexity reduction, respectively.
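To make the modeling idea concrete, below is a minimal sketch of a generic NN-based SI canceler: a sliding window of past transmitted I/Q samples captures the memory effect, and the hidden non-linearity stands in for the power-amplifier distortion. The window length, layer sizes, and training data are assumed values for illustration; the exact LWGS and MWGS topologies are defined in the paper.

```python
# Hypothetical sketch of a generic NN-based SI canceler (not the exact
# LWGS/MWGS topologies from the paper): a window of M past transmitted
# I/Q samples models the memory effect; the hidden non-linearity stands
# in for the PA distortion. M and HIDDEN are assumed values.
import torch
import torch.nn as nn

M = 13          # memory length (assumed)
HIDDEN = 17     # hidden units (assumed)

class WindowedSICanceler(nn.Module):
    def __init__(self, memory=M, hidden=HIDDEN):
        super().__init__()
        # input: [I, Q] for each of the `memory` transmitted samples
        self.net = nn.Sequential(
            nn.Linear(2 * memory, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # predicted SI sample [I, Q]
        )

    def forward(self, tx_window):
        return self.net(tx_window)

# toy training loop: learn to reproduce the received SI, then subtract it
model = WindowedSICanceler()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
tx = torch.randn(1024, 2 * M)   # windows of transmitted I/Q samples (toy data)
si = torch.randn(1024, 2)       # corresponding received SI samples (toy data)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(tx), si)
    loss.backward()
    opt.step()
residual = si - model(tx).detach()   # cancellation: subtract the predicted SI
```

Once trained, cancellation amounts to subtracting the predicted SI from the received signal, as in the last line.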
Related papers
- Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural
Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z)
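As a hedged illustration of the benchmark setting above, here is a minimal PINN residual loss for a toy coupled ODE pair x'(t) = y(t), y'(t) = -x(t) with x(0) = 1, y(0) = 0; the system, network size, and training schedule are assumptions, not the paper's tunable benchmark suite.

```python
# Minimal PINN sketch for a toy coupled ODE system (assumed example):
# x'(t) = y(t), y'(t) = -x(t), with x(0) = 1, y(0) = 0.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 2))

def pinn_loss(t):
    t = t.requires_grad_(True)
    xy = net(t)
    x, y = xy[:, :1], xy[:, 1:]
    # ODE residuals via automatic differentiation
    dx = torch.autograd.grad(x, t, torch.ones_like(x), create_graph=True)[0]
    dy = torch.autograd.grad(y, t, torch.ones_like(y), create_graph=True)[0]
    residual = (dx - y).pow(2).mean() + (dy + x).pow(2).mean()
    # initial-condition penalty at t = 0
    ic = (net(torch.zeros(1, 1)) - torch.tensor([[1.0, 0.0]])).pow(2).mean()
    return residual + ic

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = pinn_loss(torch.rand(128, 1) * 2 * torch.pi)  # random collocation points
    loss.backward()
    opt.step()
```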
- Signal Detection in MIMO Systems with Hardware Imperfections: Message
Passing on Neural Networks [101.59367762974371]
In this paper, we investigate signal detection in multiple-input-multiple-output (MIMO) communication systems with hardware impairments.
It is difficult to train a deep neural network (DNN) with limited pilot signals, hindering its practical applications.
We design an efficient message passing based Bayesian signal detector, leveraging the unitary approximate message passing (UAMP) algorithm.
arXiv Detail & Related papers (2022-10-08T04:32:58Z)
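The paper's UAMP-based Bayesian detector is considerably more involved; as a point of reference only, the sketch below implements a plain linear MMSE MIMO detector for y = Hx + n (dimensions, noise level, and the QPSK alphabet are assumed).

```python
# Baseline linear MMSE MIMO detector (reference only; not the paper's
# UAMP-based Bayesian detector). Sizes and noise level are assumed.
import numpy as np

rng = np.random.default_rng(0)
n_tx, n_rx, sigma2 = 4, 8, 0.1

H = (rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)
x = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n_tx) / np.sqrt(2)  # QPSK
y = H @ x + np.sqrt(sigma2 / 2) * (rng.normal(size=n_rx) + 1j * rng.normal(size=n_rx))

# LMMSE estimate: (H^H H + sigma^2 I)^{-1} H^H y
x_hat = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(n_tx), H.conj().T @ y)
# hard decision back onto the QPSK alphabet
x_dec = (np.sign(x_hat.real) + 1j * np.sign(x_hat.imag)) / np.sqrt(2)
print("symbol errors on this draw:", np.sum(x_dec != x))
```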
- Low Complexity Classification Approach for Faster-than-Nyquist (FTN)
Signalling Detection [0.0]
Faster-than-Nyquist (FTN) signaling can improve the spectral efficiency (SE), but at the expense of high computational complexity.
Motivated by the recent success of machine learning (ML) in physical layer (PHY) problems, we investigate the use of ML in reducing the detection complexity of FTN signaling.
We propose a low-complexity classifier (LCC) that exploits the ISI structure of FTN signaling to perform the classification task in an $N_p \ll N$-dimensional space.
arXiv Detail & Related papers (2022-08-22T22:20:16Z)
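To illustrate the dimensionality-reduction idea, the following sketch classifies the symbol of interest from an $N_p$-sample window of an ISI-corrupted sequence instead of the full $N$-sample block. The ISI taps, sizes, and the nearest-mean classifier are stand-ins, not the paper's LCC.

```python
# Hedged sketch of the windowing idea: classify the symbol of interest
# from an N_p-sample window of the ISI-corrupted sequence instead of the
# full N-sample block. Taps and sizes below are assumed values.
import numpy as np

rng = np.random.default_rng(1)
N, Np = 64, 7                                 # full block vs. window length
isi = np.array([0.2, 0.5, 1.0, 0.5, 0.2])     # assumed ISI taps

bits = rng.integers(0, 2, size=(2000, N)) * 2 - 1   # BPSK symbols
rx = np.apply_along_axis(lambda s: np.convolve(s, isi, mode="same"), 1, bits)
rx += 0.3 * rng.normal(size=rx.shape)               # additive noise

k = N // 2
window = rx[:, k - Np // 2 : k + Np // 2 + 1]       # N_p << N features
label = bits[:, k]

# nearest-mean classifier over the window as a minimal stand-in
mu_pos = window[label == 1].mean(axis=0)
mu_neg = window[label == -1].mean(axis=0)
pred = np.where(np.linalg.norm(window - mu_pos, axis=1)
                < np.linalg.norm(window - mu_neg, axis=1), 1, -1)
print("accuracy:", (pred == label).mean())
```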
- Hybrid-Layers Neural Network Architectures for Modeling the
Self-Interference in Full-Duplex Systems [23.55330151898652]
Full-duplex (FD) systems allow simultaneous transmission and reception of information over the same frequency resources.
This article proposes two novel hybrid-layers neural network (NN) architectures to cancel the self-interference (SI) with low complexity.
The proposed NNs exploit, in a novel manner, a combination of hidden layers (e.g., dense) in order to model the SI with lower computational complexity than the state-of-the-art NN-based cancelers.
arXiv Detail & Related papers (2021-10-18T14:18:56Z)
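One plausible reading of a hybrid-layers stack is a 1-D convolution over the memory window feeding dense layers, as sketched below; the specific layer combination, sizes, and names used here are illustrative assumptions rather than the paper's architecture.

```python
# One plausible hybrid-layers stack for SI modeling: a 1-D convolution
# over the memory window feeding dense layers. Illustrative only; the
# paper defines its own specific layer combination.
import torch
import torch.nn as nn

class HybridSIModel(nn.Module):
    def __init__(self, memory=13, channels=8, hidden=16):
        super().__init__()
        self.conv = nn.Conv1d(2, channels, kernel_size=3, padding=1)  # I/Q as 2 channels
        self.dense = nn.Sequential(
            nn.Flatten(),
            nn.Linear(channels * memory, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),   # predicted SI sample [I, Q]
        )

    def forward(self, tx_window):          # tx_window: (batch, 2, memory)
        return self.dense(torch.relu(self.conv(tx_window)))

print(HybridSIModel()(torch.randn(4, 2, 13)).shape)   # torch.Size([4, 2])
```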
- Ensemble Neural Representation Networks [10.405976966708744]
Implicit Neural Representation (INR) has attracted considerable attention for storing various types of signals in continuous forms.
We propose a novel sub-optimal ensemble architecture for INR that resolves the aforementioned problems.
We show that the performance of the proposed ensemble INR architecture may decrease if the dimensions of sub-networks increase.
arXiv Detail & Related papers (2021-10-07T12:49:21Z)
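A minimal sketch of the ensemble idea: partition the coordinate domain among small sub-networks, each fitting its own segment of a toy 1-D signal. The segment count, network sizes, and signal are assumed for illustration.

```python
# Minimal sketch of the ensemble-INR idea: partition the coordinate
# domain among small sub-networks, each fitting its own segment of the
# signal. Segment count and sizes are assumed for illustration.
import torch
import torch.nn as nn

n_sub, pts = 4, 256
subnets = nn.ModuleList(
    nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1)) for _ in range(n_sub)
)
t = torch.linspace(0, 1, pts).unsqueeze(1)
signal = torch.sin(8 * torch.pi * t)          # toy 1-D signal to store

opt = torch.optim.Adam(subnets.parameters(), lr=1e-2)
seg = pts // n_sub
for _ in range(300):
    opt.zero_grad()
    # each sub-network only sees (and fits) its own coordinate segment
    loss = sum(
        nn.functional.mse_loss(subnets[i](t[i * seg:(i + 1) * seg]),
                               signal[i * seg:(i + 1) * seg])
        for i in range(n_sub)
    )
    loss.backward()
    opt.step()
```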
- Non-linear Independent Dual System (NIDS) for Discretization-independent
Surrogate Modeling over Complex Geometries [0.0]
Non-linear independent dual system (NIDS) is a deep learning surrogate model for discretization-independent, continuous representation of PDE solutions.
NIDS can be used for prediction over domains with complex, variable geometries and mesh topologies.
Test cases include a vehicle problem with complex geometry and scarce data, made tractable by the proposed training method.
arXiv Detail & Related papers (2021-09-14T23:38:41Z)
- Efficient Micro-Structured Weight Unification and Pruning for Neural
Network Compression [56.83861738731913]
Compressing Deep Neural Network (DNN) models is essential for practical applications, especially on resource-limited devices.
Previous unstructured or structured weight pruning methods rarely translate into real inference acceleration on hardware.
We propose a generalized weight unification framework at a hardware-compatible micro-structured level to achieve a high degree of compression and acceleration.
arXiv Detail & Related papers (2021-06-15T17:22:59Z)
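As a hedged sketch of micro-structured pruning (not the paper's unification framework), the snippet below zeroes entire b x b weight blocks with the smallest L1 norms so the resulting sparsity pattern stays hardware friendly; block size and keep ratio are assumed.

```python
# Hedged sketch of micro-structured (block-wise) magnitude pruning:
# zero out entire b x b weight blocks with the smallest L1 norm, so the
# sparsity pattern stays hardware friendly. Block size and rate assumed.
import numpy as np

def block_prune(W, b=4, keep=0.5):
    rows, cols = W.shape[0] // b, W.shape[1] // b
    blocks = W.reshape(rows, b, cols, b)             # view as a grid of b x b blocks
    norms = np.abs(blocks).sum(axis=(1, 3))          # L1 norm per block
    thresh = np.quantile(norms, 1.0 - keep)
    mask = (norms >= thresh)[:, None, :, None]       # keep only the strongest blocks
    return (blocks * mask).reshape(W.shape)

W = np.random.default_rng(2).normal(size=(16, 16))
Wp = block_prune(W)
print("kept fraction:", (Wp != 0).mean())
```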
- Learning to Beamform in Heterogeneous Massive MIMO Networks [48.62625893368218]
Finding the optimal beamformers in massive multiple-input multiple-output (MIMO) networks is a well-known and challenging problem.
We propose a novel deep learning based algorithm to address this problem.
arXiv Detail & Related papers (2020-11-08T12:48:06Z)
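A toy single-user stand-in for the learned-beamforming idea (not the paper's heterogeneous-network algorithm): an MLP maps a channel realization to a unit-power beamformer and is trained to maximize received power.

```python
# Hedged sketch of learning a beamformer: an MLP maps a user's channel
# to a unit-power beamforming vector, trained to maximize received
# power. A toy single-user stand-in, not the paper's algorithm.
import torch
import torch.nn as nn

n_ant = 8
net = nn.Sequential(nn.Linear(2 * n_ant, 64), nn.ReLU(), nn.Linear(64, 2 * n_ant))

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(300):
    h = torch.randn(64, 2 * n_ant)          # channel: real/imag parts stacked
    w = net(h)
    w = w / w.norm(dim=1, keepdim=True)     # enforce unit transmit power
    hr, hi = h[:, :n_ant], h[:, n_ant:]
    wr, wi = w[:, :n_ant], w[:, n_ant:]
    # |h^H w|^2 with complex numbers carried as real/imag pairs
    re = (hr * wr + hi * wi).sum(dim=1)
    im = (hr * wi - hi * wr).sum(dim=1)
    loss = -(re**2 + im**2).mean()          # maximize received power
    opt.zero_grad()
    loss.backward()
    opt.step()
```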
- Non-Linear Self-Interference Cancellation via Tensor Completion [7.9264657672268894]
We propose a method based on low-rank tensor completion, called canonical system identification (CSID).
Our results show that CSID is very effective in modeling and cancelling the non-linear SI signal and can have lower computational complexity than existing methods.
arXiv Detail & Related papers (2020-10-05T09:08:28Z)
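The low-rank structure behind CSID can be illustrated with a rank-R CP (canonical polyadic) decomposition fitted by alternating least squares; the sketch below decomposes a fully observed toy tensor, whereas true completion with missing entries requires a masked variant, and all sizes are assumed.

```python
# Hedged sketch of the low-rank idea behind CSID: rank-R CP decomposition
# of a 3-way tensor via alternating least squares (ALS). Plain
# decomposition of a fully observed tensor; completion with missing
# entries needs a masked variant. All sizes are assumed.
import numpy as np

rng = np.random.default_rng(3)
I, J, K, R = 8, 9, 10, 3
A, B, C = rng.normal(size=(I, R)), rng.normal(size=(J, R)), rng.normal(size=(K, R))
T = np.einsum("ir,jr,kr->ijk", A, B, C)          # ground-truth low-rank tensor

# random initialization of the factor estimates
Ah, Bh, Ch = rng.normal(size=(I, R)), rng.normal(size=(J, R)), rng.normal(size=(K, R))
khatri_rao = lambda X, Y: np.einsum("jr,kr->jkr", X, Y).reshape(-1, R)

for _ in range(50):   # ALS sweeps over the three factors
    Ah = T.reshape(I, -1) @ np.linalg.pinv(khatri_rao(Bh, Ch).T)
    Bh = T.transpose(1, 0, 2).reshape(J, -1) @ np.linalg.pinv(khatri_rao(Ah, Ch).T)
    Ch = T.transpose(2, 0, 1).reshape(K, -1) @ np.linalg.pinv(khatri_rao(Ah, Bh).T)

err = np.linalg.norm(T - np.einsum("ir,jr,kr->ijk", Ah, Bh, Ch)) / np.linalg.norm(T)
print("relative fit error:", err)
```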
- Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
We provide, for the first time, a tractable estimation procedure for SEMs based on NNs, with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
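To show the min-max training pattern in miniature (not the paper's SEM operator equation), two small NNs below play a toy adversarial game by simultaneous gradient descent-ascent: the critic g is rewarded where f violates a known relation, and f descends to remove that signal.

```python
# Hedged sketch of the min-max training pattern: two NN players updated
# by simultaneous gradient descent-ascent on a toy adversarial objective
# (not the paper's SEM operator equation).
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(1, 8), nn.Tanh(), nn.Linear(8, 1))   # minimizing player
g = nn.Sequential(nn.Linear(1, 8), nn.Tanh(), nn.Linear(8, 1))   # maximizing player
opt_f = torch.optim.SGD(f.parameters(), lr=1e-2)
opt_g = torch.optim.SGD(g.parameters(), lr=1e-2)

target = lambda x: 2.0 * x   # toy "structural" relation to be matched

for _ in range(500):
    x = torch.randn(64, 1)
    # the critic g is rewarded where f violates the relation; the
    # quadratic term keeps the inner maximization well posed
    obj = (g(x) * (f(x) - target(x))).mean() - 0.5 * g(x).pow(2).mean()
    opt_f.zero_grad()
    opt_g.zero_grad()
    obj.backward()
    opt_f.step()                 # descend in f
    for p in g.parameters():
        p.grad = -p.grad         # flip sign to ascend in g
    opt_g.step()
```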
- Iterative Algorithm Induced Deep-Unfolding Neural Networks: Precoding
Design for Multiuser MIMO Systems [59.804810122136345]
We propose a framework for deep-unfolding, where a general form of iterative algorithm induced deep-unfolding neural network (IAIDNN) is developed.
An efficient IAIDNN based on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm is developed.
We show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.
arXiv Detail & Related papers (2020-06-15T02:57:57Z)
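Finally, the deep-unfolding pattern itself can be shown in a few lines (this is generic unrolled gradient descent for least squares with learned per-layer step sizes, not the paper's WMMSE-based IAIDNN).

```python
# Hedged sketch of deep unfolding in general (not the paper's IAIDNN /
# WMMSE unrolling): unroll a fixed number of gradient steps for a
# least-squares problem and make each step size a trainable parameter.
import torch
import torch.nn as nn

class UnfoldedGD(nn.Module):
    def __init__(self, steps=5):
        super().__init__()
        self.step_sizes = nn.Parameter(0.1 * torch.ones(steps))  # one per layer

    def forward(self, A, y):
        x = torch.zeros(A.shape[1], 1)
        for alpha in self.step_sizes:
            x = x - alpha * A.T @ (A @ x - y)   # one unrolled GD iteration
        return x

torch.manual_seed(0)
A = torch.randn(20, 5)
x_true = torch.randn(5, 1)
y = A @ x_true

model = UnfoldedGD()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = (model(A, y) - x_true).pow(2).mean()
    loss.backward()
    opt.step()
print("learned step sizes:", model.step_sizes.data)
```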