Hybrid-Layers Neural Network Architectures for Modeling the
Self-Interference in Full-Duplex Systems
- URL: http://arxiv.org/abs/2110.09997v1
- Date: Mon, 18 Oct 2021 14:18:56 GMT
- Title: Hybrid-Layers Neural Network Architectures for Modeling the
Self-Interference in Full-Duplex Systems
- Authors: Mohamed Elsayed, Ahmad A. Aziz El-Banna, Octavia A. Dobre, Wanyi Shiu,
and Peiwei Wang
- Abstract summary: Full-duplex (FD) systems provide simultaneous transmission of information over the same frequency resources.
This article proposes two novel hybrid-layers neural network (NN) architectures to cancel the self-interference (SI) with low complexity.
The proposed NNs exploit, in a novel manner, a combination of different hidden layers (e.g., convolutional, recurrent, and/or dense) in order to model the SI with lower computational complexity than the state-of-the-art NN-based cancelers.
- Score: 23.55330151898652
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Full-duplex (FD) systems have been introduced to provide high data rates for
beyond fifth-generation wireless networks through simultaneous transmission of
information over the same frequency resources. However, the operation of FD
systems is practically limited by the self-interference (SI), and efficient SI
cancelers are sought to make the FD systems realizable. Typically,
polynomial-based cancelers are employed to mitigate the SI; nevertheless, they
suffer from high complexity. This article proposes two novel hybrid-layers
neural network (NN) architectures to cancel the SI with low complexity. The
first architecture is referred to as hybrid-convolutional recurrent NN (HCRNN),
whereas the second is termed the hybrid-convolutional recurrent dense NN
(HCRDNN). In contrast to the state-of-the-art NNs that employ dense or
recurrent layers for SI modeling, the proposed NNs exploit, in a novel manner,
a combination of different hidden layers (e.g., convolutional, recurrent,
and/or dense) in order to model the SI with lower computational complexity than
the polynomial and the state-of-the-art NN-based cancelers. The key idea behind
using hybrid layers is to build an NN model, which makes use of the
characteristics of the different layers employed in its architecture. More
specifically, in the HCRNN, a convolutional layer is employed to extract the
input data features using a reduced network scale. Moreover, a recurrent layer
is then applied to assist in learning the temporal behavior of the input signal
from the localized feature map of the convolutional layer. In the HCRDNN, an
additional dense layer is exploited to add another degree of freedom for
adapting the NN settings in order to achieve the best compromise between the
cancellation performance and computational complexity. Complexity analysis and
numerical simulations are provided to prove the superiority of the proposed
architectures.
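The paper does not include reference code, but the layer combination described in the abstract can be sketched as follows. This is a minimal, illustrative PyTorch sketch, not the authors' implementation: only the class names and the conv-recurrent(-dense) layer ordering come from the abstract, while the I/Q two-channel input convention, the choice of a GRU as the recurrent layer, the ReLU activations, the linear read-out, and all layer sizes (conv_channels, hidden_size, dense_size) are assumptions made for the example.

```python
# A minimal sketch of the hybrid-layers cancelers described in the
# abstract; hyperparameters and layer choices here are hypothetical.
import torch
import torch.nn as nn

class HCRNN(nn.Module):
    """Hybrid-convolutional recurrent NN: a convolutional layer extracts
    localized features at a reduced network scale, and a recurrent layer
    learns the temporal behavior of the SI from that feature map."""
    def __init__(self, in_channels=2, conv_channels=8, kernel_size=3,
                 hidden_size=6):
        super().__init__()
        # Conv layer over a window of transmitted baseband samples
        # (I and Q components as the two input channels).
        self.conv = nn.Conv1d(in_channels, conv_channels, kernel_size)
        # Recurrent layer (GRU assumed) models the SI's memory effects.
        self.rnn = nn.GRU(conv_channels, hidden_size, batch_first=True)
        # Linear read-out estimates the I and Q of one SI sample.
        self.out = nn.Linear(hidden_size, 2)

    def forward(self, x):             # x: (batch, 2, memory_length)
        h = torch.relu(self.conv(x))  # (batch, conv_channels, L')
        h = h.transpose(1, 2)         # (batch, L', conv_channels)
        _, last = self.rnn(h)         # final hidden state of the GRU
        return self.out(last[-1])     # (batch, 2) estimated SI sample

class HCRDNN(HCRNN):
    """HCRNN plus a dense layer, the extra degree of freedom the abstract
    mentions for trading cancellation performance against complexity."""
    def __init__(self, dense_size=4, **kwargs):
        super().__init__(**kwargs)
        self.dense = nn.Linear(kwargs.get("hidden_size", 6), dense_size)
        self.out = nn.Linear(dense_size, 2)

    def forward(self, x):
        h = torch.relu(self.conv(x))
        h = h.transpose(1, 2)
        _, last = self.rnn(h)
        return self.out(torch.relu(self.dense(last[-1])))
```

Under these assumptions, either canceler would be trained to regress SI samples from windows of transmitted samples, e.g. est = HCRNN()(torch.randn(32, 2, 13)); the estimated SI is then subtracted from the received signal.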
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
In neuromorphic computing, spiking neural networks (SNNs) perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
Recent advances in hardware and software have demonstrated that embedding a few bits of payload in each spike exchanged between the spiking neurons can further enhance inference accuracy.
This paper investigates a wireless neuromorphic split computing architecture employing multi-level SNNs.
arXiv Detail & Related papers (2024-11-07T14:08:35Z)
- Residual resampling-based physics-informed neural network for neutron diffusion equations [7.105073499157097]
The neutron diffusion equation plays a pivotal role in the analysis of nuclear reactors.
Traditional physics-informed neural network (PINN) approaches often utilize a fully connected network (FCN) architecture.
R2-PINN effectively overcomes the limitations inherent in current methods, providing more accurate and robust solutions for neutron diffusion equations.
arXiv Detail & Related papers (2024-06-23T13:49:31Z)
- Systematic construction of continuous-time neural networks for linear dynamical systems [0.0]
We discuss a systematic approach to constructing neural architectures for modeling a subclass of dynamical systems.
We use a variant of continuous-time neural networks in which the output of each neuron evolves continuously as a solution of a first-order or second-order ordinary differential equation (ODE).
Instead of deriving the network architecture and parameters from data, we propose a gradient-free algorithm to compute a sparse architecture and network parameters directly from the given linear time-invariant (LTI) system.
arXiv Detail & Related papers (2024-03-24T16:16:41Z)
- A novel Deep Neural Network architecture for non-linear system identification [78.69776924618505]
We present a novel Deep Neural Network (DNN) architecture for non-linear system identification.
Inspired by fading memory systems, we introduce an inductive bias (on the architecture) and regularization (on the loss function).
This architecture allows for automatic complexity selection based solely on available data.
arXiv Detail & Related papers (2021-06-06T10:06:07Z)
- Skip-Connected Self-Recurrent Spiking Neural Networks with Joint Intrinsic Parameter and Synaptic Weight Training [14.992756670960008]
We propose a new type of recurrent spiking neural network (RSNN) called Skip-Connected Self-Recurrent SNNs (ScSr-SNNs).
ScSr-SNNs can boost performance by up to 2.55% compared with other types of RSNNs trained by state-of-the-art BP methods.
arXiv Detail & Related papers (2020-10-23T22:27:13Z)
- Coupled Oscillatory Recurrent Neural Network (coRNN): An accurate and (gradient) stable architecture for learning long time dependencies [15.2292571922932]
We propose a novel architecture for recurrent neural networks.
Our proposed RNN is based on a time-discretization of a system of second-order ordinary differential equations.
Experiments show that the proposed RNN is comparable in performance to the state of the art on a variety of benchmarks.
arXiv Detail & Related papers (2020-10-02T12:35:04Z)
- Low Complexity Neural Network Structures for Self-Interference Cancellation in Full-Duplex Radio [21.402093766480746]
Two novel low-complexity neural networks (NNs) are proposed for modeling the SI signal with reduced computational complexity.
The two structures are referred to as the ladder-wise grid structure (LWGS) and the moving-window grid structure (MWGS).
The simulation results reveal that the LWGS and MWGS-based cancelers attain the same cancellation performance as NN-based cancelers.
arXiv Detail & Related papers (2020-09-23T20:10:08Z)
- Neural Architecture Search For LF-MMI Trained Time Delay Neural Networks [61.76338096980383]
A range of neural architecture search (NAS) techniques are used to automatically learn two types of hyperparameters of state-of-the-art factored time delay neural networks (TDNNs).
These include the DARTS method integrating architecture selection with lattice-free MMI (LF-MMI) TDNN training.
Experiments conducted on a 300-hour Switchboard corpus suggest the auto-configured systems consistently outperform the baseline LF-MMI TDNN systems.
arXiv Detail & Related papers (2020-07-17T08:32:11Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z) - Iterative Network for Image Super-Resolution [69.07361550998318]
Single image super-resolution (SISR) has been greatly revitalized by the recent development of convolutional neural networks (CNNs).
This paper provides new insight into conventional SISR algorithms and proposes a substantially different approach relying on iterative optimization.
A novel iterative super-resolution network (ISRN) is proposed on top of the iterative optimization.
arXiv Detail & Related papers (2020-05-20T11:11:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.