Non-Linear Self-Interference Cancellation via Tensor Completion
- URL: http://arxiv.org/abs/2010.01868v1
- Date: Mon, 5 Oct 2020 09:08:28 GMT
- Title: Non-Linear Self-Interference Cancellation via Tensor Completion
- Authors: Freek Jochems and Alexios Balatsoukas-Stimming
- Abstract summary: We propose a method based on low-rank tensor completion called canonical system identification (CSID).
Our results show that CSID is very effective in modeling and cancelling the non-linear SI signal and can have lower computational complexity than existing methods.
- Score: 7.9264657672268894
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Non-linear self-interference (SI) cancellation constitutes a fundamental
problem in full-duplex communications, which is typically tackled using either
polynomial models or neural networks. In this work, we explore the
applicability of a recently proposed method based on low-rank tensor
completion, called canonical system identification (CSID), to non-linear SI
cancellation. Our results show that CSID is very effective in modeling and
cancelling the non-linear SI signal and can have lower computational complexity
than existing methods, albeit at the cost of increased memory requirements.
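The complexity advantage of CP-based models comes from never forming the full coefficient tensor: a rank-R canonical polyadic (CP) decomposition lets a high-order non-linear term be evaluated directly through its factor matrices. A minimal sketch of this idea for a cubic term (illustrative sizes and random factors; not the paper's actual CSID algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
M, R = 4, 2          # memory length and CP rank (illustrative values)
# Hypothetical rank-R CP factors of the cubic-term coefficient tensor.
A, B, C = (rng.standard_normal((M, R)) for _ in range(3))
# Full M x M x M coefficient tensor reconstructed from the factors.
T = np.einsum('ir,jr,kr->ijk', A, B, C)

x = rng.standard_normal(M)   # vector of delayed transmit samples

# Direct evaluation of the cubic term: O(M^3) multiply-adds.
y_full = np.einsum('ijk,i,j,k->', T, x, x, x)
# Factored evaluation through the CP factors: O(3*M*R) multiply-adds.
y_cp = np.sum((A.T @ x) * (B.T @ x) * (C.T @ x))

assert np.isclose(y_full, y_cp)
```

The factored form is what makes the per-sample cost linear in the memory length when the rank is small, at the price of storing the factor matrices.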
Related papers
- Exact identification of nonlinear dynamical systems by Trimmed Lasso [0.0]
Identification of nonlinear dynamical systems has been popularized by sparse identification of the nonlinear dynamics (SINDy) algorithm.
E-SINDy was proposed for model identification, handling finite, highly noisy data.
In this paper, we demonstrate that the Trimmed Lasso for robust identification of models (TRIM) can provide exact recovery under more severe noise, finite data, and multicollinearity as opposed to E-SINDy.
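SINDy-style identification reduces to sparse regression over a library of candidate functions. A minimal sketch using sequential thresholded least squares (the SINDy core loop) on a noise-free toy system; the library, threshold, and data are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 200)
y = 2.0 * x - 0.5 * x**3            # ground-truth sparse dynamics

# Candidate function library (hypothetical choice of basis).
Theta = np.column_stack([np.ones_like(x), x, x**2, x**3])

# Sequential thresholded least squares: fit, zero small terms, refit.
xi = np.linalg.lstsq(Theta, y, rcond=None)[0]
for _ in range(10):
    small = np.abs(xi) < 0.1        # hard threshold on coefficients
    xi[small] = 0.0
    big = ~small
    xi[big] = np.linalg.lstsq(Theta[:, big], y, rcond=None)[0]

print(np.round(xi, 3))   # recovers the sparse coefficients 0, 2, 0, -0.5
```

The Trimmed Lasso replaces the hard-thresholding step with a different sparsity-promoting penalty, which is what yields the robustness to noise and multicollinearity claimed above.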
arXiv Detail & Related papers (2023-08-03T17:37:18Z)
- Neural Abstractions [72.42530499990028]
We present a novel method for the safety verification of nonlinear dynamical models that uses neural networks to represent abstractions of their dynamics.
We demonstrate that our approach performs comparably to the mature tool Flow* on existing benchmark nonlinear models.
arXiv Detail & Related papers (2023-01-27T12:38:09Z)
- An Accelerated Doubly Stochastic Gradient Method with Faster Explicit Model Identification [97.28167655721766]
We propose a novel doubly accelerated gradient descent (ADSGD) method for sparsity regularized loss minimization problems.
We first prove that ADSGD can achieve a linear convergence rate and lower overall computational complexity.
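The problem class ADSGD targets, sparsity-regularized loss minimization, can be illustrated with a plain proximal-gradient baseline (ISTA) on an L1-regularized least-squares problem. This is not the ADSGD algorithm itself (no acceleration, no stochastic sampling), and the data and regularization weight are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 100, 20
A = rng.standard_normal((n, d))
x_true = np.zeros(d)
x_true[:3] = [3.0, -2.0, 1.5]          # sparse ground truth
b = A @ x_true

lam = 0.1                              # L1 regularization weight
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
x = np.zeros(d)
for _ in range(500):
    g = A.T @ (A @ x - b)              # gradient of the smooth part
    z = x - g / L                      # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold

# The iterate concentrates on the true support; off-support entries shrink
# to (near) zero.
```

ADSGD's contribution is to accelerate this kind of iteration and sample both data points and coordinates stochastically, while keeping the sparsity-inducing proximal step.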
arXiv Detail & Related papers (2022-08-11T22:27:22Z)
- A Priori Denoising Strategies for Sparse Identification of Nonlinear Dynamical Systems: A Comparative Study [68.8204255655161]
We investigate and compare the performance of several local and global smoothing techniques to a priori denoise the state measurements.
We show that, in general, global methods, which use the entire measurement data set, outperform local methods, which employ a neighboring data subset around a local point.
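The global-versus-local distinction can be illustrated with two simple smoothers: a single polynomial fit over the whole record (global) against a moving average over a small neighborhood (local). All choices below (signal, noise level, polynomial degree, window size) are illustrative, not the paper's benchmark:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 400)
clean = np.sin(2 * np.pi * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)

# Global smoother: one polynomial fit over the entire measurement record.
coef = np.polyfit(t, noisy, deg=7)
global_hat = np.polyval(coef, t)

# Local smoother: moving average over a 9-sample neighborhood.
w = 9
local_hat = np.convolve(noisy, np.ones(w) / w, mode='same')

rmse = lambda est: np.sqrt(np.mean((est - clean) ** 2))
print(rmse(global_hat), rmse(local_hat))
```

On this toy signal the global fit averages the noise over all 400 samples and lands well below the 9-sample moving average, consistent with the comparison reported above.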
arXiv Detail & Related papers (2022-01-29T23:31:25Z)
- A novel Deep Neural Network architecture for non-linear system identification [78.69776924618505]
We present a novel Deep Neural Network (DNN) architecture for non-linear system identification.
Inspired by fading memory systems, we introduce inductive bias (on the architecture) and regularization (on the loss function).
This architecture allows for automatic complexity selection based solely on available data.
arXiv Detail & Related papers (2021-06-06T10:06:07Z)
- Least-Squares ReLU Neural Network (LSNN) Method For Linear Advection-Reaction Equation [3.6525914200522656]
This paper studies least-squares ReLU neural network method for solving the linear advection-reaction problem with discontinuous solution.
The method is capable of approximating the discontinuous interface of the underlying problem automatically through the free hyper-planes of the ReLU neural network.
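The mechanism can be sketched with fixed, hand-picked ReLU units rather than the trained free hyper-planes of the paper: the difference of two ReLUs is a ramp whose width shrinks as the slope grows, so a steep pair approximates a step at the interface. A minimal illustration:

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

# Difference of two ReLU units: a ramp of width 1/k, i.e. a sharp step
# at x = 0 when the slope k is large.
k = 1000.0
step_hat = lambda x: relu(k * x + 1.0) - relu(k * x)

x = np.linspace(-1, 1, 2001)
target = (x > 0).astype(float)
err = np.abs(step_hat(x) - target)
# Exact outside the O(1/k) transition band around the interface.
```

In the LSNN method the hyper-plane locations and slopes are free parameters fitted by least squares, which is what lets the network place the transition band on the discontinuous interface automatically.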
arXiv Detail & Related papers (2021-05-25T03:13:15Z)
- On the Stability Properties and the Optimization Landscape of Training Problems with Squared Loss for Neural Networks and General Nonlinear Conic Approximation Schemes [0.0]
We study the optimization landscape and the stability properties of training problems with squared loss for neural networks and general nonlinear conic approximation schemes.
We prove that the same effects that are responsible for these instability properties are also the reason for the emergence of saddle points and spurious local minima.
arXiv Detail & Related papers (2020-11-06T11:34:59Z)
- Identification of Probability weighted ARX models with arbitrary domains [75.91002178647165]
PieceWise Affine models guarantee universal approximation, local linearity, and equivalence to other classes of hybrid systems.
In this work, we focus on the identification of PieceWise Auto Regressive with eXogenous input models with arbitrary regions (NPWARX).
The architecture is conceived following the Mixture of Experts concept, developed within the machine learning field.
arXiv Detail & Related papers (2020-09-29T12:50:33Z)
- Low Complexity Neural Network Structures for Self-Interference Cancellation in Full-Duplex Radio [21.402093766480746]
Two novel low-complexity neural networks (NNs) are proposed for modeling the SI signal with reduced computational complexity.
The two structures are referred to as the ladder-wise grid structure (LWGS) and the moving-window grid structure (MWGS).
The simulation results reveal that the LWGS- and MWGS-based cancelers attain the same cancellation performance as NN-based cancelers.
arXiv Detail & Related papers (2020-09-23T20:10:08Z)
- Differentiable Causal Discovery from Interventional Data [141.41931444927184]
We propose a theoretically-grounded method based on neural networks that can leverage interventional data.
We show that our approach compares favorably to the state of the art in a variety of settings.
arXiv Detail & Related papers (2020-07-03T15:19:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.