Interference Cancellation GAN Framework for Dynamic Channels
- URL: http://arxiv.org/abs/2208.08019v1
- Date: Wed, 17 Aug 2022 02:01:18 GMT
- Title: Interference Cancellation GAN Framework for Dynamic Channels
- Authors: Hung T. Nguyen, Steven Bottone, Kwang Taik Kim, Mung Chiang, H.
Vincent Poor
- Abstract summary: We introduce an online training framework that can adapt to any changes in the channel.
Our framework significantly outperforms recent neural network models on highly dynamic channels.
- Score: 74.22393885274728
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Symbol detection is a fundamental and challenging problem in modern
communication systems, e.g., multiuser multiple-input multiple-output (MIMO)
setting. Iterative Soft Interference Cancellation (SIC) is a state-of-the-art
method for this task and recently motivated data-driven neural network models,
e.g., DeepSIC, that can deal with unknown non-linear channels. However, these
neural network models require thorough, time-consuming training before
deployment and are thus not readily suitable for highly dynamic channels
in practice. We introduce an online training framework that can swiftly adapt
to any changes in the channel. Our proposed framework unifies the recent deep
unfolding approaches with the emerging generative adversarial networks (GANs)
to capture any changes in the channel and quickly adjust the networks to
maintain the top performance of the model. We demonstrate that our framework
significantly outperforms recent neural network models on highly dynamic
channels and even surpasses those on the static channel in our experiments.
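The iterative soft interference cancellation (SIC) scheme that the abstract builds on can be sketched for a simple linear model. The snippet below is a minimal numpy illustration of classical soft SIC for BPSK symbols over a channel y = Hx + n, not the paper's DeepSIC or GAN-based framework; the function name and parameters are illustrative.

```python
# Minimal sketch of iterative soft interference cancellation (SIC) for a
# linear MIMO channel y = Hx + n with BPSK symbols. Illustrative only; the
# paper replaces the model-based soft estimators with learned networks.
import numpy as np

def soft_sic(y, H, noise_var, n_iters=5):
    """Iteratively refine soft symbol estimates for each user."""
    n_users = H.shape[1]
    soft = np.zeros(n_users)  # soft BPSK estimates in (-1, 1)
    for _ in range(n_iters):
        for k in range(n_users):
            # Cancel the current soft estimates of all other users.
            interference = H @ soft - H[:, k] * soft[k]
            residual = y - interference
            # Matched-filter statistic for user k on the cleaned signal.
            z = H[:, k] @ residual / (H[:, k] @ H[:, k])
            # Soft decision: posterior mean of a BPSK symbol in Gaussian noise.
            soft[k] = np.tanh(z / noise_var)
    return np.sign(soft)  # hard decisions after the final iteration
```

Deep-unfolded detectors such as DeepSIC keep this iterative structure but replace each cancellation-and-estimation stage with a small neural network, which is what makes fast online re-training of those stages valuable on dynamic channels.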
Related papers
- Modeling of Time-varying Wireless Communication Channel with Fading and Shadowing [0.0]
We propose a new approach that combines a deep learning neural network with a mixture density network model to derive the conditional probability density function of receiving power.
Experiments on Nakagami fading channel model and Log-normal shadowing channel model with path loss and noise show that the new approach is more statistically accurate, faster, and more robust than the previous deep learning-based channel models.
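The mixture density network in this related paper outputs the parameters of a Gaussian mixture as the conditional density of received power. A minimal numpy sketch of evaluating such a mixture density (the network that produces the parameters is omitted; names are illustrative):

```python
import numpy as np

def mixture_pdf(x, weights, means, sigmas):
    # Gaussian mixture density: sum_i w_i * N(x; mu_i, sigma_i^2).
    # In an MDN, (weights, means, sigmas) would be the network's outputs
    # conditioned on the channel state; here they are given directly.
    comps = weights * np.exp(-0.5 * ((x - means) / sigmas) ** 2) \
            / (sigmas * np.sqrt(2.0 * np.pi))
    return comps.sum()
```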
arXiv Detail & Related papers (2024-05-13T21:30:50Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Achieving Robust Generalization for Wireless Channel Estimation Neural
Networks by Designed Training Data [1.0499453838486013]
We propose a method to design the training data that can support robust generalization of trained neural networks to unseen channels.
It avoids the requirement of online training for previously unseen channels, which is a memory- and processing-intensive solution.
Simulation results show that the trained neural networks maintain almost identical performance on the unseen channels.
arXiv Detail & Related papers (2023-02-05T04:53:07Z) - Dynamics-aware Adversarial Attack of Adaptive Neural Networks [75.50214601278455]
We investigate the dynamics-aware adversarial attack problem of adaptive neural networks.
We propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.
Our LGM achieves impressive adversarial attack performance compared with the dynamic-unaware attack methods.
arXiv Detail & Related papers (2022-10-15T01:32:08Z) - Dynamic Slimmable Denoising Network [64.77565006158895]
Dynamic slimmable denoising network (DDS-Net) is a general method to achieve good denoising quality with less computational complexity.
DDS-Net is empowered with the ability of dynamic inference by a dynamic gate.
Our experiments demonstrate DDS-Net consistently outperforms the state-of-the-art individually trained static denoising networks.
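The "dynamic gate" idea can be sketched in a few lines: a cheap statistic of the input routes it to a slimmer or wider sub-network. This is a hypothetical simplification, not the paper's gate; the thresholding rule and names are illustrative.

```python
def dynamic_gate(feature_summary, widths, thresholds):
    # Route each input to a sub-network width based on a cheap summary
    # statistic: easier inputs (low statistic) use slimmer networks,
    # harder inputs use wider ones. Purely illustrative.
    for width, t in zip(widths[:-1], thresholds):
        if feature_summary < t:
            return width
    return widths[-1]
```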
arXiv Detail & Related papers (2021-10-17T22:45:33Z) - End-to-end learnable EEG channel selection with deep neural networks [72.21556656008156]
We propose a framework to embed the EEG channel selection in the neural network itself.
We deal with the discrete nature of this new optimization problem by employing continuous relaxations of the discrete channel selection parameters.
This generic approach is evaluated on two different EEG tasks.
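The continuous relaxation mentioned above can be illustrated with a temperature-scaled softmax over channel logits: at high temperature the selection weights are diffuse and differentiable; as the temperature is annealed toward zero they approach a hard one-hot choice. A minimal numpy sketch (illustrative; the paper's exact relaxation may differ):

```python
import numpy as np

def relaxed_channel_selection(logits, temperature):
    # Softmax over learnable per-channel logits. Annealing the temperature
    # toward zero pushes the weights toward a hard one-hot selection while
    # keeping the operation differentiable during training.
    z = logits / temperature
    z = z - z.max()  # subtract max for numerical stability
    w = np.exp(z)
    return w / w.sum()
```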
arXiv Detail & Related papers (2021-02-11T13:44:07Z) - Deep Neural Networks using a Single Neuron: Folded-in-Time Architecture
using Feedback-Modulated Delay Loops [0.0]
We present a method for folding a deep neural network of arbitrary size into a single neuron with multiple time-delayed feedback loops.
This single-neuron deep neural network comprises only a single nonlinearity and appropriately adjusted modulations of the feedback signals.
The new method, which we call Folded-in-time DNN (Fit-DNN), exhibits promising performance in a set of benchmark tasks.
arXiv Detail & Related papers (2020-11-19T21:45:58Z) - Learning to Prune in Training via Dynamic Channel Propagation [7.974413827589133]
We propose a novel network training mechanism called "dynamic channel propagation"
We pick up a specific group of channels in each convolutional layer to participate in the forward propagation in training time.
When the training ends, channels with high utility values are retained whereas those with low utility values are discarded.
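The selection step described above, keeping only the highest-utility channels in the forward pass, can be sketched as a top-k mask. This is an illustrative fragment, not the paper's training mechanism; how the utility values are accumulated is omitted.

```python
import numpy as np

def select_channels(utilities, k):
    # Build a binary mask keeping the k channels with highest utility;
    # masked-out channels are skipped in the forward pass and, after
    # training, discarded entirely.
    keep = np.argsort(utilities)[-k:]
    mask = np.zeros_like(utilities)
    mask[keep] = 1.0
    return mask
```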
arXiv Detail & Related papers (2020-07-03T04:02:41Z) - The FaceChannel: A Light-weight Deep Neural Network for Facial
Expression Recognition [71.24825724518847]
Current state-of-the-art models for automatic FER are based on very deep neural networks that are difficult to train.
We formalize the FaceChannel, a light-weight neural network that has much fewer parameters than common deep neural networks.
We demonstrate how the FaceChannel achieves a comparable, if not better, performance, as compared to the current state-of-the-art in FER.
arXiv Detail & Related papers (2020-04-17T12:03:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.