Achieving Robust Generalization for Wireless Channel Estimation Neural
Networks by Designed Training Data
- URL: http://arxiv.org/abs/2302.02302v1
- Date: Sun, 5 Feb 2023 04:53:07 GMT
- Title: Achieving Robust Generalization for Wireless Channel Estimation Neural
Networks by Designed Training Data
- Authors: Dianxin Luan, John Thompson
- Abstract summary: We propose a method to design the training data that can support robust generalization of trained neural networks to unseen channels.
It avoids the requirement of online training for previously unseen channels, as this is a memory- and processing-intensive solution.
Simulation results show that the trained neural networks maintain almost identical performance on the unseen channels.
- Score: 1.0499453838486013
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a method to design the training data that can
support robust generalization of trained neural networks to unseen channels.
The proposed design that improves the generalization is described and analysed.
It avoids the requirement of online training for previously unseen channels, as
this is a memory- and processing-intensive solution, especially for
battery-powered mobile terminals. To prove the validity of the proposed method,
we simulate channels modelled by different standards and with different fading
models. We also use an attention-based structure and a convolutional neural
network to evaluate the generalization results achieved. Simulation results
show that the trained neural networks maintain almost identical performance on
the unseen channels.
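The core idea, covering the statistics of unseen channels at training time rather than retraining online, can be made concrete with a short sketch. The snippet below is a minimal illustration under assumed parameters (random tap delays, exponentially decaying tap powers, all-ones pilots), not the authors' exact design: it synthesizes training pairs of noisy least-squares pilot estimates and true frequency responses from randomized power-delay profiles, so that a network trained on them sees a wide spread of frequency-selective channels.

```python
import numpy as np

def random_channel(n_subcarriers=64, max_taps=8, max_delay=16, rng=None):
    """Synthesize one frequency-selective channel from a randomized
    power-delay profile (random tap count, delays, and decay rate).
    The randomization ranges are illustrative assumptions."""
    if rng is None:
        rng = np.random.default_rng()
    n_taps = int(rng.integers(1, max_taps + 1))
    delays = np.sort(rng.choice(max_delay, size=n_taps, replace=False))
    powers = np.exp(-delays / rng.uniform(1.0, max_delay))  # exponential decay
    powers /= powers.sum()
    gains = np.sqrt(powers / 2) * (rng.standard_normal(n_taps)
                                   + 1j * rng.standard_normal(n_taps))
    h_time = np.zeros(n_subcarriers, dtype=complex)
    h_time[delays] = gains
    return np.fft.fft(h_time)  # channel frequency response

def make_training_pair(snr_db=10.0, rng=None):
    """Return (noisy LS estimate, true channel); with all-ones pilots the
    least-squares estimate reduces to the channel plus noise."""
    if rng is None:
        rng = np.random.default_rng()
    h = random_channel(rng=rng)
    noise_std = 10 ** (-snr_db / 20)
    noise = noise_std / np.sqrt(2) * (rng.standard_normal(h.shape)
                                      + 1j * rng.standard_normal(h.shape))
    return h + noise, h
```

Training on pairs drawn this way, instead of from a single standardized channel model, is the kind of coverage the paper argues removes the need for online retraining on unseen channels.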
Related papers
- Epistemic Modeling Uncertainty of Rapid Neural Network Ensembles for Adaptive Learning [0.0]
A new type of neural network is presented using the rapid neural network paradigm.
It is found that the proposed emulator-embedded neural network trains near-instantaneously, typically without loss of prediction accuracy.
arXiv Detail & Related papers (2023-09-12T22:34:34Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Generalization and Estimation Error Bounds for Model-based Neural Networks [78.88759757988761]
We show that the generalization abilities of model-based networks for sparse recovery outperform those of regular ReLU networks.
We derive practical design rules that allow us to construct model-based networks with guaranteed high generalization.
arXiv Detail & Related papers (2023-04-19T16:39:44Z)
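"Model-based networks for sparse recovery" in the entry above typically means unrolled iterative algorithms such as LISTA; the abstract does not pin down an architecture, so the PyTorch sketch below is one assumed instantiation with learned per-layer weights and soft thresholds.

```python
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    """LISTA-style model-based network: each layer is one ISTA iteration
    x <- soft_threshold(W y + S x) with learned W, S, and thresholds."""
    def __init__(self, m, n, n_layers=5):
        super().__init__()
        self.W = nn.ModuleList(nn.Linear(m, n, bias=False) for _ in range(n_layers))
        self.S = nn.ModuleList(nn.Linear(n, n, bias=False) for _ in range(n_layers))
        self.theta = nn.Parameter(torch.full((n_layers,), 0.1))  # soft thresholds

    def forward(self, y):
        x = torch.zeros(y.shape[0], self.S[0].in_features, device=y.device)
        for W, S, theta in zip(self.W, self.S, self.theta):
            z = W(y) + S(x)
            x = torch.sign(z) * torch.relu(z.abs() - theta)  # soft-thresholding
        return x
```

This iteration structure is the "model-based" inductive bias that the abstract contrasts with regular ReLU networks.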
- Channelformer: Attention based Neural Solution for Wireless Channel Estimation and Effective Online Training [1.0499453838486013]
We propose an encoder-decoder neural architecture (called Channelformer) to achieve improved channel estimation.
We employ multi-head attention in the encoder and a residual convolutional neural architecture as the decoder.
We also propose an effective online training method based on the fifth generation (5G) new radio (NR) configuration for modern communication systems.
arXiv Detail & Related papers (2023-02-08T23:18:23Z)
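The architecture described above can be sketched directly: a multi-head self-attention encoder over pilot symbols and a residual convolutional decoder producing the full channel estimate. Sizes, depths, and the pilot embedding below are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ChannelformerSketch(nn.Module):
    """Encoder-decoder sketch after the abstract: multi-head attention
    encoder, residual convolutional decoder (dimensions assumed)."""
    def __init__(self, n_pilots=64, n_subcarriers=128, d_model=64, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(2, d_model)  # (real, imag) per pilot
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.proj = nn.Linear(n_pilots * d_model, n_subcarriers * 2)
        self.decoder = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 2, kernel_size=3, padding=1),
        )

    def forward(self, pilots):              # pilots: (batch, n_pilots, 2)
        z = self.embed(pilots)
        z, _ = self.attn(z, z, z)           # encoder: multi-head self-attention
        h = self.proj(z.flatten(1))         # lift pilots to all subcarriers
        h = h.view(h.shape[0], 2, -1)       # (batch, 2, n_subcarriers)
        return h + self.decoder(h)          # residual convolutional decoder
```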
- Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth [92.25666446274188]
Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth.
arXiv Detail & Related papers (2022-11-26T07:41:48Z)
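A minimal SIREN-style sine layer makes the bandwidth point concrete: the frequency scale omega0 below is the knob the kernel analysis ties to the low-pass bandwidth. The default of 30 is the common SIREN choice, an assumption rather than this paper's prescription.

```python
import numpy as np
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear map followed by sin(omega0 * x). omega0 scales the layer's
    frequencies, and hence the bandwidth of the network's tangent kernel."""
    def __init__(self, in_features, out_features, omega0=30.0, first=False):
        super().__init__()
        self.omega0 = omega0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():  # standard SIREN-style initialization
            bound = 1 / in_features if first else np.sqrt(6 / in_features) / omega0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega0 * self.linear(x))
```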
- Interference Cancellation GAN Framework for Dynamic Channels [74.22393885274728]
We introduce an online training framework that can adapt to any changes in the channel.
Our framework significantly outperforms recent neural network models on highly dynamic channels.
arXiv Detail & Related papers (2022-08-17T02:01:18Z)
- Embedding Graph Convolutional Networks in Recurrent Neural Networks for Predictive Monitoring [0.0]
This paper proposes an approach based on graph convolutional networks and recurrent neural networks.
An experimental evaluation on real-life event logs shows that our approach is more consistent and outperforms the current state-of-the-art approaches.
arXiv Detail & Related papers (2021-12-17T17:30:30Z)
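The abstract above does not specify the wiring, so the sketch below shows one plausible reading with assumed dimensions: a per-timestep graph convolution (normalized adjacency times features times weights) whose pooled output feeds an LSTM for the sequence prediction.

```python
import torch
import torch.nn as nn

class GCNIntoLSTM(nn.Module):
    """Per-timestep graph convolution relu(A_hat X W), pooled over nodes
    and fed to an LSTM -- one plausible reading of 'GCNs embedded in RNNs'."""
    def __init__(self, n_node_feats, hidden=32):
        super().__init__()
        self.gcn = nn.Linear(n_node_feats, hidden, bias=False)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x_seq, a_hat):
        # x_seq: (batch, time, nodes, feats); a_hat: normalized adjacency (nodes, nodes)
        z = torch.relu(a_hat @ self.gcn(x_seq))  # graph convolution per timestep
        z = z.mean(dim=2)                        # pool nodes -> (batch, time, hidden)
        out, _ = self.lstm(z)                    # recurrent pass over the event sequence
        return self.head(out[:, -1])             # predict from the last step
```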
- Subquadratic Overparameterization for Shallow Neural Networks [60.721751363271146]
We provide an analytical framework that allows us to adopt standard neural training strategies.
We achieve the desiderata via Polyak-Łojasiewicz, smoothness, and standard assumptions.
arXiv Detail & Related papers (2021-11-02T20:24:01Z)
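For reference, the condition named there is the Polyak-Łojasiewicz (PL) inequality, stated below in its standard form (the paper's exact constants may differ):

```latex
% PL inequality for a smooth loss L with infimum L^*: a small gradient
% implies near-optimality, which yields linear convergence of gradient descent.
\[
  \tfrac{1}{2}\,\lVert \nabla \mathcal{L}(\theta) \rVert^{2}
  \;\ge\; \mu \bigl( \mathcal{L}(\theta) - \mathcal{L}^{*} \bigr),
  \qquad \mu > 0 .
\]
```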
- Local Critic Training for Model-Parallel Learning of Deep Neural Networks [94.69202357137452]
We propose a novel model-parallel learning method, called local critic training.
We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
We also show that networks trained by the proposed method can be used for structural optimization.
arXiv Detail & Related papers (2021-02-03T09:30:45Z)
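A minimal sketch of the decoupled updates described above, under assumed dimensions and with a single critic: the critic maps the first layer group's activations straight to class scores, so that group can update without backpropagating through the rest of the network. (In the actual method the critic itself is also trained to track the downstream loss, which this sketch omits.)

```python
import torch
import torch.nn as nn

# Two layer groups; the local critic gives group 1 a loss signal that does
# not depend on group 2's backward pass (decoupled, model-parallel updates).
group1 = nn.Sequential(nn.Linear(20, 64), nn.ReLU())
group2 = nn.Linear(64, 10)
critic = nn.Linear(64, 10)  # local critic attached to group 1's output
opt1 = torch.optim.SGD([*group1.parameters(), *critic.parameters()], lr=1e-2)
opt2 = torch.optim.SGD(group2.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(8, 20), torch.randint(0, 10, (8,))
h = group1(x)
loss_fn(critic(h), y).backward()            # group 1 updates via its local critic
opt1.step(); opt1.zero_grad()
loss_fn(group2(h.detach()), y).backward()   # group 2 updates from the true loss
opt2.step(); opt2.zero_grad()
```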
- Learning to Prune in Training via Dynamic Channel Propagation [7.974413827589133]
We propose a novel network training mechanism called "dynamic channel propagation".
We select a specific group of channels in each convolutional layer to participate in the forward propagation at training time.
When the training ends, channels with high utility values are retained whereas those with low utility values are discarded.
arXiv Detail & Related papers (2020-07-03T04:02:41Z)
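A hedged sketch of the mechanism in the entry above: each convolutional layer keeps a utility score per output channel, only the currently high-utility group participates in forward propagation, and low-utility channels are discarded when training ends. The top-k gating and the utility buffer below are illustrative stand-ins; in the paper the utilities are accumulated from training feedback rather than fixed.

```python
import torch
import torch.nn as nn

class UtilityGatedConv(nn.Module):
    """Convolution whose output channels are gated by per-channel utility
    scores: only the top-k channels join the forward pass, and the rest
    can be pruned after training (gating rule assumed for illustration)."""
    def __init__(self, in_ch, out_ch, keep=0.5):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        # utility scores; in the paper these are updated from training feedback
        self.register_buffer("utility", torch.ones(out_ch))
        self.k = max(1, int(out_ch * keep))

    def forward(self, x):
        y = self.conv(x)
        mask = torch.zeros_like(self.utility)
        mask[self.utility.topk(self.k).indices] = 1.0  # active channel group
        return y * mask.view(1, -1, 1, 1)
```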
This list is automatically generated from the titles and abstracts of the papers on this site.