A Comparison of Neural Networks for Wireless Channel Prediction
- URL: http://arxiv.org/abs/2308.14020v1
- Date: Sun, 27 Aug 2023 06:39:46 GMT
- Title: A Comparison of Neural Networks for Wireless Channel Prediction
- Authors: Oscar Stenhammar, Gabor Fodor, Carlo Fischione
- Abstract summary: It is unclear which neural network-based scheme provides the best performance in terms of prediction quality, training complexity and practical feasibility.
This paper first provides an overview of state-of-the-art neural networks applicable to channel prediction and compares their performance in terms of prediction quality.
The advantages and disadvantages of each neural network are discussed and guidelines for selecting the best-suited neural network in channel prediction applications are given.
- Score: 10.721189858694398
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The performance of modern wireless communications systems depends critically
on the quality of the available channel state information (CSI) at the
transmitter and receiver. Several previous works have proposed concepts and
algorithms that help maintain high quality CSI even in the presence of high
mobility and channel aging, such as temporal prediction schemes that employ
neural networks. However, it is still unclear which neural network-based scheme
provides the best performance in terms of prediction quality, training
complexity and practical feasibility. To investigate such a question, this
paper first provides an overview of state-of-the-art neural networks applicable
to channel prediction and compares their performance in terms of prediction
quality. Next, a new comparative analysis is proposed for four promising neural
networks with different prediction horizons. The well-known tapped delay
channel model recommended by the Third Generation Partnership Project (3GPP) is used
for a standardized comparison among the neural networks. Based on this
comparative evaluation, the advantages and disadvantages of each neural network
are discussed and guidelines for selecting the best-suited neural network in
channel prediction applications are given.
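As a concrete illustration of the prediction task studied in the paper, the sketch below generates a synthetic narrowband fading channel (a sum of complex sinusoids, a simplified stand-in for the 3GPP tapped-delay-line model) and fits a one-step linear autoregressive predictor by least squares. The window length, Doppler shifts, and noise level are illustrative assumptions, not the paper's setup; the neural predictors compared in the paper would replace the linear fit over the same sliding windows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic narrowband fading channel: a sum of complex sinusoids
# (a Jakes-like model; the 3GPP tapped-delay-line model used in the
# paper defines one such fading process per tap).
n = np.arange(1200)
dopplers = np.array([0.010, 0.023, 0.037])   # normalized Doppler shifts (assumed)
gains = np.array([1.0, 0.6, 0.3]) * np.exp(1j * rng.uniform(0, 2 * np.pi, 3))
h = (gains[:, None] * np.exp(2j * np.pi * dopplers[:, None] * n)).sum(axis=0)
# Small complex noise standing in for CSI estimation error.
h += 0.01 * (rng.standard_normal(len(n)) + 1j * rng.standard_normal(len(n)))

def make_dataset(h, window):
    """Sliding windows of past CSI samples as features, next sample as target."""
    X = np.stack([h[i:i + window] for i in range(len(h) - window)])
    y = h[window:]
    return X, y

window = 8
X, y = make_dataset(h, window)
split = 800

# Linear autoregressive predictor fitted by least squares -- the classical
# baseline that neural channel predictors are typically compared against.
w, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
y_hat = X[split:] @ w

# Normalized mean square error of the one-step prediction.
nmse = np.mean(np.abs(y_hat - y[split:]) ** 2) / np.mean(np.abs(y[split:]) ** 2)
print(f"one-step prediction NMSE: {nmse:.2e}")
```

NMSE over a held-out segment, as computed here, is a common prediction-quality metric for this task; a neural predictor (RNN, LSTM, or MLP) would consume the same windowed inputs and be evaluated identically, just at longer prediction horizons.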
Related papers
- Robust Channel Estimation for Optical Wireless Communications Using Neural Network [0.44816207812864195]
This paper presents a low-complexity, robust channel estimation framework to mitigate frequency-selective effects.
A neural network can estimate general optical wireless channels without prior channel information about the environment.
Simulation results demonstrate that the proposed method has improved and robust normalized mean square error (NMSE) and bit error rate (BER) performance.
arXiv Detail & Related papers (2025-04-02T21:16:34Z)
- Unveiling the Power of Sparse Neural Networks for Feature Selection [60.50319755984697]
Sparse Neural Networks (SNNs) have emerged as powerful tools for efficient feature selection.
We show that feature selection with SNNs trained with dynamic sparse training (DST) algorithms can achieve, on average, more than 50% memory and 55% FLOPs reduction.
arXiv Detail & Related papers (2024-08-08T16:48:33Z)
- Analyzing Neural Network-Based Generative Diffusion Models through Convex Optimization [45.72323731094864]
We present a theoretical framework to analyze two-layer neural network-based diffusion models.
We prove that training shallow neural networks for score prediction can be done by solving a single convex program.
Our results provide a precise characterization of what neural network-based diffusion models learn in non-asymptotic settings.
arXiv Detail & Related papers (2024-02-03T00:20:25Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Learning to Precode for Integrated Sensing and Communications Systems [11.689567114100514]
We present an unsupervised learning neural model to design transmit precoders for ISAC systems.
We show that the proposed method outperforms traditional optimization-based methods in the presence of channel estimation errors.
arXiv Detail & Related papers (2023-03-11T11:24:18Z)
- Achieving Robust Generalization for Wireless Channel Estimation Neural Networks by Designed Training Data [1.0499453838486013]
We propose a method to design the training data that can support robust generalization of trained neural networks to unseen channels.
It avoids the requirement of online training for previously unseen channels, which would be a memory- and processing-intensive solution.
Simulation results show that the trained neural networks maintain almost identical performance on the unseen channels.
arXiv Detail & Related papers (2023-02-05T04:53:07Z)
- Statistical Physics of Deep Neural Networks: Initialization toward Optimal Channels [6.144858413112823]
In deep learning, neural networks serve as noisy channels between input data and its representation.
We study a frequently overlooked possibility that neural networks can intrinsically evolve toward optimal channels.
arXiv Detail & Related papers (2022-12-04T05:13:01Z)
- NCTV: Neural Clamping Toolkit and Visualization for Neural Network Calibration [66.22668336495175]
Neural networks that lack proper calibration will not gain trust from humans.
We introduce the Neural Clamping Toolkit, the first open-source framework designed to help developers employ state-of-the-art model-agnostic calibrated models.
arXiv Detail & Related papers (2022-11-29T15:03:05Z)
- Neural Capacitance: A New Perspective of Neural Network Selection via Edge Dynamics [85.31710759801705]
Current practice incurs expensive computational cost in model training for performance prediction.
We propose a novel framework for neural network selection by analyzing the governing dynamics over synaptic connections (edges) during training.
Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections.
arXiv Detail & Related papers (2022-01-11T20:53:15Z)
- CCasGNN: Collaborative Cascade Prediction Based on Graph Neural Networks [0.49269463638915806]
Cascade prediction aims at modeling information diffusion in the network.
Recent efforts have been devoted to combining network structure and sequence features via graph neural networks and recurrent neural networks.
We propose a novel method CCasGNN considering the individual profile, structural features, and sequence information.
arXiv Detail & Related papers (2021-12-07T11:37:36Z)
- Nonlinear Weighted Directed Acyclic Graph and A Priori Estimates for Neural Networks [9.43712471169533]
We first present a novel graph-theoretical formulation of neural network models, including fully connected networks, residual networks (ResNet), and densely connected networks (DenseNet).
We extend the error analysis of the population risk for two-layer networks and ResNet to DenseNet, and show further that for neural networks satisfying certain mild conditions, similar estimates can be obtained.
arXiv Detail & Related papers (2021-03-30T13:54:33Z)
- Neural Networks with Recurrent Generative Feedback [61.90658210112138]
We instantiate this design on convolutional neural networks (CNNs).
In the experiments, CNN-F shows considerably improved adversarial robustness over conventional feedforward CNNs on standard benchmarks.
arXiv Detail & Related papers (2020-07-17T19:32:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.