Predicting Flat-Fading Channels via Meta-Learned Closed-Form Linear
Filters and Equilibrium Propagation
- URL: http://arxiv.org/abs/2110.00414v1
- Date: Fri, 1 Oct 2021 14:00:23 GMT
- Title: Predicting Flat-Fading Channels via Meta-Learned Closed-Form Linear
Filters and Equilibrium Propagation
- Authors: Sangwoo Park, Osvaldo Simeone
- Abstract summary: Predicting fading channels is a classical problem with a vast array of applications.
In practice, the Doppler spectrum is unknown, and the predictor only has access to a limited time series of estimated channels.
This paper proposes to leverage meta-learning to reduce the training-data requirements of channel fading prediction.
- Score: 38.42468500092177
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Predicting fading channels is a classical problem with a vast array of
applications, including as an enabler of artificial intelligence (AI)-based
proactive resource allocation for cellular networks. Under the assumption that
the fading channel follows a stationary complex Gaussian process, as for
Rayleigh and Rician fading models, the optimal predictor is linear, and it can
be directly computed from the Doppler spectrum via standard linear minimum mean
squared error (LMMSE) estimation. However, in practice, the Doppler spectrum is
unknown, and the predictor only has access to a limited time series of
estimated channels. This paper proposes to leverage meta-learning to reduce the
training-data requirements of channel fading prediction. Specifically, it first
develops an offline low-complexity solution
based on linear filtering via a meta-trained quadratic regularization. Then, an
online method is proposed based on gradient descent and equilibrium propagation
(EP). Numerical results demonstrate the advantages of the proposed approach,
showing its capacity to approach the genie-aided LMMSE solution with a small
number of training data points.
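For concreteness, the genie-aided baseline referenced above is the Wiener (LMMSE) filter obtained from the channel autocorrelation, i.e., the inverse Fourier transform of the Doppler spectrum. A minimal sketch, assuming Rayleigh fading with a Jakes spectrum (autocorrelation r[k] = J0(2*pi*f_D*T_s*k)); the function and parameter names are illustrative, not from the paper:

```python
import numpy as np
from scipy.linalg import toeplitz, solve
from scipy.special import j0  # Bessel function of the first kind, order 0

def genie_lmmse_predictor(f_d_ts, num_taps, lag=1):
    """Wiener-Hopf solution for lag-step-ahead channel prediction, assuming
    a known Jakes Doppler spectrum with normalized Doppler f_d_ts = f_D * T_s."""
    r = j0(2 * np.pi * f_d_ts * np.arange(num_taps + lag))
    R = toeplitz(r[:num_taps])          # autocorrelation matrix of the past window
    p = r[lag:lag + num_taps]           # correlation of past samples with the target
    return solve(R, p, assume_a="pos")  # filter taps w = R^{-1} p (real for Jakes)

# Predict h[t+1] from the most-recent-first window [h[t], h[t-1], ..., h[t-N+1]].
w = genie_lmmse_predictor(f_d_ts=0.01, num_taps=8)
h_window = (np.random.randn(8) + 1j * np.random.randn(8)) / np.sqrt(2)  # stand-in estimates
h_pred = w @ h_window
```

When the Doppler spectrum is unknown, the solve above is unavailable; that is the gap the meta-learned predictors are designed to fill from few training samples.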
Related papers
- Kalman Filter for Online Classification of Non-Stationary Data [101.26838049872651]
In Online Continual Learning (OCL), a learning system receives a stream of data and sequentially performs prediction and training steps.
We introduce a probabilistic Bayesian online learning model by using a neural representation and a state space model over the linear predictor weights.
In experiments in multi-class classification we demonstrate the predictive ability of the model and its flexibility to capture non-stationarity.
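For intuition, the linear-Gaussian special case of such a filter maintains a Gaussian posterior over the predictor weights and alternates predict/update steps per example; the random-walk state-space parameters below are assumptions for illustration, not the paper's model (which also uses a neural representation and targets classification):

```python
import numpy as np

def kalman_step(mu, Sigma, x, y, q=1e-4, r_obs=0.1):
    """One Kalman predict+update for linear weights with a random-walk prior:
    w_t = w_{t-1} + N(0, q*I),   y_t = x_t^T w_t + N(0, r_obs)."""
    Sigma = Sigma + q * np.eye(len(mu))   # predict: random walk inflates covariance
    s = x @ Sigma @ x + r_obs             # predictive variance of the observation
    k = (Sigma @ x) / s                   # Kalman gain
    mu = mu + k * (y - x @ mu)            # correct with the prediction error
    Sigma = Sigma - np.outer(k, x @ Sigma)
    return mu, Sigma
```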
arXiv Detail & Related papers (2023-06-14T11:41:42Z)
- Low-rank extended Kalman filtering for online learning of neural networks from streaming data [71.97861600347959]
We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream.
The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior precision matrix.
In contrast to methods based on variational inference, our method is fully deterministic, and does not require step-size tuning.
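The efficiency rests on standard linear algebra: if the posterior precision is approximated as diagonal plus rank r, i.e. Lambda = D + U U^T with U of size P x r (the decomposition named above; the notation here is assumed), the Woodbury identity

```latex
(D + U U^{\top})^{-1} = D^{-1} - D^{-1} U \left( I_r + U^{\top} D^{-1} U \right)^{-1} U^{\top} D^{-1}
```

reduces each update to an r x r solve, keeping the cost per step linear in the number of parameters P for fixed rank r.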
arXiv Detail & Related papers (2023-05-31T03:48:49Z)
- Dimensionality Collapse: Optimal Measurement Selection for Low-Error Infinite-Horizon Forecasting [3.5788754401889022]
We solve the problem of sequential linear measurement design as an infinite-horizon problem with the time-averaged trace of the Cramér-Rao lower bound (CRLB) for forecasting as the cost.
By introducing theoretical results regarding measurements under additive noise from natural exponential families, we construct an equivalent problem from which a local dimensionality reduction can be derived.
This alternative formulation is based on the future collapse of dimensionality inherent in the limiting behavior of many differential equations and can be directly observed in the low-rank structure of the CRLB for forecasting.
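Schematically (with assumed notation, not the paper's), the design problem selects measurement maps {H_t} to minimize

```latex
\limsup_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} \operatorname{tr}\!\left( \mathrm{CRLB}_t(H_1, \dots, H_t) \right),
```

and the low-rank structure of the forecasting CRLB is what enables the local dimensionality reduction.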
arXiv Detail & Related papers (2023-03-27T17:25:04Z)
- Dynamic selection of p-norm in linear adaptive filtering via online kernel-based reinforcement learning [8.319127681936815]
This study addresses the problem of dynamically selecting, at each time instant, the "optimal" p-norm to combat outliers in linear adaptive filtering.
An online, data-driven framework is designed via kernel-based reinforcement learning (KBRL).
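As background on what the choice of p controls, the least-mean-p-power (LMP) update for a linear adaptive filter generalizes LMS (p = 2) and the sign algorithm (p = 1), with smaller p down-weighting outliers; this generic update is context, not the paper's KBRL policy:

```python
import numpy as np

def lmp_step(w, x, d, p=1.5, mu=0.01):
    """One least-mean-p-power update, the stochastic gradient of E|d - w^T x|^p.
    p close to 1 is robust to impulsive noise; p = 2 recovers plain LMS."""
    e = d - w @ x                                        # a-priori error
    return w + mu * p * np.abs(e) ** (p - 1) * np.sign(e) * x
```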
arXiv Detail & Related papers (2022-10-20T14:49:39Z)
- Predicting Multi-Antenna Frequency-Selective Channels via Meta-Learned Linear Filters based on Long-Short Term Channel Decomposition [39.38412820403623]
We develop predictors for single-antenna frequency-flat channels based on transfer/meta-learned quadratic regularization.
We introduce transfer and meta-learning algorithms for LSTD-based prediction models.
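The appeal of quadratic regularization is that per-channel adaptation stays in closed form: ridge regression biased toward a meta-learned vector. A minimal sketch of that closed form (variable names are mine; meta-training of (w_bar, lam) itself is omitted):

```python
import numpy as np

def adapt_filter(X, y, w_bar, lam):
    """Closed-form linear filter adapted from few samples by minimizing
    ||y - X w||^2 + lam * ||w - w_bar||^2, whose solution is
    w* = (X^H X + lam I)^{-1} (X^H y + lam w_bar).
    The meta-learned (w_bar, lam) carry knowledge shared across channels."""
    d = X.shape[1]
    A = X.conj().T @ X + lam * np.eye(d)
    b = X.conj().T @ y + lam * w_bar
    return np.linalg.solve(A, b)
```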
arXiv Detail & Related papers (2022-03-23T20:38:48Z)
- An application of the splitting-up method for the computation of a neural network representation for the solution for the filtering equations [68.8204255655161]
Filtering equations play a central role in many real-life applications, including numerical weather prediction, finance and engineering.
One of the classical approaches to approximate the solution of the filtering equations is to use a PDE inspired method, called the splitting-up method.
We combine this method with a neural network representation to produce an approximation of the unnormalised conditional distribution of the signal process.
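Schematically, the splitting-up method alternates a prediction step, which propagates the unnormalised density through the forward Kolmogorov (Fokker-Planck) semigroup, and a correction step, which multiplies by the new observation's likelihood; in assumed notation,

```latex
p_{k+1} = \xi_{k+1} \cdot \left( e^{\Delta t \, \mathcal{L}^{*}} p_{k} \right),
```

with the neural network providing the representation of the resulting unnormalised conditional density.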
arXiv Detail & Related papers (2022-01-10T11:01:36Z)
- Neural Calibration for Scalable Beamforming in FDD Massive MIMO with Implicit Channel Estimation [10.775558382613077]
Channel estimation and beamforming play critical roles in frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems.
We propose a deep learning-based approach that directly optimizes the beamformers at the base station according to the received uplink pilots.
A neural calibration method is proposed to improve the scalability of the end-to-end design.
arXiv Detail & Related papers (2021-08-03T14:26:14Z)
- Deep learning: a statistical viewpoint [120.94133818355645]
Deep learning has revealed some major surprises from a theoretical perspective.
In particular, simple gradient methods easily find near-optimal solutions to non-convex optimization problems.
We conjecture that specific principles underlie these phenomena.
arXiv Detail & Related papers (2021-03-16T16:26:36Z)
- Fundamental limits and algorithms for sparse linear regression with sublinear sparsity [16.3460693863947]
We establish exact expressions for the normalized mutual information and minimum mean-square-error (MMSE) of sparse linear regression.
We show how to modify the existing well-known AMP algorithms for linear regimes to sub-linear ones.
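For orientation, the textbook AMP recursion for y = Ax + noise (A of size n x p, delta = n/p, eta_t a componentwise denoiser, angle brackets an empirical average) reads

```latex
x^{t+1} = \eta_t\!\left( A^{\top} z^{t} + x^{t} \right), \qquad
z^{t} = y - A x^{t} + \tfrac{1}{\delta}\, z^{t-1} \left\langle \eta_{t-1}'\!\left( A^{\top} z^{t-1} + x^{t-1} \right) \right\rangle;
```

the paper's contribution, per the summary, is adapting such iterations from the linear-sparsity regime to sublinear sparsity.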
arXiv Detail & Related papers (2021-01-27T01:27:03Z)
- Channel-Directed Gradients for Optimization of Convolutional Neural Networks [50.34913837546743]
We introduce optimization methods for convolutional neural networks that can be used to improve existing gradient-based optimization in terms of generalization error.
We show that defining the gradients along the output channel direction leads to a performance boost, while other directions can be detrimental.
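As a rough picture (an illustrative simplification, not the paper's exact construction), defining the update direction per output channel can be mimicked by rescaling each output-channel slice of the weight gradient:

```python
import numpy as np

def normalize_per_output_channel(grad, eps=1e-12):
    """grad: gradient of a conv kernel with shape (C_out, C_in, kH, kW).
    Rescale each output-channel slice to unit Euclidean norm, so the step
    is set channel-by-channel rather than by the global gradient norm."""
    norms = np.sqrt((grad ** 2).sum(axis=(1, 2, 3), keepdims=True))
    return grad / np.maximum(norms, eps)
```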
arXiv Detail & Related papers (2020-08-25T00:44:09Z)