Content Popularity Prediction Based on Quantized Federated Bayesian
Learning in Fog Radio Access Networks
- URL: http://arxiv.org/abs/2206.12258v1
- Date: Thu, 23 Jun 2022 03:05:12 GMT
- Title: Content Popularity Prediction Based on Quantized Federated Bayesian
Learning in Fog Radio Access Networks
- Authors: Yunwei Tao, Yanxiang Jiang, Fu-Chun Zheng, Pengcheng Zhu, Dusit
Niyato, Xiaohu You
- Abstract summary: We investigate the content popularity prediction problem in cache-enabled fog radio access networks (F-RANs)
In order to predict the content popularity with high accuracy and low complexity, we propose a Gaussian process based regressor to model the content request pattern.
We utilize Bayesian learning, which is robust to overfitting, to train the model parameters.
- Score: 76.16527095195893
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we investigate the content popularity prediction problem in
cache-enabled fog radio access networks (F-RANs). In order to predict the
content popularity with high accuracy and low complexity, we propose a Gaussian
process based regressor to model the content request pattern. Firstly, the
relationship between content features and popularity is captured by our
proposed model. Then, we utilize Bayesian learning, which is robust to
overfitting, to train the model parameters. However, Bayesian methods are
usually unable to find a closed-form expression of the posterior distribution.
To tackle this issue, we apply a stochastic variance reduced gradient
Hamiltonian Monte Carlo (SVRG-HMC) method to approximate the posterior
distribution. To utilize the computing resources of other fog access points
(F-APs) and to reduce the communication overhead, we propose a quantized
federated learning (FL) framework combined with Bayesian learning. The
quantized federated Bayesian learning framework allows each F-AP to send its
gradients to the cloud server after quantization and encoding, achieving an
effective tradeoff between prediction accuracy and communication overhead.
Simulation results show that our proposed policy outperforms the existing
policies.
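As a rough illustration of how these pieces fit together, the sketch below is a minimal NumPy mock-up, not the authors' implementation: it substitutes a toy Bayesian linear model for the GP-based regressor, a QSGD-style stochastic quantizer for the unspecified quantize-and-encode step, and a simplified SGHMC discretization of the SVRG-HMC dynamics. All names and hyperparameters (lik_grad, qsgd_quantize, eps, alpha, levels) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def lik_grad(theta, X, y):
    # Gradient of the negative log-likelihood under Gaussian noise; a toy
    # linear stand-in for the paper's GP-based popularity regressor.
    return X.T @ (X @ theta - y)

def svrg_grad(theta, snap, snap_full_grad, X, y, batch):
    # SVRG estimator: minibatch gradient corrected by a snapshot full
    # gradient, plus the Gaussian-prior gradient (theta itself).
    Xb, yb = X[batch], y[batch]
    scale = len(y) / len(batch)
    g = scale * (lik_grad(theta, Xb, yb) - lik_grad(snap, Xb, yb))
    return g + snap_full_grad + theta

def qsgd_quantize(v, levels=16):
    # QSGD-style stochastic quantizer: only the norm, the signs, and small
    # integer levels need to be encoded and sent over the uplink.
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return norm, np.sign(v), np.zeros(v.shape, dtype=np.int64)
    p = np.abs(v) / norm * levels
    k = np.floor(p)
    k += rng.random(v.shape) < (p - k)  # randomized rounding keeps it unbiased
    return norm, np.sign(v), k.astype(np.int64)

def dequantize(norm, sign, k, levels=16):
    return norm * sign * k / levels

# One simulated deployment: each F-AP holds local request data; the cloud
# server runs SGHMC-style dynamics on the averaged, dequantized gradients.
d, n_aps, eps, alpha, period = 8, 4, 1e-4, 0.1, 10
data = [(rng.normal(size=(200, d)), rng.normal(size=200)) for _ in range(n_aps)]
theta, v, samples = np.zeros(d), np.zeros(d), []

for t in range(500):
    if t % period == 0:  # refresh the SVRG snapshot and its full gradients
        snap = theta.copy()
        snap_grads = [lik_grad(snap, X, y) for X, y in data]
    uplink = []
    for (X, y), fg in zip(data, snap_grads):
        batch = rng.choice(len(y), size=20, replace=False)
        uplink.append(qsgd_quantize(svrg_grad(theta, snap, fg, X, y, batch)))
    g_bar = np.mean([dequantize(*q) for q in uplink], axis=0)
    # SGHMC step: friction alpha plus matched injected noise.
    v = (1 - alpha) * v - eps * g_bar \
        + np.sqrt(2 * alpha * eps) * rng.standard_normal(d)
    theta = theta + v
    if t >= 250:
        samples.append(theta.copy())  # posterior samples after burn-in
```

The `levels` parameter is where the abstract's accuracy/overhead tradeoff shows up in this sketch: fewer quantization levels mean fewer uplink bits per coordinate, but a noisier aggregated gradient at the cloud server.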
Related papers
- One-Shot Federated Learning with Bayesian Pseudocoresets [19.53527340816458]
We show that distributed function-space inference is tightly related to learning Bayesian pseudocoresets.
We show that this approach achieves prediction performance competitive to state-of-the-art while showing a striking reduction in communication cost of up to two orders of magnitude.
arXiv Detail & Related papers (2024-06-04T10:14:39Z) - Learning a Diffusion Model Policy from Rewards via Q-Score Matching [93.0191910132874]
We present a theoretical framework linking the structure of diffusion model policies to a learned Q-function.
We propose a new policy update method from this theory, which we denote Q-score matching.
arXiv Detail & Related papers (2023-12-18T23:31:01Z) - Calibrated One Round Federated Learning with Bayesian Inference in the
Predictive Space [27.259110269667826]
Federated Learning (FL) involves training a model over a dataset distributed among clients.
Small and noisy datasets are common, highlighting the need for well-calibrated models.
We propose $\beta$-Predictive Bayes, a Bayesian FL algorithm that interpolates between a mixture and product of the predictive posteriors.
arXiv Detail & Related papers (2023-12-15T14:17:16Z) - Score-based Source Separation with Applications to Digital Communication
Signals [72.6570125649502]
We propose a new method for separating superimposed sources using diffusion-based generative models.
Motivated by applications in radio-frequency (RF) systems, we are interested in sources with underlying discrete nature.
Our method can be viewed as a multi-source extension to the recently proposed score distillation sampling scheme.
arXiv Detail & Related papers (2023-06-26T04:12:40Z) - GFlowOut: Dropout with Generative Flow Networks [76.59535235717631]
Monte Carlo Dropout has been widely used as a relatively cheap way to perform approximate inference.
Recent works show that the dropout mask can be viewed as a latent variable, which can be inferred with variational inference.
GFlowOut leverages the recently proposed probabilistic framework of Generative Flow Networks (GFlowNets) to learn the posterior distribution over dropout masks.
arXiv Detail & Related papers (2022-10-24T03:00:01Z) - Robust One Round Federated Learning with Predictive Space Bayesian
Inference [19.533268415744338]
We show how the global predictive posterior can be approximated using client predictive posteriors.
We present an algorithm based on this idea, which performs MCMC sampling at each client to obtain an estimate of the local posterior, and then aggregates these in one round to obtain a global ensemble model.
arXiv Detail & Related papers (2022-06-20T01:06:59Z) - Transformers Can Do Bayesian Inference [56.99390658880008]
We present Prior-Data Fitted Networks (PFNs).
PFNs leverage in-context learning in large-scale machine learning techniques to approximate a large set of posteriors.
We demonstrate that PFNs can near-perfectly mimic Gaussian processes and also enable efficient Bayesian inference for intractable problems.
arXiv Detail & Related papers (2021-12-20T13:07:39Z) - An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well-known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting.
arXiv Detail & Related papers (2021-11-19T12:58:59Z) - Kalman Bayesian Neural Networks for Closed-form Online Learning [5.220940151628734]
We propose a novel approach for BNN learning via closed-form Bayesian inference.
The calculation of the predictive distribution of the output and the update of the weight distribution are treated as Bayesian filtering and smoothing problems.
This allows closed-form expressions for training the network's parameters in a sequential/online fashion without gradient descent.
arXiv Detail & Related papers (2021-10-03T07:29:57Z) - Bayesian Deep Learning via Subnetwork Inference [2.2835610890984164]
We show that it suffices to perform inference over a small subset of model weights in order to obtain accurate predictive posteriors.
This subnetwork inference framework enables us to use expressive, otherwise intractable, posterior approximations over such subsets.
arXiv Detail & Related papers (2020-10-28T01:10:11Z)