Robust Cell-Load Learning with a Small Sample Set
- URL: http://arxiv.org/abs/2103.11467v1
- Date: Sun, 21 Mar 2021 19:17:01 GMT
- Title: Robust Cell-Load Learning with a Small Sample Set
- Authors: Daniyal Amir Awan, Renato L.G. Cavalcante, Slawomir Stanczak
- Abstract summary: Learning of the cell-load in radio access networks (RANs) has to be performed within a short time period.
We propose a learning framework that is robust against uncertainties resulting from the need for learning based on a relatively small training sample set.
- Score: 35.07023055409166
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning of the cell-load in radio access networks (RANs) has to be performed
within a short time period. Therefore, we propose a learning framework that is
robust against uncertainties resulting from the need for learning based on a
relatively small training sample set. To this end, we incorporate prior
knowledge about the cell-load in the learning framework. For example, an
inherent property of the cell-load is that it is monotonic in downlink (data)
rates. To obtain additional prior knowledge we first study the feasible rate
region, i.e., the set of all vectors of user rates that can be supported by the
network. We prove that the feasible rate region is compact. Moreover, we show
the existence of a Lipschitz function that maps feasible rate vectors to
cell-load vectors. With these results in hand, we present a learning technique
that guarantees a minimum approximation error in the worst-case scenario by
using prior knowledge and a small training sample set. Simulations in the
network simulator NS3 demonstrate that the proposed method exhibits better
robustness and accuracy than standard multivariate learning techniques,
especially for small training sample sets.
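
The abstract points to two concrete priors, monotonicity of the cell-load in the downlink rates and Lipschitz continuity over the compact feasible rate region, which are combined with a small sample set to obtain a worst-case estimate. The following is a minimal sketch of how such priors can be turned into pointwise envelopes and a midpoint estimate; the Lipschitz constant, the toy data, and all function names are assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch only: combines the two priors highlighted in the
# abstract (Lipschitz continuity and monotonicity of the cell-load in the
# user rates) with a small sample set. The Lipschitz constant, the toy
# data, and the function names are assumptions for this example.
import numpy as np


def cell_load_bounds(rate_queries, rate_samples, load_samples, lipschitz_const):
    """Pointwise lower/upper envelopes of every cell-load function that is
    consistent with the samples, L-Lipschitz, and increasing in the rates."""
    # Distances between query rate vectors and training rate vectors.
    dists = np.linalg.norm(
        rate_queries[:, None, :] - rate_samples[None, :, :], axis=-1
    )

    # Lipschitz envelopes: f(x) must lie within L*||x - x_i|| of every y_i.
    lower = np.max(load_samples[None, :] - lipschitz_const * dists, axis=1)
    upper = np.min(load_samples[None, :] + lipschitz_const * dists, axis=1)

    # Monotonicity prior: if x >= x_i componentwise then f(x) >= y_i,
    # and if x <= x_i componentwise then f(x) <= y_i.
    geq = np.all(rate_queries[:, None, :] >= rate_samples[None, :, :], axis=-1)
    leq = np.all(rate_queries[:, None, :] <= rate_samples[None, :, :], axis=-1)
    lower = np.maximum(
        lower, np.max(np.where(geq, load_samples[None, :], -np.inf), axis=1)
    )
    upper = np.minimum(
        upper, np.min(np.where(leq, load_samples[None, :], np.inf), axis=1)
    )

    # Cell-load is a utilization in [0, 1]; clip the envelopes accordingly.
    return np.clip(lower, 0.0, 1.0), np.clip(upper, 0.0, 1.0)


def estimate_cell_load(rate_queries, rate_samples, load_samples, lipschitz_const):
    """Midpoint of the envelopes: the estimate with the smallest worst-case
    error over all functions consistent with the priors and the samples."""
    lower, upper = cell_load_bounds(
        rate_queries, rate_samples, load_samples, lipschitz_const
    )
    return 0.5 * (lower + upper)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy setting: 10 training samples of 3-user downlink rate vectors (Mbit/s)
    # and the resulting load of one cell; assumed Lipschitz constant L = 0.03.
    rate_samples = rng.uniform(0.0, 20.0, size=(10, 3))
    load_samples = np.clip(0.015 * rate_samples.sum(axis=1), 0.0, 1.0)
    rate_queries = rng.uniform(0.0, 20.0, size=(5, 3))
    print(estimate_cell_load(rate_queries, rate_samples, load_samples, 0.03))
```

In this simplified setting, every cell-load function consistent with the samples and both priors lies between the two envelopes, so the midpoint is the estimate with the smallest worst-case error.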
Related papers
- Probably Approximately Precision and Recall Learning [62.912015491907994]
Precision and Recall are foundational metrics in machine learning.
One-sided feedback--where only positive examples are observed during training--is inherent in many practical problems.
We introduce a PAC learning framework where each hypothesis is represented by a graph, with edges indicating positive interactions.
arXiv Detail & Related papers (2024-11-20T04:21:07Z) - Transformers are Minimax Optimal Nonparametric In-Context Learners [36.291980654891496]
In-context learning of large language models has proven to be a surprisingly effective method of learning a new task from only a few demonstrative examples.
We develop approximation and generalization error bounds for a transformer composed of a deep neural network and one linear attention layer.
We show that sufficiently trained transformers can achieve -- and even improve upon -- the minimax optimal estimation risk in context.
arXiv Detail & Related papers (2024-08-22T08:02:10Z) - Just How Flexible are Neural Networks in Practice? [89.80474583606242]
It is widely believed that a neural network can fit a training set containing at least as many samples as it has parameters.
In practice, however, we only find the solutions reachable by our training procedure, including the gradient-based optimizer and regularizers, which limits flexibility.
arXiv Detail & Related papers (2024-06-17T12:24:45Z) - Pareto Frontiers in Neural Feature Learning: Data, Compute, Width, and Luck [35.6883212537938]
We consider offline sparse parity learning, a supervised classification problem which admits a statistical query lower bound for gradient-based training of a multilayer perceptron.
We show, theoretically and experimentally, that sparse initialization and increasing network width yield significant improvements in sample efficiency in this setting.
We also show that the synthetic sparse parity task can be useful as a proxy for real problems requiring axis-aligned feature learning.
arXiv Detail & Related papers (2023-09-07T15:52:48Z) - Learning Representations on the Unit Sphere: Investigating Angular Gaussian and von Mises-Fisher Distributions for Online Continual Learning [7.145581090959242]
We propose a memory-based representation learning technique equipped with our new loss functions.
We demonstrate that the proposed method outperforms the current state-of-the-art methods on both standard evaluation scenarios and realistic scenarios with blurry task boundaries.
arXiv Detail & Related papers (2023-06-06T02:38:01Z) - Generalized Differentiable RANSAC [95.95627475224231]
$\nabla$-RANSAC is a differentiable RANSAC that allows learning the entire randomized robust estimation pipeline.
$\nabla$-RANSAC is superior to the state-of-the-art in terms of accuracy while running at a similar speed to its less accurate alternatives.
arXiv Detail & Related papers (2022-12-26T15:13:13Z) - When less is more: Simplifying inputs aids neural network understanding [12.73748893809092]
In this work, we measure simplicity with the encoding bit size given by a pretrained generative model.
We investigate the effect of such simplification in several scenarios: conventional training, dataset condensation and post-hoc explanations.
arXiv Detail & Related papers (2022-01-14T18:58:36Z) - Transformers Can Do Bayesian Inference [56.99390658880008]
We present Prior-Data Fitted Networks (PFNs).
PFNs leverage in-context learning and large-scale machine learning techniques to approximate a large set of posteriors.
We demonstrate that PFNs can near-perfectly mimic Gaussian processes and also enable efficient Bayesian inference for intractable problems.
arXiv Detail & Related papers (2021-12-20T13:07:39Z) - Learning to Learn to Demodulate with Uncertainty Quantification via Bayesian Meta-Learning [59.014197664747165]
We introduce the use of Bayesian meta-learning via variational inference for the purpose of obtaining well-calibrated few-pilot demodulators.
The resulting Bayesian ensembles offer better calibrated soft decisions, at the computational cost of running multiple instances of the neural network for demodulation.
arXiv Detail & Related papers (2021-08-02T11:07:46Z) - Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z) - Fast local linear regression with anchor regularization [21.739281173516247]
We propose a simple yet effective local model training algorithm called the fast anchor regularized local linear method (FALL).
Through experiments on synthetic and real-world datasets, we demonstrate that FALL compares favorably in terms of accuracy with the state-of-the-art network Lasso algorithm.
arXiv Detail & Related papers (2020-02-21T10:03:33Z)