Robust Cell-Load Learning with a Small Sample Set
- URL: http://arxiv.org/abs/2103.11467v1
- Date: Sun, 21 Mar 2021 19:17:01 GMT
- Title: Robust Cell-Load Learning with a Small Sample Set
- Authors: Daniyal Amir Awan, Renato L.G. Cavalcante, Slawomir Stanczak
- Abstract summary: Learning of the cell-load in radio access networks (RANs) has to be performed within a short time period.
We propose a learning framework that is robust against uncertainties resulting from the need for learning based on a relatively small training sample set.
- Score: 35.07023055409166
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning of the cell-load in radio access networks (RANs) has to be performed
within a short time period. Therefore, we propose a learning framework that is
robust against uncertainties resulting from the need for learning based on a
relatively small training sample set. To this end, we incorporate prior
knowledge about the cell-load in the learning framework. For example, an
inherent property of the cell-load is that it is monotonic in downlink (data)
rates. To obtain additional prior knowledge we first study the feasible rate
region, i.e., the set of all vectors of user rates that can be supported by the
network. We prove that the feasible rate region is compact. Moreover, we show
the existence of a Lipschitz function that maps feasible rate vectors to
cell-load vectors. With these results in hand, we present a learning technique
that guarantees a minimum approximation error in the worst-case scenario by
using prior knowledge and a small training sample set. Simulations in the
network simulator NS3 demonstrate that the proposed method exhibits better
robustness and accuracy than standard multivariate learning techniques,
especially for small training sample sets.
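The paper's worst-case guarantee rests on knowing that the target map is Lipschitz. While the authors' actual framework (which also exploits monotonicity) is not reproduced here, the classical midpoint-of-envelopes construction sketched below is the standard worst-case-optimal estimator for a function known only to be L-Lipschitz, and illustrates how such prior knowledge bounds the approximation error from a small sample set. The function name `lipschitz_fit` and the toy data are illustrative, not from the paper.

```python
import numpy as np

def lipschitz_fit(X, y, L):
    """Worst-case optimal interpolant for an L-Lipschitz function.

    Given samples (x_i, y_i) of a function known to be L-Lipschitz,
    the midpoint of the tightest upper and lower envelopes minimizes
    the worst-case error over all functions consistent with the data.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)

    def predict(x):
        x = np.atleast_2d(np.asarray(x, dtype=float))
        # distances between each query point and each training point
        d = np.linalg.norm(x[:, None, :] - X[None, :, :], axis=-1)
        upper = np.min(y[None, :] + L * d, axis=1)  # tightest upper envelope
        lower = np.max(y[None, :] - L * d, axis=1)  # tightest lower envelope
        return 0.5 * (upper + lower)

    return predict

# illustrative usage: recover f(x) = |x| (Lipschitz constant 1) from 3 samples
f = lipschitz_fit([[-1.0], [0.0], [1.0]], [1.0, 0.0, 1.0], L=1.0)
print(f([[0.5]]))  # prints [0.5]
```

The smaller the Lipschitz constant and the denser the samples, the tighter the gap between the two envelopes, which is exactly the worst-case error bound the prior knowledge buys.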
Related papers
- Just How Flexible are Neural Networks in Practice? [89.80474583606242]
It is widely believed that a neural network can fit any training set containing at least as many samples as it has parameters.
In practice, however, we only find the solutions reachable by our training procedure, including gradient descent and regularizers, which limits flexibility.
arXiv Detail & Related papers (2024-06-17T12:24:45Z)
- Pareto Frontiers in Neural Feature Learning: Data, Compute, Width, and Luck [35.6883212537938]
We consider offline sparse parity learning, a supervised classification problem which admits a statistical query lower bound for gradient-based training of a multilayer perceptron.
We show, theoretically and experimentally, that sparse initialization and increasing network width yield significant improvements in sample efficiency in this setting.
We also show that the synthetic sparse parity task can be useful as a proxy for real problems requiring axis-aligned feature learning.
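The sparse parity task mentioned above has a simple definition: inputs are random bit vectors, and the label is the XOR of a fixed hidden subset of k bits, which makes it hard for gradient-based learners to find the relevant coordinates. A minimal data generator (function name and parameters are illustrative, not from the paper) might look like:

```python
import numpy as np

def sparse_parity(n_samples, n_bits, k, seed=0):
    """Generate a k-sparse parity dataset over n_bits-dimensional inputs.

    Each input is a uniform random bit vector; the label is the parity
    (XOR) of a fixed, hidden subset of k coordinates.
    """
    rng = np.random.default_rng(seed)
    support = rng.choice(n_bits, size=k, replace=False)  # hidden relevant bits
    X = rng.integers(0, 2, size=(n_samples, n_bits))
    y = X[:, support].sum(axis=1) % 2  # XOR = sum mod 2
    return X, y, support

X, y, support = sparse_parity(1000, n_bits=20, k=3)
```

Because the label depends on only k of the n_bits coordinates, the task rewards axis-aligned feature learning: a learner must identify the hidden support before the parity itself becomes learnable.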
arXiv Detail & Related papers (2023-09-07T15:52:48Z)
- Ticketed Learning-Unlearning Schemes [57.89421552780526]
We propose a new ticketed model for learning--unlearning.
We provide space-efficient ticketed learning--unlearning schemes for a broad family of concept classes.
arXiv Detail & Related papers (2023-06-27T18:54:40Z)
- Learning Representations on the Unit Sphere: Investigating Angular Gaussian and von Mises-Fisher Distributions for Online Continual Learning [7.145581090959242]
We propose a memory-based representation learning technique equipped with our new loss functions.
We demonstrate that the proposed method outperforms the current state-of-the-art methods on both standard evaluation scenarios and realistic scenarios with blurry task boundaries.
arXiv Detail & Related papers (2023-06-06T02:38:01Z)
- Generalized Differentiable RANSAC [95.95627475224231]
$\nabla$-RANSAC is a differentiable RANSAC that allows learning the entire randomized robust estimation pipeline.
$\nabla$-RANSAC is superior to the state-of-the-art in terms of accuracy while running at a similar speed to its less accurate alternatives.
arXiv Detail & Related papers (2022-12-26T15:13:13Z)
- When less is more: Simplifying inputs aids neural network understanding [12.73748893809092]
In this work, we measure simplicity with the encoding bit size given by a pretrained generative model.
We investigate the effect of such simplification in several scenarios: conventional training, dataset condensation and post-hoc explanations.
arXiv Detail & Related papers (2022-01-14T18:58:36Z)
- Learning to Learn to Demodulate with Uncertainty Quantification via Bayesian Meta-Learning [59.014197664747165]
We introduce the use of Bayesian meta-learning via variational inference for the purpose of obtaining well-calibrated few-pilot demodulators.
The resulting Bayesian ensembles offer better calibrated soft decisions, at the computational cost of running multiple instances of the neural network for demodulation.
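The ensemble prediction described above amounts to averaging the predictive distributions of several sampled network instances, one forward pass each. The sketch below (not the paper's demodulator; the toy linear "models" stand in for networks whose weights were drawn from an approximate posterior) shows the averaging step:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(logit_fns, x):
    """Average predictive distributions over sampled model instances.

    Each element of logit_fns maps an input batch to class logits,
    standing in for one network instance; averaging their softmax
    outputs gives the ensemble's (typically better-calibrated) soft
    decision, at the cost of one forward pass per instance.
    """
    probs = [softmax(f(x)) for f in logit_fns]
    return np.mean(probs, axis=0)

# illustrative usage: 5 sampled weight matrices as stand-in "networks"
rng = np.random.default_rng(0)
weights = [rng.normal(size=(3, 2)) for _ in range(5)]
fns = [(lambda x, Wi=Wi: x @ Wi) for Wi in weights]
x = rng.normal(size=(4, 3))        # batch of 4 inputs
p = ensemble_predict(fns, x)       # shape (4, 2), rows sum to 1
```

Averaging probabilities rather than logits is what spreads mass across classes the individual instances disagree on, which is the source of the improved calibration.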
arXiv Detail & Related papers (2021-08-02T11:07:46Z)
- Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel Machine Learning architecture, which allows us to infuse a neural deep network with human-powered abstraction on the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- Fast local linear regression with anchor regularization [21.739281173516247]
We propose a simple yet effective local model training algorithm called the fast anchor regularized local linear method (FALL).
Through experiments on synthetic and real-world datasets, we demonstrate that FALL compares favorably in terms of accuracy with the state-of-the-art network Lasso algorithm.
arXiv Detail & Related papers (2020-02-21T10:03:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.