Improving Learning Efficiency for Wireless Resource Allocation with
Symmetric Prior
- URL: http://arxiv.org/abs/2005.08510v4
- Date: Thu, 11 Nov 2021 06:14:21 GMT
- Title: Improving Learning Efficiency for Wireless Resource Allocation with
Symmetric Prior
- Authors: Chengjian Sun, Jiajun Wu and Chenyang Yang
- Abstract summary: In this article, we first briefly summarize two classes of approaches to using domain knowledge: introducing mathematical models or prior knowledge to deep learning.
To explain how such a generic prior is harnessed to improve learning efficiency, we resort to ranking.
We find that the number of training samples required to achieve a given system performance decreases with the number of subcarriers or contents.
- Score: 28.275250620630466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Improving learning efficiency is paramount for learning resource allocation
with deep neural networks (DNNs) in wireless communications over highly dynamic
environments. Incorporating domain knowledge into learning is a promising way
of dealing with this issue, which is an emerging topic in the wireless
community. In this article, we first briefly summarize two classes of
approaches to using domain knowledge: introducing mathematical models or prior
knowledge to deep learning. Then, we consider a kind of symmetric prior,
permutation equivariance, which widely exists in wireless tasks. To explain how
such a generic prior is harnessed to improve learning efficiency, we resort to
ranking, which jointly sorts the input and output of a DNN. We use power
allocation among subcarriers, probabilistic content caching, and interference
coordination to illustrate the improvement of learning efficiency by exploiting
the property. From the case study, we find that the number of training samples
required to achieve a given system performance decreases with the number of
subcarriers or contents, owing to an interesting phenomenon: "sample
hardening". Simulation
results show that the training samples, the free parameters in DNNs and the
training time can be reduced dramatically by harnessing the prior knowledge.
The number of samples required to train a DNN after ranking can be reduced by a
factor of $15 \sim 2,400$ to achieve the same system performance as the
counterpart trained without the prior.
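To make the ranking idea concrete, the following is a minimal sketch, not taken from the paper, of how a permutation-equivariant policy such as power allocation among subcarriers can be wrapped by sorting: the DNN is trained and evaluated only on sorted channel gains, and its output is mapped back to the original subcarrier order. The names `rank_wrap` and `toy_policy`, and the water-filling-style stand-in for a trained DNN, are illustrative assumptions. The intuition is that sorting collapses all permutations of a gain vector onto one representative, so the DNN only has to cover a much smaller input region.

```python
import numpy as np

rng = np.random.default_rng(0)

def rank_wrap(policy, gains):
    """Sort the channel gains, apply the learned policy to the sorted input,
    and map the allocation back to the original subcarrier order.  Because the
    optimal allocation is permutation equivariant, the policy only ever needs
    to be trained on sorted samples."""
    order = np.argsort(gains)              # permutation that sorts the gains
    p_sorted = policy(gains[order])        # DNN sees sorted inputs only
    power = np.empty_like(p_sorted)
    power[order] = p_sorted                # undo the permutation on the output
    return power

# Hypothetical stand-in for a trained DNN: a water-filling-style allocation.
def toy_policy(sorted_gains):
    p = np.maximum(1.5 - 1.0 / sorted_gains, 0.0)
    return p / max(p.sum(), 1e-9)          # normalize to a total power budget

gains = rng.rayleigh(size=8)               # channel gains on 8 subcarriers
perm = rng.permutation(8)
p_a = rank_wrap(toy_policy, gains)
p_b = rank_wrap(toy_policy, gains[perm])
assert np.allclose(p_a[perm], p_b)         # output permutes with the input
```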
Related papers
- Towards Explainable Machine Learning: The Effectiveness of Reservoir
Computing in Wireless Receive Processing [21.843365090029987]
We investigate the specific task of channel equalization by applying a popular learning-based technique known as Reservoir Computing (RC).
RC has shown superior performance compared to conventional methods and other learning-based approaches.
We also show, through simulations, the improvement in receive processing/symbol detection performance with this optimized approach.
arXiv Detail & Related papers (2023-10-08T00:44:35Z) - Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks [69.38572074372392]
We present the first results proving that feature learning occurs during training with a nonlinear model on multiple tasks.
Our key insight is that multi-task pretraining induces a pseudo-contrastive loss that favors representations that align points that typically have the same label across tasks.
arXiv Detail & Related papers (2023-07-13T16:39:08Z) - Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z) - On Generalizing Beyond Domains in Cross-Domain Continual Learning [91.56748415975683]
Deep neural networks often suffer from catastrophic forgetting of previously learned knowledge after learning a new task.
Our proposed approach learns new tasks under domain shift with accuracy boosts up to 10% on challenging datasets such as DomainNet and OfficeHome.
arXiv Detail & Related papers (2022-03-08T09:57:48Z) - Training Networks in Null Space of Feature Covariance for Continual
Learning [34.095874368589904]
We propose a novel network training algorithm called Adam-NSCL, which sequentially optimizes network parameters in the null space of previous tasks.
We apply our approach to training networks for continual learning on benchmark datasets of CIFAR-100 and TinyImageNet.
arXiv Detail & Related papers (2021-03-12T07:21:48Z) - ProtoDA: Efficient Transfer Learning for Few-Shot Intent Classification [21.933876113300897]
We adopt an alternative approach by transfer learning on an ensemble of related tasks using prototypical networks under the meta-learning paradigm.
Using intent classification as a case study, we demonstrate that increasing variability in training tasks can significantly improve classification performance.
arXiv Detail & Related papers (2021-01-28T00:19:13Z) - A Meta-Learning Approach to the Optimal Power Flow Problem Under
Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
arXiv Detail & Related papers (2020-12-21T17:39:51Z) - Collaborative Method for Incremental Learning on Classification and
Generation [32.07222897378187]
We introduce a novel algorithm, Incremental Class Learning with Attribute Sharing (ICLAS), for incremental class learning with deep neural networks.
One of its components, incGAN, can generate images with greater variety than the training data.
Under the challenging condition of data deficiency, ICLAS incrementally trains the classification and generation networks.
arXiv Detail & Related papers (2020-10-29T06:34:53Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed algorithms for large-scale AUC maximization with a deep neural network.
Our method requires a much smaller number of communication rounds in theory.
Experiments on several datasets demonstrate the effectiveness of our method and confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z) - Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embedding of a CNN using anti-aliasing or low-pass filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
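As a side note on the curriculum-by-smoothing entry above, here is a minimal sketch under the assumption that the smoothing is realized as a depthwise Gaussian filter whose width is annealed toward zero during training; `SmoothedConv`, `gaussian_kernel`, and the decay schedule are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_kernel(sigma: float, size: int = 5) -> torch.Tensor:
    """Build a normalized 2-D Gaussian kernel of the given width."""
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    g = g / g.sum()
    return torch.outer(g, g)

class SmoothedConv(nn.Module):
    """Conv layer whose output is low-pass filtered; sigma is annealed toward
    zero during training so higher-frequency feature information is exposed
    progressively (an illustrative take on curriculum by smoothing)."""
    def __init__(self, in_ch: int, out_ch: int, sigma0: float = 1.0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.sigma = sigma0

    def forward(self, x):
        x = F.relu(self.conv(x))
        if self.sigma > 1e-3:                      # skip once sigma has decayed
            k = gaussian_kernel(self.sigma).to(x.device)
            weight = k.repeat(x.shape[1], 1, 1, 1)  # depthwise filter (C,1,k,k)
            x = F.conv2d(x, weight, padding=k.shape[-1] // 2, groups=x.shape[1])
        return x

    def anneal(self, decay: float = 0.9):
        self.sigma *= decay                        # call once per epoch
```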