Learning to Continuously Optimize Wireless Resource In Episodically
Dynamic Environment
- URL: http://arxiv.org/abs/2011.07782v1
- Date: Mon, 16 Nov 2020 08:24:34 GMT
- Title: Learning to Continuously Optimize Wireless Resource In Episodically
Dynamic Environment
- Authors: Haoran Sun, Wenqiang Pu, Minghe Zhu, Xiao Fu, Tsung-Hui Chang, Mingyi
Hong
- Abstract summary: This work develops a methodology that enables data-driven methods to continuously learn and optimize in a dynamic environment.
We propose to build the notion of continual learning into the modeling process of learning wireless systems.
Our design is based on a novel min-max formulation which ensures certain "fairness" across different data samples.
- Score: 55.91291559442884
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There has been a growing interest in developing data-driven and in particular
deep neural network (DNN) based methods for modern communication tasks. For a
few popular tasks such as power control, beamforming, and MIMO detection, these
methods achieve state-of-the-art performance while requiring less computational
effort, less channel state information (CSI), etc. However, it is often
challenging for these approaches to learn in a dynamic environment where
parameters such as CSIs keep changing.
This work develops a methodology that enables data-driven methods to
continuously learn and optimize in a dynamic environment. Specifically, we
consider an "episodically dynamic" setting where the environment changes in
"episodes", and in each episode the environment is stationary. We propose to
build the notion of continual learning (CL) into the modeling process of
learning wireless systems, so that the learning model can incrementally adapt
to the new episodes, without forgetting knowledge learned from the
previous episodes. Our design is based on a novel min-max formulation which
ensures certain "fairness" across different data samples. We demonstrate the
effectiveness of the CL approach by customizing it to two popular DNN based
models (one for power control and one for beamforming), and testing using both
synthetic and real data sets. These numerical results show that the proposed CL
approach is not only able to adapt to the new scenarios quickly and seamlessly,
but importantly, it maintains high performance over the previously encountered
scenarios as well.
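As a rough illustration (not the authors' actual algorithm), the min-max design can be read as minimizing a worst-case-weighted loss, min_theta max_{lambda in simplex} sum_i lambda_i * loss_i(theta), over a mix of current-episode samples and a memory of past episodes: the inner maximization shifts weight toward the worst-served samples, so the outer update cannot trade old-episode performance for new-episode gains. The PyTorch sketch below implements that reading; MemoryBuffer, minmax_step, synthetic_episodes, and the alternating exponentiated-gradient/SGD updates are all illustrative assumptions.
```python
# Hypothetical sketch of continual learning with a min-max
# (worst-case-weighted) objective over current and memory samples:
#     min_theta  max_{lambda in simplex}  sum_i lambda_i * loss_i(theta)
import torch
import torch.nn as nn

class MemoryBuffer:
    """Keeps samples from earlier episodes (simple fill-to-capacity store)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []

    def add(self, samples):
        for x, y in samples:
            if len(self.data) < self.capacity:
                self.data.append((x, y))

    def all(self):
        xs, ys = zip(*self.data)
        return torch.stack(xs), torch.stack(ys)

def minmax_step(model, opt, per_sample_loss, xs, ys, lam, eta_lam=0.5):
    """One alternating update of the min-max objective."""
    losses = per_sample_loss(model(xs), ys)  # per-sample losses, shape (n,)
    with torch.no_grad():
        # Inner max: exponentiated-gradient ascent keeps lam on the simplex
        # and shifts weight toward the worst-served samples ("fairness").
        lam = lam * torch.exp(eta_lam * losses)
        lam = lam / lam.sum()
    # Outer min: gradient step on the lam-weighted loss.
    opt.zero_grad()
    (lam * losses).sum().backward()
    opt.step()
    return lam

def synthetic_episodes(n_episodes=3, n_batches=10, n=32):
    """Toy stand-in for an episodically dynamic environment: each episode
    draws from a different stationary distribution (e.g. shifted channel
    statistics)."""
    for e in range(n_episodes):
        shift = float(e)
        yield [(torch.randn(n, 16) + shift, torch.randn(n, 4))
               for _ in range(n_batches)]

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mem = MemoryBuffer(capacity=512)
mse = nn.MSELoss(reduction="none")
per_sample_loss = lambda pred, y: mse(pred, y).mean(dim=1)

for episode in synthetic_episodes():
    for xs_new, ys_new in episode:
        mem.add(zip(xs_new, ys_new))
        xs_mem, ys_mem = mem.all()
        # Train on current data mixed with memory of previous episodes.
        xs = torch.cat([xs_new, xs_mem])
        ys = torch.cat([ys_new, ys_mem])
        lam = torch.full((len(xs),), 1.0 / len(xs))
        for _ in range(5):  # a few alternating min-max steps per batch
            lam = minmax_step(model, opt, per_sample_loss, xs, ys, lam)
```
Note that uniform lam recovers ordinary empirical-risk training; the exponentiated-gradient ascent is one standard way to respect the simplex constraint in the inner maximization.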
Related papers
- Mamba-FSCIL: Dynamic Adaptation with Selective State Space Model for Few-Shot Class-Incremental Learning [113.89327264634984]
Few-shot class-incremental learning (FSCIL) confronts the challenge of integrating new classes into a model with minimal training samples.
Traditional methods widely adopt static adaptation relying on a fixed parameter space to learn from data that arrive sequentially.
We propose a dual selective SSM projector that adjusts its projection parameters on the fly based on intermediate features, enabling dynamic adaptation.
arXiv Detail & Related papers (2024-07-08T17:09:39Z) - Adaptive Rentention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC).
ARC achieves an average performance increase of 2.7% and 2.6% on the CIFAR-100 and ImageNet-R datasets, respectively.
arXiv Detail & Related papers (2024-05-23T08:43:09Z) - Context-Aware Orchestration of Energy-Efficient Gossip Learning Schemes [8.382766344930157]
We present a distributed training approach based on the combination of Gossip Learning with adaptive optimization of the learning process.
We propose a data-driven approach to OGL management that optimizes the learning process in real time for each node.
Results suggest that our approach is highly efficient and effective in a broad spectrum of network scenarios.
arXiv Detail & Related papers (2024-04-18T09:17:46Z) - Self Expanding Convolutional Neural Networks [1.4330085996657045]
We present a novel method for dynamically expanding Convolutional Neural Networks (CNNs) during training.
We employ a strategy where a single model is dynamically expanded, facilitating the extraction of checkpoints at various complexity levels.
arXiv Detail & Related papers (2024-01-11T06:22:40Z) - Adaptive Growth: Real-time CNN Layer Expansion [0.0]
This research presents a new algorithm that allows the convolutional layer of a Convolutional Neural Network (CNN) to dynamically evolve based on data input.
Instead of a rigid architecture, our approach iteratively introduces kernels to the convolutional layer, gauging its real-time response to varying data.
Remarkably, our unsupervised method has outstripped its supervised counterparts across diverse datasets.
arXiv Detail & Related papers (2023-09-06T14:43:58Z) - Predictive Experience Replay for Continual Visual Control and
Forecasting [62.06183102362871]
We present a new continual learning approach for visual dynamics modeling and explore its efficacy in visual control and forecasting.
We first propose the mixture world model that learns task-specific dynamics priors with a mixture of Gaussians, and then introduce a new training strategy to overcome catastrophic forgetting.
Our model remarkably outperforms the naive combinations of existing continual learning and visual RL algorithms on DeepMind Control and Meta-World benchmarks with continual visual control tasks.
arXiv Detail & Related papers (2023-03-12T05:08:03Z) - Continual Predictive Learning from Videos [100.27176974654559]
We study a new continual learning problem in the context of video prediction.
We propose the continual predictive learning (CPL) approach, which learns a mixture world model via predictive experience replay.
We construct two new benchmarks based on RoboNet and KTH, in which different tasks correspond to different physical robotic environments or human actions.
arXiv Detail & Related papers (2022-04-12T08:32:26Z) - Learning to Continuously Optimize Wireless Resource in a Dynamic
Environment: A Bilevel Optimization Perspective [52.497514255040514]
This work develops a new approach that enables data-driven methods to continuously learn and optimize resource allocation strategies in a dynamic environment.
We propose to build the notion of continual learning into wireless system design, so that the learning model can incrementally adapt to the new episodes.
Our design is based on a novel bilevel optimization formulation which ensures certain "fairness" across different data samples.
arXiv Detail & Related papers (2021-05-03T07:23:39Z) - Neuromodulated Neural Architectures with Local Error Signals for
Memory-Constrained Online Continual Learning [4.2903672492917755]
We develop a biologically inspired, lightweight neural network architecture that incorporates local learning and neuromodulation.
We demonstrate the efficacy of our approach in both single-task and continual learning settings.
arXiv Detail & Related papers (2020-07-16T07:41:23Z)