Fast Context Adaptation in Cost-Aware Continual Learning
- URL: http://arxiv.org/abs/2306.03887v1
- Date: Tue, 6 Jun 2023 17:46:48 GMT
- Title: Fast Context Adaptation in Cost-Aware Continual Learning
- Authors: Seyyidahmed Lahmer, Federico Mason, Federico Chiariotti, Andrea
Zanella
- Abstract summary: The increased complexity of 5G and Beyond networks requires more complex learning agents, and the learning process itself might end up competing with users for communication and computational resources.
This creates friction: on the one hand, the learning process needs resources to quickly converge to an effective strategy; on the other hand, the learning process needs to be efficient, i.e., take as few resources as possible from the users' data plane, so as not to throttle their QoS.
In this paper, we propose a dynamic strategy to balance the resources assigned to the data plane and those reserved for learning.
- Score: 10.515324071327903
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the past few years, Deep Reinforcement Learning (DRL) has become a valuable solution to automatically
learn efficient resource management strategies in complex networks with
time-varying statistics. However, the increased complexity of 5G and Beyond
networks requires correspondingly more complex learning agents and the learning
process itself might end up competing with users for communication and
computational resources. This creates friction: on the one hand, the learning
process needs resources to quickly converge to an effective strategy; on the
other hand, the learning process needs to be efficient, i.e., take as few
resources as possible from the user's data plane, so as not to throttle users'
QoS. In this paper, we investigate this trade-off and propose a dynamic
strategy to balance the resources assigned to the data plane and those reserved
for learning. With the proposed approach, a learning agent can quickly converge
to an efficient resource allocation strategy and adapt to changes in the
environment, following the Continual Learning (CL) paradigm, while minimizing the impact on the users'
QoS. Simulation results show that the proposed method outperforms static
allocation methods with minimal learning overhead, almost reaching the
performance of an ideal out-of-band CL solution.
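As an illustration of the balancing idea, the sketch below shows a hypothetical controller that reserves a capped fraction of channel capacity for learning-sample uploads and shrinks that fraction as a TD-error-based progress signal flattens out. The class name, the TD-error proxy, and all constants are illustrative assumptions, not the algorithm proposed in the paper.

```python
import collections
import random


# Hypothetical data/learning-plane controller (illustrative, not the paper's
# algorithm): high recent TD-error means the agent is still learning, so more
# capacity is diverted to uploading learning samples.
class LearningPlaneController:
    def __init__(self, max_fraction=0.3, window=50):
        self.max_fraction = max_fraction            # cap on capacity taken from users
        self.td_errors = collections.deque(maxlen=window)

    def record_td_error(self, td_error):
        self.td_errors.append(abs(td_error))

    def learning_fraction(self):
        """Fraction of capacity reserved for learning-sample uploads this slot."""
        if not self.td_errors:
            return self.max_fraction                # no signal yet: learn aggressively
        avg = sum(self.td_errors) / len(self.td_errors)
        progress = min(1.0, avg)                    # crude "still learning" proxy in [0, 1]
        return self.max_fraction * progress


if __name__ == "__main__":
    ctrl = LearningPlaneController()
    for t in range(200):
        ctrl.record_td_error(random.gauss(0, 1) * 0.99 ** t)  # decaying simulated TD-error
        if t % 50 == 0:
            print(f"slot {t}: learning fraction = {ctrl.learning_fraction():.3f}")
```

Under such a rule the learning plane takes resources early, when samples matter most, and releases them back to the users' data plane as the policy converges.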
Related papers
- To Train or Not to Train: Balancing Efficiency and Training Cost in Deep Reinforcement Learning for Mobile Edge Computing [15.079887992932692]
We present a new algorithm to dynamically select when to train a Deep Reinforcement Learning (DRL) agent that allocates resources.
Our method is highly general, as it can be directly applied to any scenario involving a training overhead.
arXiv Detail & Related papers (2024-11-11T16:02:12Z)
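A minimal sketch of the train-or-skip decision described in the entry above, assuming an illustrative rule that trains only while the recent reward trend still justifies the training overhead; the signals and threshold are hypothetical, not the paper's criterion.

```python
# Hypothetical train/skip rule (illustrative): train the DRL agent only when the
# estimated benefit of another update outweighs its resource cost.
def should_train(recent_rewards, training_cost, window=20, sensitivity=1.0):
    """Return True while reward is still improving enough to pay for training."""
    if len(recent_rewards) < 2 * window:
        return True                                  # too little data: keep training
    old = sum(recent_rewards[-2 * window:-window]) / window
    new = sum(recent_rewards[-window:]) / window
    estimated_gain = new - old                       # crude improvement estimate
    return sensitivity * estimated_gain > training_cost
```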
- A Comprehensive Survey on Joint Resource Allocation Strategies in Federated Edge Learning [9.806901443019008]
Federated Edge Learning (FEL) enables model training in a distributed environment while preserving user privacy by keeping each user's data physically separate.
With the development of complex application scenarios such as the Internet of Things (IoT) and Smart Earth, conventional resource allocation schemes can no longer effectively support these growing computational and communication demands.
This paper systematically addresses the multifaceted challenges of computation and communication posed by these growing multi-resource demands.
arXiv Detail & Related papers (2024-10-10T13:02:00Z)
- Decentralized Learning Strategies for Estimation Error Minimization with Graph Neural Networks [94.2860766709971]
We address the challenge of sampling and remote estimation for autoregressive Markovian processes in a wireless network with statistically-identical agents.
Our goal is to minimize time-average estimation error and/or age of information with decentralized scalable sampling and transmission policies.
arXiv Detail & Related papers (2024-04-04T06:24:11Z)
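For the decentralized estimation entry above, the classic scalable baseline is a threshold policy on the age of information (AoI): each agent transmits only when its local age crosses a threshold, with no central coordination. The sketch below is that baseline under assumed parameters, not the learned GNN policy from the paper.

```python
import random


# Illustrative AoI-threshold sampling policy for one agent (a standard baseline,
# not the paper's GNN-based policy).
def aoi_step(age, threshold, channel_free):
    """One slot for one agent; returns (transmit, new_age)."""
    transmit = age >= threshold and channel_free
    return transmit, 0 if transmit else age + 1


if __name__ == "__main__":
    age = 0
    for t in range(20):
        tx, age = aoi_step(age, threshold=4, channel_free=random.random() < 0.8)
        print(f"slot {t}: {'TX' if tx else '--'} age={age}")
```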
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
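The FedLALR entry above describes client-local AMSGrad steps with client-specific learning rates. Below is a minimal numpy sketch of one client's AMSGrad update with an assumed decaying schedule; the actual FedLALR schedule and convergence conditions are in the paper, and a server would average the resulting client parameters as in standard FedAvg.

```python
import numpy as np


# Client-side AMSGrad with a locally decaying learning rate (the 1/sqrt(t)
# schedule is an illustrative assumption, not FedLALR's exact rule).
class ClientAMSGrad:
    def __init__(self, dim, base_lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
        self.m = np.zeros(dim)          # first-moment estimate
        self.v = np.zeros(dim)          # second-moment estimate
        self.v_hat = np.zeros(dim)      # running max of v (the AMSGrad correction)
        self.base_lr, self.beta1, self.beta2, self.eps = base_lr, beta1, beta2, eps
        self.t = 0

    def step(self, params, grad):
        self.t += 1
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)
        lr = self.base_lr / np.sqrt(self.t)          # client-specific decaying rate
        return params - lr * self.m / (np.sqrt(self.v_hat) + self.eps)
```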
- Adaptive Resource Allocation for Virtualized Base Stations in O-RAN with Online Learning [60.17407932691429]
Open Radio Access Network (O-RAN) systems, with their virtualized base stations (vBSs), offer operators the benefits of increased flexibility, reduced costs, vendor diversity, and interoperability.
We propose an online learning algorithm that balances the effective throughput and vBS energy consumption, even under unforeseeable and "challenging" environments.
We prove the proposed solutions achieve sub-linear regret, providing zero average optimality gap even in challenging environments.
arXiv Detail & Related papers (2023-09-04T17:30:21Z)
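For the online vBS-control entry above, a standard route to sub-linear regret over a finite set of configurations is an adversarial bandit such as EXP3, sketched below with a reward that mixes throughput and energy; the paper's actual algorithm and reward model may differ.

```python
import math
import random


# Textbook EXP3 over candidate vBS configurations (illustrative; not necessarily
# the paper's algorithm). Rewards must be normalized into [0, 1], e.g.
# reward = clip(throughput - lam * energy, 0, 1) after scaling.
class Exp3:
    def __init__(self, n_arms, gamma=0.1):
        self.w = [1.0] * n_arms
        self.gamma = gamma
        self.n = n_arms

    def probs(self):
        total = sum(self.w)
        return [(1 - self.gamma) * wi / total + self.gamma / self.n for wi in self.w]

    def select(self):
        return random.choices(range(self.n), weights=self.probs())[0]

    def update(self, arm, reward):
        p = self.probs()[arm]
        self.w[arm] *= math.exp(self.gamma * reward / (p * self.n))
```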
- RHFedMTL: Resource-Aware Hierarchical Federated Multi-Task Learning [11.329273673732217]
Federated learning is an effective way to enable AI over massive distributed nodes with security.
It is challenging to ensure privacy while maintaining coupled multi-task learning across multiple base stations (BSs) and terminals.
In this paper, inspired by the natural cloud-BS-terminal hierarchy of cellular networks, we provide a viable resource-aware hierarchical federated MTL (RHFedMTL) solution.
arXiv Detail & Related papers (2023-06-01T13:49:55Z)
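The cloud-BS-terminal hierarchy in the RHFedMTL entry above can be pictured as two nested FedAvg steps, sketched below. The size-proportional weighting and the omission of per-BS task heads are simplifications, not the paper's exact update.

```python
import numpy as np


def fedavg(models, weights):
    """Weighted average of parameter vectors (plain FedAvg aggregation)."""
    weights = np.asarray(weights, dtype=float) / sum(weights)
    return sum(w * m for w, m in zip(weights, models))


def hierarchical_round(terminal_models_per_bs, terminal_sizes_per_bs):
    # Level 1: each base station averages the models of its own terminals.
    bs_models = [fedavg(ms, ns) for ms, ns in
                 zip(terminal_models_per_bs, terminal_sizes_per_bs)]
    # Level 2: the cloud averages the base-station models.
    return fedavg(bs_models, [sum(ns) for ns in terminal_sizes_per_bs])


if __name__ == "__main__":
    models = [[np.ones(3), 3 * np.ones(3)], [5 * np.ones(3), 7 * np.ones(3)]]
    sizes = [[10, 10], [20, 20]]                 # samples held by each terminal
    print(hierarchical_round(models, sizes))     # cloud-level model
```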
- The Cost of Learning: Efficiency vs. Efficacy of Learning-Based RRM for 6G [10.28841351455586]
Deep Reinforcement Learning (DRL) has become a valuable solution to automatically learn efficient resource management strategies in complex networks.
In many scenarios, the learning task is performed in the Cloud, while experience samples are generated directly by edge nodes or users.
This creates a friction between the need to speed up convergence towards an effective strategy, which requires allocating resources to transmit learning samples, and the need to preserve those resources for the users' data traffic.
We propose a dynamic balancing strategy between the learning and data planes, which allows the centralized learning agent to quickly converge to an efficient resource allocation strategy.
arXiv Detail & Related papers (2022-11-30T11:26:01Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning techniques, has attracted increasing research attention.
It is of interest to investigate fast-responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Learning to Continuously Optimize Wireless Resource in a Dynamic Environment: A Bilevel Optimization Perspective [52.497514255040514]
This work develops a new approach that enables data-driven methods to continuously learn and optimize resource allocation strategies in a dynamic environment.
We propose to build the notion of continual learning into wireless system design, so that the learning model can incrementally adapt to the new episodes.
Our design is based on a novel bilevel optimization formulation which ensures a certain "fairness" across different data samples.
arXiv Detail & Related papers (2021-05-03T07:23:39Z)
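One way to picture the fairness that the bilevel formulation in the entry above enforces: keep a replay memory of past episodes and upweight the samples the current model serves worst, so no episode is forgotten. The softmax weighting below is an illustrative stand-in for the paper's bilevel machinery, not its actual algorithm.

```python
import numpy as np


def fairness_weights(losses, temperature=1.0):
    """Softmax over per-sample losses: worst-served samples get the most weight."""
    z = np.asarray(losses, dtype=float) / temperature
    z -= z.max()                                   # numerical stability
    w = np.exp(z)
    return w / w.sum()


def replay_gradient(per_sample_grads, losses):
    """Replay gradient that emphasizes high-loss (unfairly served) memory samples."""
    w = fairness_weights(losses)
    return sum(wi * g for wi, g in zip(w, per_sample_grads))
```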
- Toward Multiple Federated Learning Services Resource Sharing in Mobile Edge Networks [88.15736037284408]
We study a new model of multiple federated learning services at the multi-access edge computing server.
We propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL.
Our simulation results demonstrate the convergence performance of our proposed algorithms.
arXiv Detail & Related papers (2020-11-25T01:29:41Z)
- Combining Federated and Active Learning for Communication-efficient Distributed Failure Prediction in Aeronautics [0.0]
We present a new centralized distributed learning algorithm that relies on the learning paradigms of Active Learning and Federated Learning.
We evaluate this method on a public benchmark and show that its precision is very close to that of state-of-the-art non-distributed learning.
arXiv Detail & Related papers (2020-01-21T13:17:00Z)
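A minimal sketch of the active-learning half of the combination described above: before each federated round, a client requests labels only for its least confident samples, shrinking the labeling and communication budget. The confidence heuristic and budget are assumptions for illustration, not the paper's exact criterion.

```python
import numpy as np


def select_uncertain(class_probs, budget):
    """Indices of the `budget` samples whose top predicted class is least confident."""
    confidence = np.max(class_probs, axis=1)
    return np.argsort(confidence)[:budget]         # least confident first


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    probs = rng.dirichlet(np.ones(3), size=100)    # fake predictive distributions
    print(select_uncertain(probs, budget=5))       # samples to label this round
```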
This list is automatically generated from the titles and abstracts of the papers on this site.