RHFedMTL: Resource-Aware Hierarchical Federated Multi-Task Learning
- URL: http://arxiv.org/abs/2306.00675v1
- Date: Thu, 1 Jun 2023 13:49:55 GMT
- Title: RHFedMTL: Resource-Aware Hierarchical Federated Multi-Task Learning
- Authors: Xingfu Yi, Rongpeng Li, Chenghui Peng, Fei Wang, Jianjun Wu, and
Zhifeng Zhao
- Abstract summary: Federated learning is an effective way to enable AI over massive distributed nodes with security.
It is challenging to ensure privacy while maintaining coupled multi-task learning across multiple base stations (BSs) and terminals.
In this paper, inspired by the natural cloud-BS-terminal hierarchy of cellular networks, we provide a viable resource-aware hierarchical federated MTL (RHFedMTL) solution.
- Score: 11.329273673732217
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid development of artificial intelligence (AI) over massive
applications, including the Internet of Things on cellular networks, raises
concerns about technical challenges such as privacy, heterogeneity and resource
efficiency.
Federated learning is an effective way to enable AI over massive distributed
nodes with security.
However, conventional works mostly focus on learning a single global model for
a single task across the network, and are generally less capable of handling
multi-task learning (MTL) scenarios with stragglers while keeping computation
and communication costs acceptable. Meanwhile, it is challenging to ensure
privacy while maintaining coupled multi-task learning across multiple base
stations (BSs) and terminals. In this paper, inspired by the natural
cloud-BS-terminal hierarchy of cellular networks, we provide a viable
resource-aware hierarchical federated MTL (RHFedMTL) solution to accommodate
the heterogeneity of tasks, by solving different tasks within the BSs and
aggregating the multi-task results in the cloud without compromising privacy.
Specifically, a primal-dual method is leveraged to effectively transform the
coupled MTL problem into local optimization sub-problems within the BSs.
Furthermore, rather than simply changing the aggregation frequency as existing
methods do to reduce resource cost, we dive into the intricate relationship
between resource consumption and learning accuracy, and develop a
resource-aware learning strategy for local terminals and BSs to meet the
resource budget. Extensive simulation results demonstrate the effectiveness and
superiority of RHFedMTL in terms of improving learning accuracy and boosting
the convergence rate.
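To make the cloud-BS-terminal hierarchy described above concrete, the following is a minimal sketch in Python/NumPy: terminals run local gradient steps on their BS's task, each BS averages its terminals into a per-task model, and the cloud loosely couples the per-task models. The toy least-squares tasks, the simple averaging, the coupling weight, and all function and variable names are illustrative assumptions; the paper's actual primal-dual updates and resource-aware scheduling are not reproduced here.

```python
# Minimal sketch of a cloud-BS-terminal federated MTL loop (illustrative only).
import numpy as np

def local_sgd_step(w, X, y, lr=0.1):
    """One gradient step on a terminal's local least-squares objective."""
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def run_hierarchy_sketch(num_bs=3, terminals_per_bs=4, dim=5,
                         cloud_rounds=10, bs_rounds=5, local_steps=3,
                         coupling=0.1, seed=0):
    rng = np.random.default_rng(seed)
    tasks = [rng.normal(size=dim) for _ in range(num_bs)]  # one task per BS

    # Each terminal holds private data drawn from its BS's task.
    data = []
    for b in range(num_bs):
        bs_data = []
        for _ in range(terminals_per_bs):
            X = rng.normal(size=(20, dim))
            y = X @ tasks[b] + 0.1 * rng.normal(size=20)
            bs_data.append((X, y))
        data.append(bs_data)

    bs_models = [np.zeros(dim) for _ in range(num_bs)]  # per-task models at BSs
    for _ in range(cloud_rounds):
        for b in range(num_bs):
            for _ in range(bs_rounds):
                # Terminals start from the BS model, train locally, and the BS
                # averages their updates (plain FedAvg inside each cell).
                local = []
                for X, y in data[b]:
                    w = bs_models[b].copy()
                    for _ in range(local_steps):
                        w = local_sgd_step(w, X, y)
                    local.append(w)
                bs_models[b] = np.mean(local, axis=0)
        # Cloud round: couple the per-task BS models without touching raw data.
        # How this coupling is done is where the paper's primal-dual machinery
        # lives; a plain blend with the cloud average is used as a stand-in.
        cloud_avg = np.mean(bs_models, axis=0)
        bs_models = [(1 - coupling) * w + coupling * cloud_avg for w in bs_models]
    return bs_models

if __name__ == "__main__":
    for b, w in enumerate(run_hierarchy_sketch()):
        print(f"BS {b} task model:", np.round(w, 2))
```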
Related papers
- A Comprehensive Survey on Joint Resource Allocation Strategies in Federated Edge Learning [9.806901443019008]
Federated Edge Learning (FEL) enables model training in a distributed environment while ensuring user privacy by keeping each user's data physically separate.
With the development of complex application scenarios such as the Internet of Things (IoT) and Smart Earth, conventional resource allocation schemes can no longer effectively support the growing computational and communication demands.
This paper systematically addresses the multifaceted computation and communication challenges posed by these growing resource demands.
arXiv Detail & Related papers (2024-10-10T13:02:00Z)
- Edge Intelligence Optimization for Large Language Model Inference with Batching and Quantization [20.631476379056892]
Large Language Models (LLMs) are at the forefront of the current wave of AI applications.
LLMs typically require cloud hosting, which raises issues regarding privacy, latency, and usage limitations.
We present an edge intelligence optimization problem tailored for LLM inference.
arXiv Detail & Related papers (2024-05-12T02:38:58Z)
- Decentralized Learning Strategies for Estimation Error Minimization with Graph Neural Networks [94.2860766709971]
We address the challenge of sampling and remote estimation for autoregressive Markovian processes in a wireless network with statistically-identical agents.
Our goal is to minimize time-average estimation error and/or age of information with decentralized scalable sampling and transmission policies.
arXiv Detail & Related papers (2024-04-04T06:24:11Z)
- REFT: Resource-Efficient Federated Training Framework for Heterogeneous and Resource-Constrained Environments [2.117841684082203]
Federated Learning (FL) plays a critical role in distributed systems.
FL has emerged as a privacy-enforcing sub-domain of machine learning.
We propose REFT, a resource-efficient federated training framework for heterogeneous and resource-constrained environments.
arXiv Detail & Related papers (2023-08-25T20:33:30Z)
- Serverless Federated AUPRC Optimization for Multi-Party Collaborative Imbalanced Data Mining [119.89373423433804]
The Area Under the Precision-Recall curve (AUPRC) was introduced as an effective metric for imbalanced data.
Serverless multi-party collaborative training can cut down the communication cost by avoiding the server-node bottleneck.
We propose a new ServerLess biAsed sTochastic gradiEnt (SLATE) algorithm to directly optimize the AUPRC; a minimal sketch of the AUPRC metric itself appears below.
arXiv Detail & Related papers (2023-08-06T06:51:32Z)
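Since the entry above centers on AUPRC, here is a minimal, self-contained illustration of the metric itself, computed with scikit-learn's average_precision_score (one standard estimator of the area under the precision-recall curve). The toy labels and scores are made up, and the federated SLATE optimizer is not shown.

```python
# Minimal illustration of the AUPRC metric referenced above (illustrative data).
import numpy as np
from sklearn.metrics import average_precision_score, precision_recall_curve

# Heavily imbalanced toy labels (two positives out of ten), the regime where
# AUPRC is more informative than ROC-AUC.
y_true  = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])
y_score = np.array([0.10, 0.20, 0.15, 0.30, 0.05, 0.40, 0.25, 0.35, 0.80, 0.60])

auprc = average_precision_score(y_true, y_score)           # area estimate
precision, recall, _ = precision_recall_curve(y_true, y_score)

print(f"AUPRC ~= {auprc:.3f}")
print("PR curve points:", list(zip(np.round(recall, 2), np.round(precision, 2))))
```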
- Fast Context Adaptation in Cost-Aware Continual Learning [10.515324071327903]
5G and Beyond networks require more complex learning agents, and the learning process itself might end up competing with users for communication and computational resources.
This creates friction: on the one hand, the learning process needs resources to converge quickly to an effective strategy; on the other hand, it needs to be efficient, i.e., take as few resources as possible from the user's data plane so as not to throttle users' resources.
In this paper, we propose a dynamic strategy to balance the resources assigned to the data plane and those reserved for learning.
arXiv Detail & Related papers (2023-06-06T17:46:48Z)
- The Cost of Learning: Efficiency vs. Efficacy of Learning-Based RRM for 6G [10.28841351455586]
Deep Reinforcement Learning (DRL) has become a valuable solution to automatically learn efficient resource management strategies in complex networks.
In many scenarios, the learning task is performed in the Cloud, while experience samples are generated directly by edge nodes or users.
This creates a friction between the need to speed up convergence towards an effective strategy, which requires allocating resources to transmit learning samples, and the need to preserve those resources for the data plane.
We propose a dynamic balancing strategy between the learning and data planes, which allows the centralized learning agent to quickly converge to an efficient resource allocation strategy.
arXiv Detail & Related papers (2022-11-30T11:26:01Z)
- Semantic-Aware Collaborative Deep Reinforcement Learning Over Wireless Cellular Networks [82.02891936174221]
Collaborative deep reinforcement learning (CDRL) algorithms, in which multiple agents coordinate over a wireless network, are a promising approach.
In this paper, a novel semantic-aware CDRL method is proposed to enable a group of untrained agents with semantically-linked DRL tasks to collaborate efficiently across a resource-constrained wireless cellular network.
arXiv Detail & Related papers (2021-11-23T18:24:47Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning techniques, has obtained increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2020-07-20T13:46:42Z)
- A Machine Learning Approach for Task and Resource Allocation in Mobile Edge Computing Based Networks [108.57859531628264]
A joint task, spectrum, and transmit power allocation problem is investigated for a wireless network.
The proposed algorithm can reduce the number of iterations needed for convergence and the maximal delay among all users by up to 18% and 11.1%, respectively, compared to the standard Q-learning algorithm; the standard Q-learning update used as this baseline is sketched below.
arXiv Detail & Related papers (2020-07-20T13:46:42Z)
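For reference on the entry above, which reports gains over standard Q-learning, the following is a minimal sketch of the tabular Q-learning baseline on a toy chain environment. The environment, reward, and hyperparameters are illustrative assumptions and have nothing to do with the paper's task, spectrum, and transmit power allocation setup; only the update rule itself is the standard one.

```python
# Standard tabular Q-learning on a toy 1-D chain (illustrative baseline only).
import numpy as np

n_states, n_actions = 5, 2          # tiny chain: move left (0) or right (1)
alpha, gamma, eps = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    for _ in range(20):
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0   # reward only at the far end
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

print(np.round(Q, 2))   # learned action-values; moving right should dominate
```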
This list is automatically generated from the titles and abstracts of the papers on this site.