Joint Optimization of Energy Consumption and Completion Time in
Federated Learning
- URL: http://arxiv.org/abs/2209.14900v1
- Date: Thu, 29 Sep 2022 16:05:28 GMT
- Title: Joint Optimization of Energy Consumption and Completion Time in
Federated Learning
- Authors: Xinyu Zhou, Jun Zhao, Huimei Han, Claude Guet
- Abstract summary: Federated Learning (FL) is an intriguing distributed machine learning approach due to its privacy-preserving characteristics.
We devise an algorithm to balance the trade-off between energy and execution latency, thereby accommodating different demands and application scenarios.
- Score: 16.127019859725785
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) is an intriguing distributed machine learning
approach due to its privacy-preserving characteristics. To balance the
trade-off between energy and execution latency, and thus accommodate different
demands and application scenarios, we formulate an optimization problem to
minimize a weighted sum of total energy consumption and completion time through
two weight parameters. The optimization variables include bandwidth,
transmission power and CPU frequency of each device in the FL system, where all
devices are linked to a base station and train a global model collaboratively.
Through decomposing the non-convex optimization problem into two subproblems,
we devise a resource allocation algorithm to determine the bandwidth
allocation, transmission power, and CPU frequency for each participating
device. We further present the convergence analysis and computational
complexity of the proposed algorithm. Numerical results show that our proposed
algorithm not only has better performance at different weight parameters (i.e.,
different demands) but also outperforms the state of the art.
Related papers
- Federated Learning With Energy Harvesting Devices: An MDP Framework [5.852486435612777]
Federated learning (FL) requires edge devices to perform local training and exchange information with a parameter server.
A critical challenge in practical FL systems is the rapid energy depletion of battery-limited edge devices.
We apply energy harvesting technique in FL systems to extract ambient energy for continuously powering edge devices.
arXiv Detail & Related papers (2024-05-17T03:41:40Z)
- Device Scheduling for Relay-assisted Over-the-Air Aggregation in Federated Learning [9.735236606901038]
Federated learning (FL) leverages data distributed at the edge of the network to enable intelligent applications.
In this paper, we propose a relay-assisted FL framework, and investigate the device scheduling problem in relay-assisted FL systems.
arXiv Detail & Related papers (2023-12-15T03:04:39Z)
- Semi-Federated Learning: Convergence Analysis and Optimization of A Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve this intractable problem, in which we provide the closed-form solutions to the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z)
- Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for distributed federated learning.
arXiv Detail & Related papers (2023-10-04T01:47:37Z)
- Resource Allocation of Federated Learning for the Metaverse with Mobile Augmented Reality [13.954907748381743]
Metaverse applications via mobile augmented reality (MAR) require rapid and accurate object detection to mix digital data with the real world.
Federated learning (FL) is an intriguing distributed machine learning approach due to its privacy-preserving characteristics.
We formulate an optimization problem to minimize a weighted combination of total energy consumption, completion time and model accuracy.
arXiv Detail & Related papers (2022-11-16T06:37:32Z)
- Multi-Resource Allocation for On-Device Distributed Federated Learning Systems [79.02994855744848]
This work poses a distributed multi-resource allocation scheme for minimizing the weighted sum of latency and energy consumption in the on-device distributed federated learning (FL) system.
Each mobile device in the system engages the model training process within the specified area and allocates its computation and communication resources for deriving and uploading parameters, respectively.
arXiv Detail & Related papers (2022-11-01T14:16:05Z)
- Performance Optimization for Variable Bitwidth Federated Learning in Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
arXiv Detail & Related papers (2022-09-21T08:52:51Z)
- Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z)
- Federated Learning for Energy-limited Wireless Networks: A Partial Model Aggregation Approach [79.59560136273917]
Limited communication resources (bandwidth and energy) and data heterogeneity across devices are the main bottlenecks for federated learning (FL).
We first devise a novel FL framework with partial model aggregation (PMA).
The proposed PMA-FL improves accuracy by 2.72% and 11.6% on two typical heterogeneous datasets.
arXiv Detail & Related papers (2022-04-20T19:09:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.