Cost-Effective Federated Learning Design
- URL: http://arxiv.org/abs/2012.08336v1
- Date: Tue, 15 Dec 2020 14:45:11 GMT
- Title: Cost-Effective Federated Learning Design
- Authors: Bing Luo, Xiang Li, Shiqiang Wang, Jianwei Huang, Leandros Tassiulas
- Abstract summary: Federated learning (FL) is a distributed learning paradigm that enables a large number of devices to collaboratively learn a model without sharing their raw data.
Despite its efficiency and effectiveness, the iterative on-device learning process incurs a considerable cost in terms of learning time and energy consumption.
We analyze how to design adaptive FL that optimally chooses essential control variables to minimize the total cost while ensuring convergence.
- Score: 37.16466118235272
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a distributed learning paradigm that enables a
large number of devices to collaboratively learn a model without sharing their
raw data. Despite its practical efficiency and effectiveness, the iterative
on-device learning process incurs a considerable cost in terms of learning time
and energy consumption, which depends crucially on the number of selected
clients and the number of local iterations in each training round. In this
paper, we analyze how to design adaptive FL that optimally chooses these
essential control variables to minimize the total cost while ensuring
convergence. Theoretically, we analytically establish the relationship between
the total cost and the control variables via the convergence upper bound. To
efficiently solve the cost minimization problem, we develop a low-cost
sampling-based algorithm to learn the convergence-related unknown parameters.
We derive important solution properties that effectively identify the design
principles for different metric preferences. Practically, we evaluate our
theoretical results both in a simulated environment and on a hardware
prototype. Experimental evidence verifies our derived properties and
demonstrates that our proposed solution achieves near-optimal performance for
various datasets, different machine learning models, and heterogeneous system
settings.
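The two control variables named above (how many clients to sample each round and how many local iterations each client runs) pull learning time, energy consumption, and rounds-to-convergence in different directions. The sketch below makes that trade-off concrete. The cost coefficients, the toy convergence proxy rounds_to_converge, and the preference weight rho are illustrative assumptions rather than the paper's actual formulation, which drives an analogous search with an analytical convergence bound whose unknown constants are estimated online from a few sampled rounds.
```python
import itertools

# Illustrative per-round cost model (assumed values, not from the paper):
# each sampled client runs some local iterations, then uploads its update.
TIME_PER_ITER = 0.05      # seconds of local computation per iteration
TIME_PER_UPLOAD = 1.0     # seconds of communication per round
ENERGY_PER_ITER = 0.02    # joules per local iteration per client
ENERGY_PER_UPLOAD = 0.5   # joules per upload per client

def rounds_to_converge(num_clients: int, local_iters: int,
                       a: float = 400.0, b: float = 40.0) -> float:
    """Toy convergence proxy: more local work and more sampled clients
    both reduce the rounds needed, with diminishing returns. The constants
    a and b stand in for the data-dependent unknowns that the paper
    estimates online via low-cost sampling."""
    return a / local_iters + b / num_clients

def total_cost(num_clients: int, local_iters: int, rho: float) -> float:
    """Weighted sum of wall-clock time and total energy, reflecting a
    rho * time + (1 - rho) * energy metric preference."""
    rounds = rounds_to_converge(num_clients, local_iters)
    time_per_round = local_iters * TIME_PER_ITER + TIME_PER_UPLOAD
    energy_per_round = num_clients * (local_iters * ENERGY_PER_ITER
                                      + ENERGY_PER_UPLOAD)
    return rounds * (rho * time_per_round + (1 - rho) * energy_per_round)

# Grid-search the two control variables for a given metric preference.
rho = 0.7  # 1.0 = care only about time, 0.0 = care only about energy
best = min(itertools.product(range(1, 51), range(1, 101)),
           key=lambda kv: total_cost(kv[0], kv[1], rho))
print(f"clients={best[0]}, local_iters={best[1]}, "
      f"cost={total_cost(best[0], best[1], rho):.2f}")
```
Sweeping rho between 0 and 1 shows how the cost-minimizing pair of clients and local iterations shifts with the time-versus-energy preference, which is the kind of metric-preference analysis the abstract refers to.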
Related papers
- Feasible Learning [78.6167929413604]
We introduce Feasible Learning (FL), a sample-centric learning paradigm where models are trained by solving a feasibility problem that bounds the loss for each training sample.
Our empirical analysis, spanning image classification, age regression, and preference optimization in large language models, demonstrates that models trained via FL can learn from data while displaying improved tail behavior compared to ERM, with only a marginal impact on average performance.
arXiv Detail & Related papers (2025-01-24T20:39:38Z) - Collaborative Optimization in Financial Data Mining Through Deep Learning and ResNeXt [4.047576220541502]
This study proposes a multi-task learning framework based on ResNeXt.
The proposed method outperforms baseline approaches on accuracy, F1 score, root mean square error, and other metrics.
arXiv Detail & Related papers (2024-12-23T06:14:15Z) - Learnable Sparse Customization in Heterogeneous Edge Computing [27.201987866208484]
We propose Learnable Personalized Sparsification for heterogeneous Federated learning (FedLPS)
FedLPS learns the importance of model units on local data representation and derives an importance-based sparse pattern to accurately extract personalized data features.
Experiments show that FedLPS outperforms existing approaches in accuracy and training cost.
arXiv Detail & Related papers (2024-12-10T06:14:31Z) - Adaptive Model Pruning and Personalization for Federated Learning over
Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to address the computation and communication constraints of heterogeneous edge devices.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z) - AnycostFL: Efficient On-Demand Federated Learning over Heterogeneous
Edge Devices [20.52519915112099]
We propose a cost-adjustable FL framework, named AnycostFL, that enables diverse edge devices to efficiently perform local updates.
Experimental results indicate that our learning framework can reduce training latency and energy consumption by up to 1.9x while achieving a reasonable global testing accuracy.
arXiv Detail & Related papers (2023-01-08T15:25:55Z) - HyperImpute: Generalized Iterative Imputation with Automatic Model
Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z) - Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two key hurdles in federated learning: stragglers and statistical data heterogeneity.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and to learn a user-specific set of parameters leading to a personalized solution for each client; a toy sketch of this shared-representation-plus-personal-head pattern appears after this list.
arXiv Detail & Related papers (2022-06-05T01:14:46Z) - Cost-Effective Federated Learning in Mobile Edge Networks [37.16466118235272]
Federated learning (FL) is a distributed learning paradigm that enables a large number of mobile devices to collaboratively learn a model without sharing their raw data.
We analyze how to design adaptive FL in mobile edge networks that optimally chooses essential control variables to minimize the total cost.
We develop a low-cost sampling-based algorithm to learn the convergence related unknown parameters.
arXiv Detail & Related papers (2021-09-12T03:02:24Z) - Deep Multi-Fidelity Active Learning of High-dimensional Outputs [17.370056935194786]
We develop a deep neural network-based multi-fidelity model for learning with high-dimensional outputs.
We then propose a mutual information-based acquisition function that extends the predictive entropy principle.
We show the advantage of our method in several applications of computational physics and engineering design.
arXiv Detail & Related papers (2020-12-02T00:02:31Z) - Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
arXiv Detail & Related papers (2020-02-20T15:00:54Z)
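Several entries above rest on one structural idea, most explicitly the straggler-resilient personalization paper: a representation shared globally across clients plus a small head kept private to each client. Below is a minimal toy sketch of that pattern for linear models, as referenced in that entry; the dimensions, learning rate, synthetic tasks, and update schedule are all hypothetical, chosen only to show where the shared and the personal parameters live.
```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_REP, CLIENTS, LR = 10, 4, 5, 0.05

# Shared representation (global part) and one personalized head per client.
W_global = rng.normal(size=(D_IN, D_REP)) * 0.1
heads = [rng.normal(size=(D_REP, 1)) * 0.1 for _ in range(CLIENTS)]

# Hypothetical local datasets: clients share a low-dimensional structure
# (B_true) but each has its own task coefficients, matching the premise
# of a common representation with user-specific parameters.
B_true = rng.normal(size=(D_IN, D_REP))
tasks = []
for _ in range(CLIENTS):
    X = rng.normal(size=(40, D_IN))
    v = rng.normal(size=(D_REP, 1))   # client-specific coefficients
    tasks.append((X, X @ (B_true @ v) + 0.01 * rng.normal(size=(40, 1))))

for rnd in range(200):
    grads = []
    for c, (X, y) in enumerate(tasks):
        # Local step 1: fine-tune the personal head on local data.
        err = X @ W_global @ heads[c] - y
        heads[c] = heads[c] - LR * (W_global.T @ (X.T @ err)) / len(y)
        # Local step 2: gradient of the shared part under the updated head.
        err = X @ W_global @ heads[c] - y
        grads.append((X.T @ err) @ heads[c].T / len(y))
    # Server step: average the clients' shared-part gradients.
    W_global = W_global - LR * np.mean(grads, axis=0)

mse = np.mean([np.mean((X @ W_global @ h - y) ** 2)
               for (X, y), h in zip(tasks, heads)])
print(f"after 200 rounds: mean per-client MSE = {mse:.4f}")
```
In a real deployment only W_global would be communicated; each head stays on its client, which is what makes the resulting model personalized.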
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.