Joint Probability Selection and Power Allocation for Federated Learning
- URL: http://arxiv.org/abs/2401.07756v1
- Date: Mon, 15 Jan 2024 15:09:47 GMT
- Title: Joint Probability Selection and Power Allocation for Federated Learning
- Authors: Ouiame Marnissi, Hajar EL Hammouti, El Houcine Bergou
- Abstract summary: We study the performance of federated learning over wireless networks, where devices with a limited energy budget train a machine learning model.
We formulate a new probabilistic approach to jointly select clients and allocate power optimally.
Our numerical results show that the proposed approach achieves significant gains in energy consumption, completion time, and accuracy compared to the studied benchmarks.
- Score: 2.9364773826704993
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study the performance of federated learning over wireless
networks, where devices with a limited energy budget train a machine learning
model. The federated learning performance depends on the selection of the
clients participating in the learning at each round. Most existing studies
suggest deterministic approaches to client selection, resulting in
challenging optimization problems that are usually solved using heuristics, and
therefore without guarantees on the quality of the final solution. We formulate
a new probabilistic approach to jointly select clients and allocate power
optimally so that the expected number of participating clients is maximized. To
solve the problem, a new alternating algorithm is proposed, where at each step,
the closed-form solutions for user selection probabilities and power
allocations are obtained. Our numerical results show that the proposed approach
achieves significant gains in energy consumption, completion time, and
accuracy compared to the studied benchmarks.
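The abstract specifies an alternating algorithm whose two steps each admit a closed-form solution, but those closed forms are not reproduced in this summary. The sketch below therefore mirrors only the alternating structure: `update_probabilities` and `update_power` are illustrative stand-ins, and all quantities (energy budgets, channel gains, the cost model) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 20                                # number of devices
E_budget = rng.uniform(1.0, 5.0, K)   # hypothetical per-device energy budgets
gain = rng.uniform(0.1, 1.0, K)       # hypothetical channel gains
p_max = 2.0                           # maximum transmit power per device

def update_probabilities(p):
    """Stand-in for the paper's closed-form probability update: favor
    devices whose energy cost at power p is small relative to their budget."""
    cost = p / gain                           # toy per-round energy cost
    return np.clip(E_budget / (cost + 1e-9), 0.0, 1.0)

def update_power(q):
    """Stand-in for the paper's closed-form power update: devices that are
    selected more often transmit at lower power to stretch their budget."""
    return np.clip(p_max / (1.0 + q), 0.0, p_max)

# Alternating optimization: fix one block of variables, solve the other.
p = np.full(K, p_max / 2)
for _ in range(50):
    q = update_probabilities(p)   # step 1: selection probabilities
    p = update_power(q)           # step 2: power allocation

print("expected number of participating clients:", q.sum())
```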
Related papers
- Submodular Maximization Approaches for Equitable Client Selection in Federated Learning [4.167345675621377]
In a conventional Federated Learning framework, client selection for training typically involves random sampling of a subset of clients in each iteration.
This paper introduces two novel methods, namely SUBTRUNC and UNIONFL, designed to address the limitations of random client selection.
arXiv Detail & Related papers (2024-08-24T22:40:31Z)
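SUBTRUNC and UNIONFL are not detailed in the summary above, so as a rough illustration of the submodular-maximization idea, here is the classic greedy algorithm under a cardinality constraint, applied to a facility-location coverage objective over hypothetical client embeddings; everything in this sketch is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_clients = 30
# Hypothetical client embeddings (e.g., summaries of local data); pairwise
# similarity via an RBF kernel.
emb = rng.normal(size=(n_clients, 8))
sim = np.exp(-np.linalg.norm(emb[:, None] - emb[None, :], axis=-1) ** 2)

def coverage(subset):
    """Facility-location objective: how well the subset 'covers' all clients.
    Monotone submodular, so greedy selection enjoys a (1 - 1/e) guarantee."""
    if not subset:
        return 0.0
    return sim[:, list(subset)].max(axis=1).sum()

def greedy_select(k):
    chosen = set()
    for _ in range(k):
        gains = {c: coverage(chosen | {c}) - coverage(chosen)
                 for c in range(n_clients) if c not in chosen}
        chosen.add(max(gains, key=gains.get))  # largest marginal gain wins
    return sorted(chosen)

print(greedy_select(k=5))
```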
- Emulating Full Client Participation: A Long-Term Client Selection Strategy for Federated Learning [48.94952630292219]
We propose a novel client selection strategy designed to emulate the performance achieved with full client participation.
In a single round, we select clients by minimizing the gradient-space estimation error between the client subset and the full client set.
In multi-round selection, we introduce a novel individual fairness constraint, which ensures that clients with similar data distributions have similar frequencies of being selected.
arXiv Detail & Related papers (2024-05-22T12:27:24Z)
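A minimal sketch of the single-round criterion quoted above: greedily grow a subset whose average gradient stays closest to the full-client average. The per-client gradients are synthetic, and the paper's actual estimator (and its multi-round fairness constraint) may differ.

```python
import numpy as np

rng = np.random.default_rng(2)
n_clients, dim = 40, 16
grads = rng.normal(size=(n_clients, dim))   # hypothetical per-client gradients
full_mean = grads.mean(axis=0)              # full-participation average

def greedy_gradient_match(k):
    """Add, at each step, the client that minimizes the distance between the
    subset's mean gradient and the full-client mean (one plausible reading of
    'gradient-space estimation error')."""
    chosen = []
    for _ in range(k):
        best, best_err = None, np.inf
        for c in range(n_clients):
            if c in chosen:
                continue
            err = np.linalg.norm(grads[chosen + [c]].mean(axis=0) - full_mean)
            if err < best_err:
                best, best_err = c, err
        chosen.append(best)
    return chosen

print(greedy_gradient_match(k=6))
```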
- Switchable Decision: Dynamic Neural Generation Networks [98.61113699324429]
We propose a switchable decision to accelerate inference by dynamically assigning resources for each data instance.
Our method benefits from less cost during inference while keeping the same accuracy.
arXiv Detail & Related papers (2024-05-07T17:44:54Z)
- Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
arXiv Detail & Related papers (2023-11-22T01:32:06Z)
- A Comprehensive Survey On Client Selections in Federated Learning [3.438094543455187]
The selection of clients to participate in the training process is a critical factor for the performance of the overall system.
We provide a comprehensive overview of the state-of-the-art client selection techniques in Federated Learning.
arXiv Detail & Related papers (2023-11-12T10:40:43Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
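A plaintext sketch of a Fed-CBS-style selection criterion: score a candidate subset by how far the grouped label distribution of its clients is from uniform, and prefer balanced subsets. The homomorphic-encryption step that makes the measure privacy-preserving is omitted here, and the paper's exact measure may differ; all label counts are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
n_clients, n_classes = 50, 10
# Hypothetical non-IID label counts: each client is skewed to a few classes.
counts = np.array([rng.multinomial(200, rng.dirichlet(0.3 * np.ones(n_classes)))
                   for _ in range(n_clients)])

def class_imbalance(subset):
    """Toy measure: squared distance between the subset's grouped label
    distribution and the uniform distribution (smaller is more balanced)."""
    dist = counts[subset].sum(axis=0).astype(float)
    dist /= dist.sum()
    return float(np.sum((dist - 1.0 / n_classes) ** 2))

# Simple sampling heuristic: among random candidate subsets, keep the most
# class-balanced one.
candidates = [rng.choice(n_clients, size=8, replace=False) for _ in range(100)]
best = min(candidates, key=class_imbalance)
print(sorted(best.tolist()), class_imbalance(best))
```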
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
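A toy numpy rendering of the shared-representation idea described above, assuming linear models: each client refits a personal head on local data, takes a gradient step on a common representation, and the server averages only the shared part. The paper's actual procedure and its speedup guarantees are more involved; all data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
n_clients, d, k = 10, 20, 5
B = rng.normal(size=(d, k)) / np.sqrt(d)                # shared representation
heads = [rng.normal(size=k) for _ in range(n_clients)]  # personal heads

# Hypothetical local datasets that genuinely share a low-dim representation.
B_true = rng.normal(size=(d, k))
data = []
for _ in range(n_clients):
    X = rng.normal(size=(100, d))
    data.append((X, X @ B_true @ rng.normal(size=k)))

lr = 0.05
for _ in range(30):
    B_updates = []
    for i, (X, y) in enumerate(data):
        Z = X @ B
        # Local step 1: refit the personal head by least squares (on-device).
        heads[i] = np.linalg.lstsq(Z, y, rcond=None)[0]
        # Local step 2: gradient step on the shared representation.
        resid = Z @ heads[i] - y
        B_updates.append(B - lr * (X.T @ np.outer(resid, heads[i])) / len(y))
    B = np.mean(B_updates, axis=0)   # server averages only the shared part

err = np.mean([np.linalg.norm(X @ B @ h - y) / np.linalg.norm(y)
               for (X, y), h in zip(data, heads)])
print("mean relative fit error:", round(float(err), 3))
```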
- Stochastic Client Selection for Federated Learning with Volatile Clients [41.591655430723186]
Federated Learning (FL) is a privacy-preserving machine learning paradigm.
In each round of synchronous FL training, only a fraction of available clients are chosen to participate.
We propose E3CS, a stochastic client selection scheme for this setting.
arXiv Detail & Related papers (2020-11-17T16:35:24Z)
- Client Selection in Federated Learning: Convergence Analysis and Power-of-Choice Selection Strategies [29.127689561987964]
Federated learning enables a large number of resource-limited client nodes to cooperatively train a model without data sharing.
We show that biasing client selection towards clients with higher local loss achieves faster error convergence.
We propose Power-of-Choice, a communication- and computation-efficient client selection framework.
arXiv Detail & Related papers (2020-10-03T01:04:17Z)
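Power-of-Choice itself is compact enough to sketch directly: sample a candidate set of d clients, then keep the k with the highest current local loss. The paper samples candidates proportionally to local data size; uniform sampling is used below for brevity, and the losses are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
n_clients = 100
local_loss = rng.exponential(1.0, n_clients)  # hypothetical current local losses

def power_of_choice(d, k):
    """Sample d candidates uniformly (the paper weights by data size),
    then select the k candidates with the largest local loss."""
    cand = rng.choice(n_clients, size=d, replace=False)
    return cand[np.argsort(local_loss[cand])[-k:]]

print(power_of_choice(d=20, k=5))
```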
- Combining Deep Learning and Optimization for Security-Constrained Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of automatic primary response (APR) within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z)