CDMA: A Practical Cross-Device Federated Learning Algorithm for General
Minimax Problems
- URL: http://arxiv.org/abs/2105.14216v4
- Date: Thu, 29 Jun 2023 02:41:05 GMT
- Title: CDMA: A Practical Cross-Device Federated Learning Algorithm for General
Minimax Problems
- Authors: Jiahao Xie, Chao Zhang, Zebang Shen, Weijie Liu, Hui Qian
- Abstract summary: Minimax problems arise in a wide range of important applications including robust adversarial learning and Generative Adversarial Network (GAN) training.
We develop the first practical algorithm named CDMA for general minimax problems in the cross-device FL setting.
CDMA is based on a Start-Immediately-With-Enough-Responses mechanism, in which the server first signals a subset of clients to perform local computation and then starts to aggregate the local results reported by clients once it receives responses from enough clients in each round.
- Score: 21.595391808043484
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Minimax problems arise in a wide range of important applications including
robust adversarial learning and Generative Adversarial Network (GAN) training.
Recently, algorithms for minimax problems in the Federated Learning (FL)
paradigm have received considerable interest. Existing federated algorithms for
general minimax problems require the full aggregation (i.e., aggregation of
local model information from all clients) in each training round. Thus, they
are inapplicable to an important setting of FL known as the cross-device
setting, which involves numerous unreliable mobile/IoT devices. In this paper,
we develop the first practical algorithm named CDMA for general minimax
problems in the cross-device FL setting. CDMA is based on a
Start-Immediately-With-Enough-Responses mechanism, in which the server first
signals a subset of clients to perform local computation and then starts to
aggregate the local results reported by clients once it receives responses from
enough clients in each round. With this mechanism, CDMA is resilient to low
client availability. In addition, CDMA incorporates a lightweight global
correction in the local update steps of clients, which mitigates the
impact of slow network connections. We establish theoretical guarantees of CDMA
under different choices of hyperparameters and conduct experiments on AUC
maximization, robust adversarial network training, and GAN training tasks.
Theoretical and experimental results demonstrate the efficiency of CDMA.
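The abstract sketches two mechanisms: a server that starts aggregating as soon as enough of the signaled clients have responded, and a lightweight global correction applied inside each client's local updates. Below is a minimal Python/NumPy simulation of how such a round could look; the sample sizes, the toy local objective, and the MIME-style form of the correction are illustrative assumptions rather than the paper's exact algorithm, and the minimax pair is reduced to a single minimization variable for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, num_clients = 10, 100
num_signaled, num_needed = 20, 8   # signal 20 clients, aggregate after the first 8 responses
local_steps, lr = 5, 0.1

x = np.zeros(dim)                  # global model (the minimax structure is simplified away)
global_grad_est = np.zeros(dim)    # lightweight global correction broadcast to clients

def local_grad(client_id, model):
    # Toy heterogeneous quadratic loss standing in for a client's local objective.
    return model - (client_id / num_clients)

for rnd in range(50):
    signaled = rng.choice(num_clients, size=num_signaled, replace=False)
    # Unreliable cross-device clients: responses come back in random order, and the
    # server starts aggregating as soon as `num_needed` responses have arrived.
    responders = rng.permutation(signaled)[:num_needed]

    deltas, anchor_grads = [], []
    for cid in responders:
        anchor = local_grad(cid, x)            # client gradient at the broadcast model
        local = x.copy()
        for _ in range(local_steps):
            g = local_grad(cid, local)
            # Drift-corrected local step: replace the client-specific bias with the
            # server's global gradient estimate (an assumed, MIME-style correction).
            local -= lr * (g - anchor + global_grad_est)
        deltas.append(local - x)
        anchor_grads.append(anchor)

    x += np.mean(deltas, axis=0)                     # aggregate only the received responses
    global_grad_est = np.mean(anchor_grads, axis=0)  # refresh the broadcast correction
```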
Related papers
- Design Optimization of NOMA Aided Multi-STAR-RIS for Indoor Environments: A Convex Approximation Imitated Reinforcement Learning Approach [51.63921041249406]
Non-orthogonal multiple access (NOMA) enables multiple users to share the same frequency band and can be combined with simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs); however, deploying STAR-RISs indoors presents challenges in interference mitigation, power consumption, and real-time configuration.
A novel network architecture utilizing multiple access points (APs), STAR-RISs, and NOMA is proposed for indoor communication.
arXiv Detail & Related papers (2024-06-19T07:17:04Z)
- FedRDMA: Communication-Efficient Cross-Silo Federated LLM via Chunked RDMA Transmission [5.199151525305899]
FedRDMA is a communication-efficient cross-silo FL system that integrates RDMA into the FL communication protocol.
We show that FedRDMA can achieve up to a 3.8$\times$ speedup in communication efficiency compared to traditional TCP/IP-based FL systems.
arXiv Detail & Related papers (2024-03-01T09:14:10Z)
- Wirelessly Powered Federated Learning Networks: Joint Power Transfer, Data Sensing, Model Training, and Resource Allocation [24.077525032187893]
Federated learning (FL) has found many successes in wireless networks.
The implementation of FL has been hindered by the energy limitations of mobile devices (MDs) and the availability of training data at MDs.
This work studies how to integrate wireless power transfer into FL to build sustainable FL networks.
arXiv Detail & Related papers (2023-08-09T13:38:58Z)
- Federated Gradient Matching Pursuit [17.695717854068715]
Traditional machine learning techniques require centralizing all training data on one server or data hub.
In particular, federated learning (FL) provides such a solution to learn a shared model while keeping training data at local clients.
We propose a novel algorithmic framework, federated gradient matching pursuit (FedGradMP), to solve the sparsity constrained minimization problem in the FL setting.
arXiv Detail & Related papers (2023-02-20T16:26:29Z)
- Collaborative Intelligent Reflecting Surface Networks with Multi-Agent Reinforcement Learning [63.83425382922157]
Intelligent reflecting surface (IRS) is envisioned to be widely applied in future wireless networks.
In this paper, we investigate a multi-user communication system assisted by cooperative IRS devices with the capability of energy harvesting.
arXiv Detail & Related papers (2022-03-26T20:37:14Z)
- Real-Time GPU-Accelerated Machine Learning Based Multiuser Detection for 5G and Beyond [70.81551587109833]
Nonlinear beamforming filters can significantly outperform linear approaches in stationary scenarios with massive connectivity.
One of the main challenges comes from the real-time implementation of these algorithms.
This paper explores the acceleration of APSM-based algorithms through massive parallelization.
arXiv Detail & Related papers (2022-01-13T15:20:45Z)
- Low-Latency Federated Learning over Wireless Channels with Differential Privacy [142.5983499872664]
In federated learning (FL), model training is distributed over clients and local models are aggregated by a central server.
In this paper, we aim to minimize FL training delay over wireless channels, constrained by overall training performance as well as each client's differential privacy (DP) requirement.
arXiv Detail & Related papers (2021-06-20T13:51:18Z)
- Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge Computing [113.52575069030192]
Big data, including applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones and vehicles.
Due to the limitations of communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
arXiv Detail & Related papers (2020-10-02T10:41:59Z)
- Scheduling Policy and Power Allocation for Federated Learning in NOMA Based MEC [21.267954799102874]
Federated learning (FL) is a highly pursued machine learning technique that can train a model centrally while keeping data distributed.
We propose a new scheduling policy and power allocation scheme using non-orthogonal multiple access (NOMA) settings to maximize the weighted sum data rate.
Simulation results show that the proposed scheduling and power allocation scheme can help achieve a higher FL testing accuracy in NOMA based wireless networks.
arXiv Detail & Related papers (2020-06-21T23:07:41Z)
- A Compressive Sensing Approach for Federated Learning over Massive MIMO Communication Systems [82.2513703281725]
Federated learning is a privacy-preserving approach to train a global model at a central server by collaborating with wireless devices.
We present a compressive sensing approach for federated learning over massive multiple-input multiple-output communication systems.
arXiv Detail & Related papers (2020-03-18T05:56:27Z)