Federated Frank-Wolfe Algorithm
- URL: http://arxiv.org/abs/2408.10090v1
- Date: Mon, 19 Aug 2024 15:31:06 GMT
- Title: Federated Frank-Wolfe Algorithm
- Authors: Ali Dadras, Sourasekhar Banerjee, Karthik Prakhya, Alp Yurtsever
- Abstract summary: We propose a Federated Frank-Wolfe Algorithm (FedFW) for constrained machine learning problems.
FedFW features data privacy, low per-iteration cost, and communication of sparse signals.
We show that a stochastic variant of FedFW finds a solution within $O(\varepsilon^{-3})$ iterations in the convex setting.
- Score: 7.124736158080938
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) has gained a lot of attention in recent years for building privacy-preserving collaborative learning systems. However, FL algorithms for constrained machine learning problems are still limited, particularly when the projection step is costly. To this end, we propose a Federated Frank-Wolfe Algorithm (FedFW). FedFW features data privacy, low per-iteration cost, and communication of sparse signals. In the deterministic setting, FedFW achieves an $\varepsilon$-suboptimal solution within $O(\varepsilon^{-2})$ iterations for smooth and convex objectives, and $O(\varepsilon^{-3})$ iterations for smooth but non-convex objectives. Furthermore, we present a stochastic variant of FedFW and show that it finds a solution within $O(\varepsilon^{-3})$ iterations in the convex setting. We demonstrate the empirical performance of FedFW on several machine learning tasks.
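For intuition, below is a minimal sketch of a federated Frank-Wolfe round in Python, assuming the textbook pattern: clients send local gradients, the server averages them, calls a linear minimization oracle (LMO) over an $\ell_1$-ball, and takes the classical FW step. The names (`lmo_l1_ball`, `fedfw_sketch`) and the gradient-sharing pattern are illustrative assumptions; the actual FedFW update and its sparse-signal communication scheme are specified in the paper.

```python
import numpy as np

def lmo_l1_ball(grad, radius=1.0):
    """Linear minimization oracle over the l1-ball:
    argmin_{||s||_1 <= radius} <grad, s> is a signed vertex."""
    s = np.zeros_like(grad)
    i = np.argmax(np.abs(grad))
    s[i] = -radius * np.sign(grad[i])
    return s

def fedfw_sketch(client_data, dim, rounds=100, radius=1.0):
    """Hypothetical federated Frank-Wolfe loop for least squares:
    clients report local gradients, the server averages them,
    calls the LMO, and takes the classical FW step."""
    x = np.zeros(dim)
    for t in range(rounds):
        grads = [A.T @ (A @ x - b) / len(b) for A, b in client_data]
        g = np.mean(grads, axis=0)   # server-side aggregation
        s = lmo_l1_ball(g, radius)   # projection-free direction
        gamma = 2.0 / (t + 2)        # standard FW step size
        x = x + gamma * (s - x)      # stays inside the l1-ball
    return x

# toy run: two clients sharing a sparse ground truth
rng = np.random.default_rng(0)
x_true = np.zeros(20)
x_true[3] = 0.5
clients = []
for _ in range(2):
    A = rng.normal(size=(50, 20))
    clients.append((A, A @ x_true))
print(fedfw_sketch(clients, dim=20)[3])  # approaches 0.5
```

Because every iterate is a convex combination of $\ell_1$-ball vertices, no projection step is ever needed, which is exactly what makes FW-type methods attractive when projection is costly.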
Related papers
- Solving a Class of Non-Convex Minimax Optimization in Federated Learning [84.98927714326908]
Minimax problems arise throughout machine learning applications, ranging from model training to large-scale learning.
We propose a class of algorithms for non-convex minimax problems that reduce the complexity to $O(\varepsilon^{-6})$.
We prove that FedSGDA-M has the best sample complexity of $O(\kappa^{2}\varepsilon^{-3})$ and the best-known communication complexity of $O(\kappa^{2}\varepsilon^{-3})$.
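For a rough sense of the descent-ascent skeleton such federated minimax methods build on, here is a hypothetical local-SGDA sketch; the momentum-based variance reduction that distinguishes FedSGDA-M is omitted, and the function names and toy objective are assumptions, not the paper's algorithm.

```python
import numpy as np

def local_sgda(x, y, grad_x, grad_y, steps=5, lr=0.05):
    """A few local descent (in x) / ascent (in y) steps on one client."""
    for _ in range(steps):
        gx, gy = grad_x(x, y), grad_y(x, y)
        x, y = x - lr * gx, y + lr * gy
    return x, y

def fed_sgda(clients, rounds=100):
    """Hypothetical federated SGDA: local descent-ascent on each
    client, then server-side averaging of the (x, y) iterates."""
    x, y = 1.0, 1.0
    for _ in range(rounds):
        updates = [local_sgda(x, y, gx, gy) for gx, gy in clients]
        x = float(np.mean([u[0] for u in updates]))
        y = float(np.mean([u[1] for u in updates]))
    return x, y

# toy minimax: f_i(x, y) = a_i x^2 / 2 + x y - y^2 / 2, saddle at (0, 0)
clients = [(lambda x, y, a=a: a * x + y,  # df_i/dx
            lambda x, y: x - y)           # df_i/dy
           for a in (0.5, 1.0, 2.0)]
print(fed_sgda(clients))  # approaches the saddle point (0, 0)
```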
arXiv Detail & Related papers (2023-10-05T15:48:41Z)
- Variance-Reduced Conservative Policy Iteration [45.69105313297521]
We study the sample complexity of reducing reinforcement learning to a sequence of empirical risk minimization problems over the policy space.
We propose a variance-reduced variant of Conservative Policy Iteration.
arXiv Detail & Related papers (2022-12-12T23:31:24Z)
- Local Stochastic Bilevel Optimization with Momentum-Based Variance Reduction [104.41634756395545]
We study Federated Bilevel Optimization problems. Specifically, we first propose FedBiO, a deterministic gradient-based algorithm.
We show that FedBiO has a complexity of $O(\epsilon^{-1.5})$.
Our algorithms show superior performance compared to other baselines in numerical experiments.
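As a shape-of-the-algorithm illustration only, the sketch below runs a double-loop bilevel method in one dimension: an inner gradient loop approximates the lower-level solution, and the server averages per-client hypergradients. FedBiO's actual hypergradient estimator and communication schedule are in the paper; everything named here is hypothetical.

```python
import numpy as np

def fed_bilevel_sketch(clients, rounds=200, lr=0.1, inner_steps=10):
    """Hypothetical one-dimensional federated bilevel loop.
    Lower level (shared): g(x, y) = (y - x)^2 / 2, so y*(x) = x and
    dy*/dx = 1; the implicit-function hypergradient reduces to
    dF_i/dx = df_i/dx + df_i/dy.
    Upper level (per client): f_i(x, y) = c_i (y - t_i)^2 / 2,
    so df_i/dx = 0 and df_i/dy = c_i (y - t_i)."""
    x = 0.0
    for _ in range(rounds):
        y = 0.0
        for _ in range(inner_steps):      # inner GD on g(x, .)
            y -= 0.5 * (y - x)            # converges to y*(x) = x
        hgrads = [c * (y - t) for c, t in clients]  # per-client hypergradient
        x -= lr * float(np.mean(hgrads))  # server-averaged outer step
    return x

# clients (c_i, t_i); F(x) is minimized at the c-weighted mean of t_i
clients = [(1.0, 0.0), (1.0, 2.0), (2.0, 4.0)]
print(fed_bilevel_sketch(clients))  # ~ (0 + 2 + 8) / 4 = 2.5
```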
arXiv Detail & Related papers (2022-05-03T16:40:22Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients [124.48732110742623]
We propose a novel framework by integrating blockchain into Federated Learning (FL).
BLADE-FL performs well in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
However, it gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating behavior.
arXiv Detail & Related papers (2020-12-02T12:18:27Z)
- Accelerated Stochastic Gradient-free and Projection-free Methods [37.15461647823691]
We propose an accelerated stochastic zeroth-order Frank-Wolfe method (Acc-SZOFW) based on the variance-reduction technique of SPIDER/SpiderBoost.
To relax the large batches required in Acc-SZOFW, we further propose a novel accelerated stochastic zeroth-order Frank-Wolfe method (Acc-SZOFW*) based on a new variance-reduction technique of STORM.
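A minimal sketch of the gradient-free, projection-free combination: estimate the gradient from function values by Gaussian smoothing and feed it to a Frank-Wolfe step. The SPIDER/STORM variance reduction that makes Acc-SZOFW(*) accelerated is omitted, and all names here are illustrative.

```python
import numpy as np

def zo_gradient(f, x, rng, mu=1e-4, num_dirs=20):
    """Gaussian-smoothing zeroth-order gradient estimate:
    average of (f(x + mu*u) - f(x)) / mu * u over random u."""
    g = np.zeros_like(x)
    for _ in range(num_dirs):
        u = rng.normal(size=x.shape)
        g += (f(x + mu * u) - f(x)) / mu * u
    return g / num_dirs

def zo_frank_wolfe(f, dim, rounds=200, radius=1.0, seed=0):
    """Plain zeroth-order Frank-Wolfe over the l1-ball; the variance
    reduction used by Acc-SZOFW(*) is omitted in this sketch."""
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    for t in range(rounds):
        g = zo_gradient(f, x, rng)
        s = np.zeros(dim)
        i = np.argmax(np.abs(g))
        s[i] = -radius * np.sign(g[i])  # l1-ball LMO
        x += 2.0 / (t + 2) * (s - x)    # projection-free update
    return x

target = np.zeros(10)
target[0] = 0.8  # optimum lies inside the unit l1-ball
print(zo_frank_wolfe(lambda x: float(np.sum((x - target) ** 2)), dim=10)[0])
```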
arXiv Detail & Related papers (2020-07-16T20:50:15Z)
- Safe Learning under Uncertain Objectives and Constraints [66.05180398174286]
We consider non-convex optimization problems under unknown yet safety-critical constraints.
Such problems naturally arise in a variety of domains including robotics, manufacturing, and medical procedures.
A crucial component of our analysis is to introduce and apply a technique called shrinkage in the context of safe optimization.
arXiv Detail & Related papers (2020-06-23T20:51:00Z)
- How Does Momentum Help Frank Wolfe? [81.95857252228537]
We unveil the connections between Frank-Wolfe (FW) type algorithms and the momentum used in Accelerated Gradient Methods (AGM).
On the negative side, these connections illustrate why momentum is unlikely to be effective for FW type algorithms.
The encouraging message behind this link, on the other hand, is that momentum is useful for FW on a class of problems.
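The idea can be sketched by applying the LMO to a running average of gradients rather than the current gradient; this is only a schematic of a momentum-guided FW update under that assumption, not the paper's exact scheme.

```python
import numpy as np

def momentum_fw(grad, dim, rounds=500, radius=1.0, beta=0.9):
    """Frank-Wolfe whose LMO sees an exponential moving average of
    gradients instead of the current gradient (schematic only)."""
    x, m = np.zeros(dim), np.zeros(dim)
    for t in range(rounds):
        m = beta * m + (1 - beta) * grad(x)  # momentum-averaged gradient
        s = np.zeros(dim)
        i = np.argmax(np.abs(m))
        s[i] = -radius * np.sign(m[i])       # LMO applied to m, not grad(x)
        x += 2.0 / (t + 2) * (s - x)
    return x

# toy objective f(x) = ||x - target||^2 with target inside the l1-ball
target = np.full(5, 0.15)  # l1-norm 0.75 < 1
print(momentum_fw(lambda x: 2 * (x - target), dim=5))
```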
arXiv Detail & Related papers (2020-06-19T13:06:00Z)
- A Newton Frank-Wolfe Method for Constrained Self-Concordant Minimization [60.90222082871258]
We demonstrate how to scalably solve a class of constrained self-concordant minimization problems using linear minimization oracles (LMO) over the constraint set.
We prove that the number of LMO calls of our method is nearly the same as that of the Frank-Wolfe method in the L-smooth case.
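Schematically, each outer iteration minimizes the local quadratic (Newton) model over the constraint set with plain Frank-Wolfe, so the method touches the feasible set only through LMO calls. The sketch below omits the damping and self-concordance machinery of the paper; the names and the $\ell_1$-ball choice are assumptions.

```python
import numpy as np

def fw_quadratic(g, H, x0, radius=1.0, steps=25):
    """Solve the Newton subproblem min_{||z||_1 <= r} <g, z - x0>
    + (z - x0)^T H (z - x0) / 2 with plain Frank-Wolfe, so the
    outer method only ever touches the set through the LMO."""
    z = x0.copy()
    for t in range(steps):
        q = g + H @ (z - x0)            # gradient of the quadratic model
        s = np.zeros_like(z)
        i = np.argmax(np.abs(q))
        s[i] = -radius * np.sign(q[i])  # l1-ball LMO
        z += 2.0 / (t + 2) * (s - z)
    return z

def newton_fw(grad, hess, dim, outer=10, radius=1.0):
    """Hypothetical Newton Frank-Wolfe outer loop (the paper's damping
    and self-concordant analysis are omitted)."""
    x = np.zeros(dim)
    for _ in range(outer):
        x = fw_quadratic(grad(x), hess(x), x, radius)
    return x

target = np.array([0.3, -0.2, 0.1])   # inside the unit l1-ball
grad = lambda x: x - target           # f(x) = ||x - target||^2 / 2
hess = lambda x: np.eye(len(x))
print(newton_fw(grad, hess, dim=3))   # ~ target
```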
arXiv Detail & Related papers (2020-02-17T15:28:31Z)