Byzantine-Resilient Federated PCA and Low Rank Column-wise Sensing
- URL: http://arxiv.org/abs/2309.14512v3
- Date: Fri, 9 Aug 2024 15:01:03 GMT
- Title: Byzantine-Resilient Federated PCA and Low Rank Column-wise Sensing
- Authors: Ankit Pratap Singh, Namrata Vaswani
- Abstract summary: This work considers two related learning problems in a federated, attack-prone setting: federated principal components analysis (PCA) and federated low rank column-wise sensing (LRCS).
The node attacks are assumed to be Byzantine, which means that the attackers are omniscient and can collude.
We introduce a novel provably Byzantine-resilient, communication-efficient, and sample-efficient algorithm, called Subspace-Median, that solves the PCA problem and is a key part of the solution for the LRCS problem.
- Score: 17.243528378512778
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work considers two related learning problems in a federated, attack-prone setting: federated principal components analysis (PCA) and federated low rank column-wise sensing (LRCS). The node attacks are assumed to be Byzantine, which means that the attackers are omniscient and can collude. We introduce a novel provably Byzantine-resilient, communication-efficient, and sample-efficient algorithm, called Subspace-Median, that solves the PCA problem and is a key part of the solution for the LRCS problem. We also study the most natural Byzantine-resilient solution for federated PCA, a geometric median based modification of the federated power method, and explain why it is not useful. Our second main contribution is a complete alternating gradient descent (GD) and minimization (altGDmin) algorithm for Byzantine-resilient horizontally federated LRCS, along with sample and communication complexity guarantees for it. Extensive simulation experiments corroborate our theoretical guarantees. The ideas that we develop for LRCS extend easily to other LR recovery problems as well.
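To make the Subspace-Median idea concrete, below is a minimal single-machine sketch, assuming each of L nodes sends the center an orthonormal basis for its local top-r subspace estimate; the center takes the geometric median of the corresponding projection matrices (via Weiszfeld's iteration, which tolerates fewer than half of the inputs being arbitrarily corrupted) and reads off the aggregate subspace from the median's top eigenvectors. Function names, initialization, and stopping details here are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def geometric_median(mats, iters=100, tol=1e-8):
    """Weiszfeld iteration for the geometric median of a set of matrices
    (treated as flattened vectors): the minimizer of sum_l ||P - P_l||_F,
    which is robust to fewer than half of the P_l being arbitrary."""
    pts = np.stack([M.ravel() for M in mats])
    med = pts.mean(axis=0)                         # initialize at the average
    for _ in range(iters):
        dists = np.maximum(np.linalg.norm(pts - med, axis=1), tol)
        w = 1.0 / dists                            # Weiszfeld weights
        new_med = (w[:, None] * pts).sum(axis=0) / w.sum()
        if np.linalg.norm(new_med - med) < tol:
            break
        med = new_med
    return med.reshape(mats[0].shape)

def subspace_median(local_bases, r):
    """Aggregate node subspace estimates U_l (n x r, orthonormal columns)
    via the geometric median of their projection matrices U_l @ U_l.T."""
    P_med = geometric_median([U @ U.T for U in local_bases])
    # top-r eigenvectors of the symmetrized median give the subspace
    _, eigvecs = np.linalg.eigh((P_med + P_med.T) / 2)
    return eigvecs[:, -r:]
```

As a toy usage: if most nodes estimate bases of the same rank-r subspace while a few Byzantine nodes return arbitrary orthonormal matrices, subspace_median(bases, r) stays close to the honest subspace, whereas simply averaging the projection matrices does not.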
Related papers
- Efficient Federated Low Rank Matrix Completion [18.471262688125645]
We develop and analyze a solution called Alternating GD and Minimization (AltGDmin) for solving the low rank matrix completion (LRMC) problem (a minimal sketch of the alternating recipe follows this entry).
Our theoretical guarantees imply that AltGDmin is the most communication-efficient solution in a federated setting.
We show how our lemmas can be used to provide an improved sample complexity guarantee for AltMin.
arXiv Detail & Related papers (2024-05-10T16:12:35Z)
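As a rough illustration of the AltGDmin recipe above, here it is applied to the LRCS model from the main paper (y_k = A_k U b_k, with U an n x r basis): each iteration solves for every column's coefficients b_k in closed form (the minimization step) and takes one projected gradient step on the shared factor U (the GD step). This single-machine sketch is a simplification under assumed defaults; the initialization, step size, and federated/Byzantine-resilient aggregation follow the papers, not this code.

```python
import numpy as np

def altgdmin_lrcs(ys, As, r, iters=50, eta=0.5):
    """Alternating GD (on U) and minimization (on the b_k) for low-rank
    column-wise sensing, y_k = A_k @ (U @ b_k). Single-machine sketch; in
    the federated version each node owns a subset of columns and only U
    and gradients w.r.t. U are communicated (and robustly aggregated)."""
    # crude spectral initialization: top-r left singular vectors of [A_k^T y_k]
    X0 = np.column_stack([A.T @ y for A, y in zip(As, ys)])
    U = np.linalg.svd(X0, full_matrices=False)[0][:, :r]
    step = eta / max(np.linalg.norm(X0, 2) ** 2, 1e-12)  # heuristic step size
    for _ in range(iters):
        # min step: closed-form least squares for each column's coefficients
        B = np.column_stack([np.linalg.lstsq(A @ U, y, rcond=None)[0]
                             for A, y in zip(As, ys)])
        # GD step on U: gradient of sum_k ||y_k - A_k U b_k||^2
        grad = np.zeros_like(U)
        for k, (A, y) in enumerate(zip(As, ys)):
            resid = A @ (U @ B[:, k]) - y
            grad += np.outer(A.T @ resid, B[:, k])
        U = np.linalg.qr(U - step * grad)[0]   # re-orthonormalize
    return U, B
```

Note that the per-iteration communication is just the n x r gradient (or U itself), which is what makes this design attractive in a federated setting.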
- Decentralized Federated Policy Gradient with Byzantine Fault-Tolerance and Provably Fast Convergence [21.935405256685307]
In Federated Reinforcement Learning (FRL), agents aim to collaboratively learn a common task, while each agent acts in its local environment without exchanging raw trajectories.
We first propose a new centralized Byzantine fault-tolerant policy gradient (PG) algorithm that improves existing methods by relying only on assumptions standard for non-fault-tolerant PG.
arXiv Detail & Related papers (2024-01-07T14:06:06Z)
- Small Object Detection via Coarse-to-fine Proposal Generation and Imitation Learning [52.06176253457522]
We propose a two-stage framework tailored for small object detection based on the Coarse-to-fine pipeline and Feature Imitation learning.
CFINet achieves state-of-the-art performance on the large-scale small object detection benchmarks, SODA-D and SODA-A.
arXiv Detail & Related papers (2023-08-18T13:13:09Z)
- PARL: A Unified Framework for Policy Alignment in Reinforcement Learning from Human Feedback [106.63518036538163]
We present a novel unified bilevel optimization-based framework, PARL, formulated to address the recently highlighted critical issue of policy alignment in reinforcement learning.
Our framework addresses these concerns by explicitly parameterizing the distribution of the upper alignment objective (reward design) by the lower optimal variable.
Our empirical results substantiate that the proposed PARL can address the alignment concerns in RL by showing significant improvements.
arXiv Detail & Related papers (2023-08-03T18:03:44Z)
- Resilient Output Consensus Control of Heterogeneous Multi-agent Systems against Byzantine Attacks: A Twin Layer Approach [23.824617731137877]
We study the problem of cooperative control of heterogeneous multi-agent systems (MASs) against Byzantine attacks.
Inspired by the concept of Digital Twin, a new hierarchical protocol equipped with a virtual twin layer (TL) is proposed.
arXiv Detail & Related papers (2023-03-22T18:23:21Z)
- A novel approach for Fair Principal Component Analysis based on eigendecomposition [10.203602318836444]
We propose a novel PCA algorithm which tackles fairness issues by means of a simple strategy comprising a one-dimensional search.
Our findings are consistent across several real-world situations as well as in scenarios with both unbalanced and balanced datasets.
arXiv Detail & Related papers (2022-08-24T08:20:16Z)
- Actor-Critic based Improper Reinforcement Learning [61.430513757337486]
We consider an improper reinforcement learning setting where a learner is given $M$ base controllers for an unknown Markov decision process.
We propose two algorithms: (1) a Policy Gradient-based approach; and (2) an algorithm that can switch between a simple Actor-Critic scheme and a Natural Actor-Critic scheme.
arXiv Detail & Related papers (2022-07-19T05:55:02Z)
- Byzantine-Robust Federated Linear Bandits [27.77095339417368]
We study a linear bandit optimization problem in a federated setting where a large collection of distributed agents collaboratively learn a common linear bandit model.
Standard federated learning algorithms are vulnerable to Byzantine attacks on even a small fraction of agents.
We show that our proposed algorithm is robust to Byzantine attacks on fewer than half of the agents and achieves a sublinear $\tilde{\mathcal{O}}(T^{3/4})$ regret with $\mathcal{O}(\sqrt{T})$ steps of communication.
arXiv Detail & Related papers (2022-04-03T20:22:26Z)
- Navigating to the Best Policy in Markov Decision Processes [68.8204255655161]
We investigate the active pure exploration problem in Markov Decision Processes.
The agent sequentially selects actions and, from the resulting system trajectory, aims at identifying the best policy as fast as possible.
arXiv Detail & Related papers (2021-06-05T09:16:28Z)
- SUNRISE: A Simple Unified Framework for Ensemble Learning in Deep Reinforcement Learning [102.78958681141577]
We present SUNRISE, a simple unified ensemble method, which is compatible with various off-policy deep reinforcement learning algorithms.
SUNRISE integrates two key ingredients: (a) ensemble-based weighted Bellman backups, which re-weight target Q-values based on uncertainty estimates from a Q-ensemble, and (b) an inference method that selects actions using the highest upper-confidence bounds for efficient exploration (a minimal sketch of both ingredients follows this entry).
arXiv Detail & Related papers (2020-07-09T17:08:44Z)
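Here is a minimal sketch of the two SUNRISE ingredients above, assuming a NumPy setting where each ensemble member is a callable returning per-action Q-values; the sigmoid re-weighting and the mean + std upper-confidence rule mirror the summary, while the hyperparameters and the use of a mean ensemble target are simplifying assumptions.

```python
import numpy as np

def ucb_action(q_ensemble, state, lam=1.0):
    """UCB-style action selection: maximize ensemble mean plus
    lam times ensemble standard deviation (optimism for exploration)."""
    qs = np.stack([q(state) for q in q_ensemble])     # (N, num_actions)
    return int(np.argmax(qs.mean(axis=0) + lam * qs.std(axis=0)))

def weighted_bellman_targets(q_ensemble, rewards, next_states,
                             gamma=0.99, temp=10.0):
    """Bellman targets plus per-sample weights that shrink toward 0.5
    when the ensemble disagrees: sigmoid(-std * temp) + 0.5."""
    # (N, batch, num_actions): each member's Q-values at the next states
    q_next = np.stack([np.stack([q(s) for s in next_states])
                       for q in q_ensemble])
    v_next = q_next.max(axis=2)                       # (N, batch) greedy values
    std = v_next.std(axis=0)                          # ensemble disagreement
    weights = 1.0 / (1.0 + np.exp(std * temp)) + 0.5  # in (0.5, 1.0]
    targets = np.asarray(rewards) + gamma * v_next.mean(axis=0)
    return targets, weights
```

The weights then scale each sample's squared Bellman error when training the ensemble members, damping error propagation from uncertain targets.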
- The Simulator: Understanding Adaptive Sampling in the Moderate-Confidence Regime [52.38455827779212]
We propose a novel technique for analyzing adaptive sampling called the Simulator.
We prove the first instance-based lower bounds for the top-k problem which incorporate the appropriate log-factors.
Our new analysis inspires a simple and near-optimal algorithm for the best-arm and top-k identification problems, the first practical algorithm of its kind for the latter problem.
arXiv Detail & Related papers (2017-02-16T23:42:02Z)