A First Order Meta Stackelberg Method for Robust Federated Learning
- URL: http://arxiv.org/abs/2306.13800v3
- Date: Sun, 16 Jul 2023 20:32:56 GMT
- Title: A First Order Meta Stackelberg Method for Robust Federated Learning
- Authors: Yunian Pan, Tao Li, Henger Li, Tianyi Xu, Zizhan Zheng, and Quanyan Zhu
- Abstract summary: This work models adversarial federated learning as a Bayesian Stackelberg Markov game (BSMG).
We propose meta-Stackelberg learning (meta-SL), a provably efficient meta-learning algorithm, to solve the equilibrium strategy in BSMG.
We demonstrate that meta-SL converges to the first-order $\varepsilon$-equilibrium point in $O(\varepsilon^{-2})$ gradient iterations, with $O(\varepsilon^{-4})$ samples needed per iteration.
- Score: 19.130600532727062
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Previous research has shown that federated learning (FL) systems are exposed
to an array of security risks. Although several defensive strategies have been proposed, they tend to be non-adaptive and tailored to specific attack types, rendering them ineffective against unpredictable or adaptive threats.
This work models adversarial federated learning as a Bayesian Stackelberg
Markov game (BSMG) to capture the defender's incomplete information about various
attack types. We propose meta-Stackelberg learning (meta-SL), a provably
efficient meta-learning algorithm, to solve the equilibrium strategy in BSMG,
leading to an adaptable FL defense. We demonstrate that meta-SL converges to
the first-order $\varepsilon$-equilibrium point in $O(\varepsilon^{-2})$
gradient iterations, with $O(\varepsilon^{-4})$ samples needed per iteration,
matching the state of the art. Empirical evidence indicates that our
meta-Stackelberg framework performs exceptionally well against potent model
poisoning and backdoor attacks of an uncertain nature.
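For intuition only, here is a minimal Python sketch of the meta-Stackelberg idea: meta-train a defense policy against attack types drawn from a Bayesian prior, using sampled gradient estimates. Every function, constant, and dynamic below is a hypothetical stand-in, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # dimension of the defender's policy parameters (illustrative)

def attacker_best_response(theta, attack_type):
    # Stand-in for the follower's (attacker's) best response to theta.
    return attack_type * np.tanh(theta)

def defender_loss(theta, attack):
    # Stand-in for the defender's negative utility in the Markov game.
    return np.sum((theta - attack) ** 2)

def grad_estimate(theta, attack_type, n_samples=32, sigma=0.1):
    # Sampled zeroth-order gradient estimate; the abstract's
    # O(eps^-4) samples-per-iteration bound concerns estimates
    # of this general kind.
    g = np.zeros_like(theta)
    for _ in range(n_samples):
        u = rng.normal(size=DIM)
        attack = attacker_best_response(theta + sigma * u, attack_type)
        g += defender_loss(theta + sigma * u, attack) * u / sigma
    return g / n_samples

theta = np.zeros(DIM)
for _ in range(200):  # outer loop: O(eps^-2) gradient iterations
    attack_type = rng.choice([0.5, 1.0, 2.0])  # prior over attack types
    theta -= 0.01 * grad_estimate(theta, attack_type)

print("meta-trained defense parameters:", np.round(theta, 3))
```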
Related papers
- Meta Stackelberg Game: Robust Federated Learning against Adaptive and Mixed Poisoning Attacks [15.199885837603576]
Federated learning (FL) is susceptible to a range of security threats.
We develop an efficient meta-learning approach to solve the underlying Stackelberg game, leading to a robust and adaptive FL defense.
arXiv Detail & Related papers (2024-10-22T21:08:28Z)
- Deep Adversarial Defense Against Multilevel-$\ell_p$ Attacks [5.604868766260297]
This paper introduces a computationally efficient multilevel $\ell_p$ defense, called the Efficient Robust Mode Connectivity (EMRC) method.
Similar to analytical continuation approaches used in continuous optimization, the method blends two $p$-specific adversarially optimal models.
We present experiments demonstrating that our approach performs better on various attacks as compared to AT-$\ell_\infty$, E-AT, and MSD.
arXiv Detail & Related papers (2024-07-12T13:30:00Z)
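To make the EMRC summary above concrete, here is a minimal sketch of blending two p-specific adversarially trained models in parameter space, in the spirit of mode connectivity. The models and the linear path are placeholders; EMRC's actual connecting curve and training procedure are in the paper.

```python
import torch
import torch.nn as nn

def make_model() -> nn.Module:
    return nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

model_linf = make_model()  # stand-in: model adversarially trained for l_inf
model_l2 = make_model()    # stand-in: model adversarially trained for l_2

def blended_model(t: float) -> nn.Module:
    """Model at interpolation coefficient t in [0, 1]."""
    blend = make_model()
    with torch.no_grad():
        for p, a, b in zip(blend.parameters(),
                           model_linf.parameters(),
                           model_l2.parameters()):
            p.copy_((1 - t) * a + t * b)  # convex combination of weights
    return blend

x = torch.randn(4, 32)
for t in (0.0, 0.5, 1.0):
    print(t, blended_model(t)(x).norm().item())
```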
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to imperceptible adversarial perturbations in high-level image classification and attack suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z) - Optimal Attack and Defense for Reinforcement Learning [11.36770403327493]
In adversarial RL, an external attacker has the power to manipulate the victim agent's interaction with the environment.
We formulate the attacker's problem of designing a stealthy attack that maximizes its own expected reward.
We argue that the optimal defense policy for the victim can be computed as the solution to a Stackelberg game.
arXiv Detail & Related papers (2023-11-30T21:21:47Z)
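The Stackelberg structure referenced in the entry above can be written as a bilevel problem. The notation here is ours, and this shows one common leader-follower arrangement (defender leading); the paper's exact roles and value functions may differ.

```latex
% Victim (leader) commits to a defense policy \pi; the attacker
% (follower) best-responds with an attack \alpha; the victim
% optimizes its value V against that best response.
\pi^\star \in \arg\max_{\pi} V\bigl(\pi,\, \alpha^\star(\pi)\bigr),
\qquad
\alpha^\star(\pi) \in \arg\max_{\alpha} U_{\mathrm{atk}}(\pi, \alpha).
```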
- MultiRobustBench: Benchmarking Robustness Against Multiple Attacks [86.70417016955459]
We present the first unified framework for considering multiple attacks against machine learning (ML) models.
Our framework is able to model different levels of the learner's knowledge about the test-time adversary.
We evaluate the performance of 16 defended models for robustness against a set of 9 different attack types.
arXiv Detail & Related papers (2023-02-21T20:26:39Z)
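As a toy illustration of the kind of defenses-by-attacks robustness table such a benchmark produces (16 models by 9 attacks, per the summary above), with placeholder models and attacks:

```python
import numpy as np

rng = np.random.default_rng(1)
N_MODELS, N_ATTACKS = 16, 9  # sizes quoted in the summary above

def robust_accuracy(model_id: int, attack_id: int) -> float:
    # Placeholder for "run attack `attack_id` on model `model_id`
    # over a test set and measure accuracy under attack".
    return float(rng.uniform(0.2, 0.8))

table = np.array([[robust_accuracy(m, a) for a in range(N_ATTACKS)]
                  for m in range(N_MODELS)])

# Worst-case robustness per model: a common headline metric when
# ranking defenses against multiple attacks.
print("worst-case accuracy per model:", table.min(axis=1).round(2))
```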
- Sampling Attacks on Meta Reinforcement Learning: A Minimax Formulation and Complexity Analysis [20.11993437283895]
This paper provides a game-theoretical underpinning for understanding this type of security risk.
We define the sampling attack model as a Stackelberg game between the attacker and the agent, which yields a minimax formulation.
We observe that a minor effort of the attacker can significantly deteriorate the learning performance.
arXiv Detail & Related papers (2022-07-29T21:29:29Z)
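In generic notation of our own (symbols not taken from the paper), the minimax shape of such a sampling-attack Stackelberg game is:

```latex
% Attacker (leader) corrupts sampled trajectories \tau within a
% budget \epsilon; the meta-RL agent (follower) adapts its
% parameters \theta to maximize its return J.
\min_{\delta \,:\, \|\delta\| \le \epsilon} \;
\max_{\theta} \;
\mathbb{E}_{\tau \sim p_{\delta}}\!\left[ J(\theta; \tau) \right]
```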
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study black-box adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA) to generate features under arbitrary attacking strengths.
Our method is trained to automatically align features across attacking strengths.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
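A minimal sketch of a feature-alignment term in the spirit of the AFA summary above, assuming an MSE penalty between clean and adversarial features; the paper's actual loss and architecture may differ.

```python
import torch
import torch.nn.functional as F

def alignment_loss(feat_clean: torch.Tensor,
                   feat_adv: torch.Tensor) -> torch.Tensor:
    # Stop gradients through the clean branch so only the adversarial
    # features are pulled toward the clean ones.
    return F.mse_loss(feat_adv, feat_clean.detach())

feat_clean = torch.randn(4, 128, requires_grad=True)
feat_adv = feat_clean + 0.1 * torch.randn(4, 128)  # stand-in adversarial features
print(alignment_loss(feat_clean, feat_adv).item())
```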
- Meta Federated Learning [57.52103907134841]
Federated Learning (FL) is vulnerable to training-time adversarial attacks.
We propose Meta Federated Learning (Meta-FL), which is not only compatible with secure aggregation protocols but also facilitates defense against backdoor attacks.
arXiv Detail & Related papers (2021-02-10T16:48:32Z) - Composite Adversarial Attacks [57.293211764569996]
Adversarial attacks are techniques for deceiving machine learning (ML) models.
In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching for the best combination of attack algorithms.
CAA beats 10 top attackers on 11 diverse defenses with less elapsed time.
arXiv Detail & Related papers (2020-12-10T03:21:16Z)
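To illustrate the search-over-compositions idea named above, here is an exhaustive toy sketch over ordered pairs of attacks. The attack names and the scoring stub are placeholders; the real CAA searches compositions (and their hyperparameters) with a learned search algorithm rather than enumeration.

```python
import itertools
import random

random.seed(0)
ATTACKS = ["fgsm", "pgd_linf", "pgd_l2", "spatial", "color_jitter"]

def success_rate(sequence) -> float:
    # Placeholder for "apply the attacks in `sequence` back-to-back
    # against a defended model and measure attack success rate".
    return random.random()

best_seq, best_score = None, -1.0
for seq in itertools.permutations(ATTACKS, 2):  # all ordered pairs
    score = success_rate(seq)
    if score > best_score:
        best_seq, best_score = seq, score

print("best composite attack:", best_seq, "score:", round(best_score, 3))
```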
- Multi-agent Reinforcement Learning in Bayesian Stackelberg Markov Games for Adaptive Moving Target Defense [22.760124873882184]
We argue that existing models are inadequate in sequential settings when there is incomplete information about a rational adversary.
We propose a unifying game-theoretic model, called Bayesian Stackelberg Markov Games (BSMGs).
We show that our learning approach converges to a Strong Stackelberg Equilibrium (SSE) of a BSMG and highlight that the learned movement policy improves the state of the art in MTD for web-application security.
arXiv Detail & Related papers (2020-07-20T20:34:53Z)