A Game-theoretic Framework for Privacy-preserving Federated Learning
- URL: http://arxiv.org/abs/2304.05836v3
- Date: Wed, 28 Feb 2024 14:08:09 GMT
- Title: A Game-theoretic Framework for Privacy-preserving Federated Learning
- Authors: Xiaojin Zhang, Lixin Fan, Siwei Wang, Wenjie Li, Kai Chen, Qiang Yang
- Abstract summary: We propose the first game-theoretic framework that considers both defenders and attackers in terms of their respective payoffs.
We name this game the federated learning privacy game (FLPG), in which neither defenders nor attackers are aware of all participants' payoffs.
- Score: 46.479165992905166
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In federated learning, benign participants aim to optimize a global model
collaboratively. However, the risk of privacy leakage cannot be
ignored in the presence of semi-honest adversaries. Existing research
has focused either on designing protection mechanisms or on inventing attacking
mechanisms. While the battle between defenders and attackers seems
never-ending, we are concerned with one critical question: is it possible to
prevent potential attacks in advance? To address this, we propose the first
game-theoretic framework that considers both FL defenders and attackers in
terms of their respective payoffs, which include computational costs, FL model
utilities, and privacy leakage risks. We name this game the federated learning
privacy game (FLPG), in which neither defenders nor attackers are aware of all
participants' payoffs.
To handle the incomplete information inherent in this situation, we
propose associating the FLPG with an oracle that has two primary
responsibilities. First, the oracle provides lower and upper bounds of the
payoffs for the players. Second, the oracle acts as a correlation device,
privately providing suggested actions to each player. With this novel
framework, we analyze the optimal strategies of defenders and attackers.
Furthermore, we derive and demonstrate conditions under which the attacker, as
a rational decision-maker, should always follow the oracle's suggestion
not to attack.
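The oracle's role as a correlation device can be sketched with a minimal incentive check (a hypothetical illustration, not the paper's actual formulation: the function name, bound pairs, and numbers below are all assumptions). A rational attacker follows the suggestion not to attack whenever even the best-case payoff of attacking, per the oracle's bounds, cannot exceed the worst-case payoff of complying.

```python
# Hypothetical sketch: the oracle reports (lower, upper) payoff bounds, and the
# attacker follows the suggestion "not to attack" when even the *best case*
# payoff of attacking cannot beat the *worst case* payoff of complying.

def attacker_follows_suggestion(attack_payoff_bounds, no_attack_payoff_bounds):
    """Return True if a rational attacker should follow 'not to attack'.

    Each argument is a (lower, upper) bound pair supplied by the oracle.
    A conservative sufficient condition: the upper bound of attacking is
    no greater than the lower bound of not attacking.
    """
    _, attack_upper = attack_payoff_bounds
    no_attack_lower, _ = no_attack_payoff_bounds
    return attack_upper <= no_attack_lower

# Illustrative numbers: attacking carries computational cost and leakage risk,
# so its payoff bounds can sit below those of staying benign.
print(attacker_follows_suggestion((-2.0, 0.5), (1.0, 3.0)))  # True
print(attacker_follows_suggestion((-2.0, 2.5), (1.0, 3.0)))  # False
```

With overlapping bounds the check is inconclusive, which is exactly where the oracle's second role (tightening the reported bounds) matters.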
Related papers
- Non-Cooperative Backdoor Attacks in Federated Learning: A New Threat Landscape [7.00762739959285]
Federated Learning (FL) for privacy-preserving model training remains susceptible to backdoor attacks.
This research emphasizes the critical need for robust defenses against diverse backdoor attacks in the evolving FL landscape.
arXiv Detail & Related papers (2024-07-05T22:03:13Z)
- Toward Optimal LLM Alignments Using Two-Player Games [86.39338084862324]
In this paper, we investigate alignment through the lens of two-agent games, involving iterative interactions between an adversarial and a defensive agent.
We theoretically demonstrate that this iterative reinforcement learning optimization converges to a Nash Equilibrium for the game induced by the agents.
Experimental results in safety scenarios demonstrate that learning in such a competitive environment not only fully trains agents but also leads to policies with enhanced generalization capabilities for both adversarial and defensive agents.
arXiv Detail & Related papers (2024-06-16T15:24:50Z)
- Planning for Attacker Entrapment in Adversarial Settings [16.085007590604327]
We propose a framework to generate a defense strategy against an attacker who is working in an environment where a defender can operate without the attacker's knowledge.
Our problem formulation allows us to capture it as a much simpler infinite horizon discounted MDP, in which the optimal policy for the MDP gives the defender's strategy against the actions of the attacker.
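An infinite-horizon discounted MDP of the kind this formulation reduces to can be solved with standard value iteration. The sketch below uses a made-up two-state "entrapment" toy: the states, actions, transition probabilities, and rewards are illustrative assumptions, not the paper's model.

```python
# Minimal value-iteration sketch for an infinite-horizon discounted MDP.
# All states, actions, and rewards here are invented for illustration.

def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-8):
    """P[s][a] -> list of (prob, next_state); R[s][a] -> immediate reward."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

states = ["free", "trapped"]
actions = ["wait", "lure"]
P = {
    "free":    {"wait": [(1.0, "free")],
                "lure": [(0.6, "trapped"), (0.4, "free")]},
    "trapped": {"wait": [(1.0, "trapped")],
                "lure": [(1.0, "trapped")]},
}
R = {
    "free":    {"wait": 0.0, "lure": -0.1},  # luring the attacker has a small cost
    "trapped": {"wait": 1.0, "lure": 1.0},   # reward once the attacker is trapped
}
V = value_iteration(states, actions, P, R)
```

The greedy policy with respect to the converged `V` plays the role of the defender's strategy in such a reduction.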
arXiv Detail & Related papers (2023-03-01T21:08:27Z)
- Game Theoretic Mixed Experts for Combinational Adversarial Machine Learning [10.368343314144553]
We provide a game-theoretic framework for ensemble adversarial attacks and defenses.
We propose three new attack algorithms, specifically designed to target defenses with randomized transformations, multi-model voting schemes, and adversarial detector architectures.
arXiv Detail & Related papers (2022-11-26T21:35:01Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Game Theory for Adversarial Attacks and Defenses [0.0]
Adversarial attacks can generate adversarial inputs by applying small but intentionally worst-case perturbations to samples from the dataset.
Some adversarial defense techniques are developed to improve the security and robustness of the models and avoid them being attacked.
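The "small but worst-case perturbation" idea can be illustrated with an FGSM-style step on a toy linear scorer. This is a hedged sketch: the model, hinge-style loss, function names, and numbers are assumptions for illustration, not any paper's method.

```python
# FGSM-style perturbation on a toy linear model f(x) = w.x + b.
# For a hinge-style loss on label y in {-1, +1}, the loss gradient w.r.t.
# x_i is -y * w_i (when the margin is active), so the worst-case L-infinity
# step of size eps moves each coordinate by eps * sign(-y * w_i).

def linear_score(w, x, b=0.0):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm_step(w, x, y, eps):
    """Perturb x by eps in the sign of the loss gradient (illustrative)."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * (-y) * sign(wi) for wi, xi in zip(w, x)]

w, x = [1.0, -2.0], [0.5, 0.5]
adv = fgsm_step(w, x, y=1, eps=0.1)  # each coordinate moves by at most 0.1
# The perturbation lowers the score for the true label y = +1.
```

Defenses such as adversarial training then fold examples like `adv` back into the training set to harden the model.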
arXiv Detail & Related papers (2021-10-08T07:38:33Z)
- Meta Federated Learning [57.52103907134841]
Federated Learning (FL) is vulnerable to training time adversarial attacks.
We propose Meta Federated Learning (Meta-FL), which is not only compatible with the secure aggregation protocol but also facilitates defense against backdoor attacks.
arXiv Detail & Related papers (2021-02-10T16:48:32Z)
- Adversarial Example Games [51.92698856933169]
Adversarial Example Games (AEG) is a framework that models the crafting of adversarial examples.
AEG provides a new way to design adversarial examples by adversarially training a generator and an adversary from a given hypothesis class.
We demonstrate the efficacy of AEG on the MNIST and CIFAR-10 datasets.
arXiv Detail & Related papers (2020-07-01T19:47:23Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.