Enhancing Privacy of Spatiotemporal Federated Learning against Gradient Inversion Attacks
- URL: http://arxiv.org/abs/2407.08529v3
- Date: Mon, 15 Jul 2024 06:42:31 GMT
- Title: Enhancing Privacy of Spatiotemporal Federated Learning against Gradient Inversion Attacks
- Authors: Lele Zheng, Yang Cao, Renhe Jiang, Kenjiro Taura, Yulong Shen, Sheng Li, Masatoshi Yoshikawa,
- Abstract summary: We propose the Spatiotemporal Gradient Inversion Attack (ST-GIA), a gradient attack algorithm tailored to spatiotemporal data.
We design an adaptive defense strategy to mitigate gradient inversion attacks in spatiotemporal federated learning.
We reveal that the proposed defense strategy can well preserve the utility of spatiotemporal federated learning with effective security protection.
- Score: 30.785476975412482
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spatiotemporal federated learning has recently attracted intensive study due to its ability to train valuable models with only shared gradients in various location-based services. On the other hand, recent studies have shown that shared gradients may be subject to gradient inversion attacks (GIA) on images or texts. However, there has so far been no systematic study of gradient inversion attacks in spatiotemporal federated learning. In this paper, we explore the gradient attack problem in spatiotemporal federated learning from both attack and defense perspectives. To understand privacy risks in spatiotemporal federated learning, we first propose the Spatiotemporal Gradient Inversion Attack (ST-GIA), a gradient attack algorithm tailored to spatiotemporal data that successfully reconstructs the original location from gradients. Furthermore, we design an adaptive defense strategy to mitigate gradient inversion attacks in spatiotemporal federated learning. By dynamically adjusting the perturbation levels, we can offer tailored protection for varying rounds of training data, thereby achieving a better trade-off between privacy and utility than current state-of-the-art methods. Through intensive experimental analysis on three real-world datasets, we reveal that the proposed defense strategy preserves the utility of spatiotemporal federated learning while providing effective protection.
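The abstract describes both an attack (reconstructing a location from a shared gradient) and a defense (perturbation whose level is adjusted across rounds). As a rough illustration only, the following minimal PyTorch sketch shows the general idea; it is not the authors' ST-GIA algorithm or their adaptive defense. The toy model, the grid discretization, the gradient-matching loop, the `round_risk` signal, and every hyperparameter are assumptions made for this example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

GRID = 64  # hypothetical number of discretized location cells

class LocationModel(nn.Module):
    """Toy next-location predictor over a discretized grid (stand-in only)."""
    def __init__(self, grid=GRID, hidden=32):
        super().__init__()
        self.emb = nn.Linear(grid, hidden)   # one-hot location -> embedding
        self.out = nn.Linear(hidden, grid)   # predict the next location cell
    def forward(self, x):
        return self.out(torch.relu(self.emb(x)))

def client_gradient(model, loc_onehot, next_loc):
    """Gradient a client would share for one (location, next location) pair."""
    loss = F.cross_entropy(model(loc_onehot), next_loc)
    return torch.autograd.grad(loss, model.parameters())

def gradient_matching_attack(model, target_grads, steps=300, lr=0.1):
    """DLG-style reconstruction: optimize a dummy location so its gradients
    match the observed ones, then read off the most likely grid cell."""
    dummy_x = torch.randn(1, GRID, requires_grad=True)
    dummy_y = torch.randn(1, GRID, requires_grad=True)
    opt = torch.optim.Adam([dummy_x, dummy_y], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x = F.softmax(dummy_x, dim=-1)            # relaxed one-hot location
        y = F.softmax(dummy_y, dim=-1)            # relaxed label distribution
        logits = model(x)
        loss = -(y * F.log_softmax(logits, dim=-1)).sum()
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        match = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
        match.backward()
        opt.step()
    return F.softmax(dummy_x, dim=-1).argmax().item()

def adaptive_perturbation(grads, round_risk, base_sigma=0.01):
    """Gaussian noise whose scale grows with a per-round risk estimate in [0, 1];
    how that risk is estimated is not modeled here."""
    sigma = base_sigma * (1.0 + round_risk)
    return [g + sigma * torch.randn_like(g) for g in grads]

if __name__ == "__main__":
    torch.manual_seed(0)
    model = LocationModel()
    true_loc = F.one_hot(torch.tensor([17]), GRID).float()   # visited cell 17
    next_loc = torch.tensor([18])
    grads = client_gradient(model, true_loc, next_loc)
    print("recovered cell, no defense:", gradient_matching_attack(model, grads))
    noisy = adaptive_perturbation(list(grads), round_risk=0.8)
    print("recovered cell, noisy grads:", gradient_matching_attack(model, noisy))
```

In the paper's setting the perturbation is calibrated over training rounds to trade privacy against utility; the fixed `round_risk` value above merely stands in for whatever signal drives that calibration.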
Related papers
- FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses [50.921333548391345]
Federated Learning is a privacy-preserving decentralized machine learning paradigm.
Recent research has revealed that private ground truth data can be recovered through a gradient technique known as Deep Leakage.
This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses.
arXiv Detail & Related papers (2024-11-05T11:42:26Z) - Extracting Spatiotemporal Data from Gradients with Large Language Models [30.785476975412482]
Recent attacks that reconstruct data from shared gradients break the key privacy promise of federated learning.
We propose an adaptive defense strategy to mitigate attacks in federated learning.
We show that the proposed defense strategy can well preserve the utility of spatiotemporal federated learning with effective security protection.
arXiv Detail & Related papers (2024-10-21T15:48:34Z) - Gradients Stand-in for Defending Deep Leakage in Federated Learning [0.0]
This study introduces a novel and effective method for safeguarding against gradient leakage, named AdaDefense.
This proposed approach not only effectively prevents gradient leakage, but also ensures that the overall performance of the model remains largely unaffected.
arXiv Detail & Related papers (2024-10-11T11:44:13Z) - Normalization and effective learning rates in reinforcement learning [52.59508428613934]
Normalization layers have recently experienced a renaissance in the deep reinforcement learning and continual learning literature.
We show that normalization brings with it a subtle but important side effect: an equivalence between growth in the norm of the network parameters and decay in the effective learning rate.
We propose to make the learning rate schedule explicit with a simple reparameterization which we call Normalize-and-Project.
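A minimal sketch of the Normalize-and-Project idea described above, assuming (this is not stated in the summary) that the projection simply rescales each weight matrix back to a fixed norm after every optimizer step; the layer choice, target norm, and training loop are illustrative guesses, not the paper's recipe.

```python
import torch
import torch.nn as nn

# Toy model with a normalization layer, as in the setting the paper studies.
model = nn.Sequential(nn.Linear(8, 32), nn.LayerNorm(32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

# Record the initial norm of each weight matrix as the projection target.
target_norms = {n: p.detach().norm() for n, p in model.named_parameters() if p.dim() > 1}

def project_weights(model, target_norms):
    """Rescale each weight matrix back to its initial norm after the update."""
    with torch.no_grad():
        for n, p in model.named_parameters():
            if n in target_norms:
                p.mul_(target_norms[n] / (p.norm() + 1e-12))

for step in range(100):
    x, y = torch.randn(16, 8), torch.randn(16, 1)
    loss = ((model(x) - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    project_weights(model, target_norms)  # keep the effective learning rate explicit
```

Because the weight norm can no longer grow, any decay in the effective learning rate must come from an explicit schedule rather than from parameter growth, which is the equivalence the summary points out.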
arXiv Detail & Related papers (2024-07-01T20:58:01Z) - GI-SMN: Gradient Inversion Attack against Federated Learning without Prior Knowledge [4.839514405631815]
Federated learning (FL) has emerged as a privacy-preserving machine learning approach.
Gradient inversion attacks can exploit the gradients of FL to recreate the original user data.
We propose a novel Gradient Inversion attack based on Style Migration Network (GI-SMN).
arXiv Detail & Related papers (2024-05-06T14:29:24Z) - A Theoretical Insight into Attack and Defense of Gradient Leakage in Transformer [11.770915202449517]
The Deep Leakage from Gradient (DLG) attack has emerged as a prevalent and highly effective method for extracting sensitive training data by inspecting exchanged gradients.
This research presents a comprehensive analysis of the gradient leakage method when applied specifically to transformer-based models.
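For reference, the canonical DLG objective optimizes a dummy sample and label so that the gradient they induce matches the observed one; in standard notation (assumed here, not taken from this summary):

$$
(x'^{*},\, y'^{*}) \;=\; \arg\min_{x',\, y'}
\left\lVert \frac{\partial \ell\big(F(x';\, W),\, y'\big)}{\partial W} \;-\; \nabla W \right\rVert^{2}
$$

where $\nabla W$ is the gradient shared during training and $F(\cdot\,; W)$ is the model; the paper analyzes this attack specifically for transformer-based models.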
arXiv Detail & Related papers (2023-11-22T09:58:01Z) - GIFD: A Generative Gradient Inversion Method with Feature Domain Optimization [52.55628139825667]
Federated Learning (FL) has emerged as a promising distributed machine learning framework to preserve clients' privacy.
Recent studies find that an attacker can invert the shared gradients and recover sensitive data against an FL system by leveraging pre-trained generative adversarial networks (GAN) as prior knowledge.
We propose Gradient Inversion over Feature Domains (GIFD), which disassembles the GAN model and searches the feature domains of the intermediate layers.
arXiv Detail & Related papers (2023-08-09T04:34:21Z) - Learning to Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning [31.374376311614675]
Gradient inversion attacks enable recovery of training samples from model gradients in federated learning.
We show that existing defenses can be broken by a simple adaptive attack.
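At a high level, this adaptive attack can be read as a learned inversion: rather than optimizing a dummy input for each gradient, the attacker trains a model on auxiliary data to map observed gradients back to inputs, applying the target defense during training so the attack adapts to it. A minimal sketch under those assumptions follows; the tiny victim model, the Gaussian-noise defense, and all hyperparameters are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

victim = nn.Linear(16, 4)                       # tiny stand-in for the FL model
grad_dim = sum(p.numel() for p in victim.parameters())
inverter = nn.Sequential(nn.Linear(grad_dim, 128), nn.ReLU(), nn.Linear(128, 16))
opt = torch.optim.Adam(inverter.parameters(), lr=1e-3)

def observed_gradient(x, y, sigma=0.05):
    """Gradient the server would see for one sample, with a noise defense applied."""
    loss = F.cross_entropy(victim(x), y)
    grads = torch.autograd.grad(loss, victim.parameters())
    flat = torch.cat([g.flatten() for g in grads])
    return flat + sigma * torch.randn_like(flat)

for step in range(500):                          # train on auxiliary samples
    x = torch.randn(1, 16)
    y = torch.randint(0, 4, (1,))
    g = observed_gradient(x, y).unsqueeze(0)
    recon = inverter(g)
    loss = F.mse_loss(recon, x)                  # learn the gradients -> input mapping
    opt.zero_grad()
    loss.backward()
    opt.step()
```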
arXiv Detail & Related papers (2022-10-19T20:41:30Z) - Defense Against Gradient Leakage Attacks via Learning to Obscure Data [48.67836599050032]
Federated learning is considered an effective privacy-preserving learning mechanism.
In this paper, we propose a new defense method to protect the privacy of clients' data by learning to obscure data.
arXiv Detail & Related papers (2022-06-01T21:03:28Z) - Projective Ranking-based GNN Evasion Attacks [52.85890533994233]
Graph neural networks (GNNs) offer promising learning methods for graph-related tasks.
GNNs are at risk of adversarial attacks.
arXiv Detail & Related papers (2022-02-25T21:52:09Z) - Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)