A Survey on Gradient Inversion: Attacks, Defenses and Future Directions
- URL: http://arxiv.org/abs/2206.07284v1
- Date: Wed, 15 Jun 2022 03:52:51 GMT
- Title: A Survey on Gradient Inversion: Attacks, Defenses and Future Directions
- Authors: Rui Zhang, Song Guo, Junxiao Wang, Xin Xie, Dacheng Tao
- Abstract summary: We present a comprehensive survey on GradInv, aiming to summarize the cutting-edge research and broaden the horizons for different domains.
First, we propose a taxonomy of GradInv attacks by characterizing existing attacks into two paradigms: iteration-based and recursion-based attacks.
Second, we summarize emerging defense strategies against GradInv attacks. We find that these approaches focus on three perspectives: data obscuration, model improvement and gradient protection.
- Score: 81.46745643749513
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies have shown that training samples can be recovered from
gradients; such attacks are known as Gradient Inversion (GradInv) attacks.
However, there remains a lack of extensive surveys covering recent advances and
providing a thorough analysis of this issue. In this paper, we present a
comprehensive survey on GradInv, aiming to summarize the cutting-edge research
and broaden the horizons for different domains. First, we propose a taxonomy of
GradInv attacks by characterizing existing attacks into two paradigms:
iteration-based and recursion-based attacks. In particular, we identify some
critical ingredients of iteration-based attacks, including data initialization,
model training and gradient matching. Second, we summarize emerging defense
strategies against GradInv attacks. We find that these approaches focus on
three perspectives: data obscuration, model improvement and gradient
protection. Finally, we discuss some promising directions and open problems
for further research.
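To make the iteration-based paradigm above concrete, below is a minimal sketch of a DLG-style gradient-matching attack in PyTorch. The victim model, data shapes, soft-label handling, optimizer and iteration count are illustrative assumptions, not a reproduction of any particular method discussed in the survey.

```python
import torch
import torch.nn as nn

# Hypothetical victim model and input shape (assumptions for illustration only).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
criterion = nn.CrossEntropyLoss()

# Gradient the attacker observes (e.g., a client update shared in federated learning).
x_true = torch.randn(1, 3, 32, 32)
y_true = torch.tensor([3])
true_grads = torch.autograd.grad(criterion(model(x_true), y_true),
                                 list(model.parameters()))

# Iteration-based GradInv: initialize dummy data/labels, then optimize them so
# that the gradient they induce matches the observed gradient (gradient matching).
x_dummy = torch.randn(1, 3, 32, 32, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)  # soft label, optimized jointly
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

for _ in range(100):
    def closure():
        optimizer.zero_grad()
        # Cross-entropy against the soft (optimized) label.
        loss = -(y_dummy.softmax(-1)
                 * torch.log_softmax(model(x_dummy), dim=-1)).sum()
        dummy_grads = torch.autograd.grad(loss, list(model.parameters()),
                                          create_graph=True)
        # L2 gradient-matching objective; cosine distance is a common variant.
        grad_diff = sum(((dg - tg) ** 2).sum()
                        for dg, tg in zip(dummy_grads, true_grads))
        grad_diff.backward()
        return grad_diff

    optimizer.step(closure)

# x_dummy should now approximate the private sample x_true.
```

Concrete attacks differ mainly in the ingredients highlighted above: how the dummy data and labels are initialized, the state of model training when gradients are captured, and which distance function is used for gradient matching.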
Related papers
- Model Inversion Attacks: A Survey of Approaches and Countermeasures [59.986922963781]
Recently, a new type of privacy attack, the model inversion attack (MIA), has emerged, aiming to extract sensitive features of the private data used for training.
Despite their significance, there is a lack of systematic studies providing a comprehensive overview of and deeper insights into MIAs.
This survey aims to summarize up-to-date MIA methods for both attacks and defenses.
arXiv Detail & Related papers (2024-11-15T08:09:28Z) - Dealing Doubt: Unveiling Threat Models in Gradient Inversion Attacks under Federated Learning, A Survey and Taxonomy [10.962424750173332]
Federated Learning (FL) has emerged as a leading paradigm for decentralized, privacy-preserving machine learning training.
Recent research on gradient inversion attacks (GIAs) has shown that gradient updates in FL can leak information about private training samples.
We present a survey and a novel taxonomy of GIAs that emphasize FL threat models, particularly those of malicious servers and clients.
arXiv Detail & Related papers (2024-05-16T18:15:38Z) - A Survey on Vulnerability of Federated Learning: A Learning Algorithm
Perspective [8.941193384980147]
We focus on threat models targeting the learning process of FL systems.
Defense strategies have evolved from using a singular metric to excluding malicious clients.
Recent endeavors subtly alter the least significant weights in local models to bypass defense measures.
arXiv Detail & Related papers (2023-11-27T18:32:08Z) - A Theoretical Insight into Attack and Defense of Gradient Leakage in
Transformer [11.770915202449517]
The Deep Leakage from Gradients (DLG) attack has emerged as a prevalent and highly effective method for extracting sensitive training data by inspecting exchanged gradients.
This research presents a comprehensive analysis of the gradient leakage method when applied specifically to transformer-based models.
arXiv Detail & Related papers (2023-11-22T09:58:01Z) - Adversarial Attacks and Defenses on 3D Point Cloud Classification: A
Survey [28.21038594191455]
Despite remarkable achievements, deep learning algorithms are vulnerable to adversarial attacks.
This paper first introduces the principles and characteristics of adversarial attacks and summarizes and analyzes adversarial example generation methods.
It also provides an overview of defense strategies, organized into data-focused and model-focused methods.
arXiv Detail & Related papers (2023-07-01T11:46:36Z) - Are Defenses for Graph Neural Networks Robust? [72.1389952286628]
We show that most Graph Neural Network (GNN) defenses provide no or only marginal improvement compared to an undefended baseline.
We advocate using custom adaptive attacks as a gold standard and we outline the lessons we learned from successfully designing such attacks.
Our diverse collection of perturbed graphs forms a (black-box) unit test offering a first glance at a model's robustness.
arXiv Detail & Related papers (2023-01-31T15:11:48Z) - Adversarial Attacks on Black Box Video Classifiers: Leveraging the Power
of Geometric Transformations [49.06194223213629]
Black-box adversarial attacks against video classification models have been largely understudied.
In this work, we demonstrate that effective gradients for such attacks can be searched for by parameterizing the temporal structure of the search space.
Our algorithm inherently leads to successful perturbations with surprisingly few queries.
arXiv Detail & Related papers (2021-10-05T05:05:59Z) - Byzantine-robust Federated Learning through Collaborative Malicious
Gradient Filtering [32.904425716385575]
We show that the element-wise sign of the gradient vector can provide valuable insight for detecting model poisoning attacks (a simplified sketch of this idea appears after this list).
We propose a novel approach called SignGuard to enable Byzantine-robust federated learning through collaborative malicious gradient filtering.
arXiv Detail & Related papers (2021-09-13T11:15:15Z) - Imbalanced Gradients: A Subtle Cause of Overestimated Adversarial
Robustness [75.30116479840619]
In this paper, we identify a more subtle situation called Imbalanced Gradients that can also cause overestimated adversarial robustness.
The phenomenon of imbalanced gradients occurs when the gradient of one term of the margin loss dominates and pushes the attack towards a suboptimal direction.
We propose a Margin Decomposition (MD) attack that decomposes a margin loss into individual terms and then explores the attackability of these terms separately.
arXiv Detail & Related papers (2020-06-24T13:41:37Z) - Reliable evaluation of adversarial robustness with an ensemble of
diverse parameter-free attacks [65.20660287833537]
In this paper we propose two extensions of the PGD attack that overcome failures due to a suboptimal step size and problems with the objective function.
We then combine our novel attacks with two complementary existing ones to form a parameter-free, computationally affordable and user-independent ensemble of attacks to test adversarial robustness.
arXiv Detail & Related papers (2020-03-03T18:15:55Z)
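The SignGuard entry above mentions using the element-wise sign of gradient updates to spot poisoned contributions. Below is a simplified, hypothetical sketch of that idea; the statistics, threshold and toy data are assumptions for illustration, not the paper's actual algorithm.

```python
import torch

def sign_statistics(grad: torch.Tensor) -> torch.Tensor:
    """Fractions of positive, zero and negative entries in a flattened update."""
    flat = grad.flatten()
    n = flat.numel()
    return torch.tensor([(flat > 0).sum() / n,
                         (flat == 0).sum() / n,
                         (flat < 0).sum() / n])

def filter_clients(client_grads, tol=0.15):
    """Keep clients whose sign statistics stay close to the median statistics.

    A stand-in for sign-based filtering: practical schemes also combine
    norm checks and clustering before aggregating the surviving updates.
    """
    stats = torch.stack([sign_statistics(g) for g in client_grads])
    median = stats.median(dim=0).values
    keep = [bool((s - median).abs().max() <= tol) for s in stats]
    return [g for g, k in zip(client_grads, keep) if k]

# Toy usage: nine benign updates plus one sign-flipped (poisoned) update.
benign = [torch.randn(1000) + 0.5 for _ in range(9)]
poisoned = -(torch.randn(1000) + 0.5)
kept = filter_clients(benign + [poisoned])
aggregate = torch.stack(kept).mean(dim=0)  # aggregate only the surviving updates
```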