Enhanced Privacy Leakage from Noise-Perturbed Gradients via Gradient-Guided Conditional Diffusion Models
- URL: http://arxiv.org/abs/2511.10423v2
- Date: Sun, 16 Nov 2025 12:37:26 GMT
- Title: Enhanced Privacy Leakage from Noise-Perturbed Gradients via Gradient-Guided Conditional Diffusion Models
- Authors: Jiayang Meng, Tao Huang, Hong Chen, Chen Hou, Guolong Zheng
- Abstract summary: Federated learning synchronizes models through gradient transmission and aggregation. These gradients pose significant privacy risks, as sensitive training data is embedded within them. Existing gradient inversion attacks suffer from significantly degraded reconstruction performance when gradients are perturbed by noise.
- Score: 26.493235454865538
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning synchronizes models through gradient transmission and aggregation. However, these gradients pose significant privacy risks, as sensitive training data is embedded within them. Existing gradient inversion attacks suffer from significantly degraded reconstruction performance when gradients are perturbed by noise, a common defense mechanism. In this paper, we introduce gradient-guided conditional diffusion models for reconstructing private images from leaked gradients, without prior knowledge of the target data distribution. Our approach leverages the inherent denoising capability of diffusion models to circumvent the partial protection offered by noise perturbation, thereby improving attack performance under such defenses. We further provide a theoretical analysis of the reconstruction error bounds and the convergence properties of the attack loss, characterizing the impact of key factors, such as noise magnitude and attacked model architecture, on reconstruction quality. Extensive experiments demonstrate our attack's superior reconstruction performance with Gaussian noise-perturbed gradients, and confirm our theoretical findings.
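The gradient-guided reconstruction idea described in the abstract can be sketched on a toy one-layer model. Everything below is an illustrative assumption rather than the paper's implementation: the linear model, the shrink-toward-zero term standing in for the diffusion model's learned prior, and all parameter values are hypothetical.

```python
import random

# Toy sketch of gradient-guided reconstruction: the attacker observes a
# noise-perturbed training gradient of a linear model f(w, x) = w.x with
# loss 0.5 * (w.x - y)^2, and iteratively refines a candidate input x by
# descending a gradient-matching loss ||g(x) - g_obs||^2. A mild shrink
# toward zero stands in for the diffusion model's learned denoising prior.

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def model_grad(w, x, y):
    # d/dw [0.5 * (w.x - y)^2] = (w.x - y) * x
    e = dot(w, x) - y
    return [e * xi for xi in x]

def match_loss(w, x, y, g_obs):
    return sum((gi - oi) ** 2 for gi, oi in zip(model_grad(w, x, y), g_obs))

def match_grad(w, x, y, g_obs):
    # analytic gradient of the matching loss w.r.t. x:
    # with e = w.x - y and r_j = e * x_j - g_obs_j,
    # dL/dx_i = 2 * (w_i * sum_j r_j x_j + e * r_i)
    e = dot(w, x) - y
    r = [e * xj - oj for xj, oj in zip(x, g_obs)]
    s = dot(r, x)
    return [2.0 * (wi * s + e * ri) for wi, ri in zip(w, r)]

random.seed(0)
w = [0.7, -0.3, 0.5]          # known (shared) model weights
x_true = [1.0, 2.0, -1.0]     # private input the attacker never sees
y = 0.0
sigma = 0.05                  # magnitude of the Gaussian noise defense
g_obs = [g + random.gauss(0.0, sigma) for g in model_grad(w, x_true, y)]

x = [random.gauss(0.0, 1.0) for _ in range(3)]  # start from pure noise
loss_init = match_loss(w, x, y, g_obs)
for _ in range(3000):
    g = match_grad(w, x, y, g_obs)
    # clipped guidance step plus the shrink term playing the prior's role
    x = [xi - max(-0.2, min(0.2, 0.02 * gi)) - 0.001 * xi
         for xi, gi in zip(x, g)]
loss_final = match_loss(w, x, y, g_obs)
print(loss_final < loss_init)
```

Note a quirk of this toy setup: with y = 0 the gradient satisfies g(-x) = g(x), so the loop may converge to -x_true instead of x_true; gradient matching alone only pins the input down up to that ambiguity, which is one reason a learned prior helps in the full method.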
Related papers
- Perception-based Image Denoising via Generative Compression [5.85669274676101]
Image denoising aims to remove noise while preserving structural details and perceptual realism. Distortion-driven methods often produce over-smoothed reconstructions, especially under strong noise and distribution shift. This paper proposes a generative compression framework for perception-based denoising.
arXiv Detail & Related papers (2026-02-12T04:21:26Z) - Generative Model Inversion Through the Lens of the Manifold Hypothesis [98.37040155914595]
Model inversion attacks (MIAs) aim to reconstruct class-representative samples from trained models. Recent generative MIAs utilize generative adversarial networks to learn image priors that guide the inversion process.
arXiv Detail & Related papers (2025-09-24T14:39:25Z) - Active Adversarial Noise Suppression for Image Forgery Localization [56.98050814363447]
We introduce an Adversarial Noise Suppression Module (ANSM) that generates a defensive perturbation to suppress the attack effect of adversarial noise. To the best of our knowledge, this is the first report of adversarial defense in image forgery localization tasks.
arXiv Detail & Related papers (2025-06-15T14:53:27Z) - Optimal Defenses Against Gradient Reconstruction Attacks [13.728704430883987]
Federated Learning (FL) is designed to prevent data leakage through collaborative model training without centralized data storage.
It remains vulnerable to gradient reconstruction attacks that recover original training data from shared gradients.
arXiv Detail & Related papers (2024-11-06T08:22:20Z) - Gradient-Guided Conditional Diffusion Models for Private Image Reconstruction: Analyzing Adversarial Impacts of Differential Privacy and Denoising [21.30726250408398]
Current gradient-based reconstruction methods struggle with high-resolution images due to computational complexity and prior knowledge requirements.
We propose two novel methods that require minimal modifications to the diffusion model's generation process and eliminate the need for prior knowledge.
We conduct a comprehensive theoretical analysis of the impact of differential privacy noise on the quality of reconstructed images, revealing the relationship among noise magnitude, the architecture of attacked models, and the attacker's reconstruction capability.
arXiv Detail & Related papers (2024-11-05T12:39:21Z) - Mjolnir: Breaking the Shield of Perturbation-Protected Gradients via Adaptive Diffusion [13.764770382623812]
We present the first attempt to break the shield of gradient perturbation protection in Federated Learning. We introduce Mjolnir, a perturbation-resilient gradient leakage attack. Mjolnir is capable of removing perturbations from gradients without requiring additional access to the original model structure or external data.
arXiv Detail & Related papers (2024-07-07T07:06:49Z) - A Theoretical Insight into Attack and Defense of Gradient Leakage in Transformer [11.770915202449517]
The Deep Leakage from Gradient (DLG) attack has emerged as a prevalent and highly effective method for extracting sensitive training data by inspecting exchanged gradients.
This research presents a comprehensive analysis of the gradient leakage method when applied specifically to transformer-based models.
arXiv Detail & Related papers (2023-11-22T09:58:01Z) - Reconstruction Distortion of Learned Image Compression with Imperceptible Perturbations [69.25683256447044]
We introduce an attack approach designed to effectively degrade the reconstruction quality of Learned Image Compression (LIC).
We generate adversarial examples by introducing a Frobenius norm-based loss function to maximize the discrepancy between original images and reconstructed adversarial examples.
Experiments conducted on the Kodak dataset using various LIC models demonstrate the attack's effectiveness.
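The Frobenius norm-based objective above can be illustrated with a projected sign-ascent loop on a toy linear "codec". The matrix A, the perturbation budget, and the step size below are illustrative assumptions standing in for a learned image codec, not the paper's model.

```python
# Toy PGD sketch of the Frobenius-norm attack: maximize the reconstruction
# discrepancy ||R(x + d) - x||_F^2 of a fixed linear "codec" R(x) = A x
# over a perturbation d constrained to an L-infinity ball of radius eps.

def matvec(A, v):
    return [sum(aij * vj for aij, vj in zip(row, v)) for row in A]

def fro_sq(u):
    # squared Frobenius norm of a flattened residual
    return sum(ui * ui for ui in u)

A = [[0.9, 0.1, 0.0],
     [0.0, 0.8, 0.1],
     [0.1, 0.0, 0.9]]    # toy lossy "codec" (slightly contractive)
x = [0.2, -0.5, 0.7]      # clean "image" flattened to a vector
eps, alpha, steps = 0.1, 0.02, 50

d = [0.0, 0.0, 0.0]
loss0 = fro_sq([ri - xi for ri, xi in zip(matvec(A, x), x)])
for _ in range(steps):
    r = matvec(A, [xi + di for xi, di in zip(x, d)])
    resid = [ri - xi for ri, xi in zip(r, x)]
    # gradient of ||A(x+d) - x||^2 w.r.t. d is 2 * A^T resid
    grad = [2.0 * sum(A[j][i] * resid[j] for j in range(3)) for i in range(3)]
    # signed ascent step, then projection back onto the L-inf ball
    d = [max(-eps, min(eps, di + alpha * (1.0 if gi >= 0 else -1.0)))
         for di, gi in zip(d, grad)]
r_adv = matvec(A, [xi + di for xi, di in zip(x, d)])
loss1 = fro_sq([ri - xi for ri, xi in zip(r_adv, x)])
print(loss1 > loss0)
```

Because the objective is convex in d, sign-ascent drives the perturbation to a corner of the L-infinity ball, which is where the reconstruction discrepancy is largest for this toy codec.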
arXiv Detail & Related papers (2023-06-01T20:21:05Z) - Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), have notable drawbacks: GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
arXiv Detail & Related papers (2023-04-22T15:32:59Z) - DiffusionAD: Norm-guided One-step Denoising Diffusion for Anomaly Detection [80.20339155618612]
DiffusionAD is a novel anomaly detection pipeline comprising a reconstruction sub-network and a segmentation sub-network. A rapid one-step denoising paradigm achieves hundreds of times acceleration while preserving comparable reconstruction quality. Considering the diversity in the manifestation of anomalies, we propose a norm-guided paradigm to integrate the benefits of multiple noise scales.
arXiv Detail & Related papers (2023-03-15T16:14:06Z) - DR2: Diffusion-based Robust Degradation Remover for Blind Face Restoration [66.01846902242355]
Blind face restoration usually synthesizes degraded low-quality data with a pre-defined degradation model for training.
It is expensive and infeasible to include every type of degradation to cover real-world cases in the training data.
We propose Robust Degradation Remover (DR2) to first transform the degraded image to a coarse but degradation-invariant prediction, then employ an enhancement module to restore the coarse prediction to a high-quality image.
arXiv Detail & Related papers (2023-03-13T06:05:18Z) - Combining Stochastic Defenses to Resist Gradient Inversion: An Ablation Study [6.766058964358335]
Common defense mechanisms such as Differential Privacy (DP) or Privacy Modules (PMs) introduce randomness during computation to prevent such attacks. This paper introduces several targeted GI attacks that leverage this principle to bypass common defense mechanisms.
arXiv Detail & Related papers (2022-08-09T13:23:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.