From Mean to Extreme: Formal Differential Privacy Bounds on the Success of Real-World Data Reconstruction Attacks
- URL: http://arxiv.org/abs/2402.12861v2
- Date: Mon, 29 Sep 2025 08:05:04 GMT
- Title: From Mean to Extreme: Formal Differential Privacy Bounds on the Success of Real-World Data Reconstruction Attacks
- Authors: Anneliese Riess, Kristian Schwethelm, Johannes Kaiser, Tamara T. Mueller, Julia A. Schnabel, Daniel Rueckert, Alexander Ziller
- Abstract summary: Differential Privacy in machine learning is often interpreted as guarantees against membership inference. However, translating DP budgets into quantitative protection against the more damaging threat of data reconstruction remains a challenging open problem. This paper bridges this critical gap by deriving the first formal privacy bounds tailored to the mechanics of demonstrated "from-scratch" attacks.
- Score: 54.25638567385662
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The gold standard for privacy in machine learning, Differential Privacy (DP), is often interpreted through its guarantees against membership inference. However, translating DP budgets into quantitative protection against the more damaging threat of data reconstruction remains a challenging open problem. Existing theoretical analyses of reconstruction risk are typically based on an "identification" threat model, where an adversary with a candidate set seeks a perfect match. When applied to the realistic threat of "from-scratch" attacks, these bounds can lead to an inefficient privacy-utility trade-off. This paper bridges this critical gap by deriving the first formal privacy bounds tailored to the mechanics of demonstrated Analytic Gradient Inversion Attacks (AGIAs). We first formalize the optimal from-scratch attack strategy for an adversary with no prior knowledge, showing it reduces to a mean estimation problem. We then derive closed-form, probabilistic bounds on this adversary's success, measured by Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR). Our empirical evaluation confirms these bounds remain tight even when the attack is concealed within large, complex network architectures. Our work provides a crucial second anchor for risk assessment. By establishing a tight, worst-case bound for the from-scratch threat model, we enable practitioners to assess a "risk corridor" bounded by the identification-based worst case on one side and our from-scratch worst case on the other. This allows for a more holistic, context-aware judgment of privacy risk, empowering practitioners to move beyond abstract budgets toward a principled reasoning framework for calibrating the privacy of their models.
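To make the abstract's two key quantities concrete, the following minimal sketch (not the paper's derivation) illustrates the "from-scratch attack as mean estimation" view and the MSE-to-PSNR conversion used to report reconstruction success. The batch size, clipping norm, noise multiplier, and [0, 1] data range are illustrative assumptions.

```python
# A minimal numerical sketch (not the paper's derivation) of the
# "from-scratch attack reduces to mean estimation" view and the
# MSE -> PSNR conversion. All parameters (batch size, noise multiplier,
# clipping norm, data range) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

d = 28 * 28          # flattened image dimension (assumed)
B = 32               # batch size (assumed)
C = 1.0              # DP-SGD clipping norm (assumed)
sigma = 1.0          # DP noise multiplier (assumed)

# Synthetic batch of records with pixel values in [0, 1].
batch = rng.uniform(0.0, 1.0, size=(B, d))
target = batch[0]

# An analytic gradient inversion attack on a noised, averaged gradient
# effectively hands the adversary a noisy batch mean; with no prior
# knowledge, that noisy mean is also the best guess for any single record.
noisy_mean = batch.mean(axis=0) + rng.normal(0.0, sigma * C / B, size=d)
reconstruction = np.clip(noisy_mean, 0.0, 1.0)

# Reconstruction error metrics, as used in the paper's evaluation.
mse = np.mean((reconstruction - target) ** 2)
max_val = 1.0                                   # data range assumed [0, 1]
psnr = 10.0 * np.log10(max_val ** 2 / mse)      # standard PSNR definition

print(f"empirical MSE : {mse:.4f}")
print(f"empirical PSNR: {psnr:.2f} dB")
```

Under these assumptions, the reconstruction error has two sources: the spread of the other records around the batch mean and the injected DP noise scaled by 1/B; the paper's closed-form bounds characterize this kind of error probabilistically.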
Related papers
- The Eminence in Shadow: Exploiting Feature Boundary Ambiguity for Robust Backdoor Attacks [51.468144272905135]
Deep neural networks (DNNs) underpin critical applications yet remain vulnerable to backdoor attacks. We provide a theoretical analysis targeting backdoor attacks, focusing on how sparse decision boundaries enable disproportionate model manipulation. We propose Eminence, an explainable and robust black-box backdoor framework with provable theoretical guarantees and inherent stealth properties.
arXiv Detail & Related papers (2025-12-11T08:09:07Z) - On the MIA Vulnerability Gap Between Private GANs and Diffusion Models [51.53790101362898]
Generative Adversarial Networks (GANs) and diffusion models have emerged as leading approaches for high-quality image synthesis. We present the first unified theoretical and empirical analysis of the privacy risks faced by differentially private generative models.
arXiv Detail & Related papers (2025-09-03T14:18:22Z) - Beyond the Worst Case: Extending Differential Privacy Guarantees to Realistic Adversaries [17.780319275883127]
Differential Privacy is a family of definitions that bound the worst-case privacy leakage of a mechanism. This work sheds light on what the worst-case guarantee of DP implies about the success of attackers that are more representative of real-world privacy risks.
arXiv Detail & Related papers (2025-07-10T20:36:31Z) - Unifying Re-Identification, Attribute Inference, and Data Reconstruction Risks in Differential Privacy [24.723577119566112]
We show that bounds on attack success can take the same unified form across re-identification, attribute inference, and data reconstruction risks. Our results are tighter than those of prior methods using $\varepsilon$-DP, Rényi DP, and concentrated DP.
arXiv Detail & Related papers (2025-07-09T15:59:30Z) - Confidential Guardian: Cryptographically Prohibiting the Abuse of Model Abstention [65.47632669243657]
A dishonest institution can exploit abstention mechanisms to discriminate or unjustly deny services under the guise of uncertainty. We demonstrate the practicality of this threat by introducing an uncertainty-inducing attack called Mirage. We propose Confidential Guardian, a framework that analyzes calibration metrics on a reference dataset to detect artificially suppressed confidence.
arXiv Detail & Related papers (2025-05-29T19:47:50Z) - Bayes-Nash Generative Privacy Against Membership Inference Attacks [24.330984323956173]
We propose a game-theoretic framework modeling privacy protection as a Bayesian game between defender and attacker. To address strategic complexity, we represent the defender's mixed strategy as a neural network generator mapping private datasets to public representations. Our approach significantly outperforms state-of-the-art methods by generating stronger attacks and achieving better privacy-utility tradeoffs.
arXiv Detail & Related papers (2024-10-09T20:29:04Z) - Order of Magnitude Speedups for LLM Membership Inference [5.124111136127848]
Large Language Models (LLMs) promise to revolutionize computing broadly, but their complexity and extensive training data also expose privacy vulnerabilities.
One of the simplest privacy risks associated with LLMs is their susceptibility to membership inference attacks (MIAs).
We propose a low-cost MIA that leverages an ensemble of small quantile regression models to determine whether a document belongs to the model's training set.
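As a rough illustration of the idea summarized above, the following hedged sketch fits cheap quantile regressors on a reference set of known non-member documents and flags a candidate as a training member when its observed loss falls below the predicted low quantile. The features, quantile level, and ensemble size are illustrative assumptions, not the paper's exact recipe.

```python
# A hedged sketch of a quantile-regression membership test: predict a low
# quantile of the target model's loss on *non-member* documents, then flag
# a candidate as a member if its observed loss falls below that quantile.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-document features and target-model losses
# for a reference set known NOT to be in the training data.
n_ref, n_feat = 2000, 8
ref_features = rng.normal(size=(n_ref, n_feat))
ref_losses = 2.0 + 0.3 * ref_features[:, 0] + rng.gumbel(0.0, 0.2, size=n_ref)

alpha = 0.05  # predict the 5th percentile of the non-member loss (assumed)
ensemble = []
for seed in range(5):  # small ensemble of quantile regressors (size assumed)
    model = GradientBoostingRegressor(
        loss="quantile", alpha=alpha, n_estimators=50,
        max_depth=2, random_state=seed,
    )
    # Each ensemble member sees a bootstrap resample of the reference data.
    idx = rng.integers(0, n_ref, size=n_ref)
    model.fit(ref_features[idx], ref_losses[idx])
    ensemble.append(model)

def is_member(features: np.ndarray, observed_loss: float) -> bool:
    """Flag membership if the loss is below the ensemble's predicted quantile."""
    threshold = float(np.mean([m.predict(features[None, :])[0] for m in ensemble]))
    return observed_loss < threshold

# Example query: a document with an unusually low loss looks like a member.
print(is_member(rng.normal(size=n_feat), observed_loss=1.2))
```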
arXiv Detail & Related papers (2024-09-22T16:18:14Z) - Low-rank finetuning for LLMs: A fairness perspective [54.13240282850982]
Low-rank approximation techniques have become the de facto standard for fine-tuning Large Language Models.
This paper investigates the effectiveness of these methods in capturing the shift of fine-tuning datasets from the initial pre-trained data distribution.
We show that low-rank fine-tuning inadvertently preserves undesirable biases and toxic behaviors.
arXiv Detail & Related papers (2024-05-28T20:43:53Z) - The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks [90.52808174102157]
In safety-critical applications such as medical imaging and autonomous driving, it is imperative to maintain high adversarial robustness to protect against potential adversarial attacks.
A notable knowledge gap remains concerning the uncertainty inherent in adversarially trained models.
This study investigates the uncertainty of deep learning models by examining the performance of conformal prediction (CP) in the context of standard adversarial attacks.
arXiv Detail & Related papers (2024-05-14T18:05:19Z) - ATTAXONOMY: Unpacking Differential Privacy Guarantees Against Practical Adversaries [11.550822252074733]
We offer a detailed taxonomy of attacks, showing their various dimensions and highlighting that many real-world settings have been understudied.
We operationalize our taxonomy by using it to analyze a real-world case study, the Israeli Ministry of Health's recent release of a birth dataset using Differential Privacy.
arXiv Detail & Related papers (2024-05-02T20:23:23Z) - Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning [49.242828934501986]
Multimodal contrastive learning has emerged as a powerful paradigm for building high-quality features.
However, backdoor attacks subtly embed malicious behaviors within the model during training.
We introduce an innovative token-based localized forgetting training regime.
arXiv Detail & Related papers (2024-03-24T18:33:15Z) - Approximate and Weighted Data Reconstruction Attack in Federated Learning [1.802525429431034]
Federated learning (FL) enables clients to collaborate on building a machine learning model without sharing their private data.
Recent data reconstruction attacks demonstrate that an attacker can recover clients' training data based on the parameters shared in FL.
We propose an approximation method, which makes attacking FedAvg scenarios feasible by generating the intermediate model updates of the clients' local training processes.
arXiv Detail & Related papers (2023-08-13T17:40:56Z) - CANIFE: Crafting Canaries for Empirical Privacy Measurement in Federated Learning [77.27443885999404]
Federated Learning (FL) is a setting for training machine learning models in distributed environments.
We propose a novel method, CANIFE, that uses carefully crafted samples by a strong adversary to evaluate the empirical privacy of a training round.
arXiv Detail & Related papers (2022-10-06T13:30:16Z) - Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z) - Beyond Gradients: Exploiting Adversarial Priors in Model Inversion Attacks [7.49320945341034]
Collaborative machine learning settings can be susceptible to adversarial interference and attacks.
One class of such attacks is termed model inversion attacks, characterised by the adversary reverse-engineering the model to extract representations.
We propose a novel model inversion framework that builds on the foundations of gradient-based model inversion attacks.
arXiv Detail & Related papers (2022-03-01T14:22:29Z) - Reconstructing Training Data with Informed Adversaries [30.138217209991826]
Given access to a machine learning model, can an adversary reconstruct the model's training data?
This work studies this question from the lens of a powerful informed adversary who knows all the training data points except one.
We show it is feasible to reconstruct the remaining data point in this stringent threat model.
arXiv Detail & Related papers (2022-01-13T09:19:25Z) - Revisiting Design Choices in Model-Based Offline Reinforcement Learning [39.01805509055988]
Offline reinforcement learning enables agents to leverage large pre-collected datasets of environment transitions to learn control policies.
This paper compares design choices and proposes novel evaluation protocols to investigate their interaction with other hyperparameters, such as the number of models or the imaginary rollout horizon.
arXiv Detail & Related papers (2021-10-08T13:51:34Z) - Training Meta-Surrogate Model for Transferable Adversarial Attack [98.13178217557193]
We consider adversarial attacks on a black-box model when no queries are allowed.
In this setting, many methods directly attack surrogate models and transfer the obtained adversarial examples to fool the target model.
We show that we can obtain a Meta-Surrogate Model (MSM) such that attacks on this model are more easily transferred to other models.
arXiv Detail & Related papers (2021-09-05T03:27:46Z) - Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z) - Systematic Evaluation of Privacy Risks of Machine Learning Models [41.017707772150835]
We show that prior work on membership inference attacks may severely underestimate the privacy risks.
We first propose to benchmark membership inference privacy risks by improving existing non-neural network based inference attacks.
We then introduce a new approach for fine-grained privacy analysis by formulating and deriving a new metric called the privacy risk score (see the sketch after this entry).
arXiv Detail & Related papers (2020-03-24T00:53:53Z)
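For the last entry, the privacy risk score can be read as a per-sample posterior probability of membership. The following sketch is an interpretation under assumptions, not the paper's exact formulation: it estimates the score from shadow-model loss histograms with a uniform membership prior.

```python
# A hedged sketch of a per-sample "privacy risk score": the posterior
# probability that a sample was in the training set, given an observable
# signal such as its loss. Histogram likelihoods and the 0.5 prior are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Shadow-model losses: members tend to have lower loss than non-members.
member_losses = rng.gamma(shape=2.0, scale=0.2, size=5000)
nonmember_losses = rng.gamma(shape=2.0, scale=0.5, size=5000)

bins = np.linspace(0.0, 3.0, 61)
p_loss_member, _ = np.histogram(member_losses, bins=bins, density=True)
p_loss_nonmember, _ = np.histogram(nonmember_losses, bins=bins, density=True)

def privacy_risk_score(loss: float, prior_member: float = 0.5) -> float:
    """Posterior P(member | loss) via Bayes' rule on histogram likelihoods."""
    i = int(np.clip(np.searchsorted(bins, loss) - 1, 0, len(bins) - 2))
    num = p_loss_member[i] * prior_member
    den = num + p_loss_nonmember[i] * (1.0 - prior_member)
    return float(num / den) if den > 0 else prior_member

print(privacy_risk_score(0.1))   # low loss -> high membership risk
print(privacy_risk_score(1.5))   # high loss -> low membership risk
```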