The Measure of Deception: An Analysis of Data Forging in Machine Unlearning
- URL: http://arxiv.org/abs/2509.05865v1
- Date: Sat, 06 Sep 2025 23:44:05 GMT
- Title: The Measure of Deception: An Analysis of Data Forging in Machine Unlearning
- Authors: Rishabh Dixit, Yuan Hui, Rayan Saab
- Abstract summary: A key challenge in verifying unlearning is forging: adversarially crafting data that mimics the gradient of a target point, thereby creating the appearance of unlearning without actually removing information. We show that adversarial forging is fundamentally limited and that false unlearning claims can, in principle, be detected.
- Score: 2.141079906482723
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Motivated by privacy regulations and the need to mitigate the effects of harmful data, machine unlearning seeks to modify trained models so that they effectively "forget" designated data. A key challenge in verifying unlearning is forging -- adversarially crafting data that mimics the gradient of a target point, thereby creating the appearance of unlearning without actually removing information. To capture this phenomenon, we consider the collection of data points whose gradients approximate a target gradient within tolerance $\epsilon$ -- which we call an $\epsilon$-forging set -- and develop a framework for its analysis. For linear regression and one-layer neural networks, we show that the Lebesgue measure of this set is small: it scales on the order of $\epsilon$ and, when $\epsilon$ is small enough, on the order of $\epsilon^d$. More generally, under mild regularity assumptions, we prove that the forging set measure decays as $\epsilon^{(d-r)/2}$, where $d$ is the data dimension and $r<d$ is the nullity of a variation matrix defined by the model gradients. Extensions to batch SGD and almost-everywhere smooth loss functions yield the same asymptotic scaling. In addition, we establish probability bounds showing that, under non-degenerate data distributions, the likelihood of randomly sampling a forging point is vanishingly small. These results provide evidence that adversarial forging is fundamentally limited and that false unlearning claims can, in principle, be detected.
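As a rough illustration of the $\epsilon$-forging set in the linear-regression setting above, the sketch below (my own construction; the dimensions, weights, and uniform sampling box are illustrative assumptions) Monte Carlo-estimates the fraction of points whose per-sample squared-loss gradient falls within $\epsilon$ of a target gradient. The estimated fraction should shrink quickly as $\epsilon$ decreases, in line with the measure bounds in the abstract.

```python
# Monte Carlo sketch of an epsilon-forging set for linear regression:
# the per-sample gradient of the squared loss at weights w is (w @ x - y) * x.
import numpy as np

rng = np.random.default_rng(0)
d = 3                                  # toy data dimension
w = rng.normal(size=d)                 # fixed model weights
x0, y0 = rng.normal(size=d), 1.0       # target point to be "forgotten"
g0 = (w @ x0 - y0) * x0                # target gradient

def forging_fraction(eps, n_samples=200_000, box=3.0):
    """Fraction of (x, y) drawn uniformly from a box whose per-sample
    gradient lies within eps of the target gradient g0."""
    x = rng.uniform(-box, box, size=(n_samples, d))
    y = rng.uniform(-box, box, size=n_samples)
    grads = (x @ w - y)[:, None] * x
    return np.mean(np.linalg.norm(grads - g0, axis=1) <= eps)

for eps in (1.0, 0.5, 0.25, 0.1):
    print(f"eps={eps:5.2f}  estimated forging fraction ~ {forging_fraction(eps):.2e}")
```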
Related papers
- Information Hidden in Gradients of Regression with Target Noise [2.8911861322232686]
We show that the gradients alone can reveal the Hessian. We provide non-asymptotic operator-norm guarantees under sub-Gaussian inputs.
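A minimal sketch of why gradients can expose the Hessian, in a toy least-squares setup of my own (not the paper's target-noise model): the gradient map $w \mapsto \frac{1}{n}X^\top(Xw - y)$ is affine in $w$, so gradient queries at $d+1$ points determine $H = \frac{1}{n}X^\top X$ exactly.

```python
# Recover the least-squares Hessian from gradient queries alone.
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

grad = lambda w: X.T @ (X @ w - y) / n

w0 = rng.normal(size=d)
g0 = grad(w0)
# Differences of gradients along coordinate directions give the Hessian columns.
H_est = np.stack([grad(w0 + np.eye(d)[i]) - g0 for i in range(d)], axis=1)
H_true = X.T @ X / n
print("max abs error:", np.abs(H_est - H_true).max())   # ~ machine precision
```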
arXiv Detail & Related papers (2026-01-26T14:50:16Z)
- Learning Shrinks the Hard Tail: Training-Dependent Inference Scaling in a Solvable Linear Model [2.7074235008521246]
We analyze neural scaling laws in a solvable model of last-layer fine-tuning where targets have intrinsic, instance-heterogeneous difficulty. We show that learning shrinks the "hard tail" of the error distribution.
arXiv Detail & Related papers (2026-01-07T10:00:17Z) - Phase-space entropy at acquisition reflects downstream learnability [54.4100065023873]
We propose an acquisition-level scalar $S_mathcal B$ based on instrument-resolved phase space.<n>We show theoretically that (S_mathcal B) correctly identifies the phase-space coherence of periodic sampling.<n>$|S_mathcal B|$ consistently ranks sampling geometries and predicts downstream reconstruction/recognition difficulty emphwithout training.
arXiv Detail & Related papers (2025-12-22T10:03:51Z) - Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.<n>We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.<n>Our framework is tested empirically over clean and noisy datasets.
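For context, the simplest abstention mechanism is a Chow-style confidence threshold; the sketch below illustrates only that baseline, not the paper's density-ratio formulation (the threshold and toy probabilities are my assumptions).

```python
# Chow-style rejection: abstain when the top-class probability is low.
import numpy as np

def predict_with_rejection(proba, threshold=0.8):
    """Predicted class per row of `proba`, or -1 (abstain) when the
    top-class probability falls below `threshold`."""
    conf = proba.max(axis=1)
    preds = proba.argmax(axis=1)
    return np.where(conf >= threshold, preds, -1)

proba = np.array([[0.95, 0.05], [0.55, 0.45], [0.30, 0.70]])
print(predict_with_rejection(proba))   # [0, -1, -1]
```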
arXiv Detail & Related papers (2024-05-29T01:32:17Z) - Causality Pursuit from Heterogeneous Environments via Neural Adversarial Invariance Learning [12.947265104477237]
Pursuing causality from data is a fundamental problem in scientific discovery, treatment intervention, and transfer learning.<n>This paper introduces a novel algorithmic method for addressing nonparametric invariance and causality learning in regression models across multiple environments.<n>The proposed Focused Adrial Invariant Regularization framework utilizes an innovative minimax optimization approach that drives regression models toward prediction-invariant solutions through adversarial testing.
arXiv Detail & Related papers (2024-05-07T23:37:40Z) - Certified Minimax Unlearning with Generalization Rates and Deletion Capacity [28.998771902033003]
We study the problem of $(epsilon,delta)$-certified machine unlearning for minimax models.
We develop a new minimax unlearning step consisting of a total-Hessian-based complete update.
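As background for Hessian-based unlearning (a hedged sketch for a single regularized least-squares model, not the paper's minimax setting or its total-Hessian update): when the loss is quadratic, one Newton step from the full-data solution under the leave-one-out objective reproduces retraining from scratch exactly.

```python
# Newton-step unlearning of one point from a ridge-regression model.
import numpy as np

rng = np.random.default_rng(2)
n, d, lam = 100, 4, 0.1
X, y = rng.normal(size=(n, d)), rng.normal(size=n)

# Exact ridge solution on the full dataset.
H_full = X.T @ X / n + lam * np.eye(d)
w_full = np.linalg.solve(H_full, X.T @ y / n)

# Unlearn point 0: one Newton step using the leave-one-out objective.
Xr, yr = X[1:], y[1:]
m = n - 1
H_rest = Xr.T @ Xr / m + lam * np.eye(d)
g_rest = Xr.T @ (Xr @ w_full - yr) / m + lam * w_full
w_unlearn = w_full - np.linalg.solve(H_rest, g_rest)

# Compare against retraining from scratch without the removed point.
w_retrain = np.linalg.solve(H_rest, Xr.T @ yr / m)
print("distance to retrained model:", np.linalg.norm(w_unlearn - w_retrain))
```

Since the ridge objective is quadratic, the printed distance is at machine precision; for general losses the same step is only approximate, which is what certified-unlearning guarantees quantify.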
arXiv Detail & Related papers (2023-12-16T06:03:23Z) - Self-Supervised Dataset Distillation for Transfer Learning [77.4714995131992]
We propose a novel problem of distilling an unlabeled dataset into a set of small synthetic samples for efficient self-supervised learning (SSL)
We first prove that a gradient of synthetic samples with respect to a SSL objective in naive bilevel optimization is textitbiased due to randomness originating from data augmentations or masking.
We empirically validate the effectiveness of our method on various applications involving transfer learning.
arXiv Detail & Related papers (2023-10-10T10:48:52Z) - On the Generalization for Transfer Learning: An Information-Theoretic Analysis [8.102199960821165]
We give an information-theoretic analysis of the generalization error and excess risk of transfer learning algorithms.
Our results suggest, perhaps as expected, that the Kullback-Leibler divergence $D(\mu\|\mu')$ plays an important role in the characterizations.
We then generalize the mutual information bound with other divergences such as $\phi$-divergence and Wasserstein distance.
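For context, the classical mutual-information generalization bound of Xu and Raginsky (2017), which analyses of this type extend to transfer learning with divergence terms like $D(\mu\|\mu')$, can be stated as follows (standard background, not the paper's exact theorem):

```latex
% For a sigma-sub-Gaussian loss, hypothesis W, and n-sample training set S,
% the expected generalization gap is controlled by the mutual information I(W;S):
\left| \mathbb{E}\!\left[ L_\mu(W) - L_S(W) \right] \right|
  \;\le\; \sqrt{\frac{2\sigma^2 \, I(W;S)}{n}}
```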
arXiv Detail & Related papers (2022-07-12T08:20:41Z) - A Statistical Learning View of Simple Kriging [0.0]
We analyze the simple Kriging task from a statistical learning perspective.
The goal is to predict the unknown values it takes at any other location with minimum quadratic risk.
We prove non-asymptotic bounds of order $O_{\mathbb{P}}(1/\sqrt{n})$ for the excess risk of a plug-in predictive rule mimicking the true minimizer.
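A minimal plug-in simple-kriging predictor, with an assumed RBF covariance and a toy zero-mean field (these modeling choices are mine, not the paper's setting): the rule predicts $\hat f(x_\ast) = k_\ast^\top K^{-1} y$ at new locations.

```python
# Simple kriging with a plug-in RBF covariance.
import numpy as np

def rbf_cov(A, B, scale=1.0, length=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return scale * np.exp(-d2 / (2 * length**2))

rng = np.random.default_rng(3)
X_obs = rng.uniform(0, 1, size=(40, 2))            # observed spatial locations
f = lambda X: np.sin(4 * X[:, 0]) * np.cos(3 * X[:, 1])
y_obs = f(X_obs)                                   # zero-mean field values (toy)

X_new = rng.uniform(0, 1, size=(5, 2))
K = rbf_cov(X_obs, X_obs) + 1e-8 * np.eye(len(X_obs))  # jitter for stability
k_star = rbf_cov(X_new, X_obs)
y_pred = k_star @ np.linalg.solve(K, y_obs)        # plug-in kriging rule
print(np.c_[y_pred, f(X_new)])                     # predictions vs. truth
```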
arXiv Detail & Related papers (2022-02-15T12:46:43Z) - Towards an Understanding of Benign Overfitting in Neural Networks [104.2956323934544]
Modern machine learning models often employ a huge number of parameters and are typically optimized to have zero training loss.
We examine how these benign overfitting phenomena occur in a two-layer neural network setting.
We show that it is possible for the two-layer ReLU network interpolator to achieve a near minimax-optimal learning rate.
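A hedged linear analogue (the paper studies two-layer ReLU networks; this spiked-covariance least-squares sketch, with constants of my own choosing, only illustrates the phenomenon): the minimum-norm interpolator fits noisy labels exactly, yet its test error stays well below the trivial null predictor because the fitted noise is absorbed by many low-variance directions.

```python
# Benign overfitting in overparameterized least squares (linear analogue).
import numpy as np

rng = np.random.default_rng(4)
n, d_strong, d_weak, noise = 100, 5, 2000, 0.5

def sample(m):
    strong = rng.normal(size=(m, d_strong))    # few high-variance signal directions
    weak = 0.1 * rng.normal(size=(m, d_weak))  # long tail of low-variance directions
    return np.hstack([strong, weak])

w_star = np.concatenate([np.ones(d_strong), np.zeros(d_weak)])
X = sample(n)
y = X @ w_star + noise * rng.normal(size=n)    # noisy labels

w_hat = np.linalg.pinv(X) @ y                  # minimum-norm interpolator
print("max train residual:", np.abs(X @ w_hat - y).max())    # ~0: fits the noise

X_te = sample(2000)
print("test MSE:", np.mean((X_te @ (w_hat - w_star)) ** 2))  # well below null
print("null MSE:", np.mean((X_te @ w_star) ** 2))            # predicting zero
```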
arXiv Detail & Related papers (2021-06-06T19:08:53Z) - Graph Embedding with Data Uncertainty [113.39838145450007]
spectral-based subspace learning is a common data preprocessing step in many machine learning pipelines.
Most subspace learning methods do not take into consideration possible measurement inaccuracies or artifacts that can lead to data with high uncertainty.
arXiv Detail & Related papers (2020-09-01T15:08:23Z)
- The Generalized Lasso with Nonlinear Observations and Generative Priors [63.541900026673055]
We make the assumption of sub-Gaussian measurements, which is satisfied by a wide range of measurement models.
We show that our result can be extended to the uniform recovery guarantee under the assumption of a so-called local embedding property.
arXiv Detail & Related papers (2020-06-22T16:43:35Z)
- A Neural Scaling Law from the Dimension of the Data Manifold [8.656787568717252]
When data is plentiful, the loss achieved by well-trained neural networks scales as a power-law $L \propto N^{-\alpha}$ in the number of network parameters $N$.
The scaling law can be explained if neural models are effectively just performing regression on a data manifold of intrinsic dimension $d$.
This simple theory predicts scaling exponents $\alpha \approx 4/d$ for cross-entropy and mean-squared error losses.
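A short sketch of how such a scaling exponent is measured in practice: fit a line to $(\log N, \log L)$. The data here is synthetic with a planted exponent $\alpha = 4/d$ for $d = 8$; all numbers are illustrative assumptions, not the paper's experiments.

```python
# Estimate the exponent alpha in L ∝ N^{-alpha} from (N, L) pairs.
import numpy as np

d = 8
alpha_true = 4 / d                       # the theory's predicted exponent
N = np.logspace(3, 7, 9)                 # parameter counts
rng = np.random.default_rng(5)
L = N ** (-alpha_true) * np.exp(0.05 * rng.normal(size=N.size))  # noisy losses

slope, _ = np.polyfit(np.log(N), np.log(L), 1)
print(f"fitted alpha ~ {-slope:.3f} (theory 4/d = {alpha_true})")
```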
arXiv Detail & Related papers (2020-04-22T19:16:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.