Sequential Subspace Noise Injection Prevents Accuracy Collapse in Certified Unlearning
- URL: http://arxiv.org/abs/2601.05134v1
- Date: Thu, 08 Jan 2026 17:23:13 GMT
- Title: Sequential Subspace Noise Injection Prevents Accuracy Collapse in Certified Unlearning
- Authors: Polina Dolgova, Sebastian U. Stich
- Abstract summary: Certified unlearning based on differential privacy offers strong guarantees but remains largely impractical. We propose sequential noise scheduling, which distributes the noise budget across subspaces of the parameter space. We extend the analysis of noisy fine-tuning to the subspace setting, proving that the same $(\varepsilon,\delta)$ privacy budget is retained.
- Score: 28.628342735283752
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Certified unlearning based on differential privacy offers strong guarantees but remains largely impractical: the noisy fine-tuning approaches proposed so far achieve these guarantees but severely reduce model accuracy. We propose sequential noise scheduling, which distributes the noise budget across orthogonal subspaces of the parameter space, rather than injecting it all at once. This simple modification mitigates the destructive effect of noise while preserving the original certification guarantees. We extend the analysis of noisy fine-tuning to the subspace setting, proving that the same $(\varepsilon,\delta)$ privacy budget is retained. Empirical results on image classification benchmarks show that our approach substantially improves accuracy after unlearning while remaining robust to membership inference attacks. These results show that certified unlearning can achieve both rigorous guarantees and practical utility.
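The scheduling idea in the abstract can be sketched in a few lines. This is a hypothetical illustration (the function names and the choice of coordinate blocks as the orthogonal subspaces are assumptions, not the paper's implementation): instead of perturbing all parameters in one shot, the same per-coordinate noise is injected one subspace at a time, with fine-tuning steps in between.

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_all_at_once(params, sigma):
    """Baseline noisy fine-tuning: perturb every parameter in one shot."""
    return params + rng.normal(0.0, sigma, size=params.shape)

def inject_sequentially(params, sigma, num_blocks):
    """Sequential scheduling (sketch): split the parameter vector into
    orthogonal coordinate blocks and noise one block at a time.  Every
    coordinate is perturbed exactly once with the same sigma, so the
    total injected noise matches the all-at-once baseline."""
    noisy = params.copy()
    for block in np.array_split(np.arange(params.size), num_blocks):
        noisy[block] += rng.normal(0.0, sigma, size=block.size)
        # ...fine-tuning steps would run here between injections,
        # letting the model recover before the next block is noised.
    return noisy

params = rng.normal(size=1000)
seq = inject_sequentially(params, sigma=0.1, num_blocks=4)
```

Because each coordinate still receives exactly one Gaussian perturbation of scale sigma, the abstract's claim is that the $(\varepsilon,\delta)$ accounting of the all-at-once scheme carries over.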
Related papers
- LoRA and Privacy: When Random Projections Help (and When They Don't) [55.65932772290123]
We introduce the (Wishart) projection mechanism, a randomized map of the form $S \mapsto M f(S)$ with $M \sim W_d(\frac{1}{r} I_d, r)$, and study its differential privacy properties. For vector-valued queries $f$, we prove non-asymptotic DP guarantees without any additive noise, showing that Wishart randomness alone can suffice. For matrix-valued queries, however, we establish a sharp negative result: in the noise-free setting, the mechanism is not DP, and we demonstrate its vulnerability.
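The mechanism above can be sampled directly: a Wishart matrix $M \sim W_d(\frac{1}{r} I_d, r)$ is $G G^\top$ for a $d \times r$ matrix $G$ with i.i.d. $N(0, 1/r)$ entries. A minimal sketch (the function name is hypothetical; this only illustrates the sampling, not the paper's DP analysis):

```python
import numpy as np

rng = np.random.default_rng(1)

def wishart_projection(query_output, r):
    """Sketch of the Wishart projection mechanism: left-multiply f(S)
    by M ~ W_d((1/r) I_d, r), sampled as M = G G^T with G a d x r
    matrix of i.i.d. N(0, 1/r) entries.  E[M] = I_d, so the release
    is unbiased; per the abstract, the randomness of M alone can
    supply privacy for vector-valued queries."""
    d = query_output.shape[0]
    G = rng.normal(0.0, np.sqrt(1.0 / r), size=(d, r))
    M = G @ G.T
    return M @ query_output

v = np.ones(5)                    # stand-in for a vector query f(S)
released = wishart_projection(v, r=50)
```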
arXiv Detail & Related papers (2026-01-29T13:43:37Z) - Exploring the Noise Robustness of Online Conformal Prediction [16.623599788608185]
We investigate the robustness of online conformal prediction under uniform label noise with a known noise rate. We propose Noise Robust Online Conformal Prediction (dubbed NR-OCP), which updates the threshold with a novel robust pinball loss. Our theoretical analysis shows that NR-OCP eliminates the coverage gap under both constant and dynamic learning rate schedules.
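For context, the standard online conformal update that such methods build on is a subgradient step on the pinball loss, which makes the threshold track a quantile of the nonconformity scores. A background sketch (the paper's noise-robust pinball loss itself is not reproduced here; the function name is illustrative):

```python
def online_threshold_update(q, score, alpha, lr):
    """One online step on the pinball loss rho_{1-alpha}(score - q):
    the threshold q rises when a score exceeds it and falls otherwise,
    so in equilibrium P(score > q) = alpha, i.e. q tracks the
    (1 - alpha) quantile of the score stream."""
    grad = -(1.0 - alpha) if score > q else alpha
    return q - lr * grad

# toy stream of scores in [0, 1): q should settle near the 0.9 quantile
q = 0.0
for t in range(1, 2001):
    score = (t * 0.6180339887) % 1.0   # low-discrepancy, roughly uniform
    q = online_threshold_update(q, score, alpha=0.1, lr=0.05)
```

NR-OCP's contribution, per the abstract, is replacing this loss with a variant whose update remains calibrated when the observed labels are corrupted by uniform noise of known rate.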
arXiv Detail & Related papers (2025-01-30T14:08:26Z) - Robust Representation Consistency Model via Contrastive Denoising [83.47584074390842]
Randomized smoothing provides theoretical guarantees for certifying robustness against adversarial perturbations. Diffusion models have been successfully employed in randomized smoothing to purify noise-perturbed samples. We reformulate the generative modeling task along the diffusion trajectories in pixel space as a discriminative task in the latent space.
arXiv Detail & Related papers (2025-01-22T18:52:06Z) - Label Noise: Correcting the Forward-Correction [0.0]
Training neural network classifiers on datasets with label noise poses a risk of overfitting them to the noisy labels.
We propose an approach to tackling overfitting caused by label noise.
Motivated by this observation, we propose imposing a lower bound on the training loss to mitigate overfitting.
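Imposing a lower bound on the training loss can be written in one line. A hedged sketch (cf. the well-known "flooding" trick; whether the paper uses exactly this form is an assumption): once the loss drops below the bound, the gradient direction reverses, so training hovers at the bound instead of driving the loss on noisy labels to zero.

```python
def flooded_loss(loss, flood_level):
    """Lower-bound the training loss at flood_level: for loss values
    below the bound, |loss - b| + b flips the sign of the gradient,
    pushing the loss back up toward b; above the bound it is the
    identity, so normal training is unaffected."""
    return abs(loss - flood_level) + flood_level

# above the bound: unchanged; below the bound: reflected upward
high = flooded_loss(0.5, flood_level=0.2)   # stays 0.5
low = flooded_loss(0.1, flood_level=0.2)    # becomes 0.3
```

In practice this wrapper is applied to the differentiable mini-batch loss before backpropagation.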
arXiv Detail & Related papers (2023-07-24T19:41:19Z) - Certified Adversarial Robustness Within Multiple Perturbation Bounds [38.3813286696956]
Randomized smoothing (RS) is a well known certified defense against adversarial attacks.
In this work, we aim to improve the certified adversarial robustness against multiple perturbation bounds simultaneously.
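Several entries in this list build on the same randomized-smoothing construction: classify by majority vote over Gaussian perturbations of the input. A minimal sketch (toy classifier; a real certificate additionally needs a lower confidence bound, e.g. Clopper-Pearson, on the top-class count to derive a certified radius):

```python
import numpy as np

rng = np.random.default_rng(0)

def smoothed_predict(base_classifier, x, sigma, n=1000):
    """Randomized-smoothing prediction: return the majority class of
    the base classifier over n Gaussian perturbations of x with
    standard deviation sigma."""
    votes = {}
    for _ in range(n):
        label = base_classifier(x + rng.normal(0.0, sigma, size=x.shape))
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# toy base classifier: class 1 iff the first coordinate is positive
f = lambda z: int(z[0] > 0)
pred = smoothed_predict(f, np.array([2.0, -1.0]), sigma=0.5)
# x[0] sits 4 sigma from the decision boundary, so the vote is stable
```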
arXiv Detail & Related papers (2023-04-20T16:42:44Z) - Confidence-aware Training of Smoothed Classifiers for Certified Robustness [75.95332266383417]
We use "accuracy under Gaussian noise" as an easy-to-compute proxy of adversarial robustness for an input.
Our experiments show that the proposed method consistently exhibits improved certified robustness upon state-of-the-art training methods.
arXiv Detail & Related papers (2022-12-18T03:57:12Z) - SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness [61.212486108346695]
We propose a training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup.
The proposed procedure effectively identifies over-confident, near off-class samples as a cause of limited robustness.
Our experimental results demonstrate that the proposed method can significantly improve the certified $\ell_2$-robustness of smoothed classifiers.
arXiv Detail & Related papers (2021-11-17T18:20:59Z) - Square Root Principal Component Pursuit: Tuning-Free Noisy Robust Matrix Recovery [8.581512812219737]
We propose a new framework for low-rank matrix recovery from observations corrupted with noise and outliers.
Inspired by the square root Lasso, this new formulation does not require prior knowledge of the noise level.
We show that a single, universal choice of the regularization parameter suffices to achieve reconstruction error proportional to the (a priori unknown) noise level.
arXiv Detail & Related papers (2021-06-17T02:28:11Z) - Consistency Regularization for Certified Robustness of Smoothed Classifiers [89.72878906950208]
A recent technique of randomized smoothing has shown that the worst-case $\ell_2$-robustness can be transformed into the average-case robustness.
We found that the trade-off between accuracy and certified robustness of smoothed classifiers can be greatly controlled by simply regularizing the prediction consistency over noise.
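One common way to regularize prediction consistency over noise is to penalize how far each noisy prediction strays from the mean prediction across noise draws. A sketch in that spirit (one standard instantiation; the paper's exact regularizer may differ):

```python
import numpy as np

def consistency_penalty(noisy_probs):
    """Consistency regularizer sketch: average KL divergence of each
    noisy-input prediction from the mean prediction over noise draws.
    Rows of noisy_probs are noise draws, columns are class
    probabilities; the penalty is zero iff all draws agree."""
    mean_pred = noisy_probs.mean(axis=0)
    kl = np.sum(noisy_probs * (np.log(noisy_probs) - np.log(mean_pred)),
                axis=1)
    return float(kl.mean())

consistent = np.tile([0.7, 0.3], (4, 1))          # all draws agree
inconsistent = np.array([[0.9, 0.1], [0.2, 0.8]])  # draws disagree
```

Adding such a penalty to the training loss, weighted by a scalar, is the knob the abstract describes for trading accuracy against certified robustness.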
arXiv Detail & Related papers (2020-06-07T06:57:43Z) - Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework [60.981406394238434]
We propose a general framework of adversarial certification with non-Gaussian noise and for more general types of attacks.
Our proposed methods achieve better certification results than previous works and provide a new perspective on randomized smoothing certification.
arXiv Detail & Related papers (2020-02-21T07:52:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.