Fully Decentralized Certified Unlearning
- URL: http://arxiv.org/abs/2512.08443v1
- Date: Tue, 09 Dec 2025 10:15:15 GMT
- Title: Fully Decentralized Certified Unlearning
- Authors: Hithem Lamri, Michail Maniatakos
- Abstract summary: Machine unlearning (MU) seeks to remove the influence of specified data from a trained model in response to privacy requests or data poisoning. While certified unlearning has been analyzed in centralized and federated settings (via guarantees analogous to differential privacy, DP), the decentralized setting -- where peers communicate without a coordinator -- remains underexplored.
- Score: 4.944495309580904
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine unlearning (MU) seeks to remove the influence of specified data from a trained model in response to privacy requests or data poisoning. While certified unlearning has been analyzed in centralized and server-orchestrated federated settings (via guarantees analogous to differential privacy, DP), the decentralized setting -- where peers communicate without a coordinator -- remains underexplored. We study certified unlearning in decentralized networks with fixed topologies and propose RR-DU, a random-walk procedure that performs one projected gradient ascent step on the forget set at the unlearning client and a geometrically distributed number of projected descent steps on the retained data elsewhere, combined with subsampled Gaussian noise and projection onto a trust region around the original model. We provide (i) convergence guarantees in the convex case and stationarity guarantees in the nonconvex case, (ii) $(\varepsilon,\delta)$ network-unlearning certificates on client views via subsampled Gaussian Rényi DP (RDP) with segment-level subsampling, and (iii) deletion-capacity bounds that scale with the forget-to-local data ratio and quantify the effect of decentralization (network mixing and randomized subsampling) on the privacy--utility trade-off. Empirically, on image benchmarks (MNIST, CIFAR-10), RR-DU matches a given $(\varepsilon,\delta)$ while achieving higher test accuracy than decentralized DP baselines and reducing forget accuracy to random guessing ($\approx 10\%$).
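The abstract fully specifies the shape of one RR-DU pass, so a minimal sketch can make the moving parts concrete. The sketch below assumes a least-squares loss, Poisson subsampling, and an L2 trust region; all function names, hyperparameters, and the uniform random-walk transition are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_trust_region(w, w_orig, radius):
    """Project w onto the L2 ball of the given radius centred at w_orig."""
    diff = w - w_orig
    norm = np.linalg.norm(diff)
    return w_orig + diff * (radius / norm) if norm > radius else w

def noisy_subsampled_grad(w, X, y, batch_frac, sigma, clip):
    """Clipped, Poisson-subsampled gradient of a least-squares loss plus Gaussian noise."""
    idx = rng.random(len(y)) < batch_frac
    if not idx.any():
        return np.zeros_like(w)
    g = X[idx].T @ (X[idx] @ w - y[idx]) / idx.sum()
    g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))        # bound sensitivity
    return g + rng.normal(0.0, sigma * clip, size=w.shape)    # Gaussian mechanism

def rr_du_pass(w_orig, forget, retained_by_client, neighbors, start,
               eta=0.1, radius=1.0, q=0.3, batch_frac=0.1, sigma=1.0, clip=1.0):
    """One RR-DU-style pass: ascent on the forget set at the unlearning client,
    then Geometric(q) descent steps on retained data along a random walk over peers."""
    w = w_orig.copy()

    # (1) one projected gradient ascent step on the forget data
    Xf, yf = forget
    w = project_trust_region(
        w + eta * noisy_subsampled_grad(w, Xf, yf, batch_frac, sigma, clip), w_orig, radius)

    # (2) geometrically many projected descent steps on retained data, hopping between peers
    client = start
    for _ in range(rng.geometric(q)):
        Xr, yr = retained_by_client[client]
        w = project_trust_region(
            w - eta * noisy_subsampled_grad(w, Xr, yr, batch_frac, sigma, clip), w_orig, radius)
        client = rng.choice(neighbors[client])     # next hop of the random walk
    return w
```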
Related papers
- DP-CSGP: Differentially Private Stochastic Gradient Push with Compressed Communication [71.60998478544028]
We propose Differentially Private Stochastic Gradient Push with Compressed communication (termed DP-CSGP) for decentralized learning over graphs. For general nonconvex and smooth objective functions, we show that our algorithm maintains high accuracy and efficient communication.
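The summary names the ingredients (gradient push, message compression, DP noise) without the exact recipe; the toy round below combines them in one plausible order, with compression applied to the local update for simplicity. The push-sum weights, top-k compressor, clipping, and noise scale are all illustrative assumptions, not DP-CSGP's actual update.

```python
import numpy as np

rng = np.random.default_rng(1)

def top_k(v, k):
    """Keep only the k largest-magnitude entries (simple compression operator)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def sgp_round(x, w, P, grads, eta=0.1, k=5, sigma=0.5, clip=1.0):
    """x: (n_nodes, d) parameters; w: (n_nodes,) push-sum weights;
    P: column-stochastic mixing matrix; grads(z) -> per-node gradients at z."""
    z = x / w[:, None]                                        # de-biased local estimates
    g = grads(z)
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g = g * np.minimum(1.0, clip / (norms + 1e-12))           # clip per node
    g = g + rng.normal(0.0, sigma * clip, size=g.shape)       # Gaussian DP noise
    g = np.apply_along_axis(top_k, 1, g, k)                   # keep only k entries per update
    x = x - eta * g
    x, w = P @ x, P @ w                                       # push-sum mixing step
    return x, w
```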
arXiv Detail & Related papers (2025-12-15T17:37:02Z) - Differentially Private Decentralized Dataset Synthesis Through Randomized Mixing with Correlated Noise [0.0]
We explore differentially private synthetic data generation in a decentralized-data setting. We build on the recently proposed Differentially Private Class-Centric Data Aggregation.
arXiv Detail & Related papers (2025-09-12T16:18:35Z) - Decentralized Differentially Private Power Method [4.58112062523768]
We propose a novel Decentralized Differentially Private Power Method (D-DP-PM) for performing Principal Component Analysis (PCA) in networked multi-agent settings. Our method ensures $(\epsilon,\delta)$-Differential Privacy (DP) while enabling collaborative estimation of global eigenvectors across the network. Experiments on real-world datasets demonstrate that D-DP-PM achieves superior privacy-utility tradeoffs compared to naive local DP approaches.
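A decentralized, differentially private power iteration can be sketched from its standard building blocks: each agent applies its local covariance, adds Gaussian noise, and gossips with its neighbours. The code below shows that generic pattern only; it is not claimed to match D-DP-PM's exact updates or noise calibration.

```python
import numpy as np

rng = np.random.default_rng(2)

def decentralized_dp_power_method(local_data, W, iters=50, sigma=0.1):
    """local_data: list of (n_i, d) arrays, one per agent.
    W: doubly-stochastic gossip matrix matching the network topology."""
    n_agents = len(local_data)
    d = local_data[0].shape[1]
    v = np.tile(rng.standard_normal(d), (n_agents, 1))          # per-agent estimates

    for _ in range(iters):
        # each agent applies its local covariance and adds Gaussian noise (DP)
        prod = np.stack([A.T @ (A @ v[i]) / len(A) for i, A in enumerate(local_data)])
        prod += rng.normal(0.0, sigma, size=prod.shape)
        v = W @ prod                                             # gossip averaging step
        v /= np.linalg.norm(v, axis=1, keepdims=True)            # re-normalise
    return v
```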
arXiv Detail & Related papers (2025-07-30T17:15:50Z) - Differential Privacy Analysis of Decentralized Gossip Averaging under Varying Threat Models [6.790905400046194]
We present a novel privacy analysis of decentralized gossip-based averaging algorithms with additive node-level noise. Our main contribution is a new analytical framework that accurately characterizes privacy leakage across varying threat models. We validate our analysis with numerical results demonstrating superior DP bounds compared to existing approaches.
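The analysed primitive, gossip averaging with additive node-level noise, is easy to state in a few lines. The gossip matrix, noise scale, and synchronous rounds below are illustrative choices, not the paper's specific threat-model setup.

```python
import numpy as np

rng = np.random.default_rng(3)

def noisy_gossip_average(x0, W, rounds=20, sigma=0.1):
    """x0: (n_nodes,) private local values; W: doubly-stochastic gossip matrix."""
    x = x0.astype(float).copy()
    for _ in range(rounds):
        x = x + rng.normal(0.0, sigma, size=x.shape)   # node-level DP noise
        x = W @ x                                       # one synchronous gossip step
    return x

# Example: a ring of 4 nodes, each averaging with its two neighbours.
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
print(noisy_gossip_average(np.array([1.0, 2.0, 3.0, 4.0]), W))
```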
arXiv Detail & Related papers (2025-05-26T13:31:43Z) - Secure Aggregation Meets Sparsification in Decentralized Learning [1.7010199949406575]
This paper introduces CESAR, a novel secure aggregation protocol for Decentralized Learning (DL).
CESAR provably defends against honest-but-curious adversaries and can be formally adapted to counteract collusion between them.
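For context, the classic pairwise-masking construction underlying secure aggregation is sketched below; CESAR's sparsification-compatible and collusion-resistant extensions are not reproduced, and the seed derivation shown is purely illustrative.

```python
import numpy as np

def masked_update(i, update, nodes, dim):
    """Node i masks its update so that pairwise masks cancel in the network sum."""
    masked = update.astype(np.int64).copy()
    for j in nodes:
        if j == i:
            continue
        # both endpoints derive the same mask from a shared seed; the smaller id
        # adds it and the larger id subtracts it, so each pair's masks cancel
        seed = hash((min(i, j), max(i, j))) % (2**32)
        mask = np.random.default_rng(seed).integers(0, 2**16, size=dim)
        masked = masked + mask if i < j else masked - mask
    return masked

# Three honest-but-curious nodes: each masked vector looks random on its own,
# but the sum of the masked vectors equals the sum of the true updates.
dim, nodes = 4, [0, 1, 2]
updates = [np.arange(1, 5), np.arange(5, 9), np.arange(9, 13)]
masked = [masked_update(i, u, nodes, dim) for i, u in zip(nodes, updates)]
print(sum(masked))    # identical to sum(updates): the masks cancel
print(sum(updates))
```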
arXiv Detail & Related papers (2024-05-13T12:52:58Z) - Decentralized Sporadic Federated Learning: A Unified Algorithmic Framework with Convergence Guarantees [18.24213566328972]
Decentralized federated learning (DFL) captures FL settings where both (i) model updates and (ii) model aggregations are carried out by the clients without a central server. We propose $\texttt{DSpodFL}$, a DFL methodology built on a generalized notion of $\textit{sporadicity}$ in both local gradient and aggregation processes. $\texttt{DSpodFL}$ consistently achieves improved speeds compared with baselines under various system settings.
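A toy version of sporadicity can be written with two Bernoulli switches per client, one gating the local gradient step and one gating aggregation. This is only a simplified illustration; DSpodFL's actual per-link aggregation rule and analysis are more general.

```python
import numpy as np

rng = np.random.default_rng(4)

def sporadic_dfl_round(x, W, local_grads, eta=0.1, p_grad=0.5, p_mix=0.5):
    """x: (n_nodes, d) models; W: mixing matrix; local_grads(x) -> (n_nodes, d)."""
    n = x.shape[0]
    g = local_grads(x)
    do_grad = rng.random(n) < p_grad                   # sporadic local SGD steps
    x = x - eta * do_grad[:, None] * g
    do_mix = rng.random(n) < p_mix                     # sporadic aggregations
    mixed = W @ x
    return np.where(do_mix[:, None], mixed, x)         # only some clients aggregate
```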
arXiv Detail & Related papers (2024-02-05T19:02:19Z) - Unsupervised Deep Probabilistic Approach for Partial Point Cloud
Registration [74.53755415380171]
Deep point cloud registration methods face challenges with partial overlaps and rely on labeled data.
We propose UDPReg, an unsupervised deep probabilistic registration framework for point clouds with partial overlaps.
Our UDPReg achieves competitive performance on the 3DMatch/3DLoMatch and ModelNet/ModelLoNet benchmarks.
arXiv Detail & Related papers (2023-03-23T14:18:06Z) - Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
The local exchange of estimates allows the inference of private data. Perturbations chosen independently at every agent result in a significant performance loss.
We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to be invisible to the network centroid.
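The key idea, perturbations that are invisible to the network-level aggregate, can be illustrated with noise that is correlated across agents so it sums to zero. The zero-sum projection below is only the simplest instance of such a nullspace condition, not the paper's full construction.

```python
import numpy as np

rng = np.random.default_rng(5)

def zero_sum_perturbations(n_agents, dim, sigma=1.0):
    """Per-agent noise whose sum over agents is exactly zero."""
    noise = rng.normal(0.0, sigma, size=(n_agents, dim))
    return noise - noise.mean(axis=0, keepdims=True)    # project onto the zero-sum subspace

estimates = rng.normal(size=(5, 3))                      # 5 agents, 3-dim estimates
perturbed = estimates + zero_sum_perturbations(5, 3)
# individual estimates are masked, but the network centroid is untouched
print(np.allclose(perturbed.mean(axis=0), estimates.mean(axis=0)))   # True
```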
arXiv Detail & Related papers (2020-10-23T10:35:35Z) - Learning Calibrated Uncertainties for Domain Shift: A Distributionally
Robust Learning Approach [150.8920602230832]
We propose a framework for learning calibrated uncertainties under domain shifts.
In particular, a density ratio estimate reflects the closeness of a target (test) sample to the source (training) distribution.
We show that our proposed method generates calibrated uncertainties that benefit downstream tasks.
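Density-ratio estimation is commonly implemented with a source-versus-target classifier whose predicted odds approximate the ratio of the two densities; a small sketch of that generic recipe is given below. The data, classifier, and scale are illustrative, and the paper's calibration pipeline built on top of it is not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
source = rng.normal(0.0, 1.0, size=(500, 2))       # "training" distribution
target = rng.normal(1.0, 1.0, size=(500, 2))       # shifted "test" distribution

X = np.vstack([source, target])
y = np.concatenate([np.zeros(500), np.ones(500)])   # 0 = source, 1 = target
clf = LogisticRegression().fit(X, y)

def density_ratio(x):
    """Approximate p_target(x) / p_source(x) from the classifier's odds."""
    p = clf.predict_proba(np.atleast_2d(x))[:, 1]
    return p / (1.0 - p)

print(density_ratio(np.array([0.0, 0.0])))   # near the source: small ratio
print(density_ratio(np.array([2.0, 2.0])))   # far from the source: large ratio
```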
arXiv Detail & Related papers (2020-10-08T02:10:54Z) - A(DP)$^2$SGD: Asynchronous Decentralized Parallel Stochastic Gradient
Descent with Differential Privacy [15.038697541988746]
A popular distributed learning strategy is federated learning, where there is a central server storing the global model and a set of local computing nodes updating the model parameters with their corresponding data.
In this paper, we present a differentially private version of the asynchronous decentralized parallel SGD framework, or A(DP)$^2$SGD for short, which maintains the communication efficiency of ADPSGD and prevents inference by malicious participants.
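One asynchronous decentralized step in this spirit pairs two random workers, averages their models, and has each take a DP-noised local SGD step. The pairing rule, clipping, and noise scale below are illustrative assumptions rather than the exact A(DP)$^2$SGD protocol.

```python
import numpy as np

rng = np.random.default_rng(7)

def async_decentralized_dp_step(models, local_grads, eta=0.05, sigma=0.5, clip=1.0):
    """models: (n_workers, d); local_grads(i, w) -> gradient of worker i at w."""
    n = models.shape[0]
    i, j = rng.choice(n, size=2, replace=False)          # asynchronous random pairing
    avg = 0.5 * (models[i] + models[j])                    # decentralized averaging
    models[i] = models[j] = avg
    for k in (i, j):
        g = local_grads(k, models[k])
        g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))        # clip
        g += rng.normal(0.0, sigma * clip, size=g.shape)         # Gaussian DP noise
        models[k] = models[k] - eta * g
    return models
```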
arXiv Detail & Related papers (2020-08-21T00:56:22Z) - Learning while Respecting Privacy and Robustness to Distributional
Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
The proposed algorithms offer robustness with little overhead.
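A common surrogate for this kind of distributionally robust objective is adversarial training: an inner loop perturbs the inputs within an $\epsilon$-ball to maximize the loss, and the outer step descends on the worst-case loss. The least-squares instance below is a generic illustration, not necessarily the paper's formulation.

```python
import numpy as np

def dro_step(w, X, y, eps=0.1, eta=0.05, inner_steps=5, alpha=0.05):
    """Least-squares model y ~ X @ w; the inner loop finds worst-case inputs."""
    X_adv = X.copy()
    for _ in range(inner_steps):
        residual = X_adv @ w - y
        grad_x = np.outer(residual, w)                 # d(loss)/d(X_adv), one row per sample
        X_adv = X_adv + alpha * np.sign(grad_x)        # PGD-style ascent on the inputs
        X_adv = np.clip(X_adv, X - eps, X + eps)       # stay inside the epsilon-ball
    grad_w = X_adv.T @ (X_adv @ w - y) / len(y)         # outer descent on the worst case
    return w - eta * grad_w
```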
arXiv Detail & Related papers (2020-07-07T18:25:25Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under
Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z) - Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
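The denoising step named in the title, Laplacian smoothing, multiplies the noisy gradient by $(I + \sigma L)^{-1}$ for a discrete Laplacian $L$, which attenuates high-frequency noise. The 1-D circulant version below shows the effect on a synthetic gradient; federated details and the paper's exact parameterization are omitted.

```python
import numpy as np

def laplacian_smooth(g, sigma_ls=1.0):
    """Return (I + sigma_ls * L)^{-1} g for the 1-D circulant Laplacian L, via the FFT."""
    d = len(g)
    lam = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(d) / d)   # eigenvalues of L
    return np.real(np.fft.ifft(np.fft.fft(g) / (1.0 + sigma_ls * lam)))

rng = np.random.default_rng(9)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))             # stand-in for a true gradient
noisy = clean + rng.normal(0.0, 0.5, size=clean.shape)     # after adding DP noise
smoothed = laplacian_smooth(noisy, sigma_ls=3.0)
print(np.linalg.norm(noisy - clean), np.linalg.norm(smoothed - clean))  # smoothing reduces error
```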
arXiv Detail & Related papers (2020-05-01T04:28:38Z)