$f$-FUM: Federated Unlearning via min--max and $f$-divergence
- URL: http://arxiv.org/abs/2602.06187v1
- Date: Thu, 05 Feb 2026 20:50:02 GMT
- Title: $f$-FUM: Federated Unlearning via min--max and $f$-divergence
- Authors: Radmehr Karimian, Amirhossein Bagheri, Meghdad Kurmanji, Nicholas D. Lane, Gholamali Aminian
- Abstract summary: We propose a novel federated unlearning framework formulated as a min-max optimization problem. We show that our method achieves significant speedups over naive retraining, with minimal impact on utility.
- Score: 14.929179934474497
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) has emerged as a powerful paradigm for collaborative machine learning across decentralized data sources, preserving privacy by keeping data local. However, increasing legal and ethical demands, such as the "right to be forgotten", and the need to mitigate data poisoning attacks have underscored the urgent necessity for principled data unlearning in FL. Unlike centralized settings, the distributed nature of FL complicates the removal of individual data contributions. In this paper, we propose a novel federated unlearning framework formulated as a min-max optimization problem, where the objective is to maximize an $f$-divergence between the model trained with all data and the model retrained without specific data points, while minimizing the degradation on retained data. Our framework can act as a plugin and be added to almost any federated setup, unlike SOTA methods such as \cite{10269017}, which requires model degradation on the server, or \cite{khalil2025notfederatedunlearningweight}, which requires access to the model architecture and weights. This formulation allows for efficient approximation of data removal effects in a federated setting. We provide empirical evaluations showing that our method achieves significant speedups over naive retraining, with minimal impact on utility.
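The abstract states the formulation but not the objective itself. One plausible formalization (all notation below is assumed for illustration, not taken from the paper) casts unlearning as maximizing the divergence on the forget set subject to bounded degradation on the retained set; the Lagrangian then yields a min--max (saddle-point) problem:

```latex
% Plausible formalization (notation assumed, not from the paper):
%   \theta^*   model trained on all data (fixed)
%   \theta     unlearned model being optimized
%   S_f, S_r   forget set and retained set
%   D_f        an f-divergence between predictive distributions
\begin{align*}
  \max_{\theta}\;\; & D_f\!\left(p_{\theta}(\cdot \mid S_f)\,\middle\|\,p_{\theta^*}(\cdot \mid S_f)\right)\\
  \text{s.t.}\;\;   & \mathcal{L}(\theta; S_r) - \mathcal{L}(\theta^*; S_r) \le \epsilon,
\end{align*}
% whose Lagrangian gives the min--max problem
\[
  \max_{\theta}\,\min_{\lambda \ge 0}\;
  D_f\!\left(p_{\theta}\,\middle\|\,p_{\theta^*}\right)
  - \lambda\left(\mathcal{L}(\theta; S_r) - \mathcal{L}(\theta^*; S_r) - \epsilon\right).
\]
```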
Related papers
- Tackling Federated Unlearning as a Parameter Estimation Problem [2.9085589574462816]
This work introduces an efficient Federated Unlearning framework based on information theory. Our method uses second-order Hessian information to identify and selectively reset only the parameters most sensitive to the data being forgotten.
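The summary names the mechanism but not the details. A minimal sketch of one way to do Hessian-guided selective resetting, using the diagonal empirical Fisher (squared gradients) as a stand-in for second-order information; all names and the reset rule are hypothetical, not the paper's actual algorithm:

```python
import torch

def selective_reset(model, forget_loader, loss_fn, reset_fraction=0.01):
    """Reset the parameters most sensitive to the forget data.

    Sensitivity is approximated by the diagonal empirical Fisher
    (squared gradients on the forget set). Hypothetical sketch.
    """
    sensitivity = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in forget_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for s, p in zip(sensitivity, model.parameters()):
            if p.grad is not None:
                s += p.grad.detach() ** 2  # diagonal empirical Fisher

    # Global threshold so only the top `reset_fraction` of parameters reset.
    flat = torch.cat([s.flatten() for s in sensitivity])
    k = max(1, int(reset_fraction * flat.numel()))
    threshold = flat.topk(k).values.min()

    with torch.no_grad():
        for s, p in zip(sensitivity, model.parameters()):
            mask = s >= threshold
            p[mask] = torch.randn_like(p)[mask] * 0.01  # re-initialize
```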
arXiv Detail & Related papers (2025-08-26T14:24:45Z) - One-shot Federated Learning via Synthetic Distiller-Distillate Communication [63.89557765137003]
One-shot Federated learning (FL) is a powerful technology facilitating collaborative training of machine learning models in a single round of communication. We propose FedSD2C, a novel and practical one-shot FL framework designed to address these challenges.
arXiv Detail & Related papers (2024-12-06T17:05:34Z) - FedQUIT: On-Device Federated Unlearning via a Quasi-Competent Virtual Teacher [8.592348476398149]
Federated Learning (FL) systems enable the collaborative training of machine learning models without requiring centralized collection of individual data. We propose FedQUIT, a novel algorithm that uses knowledge distillation to scrub the contribution of the data to be forgotten from an FL global model.
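A generic sketch of distillation-based scrubbing of this kind: on retained examples the student matches the global teacher, while on forget examples it is pushed toward an uninformative uniform target. This is an assumed illustration of the paradigm, not FedQUIT's exact virtual-teacher construction; all names are hypothetical:

```python
import torch
import torch.nn.functional as F

def kd_unlearning_loss(student_logits, teacher_logits, is_forget, T=2.0):
    """Distillation-style unlearning loss.

    is_forget: bool tensor marking forget-set examples in the batch.
    Generic sketch of KD-based scrubbing; not FedQUIT's exact method.
    """
    log_p = F.log_softmax(student_logits / T, dim=-1)
    uniform = torch.full_like(student_logits,
                              1.0 / student_logits.size(-1))
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    # Per-example target: teacher on retained, uniform on forget.
    target = torch.where(is_forget.unsqueeze(-1), uniform, soft_teacher)
    return F.kl_div(log_p, target, reduction="batchmean") * T * T
```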
arXiv Detail & Related papers (2024-08-14T14:36:28Z) - FedHPL: Efficient Heterogeneous Federated Learning with Prompt Tuning and Logit Distillation [32.305134875959226]
Federated learning (FL) is a privacy-preserving paradigm that enables distributed clients to collaboratively train models with a central server.
We propose FedHPL, a parameter-efficient unified $\textbf{Fed}$erated learning framework for $\textbf{H}$eterogeneous settings.
We show that our framework outperforms state-of-the-art FL approaches with less overhead and fewer training rounds.
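A rough sketch of the prompt-tuning-plus-logit-distillation paradigm named in the title: only prompt parameters are trained on a frozen backbone, with a per-class global logit table as teacher. The `backbone(x, prompts)` API and the logit-table aggregation are assumptions for illustration, not FedHPL's actual procedure:

```python
import torch
import torch.nn.functional as F

def local_prompt_step(backbone, prompts, batch, global_logits, optimizer,
                      alpha=0.5, T=2.0):
    """One client update: cross-entropy on local data plus distillation
    toward per-class logits aggregated across clients. Generic sketch."""
    x, y = batch
    logits = backbone(x, prompts)   # frozen backbone + prompts (assumed API)
    ce = F.cross_entropy(logits, y)
    teacher = global_logits[y]      # per-class aggregated logits (assumed)
    kd = F.kl_div(F.log_softmax(logits / T, dim=-1),
                  F.softmax(teacher / T, dim=-1),
                  reduction="batchmean") * T * T
    loss = ce + alpha * kd
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```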
arXiv Detail & Related papers (2024-05-27T15:25:32Z) - Communication Efficient and Provable Federated Unlearning [43.178460522012934]
We study federated unlearning, a novel problem to eliminate the impact of specific clients or data points on the global model learned via federated learning (FL).
This problem is driven by the right to be forgotten and the privacy challenges in FL.
We introduce a new framework for exact federated unlearning that meets two essential criteria: \textit{communication efficiency} and \textit{exact unlearning provability}.
arXiv Detail & Related papers (2024-01-19T20:35:02Z) - FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape of the original data.
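Gradient matching is one concrete way to "locally match the loss landscape": optimize a small synthetic set so the gradient it induces on the model matches the gradient of the real data. A hypothetical sketch of that idea, not FedDM's exact objective:

```python
import torch

def distill_synthetic_set(model, loss_fn, real_loader, syn_x, syn_y,
                          steps=100, lr=0.1):
    """Optimize synthetic inputs so their induced gradient matches the
    real-data gradient. Hypothetical gradient-matching sketch."""
    params = list(model.parameters())
    syn_x = syn_x.clone().requires_grad_(True)
    opt = torch.optim.SGD([syn_x], lr=lr)
    for _ in range(steps):
        x, y = next(iter(real_loader))
        real_grads = torch.autograd.grad(loss_fn(model(x), y), params)
        syn_grads = torch.autograd.grad(loss_fn(model(syn_x), syn_y),
                                        params, create_graph=True)
        # Squared distance between real-data and synthetic-data gradients.
        match = sum(((g_r.detach() - g_s) ** 2).sum()
                    for g_r, g_s in zip(real_grads, syn_grads))
        opt.zero_grad()
        match.backward()  # updates only syn_x; model weights are not stepped
        opt.step()
    return syn_x.detach()
```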
arXiv Detail & Related papers (2022-07-20T04:55:18Z) - FedHiSyn: A Hierarchical Synchronous Federated Learning Framework for Resource and Data Heterogeneity [56.82825745165945]
Federated Learning (FL) enables training a global model without sharing the decentralized raw data stored on multiple devices, protecting data privacy.
We propose a hierarchical synchronous FL framework, i.e., FedHiSyn, to tackle the problems of straggler effects and outdated models.
We evaluate the proposed framework based on MNIST, EMNIST, CIFAR10 and CIFAR100 datasets and diverse heterogeneous settings of devices.
arXiv Detail & Related papers (2022-06-21T17:23:06Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
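One common way to alleviate forgetting in local training is to penalize drift of the local model's predictions away from the global model's. A generic sketch of that idea (FedReg's actual regularizer may differ; all names are hypothetical):

```python
import torch
import torch.nn.functional as F

def regularized_local_loss(model, global_model, x, y, beta=0.1):
    """Local loss plus a forgetting-alleviation term that keeps local
    predictions close to the global model's. Generic sketch."""
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)
    with torch.no_grad():
        global_logits = global_model(x)
    drift = F.kl_div(F.log_softmax(logits, dim=-1),
                     F.softmax(global_logits, dim=-1),
                     reduction="batchmean")
    return task_loss + beta * drift
```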
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - Federated Unlearning [24.60965999954735]
Federated learning (FL) has emerged as a promising distributed machine learning (ML) paradigm.
Practical needs of the "right to be forgotten" and countering data poisoning attacks call for efficient techniques that can remove, or unlearn, specific training data from the trained FL model.
We present FedEraser, the first federated unlearning methodology that can eliminate the influence of a federated client's data on the global FL model.
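A naive replay baseline for client-level unlearning of this kind: rebuild the global model from stored per-round updates with the forgotten client excluded. FedEraser additionally calibrates the retained updates rather than simply re-averaging them, so treat this only as an illustration; the update-log format is assumed:

```python
import torch

def remove_client_updates(init_state, round_updates, forget_id):
    """Replay stored per-round client updates, excluding one client.

    round_updates: list of dicts {client_id: {param_name: delta}}.
    Naive baseline sketch, not FedEraser's calibrated reconstruction.
    """
    state = {k: v.clone() for k, v in init_state.items()}
    for updates in round_updates:
        kept = [u for cid, u in updates.items() if cid != forget_id]
        for name in state:
            state[name] += sum(u[name] for u in kept) / len(kept)
    return state
```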
arXiv Detail & Related papers (2020-12-27T08:54:37Z) - Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called \emph{Influence}, quantify this influence over the model parameters, and propose an effective and efficient method to estimate this metric.
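A first-order way to quantify a client's influence over parameters is the accumulated difference between aggregating with and without that client's updates. The sketch below is a hypothetical estimate with an assumed update-log format, not the paper's exact Influence metric:

```python
import torch

def estimate_client_influence(round_updates, client_id):
    """Accumulate the shift a client's updates induce in the aggregate.

    round_updates: list of dicts {client_id: {param_name: delta}}.
    Hypothetical first-order influence estimate.
    """
    names = round_updates[0][client_id].keys()
    influence = {k: torch.zeros_like(round_updates[0][client_id][k])
                 for k in names}
    for updates in round_updates:
        rest = [u for cid, u in updates.items() if cid != client_id]
        for name in names:
            with_c = sum(u[name] for u in updates.values()) / len(updates)
            without_c = sum(u[name] for u in rest) / len(rest)
            influence[name] += with_c - without_c
    return influence
```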
arXiv Detail & Related papers (2020-12-20T14:34:36Z)