Label Inference Attacks against Federated Unlearning
- URL: http://arxiv.org/abs/2508.06789v1
- Date: Sat, 09 Aug 2025 02:38:24 GMT
- Title: Label Inference Attacks against Federated Unlearning
- Authors: Wei Wang, Xiangyun Tang, Yajie Wang, Yijing Lin, Tao Zhang, Meng Shen, Dusit Niyato, Liehuang Zhu
- Abstract summary: Federated Unlearning (FU) has emerged as a promising solution for honoring clients' right to be forgotten. We introduce and analyze a new privacy threat against FU and propose a novel label inference attack, ULIA.
- Score: 52.102277522089814
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Unlearning (FU) has emerged as a promising solution for honoring clients' right to be forgotten, allowing clients to erase their data from global models without compromising model performance. Unfortunately, researchers have found that the parameter variations induced by FU expose information about clients' data, enabling attackers to infer the labels of unlearned data, yet label inference attacks against FU remain unexplored. In this paper, we introduce and analyze this new privacy threat against FU and propose a novel label inference attack, ULIA, which can infer unlearning data labels across three FU levels. To address the unique challenges of inferring labels from model variations, we design a gradient-label mapping mechanism in ULIA that establishes a relationship between gradient variations and unlearning labels, enabling label inference from accumulated model variations. We evaluate ULIA in both IID and non-IID settings. Experimental results show that in the IID setting, ULIA achieves a 100% Attack Success Rate (ASR) under both class-level and client-level unlearning. Even when only 1% of a user's local data is forgotten, ULIA still attains an ASR between 62.3% and 93%.
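The abstract does not give implementation details, but the intuition behind a gradient-label mapping can be illustrated with a minimal sketch. The assumption (not confirmed by the abstract) is that the mapping behaves like the well-known cross-entropy leakage heuristic: for a softmax classifier, the gradient of the last layer's bias is negative only at the true label's index, so the sign of the bias change between the global model before and after unlearning points to the forgotten label. All function names, the toy model, and the one-step "unlearning" stand-in below are illustrative, not the authors' ULIA implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def bias_gradient(W, b, x, y, num_classes):
    """Cross-entropy gradient w.r.t. the last-layer bias for one sample.
    It equals softmax(logits) - one_hot(y), so it is negative only at the
    true label's index -- the classic label-leakage signal."""
    p = softmax(W @ x + b)
    one_hot = np.zeros(num_classes)
    one_hot[y] = 1.0
    return p - one_hot

def infer_label_from_unlearning(b_before, b_after):
    """Illustrative gradient-label mapping: unlearning a sample roughly
    reverses its gradient contribution, so the bias entry that drops the
    most after unlearning is taken as the forgotten label."""
    return int(np.argmin(b_after - b_before))

rng = np.random.default_rng(0)
num_classes, dim, lr = 10, 32, 0.5

# Global model snapshot the attacker observes before unlearning.
W = rng.normal(size=(num_classes, dim))
b = rng.normal(size=num_classes)

# The client's sample to be forgotten; its label is unknown to the attacker.
x, y_true = rng.normal(size=dim), 7

# Toy "unlearning": one gradient-ascent step that removes the sample's
# influence -- a stand-in for whatever FU procedure produced the new model.
b_after = b + lr * bias_gradient(W, b, x, y_true, num_classes)

print("inferred label:", infer_label_from_unlearning(b, b_after))  # 7
print("true label    :", y_true)
```

In the paper's actual setting the attacker only observes accumulated model variations across FU rounds and across the three unlearning levels, rather than a single clean update; that is the gap the gradient-label mapping mechanism is designed to bridge.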
Related papers
- Semi-Supervised Federated Learning via Dual Contrastive Learning and Soft Labeling for Intelligent Fault Diagnosis [30.60728200709919]
This paper proposes a semi-supervised federated learning framework, SSFL-DCSL. It integrates dual contrastive loss and soft labeling to address data and label scarcity for distributed clients. It can improve accuracy by 1.15% to 7.85% over state-of-the-art methods.
arXiv Detail & Related papers (2025-07-12T10:54:23Z) - Whispers of Data: Unveiling Label Distributions in Federated Learning Through Virtual Client Simulation [4.81392127803963]
Federated Learning enables collaborative training of a global model across multiple geographically dispersed clients without the need for data sharing. It is susceptible to inference attacks, particularly label inference attacks. We propose a novel label distribution inference attack that is stable and adaptable to various scenarios.
arXiv Detail & Related papers (2025-04-30T08:51:06Z) - Erasing Without Remembering: Implicit Knowledge Forgetting in Large Language Models [81.62767292169225]
We investigate knowledge forgetting in large language models with a focus on its generalisation. We propose PerMU, a novel probability perturbation-based unlearning paradigm. Experiments are conducted on a diverse range of datasets, including TOFU, Harry Potter, ZsRE, WMDP, and MUSE.
arXiv Detail & Related papers (2025-02-27T11:03:33Z) - FedMUA: Exploring the Vulnerabilities of Federated Learning to Malicious Unlearning Attacks [17.74292717846646]
We introduce a novel malicious unlearning attack dubbed FedMUA. We show that FedMUA can induce misclassification on target samples and achieve an 80% attack success rate.
arXiv Detail & Related papers (2025-01-21T03:07:03Z) - Learning Unlabeled Clients Divergence for Federated Semi-Supervised Learning via Anchor Model Aggregation [10.282711631100845]
SemiAnAgg learns unlabeled client contributions via an anchor model.
SemiAnAgg achieves new state-of-the-art results on four widely used FedSemi benchmarks.
arXiv Detail & Related papers (2024-07-14T20:50:40Z) - Navigating Data Heterogeneity in Federated Learning A Semi-Supervised
Federated Object Detection [3.7398615061365206]
Federated Learning (FL) has emerged as a potent framework for training models across distributed data sources.
It faces challenges with limited high-quality labels and non-IID client data, particularly in applications like autonomous driving.
We present a pioneering SSFOD framework, designed for scenarios where labeled data reside only at the server while clients possess unlabeled data.
arXiv Detail & Related papers (2023-10-26T01:40:28Z) - Federated Semi-Supervised Learning with Annotation Heterogeneity [57.12560313403097]
We propose a novel framework called Heterogeneously Annotated Semi-Supervised LEarning (HASSLE).
It is a dual-model framework with two models trained separately on labeled and unlabeled data.
The dual models can implicitly learn from both types of data across different clients, although each dual model is only trained locally on a single type of data.
arXiv Detail & Related papers (2023-03-04T16:04:49Z) - Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z) - Self-Tuning for Data-Efficient Deep Learning [75.34320911480008]
Self-Tuning is a novel approach to enable data-efficient deep learning.
It unifies the exploration of labeled and unlabeled data and the transfer of a pre-trained model.
It outperforms its SSL and TL counterparts on five tasks by sharp margins.
arXiv Detail & Related papers (2021-02-25T14:56:19Z) - Federated Unsupervised Representation Learning [56.715917111878106]
We formulate a new problem in federated learning called Federated Unsupervised Representation Learning (FURL) to learn a common representation model without supervision.
FedCA is composed of two key modules: a dictionary module that aggregates sample representations from each client and shares them with all clients for consistency of the representation space, and an alignment module that aligns each client's representations with a base model trained on public data.
arXiv Detail & Related papers (2020-10-18T13:28:30Z) - Federated Semi-Supervised Learning with Inter-Client Consistency &
Disjoint Learning [78.88007892742438]
We study two essential scenarios of Federated Semi-Supervised Learning (FSSL) based on the location of the labeled data.
We propose a novel method to tackle these problems, which we refer to as Federated Matching (FedMatch).
arXiv Detail & Related papers (2020-06-22T09:43:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.