Membership Information Leakage in Federated Contrastive Learning
- URL: http://arxiv.org/abs/2404.16850v1
- Date: Wed, 6 Mar 2024 19:53:25 GMT
- Title: Membership Information Leakage in Federated Contrastive Learning
- Authors: Kongyang Chen, Wenfeng Wang, Zixin Wang, Wangjun Zhang, Zhipeng Li, Yao Huang
- Abstract summary: Federated Contrastive Learning (FCL) represents a burgeoning approach for learning from decentralized unlabeled data.
FCL is susceptible to privacy risks, such as membership information leakage, stemming from its distributed nature.
This study delves into the feasibility of executing a membership inference attack on FCL and proposes a robust attack methodology.
- Score: 7.822625013699216
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Contrastive Learning (FCL) represents a burgeoning approach for learning from decentralized unlabeled data while upholding data privacy. In FCL, participant clients collaborate in learning a global encoder using unlabeled data, which can serve as a versatile feature extractor for diverse downstream tasks. Nonetheless, FCL is susceptible to privacy risks, such as membership information leakage, stemming from its distributed nature, an aspect often overlooked in current solutions. This study delves into the feasibility of executing a membership inference attack on FCL and proposes a robust attack methodology. The attacker's objective is to determine whether a given sample belongs to the training data by accessing the model's inference output. Specifically, we concentrate on attackers operating as clients, lacking the capability to manipulate server-side aggregation methods or discern the training status of other clients. We introduce two membership inference attacks tailored for FCL: the passive membership inference attack and the active membership inference attack, contingent on the attacker's involvement in local model training. Experimental findings across diverse datasets validate the effectiveness of our attacks and underscore the inherent privacy risks associated with the federated contrastive learning paradigm.
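The abstract leaves the attack mechanics implicit. As a purely illustrative sketch (the `encoder`, `augment`, and thresholding rule below are hypothetical assumptions, not the paper's actual method), a passive membership inference attack on a contrastive encoder can be framed as scoring how tightly the encoder clusters augmented views of a candidate sample, since contrastive training pulls views of member samples together:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def passive_mia_score(encoder, x, augment, n_views=4):
    """Average pairwise cosine similarity between embeddings of augmented
    views of x. Contrastive training pulls views of *member* samples
    together, so a higher score suggests membership."""
    views = [encoder(augment(x)) for _ in range(n_views)]
    sims = [cosine(views[i], views[j])
            for i in range(n_views) for j in range(i + 1, n_views)]
    return sum(sims) / len(sims)

def infer_membership(score, threshold):
    """Declare 'member' when the agreement score clears the threshold."""
    return score >= threshold
```

A real attacker would calibrate the threshold, e.g. on shadow data; here it is left as a free parameter.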
Related papers
- ReCaLL: Membership Inference via Relative Conditional Log-Likelihoods [56.073335779595475]
We propose ReCaLL (Relative Conditional Log-Likelihood), a novel membership inference attack (MIA).
ReCaLL examines the relative change in conditional log-likelihoods when prefixing target data points with non-member context.
We conduct comprehensive experiments and show that ReCaLL achieves state-of-the-art performance on the WikiMIA dataset.
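Based only on the description above, one plausible formalization of the ReCaLL score is the relative change between the conditional and unconditional log-likelihoods; the `cond_ll` interface is an assumption made for illustration, not the paper's API:

```python
def recall_score(cond_ll, x, nonmember_prefix):
    """ReCaLL-style score: relative change in the log-likelihood of x when
    it is conditioned on a known non-member prefix.  cond_ll(x, prefix) is
    assumed to return the log-likelihood of x given prefix under the target
    model (prefix=None for the unconditional case)."""
    ll_uncond = cond_ll(x, None)
    ll_cond = cond_ll(x, nonmember_prefix)
    return ll_cond / ll_uncond
```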
arXiv Detail & Related papers (2024-06-23T00:23:13Z) - Let's Focus: Focused Backdoor Attack against Federated Transfer Learning [12.68970864847173]
Federated Transfer Learning (FTL) is the most general variation of Federated Learning.
In this paper, we investigate this intriguing Federated Learning scenario to identify and exploit a vulnerability.
The proposed attack can be carried out by one of the clients during the Federated Learning phase of FTL.
arXiv Detail & Related papers (2024-04-30T10:11:44Z) - Defending against Data Poisoning Attacks in Federated Learning via User Elimination [0.0]
This paper introduces a novel framework focused on the strategic elimination of adversarial users within a federated model.
We detect anomalies in the aggregation phase of the federated algorithm by integrating metadata gathered by the local training instances with Differential Privacy techniques.
Our experiments demonstrate the efficacy of our methods, significantly mitigating the risk of data poisoning while maintaining user privacy and model performance.
arXiv Detail & Related papers (2024-04-19T10:36:00Z) - Personalized Federated Learning with Attention-based Client Selection [57.71009302168411]
We propose FedACS, a new PFL algorithm with an Attention-based Client Selection mechanism.
FedACS integrates an attention mechanism to enhance collaboration among clients with similar data distributions.
Experiments on CIFAR10 and FMNIST validate FedACS's superiority.
arXiv Detail & Related papers (2023-12-23T03:31:46Z) - Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z) - Active Membership Inference Attack under Local Differential Privacy in Federated Learning [18.017082794703555]
Federated learning (FL) was originally regarded as a framework for collaborative learning among clients with data privacy protection.
We propose a new active membership inference (AMI) attack carried out by a dishonest server in FL.
arXiv Detail & Related papers (2023-02-24T15:21:39Z) - Knowledge-Aware Federated Active Learning with Non-IID Data [75.98707107158175]
We propose a federated active learning paradigm to efficiently learn a global model with limited annotation budget.
The main challenge faced by federated active learning is the mismatch between the active sampling goal of the global model on the server and that of the local clients.
We propose Knowledge-Aware Federated Active Learning (KAFAL), which consists of Knowledge-Specialized Active Sampling (KSAS) and Knowledge-Compensatory Federated Update (KCFU).
arXiv Detail & Related papers (2022-11-24T13:08:43Z) - Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets [53.866927712193416]
We show that an adversary who can poison a training dataset can cause models trained on this dataset to leak private details belonging to other parties.
Our attacks are effective across membership inference, attribute inference, and data extraction.
Our results cast doubts on the relevance of cryptographic privacy guarantees in multiparty protocols for machine learning.
arXiv Detail & Related papers (2022-03-31T18:06:28Z) - RobustFed: A Truth Inference Approach for Robust Federated Learning [9.316565110931743]
Federated learning is a framework that enables clients to collaboratively train a global model under a central server's orchestration.
The aggregation step in federated learning is vulnerable to adversarial attacks as the central server cannot manage clients' behavior.
We propose a novel robust aggregation algorithm inspired by the truth inference methods in crowdsourcing.
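The abstract does not give the algorithm, but truth inference in crowdsourcing is commonly an alternating scheme: estimate the truth as a weighted mean, then re-weight each worker by agreement with it. A minimal sketch of that idea applied to client updates (all details here are hypothetical, not RobustFed's actual procedure):

```python
import numpy as np

def truth_inference_aggregate(updates, n_iters=5, eps=1e-8):
    """Crowdsourcing-style robust aggregation sketch: alternately estimate
    the 'true' update as a weighted mean, then re-weight each client
    inversely to its distance from that estimate, so outlying (potentially
    adversarial) clients are down-weighted."""
    updates = np.asarray(updates, dtype=float)
    weights = np.ones(len(updates)) / len(updates)
    for _ in range(n_iters):
        truth = weights @ updates                         # weighted mean
        dists = np.linalg.norm(updates - truth, axis=1) + eps
        weights = 1.0 / dists                             # closer => heavier
        weights /= weights.sum()
    return weights @ updates, weights
```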
arXiv Detail & Related papers (2021-07-18T09:34:57Z) - Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike other standard membership adversaries, is able to work under the severe restriction of having no access to the victim model's scores.
We show that a victim model that only publishes the labels is still susceptible to sampling attacks and the adversary can recover up to 100% of its performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model as well as output perturbation at prediction time.
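As a hedged illustration of how a label-only adversary can still build a membership score (the perturbation scheme and toy model below are assumptions, not the paper's exact procedure), one can repeatedly query the victim on noisy copies of a sample and measure how stable its predicted label is:

```python
import numpy as np

def sampling_attack_score(predict_label, x, n_samples=50, noise=0.1, rng=None):
    """Label-only membership score: query the victim on many noisy
    perturbations of x and measure how often the predicted label stays the
    same.  Training members tend to sit further from the decision boundary,
    so their labels are more stable under noise."""
    if rng is None:
        rng = np.random.default_rng()
    base = predict_label(x)
    hits = sum(predict_label(x + noise * rng.normal(size=np.shape(x))) == base
               for _ in range(n_samples))
    return hits / n_samples
```

A sample deep inside its decision region scores near 1.0, while one near the boundary flips labels often and scores lower.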
arXiv Detail & Related papers (2020-09-01T12:54:54Z) - A Framework for Evaluating Gradient Leakage Attacks in Federated Learning [14.134217287912008]
Federated learning (FL) is an emerging distributed machine learning framework for collaborative model training with a network of clients.
Recent studies have shown that even sharing local parameter updates from a client to the federated server may be susceptible to gradient leakage attacks.
We present a principled framework for evaluating and comparing different forms of client privacy leakage attacks.
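A toy example of why shared updates leak data: for a single-sample squared loss on a linear model, the weight gradient is a scalar multiple of the input, so the input's direction is recoverable exactly. This illustrates the general leakage principle only, not the evaluation framework proposed in the paper:

```python
import numpy as np

def linear_grad(w, x, y):
    """Gradient of the squared loss 0.5 * (w @ x - y)**2 w.r.t. w
    for a single sample (x, y): a scalar multiple of x."""
    return (w @ x - y) * x

def recover_input_direction(grad):
    """Because the single-sample gradient is c * x for some scalar c,
    normalizing it already leaks x up to sign and scale."""
    return grad / np.linalg.norm(grad)
```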
arXiv Detail & Related papers (2020-04-22T05:15:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.