Perfectly Accurate Membership Inference by a Dishonest Central Server in
Federated Learning
- URL: http://arxiv.org/abs/2203.16463v2
- Date: Thu, 9 Nov 2023 08:44:44 GMT
- Title: Perfectly Accurate Membership Inference by a Dishonest Central Server in
Federated Learning
- Authors: Georg Pichler and Marco Romanelli and Leonardo Rey Vega and Pablo
Piantanida
- Abstract summary: Federated Learning is expected to provide strong privacy guarantees.
We introduce a simple but still very effective membership inference attack algorithm.
Our method provides perfect accuracy in identifying one sample in a training set with thousands of samples.
- Score: 34.13555530204307
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning is expected to provide strong privacy guarantees, as
only gradients or model parameters, but no plain-text training data, are ever
exchanged, either among the clients or between the clients and the central server. In
this paper, we challenge this claim by introducing a simple but still very
effective membership inference attack algorithm, which relies only on a single
training step. In contrast to the popular honest-but-curious model, we
investigate a framework with a dishonest central server. Our strategy is
applicable to models with ReLU activations and uses the properties of this
activation function to achieve perfect accuracy. Empirical evaluation on visual
classification tasks with the MNIST, CIFAR10, CIFAR100, and CelebA datasets shows
that our method provides perfect accuracy in identifying one sample in a
training set with thousands of samples. Occasional failures of our method lead
us to discover duplicate images in the CIFAR100 and CelebA datasets.
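To make the mechanism concrete, here is a minimal sketch (assumed shapes and variable names; not the authors' exact construction): ReLU outputs zero, and hence contributes zero gradient, whenever its pre-activation is negative. A dishonest server can therefore dispatch parameters containing a "trap" neuron whose pre-activation is positive essentially only for the target sample, so that after a single local training step the gradient of that neuron's bias is nonzero if and only if the target was in the client's data.

```python
# Hedged sketch of the ReLU "trap neuron" idea (illustrative only; the
# variable names and the exact construction are assumptions, not the
# paper's precise algorithm).
import numpy as np

rng = np.random.default_rng(0)

d = 784                          # e.g. a flattened 28x28 image
target = rng.random(d)           # the sample whose membership is tested

# Trap neuron: weights point toward the target; the bias is tuned so the
# pre-activation w.x + b is positive only in a tiny region around target.
w = target / np.linalg.norm(target)
eps = 1e-3
b = eps - w @ target             # pre-activation equals +eps at the target

def bias_gradient(batch):
    """Gradient of sum_i ReLU(w @ x_i + b) with respect to b.

    d/db ReLU(z) is 1 where z > 0 and 0 elsewhere, so this simply counts
    how many samples in the batch activate the trap neuron.
    """
    return float(np.sum(batch @ w + b > 0))

# A batch of 1000 random non-target samples: a random x almost never aligns
# with w closely enough to reach w @ target - eps, so the trap stays silent.
batch_without = rng.random((1000, d))
batch_with = np.vstack([batch_without, target[None, :]])

print(bias_gradient(batch_without))  # 0.0 -> target absent
print(bias_gradient(batch_with))     # 1.0 -> target present
```

The zero-versus-nonzero test is exact, which matches the abstract's perfect-accuracy claim and is consistent with the rare failures pointing to duplicate images: an identical copy of the target triggers the same trap.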
Related papers
- ConDa: Fast Federated Unlearning with Contribution Dampening [46.074452659791575]
ConDa is a framework that performs efficient unlearning by tracking down the parameters that affect the global model for each client.
We perform experiments on multiple datasets and demonstrate that ConDa is effective at forgetting a client's data.
arXiv Detail & Related papers (2024-10-05T12:45:35Z) - Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on centralizing data for training.
The paper proposes a novel federated face forgery detection framework that learns personalized representations.
arXiv Detail & Related papers (2024-06-17T02:20:30Z) - Cohort Squeeze: Beyond a Single Communication Round per Cohort in Cross-Device Federated Learning [51.560590617691005]
We investigate whether it is possible to squeeze "more juice" out of each cohort than what is possible in a single communication round.
Our approach leads to up to a 74% reduction in the total communication cost needed to train an FL model in the cross-device setting.
arXiv Detail & Related papers (2024-06-03T08:48:49Z) - Lightweight Unsupervised Federated Learning with Pretrained Vision Language Model [32.094290282897894]
Federated learning aims to train a collective model from physically isolated clients while safeguarding the privacy of users' data.
We propose a novel lightweight unsupervised federated learning approach that leverages unlabeled data on each client to perform lightweight model training and communication.
Our proposed method greatly enhances model performance in comparison to CLIP's zero-shot predictions and even outperforms supervised federated learning benchmark methods.
arXiv Detail & Related papers (2024-04-17T03:42:48Z) - Client-specific Property Inference against Secure Aggregation in
Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information, such as membership or properties of participant data, or even to reconstruct that data outright.
We show that simple linear models can effectively capture client-specific properties solely from the aggregated model updates (a toy sketch of this idea appears after this list).
arXiv Detail & Related papers (2023-03-07T14:11:01Z) - Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated
Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
arXiv Detail & Related papers (2022-09-30T05:42:56Z) - Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z) - Toward Understanding the Influence of Individual Clients in Federated
Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over the model parameters, and propose an effective and efficient model to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z) - CatFedAvg: Optimising Communication-efficiency and Classification
Accuracy in Federated Learning [2.2172881631608456]
We introduce a new family of Federated Learning algorithms called CatFedAvg.
It not only improves communication efficiency but also improves the quality of learning using a category coverage maximization strategy.
Our experiments show an increase of 10 absolute percentage points in accuracy on the MNIST dataset, with 70% lower network transfer compared to FedAvg.
arXiv Detail & Related papers (2020-11-14T06:52:02Z)