AnoFel: Supporting Anonymity for Privacy-Preserving Federated Learning
- URL: http://arxiv.org/abs/2306.06825v1
- Date: Mon, 12 Jun 2023 02:25:44 GMT
- Title: AnoFel: Supporting Anonymity for Privacy-Preserving Federated Learning
- Authors: Ghada Almashaqbeh, Zahra Ghodsi
- Abstract summary: Federated learning enables users to collaboratively train a machine learning model over their private datasets.
Secure aggregation protocols are employed to mitigate information leakage about the local datasets.
This setup, however, still leaks the participation of a user in a training iteration, which can also be sensitive.
We introduce AnoFel, the first framework to support private and anonymous dynamic participation in federated learning.
- Score: 4.086517346598676
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning enables users to collaboratively train a machine learning
model over their private datasets. Secure aggregation protocols are employed to
mitigate information leakage about the local datasets. This setup, however,
still leaks the participation of a user in a training iteration, which can also
be sensitive. Protecting user anonymity is even more challenging in dynamic
environments where users may (re)join or leave the training process at any
point in time. In this paper, we introduce AnoFel, the first framework to
support private and anonymous dynamic participation in federated learning.
AnoFel leverages several cryptographic primitives, the concept of anonymity
sets, differential privacy, and a public bulletin board to support anonymous
user registration, as well as unlinkable and confidential model update
submission. Additionally, our system allows dynamic participation, where users
can join or leave at any time, without needing any recovery protocol or
interaction. To assess security, we formalize a notion for privacy and
anonymity in federated learning, and formally prove that AnoFel satisfies this
notion. To the best of our knowledge, our system is the first solution with
provable anonymity guarantees. To assess efficiency, we provide a concrete
implementation of AnoFel, and conduct experiments showing its ability to
support learning applications scaling to a large number of clients. For an
MNIST classification task with 512 clients, the client setup takes less than 3
sec, and a training iteration can be finished in 3.2 sec. We also compare our
system with prior work and demonstrate its practicality for contemporary
learning tasks.
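To make these ingredients concrete, below is a minimal, illustrative sketch of secure aggregation via pairwise masking combined with local differential-privacy noise. It is not AnoFel's actual construction: the integer seeds stand in for keys derived through a key agreement, and the dimension and noise scale are assumed.
```python
import itertools
import numpy as np

DIM, SIGMA = 4, 0.1                      # model dimension and noise scale (assumed)
clients = [0, 1, 2]
updates = {c: np.full(DIM, float(c + 1)) for c in clients}   # toy local updates

# One shared seed per client pair, standing in for a key-agreement output.
seeds = {pair: 1000 + i for i, pair in enumerate(itertools.combinations(clients, 2))}

def masked_update(cid):
    """Add DP noise plus pairwise masks that cancel in the server-side sum."""
    out = updates[cid] + np.random.default_rng(cid).normal(0.0, SIGMA, DIM)
    for peer in clients:
        if peer == cid:
            continue
        pair = tuple(sorted((cid, peer)))
        # Both endpoints derive the same mask; the lower id adds it and the
        # higher id subtracts it, so every mask cancels in the aggregate.
        mask = np.random.default_rng(seeds[pair]).normal(0.0, 1.0, DIM)
        out += mask if cid < peer else -mask
    return out

aggregate = sum(masked_update(c) for c in clients)          # what the server sees
print(np.allclose(aggregate, sum(updates.values()), atol=1.0))  # True, up to DP noise
```
In a real protocol the masks come from authenticated key agreement and dropped-out clients need a recovery step; the point here is only that pairwise masks cancel in the sum while each individual submission stays hidden.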
Related papers
- Secure Aggregation is Not Private Against Membership Inference Attacks [66.59892736942953]
We investigate the privacy implications of SecAgg in federated learning.
We show that SecAgg offers weak privacy against membership inference attacks even in a single training round.
Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection.
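For reference, a standard instantiation of such noise injection is the Gaussian mechanism. The sketch below uses the classic analytic noise bound (valid for epsilon < 1) with assumed privacy parameters; it is illustrative, not the mitigation evaluated in the paper.
```python
import numpy as np

def gaussian_sigma(epsilon: float, delta: float, sensitivity: float) -> float:
    """Noise scale for the Gaussian mechanism (analytic bound, epsilon < 1)."""
    return np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon

update = np.array([0.2, -0.5, 0.1])                 # a clipped model update
sigma = gaussian_sigma(epsilon=0.5, delta=1e-5, sensitivity=1.0)  # assumed budget
noisy = update + np.random.default_rng(0).normal(0.0, sigma, update.shape)
print(f"sigma = {sigma:.2f}", noisy)
```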
arXiv Detail & Related papers (2024-03-26T15:07:58Z)
- Fingerprint Attack: Client De-Anonymization in Federated Learning [44.77305865061609]
Federated Learning allows collaborative training without data sharing in settings where participants do not trust the central server and one another.
This paper examines whether anonymizing participants is an adequate defense, proposing a novel fingerprinting attack over the gradients participants send to the server.
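As a toy illustration of why hiding identities alone can fail (a simplified stand-in, not the paper's attack construction), a server can often link a client's updates across rounds purely by gradient similarity:
```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 5, 16
signatures = rng.normal(size=(n_clients, dim))   # stable per-client gradient "style"
round_a = signatures + 0.1 * rng.normal(size=signatures.shape)
round_b = signatures + 0.1 * rng.normal(size=signatures.shape)
perm = rng.permutation(n_clients)                # anonymized submission order
shuffled_b = round_b[perm]

def cosine(u, v):
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Link each round-A update to the most similar anonymized round-B update.
hits = 0
for i in range(n_clients):
    j = max(range(n_clients), key=lambda k: cosine(round_a[i], shuffled_b[k]))
    hits += perm[j] == i                         # did we re-identify the client?
print(f"re-identified {hits}/{n_clients} clients despite shuffling")
```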
arXiv Detail & Related papers (2023-09-12T11:10:30Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information, such as membership or properties of participant data, or even to reconstruct that data outright.
We show that simple linear models can effectively capture client-specific properties solely from the aggregated model updates.
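A toy version of that observation, with invented data generation purely for illustration: fit a linear probe that predicts, from the aggregate alone, whether a target client's data carries a property.
```python
import numpy as np

rng = np.random.default_rng(1)
DIM, ROUNDS = 32, 400
prop_dir = rng.normal(size=DIM)        # how the property shifts the target's update

# Per round: does the target client's data carry the property?
labels = rng.integers(0, 2, ROUNDS)
others = 2.0 * rng.normal(size=(ROUNDS, DIM))   # sum of the other clients' updates
agg = others + labels[:, None] * prop_dir + rng.normal(size=(ROUNDS, DIM))

# Least-squares linear probe with a bias column: train on half, test on the rest.
X = np.hstack([agg, np.ones((ROUNDS, 1))])
half = ROUNDS // 2
w, *_ = np.linalg.lstsq(X[:half], labels[:half] * 2.0 - 1.0, rcond=None)
preds = X[half:] @ w > 0
print("probe accuracy:", (preds == (labels[half:] == 1)).mean())
```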
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
- Warmup and Transfer Knowledge-Based Federated Learning Approach for IoT Continuous Authentication [34.6454670154373]
We propose a novel Federated Learning (FL) approach that protects user anonymity and maintains the security of user data.
Our experiments show a significant increase in user authentication accuracy while maintaining user privacy and data security.
arXiv Detail & Related papers (2022-11-10T15:51:04Z)
- Efficient and Privacy Preserving Group Signature for Federated Learning [2.121963121603413]
Federated Learning (FL) is a Machine Learning (ML) technique that aims to reduce the threats to user data privacy.
This paper proposes an efficient and privacy-preserving protocol for FL based on group signature.
arXiv Detail & Related papers (2022-07-12T04:12:10Z)
- SPAct: Self-supervised Privacy Preservation for Action Recognition [73.79886509500409]
Existing approaches for mitigating privacy leakage in action recognition require privacy labels along with the action labels from the video dataset.
Recent developments of self-supervised learning (SSL) have unleashed the untapped potential of the unlabeled data.
We present a novel training framework which removes privacy information from input video in a self-supervised manner without requiring privacy labels.
arXiv Detail & Related papers (2022-03-29T02:56:40Z)
- Attribute Inference Attack of Speech Emotion Recognition in Federated Learning Settings [56.93025161787725]
Federated learning (FL) is a distributed machine learning paradigm that coordinates clients to train a model collaboratively without sharing local data.
We propose an attribute inference attack framework that infers sensitive attribute information of the clients from shared gradients or model parameters.
We show that the attribute inference attack is achievable for SER systems trained using FL.
arXiv Detail & Related papers (2021-12-26T16:50:42Z)
- Differentially Private Secure Multi-Party Computation for Federated Learning in Financial Applications [5.50791468454604]
Federated learning enables a population of clients, working with a trusted server, to collaboratively learn a shared machine learning model.
This reduces the risk of exposing sensitive data, but it is still possible to reverse engineer information about a client's private data set from communicated model parameters.
We present a privacy-preserving federated learning protocol to a non-specialist audience, demonstrate it using logistic regression on a real-world credit card fraud data set, and evaluate it using an open-source simulation platform.
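In that spirit, the following is a hedged sketch of combining the two ideas: each client clips its logistic-regression gradient to bound sensitivity and adds Gaussian noise before the server averages the updates. The synthetic data and all parameters are assumptions, not the paper's protocol or its credit-card fraud evaluation.
```python
import numpy as np

rng = np.random.default_rng(7)
CLIENTS, N, DIM = 5, 200, 8               # assumed sizes
CLIP, SIGMA, LR, ROUNDS = 1.0, 0.05, 0.5, 50

w_true = rng.normal(size=DIM)             # synthetic ground-truth model
data = []
for _ in range(CLIENTS):
    X = rng.normal(size=(N, DIM))
    y = (X @ w_true > 0).astype(float)    # each client's private labels
    data.append((X, y))

def client_gradient(w, X, y):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))           # sigmoid predictions
    g = X.T @ (p - y) / len(y)                   # logistic-loss gradient
    g *= min(1.0, CLIP / np.linalg.norm(g))      # clip: bounds sensitivity
    return g + rng.normal(0.0, SIGMA, DIM)       # local Gaussian noise

w = np.zeros(DIM)
for _ in range(ROUNDS):
    grads = [client_gradient(w, X, y) for X, y in data]
    w -= LR * np.mean(grads, axis=0)             # server averages noisy updates

acc = np.mean([((X @ w > 0) == (y > 0.5)).mean() for X, y in data])
print(f"training accuracy after {ROUNDS} rounds: {acc:.2f}")
```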
arXiv Detail & Related papers (2020-10-12T17:16:27Z)
- Federated Learning of User Authentication Models [69.93965074814292]
We propose Federated User Authentication (FedUA), a framework for privacy-preserving training of machine learning models.
FedUA adopts a federated learning framework to enable a group of users to jointly train a model without sharing their raw inputs.
We show our method is privacy-preserving, scales with the number of users, and allows new users to be added to training without changing the output layer.
arXiv Detail & Related papers (2020-07-09T08:04:38Z)
- TIPRDC: Task-Independent Privacy-Respecting Data Crowdsourcing Framework for Deep Learning with Anonymized Intermediate Representations [49.20701800683092]
We present TIPRDC, a task-independent privacy-respecting data crowdsourcing framework with anonymized intermediate representation.
The framework learns a feature extractor that hides private information in the intermediate representations while maximally retaining the original information embedded in the raw data, so the data collector can accomplish unspecified downstream learning tasks.
arXiv Detail & Related papers (2020-05-23T06:21:26Z)
- PrivacyFL: A simulator for privacy-preserving and secure federated learning [2.578242050187029]
Federated learning is a technique that enables distributed clients to collaboratively learn a shared machine learning model.
PrivacyFL is a privacy-preserving and secure federated learning simulator.
arXiv Detail & Related papers (2020-02-19T20:16:13Z)