RAIFLE: Reconstruction Attacks on Interaction-based Federated Learning with Adversarial Data Manipulation
- URL: http://arxiv.org/abs/2310.19163v2
- Date: Thu, 11 Jul 2024 16:04:27 GMT
- Title: RAIFLE: Reconstruction Attacks on Interaction-based Federated Learning with Adversarial Data Manipulation
- Authors: Dzung Pham, Shreyas Kulkarni, Amir Houmansadr
- Abstract summary: We show that users face an elevated risk of having their private interactions reconstructed by the central server.
We introduce RAIFLE, a novel optimization-based attack framework.
Our experiments with federated recommendation and online learning-to-rank scenarios demonstrate that RAIFLE is significantly more powerful than existing reconstruction attacks.
- Score: 14.394939014120451
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Federated learning has emerged as a promising privacy-preserving solution for machine learning domains that rely on user interactions, particularly recommender systems and online learning to rank. While there has been substantial research on the privacy of traditional federated learning, little attention has been paid to the privacy properties of these interaction-based settings. In this work, we show that users face an elevated risk of having their private interactions reconstructed by the central server when the server can control the training features of the items that users interact with. We introduce RAIFLE, a novel optimization-based attack framework where the server actively manipulates the features of the items presented to users to increase the success rate of reconstruction. Our experiments with federated recommendation and online learning-to-rank scenarios demonstrate that RAIFLE is significantly more powerful than existing reconstruction attacks like gradient inversion, achieving high performance consistently in most settings. We discuss the pros and cons of several possible countermeasures to defend against RAIFLE in the context of interaction-based federated learning. Our code is open-sourced at https://github.com/dzungvpham/raifle.
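To make the attack surface concrete, here is a minimal sketch of optimization-based interaction reconstruction in the spirit of RAIFLE, assuming a toy linear scoring model trained with MSE. The function names, model, and optimizer settings are illustrative only, not the authors' implementation (see the open-sourced repository for the real one); the key ingredients are that the server knows the item features (and, in RAIFLE, can actively manipulate them) and searches for interaction labels whose simulated gradient matches the update the client sent.

```python
import torch

def simulated_gradient(weights, item_features, interactions):
    """Gradient a client would share for a linear scoring model
    trained with MSE on its private interaction labels."""
    loss = torch.nn.functional.mse_loss(item_features @ weights, interactions)
    return torch.autograd.grad(loss, weights, create_graph=True)[0]

def reconstruct_interactions(observed_grad, weights, item_features,
                             steps=500, lr=0.1):
    """Find interaction labels whose simulated gradient matches the
    gradient the client actually sent (optimization-based attack)."""
    guess = torch.zeros(item_features.shape[0], requires_grad=True)
    opt = torch.optim.Adam([guess], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        grad_guess = simulated_gradient(weights, item_features,
                                        torch.sigmoid(guess))
        ((grad_guess - observed_grad) ** 2).sum().backward()
        opt.step()
    return (torch.sigmoid(guess) > 0.5).float().detach()

# Toy demo: the server knows X (and in RAIFLE can manipulate it to make
# the system better conditioned), observes g_obs, and recovers y.
torch.manual_seed(0)
weights = torch.randn(32, requires_grad=True)
X = torch.randn(16, 32)                   # server-controlled item features
y = torch.randint(0, 2, (16,)).float()    # private binary interactions
g_obs = simulated_gradient(weights, X, y).detach()
print(reconstruct_interactions(g_obs, weights, X))
```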
Related papers
- A Stealthy Wrongdoer: Feature-Oriented Reconstruction Attack against Split Learning [14.110303634976272]
Split Learning (SL) is a distributed learning framework renowned for its privacy-preserving features and minimal computational requirements.
Previous research consistently highlights the potential privacy breaches in SL systems by server adversaries reconstructing training data.
This paper introduces a new semi-honest Data Reconstruction Attack on SL, named Feature-Oriented Reconstruction Attack (FORA).
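As a rough illustration of how a server can invert smashed data, here is a generic decoder-based reconstruction baseline, not FORA's feature-oriented technique: the semi-honest server trains an inverse network on auxiliary data that maps cut-layer activations back to inputs. The architecture and names below are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical client-side network up to the cut layer; in the semi-honest
# threat model the server knows (or closely approximates) this architecture.
client_encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())

# Server-side inverse network, trained on auxiliary data to map smashed
# activations back to inputs.
decoder = nn.Sequential(nn.Linear(128, 784), nn.Sigmoid())
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)

def train_inverter(aux_loader, epochs=5):
    for _ in range(epochs):
        for x, _ in aux_loader:
            smashed = client_encoder(x).detach()  # what the SL server observes
            loss = nn.functional.mse_loss(decoder(smashed), x.flatten(1))
            opt.zero_grad(); loss.backward(); opt.step()

# At attack time: x_reconstructed = decoder(victim_smashed)
```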
arXiv Detail & Related papers (2024-05-07T08:38:35Z)
- Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
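For reference, the gradient inversion primitive these attacks build on can be sketched in a few lines. This is a DLG-style baseline on a toy model, not CGI's client-side poisoning variant, which is more involved:

```python
import torch
import torch.nn as nn

model = nn.Linear(32, 10)  # toy victim model; parameters known to the attacker

def gradient_inversion(target_grads, steps=100):
    """DLG-style attack: optimize a dummy (input, soft label) pair until
    its gradient matches the gradient the victim shared."""
    dummy_x = torch.randn(1, 32, requires_grad=True)
    dummy_y = torch.randn(1, 10, requires_grad=True)
    opt = torch.optim.LBFGS([dummy_x, dummy_y])

    def closure():
        opt.zero_grad()
        pred = model(dummy_x)
        loss = torch.sum(-torch.softmax(dummy_y, -1)
                         * torch.log_softmax(pred, -1))
        grads = torch.autograd.grad(loss, model.parameters(),
                                    create_graph=True)
        match = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
        match.backward()
        return match

    for _ in range(steps):
        opt.step(closure)
    return dummy_x.detach(), torch.softmax(dummy_y, -1).detach()
```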
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
Combining adversarial training with federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmark and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
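For concreteness, the FL aggregation step described above is essentially a weighted parameter average. A minimal FedAvg sketch, with variable names of our choosing:

```python
import torch

def fedavg(client_states, client_sizes):
    """Average client model parameters, weighted by local dataset size."""
    total = float(sum(client_sizes))
    return {k: sum(state[k] * (n / total)
                   for state, n in zip(client_states, client_sizes))
            for k in client_states[0]}

# Usage: global_state = fedavg([m.state_dict() for m in client_models], sizes)
```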
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Aggregation Service for Federated Learning: An Efficient, Secure, and More Resilient Realization [22.61730495802799]
We present a system design which offers efficient protection of individual model updates throughout the learning procedure.
Our system achieves accuracy comparable to the baseline, with practical performance.
arXiv Detail & Related papers (2022-02-04T05:03:46Z)
- Attribute Inference Attack of Speech Emotion Recognition in Federated Learning Settings [56.93025161787725]
Federated learning (FL) is a distributed machine learning paradigm that coordinates clients to train a model collaboratively without sharing local data.
We propose an attribute inference attack framework that infers sensitive attribute information of the clients from shared gradients or model parameters.
We show that the attribute inference attack is achievable for SER systems trained using FL.
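A common way to realize such an attribute inference attack is a shadow-model pipeline: the attacker simulates clients whose sensitive attribute (e.g., gender in SER) is known, records the updates they would share, and trains a classifier on them. A hedged sketch with hypothetical names:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_attribute_inferrer(shadow_updates, shadow_attributes):
    """Train a classifier mapping flattened gradients/model updates to the
    sensitive attribute, using attacker-simulated (shadow) clients."""
    features = np.stack([u.ravel() for u in shadow_updates])
    return LogisticRegression(max_iter=1000).fit(features, shadow_attributes)

# At attack time, applied to a real client's shared update:
# attr = inferrer.predict(victim_update.ravel()[None, :])
```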
arXiv Detail & Related papers (2021-12-26T16:50:42Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
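The secure aggregation such systems build on is typically pairwise additive masking: each pair of clients derives a shared mask that one adds and the other subtracts, so the masks cancel in the server's sum and only the aggregate is revealed. A simplified sketch with no dropout handling or key agreement; `shared_seed` stands in for a pairwise-agreed secret:

```python
import numpy as np

def mask_update(update, my_id, peer_ids, shared_seed):
    """Add pairwise masks that cancel when the server sums all clients."""
    masked = update.astype(float).copy()
    for peer in peer_ids:
        rng = np.random.default_rng(shared_seed(my_id, peer))
        mask = rng.standard_normal(update.shape)
        masked += mask if my_id < peer else -mask
    return masked

# Toy check: three clients, masks cancel in the server's sum.
seed = lambda a, b: hash(frozenset((a, b))) % (2**32)
updates = [np.ones(4) * i for i in range(3)]
masked = [mask_update(u, i, [j for j in range(3) if j != i], seed)
          for i, u in enumerate(updates)]
print(np.allclose(sum(masked), sum(updates)))  # True
```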
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Gradient Disaggregation: Breaking Privacy in Federated Learning by Reconstructing the User Participant Matrix [12.678765681171022]
We show that aggregated model updates in federated learning may be insecure.
An untrusted central server may disaggregate user updates from sums of updates across participants.
Our attack enables the attribution of learned properties to individual users, violating anonymity.
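The linear-algebraic core of the attack shows up in a toy example: across rounds the server observes G = P·U, where P is the (secret) user participation matrix and U holds per-user updates. Once P is pinned down, individual updates fall out of a least-squares solve. A simplified sketch that assumes P has already been recovered and that user updates stay roughly constant across rounds:

```python
import numpy as np

# Over T rounds the server observes aggregates G = P @ U, where P (T x n)
# marks which of n users participated each round and U (n x d) holds
# per-user updates.
rng = np.random.default_rng(0)
T, n, d = 50, 10, 8
P = rng.integers(0, 2, size=(T, n)).astype(float)   # participation matrix
U = rng.standard_normal((n, d))                      # private per-user updates
G = P @ U                                            # what aggregation reveals

# If the server learns or infers P (e.g., from side channels), individual
# updates fall out of least squares, defeating aggregation privacy.
U_hat = np.linalg.lstsq(P, G, rcond=None)[0]
print(np.allclose(U, U_hat, atol=1e-6))  # True
```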
arXiv Detail & Related papers (2021-06-10T23:55:28Z)
- How to Put Users in Control of their Data in Federated Top-N Recommendation with Learning to Rank [16.256897977543982]
We present FPL, an architecture in which users collaborate in training a central factorization model while controlling the amount of sensitive data leaving their devices.
The proposed approach implements pairwise learning-to-rank optimization following Federated Learning principles, as sketched below.
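The pairwise learning-to-rank objective here is BPR-style: for each (user, interacted item, non-interacted item) triple, push the interacted item's score above the other's. A minimal on-device update step, assuming plain matrix-factorization embeddings; the exact FPL update and its privacy controls are in the paper:

```python
import numpy as np

def bpr_step(u, i_pos, i_neg, lr=0.05, reg=0.01):
    """One pairwise step on the loss -log(sigmoid(u . (i_pos - i_neg)))."""
    x = u @ (i_pos - i_neg)          # pairwise score difference
    g = 1.0 / (1.0 + np.exp(x))      # = 1 - sigmoid(x), the gradient scale
    u_new = u + lr * (g * (i_pos - i_neg) - reg * u)
    i_pos_new = i_pos + lr * (g * u - reg * i_pos)
    i_neg_new = i_neg + lr * (-g * u - reg * i_neg)
    # In FPL, the user decides which of these item updates leave the device.
    return u_new, i_pos_new, i_neg_new
```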
arXiv Detail & Related papers (2020-08-17T10:13:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.