FedRecAttack: Model Poisoning Attack to Federated Recommendation
- URL: http://arxiv.org/abs/2204.01499v1
- Date: Fri, 1 Apr 2022 05:18:47 GMT
- Title: FedRecAttack: Model Poisoning Attack to Federated Recommendation
- Authors: Dazhong Rong, Shuai Ye, Ruoyan Zhao, Hon Ning Yuen, Jianhai Chen, and
Qinming He
- Abstract summary: Federated Recommendation (FR) has gained considerable popularity and attention in the past few years.
In this paper we present FedRecAttack, a model poisoning attack against FR that aims to raise the exposure ratio of target items.
In most recommendation scenarios, apart from private user-item interactions (e.g., clicks, watches and purchases), some interactions are public.
Motivated by this point, in FedRecAttack we make use of the public interactions to approximate users' feature vectors.
- Score: 5.308983430479344
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Recommendation (FR) has gained considerable popularity and
attention in the past few years. In FR, each user's feature vector and
interaction data are kept locally on the user's own client and are thus private
to others. Without access to this information, most existing poisoning attacks
against recommender systems or federated learning lose their validity.
Benefiting from this characteristic, FR is commonly considered fairly secure.
However, we argue that security improvements in FR are still both possible and
necessary. To support this claim, in this paper we present FedRecAttack, a
model poisoning attack against FR that aims to raise the exposure ratio of
target items. In most recommendation scenarios, apart from private user-item
interactions (e.g., clicks, watches and purchases), some interactions are
public (e.g., likes, follows and comments). Motivated by this observation,
FedRecAttack uses the public interactions to approximate users' feature
vectors, so that the attacker can generate poisoned gradients accordingly and
direct malicious users to upload them in a well-designed way. To evaluate the
effectiveness and side effects of FedRecAttack, we conduct extensive
experiments on three real-world datasets of different sizes from two completely
different scenarios. Experimental results demonstrate that FedRecAttack
achieves state-of-the-art effectiveness while its side effects are negligible.
Moreover, even with a small proportion of malicious users (3%) and a small
proportion of public interactions (1%), FedRecAttack remains highly effective,
which reveals that FR is more vulnerable to attack than commonly believed.
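The attack pipeline in the abstract can be made concrete with a short sketch. This is not the authors' code: the mean-pooling approximation, the loss, and all names below are illustrative assumptions consistent with the abstract (approximate user vectors from public interactions, then craft item-embedding gradients that raise target items' predicted scores).
```python
import torch

def approximate_user_vectors(item_emb: torch.Tensor,
                             public_interactions: torch.Tensor) -> torch.Tensor:
    """Estimate user vectors as the mean embedding of publicly interacted items.

    item_emb: (num_items, dim) global item embedding table.
    public_interactions: (num_users, num_items) float 0/1 matrix of public
    interactions (likes, follows, comments); private clicks are NOT visible.
    """
    counts = public_interactions.sum(dim=1, keepdim=True).clamp(min=1.0)
    return (public_interactions @ item_emb) / counts  # (num_users, dim)

def poisoned_item_gradients(item_emb: torch.Tensor,
                            approx_users: torch.Tensor,
                            target_items: list[int]) -> torch.Tensor:
    """Gradient of a loss that pushes target items' scores up for all users."""
    item_emb = item_emb.clone().requires_grad_(True)
    scores = approx_users @ item_emb.T            # predicted preferences
    # Raising exposure ~ maximizing target items' scores, so minimize -score.
    loss = -scores[:, target_items].mean()
    loss.backward()
    return item_emb.grad                          # uploaded by malicious clients
```
In an actual FL round, the malicious clients would upload these gradients in place of honest ones; how they are scaled and scheduled to evade server-side checks is the "well-designed way" the abstract refers to.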
Related papers
- Poisoning Decentralized Collaborative Recommender System and Its Countermeasures [37.205493894131635]
We present a novel attack method named Poisoning with Adaptive Malicious Neighbors (PAMN).
With item promotion in top-K recommendation as the attack objective, PAMN effectively boosts target items' ranks with several adversaries.
With the vulnerabilities of DecRecs uncovered, a dedicated defensive mechanism based on user-level gradient clipping with sparsified updating is proposed (a minimal sketch follows this entry).
arXiv Detail & Related papers (2024-04-01T15:30:02Z)
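A minimal sketch of the defense named in the entry above, user-level gradient clipping with sparsified updating; the clipping norm and keep ratio are assumed hyperparameters, not values from the paper.
```python
import torch

def clip_and_sparsify(grad: torch.Tensor,
                      clip_norm: float = 1.0,
                      keep_ratio: float = 0.1) -> torch.Tensor:
    """User-level gradient clipping followed by magnitude-based sparsification."""
    # Clip the whole per-user update to a fixed L2 norm.
    norm = grad.norm()
    if norm > clip_norm:
        grad = grad * (clip_norm / norm)
    # Keep only the largest-magnitude entries; zero out the rest.
    flat = grad.flatten()
    k = max(1, int(keep_ratio * flat.numel()))
    threshold = flat.abs().topk(k).values.min()
    return torch.where(grad.abs() >= threshold, grad, torch.zeros_like(grad))
```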
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
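The FreqFed entry above describes transforming model updates into the frequency domain before aggregation. A hedged sketch of that core step, assuming a discrete cosine transform and a low-frequency cutoff; FreqFed itself additionally clusters the low-frequency components to filter out poisoned updates.
```python
import numpy as np
from scipy.fft import dct, idct

def frequency_filter_update(update: np.ndarray, keep: int = 64) -> np.ndarray:
    """Map a flattened model update to the frequency domain, keep only the
    lowest `keep` components, and map back.

    The DCT choice and the cutoff are illustrative assumptions, not the
    paper's exact mechanism.
    """
    coeffs = dct(update, norm="ortho")
    coeffs[keep:] = 0.0                      # drop high-frequency components
    return idct(coeffs, norm="ortho")
```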
- RECESS Vaccine for Federated Learning: Proactive Defense Against Model Poisoning Attacks [20.55681622921858]
Model poisoning attacks greatly jeopardize the application of federated learning (FL).
In this work, we propose a novel proactive defense named RECESS against model poisoning attacks.
Unlike previous methods that score each iteration, RECESS considers clients' performance correlation across multiple iterations to estimate the trust score.
arXiv Detail & Related papers (2023-10-09T06:09:01Z)
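The RECESS entry above scores clients across iterations rather than per iteration. One simple way to realize that is an exponential moving average over per-round scores; both the averaging scheme and the per-round statistic are placeholders, not the paper's method.
```python
def update_trust(trust: float, round_score: float, decay: float = 0.9) -> float:
    """Accumulate a client's trust across rounds instead of judging one round.

    round_score: per-round consistency score in [0, 1] (placeholder for the
    per-iteration statistic the defense computes).
    decay: how strongly history outweighs the newest round (assumed value).
    """
    return decay * trust + (1.0 - decay) * round_score
```
A client that looks benign in any single round but is inconsistent across rounds would accumulate a lower trust score than a consistently benign one.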
- FLEDGE: Ledger-based Federated Learning Resilient to Inference and Backdoor Attacks [8.866045560761528]
Federated learning (FL) is a distributed learning process that allows multiple parties (or clients) to collaboratively train a machine learning model without having them share their private data.
Recent research has demonstrated the effectiveness of inference and poisoning attacks on FL.
We present a ledger-based FL framework known as FLEDGE that holds parties accountable for their behavior and achieves reasonable efficiency in mitigating inference and poisoning attacks.
arXiv Detail & Related papers (2023-10-03T14:55:30Z)
- FedVal: Different good or different bad in federated learning [9.558549875692808]
Federated learning (FL) systems are susceptible to attacks from malicious actors.
FL poses new challenges in addressing group bias, such as ensuring fair performance for different demographic groups.
Traditional methods used to address such biases require centralized access to the data, which FL systems do not have.
We present FedVal, a novel approach to both robustness and fairness that does not require any additional information from clients.
arXiv Detail & Related papers (2023-06-06T22:11:13Z)
- Debiasing Recommendation by Learning Identifiable Latent Confounders [49.16119112336605]
Confounding bias arises due to the presence of unmeasured variables that can affect both a user's exposure and feedback.
Existing methods either (1) make untenable assumptions about these unmeasured variables or (2) directly infer latent confounders from users' exposure.
We propose a novel method, i.e., identifiable deconfounder (iDCF), which leverages a set of proxy variables to resolve the aforementioned non-identification issue.
arXiv Detail & Related papers (2023-02-10T05:10:26Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering based defense and show that our method can achieve robustness improvement with guarantees.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- Rethinking Missing Data: Aleatoric Uncertainty-Aware Recommendation [59.500347564280204]
We propose a new Aleatoric Uncertainty-aware Recommendation (AUR) framework.
AUR consists of a new uncertainty estimator along with a normal recommender model.
As the chance of mislabeling reflects the potential of a user-item pair, AUR makes recommendations according to the estimated uncertainty.
arXiv Detail & Related papers (2022-09-22T04:32:51Z)
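The AUR entry above pairs a normal recommender with an uncertainty estimator. A minimal sketch of one plausible two-head design; the shared embeddings and the sigmoid uncertainty head are assumptions, not the paper's architecture.
```python
import torch
import torch.nn as nn

class AURSketch(nn.Module):
    """A recommender score head plus an aleatoric-uncertainty head.

    Hypothetical structure: both heads share user/item embeddings; the
    uncertainty head estimates the chance that an unobserved pair is
    mislabeled as negative, which is then used when ranking candidates.
    """

    def __init__(self, num_users: int, num_items: int, dim: int = 32):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)
        self.item_emb = nn.Embedding(num_items, dim)
        self.uncertainty_head = nn.Sequential(nn.Linear(2 * dim, 1), nn.Sigmoid())

    def forward(self, users: torch.Tensor, items: torch.Tensor):
        u, v = self.user_emb(users), self.item_emb(items)
        score = (u * v).sum(dim=-1)                        # preference score
        uncertainty = self.uncertainty_head(torch.cat([u, v], dim=-1)).squeeze(-1)
        return score, uncertainty
```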
- PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion [58.870444954499014]
A common practice is to subsume recommender systems under the decentralized federated learning paradigm.
We present a systematic approach to backdooring federated recommender systems for targeted item promotion.
arXiv Detail & Related papers (2021-10-21T06:48:35Z)
- Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning [51.15273664903583]
Data heterogeneity has been identified as one of the key features of federated learning, but it is often overlooked through the lens of robustness to adversarial attacks.
This paper focuses on characterizing and understanding its impact on backdoor attacks in federated learning through comprehensive experiments using synthetic data and the LEAF benchmarks.
arXiv Detail & Related papers (2021-02-01T06:06:21Z)