Membership Inference Attacks Against Recommender Systems
- URL: http://arxiv.org/abs/2109.08045v1
- Date: Thu, 16 Sep 2021 15:19:19 GMT
- Title: Membership Inference Attacks Against Recommender Systems
- Authors: Minxing Zhang, Zhaochun Ren, Zihan Wang, Pengjie Ren, Zhumin Chen,
Pengfei Hu, Yang Zhang
- Abstract summary: We make the first attempt to quantify the privacy leakage of recommender systems through the lens of membership inference.
Our attack operates at the user level rather than the data-sample level.
A shadow recommender is established to derive labeled training data for the attack model.
- Score: 33.66394989281801
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, recommender systems have achieved promising performance and become
one of the most widely used web applications. However, recommender systems are
often trained on highly sensitive user data, so potential data leakage from
recommender systems may lead to severe privacy problems.
In this paper, we make the first attempt to quantify the privacy leakage of
recommender systems through the lens of membership inference. In contrast with
traditional membership inference against machine learning classifiers, our
setting differs in two main ways. First, our attack operates at the user level
rather than the data-sample level. Second, the adversary observes only the
ordered list of recommended items from a recommender system, rather than
prediction results in the form of posterior probabilities. To address these
challenges, we propose a novel method that represents users by their relevant
items. Moreover, a shadow recommender is established to derive labeled training
data for the attack model. Extensive experimental results show that our attack
framework achieves strong performance. In addition, we design a defense
mechanism that effectively mitigates the membership inference threat to
recommender systems.
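Read together, the abstract describes a three-step pipeline: represent each user as a vector built from their relevant items, use a shadow recommender (whose training membership is known) to derive labeled examples, and fit a binary attack classifier on those examples. Below is a minimal sketch of that flow, assuming a centroid-difference user feature; the random stand-in data, the embedding table, and the MLPClassifier are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

n_items, dim = 1000, 32
item_emb = rng.normal(size=(n_items, dim))  # assumed pretrained item embeddings

def user_vector(item_ids):
    # Represent a user as the centroid of their items' embeddings.
    return item_emb[item_ids].mean(axis=0)

def attack_feature(interacted, recommended):
    # Feature: difference between the centroid of the items a user interacted
    # with and the centroid of the items the recommender returned for them.
    return user_vector(interacted) - user_vector(recommended)

# Shadow phase: membership of each shadow user in the shadow recommender's
# training set is known, so labeled (feature, member) pairs can be derived.
def shadow_dataset(n_users=200, k=10):
    X, y = [], []
    for _ in range(n_users):
        interacted = rng.choice(n_items, size=20, replace=False)
        recommended = rng.choice(n_items, size=k, replace=False)  # stand-in for shadow output
        member = int(rng.integers(0, 2))  # ground truth known in the shadow setting
        X.append(attack_feature(interacted, recommended))
        y.append(member)
    return np.array(X), np.array(y)

X, y = shadow_dataset()
attack_model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)

# Attack phase: observe a target user's ordered recommendations from the real
# recommender and predict membership (1 = inferred member).
target = attack_feature(rng.choice(n_items, size=20, replace=False),
                        rng.choice(n_items, size=10, replace=False))
print(attack_model.predict(target.reshape(1, -1)))
```

The key point the sketch captures is that the adversary never sees posterior probabilities: the feature is built purely from item identities in the observed recommendation list.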
Related papers
- Improving the Shortest Plank: Vulnerability-Aware Adversarial Training for Robust Recommender System [60.719158008403376]
Vulnerability-aware Adversarial Training (VAT) is designed to defend against poisoning attacks in recommender systems.
VAT employs a novel vulnerability-aware function to estimate users' vulnerability based on the degree to which the system fits them.
arXiv Detail & Related papers (2024-09-26T02:24:03Z)
- Shadow-Free Membership Inference Attacks: Recommender Systems Are More Vulnerable Than You Thought [43.490918008927]
We propose shadow-free MIAs that directly leverage a user's recommendations for membership inference.
Our attack achieves far better accuracy and lower false positive rates than the baselines.
arXiv Detail & Related papers (2024-05-11T13:52:22Z)
- Membership Inference Attacks Against Latent Factor Model [0.0]
We use the latent factor model as the recommender to get the list of recommended items.
A shadow recommender is established to derive the labeled training data for the attack model.
Experimental results show that the AUC of our attack model reaches 0.857 on the real-world MovieLens dataset.
arXiv Detail & Related papers (2022-12-15T08:16:08Z)
- Debiasing Learning for Membership Inference Attacks Against Recommender Systems [79.48353547307887]
Learned recommender systems may inadvertently leak information about their training data, leading to privacy violations.
We investigate privacy threats faced by recommender systems through the lens of membership inference.
We propose the Debiasing Learning for Membership Inference Attacks against recommender systems (DL-MIA) framework, which has four main components.
arXiv Detail & Related papers (2022-06-24T17:57:34Z)
- Poisoning Deep Learning based Recommender Model in Federated Learning Scenarios [7.409990425668484]
We design attack approaches targeting deep learning based recommender models in federated learning scenarios.
Our attacks can effectively poison the target models, and their effectiveness sets a new state of the art.
arXiv Detail & Related papers (2022-04-26T15:23:05Z)
- PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion [58.870444954499014]
A common practice is to subsume recommender systems under the decentralized federated learning paradigm.
We present a systematic approach to backdooring federated recommender systems for targeted item promotion.
arXiv Detail & Related papers (2021-10-21T06:48:35Z)
- Data Poisoning Attacks to Deep Learning Based Recommender Systems [26.743631067729677]
We conduct the first systematic study of data poisoning attacks against deep learning based recommender systems.
An attacker's goal is to manipulate a recommender system such that the attacker-chosen target items are recommended to many users.
To achieve this goal, our attack injects fake users with carefully crafted ratings to a recommender system.
arXiv Detail & Related papers (2021-01-07T17:32:56Z)
- Knowledge Transfer via Pre-training for Recommendation: A Review and Prospect [89.91745908462417]
We demonstrate the benefits of pre-training for recommender systems through experiments.
We discuss several promising directions for future research for recommender systems with pre-training.
arXiv Detail & Related papers (2020-09-19T13:06:27Z)
- Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike standard membership adversaries, works under the severe restriction of having no access to the victim model's scores.
We show that a victim model that publishes only labels is still susceptible to sampling attacks, and the adversary can recover up to 100% of its performance (see the sketch after this entry).
For defense, we choose differential privacy in the form of gradient perturbation during training of the victim model, as well as output perturbation at prediction time.
arXiv Detail & Related papers (2020-09-01T12:54:54Z)
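As a companion to the last entry, here is a minimal sketch of the label-only sampling idea: query the victim repeatedly with perturbed copies of an input and use label stability as a membership signal. The victim model, noise scale, query count, and decision threshold are all assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Stand-in victim classifier that exposes only hard labels, not scores.
X_train = rng.normal(size=(100, 5))
y_train = (X_train.sum(axis=1) > 0).astype(int)
victim = LogisticRegression().fit(X_train, y_train)

def label_stability(x, n_queries=50, noise=0.5):
    # Fraction of noisy queries whose predicted label matches the clean label;
    # high stability serves as a proxy membership signal for overfit models.
    base = victim.predict(x.reshape(1, -1))[0]
    noisy = x + rng.normal(scale=noise, size=(n_queries, x.size))
    return float((victim.predict(noisy) == base).mean())

# The 0.9 threshold is illustrative, not taken from the paper.
score = label_stability(X_train[0])
print("member" if score > 0.9 else "non-member", score)
```

Repeated queries amplify the weak per-query signal, which is why the defenses listed above perturb either the training gradients or the published outputs.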