Membership Inference Attacks Against Latent Factor Model
- URL: http://arxiv.org/abs/2301.03596v1
- Date: Thu, 15 Dec 2022 08:16:08 GMT
- Title: Membership Inference Attacks Against Latent Factor Model
- Authors: Dazhi Hu
- Abstract summary: We use the latent factor model as the recommender to get the list of recommended items.
A shadow recommender is established to derive the labeled training data for the attack model.
The experimental data show that the AUC index of our attack model can reach 0.857 on the real dataset MovieLens.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The advent of the information age has led to the problems of
information overload and unclear demands. As information filtering systems,
personalized recommendation systems predict users' behavior and preferences
for items and improve users' information acquisition efficiency. However,
recommendation systems usually use highly sensitive user data for training. In
this paper, we use a latent factor model as the recommender to obtain the list
of recommended items, and, in contrast with traditional membership inference
against machine learning classifiers, we represent users through the items
relevant to them. We construct a multilayer perceptron with two hidden layers
as the attack model to carry out the membership inference. Moreover, a shadow
recommender is established to derive the labeled training data for the attack
model. The attack model is trained on the dataset generated by the shadow
recommender and tested on the dataset generated by the target recommender. The
experiments show that the AUC of our attack model reaches 0.857 on the
real-world MovieLens dataset, which indicates that the attack model performs
well.
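To make the pipeline above concrete, here is a minimal, hedged sketch of a shadow-model membership inference attack on a latent-factor recommender. Everything specific in it is an assumption rather than the paper's exact setup: synthetic item factors stand in for a model trained on MovieLens, a user is represented by the difference between the centroids of their interacted and recommended items in latent space, the hidden-layer sizes (64, 32) are illustrative, and scikit-learn's MLPClassifier stands in for whatever implementation the authors used.

```python
# Hedged sketch: shadow-model membership inference against a latent-factor recommender.
# All concrete choices here are assumptions, not the paper's exact setup.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
N_ITEMS, DIM, K = 2000, 32, 50  # catalogue size, latent dimension, recommendation-list length

def user_feature(item_factors, interacted, recommended):
    """Represent a user by the difference of interacted and recommended item centroids."""
    return item_factors[interacted].mean(axis=0) - item_factors[recommended].mean(axis=0)

def build_features(item_factors, n_users, member):
    """Toy generator: members receive lists drawn from their own history, non-members do not."""
    feats = []
    for _ in range(n_users):
        interacted = rng.choice(N_ITEMS, size=20, replace=False)
        if member:   # the recommender saw this user, so its list overlaps with their history
            recommended = rng.choice(interacted, size=K, replace=True)
        else:        # an unseen user gets an unrelated list drawn from the whole catalogue
            recommended = rng.choice(N_ITEMS, size=K, replace=False)
        feats.append(user_feature(item_factors, interacted, recommended))
    return np.array(feats)

# Shadow recommender: the attacker controls it, so membership labels are known.
shadow_factors = rng.normal(size=(N_ITEMS, DIM))
X_shadow = np.vstack([build_features(shadow_factors, 500, True),
                      build_features(shadow_factors, 500, False)])
y_shadow = np.array([1] * 500 + [0] * 500)

# Attack model: a multilayer perceptron with two hidden layers, as described in the abstract
# (the layer sizes 64 and 32 are illustrative guesses).
attack = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
attack.fit(X_shadow, y_shadow)

# Target recommender: different item factors; its labels are used only to score the attack.
target_factors = rng.normal(size=(N_ITEMS, DIM))
X_target = np.vstack([build_features(target_factors, 200, True),
                      build_features(target_factors, 200, False)])
y_target = np.array([1] * 200 + [0] * 200)
print("attack AUC on target recommender:",
      roc_auc_score(y_target, attack.predict_proba(X_target)[:, 1]))
```

The structural point is that the attack MLP only ever sees features and labels produced by the shadow recommender; the target recommender's membership labels appear solely in the AUC computation, mirroring the train-on-shadow, test-on-target protocol described in the abstract.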
Related papers
- Alpaca against Vicuna: Using LLMs to Uncover Memorization of LLMs [61.04246774006429]
We introduce a black-box prompt optimization method that uses an attacker LLM agent to uncover higher levels of memorization in a victim agent.
We observe that our instruction-based prompts generate outputs with 23.7% higher overlap with training data compared to the baseline prefix-suffix measurements.
Our findings show that instruction-tuned models can expose pre-training data as much as their base-models, if not more so, and using instructions proposed by other LLMs can open a new avenue of automated attacks.
arXiv Detail & Related papers (2024-03-05T19:32:01Z) - Model Stealing Attack against Recommender System [85.1927483219819]
Some adversarial attacks have achieved model stealing against recommender systems.
In this paper, we constrain the volume of available target data and queries and utilize auxiliary data, which shares the item set with the target data, to promote model stealing attacks.
arXiv Detail & Related papers (2023-12-18T05:28:02Z) - Making Users Indistinguishable: Attribute-wise Unlearning in Recommender
Systems [28.566330708233824]
We find that attackers can extract private information, i.e., gender, race, and age, from a trained model even if it has not been explicitly encountered during training.
To protect the sensitive attribute of users, Attribute Unlearning (AU) aims to degrade attacking performance and make target attributes indistinguishable.
arXiv Detail & Related papers (2023-10-06T09:36:44Z) - Debiasing Learning for Membership Inference Attacks Against Recommender
Systems [79.48353547307887]
Learned recommender systems may inadvertently leak information about their training data, leading to privacy violations.
We investigate privacy threats faced by recommender systems through the lens of membership inference.
We propose a Debiasing Learning for Membership Inference Attacks against recommender systems (DL-MIA) framework that has four main components.
arXiv Detail & Related papers (2022-06-24T17:57:34Z) - WSLRec: Weakly Supervised Learning for Neural Sequential Recommendation
Models [24.455665093145818]
We propose a novel model-agnostic training approach called WSLRec, which adopts a three-stage framework: pre-training, top-$k$ mining, and fine-tuning.
WSLRec resolves the incompleteness problem by pre-training models on extra weak supervisions from model-free methods like BR and ItemCF, while resolving the inaccuracy problem by leveraging the top-$k$ mining to screen out reliable user-item relevance from weak supervisions for fine-tuning.
arXiv Detail & Related papers (2022-02-28T08:55:12Z) - Membership Inference Attacks Against Recommender Systems [33.66394989281801]
We make the first attempt at quantifying the privacy leakage of recommender systems through the lens of membership inference.
Our attack operates at the user level rather than the data sample level.
A shadow recommender is established to derive the labeled training data for training the attack model.
arXiv Detail & Related papers (2021-09-16T15:19:19Z) - Sampling Attacks: Amplification of Membership Inference Attacks by
Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike other standard membership adversaries, works under the severe restriction of having no access to the scores of the victim model; a minimal sketch of this label-only idea appears after this list.
We show that a victim model that only publishes the labels is still susceptible to sampling attacks and the adversary can recover up to 100% of its performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model as well as output perturbation at prediction time.
arXiv Detail & Related papers (2020-09-01T12:54:54Z) - Self-Supervised Reinforcement Learning for Recommender Systems [77.38665506495553]
We propose self-supervised reinforcement learning for sequential recommendation tasks.
Our approach augments standard recommendation models with two output layers: one for self-supervised learning and the other for RL.
Based on such an approach, we propose two frameworks, namely Self-Supervised Q-learning (SQN) and Self-Supervised Actor-Critic (SAC).
arXiv Detail & Related papers (2020-06-10T11:18:57Z) - MetaSelector: Meta-Learning for Recommendation with User-Level Adaptive
Model Selection [110.87712780017819]
We propose a meta-learning framework to facilitate user-level adaptive model selection in recommender systems.
We conduct experiments on two public datasets and a real-world production dataset.
arXiv Detail & Related papers (2020-01-22T16:05:01Z)
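For the Sampling Attacks entry above, here is a minimal sketch of the label-only strategy under illustrative assumptions (a random-forest victim, Gaussian perturbations, synthetic data); it is not the exact procedure or defenses evaluated in that paper. The idea is to query the victim with many perturbed copies of a record and use the stability of the predicted label as the membership score.

```python
# Hedged sketch of a label-only "sampling attack": query the victim with perturbed copies
# of a record and use label stability as the membership score. The victim model,
# perturbation scale, and data are illustrative stand-ins, not the paper's setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X_member = rng.normal(size=(200, 10))
y_member = (X_member[:, 0] > 0).astype(int)   # toy labeling rule
X_non_member = rng.normal(size=(200, 10))     # records the victim never trained on

victim = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_member, y_member)

def stability_score(x, n_queries=50, sigma=0.3):
    """Fraction of perturbed queries whose predicted label matches the unperturbed one."""
    base_label = victim.predict(x[None, :])[0]
    noisy = x + rng.normal(scale=sigma, size=(n_queries, x.size))
    return float((victim.predict(noisy) == base_label).mean())

member_scores = [stability_score(x) for x in X_member[:50]]
non_member_scores = [stability_score(x) for x in X_non_member[:50]]
print("mean label stability  members:", np.mean(member_scores),
      " non-members:", np.mean(non_member_scores))
```

How large the member/non-member gap comes out depends on how strongly the victim overfits; on a toy rule like the one above it may be small, so the snippet illustrates the querying procedure rather than reproducing the paper's results.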