Debiasing Learning for Membership Inference Attacks Against Recommender
Systems
- URL: http://arxiv.org/abs/2206.12401v2
- Date: Tue, 28 Jun 2022 15:46:57 GMT
- Title: Debiasing Learning for Membership Inference Attacks Against Recommender
Systems
- Authors: Zihan Wang, Na Huang, Fei Sun, Pengjie Ren, Zhumin Chen, Hengliang
Luo, Maarten de Rijke, Zhaochun Ren
- Abstract summary: Learned recommender systems may inadvertently leak information about their training data, leading to privacy violations.
We investigate privacy threats faced by recommender systems through the lens of membership inference.
We propose a Debiasing Learning for Membership Inference Attacks against recommender systems (DL-MIA) framework that has four main components.
- Score: 79.48353547307887
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learned recommender systems may inadvertently leak information about their
training data, leading to privacy violations. We investigate privacy threats
faced by recommender systems through the lens of membership inference. In such
attacks, an adversary aims to infer whether a user's data is used to train the
target recommender. To achieve this, previous work has used a shadow
recommender to derive training data for the attack model, and then predicted
membership by calculating difference vectors between users' historical
interactions and recommended items. State-of-the-art methods face two
challenging problems: (1) training data for the attack model is biased due to
the gap between shadow and target recommenders, and (2) hidden states in
recommenders are not directly observable, resulting in inaccurate estimates of
difference vectors. To address the above limitations, we propose a Debiasing
Learning for Membership Inference Attacks against recommender systems (DL-MIA)
framework that has four main components: (1) a difference vector generator, (2)
a disentangled encoder, (3) a weight estimator, and (4) an attack model. To
mitigate the gap between recommenders, a variational auto-encoder (VAE) based
disentangled encoder is devised to identify recommender invariant and specific
features. To reduce the estimation bias, we design a weight estimator,
assigning a truth-level score for each difference vector to indicate estimation
accuracy. We evaluate DL-MIA against both general recommenders and sequential
recommenders on three real-world datasets. Experimental results show that
DL-MIA effectively alleviates training and estimation biases simultaneously,
and achieves state-of-the-art attack performance.
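The pipeline the abstract describes (shadow recommender → difference vectors → attack model) can be sketched in miniature. Everything below is an illustrative assumption, not DL-MIA's actual implementation: the toy item embeddings, the synthetic member/non-member split, and the norm-threshold attack model all stand in for the real components.

```python
import numpy as np

rng = np.random.default_rng(0)

def difference_vector(history, recommendations):
    """Mean embedding of a user's historical items minus the mean
    embedding of the items recommended to that user, as in the
    difference-vector attacks the abstract describes."""
    return history.mean(axis=0) - recommendations.mean(axis=0)

def make_user(is_member, dim=8):
    """Toy user: a member's recommendations stay close to their own
    history (the recommender has memorized them); a non-member's drift
    away. The noise scales are arbitrary illustrative choices."""
    history = rng.normal(size=(10, dim))
    noise = 0.1 if is_member else 1.5
    recs = history[:5] + rng.normal(scale=noise, size=(5, dim))
    return difference_vector(history, recs), int(is_member)

# "Shadow" data stands in for the labeled training set that a shadow
# recommender would provide for the attack model.
X, y = zip(*[make_user(i % 2 == 0) for i in range(200)])
X, y = np.stack(X), np.array(y)

# Minimal attack model: threshold on the difference-vector norm.
norms = np.linalg.norm(X, axis=1)
threshold = norms.mean()
preds = (norms < threshold).astype(int)  # small difference => member
accuracy = (preds == y).mean()
```

On this toy data the attack separates members from non-members because a recommender tends to echo a member's own training interactions, so the member's difference vector stays small.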
Related papers
- Debiased Recommendation with Noisy Feedback [41.38490962524047]
We study intersectional threats to unbiased learning of the prediction model posed by data missing not at random (MNAR) and outcome measurement errors (OME) in the collected data.
First, we design OME-EIB, OME-IPS, and OME-DR estimators, which largely extend the existing estimators to combat OME in real-world recommendation scenarios.
arXiv Detail & Related papers (2024-06-24T23:42:18Z) - Model Stealing Attack against Recommender System [85.1927483219819]
Prior adversarial attacks have achieved model stealing against recommender systems.
In this paper, we constrain the volume of available target data and queries and utilize auxiliary data, which shares the item set with the target data, to promote model stealing attacks.
arXiv Detail & Related papers (2023-12-18T05:28:02Z) - Defense Against Model Extraction Attacks on Recommender Systems [53.127820987326295]
We introduce Gradient-based Ranking Optimization (GRO) to defend against model extraction attacks on recommender systems.
GRO aims to minimize the loss of the protected target model while maximizing the loss of the attacker's surrogate model.
Results show GRO's superior effectiveness in defending against model extraction attacks.
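The min-max objective behind GRO can be illustrated with a toy scalar model. This is a hedged sketch under stated assumptions, not the paper's method: the linear models, the `alpha` trade-off weight, and the function names are all invented for illustration.

```python
import numpy as np

def target_loss(w_t, x, y):
    """Task loss of the protected target model (toy linear model)."""
    return np.mean((w_t * x - y) ** 2)

def surrogate_loss(w_s, w_t, x):
    """The attacker's surrogate is trained to match the target's outputs."""
    return np.mean((w_s * x - w_t * x) ** 2)

def defended_objective(w_t, w_s, x, y, alpha=0.5):
    """Minimize the target's own loss while maximizing the surrogate's
    loss (hence the minus sign); alpha trades utility against defense."""
    return target_loss(w_t, x, y) - alpha * surrogate_loss(w_s, w_t, x)

x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x
# A perfectly fit target (w_t = 2) that the surrogate has copied exactly
# gains nothing from the defense term:
print(defended_objective(2.0, 2.0, x, y))  # 0.0
# With a large enough alpha, slightly perturbing the target is rewarded,
# because the surrogate mismatch outweighs the small utility loss:
print(defended_objective(2.1, 2.0, x, y, alpha=2.0))  # negative
```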
arXiv Detail & Related papers (2023-10-25T03:30:42Z) - Making Users Indistinguishable: Attribute-wise Unlearning in Recommender
Systems [28.566330708233824]
We find that attackers can extract private information, such as gender, race, and age, from a trained model even if such attributes were not explicitly encountered during training.
To protect the sensitive attribute of users, Attribute Unlearning (AU) aims to degrade attacking performance and make target attributes indistinguishable.
arXiv Detail & Related papers (2023-10-06T09:36:44Z) - Membership Inference Attacks Against Latent Factor Model [0.0]
We use the latent factor model as the recommender to get the list of recommended items.
A shadow recommender is established to derive the labeled training data for the attack model.
The experimental data show that the AUC index of our attack model can reach 0.857 on the real dataset MovieLens.
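The AUC figure quoted above is the standard ranking metric for the attack classifier. As a minimal sketch (toy scores and a hypothetical `auc` helper, not the paper's evaluation code), it equals the probability that a random member's attack score outranks a random non-member's, with ties counting half:

```python
def auc(scores_members, scores_nonmembers):
    """Probability that a member outranks a non-member (ties count 0.5),
    i.e. the AUC of the attack classifier over all member/non-member pairs."""
    wins = 0.0
    for m in scores_members:
        for n in scores_nonmembers:
            wins += 1.0 if m > n else 0.5 if m == n else 0.0
    return wins / (len(scores_members) * len(scores_nonmembers))

# Toy attack scores, not values from the paper.
members = [0.9, 0.8, 0.7, 0.4]
nonmembers = [0.6, 0.3, 0.2, 0.1]
print(auc(members, nonmembers))  # 0.9375
```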
arXiv Detail & Related papers (2022-12-15T08:16:08Z) - Rethinking Missing Data: Aleatoric Uncertainty-Aware Recommendation [59.500347564280204]
We propose a new Aleatoric Uncertainty-aware Recommendation (AUR) framework.
AUR consists of a new uncertainty estimator along with a normal recommender model.
Since the chance of mislabeling reflects the potential of a user-item pair, AUR makes recommendations according to the estimated uncertainty.
arXiv Detail & Related papers (2022-09-22T04:32:51Z) - Membership Inference Attacks Against Recommender Systems [33.66394989281801]
We make the first attempt on quantifying the privacy leakage of recommender systems through the lens of membership inference.
Our attack operates at the user level rather than at the level of individual data samples.
A shadow recommender is established to derive the labeled training data for training the attack model.
arXiv Detail & Related papers (2021-09-16T15:19:19Z) - Sampling Attacks: Amplification of Membership Inference Attacks by
Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike other standard membership adversaries, works under the severe restriction of having no access to the victim model's scores.
We show that a victim model that only publishes the labels is still susceptible to sampling attacks and the adversary can recover up to 100% of its performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model as well as output perturbation at prediction time.
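The label-only idea can be sketched with a toy one-dimensional threshold classifier standing in for the victim. All names, the victim model, and the perturbation scale below are illustrative assumptions; the intuition (training points tend to sit further from the decision boundary, so their labels are more stable under repeated perturbed queries) is the part the summary describes.

```python
import numpy as np

rng = np.random.default_rng(42)

def victim_label(x):
    """Toy victim that publishes only hard labels, never scores."""
    return int(x > 0.0)

def sampling_attack_score(x, n_queries=200, scale=0.5):
    """Fraction of perturbed queries whose label agrees with the clean
    label; higher stability is taken as evidence of membership."""
    clean = victim_label(x)
    hits = sum(victim_label(x + rng.normal(scale=scale))
               for _ in range(n_queries))
    agree = hits if clean == 1 else n_queries - hits
    return agree / n_queries

far_point = sampling_attack_score(2.0)    # deep inside a class
near_point = sampling_attack_score(0.05)  # near the decision boundary
```

On this toy victim the point far from the boundary gets a near-perfect stability score, while the boundary point's label flips on roughly half of the perturbed queries.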
arXiv Detail & Related papers (2020-09-01T12:54:54Z) - S^3-Rec: Self-Supervised Learning for Sequential Recommendation with
Mutual Information Maximization [104.87483578308526]
We propose the model S3-Rec, which stands for Self-Supervised learning for Sequential Recommendation.
For our task, we devise four auxiliary self-supervised objectives to learn the correlations among attribute, item, subsequence, and sequence.
Extensive experiments conducted on six real-world datasets demonstrate the superiority of our proposed method over existing state-of-the-art methods.
arXiv Detail & Related papers (2020-08-18T11:44:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.