Shilling Black-box Recommender Systems by Learning to Generate Fake User
Profiles
- URL: http://arxiv.org/abs/2206.11433v1
- Date: Thu, 23 Jun 2022 00:40:19 GMT
- Title: Shilling Black-box Recommender Systems by Learning to Generate Fake User
Profiles
- Authors: Chen Lin, Si Chen, Meifang Zeng, Sheng Zhang, Min Gao, Hui Li
- Abstract summary: We present Leg-UP, a novel attack model based on the Generative Adversarial Network.
Leg-UP learns user behavior patterns from real users in the sampled ``templates'' and constructs fake user profiles.
Experiments on benchmarks have shown that Leg-UP exceeds state-of-the-art Shilling Attack methods on a wide range of victim RS models.
- Score: 14.437087775166876
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the pivotal role of Recommender Systems (RS) in guiding customers
towards the purchase, there is a natural motivation for unscrupulous parties to
spoof RS for profits. In this paper, we study Shilling Attack where an
adversarial party injects a number of fake user profiles for improper purposes.
Conventional Shilling Attack approaches lack attack transferability (i.e.,
attacks are not effective on some victim RS models) and/or attack invisibility
(i.e., injected profiles can be easily detected). To overcome these issues, we
present Leg-UP, a novel attack model based on the Generative Adversarial
Network. Leg-UP learns user behavior patterns from real users in the sampled
``templates'' and constructs fake user profiles. To simulate real users, the
generator in Leg-UP directly outputs discrete ratings. To enhance attack
transferability, the parameters of the generator are optimized by maximizing
the attack performance on a surrogate RS model. To improve attack invisibility,
Leg-UP adopts a discriminator to guide the generator to generate undetectable
fake user profiles. Experiments on benchmarks have shown that Leg-UP exceeds
state-of-the-art Shilling Attack methods on a wide range of victim RS models.
The source code of our work is available at:
https://github.com/XMUDM/ShillingAttack.
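The training scheme described above (a generator that emits discrete rating profiles from sampled templates, an attack loss computed on a surrogate RS model, and a discriminator that enforces invisibility) can be illustrated with a minimal PyTorch sketch. All layer sizes, losses, and the stand-in surrogate below are illustrative assumptions, not the authors' implementation; see the linked repository for the real code.

    import torch
    import torch.nn as nn

    N_ITEMS, N_TEMPLATES, TARGET_ITEM = 100, 32, 7

    class Generator(nn.Module):
        def __init__(self, n_items):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(n_items, 128), nn.ReLU(),
                                     nn.Linear(128, n_items), nn.Sigmoid())
        def forward(self, templates):
            soft = 5.0 * self.net(templates)   # ratings in (0, 5)
            hard = soft.round()                # discrete ratings, as in the abstract
            # straight-through trick: discrete forward value, gradient via soft
            return hard + (soft - soft.detach())

    class Discriminator(nn.Module):
        def __init__(self, n_items):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(n_items, 128), nn.ReLU(),
                                     nn.Linear(128, 1))
        def forward(self, profiles):
            return self.net(profiles)

    gen, disc = Generator(N_ITEMS), Discriminator(N_ITEMS)
    surrogate = nn.Linear(N_ITEMS, N_ITEMS)     # hypothetical surrogate RS model
    opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()
    real = torch.randint(0, 6, (N_TEMPLATES, N_ITEMS)).float()  # sampled templates

    for _ in range(10):
        # discriminator: separate real templates from generated profiles
        fake = gen(real)
        d_loss = (bce(disc(real), torch.ones(N_TEMPLATES, 1)) +
                  bce(disc(fake.detach()), torch.zeros(N_TEMPLATES, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # generator: raise the target item's surrogate score (transferability)
        # while fooling the discriminator (invisibility)
        fake = gen(real)
        attack_loss = -surrogate(fake)[:, TARGET_ITEM].mean()
        g_loss = attack_loss + bce(disc(fake), torch.ones(N_TEMPLATES, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()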
Related papers
- Poisoned Forgery Face: Towards Backdoor Attacks on Face Forgery
Detection [62.595450266262645]
This paper introduces a novel and previously unrecognized threat to face forgery detection: the backdoor attack.
By embedding backdoors into models, attackers can deceive detectors into producing erroneous predictions for forged faces.
We propose the Poisoned Forgery Face framework, which enables clean-label backdoor attacks on face forgery detectors.
arXiv Detail & Related papers (2024-02-18T06:31:05Z) - Review-Incorporated Model-Agnostic Profile Injection Attacks on
Recommender Systems [24.60223863559958]
We propose a novel attack framework named R-Trojan, which formulates the attack objectives as an optimization problem and adopts a tailored transformer-based generative adversarial network (GAN) to solve it.
Experiments on real-world datasets demonstrate that R-Trojan greatly outperforms state-of-the-art attack methods on various victim RSs under black-box settings.
arXiv Detail & Related papers (2024-02-14T08:56:41Z) - DTA: Distribution Transform-based Attack for Query-Limited Scenario [11.874670564015789]
In generating adversarial examples, conventional black-box attack methods rely on abundant feedback from the target models.
This paper proposes a hard-label attack for the practical scenario in which the attacker is permitted only a limited number of queries.
Experiments validate the effectiveness of the proposed idea and the superiority of DTA over the state-of-the-art.
arXiv Detail & Related papers (2023-12-12T13:21:03Z) - Black-Box Training Data Identification in GANs via Detector Networks [2.4554686192257424]
We study whether, given access to a trained GAN as well as fresh samples from the underlying distribution, an attacker can efficiently identify if a given point is a member of the GAN's training data.
This is of interest both for copyright, where a user may want to determine if their copyrighted data has been used to train a GAN, and for data privacy, where the ability to detect training-set membership is known as a membership inference attack.
We introduce a suite of membership inference attacks against GANs in the black-box setting and evaluate our attacks; a sketch of the detector idea follows.
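A minimal sketch of that detector idea, with toy Gaussian data and an assumed scikit-learn classifier standing in for the paper's detector networks:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    gan_samples = rng.normal(0.0, 1.0, (1000, 16))  # draws from the trained GAN
    fresh_real = rng.normal(0.1, 1.0, (1000, 16))   # fresh samples from the distribution

    # detector: learn to tell GAN outputs from fresh real data
    X = np.vstack([gan_samples, fresh_real])
    y = np.concatenate([np.ones(1000), np.zeros(1000)])  # 1 = GAN-generated
    detector = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300).fit(X, y)

    def membership_score(point):
        # higher = more "GAN-like"; if the GAN overfits, training members
        # tend to score higher than non-members
        return detector.predict_proba(point.reshape(1, -1))[0, 1]

    print("membership score:", membership_score(rng.normal(0.0, 1.0, 16)))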
arXiv Detail & Related papers (2023-10-18T15:53:20Z) - PRAT: PRofiling Adversarial aTtacks [52.693011665938734]
We introduce the novel problem of PRofiling Adversarial aTtacks (PRAT).
Given an adversarial example, the objective of PRAT is to identify the attack used to generate it.
We construct an Adversarial Identification Dataset (AID) and use it to devise a novel framework for the PRAT objective.
arXiv Detail & Related papers (2023-09-20T07:42:51Z) - Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks, an emerging yet threatening training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z) - PORE: Provably Robust Recommender Systems against Data Poisoning Attacks [58.26750515059222]
We propose PORE, the first framework to build provably robust recommender systems.
PORE can transform any existing recommender system to be provably robust against untargeted data poisoning attacks.
We prove that PORE still recommends at least $r$ of the $N$ items to the user under any data poisoning attack, where $r$ is a function of the number of fake users in the attack.
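A minimal sketch of the bagging-style aggregation that underlies this kind of certified defense, with a hypothetical stand-in sub-recommender; PORE's exact construction and the derivation of $r$ are in the paper:

    import random
    from collections import Counter

    random.seed(0)
    users, N_SUBMODELS, SUBSET_SIZE, N = list(range(500)), 50, 100, 10

    def sub_recommender(user_subset):
        # hypothetical stand-in: a real submodel would be trained on the
        # ratings of user_subset and return its top items
        return random.Random(sum(user_subset)).sample(range(200), 20)

    votes = Counter()
    for _ in range(N_SUBMODELS):
        votes.update(sub_recommender(random.sample(users, SUBSET_SIZE)))

    # recommend the N most-voted items; a fake user can only land in a few
    # subsets, so large vote gaps are what certify that at least r of these
    # N items survive a bounded poisoning attack
    top_n = [item for item, _ in votes.most_common(N)]
    print(top_n)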
arXiv Detail & Related papers (2023-03-26T01:38:11Z) - Generalizable Black-Box Adversarial Attack with Meta Learning [54.196613395045595]
In a black-box adversarial attack, the target model's parameters are unknown, and the attacker aims to find a successful perturbation based on query feedback under a query budget.
We propose to utilize the feedback information across historical attacks, dubbed example-level adversarial transferability.
The proposed framework with the two types of adversarial transferability can be naturally combined with any off-the-shelf query-based attack methods to boost their performance.
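A toy sketch of example-level transferability under a query budget, assuming a linear stand-in model and random-search refinement (not the paper's meta-learning framework): the attack on a new input is warm-started from perturbations that succeeded on past inputs.

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=16)                 # toy black-box model

    def query(x):                           # one query: the model's score
        return float(w @ x)

    history = [rng.normal(scale=0.3, size=16) for _ in range(5)]  # past successful perturbations
    x, budget = rng.normal(size=16), 20

    # spend a few queries ranking the historical perturbations on the new input
    scored = [(query(x + d), d) for d in history]
    best_score, best = min(scored, key=lambda t: t[0])
    budget -= len(history)

    # refine the warm start with the remaining budget (lower score = closer to success)
    for _ in range(budget):
        cand = best + rng.normal(scale=0.05, size=16)
        s = query(x + cand)
        if s < best_score:
            best, best_score = cand, s
    print("final score:", best_score)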
arXiv Detail & Related papers (2023-01-01T07:24:12Z) - Ready for Emerging Threats to Recommender Systems? A Graph
Convolution-based Generative Shilling Attack [8.591490818966882]
Primitive attacks are highly feasible but less effective due to simplistic handcrafted rules.
Upgraded attacks are more powerful but costly and difficult to deploy because they require more knowledge about the victim recommender system.
In this paper, we explore a novel shilling attack called Graph cOnvolution-based generative shilling ATtack (GOAT) to balance the attacks' feasibility and effectiveness.
arXiv Detail & Related papers (2021-07-22T05:02:59Z) - Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative adversarial attacks can overcome this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z) - Attacking Recommender Systems with Augmented User Profiles [35.52681676059885]
We study the shilling attack: a persistent and profitable attack where an adversarial party injects a number of user profiles to promote or demote a target item.
We present a novel Augmented Shilling Attack framework (AUSH) and implement it with the idea of Generative Adversarial Network.
AUSH is capable of tailoring attacks against RS according to budget and complex attack goals, such as targeting a specific user group.
arXiv Detail & Related papers (2020-05-17T04:44:52Z)