Attacking Recommender Systems with Augmented User Profiles
- URL: http://arxiv.org/abs/2005.08164v2
- Date: Thu, 23 Jul 2020 14:22:49 GMT
- Title: Attacking Recommender Systems with Augmented User Profiles
- Authors: Chen Lin, Si Chen, Hui Li, Yanghua Xiao, Lianyun Li, Qian Yang
- Abstract summary: We study the shilling attack: a persistent and profitable attack where an adversarial party injects a number of fake user profiles to promote or demote a target item.
We present a novel Augmented Shilling Attack framework (AUSH) and implement it based on the idea of Generative Adversarial Networks (GANs).
AUSH is capable of tailoring attacks against RS according to budget and complex attack goals, such as targeting a specific user group.
- Score: 35.52681676059885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommendation Systems (RS) have become an essential part of many online
services. Due to their pivotal role in guiding customers towards purchases,
there is a natural incentive for unscrupulous parties to spoof RS for profit.
In this paper, we study the shilling attack: a persistent and profitable attack
in which an adversarial party injects a number of fake user profiles to promote
or demote a target item. Conventional shilling attack models either rely on
simple heuristics that can be easily detected, or directly adopt adversarial
attack methods without a design specific to RS. Moreover, studies of the attack
impact on deep learning based RS are missing from the literature, leaving the
effects of shilling attacks against real-world RS in doubt. We present a novel
Augmented Shilling Attack framework (AUSH) and implement it based on the idea of
Generative Adversarial Networks (GANs). AUSH can tailor attacks against an RS
according to the attack budget and complex attack goals, such as targeting a
specific user group. We experimentally show that the attack impact of AUSH is
noticeable on a wide range of RS, including both classic and modern deep
learning based RS, while remaining virtually undetectable by a state-of-the-art
attack detection model.
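As a minimal, self-contained sketch of the underlying idea (not the authors' AUSH implementation, which additionally handles attack budgets and user-group targets), the PyTorch snippet below trains a generator to produce plausible rating profiles against a discriminator and then pins the target item to the maximum rating before injection. The catalogue size, network shapes, rating scale, and the random stand-in for genuine profiles are all assumptions for illustration.

```python
# Illustrative GAN-based shilling-profile generator (NOT the AUSH code).
# Assumes dense rating vectors on a 0-5 scale; all sizes are hypothetical.
import torch
import torch.nn as nn

NUM_ITEMS = 1000   # hypothetical catalogue size
NOISE_DIM = 128
TARGET_ITEM = 42   # item the attacker wants to promote

class Generator(nn.Module):
    """Maps random noise to a synthetic user-rating vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM, 256), nn.ReLU(),
            nn.Linear(256, NUM_ITEMS), nn.Sigmoid(),  # outputs in [0, 1]
        )
    def forward(self, z):
        return self.net(z) * 5.0  # rescale to the 0-5 rating range

class Discriminator(nn.Module):
    """Scores how 'real' a rating profile looks."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_ITEMS, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCELoss()

real_profiles = torch.rand(64, NUM_ITEMS) * 5.0  # stand-in for genuine users

for step in range(200):
    # Discriminator step: separate real profiles from generated fakes.
    z = torch.randn(64, NOISE_DIM)
    fake = G(z).detach()
    loss_d = bce(D(real_profiles), torch.ones(64, 1)) + \
             bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: make fakes that fool the discriminator.
    z = torch.randn(64, NOISE_DIM)
    loss_g = bce(D(G(z)), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Injection: pin the target item to the maximum rating in each fake profile.
with torch.no_grad():
    attack_profiles = G(torch.randn(50, NOISE_DIM))
    attack_profiles[:, TARGET_ITEM] = 5.0
```

Because the fake profiles are optimized to look like genuine ones, detection models keyed to heuristic attack patterns have little signal to work with, which is the intuition behind the undetectability result.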
Related papers
- Leveraging Reinforcement Learning in Red Teaming for Advanced Ransomware Attack Simulations [7.361316528368866]
This paper proposes a novel approach utilizing reinforcement learning (RL) to simulate ransomware attacks.
By training an RL agent in a simulated environment that mirrors real-world networks, the agent quickly learns effective attack strategies.
Experimental results on a 152-host example network confirm the effectiveness of the proposed approach.
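As a toy illustration of the RL framing only (the paper's 152-host environment, state space, and reward design are far richer and are not reproduced here), the sketch below runs tabular Q-learning over a hypothetical five-host graph to learn a lateral-movement path to a high-value host. Topology, rewards, and hyperparameters are invented for illustration.

```python
# Toy tabular Q-learning on a hypothetical host graph (illustrative only).
import random

EDGES = {0: [1, 2], 1: [3], 2: [3], 3: [4]}   # hypothetical network topology
GOAL = 4                                       # high-value host
actions = lambda s: EDGES.get(s, [])

Q = {}                                         # (state, action) -> value
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(500):
    s = 0                                      # attacker's initial foothold
    while s != GOAL:
        moves = actions(s)
        if not moves:
            break
        # Epsilon-greedy action selection over reachable hosts.
        a = random.choice(moves) if random.random() < eps else \
            max(moves, key=lambda m: Q.get((s, m), 0.0))
        r = 1.0 if a == GOAL else -0.01        # reward only at the goal host
        best_next = max((Q.get((a, m), 0.0) for m in actions(a)), default=0.0)
        Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))
        s = a
```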
arXiv Detail & Related papers (2024-06-25T14:16:40Z)
- Poisoning Attacks and Defenses in Recommender Systems: A Survey [39.25402612579371]
Modern recommender systems (RS) have profoundly enhanced user experience across digital platforms, yet they face significant threats from poisoning attacks.
This survey presents a unique perspective by examining these threats through the lens of an attacker.
We detail a systematic pipeline that encompasses four stages of a poisoning attack: setting attack goals, assessing attacker capabilities, analyzing victim architecture, and implementing poisoning strategies.
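A minimal sketch of how the survey's four-stage pipeline could be written down as a data structure; every field value below is a hypothetical placeholder, not content from the survey.

```python
# The survey's four poisoning-attack stages encoded as a plain data structure.
from dataclasses import dataclass, field

@dataclass
class PoisoningAttackPlan:
    goal: str                                            # stage 1: attack goal
    capabilities: dict = field(default_factory=dict)     # stage 2: budget, knowledge
    victim_architecture: str = "unknown"                 # stage 3: target RS family
    strategy: str = "profile injection"                  # stage 4: poisoning method

# Hypothetical example instance.
plan = PoisoningAttackPlan(
    goal="promote target item",
    capabilities={"fake_users": 50, "knowledge": "black-box"},
    victim_architecture="neural collaborative filtering",
    strategy="GAN-generated fake profiles",
)
```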
arXiv Detail & Related papers (2024-06-03T06:08:02Z)
- Rethinking the Vulnerabilities of Face Recognition Systems: From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have been increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities of FRS to adversarial attacks (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z)
- Poisoning Federated Recommender Systems with Fake Users [48.70867241987739]
Federated recommendation is a prominent use case within federated learning, yet it remains susceptible to various attacks.
We introduce a novel fake-user-based poisoning attack, named PoisonFRS, that promotes an attacker-chosen target item.
Experiments on multiple real-world datasets demonstrate that PoisonFRS can effectively promote the attacker-chosen item to a large portion of genuine users.
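PoisonFRS's concrete update construction is specific to the paper and is not reproduced here; as a generic sketch of fake-client model poisoning in federated recommendation, the snippet below has each fake client return an update that pushes the target item's embedding toward an attacker-chosen direction, scaled to survive server-side averaging. All dimensions and the honest-update stand-ins are assumptions.

```python
# Generic fake-client poisoning sketch for federated recommendation
# (NOT PoisonFRS itself; shapes and scales are illustrative assumptions).
import numpy as np

NUM_ITEMS, EMB_DIM = 1000, 16
TARGET_ITEM, N_CLIENTS, N_FAKE = 42, 100, 10

def fake_client_update(global_item_emb: np.ndarray,
                       attack_dir: np.ndarray,
                       scale: float = 10.0) -> np.ndarray:
    """Craft an update moving the target item's embedding along attack_dir."""
    update = np.zeros_like(global_item_emb)
    update[TARGET_ITEM] = scale * (attack_dir - global_item_emb[TARGET_ITEM])
    return update

# Server-side FedAvg over honest + fake updates (honest ones are random here).
item_emb = np.random.randn(NUM_ITEMS, EMB_DIM) * 0.1
attack_dir = np.random.randn(EMB_DIM)   # e.g. direction of popular items
updates = [np.random.randn(*item_emb.shape) * 0.01
           for _ in range(N_CLIENTS - N_FAKE)]
updates += [fake_client_update(item_emb, attack_dir) for _ in range(N_FAKE)]
item_emb += np.mean(updates, axis=0)     # averaged update still moves the target
```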
arXiv Detail & Related papers (2024-02-18T16:34:12Z)
- Review-Incorporated Model-Agnostic Profile Injection Attacks on Recommender Systems [24.60223863559958]
We propose a novel attack framework named R-Trojan, which formulates the attack objectives as an optimization problem and adopts a tailored transformer-based generative adversarial network (GAN) to solve it.
Experiments on real-world datasets demonstrate that R-Trojan greatly outperforms state-of-the-art attack methods on various victim RSs under black-box settings.
arXiv Detail & Related papers (2024-02-14T08:56:41Z)
- Practical Cross-System Shilling Attacks with Limited Access to Data [14.904685178603255]
In shilling attacks, an adversarial party injects a few fake user profiles into a Recommender System (RS) so that the target item can be promoted or demoted.
In this paper, we analyze the properties a practical shilling attack method should have and propose a new concept of Cross-system Attack.
arXiv Detail & Related papers (2023-02-14T15:53:12Z)
- Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed the Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
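For context, the snippet below sketches the standard L-infinity PGD baseline that G-PGA builds on; the paper's surrogate-guided mechanism for escaping local minima is not reproduced. Step size, budget, and iteration count are conventional defaults, not the paper's settings.

```python
# Standard L-infinity PGD baseline (the paper's guided variant is not shown).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Iterated signed-gradient steps projected back into the eps-ball around x."""
    x_adv = x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                 # ascend the loss
            x_adv = x.clone() + (x_adv - x).clamp(-eps, eps)    # project to eps-ball
            x_adv = x_adv.clamp(0, 1)                           # keep valid pixels
    return x_adv.detach()
```

Plain PGD of this form is known to stall in local minima under certain defenses, which is the failure mode the guided mechanism is designed to diagnose.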
arXiv Detail & Related papers (2022-12-30T18:45:23Z)
- Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning [102.05872020792603]
We propose an attack that anticipates and accounts for the entire federated learning pipeline, including behaviors of other clients.
We show that this new attack is effective in realistic scenarios where the attacker only contributes to a small fraction of randomly sampled rounds.
arXiv Detail & Related papers (2022-10-17T17:59:38Z)
- Shilling Black-box Recommender Systems by Learning to Generate Fake User Profiles [14.437087775166876]
We present Leg-UP, a novel attack model based on Generative Adversarial Networks.
Leg-UP learns user behavior patterns from real users in the sampled "templates" and constructs fake user profiles.
Experiments on benchmarks show that Leg-UP outperforms state-of-the-art shilling attack methods on a wide range of victim RS models.
arXiv Detail & Related papers (2022-06-23T00:40:19Z)
- On Trace of PGD-Like Adversarial Attacks [77.75152218980605]
Adversarial attacks pose safety and security concerns for deep learning applications.
We construct Adversarial Response Characteristics (ARC) features to reflect the model's gradient consistency.
Our method is intuitive, lightweight, non-intrusive, and requires little data.
arXiv Detail & Related papers (2022-05-19T14:26:50Z)