Practical Cross-System Shilling Attacks with Limited Access to Data
- URL: http://arxiv.org/abs/2302.07145v2
- Date: Sat, 18 Mar 2023 08:29:20 GMT
- Title: Practical Cross-System Shilling Attacks with Limited Access to Data
- Authors: Meifang Zeng, Ke Li, Bingchuan Jiang, Liujuan Cao, Hui Li
- Abstract summary: In shilling attacks, an adversarial party injects a few fake user profiles into a Recommender System (RS) so that the target item can be promoted or demoted.
In this paper, we analyze the properties a practical shilling attack method should have and propose a new concept of Cross-system Attack.
- Score: 14.904685178603255
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In shilling attacks, an adversarial party injects a few fake user profiles
into a Recommender System (RS) so that the target item can be promoted or
demoted. Although much effort has been devoted to developing shilling attack
methods, we find that existing approaches are still far from practical. In this
paper, we analyze the properties a practical shilling attack method should have
and propose a new concept of Cross-system Attack. With the idea of Cross-system
Attack, we design a Practical Cross-system Shilling Attack (PC-Attack)
framework that requires little information about the victim RS model and the
target RS data for conducting attacks. PC-Attack is trained to capture graph
topology knowledge from public RS data in a self-supervised manner. Then, it is
fine-tuned on a small portion of target data that is easy to access to
construct fake profiles. Extensive experiments have demonstrated the
superiority of PC-Attack over state-of-the-art baselines. Our implementation of
PC-Attack is available at https://github.com/KDEGroup/PC-Attack.
Related papers
- Defense Against Prompt Injection Attack by Leveraging Attack Techniques [66.65466992544728]
Large language models (LLMs) have achieved remarkable performance across various natural language processing (NLP) tasks.
As LLMs continue to evolve, new vulnerabilities, especially prompt injection attacks arise.
Recent attack methods leverage LLMs' instruction-following abilities and their inabilities to distinguish instructions injected in the data content.
arXiv Detail & Related papers (2024-11-01T09:14:21Z) - Rethinking Targeted Adversarial Attacks For Neural Machine Translation [56.10484905098989]
This paper presents a new setting for NMT targeted adversarial attacks that could lead to reliable attacking results.
Under the new setting, it then proposes a Targeted Word Gradient adversarial Attack (TWGA) method to craft adversarial examples.
Experimental results demonstrate that our proposed setting could provide faithful attacking results for targeted adversarial attacks on NMT systems.
arXiv Detail & Related papers (2024-07-07T10:16:06Z) - Review-Incorporated Model-Agnostic Profile Injection Attacks on
Recommender Systems [24.60223863559958]
We propose a novel attack framework named R-Trojan, which formulates the attack objectives as an optimization problem and adopts a tailored transformer-based generative adversarial network (GAN) to solve it.
Experiments on real-world datasets demonstrate that R-Trojan greatly outperforms state-of-the-art attack methods on various victim RSs under black-box settings.
arXiv Detail & Related papers (2024-02-14T08:56:41Z) - DTA: Distribution Transform-based Attack for Query-Limited Scenario [11.874670564015789]
In generating adversarial examples, the conventional black-box attack methods rely on sufficient feedback from the to-be-attacked models.
This paper proposes a hard-label attack that simulates an attacked action being permitted to conduct a limited number of queries.
Experiments validate the effectiveness of the proposed idea and the superiority of DTA over the state-of-the-art.
arXiv Detail & Related papers (2023-12-12T13:21:03Z) - DALA: A Distribution-Aware LoRA-Based Adversarial Attack against
Language Models [64.79319733514266]
Adversarial attacks can introduce subtle perturbations to input data.
Recent attack methods can achieve a relatively high attack success rate (ASR)
We propose a Distribution-Aware LoRA-based Adversarial Attack (DALA) method.
arXiv Detail & Related papers (2023-11-14T23:43:47Z) - PRAT: PRofiling Adversarial aTtacks [52.693011665938734]
We introduce a novel problem of PRofiling Adversarial aTtacks (PRAT)
Given an adversarial example, the objective of PRAT is to identify the attack used to generate it.
We use AID to devise a novel framework for the PRAT objective.
arXiv Detail & Related papers (2023-09-20T07:42:51Z) - Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack [53.032801921915436]
Human Activity Recognition (HAR) has been employed in a wide range of applications, e.g. self-driving cars.
Recently, the robustness of skeleton-based HAR methods have been questioned due to their vulnerability to adversarial attacks.
We show such threats exist, even when the attacker only has access to the input/output of the model.
We propose the very first black-box adversarial attack approach in skeleton-based HAR called BASAR.
arXiv Detail & Related papers (2022-11-21T09:51:28Z) - Knowledge-enhanced Black-box Attacks for Recommendations [21.914252071143945]
Deep neural networks-based recommender systems are vulnerable to adversarial attacks.
We propose a knowledge graph-enhanced black-box attacking framework (KGAttack) to effectively learn attacking policies.
Comprehensive experiments on various real-world datasets demonstrate the effectiveness of the proposed attacking framework.
arXiv Detail & Related papers (2022-07-21T04:59:31Z) - Shilling Black-box Recommender Systems by Learning to Generate Fake User
Profiles [14.437087775166876]
We present Leg-UP, a novel attack model based on the Generative Adversarial Network.
Leg-UP learns user behavior patterns from real users in the sampled templates'' and constructs fake user profiles.
Experiments on benchmarks have shown that Leg-UP exceeds state-of-the-art Shilling Attack methods on a wide range of victim RS models.
arXiv Detail & Related papers (2022-06-23T00:40:19Z) - A Targeted Attack on Black-Box Neural Machine Translation with Parallel
Data Poisoning [60.826628282900955]
We show that targeted attacks on black-box NMT systems are feasible, based on poisoning a small fraction of their parallel training data.
We show that this attack can be realised practically via targeted corruption of web documents crawled to form the system's training data.
Our results are alarming: even on the state-of-the-art systems trained with massive parallel data, the attacks are still successful (over 50% success rate) under surprisingly low poisoning budgets.
arXiv Detail & Related papers (2020-11-02T01:52:46Z) - Attacking Recommender Systems with Augmented User Profiles [35.52681676059885]
We study the shilling attack: a subsistent and profitable attack where an adversarial party injects a number of user profiles to promote or demote a target item.
We present a novel Augmented Shilling Attack framework (AUSH) and implement it with the idea of Generative Adversarial Network.
AUSH is capable of tailoring attacks against RS according to budget and complex attack goals, such as targeting a specific user group.
arXiv Detail & Related papers (2020-05-17T04:44:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.