Knowledge-enhanced Black-box Attacks for Recommendations
- URL: http://arxiv.org/abs/2207.10307v1
- Date: Thu, 21 Jul 2022 04:59:31 GMT
- Title: Knowledge-enhanced Black-box Attacks for Recommendations
- Authors: Jingfan Chen, Wenqi Fan, Guanghui Zhu, Xiangyu Zhao, Chunfeng Yuan,
Qing Li, Yihua Huang
- Abstract summary: Deep neural networks-based recommender systems are vulnerable to adversarial attacks.
We propose a knowledge graph-enhanced black-box attacking framework (KGAttack) to effectively learn attacking policies.
Comprehensive experiments on various real-world datasets demonstrate the effectiveness of the proposed attacking framework.
- Score: 21.914252071143945
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies have shown that deep neural networks-based recommender systems
are vulnerable to adversarial attacks, where attackers can inject carefully
crafted fake user profiles (i.e., a set of items that fake users have
interacted with) into a target recommender system to achieve malicious
purposes, such as promoting or demoting a set of target items. Due to security
and privacy concerns, it is more practical to perform adversarial attacks under
the black-box setting, where the architecture/parameters and training data of
target systems cannot be easily accessed by attackers. However, generating
high-quality fake user profiles under the black-box setting is rather
challenging given such limited access to target systems. To address this
challenge, in this work we introduce a novel strategy that leverages items'
attribute information (i.e., an item knowledge graph), which is often publicly
accessible and provides rich auxiliary knowledge to enhance the generation of
fake user profiles. More specifically, we propose a knowledge graph-enhanced
black-box attacking framework (KGAttack) that effectively learns attacking
policies through deep reinforcement learning, in which the knowledge graph is
seamlessly integrated into hierarchical policy networks to generate fake user
profiles for
performing adversarial black-box attacks. Comprehensive experiments on various
real-world datasets demonstrate the effectiveness of the proposed attacking
framework under the black-box setting.
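To make the described pipeline concrete, below is a minimal sketch of a KG-guided, query-only attack loop in the spirit of KGAttack. It is an illustration under stated assumptions, not the paper's implementation: the hierarchical policy networks are abstracted behind a `policy` object, and all names (`kg_neighbors`, `query_recommender`, `RandomPolicy`) are hypothetical.

```python
# Illustrative sketch only: a fake-user-profile generator that uses an item
# knowledge graph to narrow the candidate set and a black-box query for the
# reward signal. The paper's hierarchical policy networks and deep RL updates
# are abstracted behind the `policy` object.
import random

def kg_neighbors(kg, anchor_items, budget=50):
    """Collect candidate items connected to the anchors in the item
    knowledge graph (e.g., sharing attributes), shrinking the action space."""
    candidates = set()
    for item in anchor_items:
        candidates.update(kg.get(item, ()))
    candidates -= set(anchor_items)
    return sorted(candidates)[:budget]

def generate_fake_profile(kg, target_item, policy, query_recommender,
                          profile_len=20):
    """Build one fake user profile item by item, then score it with
    black-box queries to the target recommender."""
    profile = []
    for _ in range(profile_len):
        candidates = kg_neighbors(kg, [target_item] + profile)
        if not candidates:
            break
        profile.append(policy.select(profile, candidates))
    # Reward: e.g., how much the target item's exposure to real users
    # improves after injecting this profile (query access only).
    reward = query_recommender(profile, target_item)
    policy.update(profile, reward)  # policy-gradient-style update
    return profile, reward

class RandomPolicy:
    """Stand-in for the hierarchical policy networks in the paper."""
    def select(self, profile, candidates):
        return random.choice(candidates)
    def update(self, profile, reward):
        pass  # a learned agent would apply a policy-gradient update here

# Toy run with a hypothetical 4-item knowledge graph and a dummy reward.
kg = {"t": ["a", "b"], "a": ["c"], "b": ["c", "d"]}
profile, _ = generate_fake_profile(kg, "t", RandomPolicy(),
                                   query_recommender=lambda p, t: 0.0,
                                   profile_len=3)
print(profile)
```

In the actual framework, item selection would be driven by hierarchical policy networks over knowledge-graph-enhanced item embeddings rather than a random stand-in, and the reward would come from black-box queries about the target item's ranking.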
Related papers
- GLiRA: Black-Box Membership Inference Attack via Knowledge Distillation [4.332441337407564]
We explore a connection between the susceptibility to membership inference attacks and the vulnerability to distillation-based functionality stealing attacks.
We propose GLiRA, a distillation-guided approach to membership inference attack on the black-box neural network.
We evaluate the proposed method across multiple image classification datasets and models, and demonstrate that likelihood-ratio attacks, when guided by knowledge distillation, outperform current state-of-the-art membership inference attacks in the black-box setting.
arXiv Detail & Related papers (2024-05-13T08:52:04Z) - BB-Patch: BlackBox Adversarial Patch-Attack using Zeroth-Order Optimization [10.769992215544358]
Adversarial attack strategies assume that the adversary has access to the training data, the model parameters, and the input during deployment.
We propose a black-box adversarial attack strategy that produces adversarial patches which can be applied anywhere in the input image.
arXiv Detail & Related papers (2024-05-09T18:42:26Z) - Poisoning Federated Recommender Systems with Fake Users [48.70867241987739]
Federated recommendation is a prominent use case within federated learning, yet it remains susceptible to various attacks.
We introduce a novel fake-user-based poisoning attack, named PoisonFRS, to promote the attacker-chosen target item.
Experiments on multiple real-world datasets demonstrate that PoisonFRS can effectively promote the attacker-chosen item to a large portion of genuine users.
arXiv Detail & Related papers (2024-02-18T16:34:12Z) - Generalizable Black-Box Adversarial Attack with Meta Learning [54.196613395045595]
In a black-box adversarial attack, the target model's parameters are unknown, and the attacker aims to find a successful perturbation based on query feedback under a query budget.
We propose to utilize the feedback information across historical attacks, dubbed example-level adversarial transferability.
The proposed framework with the two types of adversarial transferability can be naturally combined with any off-the-shelf query-based attack methods to boost their performance.
arXiv Detail & Related papers (2023-01-01T07:24:12Z) - Query Efficient Cross-Dataset Transferable Black-Box Attack on Action
Recognition [99.29804193431823]
Black-box adversarial attacks present a realistic threat to action recognition systems.
We propose a new attack on action recognition that addresses these shortcomings by generating perturbations.
Our method achieves 8% and 12% higher deception rates than state-of-the-art query-based and transfer-based attacks, respectively.
arXiv Detail & Related papers (2022-11-23T17:47:49Z) - Poisoning Deep Learning based Recommender Model in Federated Learning
Scenarios [7.409990425668484]
We design attack approaches targeting deep learning based recommender models in federated learning scenarios.
Our well-designed attacks can effectively poison the target models, and their effectiveness sets a new state of the art.
arXiv Detail & Related papers (2022-04-26T15:23:05Z) - PipAttack: Poisoning Federated Recommender Systems for Manipulating Item
Promotion [58.870444954499014]
A common practice is to subsume recommender systems under the decentralized federated learning paradigm.
We present a systematic approach to backdooring federated recommender systems for targeted item promotion.
arXiv Detail & Related papers (2021-10-21T06:48:35Z) - Improving Query Efficiency of Black-box Adversarial Attack [75.71530208862319]
We propose a Neural Process based black-box adversarial attack (NP-Attack).
NP-Attack greatly decreases the query count required under the black-box setting.
arXiv Detail & Related papers (2020-09-24T06:22:56Z) - Attacking Black-box Recommendations via Copying Cross-domain User
Profiles [47.48722020494725]
We present our framework that harnesses real users from a source domain by copying their profiles into the target domain with the goal of promoting a subset of items.
CopyAttack's goal is to maximize the hit ratio of the targeted items in the Top-$k$ recommendation lists of users in the target domain (see the metric sketch after this list).
arXiv Detail & Related papers (2020-05-17T02:10:38Z)
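As a companion to the CopyAttack entry above, here is a minimal sketch of the Hit Ratio@k objective it maximizes: the fraction of target-domain users whose Top-$k$ recommendation list contains at least one targeted item. Function and variable names are illustrative assumptions, not taken from any paper's code.

```python
def hit_ratio_at_k(topk_lists, target_items, k=10):
    """Fraction of users whose top-k recommendation list contains at
    least one attacker-targeted item.

    topk_lists: dict mapping user id -> ranked list of item ids.
    target_items: set of item ids the attacker wants to promote.
    """
    hits = sum(
        1 for ranked in topk_lists.values()
        if any(item in target_items for item in ranked[:k])
    )
    return hits / max(len(topk_lists), 1)

# Example: users 1 and 2 see targeted item 9 within their top-2 lists.
lists = {1: [5, 9, 3], 2: [9, 2, 7], 3: [4, 8, 1]}
print(hit_ratio_at_k(lists, target_items={9}, k=2))  # ~0.667
```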
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.