Attacking Black-box Recommendations via Copying Cross-domain User
Profiles
- URL: http://arxiv.org/abs/2005.08147v2
- Date: Sun, 24 Apr 2022 11:59:10 GMT
- Title: Attacking Black-box Recommendations via Copying Cross-domain User
Profiles
- Authors: Wenqi Fan, Tyler Derr, Xiangyu Zhao, Yao Ma, Hui Liu, Jianping Wang,
Jiliang Tang, Qing Li
- Abstract summary: We present our framework that harnesses real users from a source domain by copying their profiles into the target domain with the goal of promoting a subset of items.
CopyAttack's goal is to maximize the hit ratio of the targeted items in the Top-$k$ recommendation list of the users in the target domain.
- Score: 47.48722020494725
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, recommender systems that aim to suggest personalized lists of items
for users to interact with online have drawn a lot of attention, and many of the
state-of-the-art techniques are deep learning based. Recent studies have shown that these
deep learning models (in particular for recommender systems) are vulnerable to attacks
such as data poisoning, which injects fake users to promote a selected set of items.
More recently, however, defense strategies have been developed to detect such generated
users with fake profiles. Thus, advanced injection attacks that create more `realistic'
user profiles to promote a set of items remain a key challenge for deep learning based
recommender systems. In this work, we present our framework CopyAttack, a reinforcement
learning based black-box attack method that harnesses real users from a source domain by
copying their profiles into the target domain with the goal of promoting a subset of
items. CopyAttack efficiently and effectively learns policy gradient networks that first
select, and then further refine/craft, user profiles from the source domain before
copying them into the target domain. Its goal is to maximize the hit ratio of the
targeted items in the Top-$k$ recommendation lists of users in the target domain. We have
conducted experiments on two real-world datasets, empirically verified the effectiveness
of the proposed framework, and performed a thorough model analysis.
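To make the attack loop in the abstract concrete, below is a minimal sketch (not the authors' released implementation) of a policy-gradient attacker: a policy network selects which source-domain profiles to copy, the copied profiles are lightly refined so the targeted items are rated, and the only feedback is the black-box Top-$k$ hit ratio used as the REINFORCE reward. The simple user-based CF stand-in `query_topk`, the random profile matrices, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of a black-box, policy-gradient profile-copying attack.
# The toy recommender and all names/numbers below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

N_SOURCE, N_TARGET, N_ITEMS, TOPK, N_COPIES = 200, 300, 50, 10, 20
TARGET_ITEMS = [3, 7]                                   # items the attacker promotes

source_profiles = (torch.rand(N_SOURCE, N_ITEMS) < 0.2).float()   # source-domain users
target_profiles = (torch.rand(N_TARGET, N_ITEMS) < 0.1).float()   # genuine target users

def query_topk(injected, k=TOPK):
    """Stand-in for the black-box target recommender: user-based CF over the
    genuine plus injected profiles, returning a Top-k list per genuine user."""
    all_profiles = torch.cat([target_profiles, injected])
    sims = F.normalize(target_profiles, dim=1) @ F.normalize(all_profiles, dim=1).T
    scores = sims @ all_profiles                        # neighbor-weighted item scores
    return torch.topk(scores, k, dim=1).indices

def hit_ratio(topk_per_user):
    """Average fraction of targeted items in each user's Top-k list (the reward)."""
    hits = [len(set(TARGET_ITEMS) & set(row.tolist())) / len(TARGET_ITEMS)
            for row in topk_per_user]
    return sum(hits) / len(hits)

# Policy network scores every source-domain profile; the attacker samples which
# profiles to copy into the target domain (the "select" step).
policy = nn.Sequential(nn.Linear(N_ITEMS, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

for step in range(200):
    dist = torch.distributions.Categorical(logits=policy(source_profiles).squeeze(-1))
    chosen = dist.sample((N_COPIES,))
    crafted = source_profiles[chosen].clone()
    crafted[:, TARGET_ITEMS] = 1.0                      # "refine": ensure targets are rated
    reward = hit_ratio(query_topk(crafted))             # only black-box Top-k feedback
    loss = -dist.log_prob(chosen).sum() * reward        # REINFORCE update
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 50 == 0:
        print(f"step {step:3d}  hit ratio = {reward:.3f}")
```

In the black-box setting the target recommender can only be queried, not inspected, which is why the reward here is the observed hit ratio rather than any gradient through the target model.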
Related papers
- Poisoning Federated Recommender Systems with Fake Users [48.70867241987739]
Federated recommendation is a prominent use case within federated learning, yet it remains susceptible to various attacks.
We introduce a novel fake user based poisoning attack named PoisonFRS to promote the attacker-chosen targeted item.
Experiments on multiple real-world datasets demonstrate that PoisonFRS can effectively promote the attacker-chosen item to a large portion of genuine users.
arXiv Detail & Related papers (2024-02-18T16:34:12Z)
- Preference Poisoning Attacks on Reward Model Learning [47.00395978031771]
We investigate the nature and extent of a vulnerability in learning reward models from pairwise comparisons.
We propose two classes of algorithmic approaches for these attacks: a gradient-based framework, and several variants of rank-by-distance methods.
We find that the best attacks are often highly successful, achieving, in the most extreme case, a 100% success rate with only 0.3% of the data poisoned.
arXiv Detail & Related papers (2024-02-02T21:45:24Z)
- Model Stealing Attack against Recommender System [85.1927483219819]
Some adversarial attacks have achieved model stealing against recommender systems.
In this paper, we constrain the volume of available target data and queries and utilize auxiliary data, which shares the item set with the target data, to promote model stealing attacks.
arXiv Detail & Related papers (2023-12-18T05:28:02Z)
- Knowledge-enhanced Black-box Attacks for Recommendations [21.914252071143945]
Deep neural network-based recommender systems are vulnerable to adversarial attacks.
We propose a knowledge graph-enhanced black-box attacking framework (KGAttack) to effectively learn attacking policies.
Comprehensive experiments on various real-world datasets demonstrate the effectiveness of the proposed attacking framework.
arXiv Detail & Related papers (2022-07-21T04:59:31Z)
- Poisoning Deep Learning based Recommender Model in Federated Learning Scenarios [7.409990425668484]
We design attack approaches targeting deep learning based recommender models in federated learning scenarios.
Our well-designed attacks can effectively poison the target models, and their effectiveness sets a new state of the art.
arXiv Detail & Related papers (2022-04-26T15:23:05Z)
- PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion [58.870444954499014]
A common practice is to subsume recommender systems under the decentralized federated learning paradigm.
We present a systematic approach to backdooring federated recommender systems for targeted item promotion.
arXiv Detail & Related papers (2021-10-21T06:48:35Z)
- Data Poisoning Attacks to Deep Learning Based Recommender Systems [26.743631067729677]
We conduct the first systematic study of data poisoning attacks against deep learning based recommender systems.
An attacker's goal is to manipulate a recommender system such that the attacker-chosen target items are recommended to many users.
To achieve this goal, our attack injects fake users with carefully crafted ratings into a recommender system (a minimal sketch of this kind of fake-user injection appears after this list).
arXiv Detail & Related papers (2021-01-07T17:32:56Z)
- Detection of Adversarial Supports in Few-shot Classifiers Using Feature Preserving Autoencoders and Self-Similarity [89.26308254637702]
We propose a detection strategy to highlight adversarial support sets.
We make use of feature-preserving autoencoder filtering and the self-similarity of a support set to perform this detection.
Our method is attack-agnostic and, to the best of our knowledge, the first to explore detection for few-shot classifiers.
arXiv Detail & Related papers (2020-12-09T14:13:41Z)
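As referenced in the data poisoning entry above, here is a minimal sketch of the fake-user injection idea: fabricated profiles give the targeted item the maximum rating plus a few "filler" ratings on popular items, and the shift in how often the target item is recommended is then measured. The toy average-rating recommender and every name and number below are illustrative assumptions, not any paper's released code.

```python
# Minimal sketch of fake-user injection for item promotion.
# The toy recommender and all parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, target_item, n_fake = 500, 100, 42, 25

# Genuine ratings: sparse 1-5 star matrix (0 = unrated).
ratings = np.where(rng.random((n_users, n_items)) < 0.05,
                   rng.integers(1, 6, (n_users, n_items)), 0)

def hit_ratio(R):
    """Fraction of genuine users whose top recommendation is the target item.
    Item scores are average ratings computed from R (possibly poisoned); each
    genuine user is only recommended items they have not rated yet."""
    counts = (R > 0).sum(0)
    avg = R.sum(0) / np.maximum(counts, 1)
    hits = 0
    for u in range(n_users):
        scores = np.where(ratings[u] == 0, avg, -np.inf)
        hits += int(np.argmax(scores) == target_item)
    return hits / n_users

# Craft fake users: max rating on the target item, filler ratings on popular items.
popular = np.argsort(-(ratings > 0).sum(0))[:10]
fake = np.zeros((n_fake, n_items))
fake[:, target_item] = 5
for row in fake:
    row[rng.choice(popular, size=3, replace=False)] = rng.integers(3, 6, size=3)

poisoned = np.vstack([ratings, fake])
print(f"target-item hit ratio before: {hit_ratio(ratings):.2f}, "
      f"after injection: {hit_ratio(poisoned):.2f}")
```

The filler ratings exist only to make the fake profiles look less conspicuous than profiles that rate a single item; detection methods of the kind mentioned above target exactly such tell-tale patterns.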
This list is automatically generated from the titles and abstracts of the papers on this site.