Adversarial Item Promotion: Vulnerabilities at the Core of Top-N Recommenders that Use Images to Address Cold Start
- URL: http://arxiv.org/abs/2006.01888v3
- Date: Tue, 20 Oct 2020 13:05:48 GMT
- Title: Adversarial Item Promotion: Vulnerabilities at the Core of Top-N Recommenders that Use Images to Address Cold Start
- Authors: Zhuoran Liu and Martha Larson
- Abstract summary: We show how unscrupulous merchants can create item images that artificially promote their products, improving their rankings.
We describe a new type of attack, Adversarial Item Promotion (AIP), that strikes directly at the core of Top-N recommenders.
We show that using images to address cold start opens recommender systems to potential threats with clear practical implications.
- Score: 3.640517671681518
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: E-commerce platforms provide their customers with ranked lists of recommended items matching the customers' preferences. Merchants on e-commerce platforms would like their items to appear as high as possible in the top-N of these ranked lists. In this paper, we demonstrate how unscrupulous merchants can create item images that artificially promote their products, improving their rankings. Recommender systems that use images to address the cold start problem are vulnerable to this security risk. We describe a new type of attack, Adversarial Item Promotion (AIP), that strikes directly at the core of Top-N recommenders: the ranking mechanism itself. Existing work on adversarial images in recommender systems investigates the implications of conventional attacks, which target deep learning classifiers. In contrast, our AIP attacks are embedding attacks that seek to push feature representations in a way that fools the ranker (not a classifier) and directly leads to item promotion. We introduce three AIP attacks: insider attack, expert attack, and semantic attack, which are defined with respect to three successively more realistic attack models. Our experiments evaluate the danger of these attacks when mounted against three representative visually-aware recommender algorithms in a framework that uses images to address cold start. We also evaluate potential defenses, including adversarial training, and find that common, currently-existing techniques do not eliminate the danger of AIP attacks. In sum, we show that using images to address cold start opens recommender systems to potential threats with clear practical implications.
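To make the embedding-attack idea concrete, below is a minimal, hypothetical sketch of how such an item-promotion attack could be mounted, assuming a VBPR-style visually-aware recommender that scores items via a dot product between user embeddings and an image-derived item embedding. The `encoder`, the "popular item" `target_embedding`, and all PGD hyperparameters here are illustrative assumptions, not the paper's exact method.

```python
# Hypothetical sketch of an embedding-style item-promotion attack (not the
# authors' code). Assumes a visually-aware recommender whose item score is a
# dot product between user embeddings and an image-derived item embedding.
import torch


def aip_attack_sketch(encoder, item_image, target_embedding,
                      epsilon=8 / 255, step_size=1 / 255, num_steps=50):
    """PGD-style perturbation that pushes a cold-start item's visual
    embedding toward a target embedding (e.g., that of a popular item),
    so the ranker scores the item higher for many users.

    item_image: tensor in [0, 1], shape (1, 3, H, W).
    """
    x_adv = item_image.clone().detach()
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        emb = encoder(x_adv)  # image -> feature vector used by the ranker
        # Pull the embedding toward the target; it is the ranker, not a
        # classifier, that this loss ultimately fools.
        loss = torch.nn.functional.mse_loss(emb, target_embedding)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - step_size * grad.sign()  # descend the loss
            # Project back into the epsilon-ball and the valid image range.
            x_adv = item_image + (x_adv - item_image).clamp(-epsilon, epsilon)
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```

Under the paper's three attack models, white-box access to `encoder` as above would correspond to the insider attack; the expert and semantic attacks progressively relax what the attacker is assumed to know, so a sketch of those would have to approximate or avoid the call to `torch.autograd.grad`.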
Related papers
- Poisoning Federated Recommender Systems with Fake Users [48.70867241987739]
Federated recommendation is a prominent use case within federated learning, yet it remains susceptible to various attacks.
We introduce a novel fake-user-based poisoning attack named PoisonFRS to promote the attacker-chosen target item.
Experiments on multiple real-world datasets demonstrate that PoisonFRS can effectively promote the attacker-chosen item to a large portion of genuine users.
arXiv Detail & Related papers (2024-02-18T16:34:12Z)
- PORE: Provably Robust Recommender Systems against Data Poisoning Attacks [58.26750515059222]
We propose PORE, the first framework to build provably robust recommender systems.
PORE can transform any existing recommender system to be provably robust against untargeted data poisoning attacks.
We prove that PORE still recommends at least $r$ of the $N$ items to the user under any data poisoning attack, where $r$ is a function of the number of fake users in the attack.
arXiv Detail & Related papers (2023-03-26T01:38:11Z)
- Poisoning Deep Learning based Recommender Model in Federated Learning Scenarios [7.409990425668484]
We design attack approaches targeting deep learning based recommender models in federated learning scenarios.
Our well-designed attacks can effectively poison the target models, and their effectiveness sets a new state of the art.
arXiv Detail & Related papers (2022-04-26T15:23:05Z)
- PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion [58.870444954499014]
A common practice is to subsume recommender systems under the decentralized federated learning paradigm.
We present a systematic approach to backdooring federated recommender systems for targeted item promotion.
arXiv Detail & Related papers (2021-10-21T06:48:35Z)
- Adversarial Attack and Defense in Deep Ranking [100.17641539999055]
We propose two attacks against deep ranking systems that can raise or lower the rank of chosen candidates by adversarial perturbations.
Conversely, an anti-collapse triplet defense is proposed to improve the ranking model robustness against all proposed attacks.
Our adversarial ranking attacks and defenses are evaluated on MNIST, Fashion-MNIST, CUB200-2011, CARS196 and Stanford Online Products datasets.
arXiv Detail & Related papers (2021-06-07T13:41:45Z)
- Practical Relative Order Attack in Deep Ranking [99.332629807873]
We formulate a new adversarial attack against deep ranking systems, i.e., the Order Attack.
The Order Attack covertly alters the relative order among a selected set of candidates according to an attacker-specified permutation.
It is successfully implemented on a major e-commerce platform.
arXiv Detail & Related papers (2021-03-09T06:41:18Z)
- QAIR: Practical Query-efficient Black-Box Attacks for Image Retrieval [56.51916317628536]
We study the query-based attack against image retrieval to evaluate its robustness against adversarial examples under the black-box setting.
A new relevance-based loss is designed to quantify the attack effects by measuring the set similarity on the top-k retrieval results before and after attacks (a minimal sketch of this idea appears after this list).
Experiments show that the proposed attack achieves a high attack success rate with few queries against the image retrieval systems under the black-box setting.
arXiv Detail & Related papers (2021-03-04T10:18:43Z)
- A Black-Box Attack Model for Visually-Aware Recommender Systems [7.226144684379191]
Visually-aware recommender systems (RS) have recently attracted increased research interest.
In this work, we show that relying on external sources can make an RS vulnerable to attacks.
We show how a new visual attack model can effectively influence the item scores and rankings in a black-box approach.
arXiv Detail & Related papers (2020-11-05T08:43:12Z)
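As referenced in the QAIR entry above, here is a small, hypothetical sketch of a relevance-based measure of attack effect: the set similarity between the top-k retrieval results before and after an attack. The function name and the Jaccard formulation are illustrative assumptions; the exact loss used in the paper may differ.

```python
# Hypothetical sketch in the spirit of QAIR's relevance-based objective:
# quantify attack effect as the overlap of top-k result lists before and
# after perturbing the query image (names are illustrative).

def topk_overlap(before_ids, after_ids, k=10):
    """Jaccard-style set similarity of two top-k result lists. An attacker
    wants this to be low: the attacked top-k shares little with the original."""
    before, after = set(before_ids[:k]), set(after_ids[:k])
    return len(before & after) / len(before | after)


# Example: an attack that displaced 7 of the original top-10 results.
clean = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
attacked = [1, 2, 3, 42, 43, 44, 45, 46, 47, 48]
print(topk_overlap(clean, attacked))  # 3 shared / 17 total ~= 0.18
```

In a query-efficient black-box setting, a score like this (or a differentiable surrogate of it) would be estimated from repeated queries to the retrieval system rather than from model internals.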
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.