IndirectAD: Practical Data Poisoning Attacks against Recommender Systems for Item Promotion
- URL: http://arxiv.org/abs/2511.05845v1
- Date: Sat, 08 Nov 2025 04:27:34 GMT
- Title: IndirectAD: Practical Data Poisoning Attacks against Recommender Systems for Item Promotion
- Authors: Zihao Wang, Tianhao Mao, XiaoFeng Wang, Di Tang, Xiaozhong Liu
- Abstract summary: We introduce the IndirectAD attack, inspired by Trojan attacks on machine learning. IndirectAD reduces the need for a high poisoning ratio through a trigger item that is easier to recommend to the target users. Our experiments show that IndirectAD can cause noticeable impact with only 0.05% of the platform's user base.
- Score: 31.741013175459695
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommender systems play a central role in digital platforms by providing personalized content. They often use methods such as collaborative filtering and machine learning to accurately predict user preferences. Although these systems offer substantial benefits, they are vulnerable to security and privacy threats, especially data poisoning attacks. By inserting misleading data, attackers can manipulate recommendations for purposes ranging from boosting product visibility to shaping public opinion. Despite these risks, concerns are often downplayed because such attacks typically require controlling at least 1% of the platform's user base, a difficult task on large platforms. We tackle this issue by introducing the IndirectAD attack, inspired by Trojan attacks on machine learning. IndirectAD reduces the need for a high poisoning ratio through a trigger item that is easier to recommend to the target users. Rather than directly promoting a target item that does not match a user's interests, IndirectAD first promotes the trigger item, then transfers that advantage to the target item by creating co-occurrence data between them. This indirect strategy delivers a stronger promotion effect while using fewer controlled user accounts. Our extensive experiments on multiple datasets and recommender systems show that IndirectAD can cause noticeable impact with only 0.05% of the platform's user base. Even in large-scale settings, IndirectAD remains effective, highlighting a more serious and realistic threat to today's recommender systems.
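The co-occurrence idea in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the item IDs, the user-base size, and the interaction format are all hypothetical, and the sketch only shows how controlled accounts would pair the trigger item with the target item so that collaborative filtering learns an association between them.

```python
# Hedged sketch of the co-occurrence strategy described in the abstract
# (not the paper's actual attack code). Each controlled account interacts
# with both the trigger item and the target item, creating co-occurrence
# data that a collaborative-filtering model can pick up.

def build_poison_interactions(controlled_users, trigger_item, target_item):
    """Return (user, item) interaction pairs for the controlled accounts."""
    interactions = []
    for user in controlled_users:
        # Promote the trigger item, assumed easy to recommend to the
        # target audience ...
        interactions.append((user, trigger_item))
        # ... then tie the target item to it via co-occurrence.
        interactions.append((user, target_item))
    return interactions

# With the paper's 0.05% budget on a hypothetical platform of 1,000,000
# users, the attacker controls only 500 accounts.
budget = int(1_000_000 * 0.0005)
poison = build_poison_interactions(range(budget), trigger_item=42, target_item=7)
print(len(poison))  # two interactions per controlled account
```

The point of the sketch is the contrast with direct promotion: rather than forcing an out-of-interest target item on users, the target item rides on the trigger item's easier recommendability.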
Related papers
- Phantom Subgroup Poisoning: Stealth Attacks on Federated Recommender Systems [34.21029914973687]
Federated recommender systems (FedRec) have emerged as a promising solution for delivering personalized recommendations. Existing attacks typically target the entire user group, which compromises stealth and increases the risk of detection. We introduce Spattack, the first targeted poisoning attack designed to manipulate recommendations for specific user subgroups.
arXiv Detail & Related papers (2025-07-07T09:40:16Z)
- Improving the Shortest Plank: Vulnerability-Aware Adversarial Training for Robust Recommender System [60.719158008403376]
Vulnerability-aware Adversarial Training (VAT) is designed to defend against poisoning attacks in recommender systems.
VAT employs a novel vulnerability-aware function to estimate users' vulnerability based on the degree to which the system fits them.
arXiv Detail & Related papers (2024-09-26T02:24:03Z)
- Poisoning Federated Recommender Systems with Fake Users [48.70867241987739]
Federated recommendation is a prominent use case within federated learning, yet it remains susceptible to various attacks.
We introduce a novel fake user based poisoning attack named PoisonFRS to promote the attacker-chosen targeted item.
Experiments on multiple real-world datasets demonstrate that PoisonFRS can effectively promote the attacker-chosen item to a large portion of genuine users.
arXiv Detail & Related papers (2024-02-18T16:34:12Z)
- Model Stealing Attack against Recommender System [85.1927483219819]
Some adversarial attacks have achieved model stealing against recommender systems.
In this paper, we constrain the volume of available target data and queries and utilize auxiliary data, which shares the item set with the target data, to promote model stealing attacks.
arXiv Detail & Related papers (2023-12-18T05:28:02Z)
- PORE: Provably Robust Recommender Systems against Data Poisoning Attacks [58.26750515059222]
We propose PORE, the first framework to build provably robust recommender systems.
PORE can transform any existing recommender system to be provably robust against untargeted data poisoning attacks.
We prove that PORE still recommends at least $r$ of the $N$ items to the user under any data poisoning attack, where $r$ is a function of the number of fake users in the attack.
arXiv Detail & Related papers (2023-03-26T01:38:11Z)
- PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion [58.870444954499014]
A common practice is to subsume recommender systems under the decentralized federated learning paradigm.
We present a systematic approach to backdooring federated recommender systems for targeted item promotion.
arXiv Detail & Related papers (2021-10-21T06:48:35Z)
- Data Poisoning Attacks to Deep Learning Based Recommender Systems [26.743631067729677]
We conduct the first systematic study of data poisoning attacks against deep learning based recommender systems.
An attacker's goal is to manipulate a recommender system such that the attacker-chosen target items are recommended to many users.
To achieve this goal, our attack injects fake users with carefully crafted ratings to a recommender system.
arXiv Detail & Related papers (2021-01-07T17:32:56Z)
- Attacking Black-box Recommendations via Copying Cross-domain User Profiles [47.48722020494725]
We present CopyAttack, a framework that harnesses real users from a source domain by copying their profiles into the target domain with the goal of promoting a subset of items.
CopyAttack's goal is to maximize the hit ratio of the targeted items in the Top-$k$ recommendation list of the users in the target domain.
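The hit ratio that CopyAttack maximizes is a standard evaluation metric; the sketch below shows one common way it is computed. The recommendation lists and targeted items are illustrative, not from the paper.

```python
# Minimal sketch of the Hit Ratio metric mentioned above: the fraction
# of users whose Top-k recommendation list contains at least one of the
# attacker's targeted items. All inputs here are made up for illustration.

def hit_ratio_at_k(topk_lists, targeted_items, k=10):
    """Fraction of users with a targeted item in their Top-k list."""
    targeted = set(targeted_items)
    hits = sum(1 for recs in topk_lists if targeted & set(recs[:k]))
    return hits / len(topk_lists)

recommendations = [
    [3, 8, 15],   # contains targeted item 15 -> hit
    [1, 2, 4],    # no targeted item -> miss
]
print(hit_ratio_at_k(recommendations, targeted_items=[15, 99], k=3))  # 0.5
```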
arXiv Detail & Related papers (2020-05-17T02:10:38Z)
- Influence Function based Data Poisoning Attacks to Top-N Recommender Systems [43.14766256772]
An attacker can trick a recommender system into recommending a target item to as many normal users as possible.
We develop a data poisoning attack to solve this problem.
Our results show that our attacks are effective and outperform existing methods.
arXiv Detail & Related papers (2020-02-19T06:41:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.