Diversity-aware Dual-promotion Poisoning Attack on Sequential Recommendation
- URL: http://arxiv.org/abs/2504.06586v1
- Date: Wed, 09 Apr 2025 05:28:41 GMT
- Title: Diversity-aware Dual-promotion Poisoning Attack on Sequential Recommendation
- Authors: Yuchuan Zhao, Tong Chen, Junliang Yu, Kai Zheng, Lizhen Cui, Hongzhi Yin
- Abstract summary: Sequential recommender systems (SRSs) excel in capturing users' dynamic interests, thus playing a key role in industrial applications. Existing attack mechanisms focus on increasing the ranks of target items in the recommendation list by injecting carefully crafted interactions. We propose a diversity-aware Dual-promotion Sequential Poisoning attack method named DDSP for SRSs.
- Score: 46.58387906461697
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sequential recommender systems (SRSs) excel in capturing users' dynamic interests, thus playing a key role in various industrial applications. The popularity of SRSs has also driven emerging research on their security aspects, where data poisoning attack for targeted item promotion is a typical example. Existing attack mechanisms primarily focus on increasing the ranks of target items in the recommendation list by injecting carefully crafted interactions (i.e., poisoning sequences), which comes at the cost of demoting users' real preferences. Consequently, noticeable recommendation accuracy drops are observed, restricting the stealthiness of the attack. Additionally, the generated poisoning sequences are prone to substantial repetition of target items, which is a result of the unitary objective of boosting their overall exposure and lack of effective diversity regularizations. Such homogeneity not only compromises the authenticity of these sequences, but also limits the attack effectiveness, as it ignores the opportunity to establish sequential dependencies between the target and many more items in the SRS. To address the issues outlined, we propose a Diversity-aware Dual-promotion Sequential Poisoning attack method named DDSP for SRSs. Specifically, by theoretically revealing the conflict between recommendation and existing attack objectives, we design a revamped attack objective that promotes the target item while maintaining the relevance of preferred items in a user's ranking list. We further develop a diversity-aware, auto-regressive poisoning sequence generator, where a re-ranking method is in place to sequentially pick the optimal items by integrating diversity constraints.
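The abstract names two components: a dual-promotion objective that promotes the target item without demoting real preferences, and a diversity-aware, auto-regressive generator with re-ranking. Below is a minimal sketch of what such components could look like, assuming a PyTorch-style surrogate SRS; the function names, the MMR-style re-ranking, and the equal loss weighting are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F


def dual_promotion_loss(scores, target_item, preferred_items):
    # Hypothetical dual-promotion objective: raise the target item's score
    # while also preserving the user's genuinely preferred items, so the
    # attack does not visibly degrade recommendation accuracy.
    #   scores:          (num_items,) next-item logits from a surrogate SRS
    #   target_item:     int index of the item being promoted
    #   preferred_items: LongTensor of items the user actually prefers
    log_probs = F.log_softmax(scores, dim=-1)
    promote = -log_probs[target_item]              # boost target exposure
    preserve = -log_probs[preferred_items].mean()  # keep real preferences relevant
    return promote + preserve


def pick_next_poison_item(scores, chosen, item_emb, lambda_div=0.5):
    # MMR-style diversity-aware re-ranking for auto-regressive generation:
    # trade off the attack score against similarity to already-chosen items,
    # which suppresses the repetitive target-item runs the abstract criticizes.
    #   chosen:   list of item indices already placed in the poisoning sequence
    #   item_emb: (num_items, d) item embedding table
    mmr = (1.0 - lambda_div) * scores
    if chosen:
        idx = torch.tensor(chosen, dtype=torch.long)
        # max cosine similarity of every candidate to the chosen items
        sims = F.cosine_similarity(
            item_emb.unsqueeze(1), item_emb[idx].unsqueeze(0), dim=-1
        ).max(dim=1).values
        mmr = mmr - lambda_div * sims
        mmr[idx] = float("-inf")                   # forbid exact repetition
    return int(mmr.argmax())
```

An auto-regressive generator would call `pick_next_poison_item` once per position, feeding the growing sequence back into the surrogate model to refresh `scores` before choosing the next item.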
Related papers
- Controllable and Stealthy Shilling Attacks via Dispersive Latent Diffusion [47.012167601128745]
We present DLDA, a diffusion-based attack framework that generates highly effective yet indistinguishable fake users. We show that, compared to prior attacks, DLDA consistently achieves stronger item promotion while remaining harder to detect.
arXiv Detail & Related papers (2025-08-04T01:54:32Z)
- Phantom Subgroup Poisoning: Stealth Attacks on Federated Recommender Systems [34.21029914973687]
Federated recommender systems (FedRec) have emerged as a promising solution for delivering personalized recommendations. Existing attacks typically target the entire user group, which compromises stealth and increases the risk of detection. We introduce Spattack, the first targeted poisoning attack designed to manipulate recommendations for specific user subgroups.
arXiv Detail & Related papers (2025-07-07T09:40:16Z)
- DARTS: A Dual-View Attack Framework for Targeted Manipulation in Federated Sequential Recommendation [0.0]
Federated recommendation (FedRec) preserves user privacy by enabling decentralized training of personalized models, but this architecture is inherently vulnerable to adversarial attacks. We propose a novel dual-view attack framework, named DV-FSR, which combines a sampling-based explicit strategy with a contrastive learning-based implicit gradient strategy to orchestrate a coordinated attack.
arXiv Detail & Related papers (2025-07-02T05:57:09Z)
- AIM: Additional Image Guided Generation of Transferable Adversarial Attacks [72.24101555828256]
Transferable adversarial examples highlight the vulnerability of deep neural networks (DNNs) to imperceptible perturbations across various real-world applications.
In this work, we focus on generative approaches for targeted transferable attacks.
We introduce a novel plug-and-play module into the general generator architecture to enhance adversarial transferability.
arXiv Detail & Related papers (2025-01-02T07:06:49Z)
- DV-FSR: A Dual-View Target Attack Framework for Federated Sequential Recommendation [4.980393474423609]
Federated recommendation (FedRec) preserves user privacy by enabling decentralized training of personalized models, but this architecture is inherently vulnerable to adversarial attacks. We propose a novel dual-view attack framework, named DV-FSR, which combines a sampling-based explicit strategy with a contrastive learning-based implicit gradient strategy to orchestrate a coordinated attack.
arXiv Detail & Related papers (2024-09-10T15:24:13Z)
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Model Stealing Attack against Graph Classification with Authenticity, Uncertainty and Diversity [80.16488817177182]
GNNs are vulnerable to the model stealing attack, a nefarious endeavor geared towards duplicating the target model via query permissions.
We introduce three model stealing attacks to adapt to different actual scenarios.
arXiv Detail & Related papers (2023-12-18T05:42:31Z)
- A Tale of HodgeRank and Spectral Method: Target Attack Against Rank Aggregation Is the Fixed Point of Adversarial Game [153.74942025516853]
The intrinsic vulnerability of rank aggregation methods is not well studied in the literature.
In this paper, we focus on the purposeful adversary who desires to designate the aggregated results by modifying the pairwise data.
The effectiveness of the suggested target attack strategies is demonstrated by a series of toy simulations and several real-world data experiments.
arXiv Detail & Related papers (2022-09-13T05:59:02Z)
- Defending Substitution-Based Profile Pollution Attacks on Sequential Recommenders [8.828396559882954]
We propose a substitution-based adversarial attack algorithm, which modifies the input sequence by selecting certain vulnerable elements and substituting them with adversarial items.
We also design an efficient adversarial defense method called Dirichlet neighborhood sampling.
In particular, we represent selected items with one-hot encodings and perform gradient ascent on the encodings to search for the worst-case linear combination of item embeddings in training (see the sketch after this list).
arXiv Detail & Related papers (2022-07-19T00:19:13Z)
- PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion [58.870444954499014]
A common practice is to subsume recommender systems under the decentralized federated learning paradigm.
We present a systematic approach to backdooring federated recommender systems for targeted item promotion.
arXiv Detail & Related papers (2021-10-21T06:48:35Z)
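The Defending Substitution-Based Profile Pollution entry above mentions relaxing a selected item's one-hot encoding and running gradient ascent to find a worst-case linear combination of item embeddings. A rough sketch of that idea, assuming a PyTorch embedding table; `worst_case_embedding`, `loss_fn`, the softmax relaxation, and the step sizes are illustrative assumptions rather than the paper's exact procedure.

```python
import torch


def worst_case_embedding(item_emb, init_item, loss_fn, steps=10, lr=0.1):
    # Relax the selected item's one-hot encoding to a softmax over all items,
    # then run gradient ascent on the relaxed encoding to find the worst-case
    # (loss-maximizing) linear combination of item embeddings.
    #   item_emb:  (num_items, d) embedding table
    #   init_item: index of the item chosen for perturbation
    #   loss_fn:   maps a (d,) embedding to the scalar training loss
    num_items = item_emb.size(0)
    logits = torch.full((num_items,), -10.0)
    logits[init_item] = 10.0                    # start near the one-hot encoding
    logits.requires_grad_(True)
    for _ in range(steps):
        weights = torch.softmax(logits, dim=0)  # stays on the probability simplex
        emb = weights @ item_emb                # linear combination of embeddings
        loss = loss_fn(emb)
        loss.backward()
        with torch.no_grad():
            logits += lr * logits.grad          # ascent step: maximize the loss
            logits.grad.zero_()
    with torch.no_grad():
        return torch.softmax(logits, dim=0) @ item_emb
```

In an adversarial-training loop, the returned worst-case embedding would stand in for the selected item's original embedding during the forward pass, hardening the recommender against such substitutions.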