Controllable and Stealthy Shilling Attacks via Dispersive Latent Diffusion
- URL: http://arxiv.org/abs/2508.01987v1
- Date: Mon, 04 Aug 2025 01:54:32 GMT
- Title: Controllable and Stealthy Shilling Attacks via Dispersive Latent Diffusion
- Authors: Shutong Qiao, Wei Yuan, Junliang Yu, Tong Chen, Quoc Viet Hung Nguyen, Hongzhi Yin
- Abstract summary: We present DLDA, a diffusion-based attack framework that generates highly effective yet indistinguishable fake users. We show that, compared to prior attacks, DLDA consistently achieves stronger item promotion while remaining harder to detect.
- Score: 47.012167601128745
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recommender systems (RSs) are now fundamental to various online platforms, but their dependence on user-contributed data leaves them vulnerable to shilling attacks that can manipulate item rankings by injecting fake users. Although widely studied, most existing attack models fail to meet two critical objectives simultaneously: achieving strong adversarial promotion of target items while maintaining realistic behavior to evade detection. As a result, the true severity of shilling threats that manage to reconcile the two objectives remains underappreciated. To expose this overlooked vulnerability, we present DLDA, a diffusion-based attack framework that can generate highly effective yet indistinguishable fake users by enabling fine-grained control over target promotion. Specifically, DLDA operates in a pre-aligned collaborative embedding space, where it employs a conditional latent diffusion process to iteratively synthesize fake user profiles with precise target item control. To evade detection, DLDA introduces a dispersive regularization mechanism that promotes variability and realism in generated behavioral patterns. Extensive experiments on three real-world datasets and five popular RS models demonstrate that, compared to prior attacks, DLDA consistently achieves stronger item promotion while remaining harder to detect. These results highlight that modern RSs are more vulnerable than previously recognized, underscoring the urgent need for more robust defenses.
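The abstract describes the mechanism only at a high level: fake-user latents are sampled with a conditional latent diffusion process steered toward a target item, while a dispersive regularizer keeps the generated batch varied enough to look like real users. No code accompanies this summary, so the PyTorch sketch below is a hypothetical illustration of that general pattern, not DLDA's implementation: the denoiser architecture, dimensions, noise schedule, and the exact forms of the conditioning and dispersion terms are all assumed stand-ins.

```python
# Hypothetical sketch only: a DDPM-style conditional reverse process with a
# batch-dispersion term, loosely mirroring the mechanism the abstract
# describes. All module names, sizes, and loss forms are assumptions.
import torch
import torch.nn as nn

LATENT_DIM, COND_DIM, T = 64, 16, 50  # assumed sizes, not from the paper

class Denoiser(nn.Module):
    """Predicts the noise in z_t, conditioned on the timestep and a target-item embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + COND_DIM + 1, 256), nn.SiLU(),
            nn.Linear(256, LATENT_DIM),
        )

    def forward(self, z_t, t, cond):
        t_feat = t.float().view(-1, 1) / T  # scalar timestep feature
        return self.net(torch.cat([z_t, t_feat, cond], dim=-1))

def dispersion_grad(z):
    """Gradient of a repulsion loss that spreads the batch of latents apart
    (a stand-in for the paper's dispersive regularization)."""
    z = z.detach().requires_grad_(True)
    diff = z.unsqueeze(0) - z.unsqueeze(1)           # (n, n, d) pairwise differences
    sq_dist = (diff ** 2).sum(-1)                    # pairwise squared distances
    off_diag = ~torch.eye(len(z), dtype=torch.bool)
    loss = -(sq_dist[off_diag] + 1e-6).log().mean()  # lower when samples spread out
    return torch.autograd.grad(loss, z)[0]

@torch.no_grad()
def sample_fake_users(model, target_emb, n=32, disp_weight=0.1):
    """Reverse-diffuse n fake-user latents conditioned on one target item."""
    betas = torch.linspace(1e-4, 0.02, T)
    alphas, alpha_bar = 1.0 - betas, torch.cumprod(1.0 - betas, dim=0)
    cond = target_emb.expand(n, -1)                  # same target for the whole batch
    z = torch.randn(n, LATENT_DIM)                   # start from pure noise
    for t in reversed(range(T)):
        eps = model(z, torch.full((n,), t), cond)    # predicted noise
        # standard DDPM posterior mean
        mean = (z - betas[t] / (1 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        with torch.enable_grad():                    # dispersion term needs autograd
            mean = mean - disp_weight * dispersion_grad(mean)
        z = mean + betas[t].sqrt() * torch.randn_like(z) if t > 0 else mean
    return z  # a collaborative-space decoder would map z to interaction profiles

fake_latents = sample_fake_users(Denoiser(), torch.randn(COND_DIM))
print(fake_latents.shape)  # torch.Size([32, 64])
```

In a full pipeline, the denoiser would first be trained on real user embeddings from the pre-aligned collaborative space, and the returned latents would be decoded into interaction profiles before injection; both steps are outside this sketch.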
Related papers
- Generative Adversarial Evasion and Out-of-Distribution Detection for UAV Cyber-Attacks [6.956559003734227]
This paper introduces a conditional generative adversarial network (cGAN)-based framework for crafting stealthy adversarial attacks that evade IDS mechanisms. Our findings emphasize the importance of advanced probabilistic modeling to strengthen IDS capabilities against adaptive, generative-model-based cyber intrusions.
arXiv Detail & Related papers (2025-06-26T10:56:34Z) - Diversity-aware Dual-promotion Poisoning Attack on Sequential Recommendation [46.58387906461697]
Sequential recommender systems (SRSs) excel at capturing users' dynamic interests and thus play a key role in industrial applications. Existing attack mechanisms focus on raising the ranks of target items in the recommendation list by injecting carefully crafted interactions. We propose a diversity-aware Dual-promotion Sequential Poisoning attack method for SRSs.
arXiv Detail & Related papers (2025-04-09T05:28:41Z) - AnywhereDoor: Multi-Target Backdoor Attacks on Object Detection [9.539021752700823]
AnywhereDoor is a multi-target backdoor attack for object detection. It allows adversaries to make objects disappear, fabricate new ones, or mislabel them, either across all object classes or specific ones. It improves attack success rates by 26% compared to adaptations of existing methods for such flexible control.
arXiv Detail & Related papers (2025-03-09T09:24:24Z) - Transferable Adversarial Attacks on SAM and Its Downstream Models [87.23908485521439]
This paper explores the feasibility of adversarial attacks on various downstream models fine-tuned from the segment anything model (SAM). To enhance the effectiveness of the adversarial attack towards models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm.
arXiv Detail & Related papers (2024-10-26T15:04:04Z) - Watch the Watcher! Backdoor Attacks on Security-Enhancing Diffusion Models [65.30406788716104]
This work investigates the vulnerabilities of security-enhancing diffusion models.
We demonstrate that these models are highly susceptible to DIFF2, a simple yet effective backdoor attack.
Case studies show that DIFF2 can significantly reduce both post-purification and certified accuracy across benchmark datasets and models.
arXiv Detail & Related papers (2024-06-14T02:39:43Z) - ToDA: Target-oriented Diffusion Attacker against Recommendation System [19.546532220090793]
Recommendation systems (RS) are susceptible to malicious attacks where adversaries can manipulate user profiles, leading to biased recommendations.
Recent research often integrates additional modules using generative models to craft these deceptive user profiles.
We propose a novel Target-oriented Diffusion Attack model (ToDA).
It incorporates a pre-trained autoencoder that transforms user profiles into a high-dimensional space, paired with a Latent Diffusion Attacker (LDA), the core component of ToDA.
arXiv Detail & Related papers (2024-01-23T09:12:26Z) - Unveiling Vulnerabilities of Contrastive Recommender Systems to Poisoning Attacks [48.911832772464145]
Contrastive learning (CL) has recently gained prominence in the domain of recommender systems.
This paper identifies a vulnerability of CL-based recommender systems: they are more susceptible to poisoning attacks that aim to promote individual items.
arXiv Detail & Related papers (2023-11-30T04:25:28Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
However, FL faces vulnerabilities such as poisoning attacks, which undermine model integrity through both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The proposed MESAS is the first defense robust against strong adaptive adversaries; it is effective in real-world data scenarios with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z) - Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
arXiv Detail & Related papers (2021-04-22T05:10:55Z)