Ready for Emerging Threats to Recommender Systems? A Graph
Convolution-based Generative Shilling Attack
- URL: http://arxiv.org/abs/2107.10457v1
- Date: Thu, 22 Jul 2021 05:02:59 GMT
- Title: Ready for Emerging Threats to Recommender Systems? A Graph
Convolution-based Generative Shilling Attack
- Authors: Fan Wu, Min Gao, Junliang Yu, Zongwei Wang, Kecheng Liu and Xu Wang
- Abstract summary: Primitive attacks are highly feasible but less effective due to simplistic handcrafted rules.
Upgraded attacks are more powerful but costly and difficult to deploy because they require more knowledge about recommender systems.
In this paper, we explore a novel shilling attack called Graph cOnvolution-based generative shilling ATtack (GOAT) to balance the attacks' feasibility and effectiveness.
- Score: 8.591490818966882
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To explore the robustness of recommender systems, researchers have proposed
various shilling attack models and analyzed their adverse effects. Primitive
attacks are highly feasible but less effective due to simplistic handcrafted
rules, while upgraded attacks are more powerful but costly and difficult to
deploy because they require more knowledge about recommender systems. In this paper,
we explore a novel shilling attack called Graph cOnvolution-based generative
shilling ATtack (GOAT) to balance the attacks' feasibility and effectiveness.
GOAT adopts the primitive attacks' paradigm that assigns items for fake users
by sampling and the upgraded attacks' paradigm that generates fake ratings by a
deep learning-based model. It deploys a generative adversarial network (GAN)
that learns the real rating distribution to generate fake ratings.
Additionally, the generator combines a tailored graph convolution structure
that leverages the correlations between co-rated items to smoothen the fake
ratings and enhance their authenticity. The extensive experiments on two public
datasets evaluate GOAT's performance from multiple perspectives. Our study of
GOAT demonstrates the technical feasibility of building a more powerful and
intelligent attack model at a much-reduced cost, enables analysis of the threat
posed by such attacks, and guides the investigation of necessary prevention measures.
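The abstract's pipeline (assign filler items by sampling, generate fake ratings, smooth them via a graph convolution over co-rated items) can be caricatured in a few lines. Everything below is our illustrative assumption, not the paper's implementation: the popularity-biased sampler, the single averaging step standing in for the tailored graph convolution, and the uniform draw standing in for the GAN generator's output.

```python
import random

def sample_filler_items(item_popularity, k, rng):
    """Sample k distinct items, biased toward popular ones
    (a stand-in for the primitive attacks' item-assignment paradigm)."""
    items = list(item_popularity)
    weights = [item_popularity[i] for i in items]
    chosen = set()
    while len(chosen) < k:
        chosen.add(rng.choices(items, weights=weights)[0])
    return sorted(chosen)

def graph_smooth(ratings, adjacency):
    """One normalized graph-convolution-like step: each fake rating is
    averaged with the ratings of its co-rated neighbours, so correlated
    items end up with correlated (more authentic-looking) ratings."""
    smoothed = {}
    for item, r in ratings.items():
        neigh = [ratings[j] for j in adjacency.get(item, []) if j in ratings]
        smoothed[item] = (r + sum(neigh)) / (1 + len(neigh))
    return smoothed

rng = random.Random(0)
popularity = {"i1": 50, "i2": 30, "i3": 5, "i4": 20}          # toy data, ours
adjacency = {"i1": ["i2"], "i2": ["i1", "i4"], "i4": ["i2"]}  # co-rated graph

items = sample_filler_items(popularity, 3, rng)
raw = {i: rng.uniform(1, 5) for i in items}   # stand-in for GAN generator output
fake_profile = graph_smooth(raw, adjacency)
print(fake_profile)
```

Since smoothing only averages values already in the rating range, the resulting fake profile stays within valid rating bounds while its ratings on co-rated items are pulled toward each other.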
Related papers
- Slot: Provenance-Driven APT Detection through Graph Reinforcement Learning [26.403625710805418]
Advanced Persistent Threats (APTs) represent sophisticated cyberattacks characterized by their ability to remain undetected for extended periods.
We propose Slot, an advanced APT detection approach based on provenance graphs and graph reinforcement learning.
We show Slot's outstanding accuracy, efficiency, adaptability, and robustness in APT detection, with most metrics surpassing state-of-the-art methods.
arXiv Detail & Related papers (2024-10-23T14:28:32Z) - Review-Incorporated Model-Agnostic Profile Injection Attacks on
Recommender Systems [24.60223863559958]
We propose a novel attack framework named R-Trojan, which formulates the attack objectives as an optimization problem and adopts a tailored transformer-based generative adversarial network (GAN) to solve it.
Experiments on real-world datasets demonstrate that R-Trojan greatly outperforms state-of-the-art attack methods on various victim RSs under black-box settings.
arXiv Detail & Related papers (2024-02-14T08:56:41Z) - Securing Recommender System via Cooperative Training [78.97620275467733]
We propose a general framework, Triple Cooperative Defense (TCD), which employs three cooperative models that mutually enhance data.
Considering existing attacks struggle to balance bi-level optimization and efficiency, we revisit poisoning attacks in recommender systems.
We put forth a Game-based Co-training Attack (GCoAttack), which frames the proposed CoAttack and TCD as a game-theoretic process.
arXiv Detail & Related papers (2024-01-23T12:07:20Z) - Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z) - ChatGPT as an Attack Tool: Stealthy Textual Backdoor Attack via Blackbox
Generative Model Trigger [11.622811907571132]
Textual backdoor attacks pose a practical threat to existing systems.
With cutting-edge generative models such as GPT-4 pushing rewriting to extraordinary levels, such attacks are becoming even harder to detect.
We conduct a comprehensive investigation of the role of black-box generative models as a backdoor attack tool, highlighting the importance of researching relative defense strategies.
arXiv Detail & Related papers (2023-04-27T19:26:25Z) - Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA)
Our modified attack does not require random restarts, large number of attack iterations or search for an optimal step-size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
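G-PGA's guidance mechanism is not detailed in this summary; as background, here is a minimal projected-gradient sketch of the PGD family it refines, run on a toy one-dimensional loss (the loss, step size, and budget are all our illustrative choices):

```python
def pgd_step(x, grad, step, x0, eps):
    """One sign-gradient ascent step, projected back into the eps-ball
    around the clean point x0 (the standard PGD update)."""
    x = x + step * (1 if grad > 0 else -1)
    return max(x0 - eps, min(x0 + eps, x))

# toy objective f(x) = -(x - 2)^2, maximized within |x - x0| <= eps
def grad_f(x):
    return -2 * (x - 2)

x0, x, eps, step = 0.0, 0.0, 1.0, 0.25
for _ in range(10):
    x = pgd_step(x, grad_f(x), step, x0, eps)
print(x)  # converges to the projection boundary x0 + eps = 1.0
```

In the real attack setting, x is an input (e.g. an image), f is the model's loss, and the pitfalls G-PGA targets, such as local minima from gradient masking, arise because f is far less well-behaved than this toy quadratic.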
arXiv Detail & Related papers (2022-12-30T18:45:23Z) - SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with
Sparsification [24.053704318868043]
In model poisoning attacks, the attacker reduces the model's performance on targeted sub-tasks by uploading "poisoned" updates.
We introduce SparseFed, a novel defense that uses global top-k update sparsification and device-level gradient clipping to mitigate model poisoning attacks.
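The two mechanisms named in the summary, device-level clipping and global top-k sparsification, can be sketched as follows; the norm bound, k, and the toy updates are our assumptions, not the paper's configuration:

```python
def clip(update, max_norm):
    """Scale an update down so its L2 norm is at most max_norm
    (bounds how much any single, possibly poisoned, device can move the model)."""
    norm = sum(x * x for x in update) ** 0.5
    if norm <= max_norm:
        return update
    scale = max_norm / norm
    return [x * scale for x in update]

def aggregate_topk(updates, k, max_norm):
    """Clip each device's update, average, then keep only the k
    largest-magnitude coordinates of the aggregate."""
    clipped = [clip(u, max_norm) for u in updates]
    n = len(clipped)
    mean = [sum(col) / n for col in zip(*clipped)]
    keep = set(sorted(range(len(mean)), key=lambda i: abs(mean[i]), reverse=True)[:k])
    return [v if i in keep else 0.0 for i, v in enumerate(mean)]

benign = [[0.1, -0.2, 0.05], [0.12, -0.18, 0.0]]
poisoned = [[5.0, 5.0, 5.0]]          # oversized malicious update gets clipped
agg = aggregate_topk(benign + poisoned, k=2, max_norm=0.5)
print(agg)
```

Clipping neutralizes the oversized poisoned update before aggregation, and sparsification zeroes all but the two dominant coordinates of the mean.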
arXiv Detail & Related papers (2021-12-12T16:34:52Z) - Adversarial Robustness of Deep Reinforcement Learning based Dynamic
Recommender Systems [50.758281304737444]
We propose to explore adversarial examples and attack detection on reinforcement learning-based interactive recommendation systems.
We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors.
Then, we augment recommendation systems by detecting potential attacks with a deep learning-based classifier based on the crafted data.
arXiv Detail & Related papers (2021-12-02T04:12:24Z) - Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial
Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z) - Revisiting Adversarially Learned Injection Attacks Against Recommender
Systems [6.920518936054493]
This paper revisits the adversarially-learned injection attack problem.
We show that the exact solution for generating fake users as an optimization problem could lead to a much larger impact.
arXiv Detail & Related papers (2020-08-11T17:30:02Z) - Adversarial Example Games [51.92698856933169]
Adversarial Example Games (AEG) is a framework that models the crafting of adversarial examples.
AEG provides a new way to design adversarial examples by adversarially training a generator and a classifier from a given hypothesis class.
We demonstrate the efficacy of AEG on the MNIST and CIFAR-10 datasets.
arXiv Detail & Related papers (2020-07-01T19:47:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences.