Towards Adversarially Robust Recommendation from Adaptive Fraudster Detection
- URL: http://arxiv.org/abs/2211.11534v3
- Date: Sat, 20 May 2023 08:30:21 GMT
- Title: Towards Adversarially Robust Recommendation from Adaptive Fraudster Detection
- Authors: Yuni Lai, Yulin Zhu, Wenqi Fan, Xiaoge Zhang, Kai Zhou
- Abstract summary: GraphRfi, a GNN-based recommender system, was proposed and shown to effectively mitigate the impact of injected fake users.
We demonstrate that GraphRfi remains vulnerable to attacks due to the supervised nature of its fraudster detection component.
In particular, we propose a powerful poisoning attack, MetaC, against both GNN-based and MF-based recommender systems.
- Score: 9.756305372960423
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The robustness of recommender systems under node injection attacks has
garnered significant attention. Recently, GraphRfi, a GNN-based recommender
system, was proposed and shown to effectively mitigate the impact of injected
fake users. However, we demonstrate that GraphRfi remains vulnerable to attacks
due to the supervised nature of its fraudster detection component, where
obtaining clean labels is challenging in practice. In particular, we propose a
powerful poisoning attack, MetaC, against both GNN-based and MF-based
recommender systems. Furthermore, we analyze why GraphRfi fails under such an
attack. Then, based on the insights from our vulnerability analysis, we
design an adaptive fraudster detection module that explicitly considers label
uncertainty. This module can serve as a plug-in for different recommender
systems, resulting in a robust framework named PDR. Comprehensive experiments
show that our defense approach outperforms other benchmark methods under
attacks. Overall, our research presents an effective framework for integrating
fraudster detection into recommender systems to achieve adversarial robustness.
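The abstract describes the defense only at a high level. Below is a minimal sketch of the plug-in idea, assuming a fraudster detector that emits soft fraud probabilities and a plain matrix-factorization backbone; both are stand-ins, not PDR's actual components. Instead of hard-filtering users on possibly poisoned labels, each rating's loss contribution is down-weighted by the estimated fraud probability.

```python
import torch
import torch.nn as nn

class MF(nn.Module):
    """Plain matrix-factorization recommender (illustrative backbone)."""
    def __init__(self, n_users, n_items, dim=16):
        super().__init__()
        self.user = nn.Embedding(n_users, dim)
        self.item = nn.Embedding(n_items, dim)

    def forward(self, u, i):
        return (self.user(u) * self.item(i)).sum(-1)

def weighted_mse(pred, rating, fraud_prob):
    # Core plug-in idea: ratings from likely fraudsters contribute less,
    # instead of being hard-filtered on possibly poisoned labels.
    w = 1.0 - fraud_prob            # soft weight in [0, 1]
    return (w * (pred - rating) ** 2).mean()

# Toy usage on random data.
n_users, n_items = 100, 50
model = MF(n_users, n_items)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
u = torch.randint(0, n_users, (256,))
i = torch.randint(0, n_items, (256,))
r = torch.rand(256) * 5
fraud_prob = torch.rand(256)        # stand-in for a detector's soft output
for _ in range(10):
    opt.zero_grad()
    loss = weighted_mse(model(u, i), r, fraud_prob)
    loss.backward()
    opt.step()
```

In PDR the detection module is adaptive and trained alongside the recommender; here `fraud_prob` is a fixed placeholder purely to show where the label uncertainty enters the objective.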
Related papers
- Improving the Shortest Plank: Vulnerability-Aware Adversarial Training for Robust Recommender System [60.719158008403376]
Vulnerability-aware Adversarial Training (VAT) is designed to defend against poisoning attacks in recommender systems.
VAT employs a novel vulnerability-aware function to estimate users' vulnerability based on the degree to which the system fits them.
arXiv Detail & Related papers (2024-09-26T02:24:03Z)
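As a rough illustration of the vulnerability-aware idea in the VAT entry above (the summary gives neither the exact vulnerability function nor the perturbation scheme, so the inverse-loss mapping and FGSM-style step below are assumptions):

```python
import torch

def vulnerability_weights(per_user_loss):
    # Assumption: users the model fits best (lowest loss) are treated as
    # the most vulnerable and receive the largest perturbation budget.
    inv = 1.0 / (per_user_loss + 1e-8)
    return inv / inv.max()

def perturb_user_embeddings(emb, grad, weights, eps=0.1):
    # FGSM-style step on user embeddings, scaled per user.
    return emb + eps * weights.unsqueeze(-1) * grad.sign()

# Toy usage with a stand-in per-user loss.
emb = torch.randn(8, 16, requires_grad=True)
per_user_loss = (emb ** 2).sum(dim=1)
per_user_loss.sum().backward()
w = vulnerability_weights(per_user_loss.detach())
adv_emb = perturb_user_embeddings(emb.detach(), emb.grad, w)
```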
- Identifying Backdoored Graphs in Graph Neural Network Training: An Explanation-Based Approach with Novel Metrics [13.93535590008316]
Graph Neural Networks (GNNs) have gained popularity in numerous domains, yet they are vulnerable to backdoor attacks.
We devise a novel detection method that leverages graph-level explanations.
Our results show that our method can achieve high detection performance, marking a significant advancement in safeguarding GNNs against backdoor attacks.
arXiv Detail & Related papers (2024-03-26T22:41:41Z)
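The entry above does not spell out the explanation metrics, so the following is only one plausible instantiation, not the paper's method: score each training graph by how concentrated its edge-level explanation mass is (backdoor triggers tend to form small, highly salient subgraphs) and flag statistical outliers. `edge_saliency` is assumed to come from any graph explainer.

```python
import numpy as np

def concentration_score(edge_saliency, k=5):
    # Fraction of explanation mass on the top-k edges; a compact,
    # dominant subgraph is one possible trigger signature (assumption).
    s = np.sort(np.asarray(edge_saliency))[::-1]
    return s[:k].sum() / (s.sum() + 1e-12)

def flag_suspects(scores, z=2.0):
    # Simple z-score outlier rule over the per-graph scores.
    scores = np.asarray(scores, dtype=float)
    mu, sd = scores.mean(), scores.std() + 1e-12
    return np.where((scores - mu) / sd > z)[0]
```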
- FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outperforms the state of the art on resilient fault prediction benchmarks, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z)
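The FaultGuard entry above names an online adversarial training technique without details; a generic FGSM-based adversarial training step is shown here as a stand-in (the grid-data model, attack, and hyperparameters are all illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def adversarial_step(model, x, y, opt, eps=0.05):
    # Craft an FGSM adversarial example on the fly...
    x = x.clone().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model(x), y), x)
    x_adv = (x + eps * grad.sign()).detach()
    # ...then train on clean and adversarial inputs jointly.
    opt.zero_grad()
    loss = F.cross_entropy(model(x.detach()), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage: feature size, class count, and model are illustrative.
model = nn.Sequential(nn.Linear(24, 64), nn.ReLU(), nn.Linear(64, 4))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
adversarial_step(model, torch.randn(32, 24), torch.randint(0, 4, (32,)), opt)
```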
- Node-aware Bi-smoothing: Certified Robustness against Graph Injection Attacks [5.660584039688214]
Deep Graph Learning (DGL) has emerged as a crucial technique across various domains.
Recent studies have exposed vulnerabilities in DGL models, such as susceptibility to evasion and poisoning attacks.
We introduce the node-aware bi-smoothing framework, the first certifiably robust approach for general node classification tasks against graph injection attacks (GIAs).
arXiv Detail & Related papers (2023-12-07T01:24:48Z)
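A minimal sketch of the smoothing idea in the entry above, assuming a black-box base classifier `classify(adj, feats, node)`: predictions become majority votes over randomly sparsified copies of the graph, which bounds how much a few injected nodes can sway the vote. The certification derivation itself is omitted.

```python
import numpy as np

def smoothed_predict(classify, adj, feats, node,
                     p_node=0.3, p_edge=0.3, n_samples=200, seed=0):
    # Majority vote over randomly sparsified graphs; deleting both nodes
    # and edges (the "bi" part) limits the influence of injected nodes.
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    votes = {}
    for _ in range(n_samples):
        keep = rng.random(n) > p_node
        keep[node] = True                       # never drop the target node
        a = adj * np.outer(keep, keep)          # random node deletion
        a = a * (rng.random(a.shape) > p_edge)  # random edge deletion
        label = classify(a, feats, node)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Toy usage with a dummy degree-threshold classifier.
adj = (np.random.default_rng(1).random((10, 10)) > 0.7).astype(float)
pred = smoothed_predict(lambda a, f, v: int(a[v].sum() > 1), adj, None, node=0)
```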
- Unveiling Vulnerabilities of Contrastive Recommender Systems to Poisoning Attacks [48.911832772464145]
Contrastive learning (CL) has recently gained prominence in the domain of recommender systems.
This paper identifies a vulnerability of CL-based recommender systems: they are more susceptible to poisoning attacks that aim to promote individual items.
arXiv Detail & Related papers (2023-11-30T04:25:28Z)
- Combating Advanced Persistent Threats: Challenges and Solutions [20.81151411772311]
The rise of advanced persistent threats (APTs) has marked a significant cybersecurity challenge.
Provenance graph-based kernel-level auditing has emerged as a promising approach to enhance visibility and traceability.
This paper proposes an efficient and robust APT defense scheme leveraging provenance graphs, including a network-level distributed audit model for cost-effective lateral attack reconstruction.
arXiv Detail & Related papers (2023-09-18T05:46:11Z)
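To make the provenance-graph idea in the APT entry above concrete, a toy sketch (the event schema and traversal are assumptions, not the paper's audit model): audited operations become edges between system entities, and attack reconstruction walks the graph backwards from a suspicious entity.

```python
from collections import defaultdict

def build_provenance_graph(events):
    # Nodes are system entities (processes, files, sockets);
    # edges are audited operations with timestamps.
    graph = defaultdict(list)
    for subj, op, obj, ts in events:
        graph[subj].append((op, obj, ts))
    return graph

def backtrack(graph, target, depth=3):
    # Walk ancestors of a suspicious entity for attack reconstruction.
    frontier, seen = {target}, set()
    for _ in range(depth):
        frontier = {s for s, edges in graph.items()
                    for op, obj, ts in edges
                    if obj in frontier and s not in seen} - seen
        seen |= frontier
    return seen

# Toy usage: reconstruct the chain that dropped a payload.
g = build_provenance_graph([("sshd", "fork", "bash", 0.5),
                            ("bash", "exec", "curl", 1.0),
                            ("curl", "write", "/tmp/payload", 2.0)])
print(backtrack(g, "/tmp/payload"))  # {'curl', 'bash', 'sshd'}
```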
- PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion [58.870444954499014]
A common practice is to subsume recommender systems under the decentralized federated learning paradigm.
We present a systematic approach to backdooring federated recommender systems for targeted item promotion.
arXiv Detail & Related papers (2021-10-21T06:48:35Z)
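A heavily simplified sketch of the item-promotion threat model in the PipAttack entry above, not the paper's actual method: a malicious client fits fake maximal ratings for a target item locally and ships the resulting embedding delta to the server. All names and hyperparameters are illustrative.

```python
import torch

def malicious_update(global_item_emb, user_emb, target_item,
                     boost=5.0, lr=0.1, steps=20):
    # Fit fabricated high ratings for the target item, then return the
    # poisoned embedding delta as this client's federated update.
    item_emb = global_item_emb.clone().requires_grad_(True)
    for _ in range(steps):
        pred = user_emb @ item_emb[target_item]
        loss = ((pred - boost) ** 2).mean()
        g, = torch.autograd.grad(loss, item_emb)
        item_emb = (item_emb - lr * g).detach().requires_grad_(True)
    return item_emb.detach() - global_item_emb

# Toy usage: 50 items, 8 local users, 16-dim embeddings.
delta = malicious_update(torch.randn(50, 16), torch.randn(8, 16), target_item=3)
```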
- Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples [29.385242714424624]
Evaluating the robustness of machine-learning models to adversarial examples is a challenging problem.
We define a set of quantitative indicators which unveil common failures in the optimization of gradient-based attacks.
Our experimental analysis shows that the proposed indicators of failure can be used to visualize, debug and improve current adversarial robustness evaluations.
arXiv Detail & Related papers (2021-06-18T06:57:58Z)
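In the spirit of the indicators named in the entry above (not the paper's exact definitions), one simple check is the fraction of attack iterations that fail to reduce the loss:

```python
import numpy as np

def loss_progress_indicator(loss_trace, tol=1e-4):
    # Fraction of iterations with no meaningful loss decrease; values
    # near 1 suggest a stalled attack and an untrustworthy evaluation.
    diffs = np.diff(np.asarray(loss_trace, dtype=float))
    return float((diffs > -tol).mean())

# A nearly flat trace signals the robustness evaluation should not be
# trusted at face value.
print(loss_progress_indicator([2.3, 2.3, 2.29, 2.3, 2.3]))  # 0.75: mostly stalled
```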
- No Need to Know Physics: Resilience of Process-based Model-free Anomaly Detection for Industrial Control Systems [95.54151664013011]
We present a novel framework to generate adversarial spoofing signals that violate physical properties of the system.
We analyze four anomaly detectors published at top security conferences.
arXiv Detail & Related papers (2020-12-07T11:02:44Z)
- Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
This paper proposes an effective anomaly detection framework that utilizes Bayesian optimization.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, false-alarm rate, and recall.
arXiv Detail & Related papers (2020-08-05T19:29:35Z)
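A small sketch of the framework's core loop, with assumed components throughout: scikit-optimize's GP-based `gp_minimize` stands in for the paper's Bayesian optimization, an IsolationForest for the detector, and synthetic data for ISCX 2012.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import f1_score
from skopt import gp_minimize

# Synthetic stand-in for a labelled flow dataset such as ISCX 2012.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (950, 8)), rng.normal(4, 1, (50, 8))])
y = np.array([0] * 950 + [1] * 50)              # 1 = anomaly

def objective(params):
    n_est, contamination = int(params[0]), params[1]
    det = IsolationForest(n_estimators=n_est, contamination=contamination,
                          random_state=0).fit(X)
    pred = (det.predict(X) == -1).astype(int)   # -1 marks anomalies
    return -f1_score(y, pred)                   # minimize negative F1

res = gp_minimize(objective, [(50, 300), (0.01, 0.2)], n_calls=15,
                  random_state=0)
print("best F1:", -res.fun, "best params:", res.x)
```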
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.