On Detecting Data Pollution Attacks On Recommender Systems Using
Sequential GANs
- URL: http://arxiv.org/abs/2012.02509v1
- Date: Fri, 4 Dec 2020 10:31:28 GMT
- Title: On Detecting Data Pollution Attacks On Recommender Systems Using
Sequential GANs
- Authors: Behzad Shahrasbi, Venugopal Mani, Apoorv Reddy Arrabothu, Deepthi
Sharma, Kannan Achan, Sushant Kumar
- Abstract summary: A malicious actor may be motivated to sway the output of recommender systems by injecting malicious datapoints.
We propose a semi-supervised attack detection algorithm to identify the malicious datapoints.
- Score: 9.497749999148107
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recommender systems are an essential part of any e-commerce platform.
Recommendations are typically generated by aggregating large amounts of user
data. A malicious actor may be motivated to sway the output of such recommender
systems by injecting malicious datapoints to leverage the system for financial
gain. In this work, we propose a semi-supervised attack detection algorithm to
identify the malicious datapoints. We do this by leveraging a portion of the
dataset that has a lower chance of being polluted to learn the distribution of
genuine datapoints. Our proposed approach modifies the Generative Adversarial
Network architecture to take into account the contextual information from user
activity. This allows the model to distinguish legitimate datapoints from the
injected ones.
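The detection logic described in the abstract — learn what genuine datapoints look like from a low-pollution-risk subset, then flag points that deviate — can be sketched as follows. The paper uses a GAN discriminator with sequential/contextual user-activity features; the diagonal-Gaussian scorer below is a deliberately simple stand-in for that learned model, and all function names are hypothetical:

```python
import numpy as np

def fit_genuine_model(trusted):
    # Learn the distribution of genuine datapoints from the subset
    # with a lower chance of being polluted (a diagonal Gaussian here,
    # standing in for the paper's GAN-based model).
    mu = trusted.mean(axis=0)
    sigma = trusted.std(axis=0) + 1e-8
    return mu, sigma

def anomaly_score(points, model):
    # Higher score = less like genuine data (squared Mahalanobis-style
    # distance), analogous to a discriminator's "fake" score.
    mu, sigma = model
    z = (points - mu) / sigma
    return np.sum(z ** 2, axis=-1)

def detect_malicious(points, model, threshold):
    return anomaly_score(points, model) > threshold

rng = np.random.default_rng(0)
trusted = rng.normal(0.0, 1.0, size=(500, 4))   # low-risk subset
genuine = rng.normal(0.0, 1.0, size=(50, 4))    # unseen legitimate points
injected = rng.normal(6.0, 1.0, size=(50, 4))   # shifted attack points

model = fit_genuine_model(trusted)
flags = detect_malicious(np.vstack([genuine, injected]), model, threshold=30.0)
print(flags[:50].mean(), flags[50:].mean())  # genuine mostly unflagged, injected mostly flagged
```

The semi-supervised element is that only the trusted subset is used for fitting; no labels for the injected points are needed at training time.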
Related papers
- Advancing Recommender Systems by mitigating Shilling attacks [0.0]
Collaborative filtering is a widely used method for computing recommendations due to its good performance.
This paper proposes an algorithm to accurately detect such shilling profiles in the system and also studies the effects of those profiles on the recommendations.
arXiv Detail & Related papers (2024-04-24T20:05:39Z) - Model Stealing Attack against Recommender System [85.1927483219819]
Several adversarial attacks have achieved model stealing against recommender systems.
In this paper, we constrain the volume of available target data and queries and utilize auxiliary data, which shares the item set with the target data, to promote model stealing attacks.
arXiv Detail & Related papers (2023-12-18T05:28:02Z) - On the Universal Adversarial Perturbations for Efficient Data-free
Adversarial Detection [55.73320979733527]
We propose a data-agnostic adversarial detection framework, built on the observation that UAPs induce different responses in normal and adversarial samples.
Experimental results show that our method achieves competitive detection performance on various text classification tasks.
arXiv Detail & Related papers (2023-06-27T02:54:07Z) - PORE: Provably Robust Recommender Systems against Data Poisoning Attacks [58.26750515059222]
We propose PORE, the first framework to build provably robust recommender systems.
PORE can transform any existing recommender system to be provably robust against untargeted data poisoning attacks.
We prove that PORE still recommends at least $r$ of the $N$ items to the user under any data poisoning attack, where $r$ is a function of the number of fake users in the attack.
arXiv Detail & Related papers (2023-03-26T01:38:11Z) - Debiasing Learning for Membership Inference Attacks Against Recommender
Systems [79.48353547307887]
Learned recommender systems may inadvertently leak information about their training data, leading to privacy violations.
We investigate privacy threats faced by recommender systems through the lens of membership inference.
We propose a Debiasing Learning for Membership Inference Attacks against recommender systems (DL-MIA) framework that has four main components.
arXiv Detail & Related papers (2022-06-24T17:57:34Z) - FedCL: Federated Contrastive Learning for Privacy-Preserving
Recommendation [98.5705258907774]
FedCL can exploit high-quality negative samples for effective model training with privacy well protected.
We first infer user embeddings from local user data through the local model on each client, and then perturb them with local differential privacy (LDP).
Since individual user embedding contains heavy noise due to LDP, we propose to cluster user embeddings on the server to mitigate the influence of noise.
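The two steps summarized above — clients perturbing their locally inferred embeddings under LDP, and the server clustering the noisy embeddings so the noise averages out — can be sketched roughly as follows. This is not FedCL's implementation; the Laplace mechanism, the 2-means clusterer, and all names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def ldp_perturb(emb, epsilon, sensitivity=1.0):
    # Laplace mechanism: each client adds calibrated noise to its
    # locally inferred user embedding before sending it to the server.
    return emb + rng.laplace(0.0, sensitivity / epsilon, size=emb.shape)

def cluster_two(points, iters=20):
    # Minimal 2-means with farthest-point initialization: the server
    # clusters the noisy embeddings so that per-user LDP noise
    # averages out in the cluster centroids.
    c0 = points[0]
    c1 = points[np.argmax(np.linalg.norm(points - c0, axis=1))]
    centers = np.stack([c0, c1])
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        centers = np.stack([points[labels == j].mean(axis=0) for j in range(2)])
    return centers, labels

# Two well-separated groups of users; each client perturbs locally.
group_a = rng.normal(0.0, 0.2, size=(100, 8))
group_b = rng.normal(5.0, 0.2, size=(100, 8))
noisy = ldp_perturb(np.vstack([group_a, group_b]), epsilon=2.0)
centers, labels = cluster_two(noisy)
```

Although each individual noisy embedding is heavily distorted, the cluster centroids land close to the true group means, which is the noise-mitigation effect the summary describes.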
arXiv Detail & Related papers (2022-04-21T02:37:10Z) - Early Detection of Network Attacks Using Deep Learning [0.0]
A network intrusion detection system (IDS) is a tool used for identifying unauthorized and malicious behavior by observing the network traffic.
We propose an end-to-end early intrusion detection system to stop network attacks before they can cause further damage to the system under attack.
arXiv Detail & Related papers (2022-01-27T16:35:37Z) - PipAttack: Poisoning Federated Recommender Systems for Manipulating Item
Promotion [58.870444954499014]
A common practice is to subsume recommender systems under the decentralized federated learning paradigm.
We present a systematic approach to backdooring federated recommender systems for targeted item promotion.
arXiv Detail & Related papers (2021-10-21T06:48:35Z) - Data Poisoning Attacks to Deep Learning Based Recommender Systems [26.743631067729677]
We conduct the first systematic study of data poisoning attacks against deep learning based recommender systems.
An attacker's goal is to manipulate a recommender system such that the attacker-chosen target items are recommended to many users.
To achieve this goal, our attack injects fake users with carefully crafted ratings to a recommender system.
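The injection step described above — fake users who push the target item while rating filler items to blend in — follows a classic shilling-profile template that can be sketched as below. Note the actual paper crafts the ratings with an optimization procedure; the random filler ratings and all names here are illustrative assumptions:

```python
import numpy as np

def inject_fake_users(ratings, target_item, n_fake, filler_frac=0.1, max_rating=5):
    # Each fake user gives the target item the maximum rating and
    # rates a random "filler" subset of other items so the profile
    # resembles a real user's. (The paper optimizes these ratings;
    # random fillers are a simplified stand-in.)
    rng = np.random.default_rng(0)
    n_items = ratings.shape[1]
    fakes = np.zeros((n_fake, n_items))
    for u in range(n_fake):
        fillers = rng.choice(n_items, int(filler_frac * n_items), replace=False)
        fakes[u, fillers] = rng.integers(1, max_rating + 1, size=len(fillers))
        fakes[u, target_item] = max_rating
    return np.vstack([ratings, fakes])

real = np.random.default_rng(1).integers(0, 6, size=(20, 50)).astype(float)
poisoned = inject_fake_users(real, target_item=7, n_fake=5)
```

A recommender retrained on `poisoned` sees inflated support for item 7, which is exactly the manipulation the detection work in this listing aims to catch.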
arXiv Detail & Related papers (2021-01-07T17:32:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences of its use.