PORE: Provably Robust Recommender Systems against Data Poisoning Attacks
- URL: http://arxiv.org/abs/2303.14601v1
- Date: Sun, 26 Mar 2023 01:38:11 GMT
- Title: PORE: Provably Robust Recommender Systems against Data Poisoning Attacks
- Authors: Jinyuan Jia and Yupei Liu and Yuepeng Hu and Neil Zhenqiang Gong
- Abstract summary: We propose PORE, the first framework to build provably robust recommender systems.
PORE can transform any existing recommender system to be provably robust against untargeted data poisoning attacks.
We prove that PORE still recommends at least $r$ of the $N$ items to the user under any data poisoning attack, where $r$ is a function of the number of fake users in the attack.
- Score: 58.26750515059222
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data poisoning attacks spoof a recommender system to make arbitrary,
attacker-desired recommendations via injecting fake users with carefully
crafted rating scores into the recommender system. We envision a cat-and-mouse
game for such data poisoning attacks and their defenses, i.e., new defenses are
designed to defend against existing attacks and new attacks are designed to
break them. To prevent such a cat-and-mouse game, in this work we propose PORE, the first framework to build provably robust recommender systems. PORE can transform any existing recommender system into one that is provably robust against untargeted data poisoning attacks, which aim to reduce the overall performance
of a recommender system. Suppose PORE recommends top-$N$ items to a user when
there is no attack. We prove that PORE still recommends at least $r$ of the $N$
items to the user under any data poisoning attack, where $r$ is a function of
the number of fake users in the attack. Moreover, we design an efficient
algorithm to compute $r$ for each user. We empirically evaluate PORE on popular
benchmark datasets.
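The abstract does not spell out PORE's construction, so the following is only an illustrative sketch of the general idea: aggregate top-$N$ votes from base recommenders trained on random user subsamples, then lower-bound how many of the clean top-$N$ items must survive an attack with at most $e$ fake users via a simple vote-gap argument. The sub-sampling scheme, the per-fake-user vote bound, and all names below are our assumptions, not the paper's algorithm.

```python
# Illustrative sketch only: the abstract does not detail PORE's internals.
# Assumed here: a bagging-style ensemble (base recommenders on random user
# subsamples, item-vote counting) and a vote-gap bound against e fake users.
import random
from collections import Counter

def ensemble_votes(ratings, base_recommend, user, n_models=100,
                   sample_size=50, top_n=10):
    """Count how often each item appears in the base recommender's top-N
    across n_models random user subsamples."""
    votes = Counter()
    users = list(ratings)
    for _ in range(n_models):
        subset = random.sample(users, min(sample_size, len(users)))
        sub_ratings = {u: ratings[u] for u in subset}
        votes.update(base_recommend(sub_ratings, user, top_n))
    return votes

def certified_r(votes, top_n, e, votes_per_fake_user=1):
    """Lower-bound the number r of clean top-N items still recommended
    under any attack injecting at most e fake users.

    Assumption (ours, not the paper's): one fake user shifts any item's
    vote count by at most votes_per_fake_user, so after the attack every
    count moves by at most delta = e * votes_per_fake_user.
    """
    ranked = [item for item, _ in votes.most_common()]
    top, rest = ranked[:top_n], ranked[top_n:]
    delta = e * votes_per_fake_user
    runner_up = votes[rest[0]] if rest else 0
    # An item certifiably stays in the top-N if its worst-case count still
    # beats the best-case count of the strongest item outside the top-N.
    return sum(1 for item in top if votes[item] - delta > runner_up + delta)
```

In practice, `base_recommend` would wrap the existing recommender that PORE is applied to; the paper's own algorithm computes $r$ for each user, presumably more tightly than this conservative gap test.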
Related papers
- Shadow-Free Membership Inference Attacks: Recommender Systems Are More Vulnerable Than You Thought [43.490918008927]
We propose shadow-free MIAs that directly leverage a user's recommendations for membership inference.
Our attack achieves far higher attack accuracy at low false-positive rates than baseline methods.
arXiv Detail & Related papers (2024-05-11T13:52:22Z)
- Poisoning Federated Recommender Systems with Fake Users [48.70867241987739]
Federated recommendation is a prominent use case within federated learning, yet it remains susceptible to various attacks.
We introduce PoisonFRS, a novel fake-user-based poisoning attack that promotes an attacker-chosen target item.
Experiments on multiple real-world datasets demonstrate that PoisonFRS can effectively promote the attacker-chosen item to a large portion of genuine users.
arXiv Detail & Related papers (2024-02-18T16:34:12Z)
- Model Stealing Attack against Recommender System [85.1927483219819]
Several adversarial attacks have achieved model stealing against recommender systems.
In this paper, we constrain the volume of available target data and queries and utilize auxiliary data, which shares the item set with the target data, to promote model stealing attacks.
arXiv Detail & Related papers (2023-12-18T05:28:02Z)
- Debiasing Learning for Membership Inference Attacks Against Recommender Systems [79.48353547307887]
Learned recommender systems may inadvertently leak information about their training data, leading to privacy violations.
We investigate privacy threats faced by recommender systems through the lens of membership inference.
We propose a Debiasing Learning for Membership Inference Attacks against recommender systems (DL-MIA) framework that has four main components.
arXiv Detail & Related papers (2022-06-24T17:57:34Z)
- Poisoning Deep Learning based Recommender Model in Federated Learning Scenarios [7.409990425668484]
We design attack approaches targeting deep learning based recommender models in federated learning scenarios.
Our well-designed attacks can effectively poison the target models, and their effectiveness sets a new state of the art.
arXiv Detail & Related papers (2022-04-26T15:23:05Z)
- PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion [58.870444954499014]
A common practice is to subsume recommender systems under the decentralized federated learning paradigm.
We present a systematic approach to backdooring federated recommender systems for targeted item promotion.
arXiv Detail & Related papers (2021-10-21T06:48:35Z)
- Ready for Emerging Threats to Recommender Systems? A Graph Convolution-based Generative Shilling Attack [8.591490818966882]
Primitive attacks are highly feasible but less effective due to simplistic handcrafted rules, while upgraded attacks are more powerful but costly and difficult to deploy because they require more knowledge about recommendations.
In this paper, we explore a novel shilling attack called Graph cOnvolution-based generative shilling ATtack (GOAT) to balance the attacks' feasibility and effectiveness.
arXiv Detail & Related papers (2021-07-22T05:02:59Z)
- Data Poisoning Attacks to Deep Learning Based Recommender Systems [26.743631067729677]
We conduct the first systematic study of data poisoning attacks against deep learning based recommender systems.
An attacker's goal is to manipulate a recommender system such that attacker-chosen target items are recommended to many users.
To achieve this goal, our attack injects fake users with carefully crafted ratings into the recommender system (see the sketch after this list).
arXiv Detail & Related papers (2021-01-07T17:32:56Z)
- Composite Adversarial Attacks [57.293211764569996]
Adversarial attack is a technique for deceiving Machine Learning (ML) models.
In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching for the best combination of attack algorithms.
CAA beats 10 top attackers on 11 diverse defenses with less elapsed time.
arXiv Detail & Related papers (2020-12-10T03:21:16Z)
- Influence Function based Data Poisoning Attacks to Top-N Recommender Systems [43.14766256772]
An attacker can trick a recommender system into recommending a target item to as many normal users as possible.
We develop a data poisoning attack to solve this problem.
Our results show that our attacks are effective and outperform existing methods.
arXiv Detail & Related papers (2020-02-19T06:41:51Z)
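Several of the attacks above share one primitive, referenced from the data poisoning entry earlier in this list: injecting fake-user rows with crafted ratings into the training data. Below is a minimal sketch of that primitive, assuming a dense user-item rating matrix; the promotion heuristic (max-rate the target item, randomly rate a few filler items) is a common baseline, not the crafted ratings of any particular paper above.

```python
# Minimal sketch of the shared fake-user-injection primitive, assuming a
# dense (n_users, n_items) rating matrix. The filler heuristic below is a
# common baseline, not any specific attack's crafted ratings.
import numpy as np

def inject_fake_users(ratings, target_item, n_fake, n_filler=10,
                      max_rating=5.0, rng=None):
    """Append n_fake fake-user rows that max-rate target_item and rate
    n_filler random other items to blend in with genuine users.
    In `ratings`, 0 means 'unrated'."""
    if rng is None:
        rng = np.random.default_rng(0)
    n_items = ratings.shape[1]
    fake = np.zeros((n_fake, n_items))
    fake[:, target_item] = max_rating  # promote the attacker-chosen item
    candidates = np.delete(np.arange(n_items), target_item)
    for row in fake:
        fillers = rng.choice(candidates, size=n_filler, replace=False)
        row[fillers] = rng.integers(1, 6, size=n_filler)  # random 1-5 stars
    return np.vstack([ratings, fake])

# E.g., poisoned = inject_fake_users(ratings, target_item=42, n_fake=50)
```

Defenses like PORE aim to bound exactly how much such injected rows can change the final top-$N$ recommendations.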