Discussion about Attacks and Defenses for Fair and Robust Recommendation System Design
- URL: http://arxiv.org/abs/2210.07817v1
- Date: Wed, 28 Sep 2022 13:00:26 GMT
- Title: Discussion about Attacks and Defenses for Fair and Robust Recommendation System Design
- Authors: Mirae Kim, Simon Woo
- Abstract summary: Recommendation systems are vulnerable to malicious user biases, such as fake reviews that promote or demote specific products.
Deep-learning collaborative filtering recommendation systems have been shown to be more vulnerable to this bias.
We discuss the need to design robust recommendation systems for fairness and stability.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Information has exploded on the Internet and mobile devices with the
advent of the big data era. In particular, recommendation systems are widely
used to help consumers who struggle to select the best products among such a
large amount of information. However, recommendation systems are vulnerable to
malicious user biases, such as fake reviews that promote or demote specific
products, as well as attacks that steal personal information. Such biases and
attacks compromise the fairness of the recommendation model and infringe on the
privacy of users and systems by distorting data. Recently, deep-learning
collaborative filtering recommendation systems have been shown to be more
vulnerable to this bias. In this position paper, we examine the effects of bias
that cause various ethical and social issues, and discuss the need to design
robust recommendation systems for fairness and stability.
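To make this vulnerability concrete, here is a minimal sketch of a fake-review ("shilling") attack. The data is entirely synthetic and the recommender is a deliberately simple average-rating ranker, both assumptions for illustration only:

```python
# Minimal sketch (synthetic data): a handful of injected fake profiles
# pushes a promoted item up a simple average-rating ranking.
import numpy as np

rng = np.random.default_rng(0)

# 20 genuine users x 5 items, ratings in {1..5}; 0 = unrated.
ratings = rng.integers(0, 6, size=(20, 5)).astype(float)
target_item = 4  # the item the attacker wants to promote

def mean_rating(r):
    rated = r > 0
    return (r * rated).sum(axis=0) / np.maximum(rated.sum(axis=0), 1)

print("ranking before attack:", np.argsort(-mean_rating(ratings)))

# Inject 10 fake profiles that give the target item the top rating and
# random "filler" ratings elsewhere to look like normal users.
fake = rng.integers(1, 6, size=(10, 5)).astype(float)
fake[:, target_item] = 5.0
poisoned = np.vstack([ratings, fake])

print("ranking after attack: ", np.argsort(-mean_rating(poisoned)))
```

Even this crude injection reliably moves the target item toward the top; learned collaborative-filtering models are attacked the same way, just through the model rather than a raw average.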
Related papers
- Improving the Shortest Plank: Vulnerability-Aware Adversarial Training for Robust Recommender System [60.719158008403376]
Vulnerability-aware Adversarial Training (VAT) is designed to defend against poisoning attacks in recommender systems.
VAT employs a novel vulnerability-aware function to estimate users' vulnerability based on the degree to which the system fits them.
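As a rough illustration of that idea, the sketch below scales adversarial noise by a fit-based vulnerability estimate. The vulnerability proxy and perturbation schedule are assumptions, not the paper's exact formulation:

```python
# Hedged sketch of vulnerability-aware perturbation (VAT-style).
import torch

def perturb_user_embeddings(user_emb, per_user_loss, base_eps=0.05):
    # Lower training loss = the model fits the user better = more
    # vulnerable (per the summary above); normalise to [0, 1].
    vul = torch.softmax(-per_user_loss, dim=0)
    vul = vul / vul.max()
    noise = torch.randn_like(user_emb)
    noise = noise / noise.norm(dim=1, keepdim=True).clamp_min(1e-8)
    # Vulnerable users receive stronger adversarial noise in training.
    return user_emb + base_eps * vul.unsqueeze(1) * noise
```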
arXiv Detail & Related papers (2024-09-26T02:24:03Z)
- A Deep Dive into Fairness, Bias, Threats, and Privacy in Recommender Systems: Insights and Future Research [45.86892639035389]
This study explores fairness, bias, threats, and privacy in recommender systems.
It examines how algorithmic decisions can unintentionally reinforce biases or marginalize specific user and item groups.
The study suggests future research directions to improve recommender systems' robustness, fairness, and privacy.
arXiv Detail & Related papers (2024-09-19T11:00:35Z)
- Transparency, Privacy, and Fairness in Recommender Systems [0.19036571490366497]
This habilitation elaborates on aspects related to (i) transparency and cognitive models, (ii) privacy and limited preference information, and (iii) fairness and popularity bias in recommender systems.
arXiv Detail & Related papers (2024-06-17T08:37:14Z)
- Shadow-Free Membership Inference Attacks: Recommender Systems Are More Vulnerable Than You Thought [43.490918008927]
We propose shadow-free MIAs that directly leverage a user's recommendations for membership inference.
Our attack achieves far higher accuracy at low false-positive rates than the baselines.
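A minimal sketch of such a shadow-free membership signal, where the similarity measure and decision rule are illustrative assumptions: if a user's recommendations track their own interactions much more closely than generic popular items, the user was likely in the training set.

```python
# Hedged sketch of a shadow-free membership score from item embeddings.
import numpy as np

def membership_score(rec_vecs, interact_vecs, popular_vecs):
    def mean_cos(a, b):
        a = a / np.linalg.norm(a, axis=1, keepdims=True)
        b = b / np.linalg.norm(b, axis=1, keepdims=True)
        return float((a @ b.T).mean())
    personal = mean_cos(rec_vecs, interact_vecs)  # fit to own history
    generic = mean_cos(rec_vecs, popular_vecs)    # fit to the crowd
    return personal - generic  # large gap -> predict "member"

# usage: predict membership when membership_score(...) > tau,
# for a threshold tau calibrated on public data.
```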
arXiv Detail & Related papers (2024-05-11T13:52:22Z)
- Poisoning Federated Recommender Systems with Fake Users [48.70867241987739]
Federated recommendation is a prominent use case within federated learning, yet it remains susceptible to various attacks.
We introduce a novel fake user based poisoning attack named PoisonFRS to promote the attacker-chosen targeted item.
Experiments on multiple real-world datasets demonstrate that PoisonFRS can effectively promote the attacker-chosen item to a large portion of genuine users.
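A hedged sketch of what such a fake-user update could look like; the centroid-based direction and the scaling factor are assumptions based on the summary above, not the paper's exact update rule:

```python
# Sketch of a PoisonFRS-style fake-client update: push the target
# item's embedding toward the centroid of popular items so it gets
# recommended broadly after server-side averaging.
import numpy as np

def fake_client_update(item_emb, target_id, popular_ids, boost=5.0):
    centroid = item_emb[popular_ids].mean(axis=0)
    update = np.zeros_like(item_emb)
    # Only the target row is modified, scaled up so the direction
    # survives averaging with many benign client updates.
    update[target_id] = boost * (centroid - item_emb[target_id])
    return update
```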
arXiv Detail & Related papers (2024-02-18T16:34:12Z)
- User Consented Federated Recommender System Against Personalized Attribute Inference Attack [55.24441467292359]
We propose a user-consented federated recommendation system (UC-FedRec) to flexibly satisfy the different privacy needs of users.
UC-FedRec allows users to self-define their privacy preferences to meet various demands and makes recommendations with user consent.
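A minimal sketch of what self-defined privacy preferences could look like on the client side; the preference schema and attribute names are hypothetical, not UC-FedRec's actual interface:

```python
# Hypothetical per-user privacy preference a client could enforce
# before uploading updates in a federated recommender.
from dataclasses import dataclass

@dataclass
class PrivacyPreference:
    protect_gender: bool = True
    protect_age: bool = False
    noise_scale: float = 0.1  # protection strength the user consents to

def attributes_to_decorrelate(pref: PrivacyPreference) -> list[str]:
    attrs = []
    if pref.protect_gender:
        attrs.append("gender")
    if pref.protect_age:
        attrs.append("age")
    return attrs

# Each client would then strip or noise the update components that an
# attribute-inference adversary could exploit for the listed attributes.
```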
arXiv Detail & Related papers (2023-12-23T09:44:57Z)
- Two-Stage Neural Contextual Bandits for Personalised News Recommendation [50.3750507789989]
Existing personalised news recommendation methods focus on exploiting user interests and ignore exploration in recommendation.
We build on contextual bandits recommendation strategies which naturally address the exploitation-exploration trade-off.
We use deep learning representations for users and news, and generalise the neural upper confidence bound (UCB) policies to generalised additive UCB and bilinear UCB.
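A small sketch of a bilinear UCB score of the kind described; the feature map and confidence term follow standard LinUCB-style algebra, not necessarily the paper's exact derivation:

```python
# Bilinear UCB score: exploit (predicted click) plus explore
# (confidence width over bilinear features vec(u n^T)).
import numpy as np

def bilinear_ucb_score(u, n, W, A_inv, alpha=1.0):
    """u: user embedding, n: news embedding, W: learned interaction
    matrix, A_inv: inverse design matrix over observed (u, n) pairs."""
    z = np.outer(u, n).ravel()                 # bilinear features
    exploit = u @ W @ n                        # predicted click score
    explore = alpha * np.sqrt(z @ A_inv @ z)   # confidence width
    return exploit + explore
```

The article with the highest score is shown; observed clicks update W and the design matrix, which shrinks the confidence width for similar user-news pairs and so balances exploitation against exploration.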
arXiv Detail & Related papers (2022-06-26T12:07:56Z)
- PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion [58.870444954499014]
A common practice is to subsume recommender systems under the decentralized federated learning paradigm.
We present a systematic approach to backdooring federated recommender systems for targeted item promotion.
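A hedged sketch of a promotion objective a malicious federated client might optimize; the loss form below is a generic BPR-style assumption, not the paper's exact backdoor mechanism:

```python
# Malicious-client objective: make every sampled user rank the
# attacker's target item above a random negative item.
import torch
import torch.nn.functional as F

def promotion_loss(user_emb, item_emb, target_id):
    scores = user_emb @ item_emb.t()        # (batch, num_items)
    target = scores[:, target_id]
    neg_idx = torch.randint(item_emb.size(0), (user_emb.size(0), 1))
    neg = scores.gather(1, neg_idx).squeeze(1)
    # softplus(neg - target) ~ -log sigmoid(target - neg), BPR-style.
    return F.softplus(neg - target).mean()
```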
arXiv Detail & Related papers (2021-10-21T06:48:35Z)
- Membership Inference Attacks Against Recommender Systems [33.66394989281801]
We make the first attempt on quantifying the privacy leakage of recommender systems through the lens of membership inference.
Our attack operates at the user level rather than the data-sample level.
A shadow recommender is established to derive the labeled training data for training the attack model.
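A minimal sketch of that shadow-recommender pipeline on synthetic data; the distance feature and classifier choice follow the common membership-inference recipe and are assumptions here:

```python
# Shadow-recommender MIA sketch: learn a classifier on data where
# membership is known, then transfer it to the target system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def user_feature(rec_vecs, interact_vecs):
    # Members' recommendations tend to sit close to their history.
    return np.linalg.norm(rec_vecs.mean(0) - interact_vecs.mean(0))

# Synthetic shadow data: members get recommendations near their
# interactions, non-members get roughly unrelated ones.
def shadow_user(member):
    interact = rng.normal(size=(10, 16))
    rec = interact + rng.normal(scale=0.2 if member else 2.0,
                                size=(10, 16))
    return user_feature(rec, interact)

X = np.array([[shadow_user(m)] for m in [True] * 50 + [False] * 50])
y = np.array([1] * 50 + [0] * 50)
attack_model = LogisticRegression().fit(X, y)
# attack_model is then applied to features computed from the target
# recommender's outputs to predict per-user membership.
```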
arXiv Detail & Related papers (2021-09-16T15:19:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.