Gender In Gender Out: A Closer Look at User Attributes in Context-Aware
Recommendation
- URL: http://arxiv.org/abs/2207.14218v1
- Date: Thu, 28 Jul 2022 16:37:50 GMT
- Title: Gender In Gender Out: A Closer Look at User Attributes in Context-Aware
Recommendation
- Authors: Manel Slokom, Özlem Özgöbek, Martha Larson
- Abstract summary: We show that user attributes do not always improve recommendation.
We investigate the amount of information about users that "survives" from the training data into the recommendation lists produced by the recommender.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper studies user attributes in light of current concerns in the
recommender system community: diversity, coverage, calibration, and data
minimization. In experiments with a conventional context-aware recommender
system that leverages side information, we show that user attributes do not
always improve recommendation. Then, we demonstrate that user attributes can
negatively impact diversity and coverage. Finally, we investigate the amount of
information about users that "survives" from the training data into the
recommendation lists produced by the recommender. This information is a weak
signal that could in the future be exploited for calibration or studied further
as a privacy leak.
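The "survival" of a user attribute can be quantified with an attribute-inference probe: train a classifier to predict the attribute from the recommendation lists alone, and compare its accuracy to chance. The sketch below is a hypothetical illustration on synthetic data (the planted item skew, list sizes, and the simple centroid classifier are all assumptions, not the paper's actual protocol):

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, top_n = 400, 50, 10
gender = rng.integers(0, 2, n_users)  # synthetic binary attribute

# Simulate top-N recommendation lists with a planted skew:
# items 0-9 appear more often for group 1, items 10-19 for group 0.
lists = np.zeros((n_users, n_items))
for u in range(n_users):
    probs = np.ones(n_items)
    probs[np.arange(10) if gender[u] else np.arange(10, 20)] = 4.0
    probs /= probs.sum()
    recs = rng.choice(n_items, size=top_n, replace=False, p=probs)
    lists[u, recs] = 1.0  # multi-hot encoding of the user's top-N list

# Simple linear probe: score users by per-item rate differences
# between the two groups, estimated on a training split.
train, test = slice(0, 300), slice(300, None)
mu1 = lists[train][gender[train] == 1].mean(axis=0)
mu0 = lists[train][gender[train] == 0].mean(axis=0)
w = mu1 - mu0                           # per-item rate difference
s_train = lists[train] @ w
t = 0.5 * (s_train[gender[train] == 1].mean()
           + s_train[gender[train] == 0].mean())
pred = (lists[test] @ w > t).astype(int)
acc = (pred == gender[test]).mean()
print(f"attribute-inference accuracy: {acc:.2f} (chance = 0.50)")
```

Accuracy above 0.50 on held-out users indicates that the attribute leaks from the training data into the recommendation lists, which is the weak signal the abstract refers to.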
Related papers
- The MovieLens Beliefs Dataset: Collecting Pre-Choice Data for Online Recommender Systems [0.0]
This paper introduces a method for collecting user beliefs about unexperienced items - a critical predictor of choice behavior.
We implement this method on the MovieLens platform, resulting in a rich dataset that combines user ratings, beliefs, and observed recommendations.
arXiv Detail & Related papers (2024-05-17T19:06:06Z) - User Consented Federated Recommender System Against Personalized
Attribute Inference Attack [55.24441467292359]
We propose a user-consented federated recommendation system (UC-FedRec) to flexibly satisfy the different privacy needs of users.
UC-FedRec allows users to self-define their privacy preferences to meet various demands and makes recommendations with user consent.
arXiv Detail & Related papers (2023-12-23T09:44:57Z) - User-Controllable Recommendation via Counterfactual Retrospective and
Prospective Explanations [96.45414741693119]
We present a user-controllable recommender system that seamlessly integrates explainability and controllability.
By providing both retrospective and prospective explanations through counterfactual reasoning, users can customize their control over the system.
arXiv Detail & Related papers (2023-08-02T01:13:36Z) - Improving Recommendation Fairness via Data Augmentation [66.4071365614835]
Collaborative filtering based recommendation learns users' preferences from all users' historical behavior data and is widely used to facilitate decision making.
A recommender system is considered unfair when it does not perform equally well for different user groups according to users' sensitive attributes.
In this paper, we study how to improve recommendation fairness from the data augmentation perspective.
arXiv Detail & Related papers (2023-02-13T13:11:46Z) - Two-Stage Neural Contextual Bandits for Personalised News Recommendation [50.3750507789989]
Existing personalised news recommendation methods focus on exploiting user interests and ignore exploration in recommendation.
We build on contextual bandits recommendation strategies which naturally address the exploitation-exploration trade-off.
We use deep learning representations for users and news, and generalise the neural upper confidence bound (UCB) policies to generalised additive UCB and bilinear UCB.
arXiv Detail & Related papers (2022-06-26T12:07:56Z) - Unlearning Protected User Attributes in Recommendations with Adversarial
Training [10.268369743620159]
Collaborative filtering algorithms capture underlying consumption patterns, including the ones specific to particular demographics or protected information of users.
These encoded biases can influence the decision of a recommendation system towards further separation of the contents provided to various demographic subgroups.
In this work, we investigate the possibility and challenges of removing specific protected information of users from the learned interaction representations of an RS algorithm.
arXiv Detail & Related papers (2022-06-09T13:36:28Z) - CausPref: Causal Preference Learning for Out-of-Distribution
Recommendation [36.22965012642248]
Current recommender systems are still vulnerable to distribution shifts of users and items in realistic scenarios.
We propose to incorporate the recommendation-specific DAG learner into a novel causal preference-based recommendation framework named CausPref.
Our approach significantly surpasses the benchmark models under various types of out-of-distribution settings.
arXiv Detail & Related papers (2022-02-08T16:42:03Z) - Recommending with Recommendations [1.1602089225841632]
Recommendation systems often draw upon sensitive user information in making predictions.
We show how to address this deficiency by basing a service's recommendation engine upon recommendations from other existing services.
In our setting, the user's (potentially sensitive) information belongs to a high-dimensional latent space.
arXiv Detail & Related papers (2021-12-02T04:30:15Z) - ELIXIR: Learning from User Feedback on Explanations to Improve
Recommender Models [26.11434743591804]
We devise a human-in-the-loop framework, called ELIXIR, where user feedback on explanations is leveraged for pairwise learning of user preferences.
ELIXIR leverages feedback on pairs of recommendations and explanations to learn user-specific latent preference vectors.
Our framework is instantiated using generalized graph recommendation via Random Walk with Restart.
arXiv Detail & Related papers (2021-02-15T13:43:49Z) - Knowledge Transfer via Pre-training for Recommendation: A Review and
Prospect [89.91745908462417]
We show the benefits of pre-training to recommender systems through experiments.
We discuss several promising directions for future research for recommender systems with pre-training.
arXiv Detail & Related papers (2020-09-19T13:06:27Z) - Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
arXiv Detail & Related papers (2020-06-03T05:04:38Z)
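The contextual-bandit strategy mentioned in the Two-Stage Neural Contextual Bandits entry above can be illustrated with the classic disjoint LinUCB policy, the linear precursor of the neural UCB variants that paper generalises. The sketch below is a minimal illustration on synthetic data, not the paper's two-stage neural method; arm count, feature dimension, and the toy reward model are assumptions:

```python
import numpy as np

class LinUCB:
    """Disjoint LinUCB: one ridge-regression estimate per arm (article)."""
    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]    # regularised design matrix per arm
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # reward-weighted context sums

    def select(self, x):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                            # ridge estimate of arm weights
            # mean reward estimate plus exploration bonus (the UCB)
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Toy simulation: 3 "articles" with hidden linear reward weights.
rng = np.random.default_rng(1)
true_w = rng.normal(size=(3, 4))
bandit = LinUCB(n_arms=3, dim=4, alpha=1.0)
hits = 0
for t in range(2000):
    x = rng.normal(size=4)                   # user/context features
    arm = bandit.select(x)
    reward = true_w[arm] @ x + 0.1 * rng.normal()
    bandit.update(arm, x, reward)
    if t >= 1500:                            # measure after a warm-up phase
        hits += arm == int(np.argmax(true_w @ x))
opt_rate = hits / 500
print(f"optimal-arm rate in last 500 rounds: {opt_rate:.2f}")
```

The exploration bonus shrinks as an arm accumulates observations, which is how the policy addresses the exploitation-exploration trade-off the summary describes.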
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.