Collaborative filtering to capture AI user's preferences as norms
- URL: http://arxiv.org/abs/2308.02542v2
- Date: Thu, 10 Aug 2023 20:55:47 GMT
- Title: Collaborative filtering to capture AI user's preferences as norms
- Authors: Marc Serramia, Natalia Criado, Michael Luck
- Abstract summary: Current methods require too much user involvement and fail to capture true preferences.
We argue that a new perspective is required when constructing norms.
Inspired by recommender systems, we believe that collaborative filtering can offer a suitable approach.
- Score: 0.4640835690336652
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Customising AI technologies to each user's preferences is fundamental to them
functioning well. Unfortunately, current methods require too much user
involvement and fail to capture their true preferences. In fact, to avoid the
nuisance of manually setting preferences, users usually accept the default
settings even if these do not conform to their true preferences. Norms can be
useful to regulate behaviour and ensure it adheres to user preferences but,
while the literature has thoroughly studied norms, most proposals take a formal
perspective. Indeed, while there has been some research on constructing norms
to capture a user's privacy preferences, these methods rely on domain knowledge
which, in the case of AI technologies, is difficult to obtain and maintain. We
argue that a new perspective is required when constructing norms, which is to
exploit the large amount of preference information readily available from whole
systems of users. Inspired by recommender systems, we believe that
collaborative filtering can offer a suitable approach to identifying a user's
norm preferences without excessive user involvement.
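To make the idea concrete, below is a minimal, illustrative sketch (not the authors' algorithm; the function name and toy data are hypothetical): user-based collaborative filtering over a user x setting matrix of privacy decisions, where a missing preference is predicted from the most similar users and could then be adopted as a norm.

```python
# A minimal, illustrative sketch (not the paper's algorithm): user-based
# collaborative filtering over a user x setting matrix of privacy decisions,
# with entries +1 (allow), -1 (deny) and 0 (unknown). A missing preference is
# predicted from the most similar users and could then be adopted as a norm.
import numpy as np

def predict_preference(P, user, setting, k=3):
    """Predict P[user, setting] from the k most similar users who decided it."""
    candidates = np.where(P[:, setting] != 0)[0]
    candidates = candidates[candidates != user]
    if candidates.size == 0:
        return 0.0  # no information from other users: leave the norm undecided

    def cosine(a, b):
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        return 0.0 if na == 0 or nb == 0 else float(a @ b) / (na * nb)

    sims = np.array([cosine(P[user], P[u]) for u in candidates])
    order = np.argsort(-sims)[:k]              # indices of the k nearest users
    top, weights = candidates[order], sims[order]
    if np.allclose(np.abs(weights).sum(), 0):
        return 0.0
    # Similarity-weighted vote of the neighbours' decisions for this setting.
    return float(weights @ P[top, setting] / np.abs(weights).sum())

# Toy data: rows are users, columns are privacy settings (all hypothetical).
P = np.array([
    [ 1, -1,  1,  0],   # target user; setting 3 has never been set
    [ 1, -1,  1, -1],
    [ 1, -1,  0, -1],
    [-1,  1, -1,  1],
], dtype=float)

score = predict_preference(P, user=0, setting=3)
print("predicted preference:", score)  # negative, so a 'deny' norm would be suggested
```

The neighbourhood method is chosen here purely for brevity; the point is only that a missing preference can be inferred from the wider user population without further involving the user.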
Related papers
- ComPO: Community Preferences for Language Model Personalization [122.54846260663922]
ComPO is a method to personalize preference optimization in language models.
We collect and release ComPRed, a question answering dataset with community-level preferences from Reddit.
arXiv Detail & Related papers (2024-10-21T14:02:40Z) - User Consented Federated Recommender System Against Personalized Attribute Inference Attack [55.24441467292359]
We propose a user-consented federated recommendation system (UC-FedRec) to flexibly satisfy the different privacy needs of users.
UC-FedRec allows users to self-define their privacy preferences to meet various demands and makes recommendations with user consent.
arXiv Detail & Related papers (2023-12-23T09:44:57Z) - Separating and Learning Latent Confounders to Enhancing User Preferences Modeling [6.0853798070913845]
We propose a novel framework, Separating and Learning Latent Confounders For Recommendation (SLFR)
SLFR obtains the representation of unmeasured confounders to identify the counterfactual feedback by disentangling user preferences and unmeasured confounders.
Experiments in five real-world datasets validate the advantages of our method.
arXiv Detail & Related papers (2023-11-02T08:42:50Z) - Predicting Privacy Preferences for Smart Devices as Norms [14.686788596611251]
We present a collaborative filtering approach to predict user preferences as norms.
Using a dataset of privacy preferences of smart assistant users, we test the accuracy of our predictions.
arXiv Detail & Related papers (2023-02-21T13:07:30Z) - Latent User Intent Modeling for Sequential Recommenders [92.66888409973495]
Sequential recommender models learn to predict the next items a user is likely to interact with based on their interaction history on the platform.
Most sequential recommenders however lack a higher-level understanding of user intents, which often drive user behaviors online.
Intent modeling is thus critical for understanding users and optimizing long-term user experience.
arXiv Detail & Related papers (2022-11-17T19:00:24Z) - Preference Dynamics Under Personalized Recommendations [12.89628003097857]
We study whether a phenomenon akin to polarization occurs when users receive personalized content recommendations.
A more interesting objective is to understand under what conditions a recommendation algorithm can ensure stationarity of a user's preferences.
arXiv Detail & Related papers (2022-05-25T19:29:53Z) - Estimating and Penalizing Induced Preference Shifts in Recommender Systems [10.052697877248601]
We argue that system designers should: estimate the shifts a recommender would induce; evaluate whether such shifts would be undesirable; and even actively optimize to avoid problematic shifts.
We do this by using historical user interaction data to train a predictive user model which implicitly contains their preference dynamics.
In simulated experiments, we show that our learned preference dynamics model is effective in estimating user preferences and how they would respond to new recommenders.
arXiv Detail & Related papers (2022-04-25T21:04:46Z) - Leveraging Privacy Profiles to Empower Users in the Digital Society [7.350403786094707]
Privacy and ethics of citizens are at the core of the concerns raised by our increasingly digital society.
We focus on the privacy dimension and contribute a step in the above direction through an empirical study on an existing dataset collected from the fitness domain.
The results reveal that a compact set of semantic-driven questions helps distinguish users better than a complex domain-dependent one.
arXiv Detail & Related papers (2022-04-01T15:31:50Z) - The Stereotyping Problem in Collaboratively Filtered Recommender Systems [77.56225819389773]
We show that matrix factorization-based collaborative filtering algorithms induce a kind of stereotyping.
If preferences for a set of items are anti-correlated in the general user population, then those items may not be recommended together to a user (a toy illustration of this effect appears after the list).
We propose an alternative modelling fix, which is designed to capture the diverse multiple interests of each user.
arXiv Detail & Related papers (2021-06-23T18:37:47Z) - Reward Constrained Interactive Recommendation with Natural Language Feedback [158.8095688415973]
We propose a novel constraint-augmented reinforcement learning (RL) framework to efficiently incorporate user preferences over time.
Specifically, we leverage a discriminator to detect recommendations that violate the user's historical preferences.
Our proposed framework is general and is further extended to the task of constrained text generation.
arXiv Detail & Related papers (2020-05-04T16:23:34Z) - Unsupervised Model Personalization while Preserving Privacy and Scalability: An Open Problem [55.21502268698577]
This work investigates the task of unsupervised model personalization, adapted to continually evolving, unlabeled local user images.
We provide a novel Dual User-Adaptation framework (DUA) to explore the problem.
This framework flexibly disentangles user-adaptation into model personalization on the server and local data regularization on the user device.
arXiv Detail & Related papers (2020-03-30T09:35:12Z)
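As a side note on the stereotyping result listed above, the toy example below (our own illustration under simplifying assumptions, not the construction from that paper) shows why a low-rank factorisation struggles with anti-correlated items: with rank 1, the factors of the two items take opposite signs, so no user can be predicted to like both at once.

```python
# Toy illustration: a rank-1 matrix factorisation cannot score two
# anti-correlated items highly for the same user, even if that user
# genuinely likes both.
import numpy as np

# Ratings for items [A, B]: most users like exactly one of the two, so A and B
# are anti-correlated in the population; the last user likes both.
R = np.array([
    [ 1, -1],
    [ 1, -1],
    [-1,  1],
    [-1,  1],
    [ 1,  1],   # the user with diverse interests
], dtype=float)

# Rank-1 truncated SVD as the factorisation model.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
R_hat = s[0] * np.outer(U[:, 0], Vt[0])

print("item factors:", Vt[0])                  # opposite signs for A and B
print("predictions for last user:", R_hat[-1])
# Because the two item factors point in opposite directions, every user's
# predicted scores for A and B are negatives of each other; for the diverse
# user they collapse to ~0, so A and B are never recommended together.
```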
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.