Editable User Profiles for Controllable Text Recommendation
- URL: http://arxiv.org/abs/2304.04250v3
- Date: Mon, 16 Oct 2023 21:47:20 GMT
- Title: Editable User Profiles for Controllable Text Recommendation
- Authors: Sheshera Mysore, Mahmood Jasim, Andrew McCallum, Hamed Zamani
- Abstract summary: We propose LACE, a novel concept value bottleneck model for controllable text recommendations.
LACE represents each user with a succinct set of human-readable concepts.
It learns personalized representations of the concepts based on user documents.
- Score: 66.00743968792275
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Methods for making high-quality recommendations often rely on learning latent
representations from interaction data. These methods, while performant, do not
provide ready mechanisms for users to control the recommendations they receive.
Our work tackles this problem by proposing LACE, a novel concept value
bottleneck model for controllable text recommendations. LACE represents each
user with a succinct set of human-readable concepts through retrieval given
user-interacted documents and learns personalized representations of the
concepts based on user documents. This concept-based user profile is then
leveraged to make recommendations. The design of our model affords control over
the recommendations through a number of intuitive interactions with a
transparent user profile. We first establish the quality of recommendations
obtained from LACE in an offline evaluation on three recommendation tasks
spanning six datasets in warm-start, cold-start, and zero-shot setups. Next, we
validate the controllability of LACE under simulated user interactions.
Finally, we implement LACE in an interactive controllable recommender system
and conduct a user study to demonstrate that users are able to improve the
quality of recommendations they receive through interactions with an editable
user profile.
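
The abstract's pipeline (retrieve readable concepts from a user's documents, attach a personalized value vector to each concept, score candidates against the profile) can be illustrated with a toy sketch. All names, the attention-weighted pooling, and the max-over-concepts scoring rule below are illustrative assumptions, not the paper's exact formulation:

```python
# Hedged sketch of a concept-value bottleneck recommender in the spirit of
# LACE. The embedding function, pooling, and scoring rule are illustrative
# assumptions, not the paper's exact method.
import numpy as np

DIM = 8

def embed(text: str) -> np.ndarray:
    """Toy unit-norm embedding; a real system would use a text encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(DIM)
    return v / np.linalg.norm(v)

def build_profile(concepts, user_docs):
    """Attach a personalized value vector to each human-readable concept:
    here, the attention-weighted average of the user's document embeddings."""
    doc_vecs = np.stack([embed(d) for d in user_docs])
    profile = {}
    for c in concepts:
        attn = np.exp(doc_vecs @ embed(c))
        attn /= attn.sum()
        profile[c] = attn @ doc_vecs  # personalized "value" for concept c
    return profile

def score(profile, candidate_doc):
    """Score a candidate by its best-matching concept value (max-pooling)."""
    d = embed(candidate_doc)
    return max(float(v @ d) for v in profile.values())

profile = build_profile(
    concepts=["causal inference", "recommender systems"],
    user_docs=["breaking feedback loops with causal adjustment",
               "latent intent modeling for recommenders"],
)
# Because the profile is a transparent mapping from readable concepts to
# values, "editing" it is just dict manipulation: deleting a concept
# removes its influence on all subsequent scores.
del profile["causal inference"]
ranked = sorted(["a paper on causal effects", "a paper on user intents"],
                key=lambda doc: score(profile, doc), reverse=True)
```

The point of the bottleneck is visible in the last step: because recommendations flow only through the named concepts, removing or adding a concept gives the user direct, interpretable control over the ranking.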
Related papers
- Explainable Active Learning for Preference Elicitation [0.0]
We employ Active Learning (AL) to solve the addressed problem with the objective of maximizing information acquisition with minimal user effort.
AL selects informative data points from a large unlabeled set and queries an oracle to label them.
It harvests user feedback (given for the system's explanations on the presented items) over informative samples to update an underlying machine learning (ML) model.
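
The blurb describes a standard pool-based active learning loop: pick the items the model is least certain about and ask the user (oracle) about those. A generic uncertainty-sampling sketch of that selection step (illustrative only, not this paper's specific criterion):

```python
# Generic pool-based active learning via uncertainty sampling.
# This is a common AL baseline, shown for illustration; the paper's
# actual acquisition strategy may differ.
import numpy as np

def select_informative(model_probs: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k pool items whose predicted preference
    probability is closest to 0.5, i.e. where the model is least certain."""
    uncertainty = -np.abs(model_probs - 0.5)  # higher = more uncertain
    return np.argsort(uncertainty)[-k:][::-1]

# Predicted preference probabilities over an unlabeled pool of 5 items.
probs = np.array([0.95, 0.51, 0.10, 0.48, 0.80])
queries = select_informative(probs, k=2)  # items to show the user next
```

Each user answer (here, a preference label plus feedback on the shown explanation) is then added to the labeled set and the underlying ML model is retrained before the next round.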
arXiv Detail & Related papers (2023-09-01T09:22:33Z)
- RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents [30.250555783628762]
This research argues that addressing these issues is not solely the recommender systems' responsibility.
We introduce the RAH (Recommender system, Assistant, Human) framework, emphasizing alignment with user personalities.
Our contributions provide a human-centered recommendation framework that partners effectively with various recommendation models.
arXiv Detail & Related papers (2023-08-19T04:46:01Z)
- User-Controllable Recommendation via Counterfactual Retrospective and Prospective Explanations [96.45414741693119]
We present a user-controllable recommender system that seamlessly integrates explainability and controllability.
By providing both retrospective and prospective explanations through counterfactual reasoning, users can customize their control over the system.
arXiv Detail & Related papers (2023-08-02T01:13:36Z)
- Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach [83.62750225073341]
We consider recommendation as instruction following by large language models (LLMs).
We first design a general instruction format for describing the preference, intention, task form and context of a user in natural language.
Then we manually design 39 instruction templates and automatically generate a large amount of user-personalized instruction data.
arXiv Detail & Related papers (2023-05-11T17:39:07Z)
- Latent User Intent Modeling for Sequential Recommenders [92.66888409973495]
Sequential recommender models learn to predict the next items a user is likely to interact with based on their interaction history on the platform.
Most sequential recommenders however lack a higher-level understanding of user intents, which often drive user behaviors online.
Intent modeling is thus critical for understanding users and optimizing long-term user experience.
arXiv Detail & Related papers (2022-11-17T19:00:24Z)
- Breaking Feedback Loops in Recommender Systems with Causal Inference [99.22185950608838]
Recent work has shown that feedback loops may compromise recommendation quality and homogenize user behavior.
We propose the Causal Adjustment for Feedback Loops (CAFL), an algorithm that provably breaks feedback loops using causal inference.
We show that CAFL improves recommendation quality when compared to prior correction methods.
arXiv Detail & Related papers (2022-07-04T17:58:39Z)
- Offline Meta-level Model-based Reinforcement Learning Approach for Cold-Start Recommendation [27.17948754183511]
Reinforcement learning has shown great promise in optimizing long-term user interest in recommender systems.
Existing RL-based recommendation methods need a large number of interactions for each user to learn a robust recommendation policy.
We propose a meta-level model-based reinforcement learning approach for fast user adaptation.
arXiv Detail & Related papers (2020-12-04T08:58:35Z)
- Reward Constrained Interactive Recommendation with Natural Language Feedback [158.8095688415973]
We propose a novel constraint-augmented reinforcement learning (RL) framework to efficiently incorporate user preferences over time.
Specifically, we leverage a discriminator to detect recommendations violating user historical preference.
Our proposed framework is general and is further extended to the task of constrained text generation.
arXiv Detail & Related papers (2020-05-04T16:23:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.