RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework
with LLM Agents
- URL: http://arxiv.org/abs/2308.09904v2
- Date: Tue, 17 Oct 2023 11:48:10 GMT
- Title: RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework
with LLM Agents
- Authors: Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li,
Ning Gu
- Abstract summary: This research argues that addressing these issues is not solely the recommender systems' responsibility.
We introduce the RAH (Recommender system, Assistant, and Human) framework, emphasizing alignment with user personalities.
Our contributions provide a human-centered recommendation framework that partners effectively with various recommendation models.
- Score: 30.250555783628762
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
that Perceive, Learn, Act, Critic, and Reflect, emphasizing alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism to improve user alignment. Using real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models.
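The agent roles and the Learn-Act-Critic loop with reflection described in the abstract can be sketched conceptually as follows. Every class, method, and data structure below is an illustrative assumption for exposition, not the paper's actual implementation:

```python
# Conceptual sketch of an RAH-style assistant running a Learn-Act-Critic
# loop with a reflection mechanism. The five roles (Perceive, Learn, Act,
# Critic, Reflect) follow the abstract's description; all names here are
# illustrative assumptions.

class AssistantAgent:
    def __init__(self):
        self.personality = {}   # learned model of the user's preferences
        self.reflections = []   # lessons accumulated by the Reflect step

    def perceive(self, feedback):
        """Observe the user's reaction to a recommended item."""
        return {"item": feedback["item"], "liked": feedback["liked"]}

    def learn(self, observation):
        """Update the personality model from the observation."""
        self.personality[observation["item"]] = observation["liked"]

    def act(self, candidates):
        """Filter recommender output on the user's behalf."""
        return [i for i in candidates if self.personality.get(i, True)]

    def critic(self, actions, observation):
        """Check whether the action is consistent with the observation."""
        return observation["item"] in actions or not observation["liked"]

    def reflect(self, note):
        """Store a lesson to improve future alignment."""
        self.reflections.append(note)


def learn_act_critic_loop(agent, feedback, candidates, max_rounds=3):
    obs = agent.perceive(feedback)
    actions = candidates
    for _ in range(max_rounds):
        agent.learn(obs)
        actions = agent.act(candidates)
        if agent.critic(actions, obs):   # aligned with the user: stop
            return actions
        agent.reflect(f"misaligned on {obs['item']}")
    return actions
```

In this reading, the assistant sits between the recommender and the human: it iterates learning and acting until the critic judges the action consistent with what it perceived, and records reflections when it fails.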
Related papers
- Efficient and Robust Regularized Federated Recommendation [52.24782464815489]
Regularized federated recommender systems address both user preference and privacy concerns.
We propose a novel method, RFRecF, that incorporates non-uniform gradient descent to improve communication efficiency.
Experiments demonstrate RFRecF's superior robustness compared to diverse baselines.
arXiv Detail & Related papers (2024-11-03T12:10:20Z)
- CURATe: Benchmarking Personalised Alignment of Conversational AI Assistants [5.7605009639020315]
We assess ten leading models across five scenarios (each with 337 use cases).
Key failure modes include inappropriate weighing of conflicting preferences, sycophancy, a lack of attentiveness to critical user information within the context window, and inconsistent application of user-specific knowledge.
We propose research directions for embedding self-reflection capabilities, online user modelling, and dynamic risk assessment in AI assistants.
arXiv Detail & Related papers (2024-10-28T15:59:31Z)
- Navigating User Experience of ChatGPT-based Conversational Recommender Systems: The Effects of Prompt Guidance and Recommendation Domain [15.179413273734761]
This study investigates the impact of prompt guidance (PG) and recommendation domain (RD) on the overall user experience of the system.
The findings reveal that PG can substantially enhance the system's explainability, adaptability, perceived ease of use, and transparency.
arXiv Detail & Related papers (2024-05-22T11:49:40Z)
- Ensuring User-side Fairness in Dynamic Recommender Systems [37.20838165555877]
This paper presents the first principled study on ensuring user-side fairness in dynamic recommender systems.
We propose FAir Dynamic rEcommender (FADE), an end-to-end fine-tuning framework to dynamically ensure user-side fairness over time.
We show that FADE effectively and efficiently reduces performance disparities with little sacrifice in the overall recommendation performance.
arXiv Detail & Related papers (2023-08-29T22:03:17Z)
- Editable User Profiles for Controllable Text Recommendation [66.00743968792275]
We propose LACE, a novel concept value bottleneck model for controllable text recommendations.
LACE represents each user with a succinct set of human-readable concepts.
It learns personalized representations of the concepts based on user documents.
arXiv Detail & Related papers (2023-04-09T14:52:18Z)
- Breaking Feedback Loops in Recommender Systems with Causal Inference [99.22185950608838]
Recent work has shown that feedback loops may compromise recommendation quality and homogenize user behavior.
We propose the Causal Adjustment for Feedback Loops (CAFL), an algorithm that provably breaks feedback loops using causal inference.
We show that CAFL improves recommendation quality when compared to prior correction methods.
arXiv Detail & Related papers (2022-07-04T17:58:39Z)
- Choosing the Best of Both Worlds: Diverse and Novel Recommendations through Multi-Objective Reinforcement Learning [68.45370492516531]
We introduce Scalarized Multi-Objective Reinforcement Learning (SMORL) for the Recommender Systems (RS) setting.
The SMORL agent augments standard recommendation models with additional RL layers that encourage it to simultaneously satisfy three principal objectives: accuracy, diversity, and novelty of recommendations.
Our experimental results on two real-world datasets reveal a substantial increase in aggregate diversity, a moderate increase in accuracy, reduced repetitiveness of recommendations, and demonstrate the importance of reinforcing diversity and novelty as complementary objectives.
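The scalarization idea behind SMORL — collapsing accuracy, diversity, and novelty rewards into a single RL signal — can be sketched as follows. The specific reward definitions and the weighting scheme are illustrative assumptions, not SMORL's exact formulation:

```python
# Illustrative sketch of scalarizing multiple recommendation objectives
# into one reward, as in scalarized multi-objective RL. The individual
# reward definitions and the weight vector are assumptions.

def accuracy_reward(item, clicked_item):
    # did the recommended item match the user's actual click?
    return 1.0 if item == clicked_item else 0.0

def diversity_reward(item, recent_items):
    # reward items unlike what was recently recommended
    return 0.0 if item in recent_items else 1.0

def novelty_reward(item, popularity, max_pop):
    # less popular items count as more novel
    return 1.0 - popularity.get(item, 0) / max_pop

def scalarized_reward(item, clicked_item, recent_items, popularity,
                      weights=(0.6, 0.2, 0.2)):
    """Weighted sum of the three objective-specific rewards."""
    w_acc, w_div, w_nov = weights
    max_pop = max(popularity.values()) if popularity else 1
    return (w_acc * accuracy_reward(item, clicked_item)
            + w_div * diversity_reward(item, recent_items)
            + w_nov * novelty_reward(item, popularity, max_pop))
```

The weight vector lets the operator trade accuracy against diversity and novelty without changing the underlying recommendation model.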
arXiv Detail & Related papers (2021-10-28T13:22:45Z)
- Offline Meta-level Model-based Reinforcement Learning Approach for Cold-Start Recommendation [27.17948754183511]
Reinforcement learning has shown great promise in optimizing long-term user interest in recommender systems.
Existing RL-based recommendation methods need a large number of interactions for each user to learn a robust recommendation policy.
We propose a meta-level model-based reinforcement learning approach for fast user adaptation.
arXiv Detail & Related papers (2020-12-04T08:58:35Z)
- Reinforcement Learning for Strategic Recommendations [32.73903761398027]
Strategic recommendations (SR) refer to the problem where an intelligent agent observes the sequential behaviors and activities of users and decides when and how to interact with them to optimize some long-term objectives, both for the user and the business.
At Adobe research, we have been implementing such systems for various use-cases, including points of interest recommendations, tutorial recommendations, next step guidance in multi-media editing software, and ad recommendation for optimizing lifetime value.
There are many research challenges when building these systems, such as modeling the sequential behavior of users, deciding when to intervene and offer recommendations without annoying the user, and evaluating policies offline.
arXiv Detail & Related papers (2020-09-15T20:45:48Z)
- Self-Supervised Reinforcement Learning for Recommender Systems [77.38665506495553]
We propose self-supervised reinforcement learning for sequential recommendation tasks.
Our approach augments standard recommendation models with two output layers: one for self-supervised learning and the other for RL.
Based on this approach, we propose two frameworks, namely Self-Supervised Q-learning (SQN) and Self-Supervised Actor-Critic (SAC).
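The two-head design behind SQN — a shared state representation feeding one supervised next-item head and one Q-learning head trained jointly — can be sketched in simplified form. The dimensions, linear heads, and 1:1 loss weighting are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

# Simplified sketch of a Self-Supervised Q-learning (SQN)-style update:
# a shared state encoding feeds two output heads, one trained with a
# cross-entropy (self-supervised next-item) loss and one with a one-step
# Q-learning loss. Linear heads and equal loss weighting are assumptions.

rng = np.random.default_rng(0)
n_items, d = 5, 4
W_ce = rng.normal(size=(d, n_items))   # self-supervised head
W_q = rng.normal(size=(d, n_items))    # Q-learning head

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def joint_loss(state, next_item, reward, next_state, gamma=0.9):
    # self-supervised head: cross-entropy on the observed next item
    probs = softmax(state @ W_ce)
    ce_loss = -np.log(probs[next_item])
    # RL head: squared TD error toward reward + discounted max next Q
    q = state @ W_q
    target = reward + gamma * np.max(next_state @ W_q)
    td_loss = (q[next_item] - target) ** 2
    return ce_loss + td_loss  # both heads trained jointly
```

The self-supervised head keeps the model anchored to observed user behavior while the RL head optimizes the longer-term reward signal.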
arXiv Detail & Related papers (2020-06-10T11:18:57Z)
- Reward Constrained Interactive Recommendation with Natural Language Feedback [158.8095688415973]
We propose a novel constraint-augmented reinforcement learning (RL) framework to efficiently incorporate user preferences over time.
Specifically, we leverage a discriminator to detect recommendations violating user historical preference.
Our proposed framework is general and is further extended to the task of constrained text generation.
arXiv Detail & Related papers (2020-05-04T16:23:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.