Towards Psychologically-Grounded Dynamic Preference Models
- URL: http://arxiv.org/abs/2208.01534v1
- Date: Mon, 1 Aug 2022 16:53:58 GMT
- Title: Towards Psychologically-Grounded Dynamic Preference Models
- Authors: Mihaela Curmei, Andreas Haupt, Benjamin Recht, Dylan Hadfield-Menell
- Abstract summary: We argue that modeling the influence of recommendations on people's preferences must be grounded in psychologically plausible models.
We demonstrate this method with models that capture three classic effects from the psychology literature: Mere-Exposure, Operant Conditioning, and Hedonic Adaptation.
- Score: 24.29415641848088
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Designing recommendation systems that serve content aligned with time-varying
preferences requires proper accounting of the feedback effects of
recommendations on human behavior and psychological condition. We argue that
modeling the influence of recommendations on people's preferences must be
grounded in psychologically plausible models. We contribute a methodology for
developing grounded dynamic preference models. We demonstrate this method with
models that capture three classic effects from the psychology literature:
Mere-Exposure, Operant Conditioning, and Hedonic Adaptation. We conduct
simulation-based studies to show that the psychological models manifest
distinct behaviors that can inform system design. Our study has two direct
implications for dynamic user modeling in recommendation systems. First, the
methodology we outline is broadly applicable for psychologically grounding
dynamic preference models. It allows us to critique recent contributions based
on their limited discussion of psychological foundation and their implausible
predictions. Second, we discuss implications of dynamic preference models for
recommendation systems evaluation and design. In an example, we show that
engagement and diversity metrics may be unable to capture desirable
recommendation system performance.
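The three effects named in the abstract can be made concrete with simple preference-update rules. The following is a hypothetical sketch, not the paper's actual models: the function names, rates, and functional forms are illustrative assumptions.

```python
# Hypothetical sketch of three dynamic preference effects (illustrative only;
# rates and functional forms are assumptions, not the paper's models).

def mere_exposure(pref, exposed, rate=0.1):
    # Repeated exposure nudges preference upward toward a ceiling of 1.
    return pref + rate * (1.0 - pref) if exposed else pref

def operant_conditioning(pref, reward, rate=0.1):
    # Preference is reinforced in proportion to the reward received.
    return pref + rate * reward * (1.0 - pref)

def hedonic_adaptation(utility, baseline=0.5, rate=0.2):
    # Experienced utility drifts back toward a hedonic baseline over time.
    return utility + rate * (baseline - utility)

# Simulate 20 rounds of recommending the same item under each model.
p_exp, p_cond, u_hed = 0.2, 0.2, 1.0
for _ in range(20):
    p_exp = mere_exposure(p_exp, exposed=True)
    p_cond = operant_conditioning(p_cond, reward=0.8)
    u_hed = hedonic_adaptation(u_hed)

print(round(p_exp, 3), round(p_cond, 3), round(u_hed, 3))
```

Even these toy rules manifest the distinct behaviors the abstract mentions: exposure and conditioning both drive preference upward (at different speeds), while hedonic adaptation pulls utility back toward its baseline regardless of what is recommended.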
Related papers
- Modeling Bias Evolution in Fashion Recommender Systems: A System Dynamics Approach [0.0]
Bias in recommender systems not only distorts user experience but also perpetuates and amplifies existing societal stereotypes.
This study employs a dynamic modeling approach to scrutinize the mechanisms of bias activation and reinforcement within Fashion Recommender Systems.
arXiv Detail & Related papers (2025-09-27T20:16:29Z)
- When Algorithms Mirror Minds: A Confirmation-Aware Social Dynamic Model of Echo Chamber and Homogenization Traps [19.047790323760935]
We study the emergence and drivers of echo chambers and user homogenization.
Our findings provide both theoretical and empirical insights into these phenomena, as well as actionable guidelines for human-centered recommender design.
arXiv Detail & Related papers (2025-08-15T14:55:55Z)
- What Makes LLMs Effective Sequential Recommenders? A Study on Preference Intensity and Temporal Context [56.590259941275434]
RecPO is a preference optimization framework for sequential recommendation.
It exploits adaptive reward margins based on inferred preference hierarchies and temporal signals.
It mirrors key characteristics of human decision-making: favoring timely satisfaction, maintaining coherent preferences, and exercising discernment under shifting contexts.
arXiv Detail & Related papers (2025-06-02T21:09:29Z)
- Slow Thinking for Sequential Recommendation [88.46598279655575]
We present a novel slow thinking recommendation model, named STREAM-Rec.
Our approach is capable of analyzing historical user behavior, generating a multi-step, deliberative reasoning process, and delivering personalized recommendations.
In particular, we focus on two key challenges: (1) identifying the suitable reasoning patterns in recommender systems, and (2) exploring how to effectively stimulate the reasoning capabilities of traditional recommenders.
arXiv Detail & Related papers (2025-04-13T15:53:30Z)
- Aligning Visual Contrastive learning models via Preference Optimization [0.9438963196770565]
This paper introduces a novel method for training contrastive learning models using Preference Optimization (PO) to break down complex concepts.
Our method systematically aligns model behavior with desired preferences, enhancing performance on the targeted task.
In particular, we focus on enhancing model robustness against typographic attacks, commonly seen in contrastive models like CLIP.
We further apply our method to disentangle gender understanding and mitigate gender biases, offering a more nuanced control over these sensitive attributes.
arXiv Detail & Related papers (2024-11-12T08:14:54Z)
- Aligning Vision Models with Human Aesthetics in Retrieval: Benchmarks and Algorithms [91.19304518033144]
We aim to align vision models with human aesthetic standards in a retrieval system.
We propose a preference-based reinforcement learning method that fine-tunes vision models to better align them with human aesthetics.
arXiv Detail & Related papers (2024-06-13T17:59:20Z)
- Secrets of RLHF in Large Language Models Part II: Reward Modeling [134.97964938009588]
We introduce a series of novel methods to mitigate the influence of incorrect and ambiguous preferences in the dataset.
We also introduce contrastive learning to enhance the ability of reward models to distinguish between chosen and rejected responses.
arXiv Detail & Related papers (2024-01-11T17:56:59Z)
- Understanding User Intent Modeling for Conversational Recommender Systems: A Systematic Literature Review [1.3630870408844922]
We conducted a systematic literature review to gather data on models typically employed in designing conversational recommender systems.
We developed a decision model to assist researchers in selecting the most suitable models for their systems.
Our study contributes practical insights and a comprehensive understanding of user intent modeling, empowering the development of more effective and personalized conversational recommender systems.
arXiv Detail & Related papers (2023-08-05T22:50:21Z)
- Are Neural Topic Models Broken? [81.15470302729638]
We study the relationship between automated and human evaluation of topic models.
We find that neural topic models fare worse in both respects compared to an established classical method.
arXiv Detail & Related papers (2022-10-28T14:38:50Z)
- The drivers of online polarization: fitting models to data [0.0]
The echo chamber effect and opinion polarization may be driven by several factors, including human biases in information consumption and personalized recommendations produced by feed algorithms.
Until now, studies have mainly used opinion dynamic models to explore the mechanisms behind the emergence of polarization and echo chambers.
We provide a method to numerically compare the opinion distributions obtained from simulations with those measured on social media.
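One simple way to compare a simulated opinion distribution against one measured on social media is the 1-D Wasserstein (earth mover's) distance, which for equal-sized samples reduces to the mean absolute difference of sorted values. This is a hedged illustration of the general idea, not that paper's actual method; the sample sizes and distributions below are arbitrary.

```python
# Illustrative sketch (not the paper's method): compare two opinion samples
# with the 1-D Wasserstein distance via quantile matching.
import random

def wasserstein_1d(xs, ys):
    # Both samples must have the same length for this quantile-matching form.
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

random.seed(0)
simulated = [random.gauss(0.0, 1.0) for _ in range(1000)]  # model output
measured = [random.gauss(0.5, 1.0) for _ in range(1000)]   # stand-in for data
print(wasserstein_1d(simulated, measured))
```

For two same-variance Gaussians shifted by 0.5, the distance concentrates near 0.5, so the metric directly quantifies how far the simulation's opinion distribution sits from the empirical one.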
arXiv Detail & Related papers (2022-05-31T17:00:41Z)
- Understanding Longitudinal Dynamics of Recommender Systems with Agent-Based Modeling and Simulation [7.98348797868119]
Agent-Based Modeling and Simulation (ABM) techniques can be used to study such important longitudinal dynamics of recommender systems.
We provide an overview of the ABM principles, outline a simulation framework for recommender systems based on the literature, and discuss various practical research questions that can be addressed with such an ABM-based simulation framework.
arXiv Detail & Related papers (2021-08-25T06:28:19Z)
- Generative Adversarial Reward Learning for Generalized Behavior Tendency Inference [71.11416263370823]
We propose a generative inverse reinforcement learning approach for user behavioral preference modelling.
Our model can automatically learn rewards from users' actions based on a discriminative actor-critic network and a Wasserstein GAN.
arXiv Detail & Related papers (2021-05-03T13:14:25Z)
- A Survey on Neural Recommendation: From Collaborative Filtering to Content and Context Enriched Recommendation [70.69134448863483]
Research in recommendation has shifted to inventing new recommender models based on neural networks.
In recent years, we have witnessed significant progress in developing neural recommender models.
arXiv Detail & Related papers (2021-04-27T08:03:52Z)
- On the model-based stochastic value gradient for continuous reinforcement learning [50.085645237597056]
We show that simple model-based agents can outperform state-of-the-art model-free agents in terms of both sample-efficiency and final reward.
Our findings suggest that model-based policy evaluation deserves closer attention.
arXiv Detail & Related papers (2020-08-28T17:58:29Z)
- Learning Opinion Dynamics From Social Traces [25.161493874783584]
We propose an inference mechanism for fitting a generative, agent-like model of opinion dynamics to real-world social traces.
We showcase our proposal by translating a classical agent-based model of opinion dynamics into its generative counterpart.
We apply our model to real-world data from Reddit to explore the long-standing question of the impact of the backfire effect.
arXiv Detail & Related papers (2020-06-02T14:48:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.