Pacos: Modeling Users' Interpretable and Context-Dependent Choices in
Preference Reversals
- URL: http://arxiv.org/abs/2303.05648v2
- Date: Sun, 18 Jun 2023 03:40:40 GMT
- Title: Pacos: Modeling Users' Interpretable and Context-Dependent Choices in
Preference Reversals
- Authors: Qingming Li and H. Vicky Zhao
- Abstract summary: We identify three factors contributing to context effects: users' adaptive weights, the inter-item comparison, and display positions.
We propose a context-dependent preference model named Pacos as a unified framework for addressing all three factors simultaneously.
Experimental results show that the proposed method has better performance than prior works in predicting users' choices.
- Score: 8.041047797530808
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Choice problems refer to selecting the best option from several
items, and learning users' preferences in choice problems is of great
significance for understanding decision-making mechanisms and providing
personalized services. Existing works typically assume that people evaluate
items independently. In practice, however, users' preferences depend on the
market in which items are placed, which is known as context effects; the order
of users' preferences for two items may even be reversed, which is referred to
as preference reversals. In this work, we identify three factors contributing
to context effects: users' adaptive weights, the inter-item comparison, and
display positions. We propose a context-dependent preference model named Pacos
as a unified framework for addressing all three factors simultaneously, and
consider two design methods: an additive method with high interpretability and
an ANN-based method with high accuracy. We study the conditions under which
preference reversals occur and provide a theoretical proof of the
effectiveness of Pacos in addressing them. Experimental results show that the
proposed method outperforms prior works in predicting users' choices and
offers strong interpretability for understanding the cause of preference
reversals.
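As a rough illustration of the additive design described in the abstract, the sketch below scores each item in a choice set with three additive terms (an adaptively weighted intrinsic value, an inter-item comparison against the other items, and a display-position bias) and turns the scores into choice probabilities via a softmax. The function name, default weights, and functional forms here are illustrative assumptions, not the paper's actual formulation.

```python
import math

def additive_context_utility(values, positions,
                             adapt_w=1.0, cmp_w=0.5, pos_w=0.3):
    """Score items with three additive context terms and return
    softmax choice probabilities (hypothetical sketch)."""
    n = len(values)
    # Inter-item comparison: each item's value minus the mean of its rivals.
    others_mean = [(sum(values) - v) / (n - 1) for v in values]
    comparison = [v - m for v, m in zip(values, others_mean)]
    # Display-position bias: earlier display slots receive a larger boost.
    position_bias = [1.0 / (1.0 + p) for p in positions]
    scores = [adapt_w * v + cmp_w * c + pos_w * b
              for v, c, b in zip(values, comparison, position_bias)]
    # Softmax converts utilities into choice probabilities.
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```

Under this toy model, identical items shown at different positions already receive different choice probabilities, which hints at how context-dependent terms can flip a pairwise preference.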
Related papers
- Active Preference-based Learning for Multi-dimensional Personalization [7.349038301460469]
Large language models (LLMs) have shown remarkable versatility across tasks, but aligning them with individual human preferences remains challenging.
We propose an active preference learning framework that uses binary feedback to estimate user preferences across multiple objectives.
We validate our approach through theoretical analysis and experiments on language generation tasks, demonstrating its feedback efficiency and effectiveness in personalizing model responses.
arXiv Detail & Related papers (2024-11-01T11:49:33Z)
- Diverging Preferences: When do Annotators Disagree and do Models Know? [92.24651142187989]
We develop a taxonomy of disagreement sources spanning 10 categories across four high-level classes.
We find that the majority of disagreements are at odds with standard reward modeling approaches.
We develop methods for identifying diverging preferences to mitigate their influence on evaluation and training.
arXiv Detail & Related papers (2024-10-18T17:32:22Z)
- PREDICT: Preference Reasoning by Evaluating Decomposed preferences Inferred from Candidate Trajectories [3.0102456679931944]
This paper introduces PREDICT, a method designed to enhance the precision and adaptability of inferring preferences.
We evaluate PREDICT on two distinct environments: a gridworld setting and a new text-domain environment.
arXiv Detail & Related papers (2024-10-08T18:16:41Z)
- Improving Context-Aware Preference Modeling for Language Models [62.32080105403915]
We consider the two-step preference modeling procedure that first resolves the under-specification by selecting a context, and then evaluates preference with respect to the chosen context.
We contribute context-conditioned preference datasets and experiments that investigate the ability of language models to evaluate context-specific preference.
arXiv Detail & Related papers (2024-07-20T16:05:17Z)
- A First Look at Selection Bias in Preference Elicitation for Recommendation [64.44255178199846]
We study the effect of selection bias in preference elicitation on the resulting recommendations.
A big hurdle is the lack of any publicly available dataset that has preference elicitation interactions.
We propose a simulation of a topic-based preference elicitation process.
arXiv Detail & Related papers (2024-05-01T14:56:56Z)
- Be Aware of the Neighborhood Effect: Modeling Selection Bias under Interference [50.95521705711802]
Previous studies have focused on addressing selection bias to achieve unbiased learning of the prediction model.
This paper formally formulates the neighborhood effect as an interference problem from the perspective of causal inference.
We propose a novel ideal loss that can be used to deal with selection bias in the presence of the neighborhood effect.
arXiv Detail & Related papers (2024-04-30T15:20:41Z)
- Preference or Intent? Double Disentangled Collaborative Filtering [34.63377358888368]
In traditional collaborative filtering approaches, both intent and preference factors are usually entangled in the modeling process.
We propose a two-fold representation learning approach, namely Double Disentangled Collaborative Filtering (DDCF), for personalized recommendations.
arXiv Detail & Related papers (2023-05-18T16:13:41Z)
- Probe: Learning Users' Personalized Projection Bias in Intertemporal Choices [5.874142059884521]
In this work, we focus on two commonly observed biases: projection bias and the reference-point effect.
To address these biases, we propose a novel bias-embedded preference model called Probe.
Probe incorporates a weight function to capture users' projection bias and a value function to account for the reference-point effect.
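The two biases named in this entry can be pictured with a minimal sketch. The linear projection blend, the loss-aversion coefficient, and the piecewise-linear value function below are common textbook forms assumed for illustration; they are not the paper's actual weight and value functions.

```python
def projected_utility(u_future, u_current, alpha=0.6):
    # Projection bias: a user's prediction of future utility is pulled
    # toward today's utility; alpha is the (assumed) bias strength.
    return (1 - alpha) * u_future + alpha * u_current

def reference_value(outcome, reference, loss_aversion=2.25):
    # Reference-point effect: outcomes are judged as gains or losses
    # relative to a reference point, with losses weighted more heavily.
    gain = outcome - reference
    return gain if gain >= 0 else loss_aversion * gain
```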
arXiv Detail & Related papers (2023-03-09T12:13:46Z)
- Eliciting User Preferences for Personalized Multi-Objective Decision Making through Comparative Feedback [76.7007545844273]
We propose a multi-objective decision making framework that accommodates different user preferences over objectives.
Our model consists of a Markov decision process with a vector-valued reward function, with each user having an unknown preference vector.
We suggest an algorithm that finds a nearly optimal policy for the user using a small number of comparison queries.
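The framework in this entry can be sketched in a few lines: the vector-valued reward is collapsed to a scalar by the user's preference vector, and each binary comparison query reveals which of two trajectories scores higher. The helper names and the dot-product scalarization are assumptions for illustration, not the paper's exact algorithm.

```python
def scalarize(reward_vec, pref_vec):
    # Collapse a vector-valued reward to a scalar via the user's
    # preference vector (unknown in the paper; assumed known here).
    return sum(r * w for r, w in zip(reward_vec, pref_vec))

def comparison_query(traj_a_rewards, traj_b_rewards, pref_vec):
    # Binary feedback: does the user prefer trajectory A over B?
    # Each answer constrains the feasible set of preference vectors.
    return scalarize(traj_a_rewards, pref_vec) >= scalarize(traj_b_rewards, pref_vec)
```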
arXiv Detail & Related papers (2023-02-07T23:58:19Z)
- Set2setRank: Collaborative Set to Set Ranking for Implicit Feedback based Recommendation [59.183016033308014]
In this paper, we explore the unique characteristics of the implicit feedback and propose Set2setRank framework for recommendation.
Our proposed framework is model-agnostic and can be easily applied to most recommendation prediction approaches.
arXiv Detail & Related papers (2021-05-16T08:06:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.