Pacos: Modeling Users' Interpretable and Context-Dependent Choices in
Preference Reversals
- URL: http://arxiv.org/abs/2303.05648v2
- Date: Sun, 18 Jun 2023 03:40:40 GMT
- Title: Pacos: Modeling Users' Interpretable and Context-Dependent Choices in
Preference Reversals
- Authors: Qingming Li and H. Vicky Zhao
- Abstract summary: We identify three factors contributing to context effects: users' adaptive weights, the inter-item comparison, and display positions.
We propose a context-dependent preference model named Pacos as a unified framework for addressing three factors simultaneously.
Experimental results show that the proposed method has better performance than prior works in predicting users' choices.
- Score: 8.041047797530808
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Choice problems refer to selecting the best choices from several items, and
learning users' preferences in choice problems is of great significance in
understanding the decision making mechanisms and providing personalized
services. Existing works typically assume that people evaluate items
independently. In practice, however, users' preferences depend on the market in
which items are placed, which is known as context effects; and the order of
users' preferences for two items may even be reversed, which is referred to as
preference reversals. In this work, we identify three factors contributing to
context effects: users' adaptive weights, the inter-item comparison, and
display positions. We propose a context-dependent preference model named Pacos
as a unified framework that addresses all three factors simultaneously, and
consider two design methods including an additive method with high
interpretability and an ANN-based method with high accuracy. We study the
conditions for preference reversals to occur and provide a theoretical proof
of the effectiveness of Pacos in addressing preference reversals. Experimental
results show that the proposed method outperforms prior works in predicting
users' choices, and offers strong interpretability that helps explain the
cause of preference reversals.
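The additive design described in the abstract can be illustrated with a small sketch. Note this is an illustrative reconstruction, not the paper's actual model: the function `pacos_additive_scores`, the fixed attribute weights, and the mean-of-competitors comparison term are all assumptions made here to show how the three factors (adaptive weights, inter-item comparison, display positions) could combine additively into choice probabilities.

```python
import numpy as np

def pacos_additive_scores(features, weights, position_bias):
    """Illustrative additive context-dependent choice model (sketch).

    features:      (n_items, n_attrs) attribute matrix for the choice set
    weights:       (n_attrs,) attribute weights (fixed here; the paper
                   treats them as adaptive, i.e. set-dependent)
    position_bias: (n_items,) additive effect of each display position
    """
    # Context-independent utility: weighted sum of an item's own attributes.
    base = features @ weights
    # Inter-item comparison: an item gains (or loses) utility according to
    # how its attributes compare with the mean of the other items in the set.
    others_mean = (features.sum(axis=0, keepdims=True) - features) / (len(features) - 1)
    comparison = (features - others_mean) @ weights
    # Display positions enter as a purely additive term.
    scores = base + comparison + position_bias
    # Softmax converts scores into choice probabilities.
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

# Two symmetric items with symmetric weights are chosen 50/50;
# a position bonus for the first slot then tilts the choice toward it.
items = np.array([[1.0, 0.2], [0.2, 1.0]])
w = np.array([0.5, 0.5])
print(pacos_additive_scores(items, w, np.zeros(2)))
print(pacos_additive_scores(items, w, np.array([0.5, 0.0])))
```

Because the comparison term depends on which other items are displayed, adding or removing a competitor changes every item's score, which is the mechanism through which context effects, and in the paper's analysis preference reversals, can arise.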
Related papers
- Improving Context-Aware Preference Modeling for Language Models [62.32080105403915]
We consider the two-step preference modeling procedure that first resolves the under-specification by selecting a context, and then evaluates preference with respect to the chosen context.
We contribute context-conditioned preference datasets and experiments that investigate the ability of language models to evaluate context-specific preference.
arXiv Detail & Related papers (2024-07-20T16:05:17Z)
- Be Aware of the Neighborhood Effect: Modeling Selection Bias under Interference [50.95521705711802]
Previous studies have focused on addressing selection bias to achieve unbiased learning of the prediction model.
This paper formally formulates the neighborhood effect as an interference problem from the perspective of causal inference.
We propose a novel ideal loss that can be used to deal with selection bias in the presence of neighborhood effect.
arXiv Detail & Related papers (2024-04-30T15:20:41Z)
- Comparing Bad Apples to Good Oranges: Aligning Large Language Models via Joint Preference Optimization [105.3612692153615]
A common technique for aligning large language models (LLMs) relies on acquiring human preferences.
We propose a new axis that is based on eliciting preferences jointly over the instruction-response pairs.
We find that joint preferences over instruction and response pairs can significantly enhance the alignment of LLMs.
arXiv Detail & Related papers (2024-03-31T02:05:40Z)
- Relative Preference Optimization: Enhancing LLM Alignment through Contrasting Responses across Identical and Diverse Prompts [95.09994361995389]
Relative Preference Optimization (RPO) is designed to discern between more and less preferred responses derived from both identical and related prompts.
RPO has demonstrated a superior ability to align large language models with user preferences and to improve their adaptability during the training process.
arXiv Detail & Related papers (2024-02-12T22:47:57Z)
- Preference or Intent? Double Disentangled Collaborative Filtering [34.63377358888368]
In traditional collaborative filtering approaches, both intent and preference factors are usually entangled in the modeling process.
We propose a two-fold representation learning approach, namely Double Disentangled Collaborative Filtering (DDCF), for personalized recommendations.
arXiv Detail & Related papers (2023-05-18T16:13:41Z)
- Probe: Learning Users' Personalized Projection Bias in Intertemporal Choices [5.874142059884521]
In this work, we focus on two commonly observed biases: projection bias and the reference-point effect.
To address these biases, we propose a novel bias-embedded preference model called Probe.
The Probe incorporates a weight function to capture users' projection bias and a value function to account for the reference-point effect.
arXiv Detail & Related papers (2023-03-09T12:13:46Z)
- Eliciting User Preferences for Personalized Multi-Objective Decision Making through Comparative Feedback [76.7007545844273]
We propose a multi-objective decision making framework that accommodates different user preferences over objectives.
Our model consists of a Markov decision process with a vector-valued reward function, with each user having an unknown preference vector.
We suggest an algorithm that finds a nearly optimal policy for the user using a small number of comparison queries.
arXiv Detail & Related papers (2023-02-07T23:58:19Z)
- Disentangled Representation for Diversified Recommendations [41.477162048806434]
Accuracy and diversity have long been considered to be two conflicting goals for recommendations.
We propose a general diversification framework agnostic to the choice of recommendation algorithms.
Our solution disentangles the learnt user representation in the recommendation module into category-independent and category-dependent components.
arXiv Detail & Related papers (2023-01-13T11:47:10Z)
- Set2setRank: Collaborative Set to Set Ranking for Implicit Feedback based Recommendation [59.183016033308014]
In this paper, we explore the unique characteristics of the implicit feedback and propose Set2setRank framework for recommendation.
Our proposed framework is model-agnostic and can be easily applied to most recommendation prediction approaches.
arXiv Detail & Related papers (2021-05-16T08:06:22Z)
- Learning Interpretable Feature Context Effects in Discrete Choice [40.91593765662774]
We provide a method for the automatic discovery of a broad class of context effects from observed choice data.
Our models are easier to train and more flexible than existing models and also yield intuitive, interpretable, and statistically testable context effects.
We identify new context effects in widely used choice datasets and provide the first analysis of choice set context effects in social network growth.
arXiv Detail & Related papers (2020-09-07T20:59:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.