Top-N Recommendation with Counterfactual User Preference Simulation
- URL: http://arxiv.org/abs/2109.02444v1
- Date: Thu, 2 Sep 2021 14:28:46 GMT
- Title: Top-N Recommendation with Counterfactual User Preference Simulation
- Authors: Mengyue Yang, Quanyu Dai, Zhenhua Dong, Xu Chen, Xiuqiang He, Jun Wang
- Abstract summary: Top-N recommendation, which aims to learn user ranking-based preference, has long been a fundamental problem in a wide range of applications.
In this paper, we propose to reformulate the recommendation task within the causal inference framework to handle the data scarcity problem.
- Score: 26.597102553608348
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Top-N recommendation, which aims to learn user ranking-based preference, has
long been a fundamental problem in a wide range of applications. Traditional
models usually motivate themselves by designing complex or tailored
architectures based on different assumptions. However, the training data of a
recommender system can be extremely sparse and imbalanced, which poses great
challenges for boosting the recommendation performance. To alleviate this
problem, in this paper, we propose to reformulate the recommendation task
within the causal inference framework, which enables us to counterfactually
simulate user ranking-based preferences to handle the data scarcity problem. The
core of our model lies in the counterfactual question: "what would be the
user's decision if the recommended items had been different?". To answer this
question, we first formulate the recommendation process with a series of
structural equation models (SEMs), whose parameters are optimized based on the
observed data. Then, we actively specify many recommendation lists (called
interventions in causal inference terminology) that are not recorded in the
dataset, and simulate user feedback according to the learned SEMs for
generating new training samples. Instead of randomly intervening on the
recommendation list, we design a learning-based method to discover more
informative training samples. Considering that the learned SEMs may not be
perfect, we finally analyze, in theory, the relation between the number of
generated samples and the model prediction error, based on which we design a
heuristic method to control the negative effect brought by the prediction
error. Extensive experiments are conducted based on both synthetic and
real-world datasets to demonstrate the effectiveness of our framework.
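The pipeline the abstract describes (fit SEMs on logged data, intervene with unseen recommendation lists, simulate feedback, and prefer informative interventions) can be sketched roughly as follows. This is a minimal illustration, not the paper's actual method: the softmax-utility SEM, the entropy-based intervention selection, and all names (`choice_probs`, `informative_list`, `generate_counterfactual_samples`, the embedding setup) are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: users and items share a latent space, and a simple
# softmax-utility model stands in for the learned structural equations.
n_users, n_items, d, list_size = 20, 50, 8, 5
user_emb = rng.normal(size=(n_users, d))
item_emb = rng.normal(size=(n_items, d))

def choice_probs(user, item_list):
    """SEM head: probability the user picks each item in the shown list."""
    scores = item_emb[item_list] @ user_emb[user]
    p = np.exp(scores - scores.max())
    return p / p.sum()

def simulate_feedback(user, item_list):
    """Sample the user's counterfactual pick from the SEM."""
    p = choice_probs(user, item_list)
    return int(item_list[rng.choice(len(item_list), p=p)])

def informative_list(user, n_candidates=10):
    """Stand-in for the learning-based intervention: among random candidate
    lists, keep the one whose predicted choice distribution has the highest
    entropy, i.e. where the SEM is most uncertain and a sample is most
    informative."""
    best, best_h = None, -1.0
    for _ in range(n_candidates):
        cand = rng.choice(n_items, size=list_size, replace=False)
        p = choice_probs(user, cand)
        h = -(p * np.log(p)).sum()
        if h > best_h:
            best, best_h = cand, h
    return best

def generate_counterfactual_samples(n_new):
    """Intervene with unseen lists and record the simulated feedback as
    extra (user, shown_list, chosen_item) training triples."""
    samples = []
    for _ in range(n_new):
        u = int(rng.integers(n_users))
        shown = informative_list(u)
        samples.append((u, shown, simulate_feedback(u, shown)))
    return samples

augmented = generate_counterfactual_samples(100)
```

The augmented triples would then be mixed with the observed data to retrain the ranker; the paper's heuristic for bounding the effect of SEM error (e.g., capping the number of simulated samples) is omitted here.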
Related papers
- Preference Optimization as Probabilistic Inference [21.95277469346728]
We propose a method that can leverage unpaired preferred or dis-preferred examples, and works even when only one type of feedback is available.
This flexibility allows us to apply it in scenarios with varying forms of feedback and models, including training generative language models.
arXiv Detail & Related papers (2024-10-05T14:04:03Z)
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to model potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z)
- Revisiting SMoE Language Models by Evaluating Inefficiencies with Task Specific Expert Pruning [78.72226641279863]
Sparse Mixture of Expert (SMoE) models have emerged as a scalable alternative to dense models in language modeling.
Our research explores task-specific model pruning to inform decisions about designing SMoE architectures.
We introduce an adaptive task-aware pruning technique UNCURL to reduce the number of experts per MoE layer in an offline manner post-training.
arXiv Detail & Related papers (2024-09-02T22:35:03Z)
- CSRec: Rethinking Sequential Recommendation from A Causal Perspective [25.69446083970207]
The essence of sequential recommender systems (RecSys) lies in understanding how users make decisions.
We propose a novel formulation of sequential recommendation, termed Causal Sequential Recommendation (CSRec)
CSRec aims to predict the probability of a recommended item's acceptance within a sequential context and backtrack how current decisions are made.
arXiv Detail & Related papers (2024-08-23T23:19:14Z)
- Debiased Recommendation with Noisy Feedback [41.38490962524047]
We study the intersectional threats to unbiased learning of the prediction model posed by data missing not at random (MNAR) and outcome measurement errors (OME) in the collected data.
First, we design OME-EIB, OME-IPS, and OME-DR estimators, which largely extend the existing estimators to combat OME in real-world recommendation scenarios.
arXiv Detail & Related papers (2024-06-24T23:42:18Z)
- Rethinking Missing Data: Aleatoric Uncertainty-Aware Recommendation [59.500347564280204]
We propose a new Aleatoric Uncertainty-aware Recommendation (AUR) framework.
AUR consists of a new uncertainty estimator along with a normal recommender model.
As the chance of mislabeling reflects the potential of a pair, AUR makes recommendations according to the uncertainty.
arXiv Detail & Related papers (2022-09-22T04:32:51Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- WSLRec: Weakly Supervised Learning for Neural Sequential Recommendation Models [24.455665093145818]
We propose a novel model-agnostic training approach called WSLRec, which adopts a three-stage framework: pre-training, top-$k$ mining, and fine-tuning.
WSLRec resolves the incompleteness problem by pre-training models on extra weak supervisions from model-free methods like BR and ItemCF, while resolving the inaccuracy problem by leveraging the top-$k$ mining to screen out reliable user-item relevance from weak supervisions for fine-tuning.
arXiv Detail & Related papers (2022-02-28T08:55:12Z)
- S^3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization [104.87483578308526]
We propose the model S3-Rec, which stands for Self-Supervised learning for Sequential Recommendation.
For our task, we devise four auxiliary self-supervised objectives to learn the correlations among attribute, item, subsequence, and sequence.
Extensive experiments conducted on six real-world datasets demonstrate the superiority of our proposed method over existing state-of-the-art methods.
arXiv Detail & Related papers (2020-08-18T11:44:10Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.