Preference or Intent? Double Disentangled Collaborative Filtering
- URL: http://arxiv.org/abs/2305.11084v1
- Date: Thu, 18 May 2023 16:13:41 GMT
- Title: Preference or Intent? Double Disentangled Collaborative Filtering
- Authors: Chao Wang, Hengshu Zhu, Dazhong Shen, Wei Wu, Hui Xiong
- Abstract summary: In traditional collaborative filtering approaches, both intent and preference factors are usually entangled in the modeling process.
We propose a two-fold representation learning approach, namely Double Disentangled Collaborative Filtering (DDCF), for personalized recommendations.
- Score: 34.63377358888368
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: People usually have different intents for choosing items, while their
preferences under the same intent may also differ. In traditional
collaborative filtering approaches, intent and preference factors are
usually entangled in the modeling process, which significantly limits the
robustness and interpretability of recommendation performance. For example,
low-rating items are typically treated as negative feedback, even though they
can actually provide positive information about user intent. To this end, in
this paper, we propose a two-fold representation learning approach, namely
Double Disentangled Collaborative Filtering (DDCF), for personalized
recommendations. The first-level disentanglement is for separating the
influence factors of intent and preference, while the second-level
disentanglement is performed to build independent sparse preference
representations under individual intent with limited computational complexity.
Specifically, we employ two variational autoencoder networks, an intent
recognition network and a preference decomposition network, to learn the intent
and preference factors, respectively. In this way, low-rating items are
treated as positive samples for modeling intents but as negative samples for
modeling preferences. Finally, extensive experiments on three real-world
datasets and four evaluation metrics clearly validate the effectiveness and the
interpretability of DDCF.
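The abstract's key idea, that a low-rating interaction is a positive signal for intent but a negative signal for preference, can be sketched with a toy example. The rating matrix, threshold, and variable names below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

# Hypothetical user-item rating matrix (0 = no interaction).
# Values and shapes are made up purely for illustration.
ratings = np.array([
    [5, 0, 1, 0],
    [0, 2, 0, 4],
])

# Intent signal: ANY interaction, including low-rating ones, is a
# positive sample -- it reveals what the user chose to engage with.
intent_positives = (ratings > 0).astype(float)

# Preference signal: below an (assumed) liking threshold, an interaction
# becomes a negative sample -- it reveals dislike within that intent.
LIKE_THRESHOLD = 3
preference_positives = (ratings >= LIKE_THRESHOLD).astype(float)
preference_negatives = ((ratings > 0) & (ratings < LIKE_THRESHOLD)).astype(float)

# Item 2 for user 0 (rating 1) counts toward intent but against preference.
print(intent_positives[0, 2], preference_negatives[0, 2])
```

In DDCF itself, the intent and preference factors are learned by the two variational autoencoder networks rather than derived from a fixed rating threshold; the snippet only illustrates how one interaction can carry opposite labels for the two factors.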
Related papers
- Evaluating the Fairness of Discriminative Foundation Models in Computer
Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- Towards Explainable Collaborative Filtering with Taste Clusters Learning [43.4512681951459]
Collaborative Filtering (CF) is a widely used and effective technique for recommender systems.
Adding explainability to recommendation models can not only increase trust in the decision-making process, but also bring multiple other benefits.
We propose a neat and effective Explainable Collaborative Filtering (ECF) model that leverages interpretable cluster learning.
arXiv Detail & Related papers (2023-04-27T03:08:15Z)
- Pacos: Modeling Users' Interpretable and Context-Dependent Choices in Preference Reversals [8.041047797530808]
We identify three factors contributing to context effects: users' adaptive weights, the inter-item comparison, and display positions.
We propose a context-dependent preference model named Pacos as a unified framework for addressing three factors simultaneously.
Experimental results show that the proposed method has better performance than prior works in predicting users' choices.
arXiv Detail & Related papers (2023-03-10T01:49:56Z)
- SelfCF: A Simple Framework for Self-supervised Collaborative Filtering [72.68215241599509]
Collaborative filtering (CF) is widely used to learn informative latent representations of users and items from observed interactions.
We propose a self-supervised collaborative filtering framework (SelfCF) that is specially designed for recommender scenarios with implicit feedback.
We show that SelfCF can boost accuracy by up to 17.79% on average, compared with the self-supervised framework BUIR.
arXiv Detail & Related papers (2021-07-07T05:21:12Z)
- Probabilistic and Variational Recommendation Denoising [56.879165033014026]
Learning from implicit feedback is one of the most common cases in the application of recommender systems.
We propose probabilistic and variational recommendation denoising for implicit feedback.
We employ the proposed DPI and DVAE on four state-of-the-art recommendation models and conduct experiments on three datasets.
arXiv Detail & Related papers (2021-05-20T08:59:44Z)
- Set2setRank: Collaborative Set to Set Ranking for Implicit Feedback based Recommendation [59.183016033308014]
In this paper, we explore the unique characteristics of implicit feedback and propose the Set2setRank framework for recommendation.
Our proposed framework is model-agnostic and can be easily applied to most recommendation prediction approaches.
arXiv Detail & Related papers (2021-05-16T08:06:22Z)
- Bootstrapping User and Item Representations for One-Class Collaborative Filtering [24.30834981766022]
One-class collaborative filtering (OCCF) aims to identify user-item pairs that are positively related but have not yet interacted.
This paper proposes a novel OCCF framework, named BUIR, which does not require negative sampling.
arXiv Detail & Related papers (2021-05-13T14:24:13Z)
- Disentangled Graph Collaborative Filtering [100.26835145396782]
Disentangled Graph Collaborative Filtering (DGCF) is a new model for learning informative representations of users and items from interaction data.
By modeling a distribution over intents for each user-item interaction, we iteratively refine the intent-aware interaction graphs and representations.
DGCF achieves significant improvements over several state-of-the-art models like NGCF, DisenGCN, and MacridVAE.
arXiv Detail & Related papers (2020-07-03T15:37:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.