Bootstrapping User and Item Representations for One-Class Collaborative
Filtering
- URL: http://arxiv.org/abs/2105.06323v1
- Date: Thu, 13 May 2021 14:24:13 GMT
- Title: Bootstrapping User and Item Representations for One-Class Collaborative
Filtering
- Authors: Dongha Lee, SeongKu Kang, Hyunjun Ju, Chanyoung Park, Hwanjo Yu
- Abstract summary: One-class collaborative filtering (OCCF) aims to identify user-item pairs that are positively related but have not yet been interacted with.
This paper proposes a novel OCCF framework, named BUIR, which does not require negative sampling.
- Score: 24.30834981766022
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The goal of one-class collaborative filtering (OCCF) is to identify
user-item pairs that are positively related but have not yet been interacted
with, given that only a small portion of positive user-item interactions (e.g.,
users' implicit feedback) is observed. To discriminate between positive
and negative interactions, most previous work relied on negative sampling to
some extent, which refers to considering unobserved user-item pairs as
negative, as actual negative ones are unknown. However, the negative sampling
scheme has critical limitations because it may choose "positive but unobserved"
pairs as negative. This paper proposes a novel OCCF framework, named BUIR,
which does not require negative sampling. To make the representations of
positively related users and items similar to each other while avoiding a
collapsed solution, BUIR adopts two distinct encoder networks that learn from
each other; the first encoder is trained to predict the output of the second
encoder as its target, while the second encoder provides consistent targets
by slowly approximating the first encoder. In addition, BUIR effectively
alleviates the data sparsity issue of OCCF by applying stochastic data
augmentation to encoder inputs. Based on the neighborhood information of users
and items, BUIR randomly generates augmented views of each positive
interaction at every encoding step, and further trains the model through this
self-supervision. Our extensive experiments demonstrate that BUIR consistently
outperforms all baseline methods by a large margin, especially on highly
sparse datasets, where assumptions about negative interactions are less valid.
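
To make the two-encoder scheme above concrete, here is a minimal PyTorch sketch of a BUIR-style training step. It is a sketch under stated assumptions, not the authors' implementation: it assumes the simplest ID-based variant (plain embedding tables as encoders), the names `BUIRSketch`, `momentum`, and `predictor` are illustrative, and the paper's neighborhood-based stochastic augmentation is omitted for brevity.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class BUIRSketch(nn.Module):
    """Illustrative BUIR-style model: an online encoder is trained to
    predict the output of a slowly-updated target encoder, so no
    negative sampling is required."""

    def __init__(self, n_users, n_items, dim=64, momentum=0.995):
        super().__init__()
        self.momentum = momentum
        # Online encoder: plain embedding tables (ID-based variant).
        self.online_user = nn.Embedding(n_users, dim)
        self.online_item = nn.Embedding(n_items, dim)
        # Target encoder: a momentum copy, never updated by gradients.
        self.target_user = copy.deepcopy(self.online_user)
        self.target_item = copy.deepcopy(self.online_item)
        for p in self.target_user.parameters():
            p.requires_grad = False
        for p in self.target_item.parameters():
            p.requires_grad = False
        # Predictor maps online outputs toward the target's outputs.
        self.predictor = nn.Linear(dim, dim)

    @torch.no_grad()
    def update_target(self):
        # The target provides consistent targets by slowly
        # approximating the online encoder (exponential moving average).
        pairs = [(self.online_user, self.target_user),
                 (self.online_item, self.target_item)]
        for online, target in pairs:
            for p_o, p_t in zip(online.parameters(), target.parameters()):
                p_t.data.mul_(self.momentum).add_(p_o.data,
                                                  alpha=1.0 - self.momentum)

    def loss(self, users, items):
        # Cross-prediction: the online user representation predicts the
        # target item representation, and vice versa. Targets are
        # computed without gradients, so only the online side learns.
        u_online = self.predictor(self.online_user(users))
        i_online = self.predictor(self.online_item(items))
        with torch.no_grad():
            u_target = self.target_user(users)
            i_target = self.target_item(items)
        # Negative cosine similarity; up to constants, this equals the
        # squared error between L2-normalized vectors.
        loss_ui = -F.cosine_similarity(u_online, i_target, dim=-1).mean()
        loss_iu = -F.cosine_similarity(i_online, u_target, dim=-1).mean()
        return loss_ui + loss_iu
```

A training loop would then alternate a gradient step on the online encoder with a momentum update of the target, using only observed positive pairs (no negatives sampled):

```python
model = BUIRSketch(n_users=1000, n_items=2000)
optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)

# `loader` is assumed to yield (user_ids, item_ids) LongTensor batches
# of observed positive interactions.
for users, items in loader:
    optimizer.zero_grad()
    model.loss(users, items).backward()
    optimizer.step()
    model.update_target()  # slow EMA update of the target encoder
```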
Related papers
- Towards Unified Modeling for Positive and Negative Preferences in
Sign-Aware Recommendation [13.300975621769396]
We propose a novel Light Signed Graph Convolution Network specifically for Recommendation (LSGRec)
For the negative preferences within high-order heterogeneous interactions, first-order negative preferences are captured by the negative links.
Recommendation results are generated based on positive preferences and optimized with negative ones.
arXiv Detail & Related papers (2024-03-13T05:00:42Z) - Topology-aware Debiased Self-supervised Graph Learning for
Recommendation [6.893289671937124]
We propose Topology-aware Debiased Self-supervised Graph Learning (TDSGL) for recommendation.
TDSGL constructs contrastive pairs according to the semantic similarity between users (items)
Our results show that the proposed model outperforms the state-of-the-art models significantly on three public datasets.
arXiv Detail & Related papers (2023-10-24T14:16:19Z) - Cluster-guided Contrastive Graph Clustering Network [53.16233290797777]
We propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC)
We construct two views of the graph by designing special Siamese encoders whose weights are not shared between the sibling sub-networks.
To construct semantically meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples.
arXiv Detail & Related papers (2023-01-03T13:42:38Z) - Rethinking Missing Data: Aleatoric Uncertainty-Aware Recommendation [59.500347564280204]
We propose a new Aleatoric Uncertainty-aware Recommendation (AUR) framework.
AUR consists of a new uncertainty estimator along with a normal recommender model.
As the chance of mislabeling reflects the potential of a pair, AUR makes recommendations according to the uncertainty.
arXiv Detail & Related papers (2022-09-22T04:32:51Z) - Generating Negative Samples for Sequential Recommendation [83.60655196391855]
We propose to generate negative samples (items) for sequential recommendation (SR)
A negative item is sampled at each time step based on the current SR model's learned user preferences toward items.
Experiments on four public datasets verify the importance of providing high-quality negative samples for SR.
arXiv Detail & Related papers (2022-08-07T05:44:13Z) - Collaborative Reflection-Augmented Autoencoder Network for Recommender
Systems [23.480069921831344]
We develop a Collaborative Reflection-Augmented Autoencoder Network (CRANet)
CRANet is capable of exploring transferable knowledge from observed and unobserved user-item interactions.
We experimentally validate CRANet on four diverse benchmark datasets corresponding to two recommendation tasks.
arXiv Detail & Related papers (2022-01-10T04:36:15Z) - SelfCF: A Simple Framework for Self-supervised Collaborative Filtering [72.68215241599509]
Collaborative filtering (CF) is widely used to learn informative latent representations of users and items from observed interactions.
We propose a self-supervised collaborative filtering framework (SelfCF) that is specifically designed for recommender scenarios with implicit feedback.
We show that SelfCF can boost up the accuracy by up to 17.79% on average, compared with a self-supervised framework BUIR.
arXiv Detail & Related papers (2021-07-07T05:21:12Z) - Set2setRank: Collaborative Set to Set Ranking for Implicit Feedback
based Recommendation [59.183016033308014]
In this paper, we explore the unique characteristics of implicit feedback and propose the Set2setRank framework for recommendation.
Our proposed framework is model-agnostic and can be easily applied to most recommendation prediction approaches.
arXiv Detail & Related papers (2021-05-16T08:06:22Z) - Contrastive Attraction and Contrastive Repulsion for Representation
Learning [131.72147978462348]
Contrastive learning (CL) methods learn data representations in a self-supervised manner, where the encoder contrasts each positive sample over multiple negative samples.
Recent CL methods have achieved promising results when pretrained on large-scale datasets, such as ImageNet.
We propose a doubly contrastive strategy that separately compares positive and negative samples within their own groups, and then proceeds with a contrast between positive and negative groups.
arXiv Detail & Related papers (2021-05-08T17:25:08Z) - Simplify and Robustify Negative Sampling for Implicit Collaborative
Filtering [42.832851785261894]
In this paper, we first provide a novel understanding of negative instances by empirically observing that only a few instances are potentially important for model learning.
We then tackle the untouched false negative problem by favouring high-variance samples stored in memory.
Empirical results on two synthetic datasets and three real-world datasets demonstrate both robustness and superiorities of our negative sampling method.
arXiv Detail & Related papers (2020-09-07T19:08:26Z)