Toward a Better Understanding of Loss Functions for Collaborative
Filtering
- URL: http://arxiv.org/abs/2308.06091v2
- Date: Mon, 30 Oct 2023 10:33:49 GMT
- Title: Toward a Better Understanding of Loss Functions for Collaborative
Filtering
- Authors: Seongmin Park, Mincheol Yoon, Jae-woong Lee, Hogun Park, Jongwuk Lee
- Abstract summary: Collaborative filtering (CF) is a pivotal technique in modern recommender systems.
Recent work shows that simply reformulating the loss functions can achieve significant performance gains.
We propose a novel loss function that improves the design of alignment and uniformity.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Collaborative filtering (CF) is a pivotal technique in modern recommender
systems. The learning process of CF models typically consists of three
components: interaction encoder, loss function, and negative sampling. Although
many existing studies have proposed various CF models to design sophisticated
interaction encoders, recent work shows that simply reformulating the loss
functions can achieve significant performance gains. This paper delves into
analyzing the relationship among existing loss functions. Our mathematical
analysis reveals that the previous loss functions can be interpreted as
alignment and uniformity functions: (i) the alignment matches user and item
representations, and (ii) the uniformity disperses user and item distributions.
Inspired by this analysis, we propose a novel loss function that improves the
design of alignment and uniformity considering the unique patterns of datasets
called Margin-aware Alignment and Weighted Uniformity (MAWU). The key novelty
of MAWU is two-fold: (i) margin-aware alignment (MA) mitigates
user/item-specific popularity biases, and (ii) weighted uniformity (WU) adjusts
the significance between user and item uniformities to reflect the inherent
characteristics of datasets. Extensive experimental results show that MF and
LightGCN equipped with MAWU are comparable to or outperform state-of-the-art CF
models with various loss functions on three public datasets.
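The abstract's decomposition into alignment (matching user and item representations) and uniformity (dispersing them on the hypersphere) can be sketched in code. The sketch below uses the standard hypersphere formulations of these two terms; the margin and the per-side uniformity weights (`margin`, `gamma_u`, `gamma_i`) are illustrative placeholders for the paper's margin-aware alignment (MA) and weighted uniformity (WU), not its exact formulation or hyperparameter values.

```python
import numpy as np

def normalize(x):
    """L2-normalize rows onto the unit hypersphere."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def alignment(users, items, margin=0.0):
    """Alignment: pull each matched user/item pair together.
    The scalar margin is a stand-in for the paper's margin-aware
    alignment, which offsets user/item popularity bias."""
    u, i = normalize(users), normalize(items)
    sq_dist = np.sum((u - i) ** 2, axis=1)
    return np.mean(np.maximum(sq_dist - margin, 0.0))

def uniformity(x, t=2.0):
    """Uniformity: log of the mean pairwise Gaussian potential;
    lower values mean embeddings are spread more evenly."""
    z = normalize(x)
    d2 = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)
    iu = np.triu_indices(len(z), k=1)  # distinct pairs only
    return np.log(np.mean(np.exp(-t * d2[iu])))

def mawu_style_loss(users, items, margin=0.1, gamma_u=0.5, gamma_i=0.5):
    """Sketch of an MAWU-style objective: margin-aware alignment plus
    separately weighted user and item uniformities. gamma_u and gamma_i
    weigh the two uniformity terms per dataset, per the WU idea."""
    return (alignment(users, items, margin)
            + gamma_u * uniformity(users)
            + gamma_i * uniformity(items))
```

In this framing, minimizing the first term matches interacting user/item pairs, while the two uniformity terms keep the user and item distributions from collapsing; weighting them separately lets the balance reflect dataset characteristics.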
Related papers
- PCF-Lift: Panoptic Lifting by Probabilistic Contrastive Fusion [80.79938369319152]
We design a new pipeline coined PCF-Lift based on our Probabilistic Contrastive Fusion (PCF).
PCF-Lift significantly outperforms state-of-the-art methods on widely used benchmarks, including the ScanNet dataset and the Messy Room dataset (4.4% improvement in scene-level PQ).
arXiv Detail & Related papers (2024-10-14T16:06:59Z) - A Framework for Fine-Tuning LLMs using Heterogeneous Feedback [69.51729152929413]
We present a framework for fine-tuning large language models (LLMs) using heterogeneous feedback.
First, we combine the heterogeneous feedback data into a single supervision format, compatible with methods like SFT and RLHF.
Next, given this unified feedback dataset, we extract a high-quality and diverse subset to obtain performance increases.
arXiv Detail & Related papers (2024-08-05T23:20:32Z) - Fully Differentiable Correlation-driven 2D/3D Registration for X-ray to CT Image Fusion [3.868072865207522]
Image-based rigid 2D/3D registration is a critical technique for fluoroscopic guided surgical interventions.
We propose a novel fully differentiable correlation-driven network using a dual-branch CNN-transformer encoder.
A correlation-driven loss is proposed that decomposes embedded information into low-frequency and high-frequency features.
arXiv Detail & Related papers (2024-02-04T14:12:51Z) - NPEFF: Non-Negative Per-Example Fisher Factorization [52.44573961263344]
We introduce a novel interpretability method called NPEFF that is readily applicable to any end-to-end differentiable model.
We demonstrate that NPEFF has interpretable tunings through experiments on language and vision models.
arXiv Detail & Related papers (2023-10-07T02:02:45Z) - uCTRL: Unbiased Contrastive Representation Learning via Alignment and
Uniformity for Collaborative Filtering [6.663503238373593]
Collaborative filtering (CF) models tend to yield recommendation lists with popularity bias.
We propose Unbiased ConTrastive Representation Learning (uCTRL) to mitigate this problem.
We also devise a novel IPW estimation method that removes the bias of both users and items.
arXiv Detail & Related papers (2023-05-22T06:55:38Z) - Con$^{2}$DA: Simplifying Semi-supervised Domain Adaptation by Learning
Consistent and Contrastive Feature Representations [1.2891210250935146]
Con$^{2}$DA is a framework that extends recent advances in semi-supervised learning to the semi-supervised domain adaptation problem.
Our framework generates pairs of associated samples by performing data transformations to a given input.
We use different loss functions to enforce consistency between the feature representations of associated data pairs of samples.
arXiv Detail & Related papers (2022-04-04T15:05:45Z) - C$^{4}$Net: Contextual Compression and Complementary Combination Network
for Salient Object Detection [0.0]
We show that feature concatenation works better than other combination methods like multiplication or addition.
Also, joint feature learning gives better results, because of the information sharing during their processing.
arXiv Detail & Related papers (2021-10-22T16:14:10Z) - Feature Weighted Non-negative Matrix Factorization [92.45013716097753]
We propose the Feature weighted Non-negative Matrix Factorization (FNMF) in this paper.
FNMF learns the weights of features adaptively according to their importance.
It can be solved efficiently with the suggested optimization algorithm.
arXiv Detail & Related papers (2021-03-24T21:17:17Z) - BCFNet: A Balanced Collaborative Filtering Network with Attention
Mechanism [106.43103176833371]
Collaborative Filtering (CF) based recommendation methods have been widely studied.
We propose a novel recommendation model named Balanced Collaborative Filtering Network (BCFNet)
In addition, an attention mechanism is designed to better capture the hidden information within implicit feedback and strengthen the learning ability of the neural network.
arXiv Detail & Related papers (2021-03-10T14:59:23Z) - Out-of-distribution Generalization via Partial Feature Decorrelation [72.96261704851683]
We present a novel Partial Feature Decorrelation Learning (PFDL) algorithm, which jointly optimizes a feature decomposition network and the target image classification model.
The experiments on real-world datasets demonstrate that our method can improve the backbone model's accuracy on OOD image classification datasets.
arXiv Detail & Related papers (2020-07-30T05:48:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.