Probabilistic Metric Learning with Adaptive Margin for Top-K
Recommendation
- URL: http://arxiv.org/abs/2101.04849v1
- Date: Wed, 13 Jan 2021 03:11:04 GMT
- Title: Probabilistic Metric Learning with Adaptive Margin for Top-K
Recommendation
- Authors: Chen Ma, Liheng Ma, Yingxue Zhang, Ruiming Tang, Xue Liu and Mark
Coates
- Abstract summary: We develop a distance-based recommendation model with several novel aspects.
The proposed model outperforms the best existing models by 4-22% in terms of recall@K on Top-K recommendation.
- Score: 40.80017379274105
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Personalized recommender systems are playing an increasingly important role
as more content and services become available and users struggle to identify
what might interest them. Although matrix factorization and deep learning based
methods have proved effective in user preference modeling, they violate the
triangle inequality and fail to capture fine-grained preference information. To
tackle this, we develop a distance-based recommendation model with several
novel aspects: (i) each user and item is parameterized by a Gaussian
distribution to capture the learning uncertainty; (ii) an adaptive margin
generation scheme is proposed to generate the margins regarding different
training triplets; (iii) explicit user-user/item-item similarity modeling is
incorporated in the objective function. The Wasserstein distance is employed to
determine preferences because it obeys the triangle inequality and can measure
the distance between probability distributions. In a comparison with
state-of-the-art methods on five real-world datasets, the proposed model
outperforms the best existing models by 4-22% in terms of recall@K on Top-K
recommendation.
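
For Gaussians with diagonal covariances, the 2-Wasserstein distance the abstract invokes has a simple closed form: the squared distance is the squared Euclidean distance between the means plus the squared Euclidean distance between the standard-deviation vectors. A minimal sketch under that diagonal-covariance assumption (variable names are illustrative; the paper's exact parameterization may differ):

```python
import numpy as np

def wasserstein2_sq_diag(mu1, sigma1, mu2, sigma2):
    """Squared W2 distance between N(mu1, diag(sigma1^2)) and N(mu2, diag(sigma2^2))."""
    return np.sum((mu1 - mu2) ** 2) + np.sum((sigma1 - sigma2) ** 2)

# Example: a user and an item embedded in a 4-dimensional latent space.
user_mu, user_sigma = np.array([0.1, 0.3, -0.2, 0.5]), np.full(4, 0.2)
item_mu, item_sigma = np.array([0.0, 0.4, -0.1, 0.6]), np.full(4, 0.3)
print(wasserstein2_sq_diag(user_mu, user_sigma, item_mu, item_sigma))
```

Unlike the inner product used by matrix factorization, the W2 distance is a proper metric and thus obeys the triangle inequality, which is the property the abstract highlights.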
Related papers
- Preference Diffusion for Recommendation [50.8692409346126]
We propose PreferDiff, a tailored optimization objective for diffusion model (DM)-based recommenders.
PreferDiff transforms BPR into a log-likelihood ranking objective to better capture user preferences.
It is the first personalized ranking loss designed specifically for DM-based recommenders.
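
For reference, a minimal sketch of the standard Bayesian Personalized Ranking (BPR) objective that PreferDiff reshapes into a log-likelihood ranking loss; how the objective is adapted to diffusion models is not reproduced here:

```python
import torch

def bpr_loss(pos_scores: torch.Tensor, neg_scores: torch.Tensor) -> torch.Tensor:
    """-log sigmoid(s_pos - s_neg), averaged over the batch: the likelihood
    that each positive item ranks above its sampled negative."""
    return -torch.nn.functional.logsigmoid(pos_scores - neg_scores).mean()

# Example with random scores for a batch of 8 training triplets.
pos, neg = torch.randn(8), torch.randn(8)
print(bpr_loss(pos, neg))
```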
arXiv Detail & Related papers (2024-10-17T01:02:04Z)
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to capture potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
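
A hypothetical sketch of the max-margin idea behind such a model: learn criterion weights so that preferred alternatives out-score dispreferred ones by a margin, penalizing violations with a hinge loss. The paper's handling of non-monotonic marginal values is omitted, and all names below are illustrative:

```python
import torch

def hinge_preference_loss(w, x_a, x_b, margin=1.0):
    """Penalize violations of u(a) >= u(b) + margin, with utility u(x) = w . x,
    for pairs where alternative a is preferred to alternative b."""
    return torch.clamp(margin - (x_a - x_b) @ w, min=0.0).mean()

x_a, x_b = torch.rand(16, 5), torch.rand(16, 5)   # 16 labeled pairs, 5 criteria
w = torch.zeros(5, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)
for _ in range(100):
    opt.zero_grad()
    loss = hinge_preference_loss(w, x_a, x_b)
    loss.backward()
    opt.step()
```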
arXiv Detail & Related papers (2024-09-04T14:36:20Z)
- Debiased Recommendation with Noisy Feedback [41.38490962524047]
We study the intersectional threats to unbiased learning of the prediction model posed by data missing not at random (MNAR) and outcome measurement error (OME) in the collected data.
First, we design OME-EIB, OME-IPS, and OME-DR estimators, which substantially extend existing estimators to combat OME in real-world recommendation scenarios.
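
A sketch of the plain inverse-propensity-scoring (IPS) estimator these OME-* estimators build on, which corrects for MNAR data by reweighting observed losses by the inverse probability of observation; the outcome-measurement-error correction itself is not shown:

```python
import numpy as np

def ips_estimate(loss, observed, propensity):
    """Unbiased estimate of the full-matrix average loss using only
    observed entries, each reweighted by its inverse propensity."""
    return np.sum(observed * loss / propensity) / loss.size

rng = np.random.default_rng(0)
loss = rng.random((100, 50))                       # per user-item prediction loss
propensity = np.clip(rng.random((100, 50)), 0.1, 1.0)  # P(entry observed)
observed = rng.random((100, 50)) < propensity      # MNAR observation mask
print(ips_estimate(loss, observed, propensity))
```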
arXiv Detail & Related papers (2024-06-24T23:42:18Z)
- Mining Stable Preferences: Adaptive Modality Decorrelation for Multimedia Recommendation [23.667430143035787]
We propose a novel MOdality DEcorrelating STable learning framework, MODEST for brevity, to learn users' stable preferences.
Inspired by sample re-weighting techniques, the proposed method aims to estimate a weight for each item, such that the features from different modalities in the weighted distribution are decorrelated.
Our method can serve as a plug-and-play module for existing multimedia recommendation backbones.
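
An illustrative sketch (not MODEST's actual code) of sample re-weighting for decorrelation: learn weights over items so that the weighted covariance between two correlated modality features is driven toward zero:

```python
import torch

x = torch.randn(512)                      # e.g. a visual feature per item
y = 0.8 * x + 0.2 * torch.randn(512)      # a correlated textual feature

log_w = torch.zeros(512, requires_grad=True)
opt = torch.optim.Adam([log_w], lr=0.05)
for _ in range(300):
    w = torch.softmax(log_w, dim=0)       # weights form a distribution over items
    # Weighted covariance between the two modality features.
    cov = (w * (x - (w * x).sum()) * (y - (w * y).sum())).sum()
    opt.zero_grad()
    (cov ** 2).backward()                 # drive the weighted covariance to zero
    opt.step()
```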
arXiv Detail & Related papers (2023-06-25T09:09:11Z)
- Rethinking Missing Data: Aleatoric Uncertainty-Aware Recommendation [59.500347564280204]
We propose a new Aleatoric Uncertainty-aware Recommendation (AUR) framework.
AUR consists of a new uncertainty estimator along with a normal recommender model.
Because the chance of mislabeling reflects the potential of a user-item pair, AUR makes recommendations according to the estimated uncertainty.
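
A hypothetical sketch of the two-component design described above, pairing a standard recommender score with a learned uncertainty head and ranking by the estimated uncertainty; the architecture here is illustrative, not the paper's:

```python
import torch
import torch.nn as nn

class UncertaintyAwareRec(nn.Module):
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.user = nn.Embedding(n_users, dim)
        self.item = nn.Embedding(n_items, dim)
        self.unc_head = nn.Linear(2 * dim, 1)   # the uncertainty estimator

    def forward(self, u, i):
        eu, ei = self.user(u), self.item(i)
        score = (eu * ei).sum(-1)               # normal recommender score
        unc = torch.sigmoid(self.unc_head(torch.cat([eu, ei], -1))).squeeze(-1)
        return score, unc

model = UncertaintyAwareRec(100, 500)
u = torch.zeros(500, dtype=torch.long)          # score one user against all items
scores, unc = model(u, torch.arange(500))
top_k = unc.topk(10).indices                    # recommend by estimated uncertainty
```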
arXiv Detail & Related papers (2022-09-22T04:32:51Z)
- Approximate Bayesian Optimisation for Neural Networks [6.921210544516486]
A body of work has sought to automate machine learning algorithms, highlighting the importance of model choice.
Balancing analytical tractability with computational feasibility is necessary to ensure both efficiency and applicability.
arXiv Detail & Related papers (2021-08-27T19:03:32Z)
- S^3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization [104.87483578308526]
We propose the model S^3-Rec, which stands for Self-Supervised learning for Sequential Recommendation.
For our task, we devise four auxiliary self-supervised objectives to learn the correlations among attribute, item, subsequence, and sequence.
Extensive experiments conducted on six real-world datasets demonstrate the superiority of our proposed method over existing state-of-the-art methods.
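
One common way to implement such mutual information maximization is an in-batch InfoNCE loss between two views of the same example (e.g. an item and its attributes); a minimal sketch, with the encoders that produce the two views omitted and the same-index pairing assumed:

```python
import torch
import torch.nn.functional as F

def info_nce(anchor: torch.Tensor, positive: torch.Tensor, temp=0.1):
    """In-batch InfoNCE: each anchor's positive is the same-index row;
    all other rows in the batch act as negatives."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / temp        # [B, B] similarity matrix
    labels = torch.arange(anchor.size(0))
    return F.cross_entropy(logits, labels)

a, p = torch.randn(32, 64), torch.randn(32, 64)  # two views, batch of 32
print(info_nce(a, p))
```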
arXiv Detail & Related papers (2020-08-18T11:44:10Z)
- Semi-nonparametric Latent Class Choice Model with a Flexible Class Membership Component: A Mixture Model Approach [6.509758931804479]
The proposed model formulates the latent classes using mixture models as an alternative approach to the traditional random utility specification.
Results show that mixture models improve the overall performance of latent class choice models.
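
A toy sketch of a latent class choice likelihood in this spirit, mixing class-specific logit models with mixture (class) probabilities; all parameter values below are made up for demonstration:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

X = np.random.rand(4, 3)                 # 4 alternatives, 3 attributes each
betas = [np.array([1.0, -0.5, 0.2]),     # class-specific taste parameters
         np.array([-0.3, 0.8, 0.1])]
pi = np.array([0.6, 0.4])                # mixture (class) probabilities

# P(choice) = sum_k pi_k * softmax(X beta_k)
choice_probs = sum(p * softmax(X @ b) for p, b in zip(pi, betas))
print(choice_probs, choice_probs.sum())  # probabilities sum to 1
```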
arXiv Detail & Related papers (2020-07-06T13:19:26Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for metric-based few-shot approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
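
A sketch of confidence-weighted transductive prototype refinement of the kind described above; here the confidence comes from a fixed-temperature softmax over distances, whereas the paper instead meta-learns these confidence weights:

```python
import torch

def refine_prototypes(protos, queries, temp=1.0):
    """protos: [C, D] class prototypes; queries: [Q, D] unlabeled embeddings.
    Each query contributes to each prototype in proportion to its confidence."""
    conf = torch.softmax(-torch.cdist(queries, protos) / temp, dim=1)  # [Q, C]
    num = protos + conf.t() @ queries          # support mean + weighted queries
    den = 1.0 + conf.sum(dim=0, keepdim=True).t()
    return num / den

protos = torch.randn(5, 64)                    # 5-way episode, 64-d features
queries = torch.randn(75, 64)                  # 15 queries per class
print(refine_prototypes(protos, queries).shape)   # torch.Size([5, 64])
```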
arXiv Detail & Related papers (2020-02-27T10:22:17Z)