Providing reliability in Recommender Systems through Bernoulli Matrix
Factorization
- URL: http://arxiv.org/abs/2006.03481v6
- Date: Fri, 4 Mar 2022 12:51:36 GMT
- Title: Providing reliability in Recommender Systems through Bernoulli Matrix
Factorization
- Authors: Fernando Ortega, Raúl Lara-Cabrera, Ángel González-Prieto, Jesús Bobadilla
- Abstract summary: This paper proposes Bernoulli Matrix Factorization (BeMF) to provide both prediction values and reliability values.
BeMF acts on model-based collaborative filtering rather than on memory-based filtering.
The more reliable a prediction is, the less liable it is to be wrong.
- Score: 63.732639864601914
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Beyond accuracy, quality measures are gaining importance in modern
recommender systems, with reliability being one of the most important
indicators in the context of collaborative filtering. This paper proposes
Bernoulli Matrix Factorization (BeMF), which is a matrix factorization model,
to provide both prediction values and reliability values. BeMF is an
innovative approach from several perspectives: a) it acts on model-based
collaborative filtering rather than on memory-based filtering, b) unlike
existing solutions, it does not rely on external methods or extended
architectures to provide reliability, c) it is based on a classification model
instead of the traditional regression models, and d) its matrix factorization
formalism is supported by the Bernoulli distribution, which exploits the binary
nature of the designed classification model. The experimental results show that the more
reliable a prediction is, the less liable it is to be wrong: recommendation
quality improves after the most reliable predictions are selected.
Tests with state-of-the-art quality measures for reliability show that BeMF
outperforms previous baseline methods and models.
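The core idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the factor matrices here are random rather than learned (BeMF would fit them by maximizing a Bernoulli likelihood per score), and the dimensions and score set are assumed for the example. For each possible rating score, a logistic matrix factorization gives the probability that a user assigns that score to an item; normalizing these probabilities over scores yields both a prediction (the most probable score) and a reliability (its probability).

```python
import numpy as np

rng = np.random.default_rng(0)

num_users, num_items, num_factors = 4, 5, 3
scores = [1, 2, 3, 4, 5]  # possible rating values

# One pair of (user, item) factor matrices per score; in BeMF these
# would be learned by maximizing a Bernoulli likelihood for each score.
U = {s: rng.normal(size=(num_users, num_factors)) for s in scores}
V = {s: rng.normal(size=(num_items, num_factors)) for s in scores}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict(u, i):
    """Return (predicted score, reliability) for user u and item i."""
    # Bernoulli parameter for each score s: P(rating == s)
    p = np.array([sigmoid(U[s][u] @ V[s][i]) for s in scores])
    p = p / p.sum()                # normalize into a distribution over scores
    k = int(np.argmax(p))
    return scores[k], float(p[k])  # prediction and its reliability

score, reliability = predict(0, 2)
```

Filtering out predictions whose reliability falls below a threshold is then what the abstract refers to as "selecting the most reliable predictions".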
Related papers
- Self-Evolutionary Large Language Models through Uncertainty-Enhanced Preference Optimization [9.618391485742968]
Iterative preference optimization has recently become one of the de-facto training paradigms for large language models (LLMs)
We present an uncertainty-enhanced Preference Optimization framework to make the LLM self-evolve with reliable feedback.
Our framework substantially alleviates the noise problem and improves the performance of iterative preference optimization.
arXiv Detail & Related papers (2024-09-17T14:05:58Z)
- A Framework for Strategic Discovery of Credible Neural Network Surrogate Models under Uncertainty [0.0]
This study presents the Occam Plausibility Algorithm for surrogate models (OPAL-surrogate)
OPAL-surrogate provides a systematic framework to uncover predictive neural network-based surrogate models.
It balances the trade-off between model complexity, accuracy, and prediction uncertainty.
arXiv Detail & Related papers (2024-03-13T18:45:51Z)
- Towards Calibrated Robust Fine-Tuning of Vision-Language Models [97.19901765814431]
This work proposes a robust fine-tuning method that improves both OOD accuracy and confidence calibration simultaneously in vision-language models.
We show that both OOD classification and OOD calibration errors have a shared upper bound consisting of two terms of ID data.
Based on this insight, we design a novel framework that conducts fine-tuning with a constrained multimodal contrastive loss enforcing a larger smallest singular value.
arXiv Detail & Related papers (2023-11-03T05:41:25Z)
- Multiclass Alignment of Confidence and Certainty for Network Calibration [10.15706847741555]
Recent studies reveal that deep neural networks (DNNs) are prone to making overconfident predictions.
We propose a new train-time calibration method, which features a simple, plug-and-play auxiliary loss known as multi-class alignment of predictive mean confidence and predictive certainty (MACC)
Our method achieves state-of-the-art calibration performance for both in-domain and out-domain predictions.
arXiv Detail & Related papers (2023-09-06T00:56:24Z)
- Calibration-Aware Bayesian Learning [37.82259435084825]
This paper proposes an integrated framework, referred to as calibration-aware Bayesian neural networks (CA-BNNs)
It applies data-dependent or data-independent regularizers while optimizing over a variational distribution as in Bayesian learning.
Numerical results validate the advantages of the proposed approach in terms of expected calibration error (ECE) and reliability diagrams.
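For reference, the expected calibration error (ECE) mentioned above bins predictions by confidence and takes a weighted average of the gap between confidence and accuracy in each bin. The sketch below is a standard formulation, not taken from the paper; the bin count and the toy inputs are illustrative assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, correct, num_bins=10):
    """ECE: weighted average of |accuracy - confidence| over confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        acc = correct[mask].mean()       # empirical accuracy in the bin
        conf = confidences[mask].mean()  # mean predicted confidence in the bin
        ece += mask.mean() * abs(acc - conf)
    return ece

# A perfectly calibrated toy example: confidence 0.75, accuracy 3/4
print(expected_calibration_error([0.75, 0.75, 0.75, 0.75],
                                 [True, True, True, False]))  # prints 0.0
```

A reliability diagram plots the same per-bin accuracies against the per-bin confidences; the identity line corresponds to perfect calibration.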
arXiv Detail & Related papers (2023-05-12T14:19:15Z)
- Restricted Bernoulli Matrix Factorization: Balancing the trade-off between prediction accuracy and coverage in classification-based collaborative filtering [45.335821132209766]
We propose Restricted Bernoulli Matrix Factorization (ResBeMF) to enhance the performance of classification-based collaborative filtering.
The proposed model provides a good balance in terms of the quality measures used compared to other recommendation models.
arXiv Detail & Related papers (2022-10-05T13:48:19Z)
- Certified Adversarial Defenses Meet Out-of-Distribution Corruptions: Benchmarking Robustness and Simple Baselines [65.0803400763215]
This work critically examines how adversarial robustness guarantees change when state-of-the-art certifiably robust models encounter out-of-distribution data.
We propose a novel data augmentation scheme, FourierMix, that produces augmentations to improve the spectral coverage of the training data.
We find that FourierMix augmentations help eliminate the spectral bias of certifiably robust models, enabling them to achieve significantly better robustness guarantees on a range of OOD benchmarks.
arXiv Detail & Related papers (2021-12-01T17:11:22Z)
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.