Incorporating Recklessness to Collaborative Filtering based Recommender Systems
- URL: http://arxiv.org/abs/2308.02058v3
- Date: Tue, 21 May 2024 10:22:50 GMT
- Title: Incorporating Recklessness to Collaborative Filtering based Recommender Systems
- Authors: Diego Pérez-López, Fernando Ortega, Ángel González-Prieto, Jorge Dueñas-Lerín
- Abstract summary: The proposed recklessness term takes into account the variance of the output probability distribution of the predicted ratings.
Experimental results demonstrate that recklessness not only allows for risk regulation but also improves the quantity and quality of predictions.
- Score: 42.956580283193176
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommender systems are intrinsically tied to a reliability/coverage dilemma: the more reliable we desire the forecasts, the more conservative the decisions will be and, thus, the fewer items will be recommended. This harms the predictive capability of the system, as it can only estimate potential interest in items for which there is a consensus in their evaluation, rather than in any item. In this paper, we propose the inclusion of a new term in the learning process of matrix factorization-based recommender systems, called recklessness, which takes into account the variance of the output probability distribution of the predicted ratings. In this way, by gauging this recklessness measure we can force a more spiky output distribution, enabling control of the desired risk level when deciding whether a prediction is reliable. Experimental results demonstrate that recklessness not only allows for risk regulation but also improves the quantity and quality of the predictions provided by the recommender system.
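To make the idea concrete, below is a minimal sketch of how such a term could enter a classification-based matrix factorization loss. The per-rating-value factors (p_u, q_i), the softmax output, and the weight lambda_reck are illustrative assumptions, not the paper's actual formulation.
```python
import numpy as np

RATING_VALUES = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # discrete rating scale (assumed)

def predict_distribution(p_u, q_i):
    """Output probability distribution over the possible ratings for one (user, item)
    pair; p_u and q_i hold one latent vector per rating value (shape: n_ratings x k)."""
    scores = np.einsum("sk,sk->s", p_u, q_i)      # one score per rating value
    exp_scores = np.exp(scores - scores.max())    # numerically stable softmax
    return exp_scores / exp_scores.sum()

def loss_with_recklessness(p_u, q_i, observed_rating, lambda_reck=0.1):
    """Negative log-likelihood of the observed rating plus a recklessness term based
    on the variance of the predicted rating distribution: a positive lambda_reck
    penalizes variance and therefore pushes the output distribution to be spikier."""
    dist = predict_distribution(p_u, q_i)
    s = int(np.argmin(np.abs(RATING_VALUES - observed_rating)))
    nll = -np.log(dist[s] + 1e-12)
    mean = float(np.sum(dist * RATING_VALUES))
    variance = float(np.sum(dist * (RATING_VALUES - mean) ** 2))
    return nll + lambda_reck * variance

# Toy usage: random factors for one (user, item) pair with an observed rating of 4
rng = np.random.default_rng(0)
p_u = rng.normal(size=(len(RATING_VALUES), 8))
q_i = rng.normal(size=(len(RATING_VALUES), 8))
print(loss_with_recklessness(p_u, q_i, observed_rating=4.0))
```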
Related papers
- Are Recommenders Self-Aware? Label-Free Recommendation Performance Estimation via Model Uncertainty [27.396301623717072]
This paper investigates the recommender's self-awareness by quantifying its uncertainty. We propose probability-based List Distribution uncertainty (LiDu), which measures uncertainty by determining the probability that a recommender will generate a certain ranking list.
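For illustration only: one common way to assign a probability to a ranking list given per-item scores is the Plackett-Luce model sketched below; the function name and the use of Plackett-Luce are assumptions, and the paper's exact construction may differ.
```python
import numpy as np

def ranking_probability(scores, ranking):
    """Plackett-Luce probability of producing `ranking` (a list of item indices) from
    per-item scores: items are drawn without replacement, each with probability
    proportional to exp(score) among the items not yet placed."""
    weights = np.exp(np.asarray(scores, dtype=float))
    prob, remaining = 1.0, weights.sum()
    for item in ranking:
        prob *= weights[item] / remaining
        remaining -= weights[item]
    return prob

# A peaked score vector yields a high-probability (low-uncertainty) top ranking
print(ranking_probability([3.0, 0.1, 0.0], ranking=[0, 1, 2]))
```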
arXiv Detail & Related papers (2025-07-31T03:04:34Z)
- Truthful Elicitation of Imprecise Forecasts [11.153198087930756]
We propose a framework for scoring imprecise forecasts -- forecasts given as a set of beliefs.
We show that truthful elicitation of imprecise forecasts is achievable using proper scoring rules randomized over the aggregation procedure.
arXiv Detail & Related papers (2025-03-20T17:53:35Z)
- Calibrated Probabilistic Forecasts for Arbitrary Sequences [58.54729945445505]
Real-world data streams can change unpredictably due to distribution shifts, feedback loops and adversarial actors.
We present a forecasting framework ensuring valid uncertainty estimates regardless of how data evolves.
arXiv Detail & Related papers (2024-09-27T21:46:42Z)
- Measuring Recency Bias In Sequential Recommendation Systems [4.797371814812293]
Recency bias in a sequential recommendation system refers to the overly high emphasis placed on recent items within a user session.
This bias can diminish the serendipity of recommendations and hinder the system's ability to capture users' long-term interests.
We propose a simple yet effective novel metric specifically designed to quantify recency bias.
arXiv Detail & Related papers (2024-09-15T13:02:50Z)
- Probabilistic load forecasting with Reservoir Computing [10.214379018902914]
This work focuses on reservoir computing as the core time series forecasting method.
While the RC literature mostly focused on point forecasting, this work explores the compatibility of some popular uncertainty quantification methods with the reservoir setting.
arXiv Detail & Related papers (2023-08-24T15:07:08Z)
- Uncertainty Calibration for Counterfactual Propensity Estimation in Recommendation [22.67361489565711]
The inverse propensity score (IPS) is employed to weight the prediction error of each observed instance.
IPS-based recommendations are hampered by miscalibration in propensity estimation.
We introduce a model-agnostic calibration framework for propensity-based debiasing of CVR predictions.
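For context, the snippet below sketches the standard IPS-weighted error estimate referred to above; the function name and the squared-error choice are illustrative assumptions, and the paper's calibration framework itself is not reproduced.
```python
import numpy as np

def ips_weighted_error(predictions, outcomes, observed_mask, propensities):
    """Standard IPS estimate of the prediction error over all (user, item) pairs:
    each observed pair's squared error is up-weighted by 1 / P(pair is observed).
    Miscalibrated propensities distort these weights and hence the estimate."""
    errors = (predictions - outcomes) ** 2
    weights = observed_mask / np.clip(propensities, 1e-6, None)
    return float(np.sum(weights * errors) / predictions.size)

preds = np.array([0.8, 0.4, 0.6])   # predicted conversion rates
truth = np.array([1.0, 0.0, 1.0])   # outcomes (only observed ones contribute)
obs   = np.array([1.0, 0.0, 1.0])   # which pairs were actually observed
prop  = np.array([0.9, 0.5, 0.3])   # estimated observation propensities
print(ips_weighted_error(preds, truth, obs, prop))
```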
arXiv Detail & Related papers (2023-03-23T00:42:48Z)
- Restricted Bernoulli Matrix Factorization: Balancing the trade-off between prediction accuracy and coverage in classification based collaborative filtering [45.335821132209766]
We propose Restricted Bernoulli Matrix Factorization (ResBeMF) to enhance the performance of classification-based collaborative filtering.
The proposed model provides a good balance in terms of the quality measures used compared to other recommendation models.
arXiv Detail & Related papers (2022-10-05T13:48:19Z)
- Recommendation Systems with Distribution-Free Reliability Guarantees [83.80644194980042]
We show how to return a set of items rigorously guaranteed to contain mostly good items.
Our procedure endows any ranking model with rigorous finite-sample control of the false discovery rate.
We evaluate our methods on the Yahoo! Learning to Rank and MSMarco datasets.
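As a generic illustration of false discovery rate control (not the paper's specific procedure), the Benjamini-Hochberg step-up rule below selects items from hypothetical per-item p-values while keeping the expected fraction of "bad" returned items below alpha.
```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.1):
    """Benjamini-Hochberg step-up rule: return the indices of items to recommend,
    controlling the expected fraction of 'bad' items among those returned at alpha."""
    p = np.asarray(p_values, dtype=float)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, len(p) + 1) / len(p)
    below = p[order] <= thresholds
    if not below.any():
        return np.array([], dtype=int)
    k = int(np.max(np.nonzero(below)[0]))   # largest rank meeting its threshold
    return order[: k + 1]

# Items 0, 1 and 3 survive; item 2 (p = 0.4) is dropped
print(benjamini_hochberg([0.001, 0.02, 0.4, 0.03], alpha=0.1))
```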
arXiv Detail & Related papers (2022-07-04T17:49:25Z)
- Quantifying Availability and Discovery in Recommender Systems via Stochastic Reachability [27.21058243752746]
We propose an evaluation procedure based on reachability to quantify the maximum probability of recommending a target piece of content to a user.
Stochastic reachability can be used to detect biases in the availability of content and to diagnose limitations in the opportunities for discovery granted to users.
We demonstrate evaluations of recommendation algorithms trained on large datasets of explicit and implicit ratings.
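A rough toy interpretation of the reachability idea, stated purely as an assumption rather than the paper's formulation: treat the recommender as a softmax over item scores and search over the edits a user could make to their own history for the maximum probability of the target item being recommended.
```python
import numpy as np
from itertools import product

def max_reachability(score_fn, base_history, edit_options, target_item):
    """Maximum probability of recommending `target_item`, searched over every
    combination of user-controllable history edits; `score_fn(history)` stands in
    for the trained recommender's item scores (an assumption for this sketch)."""
    best = 0.0
    for edits in product(*edit_options):
        scores = np.asarray(score_fn(list(base_history) + list(edits)), dtype=float)
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        best = max(best, float(probs[target_item]))
    return best

# Dummy recommender over 5 items: items already in the history get boosted scores
dummy = lambda hist: np.bincount(hist, minlength=5).astype(float)
print(max_reachability(dummy, base_history=[0, 1], edit_options=[[2, 3], [3, 4]], target_item=3))
```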
arXiv Detail & Related papers (2021-06-30T16:18:12Z)
- Confidence-Budget Matching for Sequential Budgeted Learning [69.77435313099366]
We formalize decision-making problems with a querying budget.
We consider multi-armed bandits, linear bandits, and reinforcement learning problems.
We show that CBM based algorithms perform well in the presence of adversity.
arXiv Detail & Related papers (2021-02-05T19:56:31Z)
- Right Decisions from Wrong Predictions: A Mechanism Design Alternative to Individual Calibration [107.15813002403905]
Decision makers often need to rely on imperfect probabilistic forecasts.
We propose a compensation mechanism ensuring that the forecasted utility matches the actually accrued utility.
We demonstrate an application showing how passengers could confidently optimize individual travel plans based on flight delay probabilities.
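A toy rendering of the compensation idea, given as an assumption rather than the paper's actual mechanism: after the outcome is known, a payment equal to the gap between forecasted and realized utility is made, so the utility finally accrued matches the forecasted one.
```python
def utility_with_compensation(forecast_prob, utility_if_delay, utility_if_on_time, flight_delayed):
    """Toy compensation scheme: a payment equal to the gap between forecasted and
    realized utility is added, so the final utility equals the forecasted one and
    the passenger can safely optimize plans against the stated delay probability."""
    forecasted = forecast_prob * utility_if_delay + (1.0 - forecast_prob) * utility_if_on_time
    realized = utility_if_delay if flight_delayed else utility_if_on_time
    compensation = forecasted - realized
    return realized + compensation  # equals `forecasted` by construction

print(utility_with_compensation(0.2, utility_if_delay=-50.0, utility_if_on_time=10.0, flight_delayed=True))
```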
arXiv Detail & Related papers (2020-11-15T08:22:39Z)
- Reliable Off-policy Evaluation for Reinforcement Learning [53.486680020852724]
In a sequential decision-making problem, off-policy evaluation estimates the expected cumulative reward of a target policy.
We propose a novel framework that provides robust and optimistic cumulative reward estimates using one or multiple logged datasets.
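For reference, a standard per-trajectory importance-sampling estimator of a target policy's value is sketched below; it is the baseline technique such work builds on, not the proposed robust/optimistic framework, and the function and policy interfaces are assumptions.
```python
import numpy as np

def importance_sampling_ope(trajectories, target_policy, behavior_policy, gamma=0.99):
    """Per-trajectory importance sampling: each trajectory is a list of
    (state, action, reward) tuples, and its logged return is re-weighted by
    prod_t pi_target(a_t | s_t) / pi_behavior(a_t | s_t)."""
    estimates = []
    for traj in trajectories:
        weight, ret = 1.0, 0.0
        for t, (state, action, reward) in enumerate(traj):
            weight *= target_policy(action, state) / behavior_policy(action, state)
            ret += (gamma ** t) * reward
        estimates.append(weight * ret)
    return float(np.mean(estimates))

# Tiny example with two logged one-step trajectories and fixed action probabilities
logs = [[("s0", 1, 1.0)], [("s0", 0, 0.0)]]
print(importance_sampling_ope(logs, target_policy=lambda a, s: 0.8 if a == 1 else 0.2,
                              behavior_policy=lambda a, s: 0.5))
```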
arXiv Detail & Related papers (2020-11-08T23:16:19Z)