Calibrated Recommendations: Survey and Future Directions
- URL: http://arxiv.org/abs/2507.02643v1
- Date: Thu, 03 Jul 2025 14:08:10 GMT
- Title: Calibrated Recommendations: Survey and Future Directions
- Authors: Diego Corrêa da Silva, Dietmar Jannach
- Abstract summary: We provide a survey on the recent developments in the area of calibrated recommendations. We review existing technical approaches for calibration and provide an overview of empirical and analytical studies on the effectiveness of calibration.
- Score: 7.72244880746496
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The idea of calibrated recommendations is that the properties of the items that are suggested to users should match the distribution of their individual past preferences. Calibration techniques are therefore helpful to ensure that the recommendations provided to a user are not limited to a certain subset of the user's interests. Over the past few years, we have observed an increasing number of research works that use calibration for different purposes, including questions of diversity, biases, and fairness. In this work, we provide a survey on the recent developments in the area of calibrated recommendations. We both review existing technical approaches for calibration and provide an overview of empirical and analytical studies on the effectiveness of calibration for different use cases. Furthermore, we discuss limitations and common challenges when implementing calibration in practice.
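A minimal sketch of the calibration idea described in the abstract, assuming the widely used KL-divergence formulation: compare the category distribution p of a user's past interactions with the distribution q of the recommendation list. All item ids, categories, and helper names below are illustrative, not taken from the surveyed paper.

```python
import math
from collections import Counter


def category_distribution(items, item_categories):
    """Normalized category distribution over a list of item ids."""
    counts = Counter()
    for item in items:
        categories = item_categories[item]
        for category in categories:
            counts[category] += 1.0 / len(categories)
    total = sum(counts.values())
    return {c: v / total for c, v in counts.items()}


def calibration_kl(p, q, alpha=0.01):
    """KL(p || q_tilde), where q_tilde = (1 - alpha) * q + alpha * p avoids log(0)."""
    kl = 0.0
    for category, p_c in p.items():
        q_c = (1 - alpha) * q.get(category, 0.0) + alpha * p_c
        if p_c > 0.0:
            kl += p_c * math.log(p_c / q_c)
    return kl


# Hypothetical example: the user's history is 75% drama / 25% comedy, but the
# recommendation list is pure drama, so the miscalibration score is clearly > 0.
item_categories = {1: ["drama"], 2: ["drama"], 3: ["comedy"], 4: ["drama"]}
history = [1, 2, 2, 3]
recommendations = [1, 2, 4]
p = category_distribution(history, item_categories)
q = category_distribution(recommendations, item_categories)
print(round(calibration_kl(p, q), 3))  # ~0.94 with these toy numbers
```

A score of 0 means the recommendation list mirrors the user's historical interest proportions; larger values indicate that some interests are over- or under-served, which is exactly what calibration-aware reranking tries to reduce.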
Related papers
- Rethinking Early Stopping: Refine, Then Calibrate [49.966899634962374]
We present a novel variational formulation of the calibration-refinement decomposition. We provide theoretical and empirical evidence that calibration and refinement errors are not minimized simultaneously during training.
arXiv Detail & Related papers (2025-01-31T15:03:54Z) - Optimizing Estimators of Squared Calibration Errors in Classification [2.3020018305241337]
We propose a mean-squared error-based risk that enables the comparison and optimization of estimators of squared calibration errors. Our approach advocates for a training-validation-testing pipeline when estimating a calibration error.
arXiv Detail & Related papers (2024-10-09T15:58:06Z) - Calibration-Disentangled Learning and Relevance-Prioritized Reranking for Calibrated Sequential Recommendation [18.913912876509187]
Calibrated recommendation aims to maintain personalized proportions of categories within recommendations.
Previous methods typically leverage reranking algorithms to calibrate recommendations after training a model.
We propose LeapRec, a novel approach for the calibrated sequential recommendation.
arXiv Detail & Related papers (2024-08-04T22:23:09Z) - Beyond Static Calibration: The Impact of User Preference Dynamics on Calibrated Recommendation [3.324986723090369]
Calibration in recommender systems is an important performance criterion.
Standard methods for mitigating miscalibration typically assume that user preference profiles are static.
We conjecture that this approach can lead to recommendations that, while appearing calibrated, in fact distort users' true preferences.
arXiv Detail & Related papers (2024-05-16T16:33:34Z) - Calibration by Distribution Matching: Trainable Kernel Calibration Metrics [56.629245030893685]
We introduce kernel-based calibration metrics that unify and generalize popular forms of calibration for both classification and regression.
These metrics admit differentiable sample estimates, making it easy to incorporate a calibration objective into empirical risk minimization.
We provide intuitive mechanisms to tailor calibration metrics to a decision task, and enforce accurate loss estimation and no-regret decisions.
arXiv Detail & Related papers (2023-10-31T06:19:40Z) - Calibration of Neural Networks [77.34726150561087]
This paper presents a survey of confidence calibration problems in the context of neural networks.
We analyze the problem statement, calibration definitions, and different approaches to evaluation.
Empirical experiments cover various datasets and models, comparing calibration methods according to different criteria.
arXiv Detail & Related papers (2023-03-19T20:27:51Z) - On Calibrating Semantic Segmentation Models: Analyses and An Algorithm [51.85289816613351]
We study the problem of semantic segmentation calibration.
Model capacity, crop size, multi-scale testing, and prediction correctness have an impact on calibration.
We propose a simple, unifying, and effective approach, namely selective scaling.
arXiv Detail & Related papers (2022-12-22T22:05:16Z) - Classifier Calibration: How to assess and improve predicted class probabilities: a survey [10.587567878098444]
A well-calibrated classifier correctly quantifies the level of uncertainty or confidence associated with its instance-wise predictions.
This is essential for critical applications, optimal decision making, cost-sensitive classification, and for some types of context change.
arXiv Detail & Related papers (2021-12-20T03:50:55Z) - Estimating Expected Calibration Errors [1.52292571922932]
Uncertainty in probabilistic predictions is a key concern when models are used to support human decision making.
Most models are not intrinsically well calibrated, meaning that their decision scores are not consistent with posterior probabilities.
We build an empirical procedure to quantify the quality of $ECE$ estimators, and use it to decide which estimator should be used in practice for different settings (a minimal binned-ECE sketch appears after this list).
arXiv Detail & Related papers (2021-09-08T08:00:23Z) - Unsupervised Calibration under Covariate Shift [92.02278658443166]
We introduce the problem of calibration under domain shift and propose an importance sampling based approach to address it.
We evaluate and discuss the efficacy of our method on both real-world datasets and synthetic datasets.
arXiv Detail & Related papers (2020-06-29T21:50:07Z) - Calibration of Neural Networks using Splines [51.42640515410253]
Measuring calibration error amounts to comparing two empirical distributions.
We introduce a binning-free calibration measure inspired by the classical Kolmogorov-Smirnov (KS) statistical test.
Our method consistently outperforms existing methods on KS error as well as other commonly used calibration measures.
arXiv Detail & Related papers (2020-06-23T07:18:05Z)
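The expected calibration error ($ECE$) discussed in the "Estimating Expected Calibration Errors" entry above is typically estimated by binning predictions by confidence and averaging the per-bin gap between accuracy and mean confidence. The sketch below is an illustrative version of that standard binned estimator under assumed toy inputs, not the specific estimators compared in the papers listed here.

```python
import numpy as np


def expected_calibration_error(confidences, correct, n_bins=15):
    """Weighted average over equal-width bins of |accuracy - mean confidence|."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)  # confidences assumed in (0, 1]
        if mask.any():
            weight = mask.mean()  # fraction of samples falling in this bin
            ece += weight * abs(correct[mask].mean() - confidences[mask].mean())
    return ece


# Hypothetical example: confident but frequently wrong predictions yield a large ECE.
conf = [0.9, 0.8, 0.95, 0.7, 0.6]
hit = [1, 0, 1, 0, 1]
print(round(expected_calibration_error(conf, hit, n_bins=5), 3))  # 0.41 here
```

The choice of binning scheme (number of bins, equal-width vs. equal-mass) changes the estimate, which is precisely the kind of estimator-quality question the entry above investigates.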
This list is automatically generated from the titles and abstracts of the papers in this site.