Confidence-aware Fine-tuning of Sequential Recommendation Systems via Conformal Prediction
- URL: http://arxiv.org/abs/2402.08976v3
- Date: Thu, 20 Feb 2025 04:50:17 GMT
- Title: Confidence-aware Fine-tuning of Sequential Recommendation Systems via Conformal Prediction
- Authors: Chen Wang, Fangxin Wang, Ruocheng Guo, Yueqing Liang, Philip S. Yu
- Abstract summary: In Sequential Recommendation Systems (SRecsys), traditional training approaches that rely on Cross-Entropy (CE) loss often prioritize accuracy but fail to align well with user satisfaction metrics.
We propose CPFT, a novel fine-tuning framework that integrates Conformal Prediction (CP)-based losses with CE loss to optimize accuracy alongside confidence that better aligns with widely used top-$K$ metrics.
- Score: 46.76846936581471
- Abstract: In Sequential Recommendation Systems (SRecsys), traditional training approaches that rely on Cross-Entropy (CE) loss often prioritize accuracy but fail to align well with user satisfaction metrics. CE loss focuses on maximizing the confidence of the ground truth item, which is challenging to achieve universally across all users and sessions. It also overlooks the practical acceptability of ranking the ground truth item within the top-$K$ positions, a common metric in SRecsys. To address this limitation, we propose CPFT, a novel fine-tuning framework that integrates Conformal Prediction (CP)-based losses with CE loss to optimize accuracy alongside confidence that better aligns with widely used top-$K$ metrics. CPFT embeds CP principles into the training loop using differentiable proxy losses and computationally efficient calibration strategies, enabling the generation of high-confidence prediction sets. These sets focus on items with high relevance while maintaining robust coverage guarantees. Extensive experiments on five real-world datasets and four distinct sequential models demonstrate that CPFT improves precision metrics and confidence calibration. Our results highlight the importance of confidence-aware fine-tuning in delivering accurate, trustworthy recommendations that enhance user satisfaction.
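As a rough illustration of the idea, the sketch below combines CE loss with a differentiable proxy for prediction-set size; the loss form, the weight `lam`, the smoothing `temp`, and the threshold handling are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch of a confidence-aware fine-tuning objective in the spirit of
# CPFT: cross-entropy plus a differentiable proxy for the conformal prediction
# set size. The weighting (lam) and smooth indicator (temp) are illustrative.
import torch
import torch.nn.functional as F

def cpft_style_loss(logits, targets, tau, lam=0.1, temp=0.1):
    """logits: (B, N) item scores; targets: (B,) ground-truth item ids;
    tau: a score threshold calibrated on held-out data (e.g. a CP quantile)."""
    ce = F.cross_entropy(logits, targets)
    probs = logits.softmax(dim=-1)
    # Soft set membership: sigmoid((p - tau) / temp) smoothly approximates the
    # indicator 1[p >= tau], so the expected set size stays differentiable.
    soft_set = torch.sigmoid((probs - tau) / temp)
    avg_set_size = soft_set.sum(dim=-1).mean()
    return ce + lam * avg_set_size
```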
Related papers
- Uncertainty-Aware Online Extrinsic Calibration: A Conformal Prediction Approach [4.683612295430957]
We present the first approach to integrate uncertainty awareness into online calibration, combining Monte Carlo Dropout with Conformal Prediction.
We demonstrate effectiveness across different visual sensor types, measuring performance with adapted metrics to evaluate the efficiency and reliability of the intervals.
We offer insights into the reliability of calibration estimates, which can greatly improve the robustness of sensor fusion in dynamic environments.
arXiv Detail & Related papers (2025-01-12T17:24:51Z)
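For intuition, a minimal way to combine MC Dropout predictions with split conformal intervals might look like the following; the residual-based score and the aggregation choices here are assumptions, not the paper's exact construction.

```python
# Hedged sketch: wrap Monte Carlo Dropout predictions in a split conformal
# interval. The residual score is a common textbook choice and may differ
# from the paper's nonconformity measure.
import numpy as np

def conformal_mc_interval(sample_preds, cal_residuals, alpha=0.1):
    """sample_preds: (S,) MC Dropout draws for one test input;
    cal_residuals: |y - mean prediction| on a held-out calibration set."""
    mu = sample_preds.mean()
    n = len(cal_residuals)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)  # finite-sample correction
    q = np.quantile(cal_residuals, level, method="higher")
    return mu - q, mu + q  # interval covering y with probability >= 1 - alpha
```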
- Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning [53.42244686183879]
Conformal prediction provides model-agnostic and distribution-free uncertainty quantification.
Yet, conformal prediction is not reliable under poisoning attacks where adversaries manipulate both training and calibration data.
We propose reliable prediction sets (RPS): the first efficient method for constructing conformal prediction sets with provable reliability guarantees under poisoning.
arXiv Detail & Related papers (2024-10-13T15:37:11Z)
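For reference, standard split conformal prediction, the model-agnostic construction this entry builds on, fits in a few lines; the sketch below is the textbook version, not the poisoning-robust RPS method itself.

```python
# Textbook split conformal prediction for classification; shown for context,
# this is NOT the poisoning-robust RPS construction itself.
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """cal_probs: (n, K) calibration probabilities; cal_labels: (n,);
    test_probs: (m, K). Returns a boolean (m, K) set-membership matrix."""
    n = len(cal_labels)
    # Nonconformity score: 1 - probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(scores, level, method="higher")
    # Include every class whose nonconformity stays below the threshold.
    return (1.0 - test_probs) <= q
```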
arXiv Detail & Related papers (2024-10-13T15:37:11Z) - End-to-End Conformal Calibration for Optimization Under Uncertainty [32.844953018302874]
This paper develops an end-to-end framework that learns uncertainty estimates directly for downstream optimization under uncertainty.
In addition, we propose to represent arbitrary convex uncertainty sets with partially convex neural networks.
Our approach consistently improves upon the two-stage estimate-then-optimize baseline.
arXiv Detail & Related papers (2024-09-30T17:38:27Z)
- Beyond Conformal Predictors: Adaptive Conformal Inference with Confidence Predictors [0.0]
Conformal prediction requires exchangeable data to ensure valid prediction sets at a user-specified significance level.
Adaptive conformal inference (ACI) was introduced to address this limitation.
We show that ACI does not require the use of conformal predictors; instead, it can be implemented with the more general confidence predictors.
arXiv Detail & Related papers (2024-09-23T21:02:33Z)
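The ACI update this entry generalizes is a one-line online correction of the significance level (Gibbs and Candès, 2021); a minimal sketch, where the step size gamma is a tuning choice:

```python
# Sketch of the standard ACI significance-level update. err_t is 1 if the
# last prediction set missed the true label, else 0; gamma is a step size.
def aci_update(alpha_t, target_alpha, err_t, gamma=0.01):
    # A miss pushes alpha_t down (wider future sets); coverage pushes it up
    # (tighter sets), tracking the target miscoverage rate online.
    return alpha_t + gamma * (target_alpha - err_t)
```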
- The Penalized Inverse Probability Measure for Conformal Classification [0.5172964916120902]
The work introduces the Penalized Inverse Probability (PIP) nonconformity score, and its regularized version RePIP, which allow the joint optimization of both efficiency and informativeness.
The work shows how PIP-based conformal classifiers exhibit precisely the desired behavior in comparison with other nonconformity measures and strike a good balance between informativeness and efficiency.
arXiv Detail & Related papers (2024-06-13T07:37:16Z)
- Aligning GPTRec with Beyond-Accuracy Goals with Reinforcement Learning [67.71952251641545]
GPTRec is an alternative to the Top-K model for item-by-item recommendations.
Our experiments on two datasets show that GPTRec's Next-K generation approach offers a better tradeoff between accuracy and secondary metrics than classic greedy re-ranking techniques.
arXiv Detail & Related papers (2024-03-07T19:47:48Z)
- Binary Classification with Confidence Difference [100.08818204756093]
This paper delves into a novel weakly supervised binary classification problem called confidence-difference (ConfDiff) classification.
We propose a risk-consistent approach to tackle this problem and show that its estimation error bound achieves the optimal convergence rate.
We also introduce a risk correction approach to mitigate overfitting problems, whose consistency and convergence rate are also proven.
arXiv Detail & Related papers (2023-10-09T11:44:50Z)
- Federated Conformal Predictors for Distributed Uncertainty Quantification [83.50609351513886]
Conformal prediction is emerging as a popular paradigm for providing rigorous uncertainty quantification in machine learning.
In this paper, we extend conformal prediction to the federated learning setting.
We propose a weaker notion of partial exchangeability, better suited to the FL setting, and use it to develop the Federated Conformal Prediction framework.
arXiv Detail & Related papers (2023-05-27T19:57:27Z)
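One way to picture federated calibration is a quantile-of-quantiles over per-client scores; the sketch below only illustrates that idea and omits the correction terms the actual FCP estimator uses.

```python
# Hedged sketch: aggregate per-client conformal quantiles without pooling raw
# calibration data. The real FCP estimator adds finite-sample corrections.
import numpy as np

def federated_threshold(client_scores, alpha=0.1):
    """client_scores: list of 1-D arrays of local nonconformity scores."""
    local_q = [np.quantile(s, 1 - alpha, method="higher") for s in client_scores]
    # The server only sees one quantile per client, then aggregates them.
    return np.quantile(local_q, 1 - alpha, method="higher")
```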
- Fast Entropy-Based Methods of Word-Level Confidence Estimation for End-To-End Automatic Speech Recognition [86.21889574126878]
We show how per-frame entropy values can be normalized and aggregated to obtain a confidence measure per unit and per word.
We evaluate the proposed confidence measures on LibriSpeech test sets, and show that they are up to 2 and 4 times better than confidence estimation based on the maximum per-frame probability.
arXiv Detail & Related papers (2022-12-16T20:27:40Z)
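A minimal version of the normalize-then-aggregate step could look like this; the exact normalization and the choice of aggregation function in the paper may differ.

```python
# Hedged sketch: normalized per-frame entropy as a confidence score, then
# aggregated over each word's frames. Aggregating by min is one common choice.
import numpy as np

def frame_confidence(probs):
    """probs: (T, V) per-frame posteriors over the vocabulary."""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=-1)
    return 1.0 - entropy / np.log(probs.shape[-1])  # 1 = peaked, 0 = uniform

def word_confidence(frame_conf, word_spans):
    """word_spans: list of (start, end) frame indices, one pair per word."""
    return [frame_conf[s:e].min() for s, e in word_spans]
```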
- Learning Optimal Conformal Classifiers [32.68483191509137]
Conformal prediction (CP) is used to predict confidence sets containing the true class with a user-specified probability.
This paper explores strategies to differentiate through CP during training, with the goal of training the model and the conformal wrapper end-to-end.
We show that conformal training (ConfTr) outperforms state-of-the-art CP methods for classification by reducing the average confidence set size.
arXiv Detail & Related papers (2021-10-18T11:25:33Z)
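To make the "differentiate through CP" idea concrete, a stripped-down version of conformal training simulates calibration on each mini-batch and penalizes a smoothed set size; details such as the batch split and the smoothing are simplifications of the actual ConfTr method.

```python
# Hedged, stripped-down sketch of the conformal-training idea: calibrate a
# threshold on half of the batch, then penalize the smoothed prediction-set
# size on the other half. ConfTr itself uses smoother sorting/quantiles.
import torch

def conftr_style_size_loss(logits, targets, alpha=0.1, temp=0.1):
    probs = logits.softmax(dim=-1)
    b = probs.size(0)
    cal, pred = probs[: b // 2], probs[b // 2 :]
    # True-class probabilities on the calibration half define the threshold.
    cal_scores = cal.gather(1, targets[: b // 2, None]).squeeze(1)
    tau = torch.quantile(cal_scores, alpha)
    # Smooth set membership keeps the average set size differentiable.
    soft_set = torch.sigmoid((pred - tau) / temp)
    return soft_set.sum(dim=-1).mean()
```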