Mitigating Popularity Bias in Counterfactual Explanations using Large Language Models
- URL: http://arxiv.org/abs/2508.08946v1
- Date: Tue, 12 Aug 2025 13:57:36 GMT
- Title: Mitigating Popularity Bias in Counterfactual Explanations using Large Language Models
- Authors: Arjan Hasami, Masoud Mansoury
- Abstract summary: We propose a pre-processing step that leverages large language models to filter out-of-character history items. We find that it creates counterfactuals that are more closely aligned with each user's popularity preferences than ACCENT alone.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Counterfactual explanations (CFEs) offer a tangible and actionable way to explain recommendations by showing users a "what-if" scenario that demonstrates how small changes in their history would alter the system's output. However, existing CFE methods are susceptible to bias, generating explanations that might misalign with the user's actual preferences. In this paper, we propose a pre-processing step that leverages large language models to filter out-of-character history items before generating an explanation. In experiments on two public datasets, we focus on popularity bias and apply our approach to ACCENT, a neural CFE framework. We find that it creates counterfactuals that are more closely aligned with each user's popularity preferences than ACCENT alone.
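The proposed pipeline — judge each history item's consistency with the user's overall taste, drop out-of-character items, then hand the filtered history to the counterfactual generator — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the LLM judgment is stubbed with a popularity-deviation heuristic, and all names (`filter_history`, the tolerance value) are hypothetical.

```python
# Sketch of the pre-processing step: drop "out-of-character" history items
# before counterfactual generation. The LLM call is replaced here by a
# popularity-deviation heuristic; in the paper, an LLM makes this judgment.

def is_in_character(item_popularity, user_mean_popularity, tolerance=0.25):
    """Stand-in for the LLM judgment: keep items whose popularity is
    close to the user's typical popularity preference."""
    return abs(item_popularity - user_mean_popularity) <= tolerance

def filter_history(history, popularity, tolerance=0.25):
    """history: list of item ids; popularity: dict item -> score in [0, 1]."""
    mean_pop = sum(popularity[i] for i in history) / len(history)
    return [i for i in history if is_in_character(popularity[i], mean_pop, tolerance)]

popularity = {"a": 0.9, "b": 0.85, "c": 0.1, "d": 0.8}
# "c" is far below this user's typical popularity level and is filtered out,
# so the downstream explainer (e.g. ACCENT) never blames it for the recommendation.
print(filter_history(["a", "b", "c", "d"], popularity))  # ['a', 'b', 'd']
```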
Related papers
- Beyond Factual Correctness: Mitigating Preference-Inconsistent Explanations in Explainable Recommendation [2.379349092029744]
LLM-based explainable recommenders can produce explanations that are factually correct, yet still justify items using attributes that conflict with a user's historical preferences. We formalize this failure mode and propose PURE, a preference-aware reasoning framework following a select-then-generate paradigm. PURE selects a compact set of multi-hop item-centric reasoning paths that are both factually grounded and aligned with user preference structure, guided by user intent, specificity, and diversity to suppress generic, weakly personalized evidence.
arXiv Detail & Related papers (2026-03-03T15:24:51Z)
- GenCI: Generative Modeling of User Interest Shift via Cohort-based Intent Learning for CTR Prediction [84.0125708499372]
We propose a generative user intent framework to model user preferences for click-through rate (CTR) prediction. The framework first employs a generative model, trained with a next-item prediction objective, to proactively produce candidate interest cohorts. A hierarchical candidate-aware network then injects this rich contextual signal into the ranking stage, refining the cohorts with cross-attention to align with both user history and the target item.
arXiv Detail & Related papers (2026-01-26T08:15:04Z)
- Addressing Personalized Bias for Unbiased Learning to Rank [56.663619153713434]
Unbiased learning to rank (ULTR) aims to learn unbiased ranking models from biased user behavior logs. We propose a novel user-aware inverse-propensity-score estimator for learning-to-rank objectives.
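The core idea behind inverse-propensity-score (IPS) estimators — up-weight each observed click by the inverse of its examination probability so the logged loss is unbiased in expectation — can be sketched as below. The paper's user-aware propensity model is not public; here the propensities are supplied directly, which is an assumption for illustration.

```python
# Minimal sketch of an IPS-weighted loss for learning to rank.
# clicks[i] is 1 if item i was clicked; propensities[i] is the estimated
# probability that position i was examined at all.

def ips_loss(losses, clicks, propensities):
    """Unbiased (in expectation) estimate of the full-information loss:
    each clicked item's loss is up-weighted by 1 / propensity."""
    return sum(c * l / p for l, c, p in zip(losses, clicks, propensities)) / len(losses)

# A click at a rarely examined position (propensity 0.25) counts four times
# as much as one at an always-examined position would.
print(ips_loss([0.5, 0.2, 0.8], [1, 0, 1], [0.5, 0.9, 0.25]))  # 1.4
```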
arXiv Detail & Related papers (2025-08-28T14:01:31Z)
- DiffusionGS: Generative Search with Query Conditioned Diffusion in Kuaishou [20.440076123934684]
We propose DiffusionGS, a novel and scalable approach powered by generative models. We formulate interest extraction as a conditional denoising task, where the user's query guides a conditional diffusion process. We propose the User-aware Denoising Layer (UDL) to incorporate user-specific profiles into the optimization of attention distribution on the user's past actions.
arXiv Detail & Related papers (2025-08-25T07:46:51Z)
- Addressing Correlated Latent Exogenous Variables in Debiased Recommender Systems [3.082385853653964]
Recommendation systems (RS) aim to provide personalized content, but they face a challenge in unbiased learning due to selection bias. This paper proposes a likelihood-based learning algorithm for training a prediction model.
arXiv Detail & Related papers (2025-06-09T07:50:21Z)
- Variational Bayesian Personalized Ranking [39.24591060825056]
Variational BPR is a novel and easily implementable learning objective that integrates likelihood optimization, noise reduction, and popularity debiasing. We introduce an attention-based latent interest prototype contrastive mechanism, replacing instance-level contrastive learning, to effectively reduce noise from problematic samples. Empirically, we demonstrate the effectiveness of Variational BPR on popular backbone recommendation models.
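For context, Variational BPR builds on the standard BPR objective, which maximizes the probability that an interacted (positive) item scores higher than a sampled non-interacted (negative) one. A minimal sketch of that base pairwise loss — not the variational extension itself:

```python
import math

def bpr_loss(pos_score, neg_score):
    """Standard BPR pairwise loss: -log sigmoid(pos_score - neg_score).
    Small when the positive item clearly outranks the negative one."""
    return -math.log(1.0 / (1.0 + math.exp(-(pos_score - neg_score))))

# A confident correct ranking yields a small loss; a tie yields log(2).
print(bpr_loss(2.0, 0.0))  # ~0.1269
print(bpr_loss(0.0, 0.0))  # ~0.6931 (= log 2)
```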
arXiv Detail & Related papers (2025-03-14T04:22:01Z)
- Preference Discerning with LLM-Enhanced Generative Retrieval [28.309905847867178]
We propose a new paradigm, which we term preference discerning. In preference discerning, we explicitly condition a generative sequential recommendation system on user preferences within its context. We generate user preferences using Large Language Models (LLMs) based on user reviews and item-specific data.
arXiv Detail & Related papers (2024-12-11T18:26:55Z)
- ComPO: Community Preferences for Language Model Personalization [122.54846260663922]
ComPO is a method to personalize preference optimization in language models.
We collect and release ComPRed, a question answering dataset with community-level preferences from Reddit.
arXiv Detail & Related papers (2024-10-21T14:02:40Z)
- Bridging User Dynamics: Transforming Sequential Recommendations with Schrödinger Bridge and Diffusion Models [49.458914600467324]
We introduce the Schrödinger Bridge into diffusion-based sequential recommendation models, creating the SdifRec model.
We also propose an extended version of SdifRec called con-SdifRec, which utilizes user clustering information as a guiding condition.
arXiv Detail & Related papers (2024-08-30T09:10:38Z)
- Unlocking the Potential of Large Language Models for Explainable Recommendations [55.29843710657637]
It remains uncertain what impact replacing the explanation generator with the recently emerging large language models (LLMs) would have.
In this study, we propose LLMXRec, a simple yet effective two-stage explainable recommendation framework.
By adopting several key fine-tuning techniques, controllable and fluent explanations can be well generated.
arXiv Detail & Related papers (2023-12-25T09:09:54Z)
- Learning to Counterfactually Explain Recommendations [14.938252589829673]
We propose a learning-based framework to generate counterfactual explanations.
To generate an explanation, we find the history subset predicted by the surrogate model that is most likely to remove the recommendation.
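The subset search described above can be sketched greedily: rank history items by the surrogate model's predicted score drop and take the smallest prefix whose cumulative drop would remove the recommendation. The `drops` dictionary and the additive-threshold assumption are illustrative stand-ins for the surrogate model, not the paper's actual procedure.

```python
# Greedy sketch of counterfactual subset search over a user's history.
# drops: item -> predicted drop in the recommended item's score if the
# history item is removed (stand-in for the learned surrogate model).

def minimal_counterfactual(drops, threshold):
    """Return the smallest greedy subset whose cumulative predicted drop
    reaches the threshold (i.e. flips the recommendation), else None."""
    subset, total = [], 0.0
    for item, drop in sorted(drops.items(), key=lambda kv: -kv[1]):
        subset.append(item)
        total += drop
        if total >= threshold:
            return subset
    return None

# Removing "b" and "c" together is predicted to flip the recommendation.
print(minimal_counterfactual({"a": 0.1, "b": 0.5, "c": 0.3}, 0.7))  # ['b', 'c']
```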
arXiv Detail & Related papers (2022-11-17T18:21:21Z)
- Probabilistic and Variational Recommendation Denoising [56.879165033014026]
Learning from implicit feedback is one of the most common cases in the application of recommender systems.
We propose probabilistic and variational recommendation denoising for implicit feedback.
We employ the proposed DPI and DVAE on four state-of-the-art recommendation models and conduct experiments on three datasets.
arXiv Detail & Related papers (2021-05-20T08:59:44Z)
- Counterfactual Explanations for Neural Recommenders [10.880181451789266]
We propose ACCENT, the first general framework for finding counterfactual explanations for neural recommenders.
We use ACCENT to generate counterfactual explanations for two popular neural models.
arXiv Detail & Related papers (2021-05-11T13:16:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.