A Plug-and-play Model-agnostic Embedding Enhancement Approach for Explainable Recommendation
- URL: http://arxiv.org/abs/2509.03130v1
- Date: Wed, 03 Sep 2025 08:32:20 GMT
- Title: A Plug-and-play Model-agnostic Embedding Enhancement Approach for Explainable Recommendation
- Authors: Yunqi Mi, Boyang Yan, Guoshuai Zhao, Jialie Shen, Xueming Qian
- Abstract summary: RVRec is a plug-and-play model-agnostic embedding enhancement approach. It can improve both the personalization and explainability of existing systems.
- Score: 35.78577182946339
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing multimedia recommender systems suggest media items, such as games and movies, by evaluating similarities. To enhance the semantics and explainability of embeddings, it is common practice to incorporate additional information (e.g., interactions, contexts, popularity). However, without systematic consideration of representativeness and value, the utility and explainability of the embeddings drop drastically. Hence, we introduce RVRec, a plug-and-play model-agnostic embedding enhancement approach that can improve both the personalization and explainability of existing systems. Specifically, we propose a probability-based embedding optimization method that uses a contrastive loss based on the negative 2-Wasserstein distance to enhance the representativeness of the embeddings. In addition, we introduce a reweighting method based on a multivariate Shapley value strategy to evaluate and explore the value of interactions and embeddings. Extensive experiments on multiple backbone recommenders and real-world datasets show that RVRec improves the personalization and explainability of existing recommenders, outperforming state-of-the-art baselines.
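The contrastive objective described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each embedding is a diagonal Gaussian (mean and standard-deviation vectors), for which the squared 2-Wasserstein distance has a simple closed form, and it plugs the negative distance into an InfoNCE-style loss. All function names and the temperature parameter `tau` are illustrative assumptions.

```python
import numpy as np

def w2_sq_diag_gauss(mu1, sigma1, mu2, sigma2):
    """Squared 2-Wasserstein distance between two Gaussians with
    diagonal covariance: ||mu1 - mu2||^2 + ||sigma1 - sigma2||^2."""
    return float(np.sum((mu1 - mu2) ** 2) + np.sum((sigma1 - sigma2) ** 2))

def contrastive_w2_loss(anchor, positive, negatives, tau=1.0):
    """InfoNCE-style contrastive loss where the similarity score is the
    negative squared 2-Wasserstein distance; the positive pair sits at
    index 0 of the logits. Each embedding is a (mu, sigma) tuple."""
    sims = [-w2_sq_diag_gauss(*anchor, *positive)]
    sims += [-w2_sq_diag_gauss(*anchor, *neg) for neg in negatives]
    logits = np.array(sims) / tau
    # numerically stable log-softmax; loss = -log p(positive)
    m = logits.max()
    log_z = m + np.log(np.sum(np.exp(logits - m)))
    return float(log_z - logits[0])
```

Minimizing this loss pulls an anchor's distribution toward its positive (small Wasserstein distance) and pushes it away from negatives, which is one way to read the abstract's "probability-based embedding optimization."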
Related papers
- Balancing Semantic Relevance and Engagement in Related Video Recommendations [21.2575040646784]
Related video recommendations commonly use collaborative filtering (CF) driven by co-engagement signals. This paper introduces a novel multi-objective retrieval framework to balance semantic relevance and user engagement.
arXiv Detail & Related papers (2025-07-12T21:04:25Z) - Interpretable Reward Modeling with Active Concept Bottlenecks [54.00085739303773]
We introduce Concept Bottleneck Reward Models (CB-RM), a reward modeling framework that enables interpretable preference learning. Unlike standard RLHF methods that rely on opaque reward functions, CB-RM decomposes reward prediction into human-interpretable concepts. We formalize an active learning strategy that dynamically acquires the most informative concept labels.
arXiv Detail & Related papers (2025-07-07T06:26:04Z) - Enhancing Recommendation Explanations through User-Centric Refinement [7.640281193938638]
We propose a novel paradigm that refines initial explanations generated by existing explainable recommender models. Specifically, we introduce a multi-agent collaborative refinement framework based on large language models.
arXiv Detail & Related papers (2025-02-17T12:08:18Z) - Interactive Visualization Recommendation with Hier-SUCB [52.11209329270573]
We propose an interactive personalized visualization recommendation (PVisRec) system that learns on user feedback from previous interactions. For more interactive and accurate recommendations, we propose Hier-SUCB, a contextual semi-bandit in the PVisRec setting.
arXiv Detail & Related papers (2025-02-05T17:14:45Z) - Learning Recommender Systems with Soft Target: A Decoupled Perspective [49.83787742587449]
We propose a novel decoupled soft label optimization framework to consider the objectives as two aspects by leveraging soft labels.
We present a sensible soft-label generation algorithm that models a label propagation algorithm to explore users' latent interests in unobserved feedback via neighbors.
arXiv Detail & Related papers (2024-10-09T04:20:15Z) - Revisiting Reciprocal Recommender Systems: Metrics, Formulation, and Method [60.364834418531366]
We propose five new evaluation metrics that comprehensively and accurately assess the performance of RRS.
We formulate the RRS from a causal perspective, formulating recommendations as bilateral interventions.
We introduce a reranking strategy to maximize matching outcomes, as measured by the proposed metrics.
arXiv Detail & Related papers (2024-08-19T07:21:02Z) - Sustainable techniques to improve Data Quality for training image-based explanatory models for Recommender Systems [2.9748898344267785]
We seek to provide better visual explanations for Recommender Systems (RS), aligning with the principles of Responsible AI. We develop three novel strategies that focus on training Data Quality. Integrating these strategies into three state-of-the-art visual-based RS explainability models improves their performance in relevant ranking metrics by 5% without penalizing their practical long-term sustainability.
arXiv Detail & Related papers (2024-07-09T10:40:31Z) - Combining Embedding-Based and Semantic-Based Models for Post-hoc Explanations in Recommender Systems [0.0]
This paper presents an approach that combines embedding-based and semantic-based models to generate post-hoc explanations in recommender systems.
The framework we defined aims at producing meaningful and easy-to-understand explanations, enhancing user trust and satisfaction, and potentially promoting the adoption of recommender systems across the e-commerce sector.
arXiv Detail & Related papers (2024-01-09T10:24:46Z) - From Intrinsic to Counterfactual: On the Explainability of Contextualized Recommender Systems [43.93801836660617]
We show that by utilizing the contextual features (e.g., item reviews from users), we can design a series of explainable recommender systems.
We propose three types of explainable recommendation strategies with gradual change of model transparency: whitebox, graybox, and blackbox.
Our model achieves highly competitive ranking performance, and generates accurate and effective explanations in terms of numerous quantitative metrics and qualitative visualizations.
arXiv Detail & Related papers (2021-10-28T01:54:04Z) - Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z) - Explainable Recommendation via Interpretable Feature Mapping and Evaluation of Explainability [22.58823484394866]
We present a novel feature mapping approach that maps uninterpretable general features onto interpretable aspect features.
Experimental results demonstrate strong performance in both recommendation and explanation, eliminating the need for metadata.
arXiv Detail & Related papers (2020-07-12T23:49:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.