User-Controllable Recommendation via Counterfactual Retrospective and
Prospective Explanations
- URL: http://arxiv.org/abs/2308.00894v1
- Date: Wed, 2 Aug 2023 01:13:36 GMT
- Title: User-Controllable Recommendation via Counterfactual Retrospective and
Prospective Explanations
- Authors: Juntao Tan, Yingqiang Ge, Yan Zhu, Yinglong Xia, Jiebo Luo, Jianchao
Ji, Yongfeng Zhang
- Abstract summary: We present a user-controllable recommender system that seamlessly integrates explainability and controllability.
By providing both retrospective and prospective explanations through counterfactual reasoning, users can customize their control over the system.
- Score: 96.45414741693119
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern recommender systems utilize users' historical behaviors to generate
personalized recommendations. However, these systems often lack user
controllability, leading to diminished user satisfaction and trust in the
systems. Acknowledging the recent advancements in explainable recommender
systems that enhance users' understanding of recommendation mechanisms, we
propose leveraging these advancements to improve user controllability. In this
paper, we present a user-controllable recommender system that seamlessly
integrates explainability and controllability within a unified framework. By
providing both retrospective and prospective explanations through
counterfactual reasoning, users can customize their control over the system by
interacting with these explanations.
Furthermore, we introduce and assess two attributes of controllability in
recommendation systems: the complexity of controllability and the accuracy of
controllability. Experimental evaluations on MovieLens and Yelp datasets
substantiate the effectiveness of our proposed framework. Additionally, our
experiments demonstrate that offering users control options can potentially
enhance future recommendation accuracy. Source code and data are
available at https://github.com/chrisjtan/ucr.
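The paper's central mechanism, counterfactual reasoning over a user's interaction history, can be illustrated with a minimal sketch. The code below is not the authors' implementation (that lives in the linked repository); it assumes a toy embedding-based scorer and hypothetical helper names, and shows the two explanation types: a retrospective explanation searches for a history item whose removal would have flipped a past recommendation, and a prospective explanation previews what would be recommended if the user masks an item.

```python
import numpy as np

# Toy embedding-based recommender, assumed purely for illustration:
# user profile = mean of history item embeddings,
# score(user, item) = dot(profile, item_embedding).
rng = np.random.default_rng(0)
item_emb = rng.normal(size=(100, 16))  # 100 items, 16-dim embeddings

def score(history, target):
    """Score a target item given a list of history item ids."""
    profile = item_emb[list(history)].mean(axis=0)
    return float(profile @ item_emb[target])

def retrospective_explanation(history, target, threshold):
    """Greedy counterfactual search: which single history item, if removed,
    would drop the target's score below the recommendation threshold?
    Informally: 'you were recommended X because you interacted with Y'."""
    for h in history:
        counterfactual = [i for i in history if i != h]
        if not counterfactual:
            continue  # cannot remove the only history item
        if score(counterfactual, target) < threshold:
            return h  # removing h flips the past recommendation
    return None

def prospective_explanation(history, masked_item, k=3):
    """Preview: if the user masks `masked_item`, what would be recommended?"""
    counterfactual = [i for i in history if i != masked_item]
    candidates = [j for j in range(len(item_emb)) if j not in history]
    return sorted(candidates,
                  key=lambda j: score(counterfactual, j),
                  reverse=True)[:k]

history = [3, 17, 42, 58]
print(retrospective_explanation(history, target=7, threshold=0.5))
print(prospective_explanation(history, masked_item=42))
```

In this toy framing, the paper's "complexity of controllability" roughly corresponds to how many history items a user must mask to achieve a desired change, and its "accuracy of controllability" to whether the previewed recommendations actually hold once the model is re-queried; the sketch only loosely mirrors both.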
Related papers
- A Unified Causal Framework for Auditing Recommender Systems for Ethical Concerns [40.793466500324904]
We view recommender system auditing from a causal lens and provide a general recipe for defining auditing metrics.
Under this general causal auditing framework, we categorize existing auditing metrics and identify gaps in them.
We propose two classes of such metrics: future- and past-reachability, and stability, which measure the ability of a user to influence their own and other users' recommendations.
arXiv Detail & Related papers (2024-09-20T04:37:36Z) - Editable User Profiles for Controllable Text Recommendation [66.00743968792275]
We propose LACE, a novel concept value bottleneck model for controllable text recommendations.
LACE represents each user with a succinct set of human-readable concepts.
It learns personalized representations of the concepts based on user documents.
arXiv Detail & Related papers (2023-04-09T14:52:18Z) - Breaking Feedback Loops in Recommender Systems with Causal Inference [99.22185950608838]
Recent work has shown that feedback loops may compromise recommendation quality and homogenize user behavior.
We propose the Causal Adjustment for Feedback Loops (CAFL), an algorithm that provably breaks feedback loops using causal inference.
We show that CAFL improves recommendation quality when compared to prior correction methods.
arXiv Detail & Related papers (2022-07-04T17:58:39Z) - Online certification of preference-based fairness for personalized
recommender systems [20.875347023588652]
We assess the fairness of personalized recommender systems in the sense of envy-freeness.
We propose an auditing algorithm based on pure exploration and conservative constraints in multi-armed bandits.
arXiv Detail & Related papers (2021-04-29T17:45:27Z) - Improving Conversational Question Answering Systems after Deployment
using Feedback-Weighted Learning [69.42679922160684]
We propose feedback-weighted learning based on importance sampling to improve upon an initial supervised system using binary user feedback.
Our work opens the prospect of exploiting interactions with real users to improve conversational systems after deployment.
arXiv Detail & Related papers (2020-11-01T19:50:34Z) - Soliciting Human-in-the-Loop User Feedback for Interactive Machine
Learning Reduces User Trust and Impressions of Model Accuracy [8.11839312231511]
Mixed-initiative systems allow users to interactively provide feedback to improve system performance.
Our research investigates how the act of providing feedback can affect user understanding of an intelligent system and its accuracy.
arXiv Detail & Related papers (2020-08-28T16:46:41Z) - Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations due to insufficient training data.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
arXiv Detail & Related papers (2020-06-03T05:04:38Z) - Survey for Trust-aware Recommender Systems: A Deep Learning Perspective [48.2733163413522]
It has become critical to embrace trustworthy recommender systems.
This survey provides a systematic summary of three categories of trust-aware recommender systems.
arXiv Detail & Related papers (2020-04-08T02:11:55Z)