User-Controllable Recommendation via Counterfactual Retrospective and
Prospective Explanations
- URL: http://arxiv.org/abs/2308.00894v1
- Date: Wed, 2 Aug 2023 01:13:36 GMT
- Title: User-Controllable Recommendation via Counterfactual Retrospective and
Prospective Explanations
- Authors: Juntao Tan, Yingqiang Ge, Yan Zhu, Yinglong Xia, Jiebo Luo, Jianchao
Ji, Yongfeng Zhang
- Abstract summary: We present a user-controllable recommender system that seamlessly integrates explainability and controllability.
By providing both retrospective and prospective explanations through counterfactual reasoning, users can customize their control over the system.
- Score: 96.45414741693119
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern recommender systems utilize users' historical behaviors to generate
personalized recommendations. However, these systems often lack user
controllability, leading to diminished user satisfaction and trust in the
systems. Acknowledging the recent advancements in explainable recommender
systems that enhance users' understanding of recommendation mechanisms, we
propose leveraging these advancements to improve user controllability. In this
paper, we present a user-controllable recommender system that seamlessly
integrates explainability and controllability within a unified framework. By
providing both retrospective and prospective explanations through
counterfactual reasoning, users can customize their control over the system by
interacting with these explanations.
Furthermore, we introduce and assess two attributes of controllability in
recommendation systems: the complexity of controllability and the accuracy of
controllability. Experimental evaluations on MovieLens and Yelp datasets
substantiate the effectiveness of our proposed framework. Additionally, our
experiments demonstrate that offering users control options can potentially
enhance recommendation accuracy in the future. Source code and data are
available at https://github.com/chrisjtan/ucr.
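The abstract's core idea of a retrospective counterfactual explanation can be illustrated with a minimal sketch: find a history item whose removal changes the recommendation. The profile model below (candidates scored against the mean of history embeddings) and the single-item removal search are illustrative assumptions, not the paper's actual method, which the abstract does not specify.

```python
import numpy as np

def recommend(history, candidates):
    """Toy recommender: score candidates against the mean history embedding."""
    profile = np.mean(history, axis=0)
    scores = candidates @ profile
    return int(np.argmax(scores))

def retrospective_explanation(history, candidates):
    """Search for a single history item whose removal flips the top
    recommendation -- a simple counterfactual over single removals.
    Returns (removed_index, new_recommendation), or (None, original)
    if no single removal changes the outcome."""
    original = recommend(history, candidates)
    for i in range(len(history)):
        reduced = np.delete(history, i, axis=0)
        flipped = recommend(reduced, candidates)
        if flipped != original:
            return i, flipped
    return None, original
```

A prospective explanation would run the same logic forward: simulate adding a hypothetical interaction and report how the recommendation would change, letting the user control the system through those explanations.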
Related papers
- Interactive Visualization Recommendation with Hier-SUCB [52.11209329270573]
We propose an interactive personalized visualization recommendation (PVisRec) system that learns on user feedback from previous interactions.
For more interactive and accurate recommendations, we propose Hier-SUCB, a contextual semi-bandit in the PVisRec setting.
arXiv Detail & Related papers (2025-02-05T17:14:45Z) - Whom do Explanations Serve? A Systematic Literature Survey of User Characteristics in Explainable Recommender Systems Evaluation [7.021274080378664]
We surveyed 124 papers in which recommender systems explanations were evaluated in user studies.
Our findings suggest that the results from the surveyed studies predominantly cover specific users.
We recommend actions to move toward a more inclusive and reproducible evaluation.
arXiv Detail & Related papers (2024-12-12T13:01:30Z) - A Unified Causal Framework for Auditing Recommender Systems for Ethical Concerns [40.793466500324904]
We view recommender system auditing from a causal lens and provide a general recipe for defining auditing metrics.
Under this general causal auditing framework, we categorize existing auditing metrics and identify gaps in them.
We propose two classes of such metrics: future- and past-reachability and stability, which measure a user's ability to influence their own and other users' recommendations.
arXiv Detail & Related papers (2024-09-20T04:37:36Z) - Editable User Profiles for Controllable Text Recommendation [66.00743968792275]
We propose LACE, a novel concept value bottleneck model for controllable text recommendations.
LACE represents each user with a succinct set of human-readable concepts.
It learns personalized representations of the concepts based on user documents.
arXiv Detail & Related papers (2023-04-09T14:52:18Z) - Breaking Feedback Loops in Recommender Systems with Causal Inference [99.22185950608838]
Recent work has shown that feedback loops may compromise recommendation quality and homogenize user behavior.
We propose the Causal Adjustment for Feedback Loops (CAFL), an algorithm that provably breaks feedback loops using causal inference.
We show that CAFL improves recommendation quality when compared to prior correction methods.
arXiv Detail & Related papers (2022-07-04T17:58:39Z) - Online certification of preference-based fairness for personalized
recommender systems [20.875347023588652]
We assess the fairness of personalized recommender systems in the sense of envy-freeness.
We propose an auditing algorithm based on pure exploration and conservative constraints in multi-armed bandits.
arXiv Detail & Related papers (2021-04-29T17:45:27Z) - Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
arXiv Detail & Related papers (2020-06-03T05:04:38Z) - Survey for Trust-aware Recommender Systems: A Deep Learning Perspective [48.2733163413522]
It becomes critical to embrace a trustworthy recommender system.
This survey provides a systematic summary of three categories of trust-aware recommender systems.
arXiv Detail & Related papers (2020-04-08T02:11:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.