An Empirical analysis on Transparent Algorithmic Exploration in
Recommender Systems
- URL: http://arxiv.org/abs/2108.00151v1
- Date: Sat, 31 Jul 2021 05:08:29 GMT
- Title: An Empirical analysis on Transparent Algorithmic Exploration in
Recommender Systems
- Authors: Kihwan Kim
- Abstract summary: We propose a new approach for feedback elicitation without any deception and compare our approach to the conventional mix-in approach for evaluation.
Our results indicated that users left significantly more feedback on items chosen for exploration with our interface.
- Score: 17.91522677924348
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: All learning algorithms for recommendation face an inevitable and critical
trade-off between exploiting partial knowledge of a user's preferences for
short-term satisfaction and exploring additional user preferences for long-term
coverage. Although exploration is indispensable for the long-term success of a
recommender system, it has been considered a risk that decreases user
satisfaction, because items chosen for exploration frequently mismatch the
user's interests. To mitigate this risk, recommender systems have mixed items
chosen for exploration into a recommendation list, disguising them as ordinary
recommendations in order to elicit feedback that reveals the user's additional
tastes. Although this mix-in approach has been widely used in many recommenders,
there is little research evaluating its effectiveness or proposing a new
approach for eliciting user feedback without deceiving users. In this work, we
propose a new approach for feedback elicitation without any deception and
compare it to the conventional mix-in approach. To this end, we designed a
recommender interface that reveals which items are for exploration and conducted
a within-subject study with 94 MTurk workers. Our results indicated that users
left significantly more feedback on items chosen for exploration with our
interface. In addition, users rated our new interface higher than the
conventional mix-in interface in terms of novelty, diversity, transparency,
trust, and satisfaction. Finally, path analysis showed that exploration
increased user-centric evaluation metrics only with our new interface. Our work
paves the way for designing interfaces that leverage learning algorithms based
on users' feedback signals, providing a better user experience and gathering
more feedback data.
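The contrast between the conventional mix-in approach and the proposed transparent interface can be sketched as list assembly with an internal exploration flag that is either hidden from or shown to the user. The function below is an illustrative sketch, not the paper's implementation; the slot counts, item names, and field names are assumptions for demonstration.

```python
import random

def build_recommendation_list(exploit_items, explore_items, n_slots=10,
                              n_explore=2, transparent=True):
    """Assemble a recommendation list with exploration items mixed in.

    `transparent=True` labels exploration items explicitly (akin to the
    paper's proposed interface); `transparent=False` disguises them as
    ordinary recommendations (the conventional mix-in approach).
    """
    chosen_explore = random.sample(explore_items, n_explore)
    chosen_exploit = exploit_items[: n_slots - n_explore]
    # `for_exploration` is the system's internal flag; `label_shown`
    # is whether the user ever sees that flag.
    slots = [{"item": i, "for_exploration": False, "label_shown": False}
             for i in chosen_exploit]
    slots += [{"item": i, "for_exploration": True, "label_shown": transparent}
              for i in chosen_explore]
    random.shuffle(slots)  # interleave exploration items among recommendations
    return slots
```

Under both settings the system collects feedback on the same exploration items; only the labeling visible to the user differs, which is the variable the study manipulates.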
Related papers
- Quantifying User Coherence: A Unified Framework for Cross-Domain Recommendation Analysis [69.37718774071793]
This paper introduces novel information-theoretic measures for understanding recommender systems.
We evaluate 7 recommendation algorithms across 9 datasets, revealing the relationships between our measures and standard performance metrics.
arXiv Detail & Related papers (2024-10-03T13:02:07Z)
- Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems
This study introduces an interactive tool designed to help users comprehend and explore the impacts of algorithmic harms in recommender systems.
By leveraging visualizations, counterfactual explanations, and interactive modules, the tool allows users to investigate how biases such as miscalibration affect their recommendations.
arXiv Detail & Related papers (2024-09-10T23:58:27Z)
- UOEP: User-Oriented Exploration Policy for Enhancing Long-Term User Experiences in Recommender Systems [7.635117537731915]
Reinforcement learning (RL) has gained traction for enhancing user long-term experiences in recommender systems.
Modern recommender systems exhibit distinct user behavioral patterns among tens of millions of items, which increases the difficulty of exploration.
We propose User-Oriented Exploration Policy (UOEP), a novel approach facilitating fine-grained exploration among user groups.
arXiv Detail & Related papers (2024-01-17T08:01:18Z)
- Explainable Active Learning for Preference Elicitation [0.0]
We employ Active Learning (AL) to solve the addressed problem with the objective of maximizing information acquisition with minimal user effort.
AL selects informative data from a large unlabeled set and queries an oracle to label them.
It harvests user feedback (given for the system's explanations on the presented items) over informative samples to update an underlying machine learning (ML) model.
arXiv Detail & Related papers (2023-09-01T09:22:33Z)
- PIE: Personalized Interest Exploration for Large-Scale Recommender Systems [0.0]
We present a framework for exploration in large-scale recommender systems to address these challenges.
Our methodology can be easily integrated into an existing large-scale recommender system with minimal modifications.
Our work has been deployed in production on Facebook Watch, a popular video discovery and sharing platform serving billions of users.
arXiv Detail & Related papers (2023-04-13T22:25:09Z)
- Personalizing Intervened Network for Long-tailed Sequential User Behavior Modeling [66.02953670238647]
Tail users suffer from significantly lower-quality recommendations than head users after joint training.
A model trained separately on tail users still achieves inferior results due to limited data.
We propose a novel approach that significantly improves the recommendation performance of the tail users.
arXiv Detail & Related papers (2022-08-19T02:50:19Z)
- Reward Uncertainty for Exploration in Preference-based Reinforcement Learning [88.34958680436552]
We present an exploration method specifically for preference-based reinforcement learning algorithms.
Our main idea is to design an intrinsic reward by measuring the novelty based on learned reward.
Our experiments show that an exploration bonus derived from uncertainty in the learned reward improves both the feedback- and sample-efficiency of preference-based RL algorithms.
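One common way to quantify uncertainty in a learned reward is disagreement among an ensemble of reward models. The sketch below is an assumed illustration of that general idea, not the paper's exact formulation; the ensemble members and the scaling factor `beta` are hypothetical.

```python
import numpy as np

def exploration_bonus(reward_ensemble, state_action, beta=1.0):
    """Intrinsic bonus from disagreement among learned reward models.

    Novelty is approximated as the standard deviation of the ensemble's
    reward predictions for a given state-action input; `beta` scales the
    bonus. Each ensemble member is any callable r_hat(x) -> float.
    """
    preds = np.array([r_hat(state_action) for r_hat in reward_ensemble])
    return beta * preds.std()  # zero when all models agree
```

Inputs the ensemble has never been trained near tend to produce divergent predictions, so the bonus steers the agent toward under-explored regions.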
arXiv Detail & Related papers (2022-05-24T23:22:10Z)
- PURS: Personalized Unexpected Recommender System for Improving User Satisfaction [76.98616102965023]
We describe a novel Personalized Unexpected Recommender System (PURS) model that incorporates unexpectedness into the recommendation process.
Extensive offline experiments on three real-world datasets illustrate that the proposed PURS model significantly outperforms the state-of-the-art baseline approaches.
arXiv Detail & Related papers (2021-06-05T01:33:21Z)
- Partial Bandit and Semi-Bandit: Making the Most Out of Scarce Users' Feedback [62.997667081978825]
We present a novel approach for considering user feedback and evaluate it using three distinct strategies.
Despite the limited amount of feedback returned by users (as low as 20% of the total), our approach obtains results similar to those of state-of-the-art approaches.
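The scarce-feedback setting can be illustrated with a simple bandit that updates its estimates only when a user actually returns feedback. This is a hedged sketch of the general setting, not the paper's strategies; the epsilon-greedy policy, `feedback_rate`, and incremental-mean update are assumptions for demonstration.

```python
import random

def run_scarce_feedback_bandit(arms, reward_fn, feedback_rate=0.2,
                               steps=1000, eps=0.1, seed=0):
    """Epsilon-greedy bandit that learns only from returned feedback.

    Each round an arm is pulled, but its reward is observed only with
    probability `feedback_rate`, mimicking users who rarely leave
    feedback; unobserved rounds contribute nothing to the estimates.
    """
    rng = random.Random(seed)
    counts = {a: 0 for a in arms}
    values = {a: 0.0 for a in arms}
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.choice(arms)          # explore a random arm
        else:
            arm = max(arms, key=lambda a: values[a])  # exploit best estimate
        if rng.random() < feedback_rate:    # user actually returns feedback
            r = reward_fn(arm, rng)
            counts[arm] += 1
            values[arm] += (r - values[arm]) / counts[arm]  # incremental mean
    return values
```

Even with only a fifth of the rounds producing feedback, the estimates still converge toward the better arm, which is the phenomenon the summary above describes.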
arXiv Detail & Related papers (2020-09-16T07:32:51Z)
- Exploration-Exploitation Motivated Variational Auto-Encoder for Recommender Systems [1.52292571922932]
We introduce an exploitation-exploration motivated variational auto-encoder (XploVAE) to collaborative filtering.
To facilitate personalized recommendations, we construct user-specific subgraphs, which contain the first-order proximity capturing observed user-item interactions.
A hierarchical latent space model is utilized to learn the personalized item embedding for a given user, along with the population distribution of all user subgraphs.
arXiv Detail & Related papers (2020-06-05T17:37:46Z)
- Reward Constrained Interactive Recommendation with Natural Language Feedback [158.8095688415973]
We propose a novel constraint-augmented reinforcement learning (RL) framework to efficiently incorporate user preferences over time.
Specifically, we leverage a discriminator to detect recommendations violating user historical preference.
Our proposed framework is general and is further extended to the task of constrained text generation.
arXiv Detail & Related papers (2020-05-04T16:23:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.