Practitioners Versus Users: A Value-Sensitive Evaluation of Current
Industrial Recommender System Design
- URL: http://arxiv.org/abs/2208.04122v2
- Date: Sat, 27 Aug 2022 01:56:26 GMT
- Title: Practitioners Versus Users: A Value-Sensitive Evaluation of Current
Industrial Recommender System Design
- Authors: Zhilong Chen, Jinghua Piao, Xiaochong Lan, Hancheng Cao, Chen Gao,
Zhicong Lu, Yong Li
- Abstract summary: We focus on five values: recommendation quality, privacy, transparency, fairness, and trustworthiness.
Our results reveal the existence and sources of tensions between practitioners and users in terms of value interpretation, evaluation, and practice.
- Score: 27.448761282289585
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recommender systems are playing an increasingly important role in alleviating
information overload and supporting users' various needs, e.g., consumption,
socialization, and entertainment. However, limited research focuses on how
values should be extensively considered in industrial deployments of
recommender systems, the neglect of which can be problematic. To fill this
gap, in this paper, we adopt Value Sensitive Design to comprehensively explore
how practitioners and users recognize different values of current industrial
recommender systems. Based on conceptual and empirical investigations, we focus
on five values: recommendation quality, privacy, transparency, fairness, and
trustworthiness. We further conduct in-depth qualitative interviews with 20
users and 10 practitioners to delve into their opinions about these values. Our
results reveal the existence and sources of tensions between practitioners and
users in terms of value interpretation, evaluation, and practice, which provide
novel implications for designing more human-centric and value-sensitive
recommender systems.
Related papers
- The 1st Workshop on Human-Centered Recommender Systems [27.23807230278776]
This workshop aims to provide a platform for researchers to explore the development of Human-Centered Recommender Systems.
HCRS refers to recommender systems that place human needs, values, and capabilities at the core of their design and operation.
In this workshop, topics will include, but are not limited to, robustness, privacy, transparency, fairness, diversity, accountability, ethical considerations, and user-friendly design.
arXiv Detail & Related papers (2024-11-22T06:46:41Z)
- Pessimistic Evaluation [58.736490198613154]
We argue that evaluating information access systems assumes utilitarian values that are not aligned with traditions of information access based on equal access.
We advocate for pessimistic evaluation of information access systems, focusing on worst-case utility.
arXiv Detail & Related papers (2024-10-17T15:40:09Z)
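To make the contrast in the entry above concrete, here is a minimal sketch (not the paper's actual protocol) comparing a utilitarian mean-utility summary with a pessimistic worst-case summary; the system names and per-user scores are invented for illustration.
```python
# Minimal sketch: a utilitarian (mean) summary versus a pessimistic
# (worst-case) summary of per-user utility. All scores are invented.

def mean_utility(per_user_utility):
    """Utilitarian summary: average utility across users."""
    return sum(per_user_utility) / len(per_user_utility)

def worst_case_utility(per_user_utility):
    """Pessimistic summary: utility of the worst-served user."""
    return min(per_user_utility)

if __name__ == "__main__":
    # Hypothetical per-user scores (e.g., NDCG-like) for two systems.
    system_a = [0.92, 0.88, 0.90, 0.15, 0.89]  # strong on average, fails one user
    system_b = [0.75, 0.72, 0.70, 0.68, 0.74]  # lower mean, no user left far behind
    for name, scores in (("A", system_a), ("B", system_b)):
        print(f"System {name}: mean={mean_utility(scores):.2f}, "
              f"worst-case={worst_case_utility(scores):.2f}")
```
Mean utility ranks System A first, while the pessimistic view prefers System B, whose worst-served user is far better off.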
- Review-based Recommender Systems: A Survey of Approaches, Challenges and Future Perspectives [11.835903510784735]
Review-based recommender systems have emerged as a significant sub-field in this domain.
We present a categorization of these systems and summarize the state-of-the-art methods, analyzing their unique features, effectiveness, and limitations.
We propose potential directions for future research, including the integration of multimodal data, multi-criteria rating information, and ethical considerations.
arXiv Detail & Related papers (2024-05-09T05:45:18Z)
- Rethinking the Evaluation of Dialogue Systems: Effects of User Feedback on Crowdworkers and LLMs [57.16442740983528]
In ad-hoc retrieval, evaluation relies heavily on user actions, including implicit feedback.
The role of user feedback in annotators' assessment of turns in a conversational setting has been little studied.
We focus on how the evaluation of task-oriented dialogue systems (TDSs) is affected by considering user feedback, explicit or implicit, as provided through the follow-up utterance of a turn being evaluated.
arXiv Detail & Related papers (2024-04-19T16:45:50Z)
- User-Controllable Recommendation via Counterfactual Retrospective and Prospective Explanations [96.45414741693119]
We present a user-controllable recommender system that seamlessly integrates explainability and controllability.
By providing both retrospective and prospective explanations through counterfactual reasoning, the system lets users customize their control over it.
arXiv Detail & Related papers (2023-08-02T01:13:36Z)
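The counterfactual idea in the entry above can be illustrated with a toy sketch (not the paper's implementation): a retrospective explanation asks which past interaction drove the current recommendation, and a prospective one asks what the user would get after removing it. All item names and similarity scores below are invented.
```python
# Toy counterfactual explanations on a tiny item-similarity recommender.
# Item names and similarity scores are hypothetical.

SIM = {  # invented item-item similarity scores
    "sci-fi movie": {"space documentary": 0.9, "fantasy novel": 0.4, "cooking show": 0.1},
    "cooking blog": {"space documentary": 0.1, "fantasy novel": 0.2, "cooking show": 0.9},
}
CANDIDATES = ["space documentary", "fantasy novel", "cooking show"]

def recommend(history):
    """Score candidates by summed similarity to the user's history; return the best one."""
    scores = {c: sum(SIM[h].get(c, 0.0) for h in history) for c in CANDIDATES}
    return max(scores, key=scores.get)

def retrospective_explanation(history):
    """Which past interactions, if removed, would change the current recommendation?"""
    current = recommend(history)
    causes = [h for h in history if recommend([x for x in history if x != h]) != current]
    return current, causes

def prospective_explanation(history, removed):
    """What would be recommended instead if the user removed `removed` from their history?"""
    return recommend([x for x in history if x != removed])

if __name__ == "__main__":
    history = ["sci-fi movie", "cooking blog"]
    rec, causes = retrospective_explanation(history)
    print(f"Recommended: {rec}; driven by: {causes}")
    for item in causes:
        print(f"If you remove '{item}', you would get: {prospective_explanation(history, item)}")
```
The same removal mechanism doubles as a control knob: editing the history is exactly the action the prospective explanation previews.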
- A Survey on Fairness-aware Recommender Systems [59.23208133653637]
We present concepts of fairness in different recommendation scenarios, comprehensively categorize current advances, and introduce typical methods to promote fairness in different stages of recommender systems.
Next, we delve into the significant influence that fairness-aware recommender systems exert on real-world industrial applications.
arXiv Detail & Related papers (2023-06-01T07:08:22Z)
- Consumer-side Fairness in Recommender Systems: A Systematic Survey of Methods and Evaluation [1.4123323039043334]
Growing awareness of discrimination in machine learning methods motivated both academia and industry to research how fairness can be ensured in recommender systems.
For recommender systems, such issues are well exemplified by occupation recommendation, where biases in historical data may lead to recommender systems relating one gender to lower wages or to the propagation of stereotypes.
This survey serves as a systematic overview and discussion of the current research on consumer-side fairness in recommender systems.
arXiv Detail & Related papers (2023-05-16T10:07:41Z)
- Fairness in Recommender Systems: Research Landscape and Future Directions [119.67643184567623]
We review the concepts and notions of fairness that were put forward in the area in the recent past.
We present an overview of how research in this field is currently operationalized.
Overall, our analysis of recent works points to certain research gaps.
arXiv Detail & Related papers (2022-05-23T08:34:25Z)
- What are you optimizing for? Aligning Recommender Systems with Human Values [9.678391591582582]
We describe cases where real recommender systems were modified in the service of various human values.
We look to AI alignment work for approaches that could learn complex values directly from stakeholders.
arXiv Detail & Related papers (2021-07-22T21:52:43Z)
- FEBR: Expert-Based Recommendation Framework for beneficial and personalized content [77.86290991564829]
We propose FEBR (Expert-Based Recommendation Framework), an apprenticeship learning framework to assess the quality of the recommended content.
The framework exploits the demonstrated trajectories of an expert (assumed to be reliable) in a recommendation evaluation environment, to recover an unknown utility function.
We evaluate the performance of our solution through a user interest simulation environment (using RecSim).
arXiv Detail & Related papers (2021-07-17T18:21:31Z)
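As a rough sketch of the apprenticeship-learning idea behind the FEBR entry above (not FEBR itself, which works on expert trajectories inside a RecSim evaluation environment), the snippet below recovers a linear utility over two invented item features by comparing expert feature expectations with those of a random baseline policy; all items, features, and trajectories are made up.
```python
# Sketch: recover a linear utility w from demonstrated expert trajectories by
# one feature-expectation-matching step against a random baseline policy.
import random

# Hypothetical per-item features: (click appeal, content quality)
FEATURES = {
    "clickbait":   (0.9, 0.1),
    "expert_blog": (0.6, 0.9),
    "tutorial":    (0.5, 0.8),
    "meme":        (0.8, 0.2),
}

def feature_expectations(trajectories):
    """Average feature vector of the items chosen along the given trajectories."""
    items = [item for traj in trajectories for item in traj]
    return tuple(sum(FEATURES[i][k] for i in items) / len(items) for k in range(2))

def recover_utility(expert_trajs, baseline_trajs):
    """Weight the feature directions in which the expert differs from the baseline."""
    mu_e = feature_expectations(expert_trajs)
    mu_b = feature_expectations(baseline_trajs)
    w = tuple(e - b for e, b in zip(mu_e, mu_b))
    norm = sum(abs(x) for x in w) or 1.0
    return tuple(x / norm for x in w)

if __name__ == "__main__":
    random.seed(0)
    expert = [["expert_blog", "tutorial"], ["tutorial", "expert_blog"]]  # reliable demos
    baseline = [[random.choice(list(FEATURES)) for _ in range(2)] for _ in range(50)]
    w = recover_utility(expert, baseline)
    ranked = sorted(FEATURES, key=lambda i: sum(wk * fk for wk, fk in zip(w, FEATURES[i])),
                    reverse=True)
    print("Recovered utility weights:", w)
    print("Items ranked by recovered utility:", ranked)
```
The recovered weights favor content quality over click appeal, so expert-endorsed items are ranked above clickbait, which is the kind of "beneficial content" effect the framework aims for.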
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.