Creating Healthy Friction: Determining Stakeholder Requirements of Job Recommendation Explanations
- URL: http://arxiv.org/abs/2409.15971v1
- Date: Tue, 24 Sep 2024 11:03:17 GMT
- Title: Creating Healthy Friction: Determining Stakeholder Requirements of Job Recommendation Explanations
- Authors: Roan Schellingerhout, Francesco Barile, Nava Tintarev
- Abstract summary: We evaluate an explainable job recommender system using a realistic, task-based, mixed-design user study.
We find that providing stakeholders with real explanations does not significantly improve decision-making speed or accuracy.
- Score: 2.373992571236766
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The increased use of information retrieval in recruitment, primarily through job recommender systems (JRSs), can have a large impact on job seekers, recruiters, and companies. As a result, such systems have been designated as high-risk in recent legislation. This requires JRSs to be trustworthy and transparent, allowing stakeholders to understand why specific recommendations were made. To fulfill this requirement, the stakeholders' exact preferences and needs must first be determined. To do so, we evaluated an explainable job recommender system using a realistic, task-based, mixed-design user study (n=30) in which stakeholders had to make decisions based on the model's explanations. This mixed-methods evaluation consisted of two objective metrics (correctness and efficiency) and three subjective metrics (trust, transparency, and usefulness). These metrics were evaluated twice per participant: once using real explanations and once using random explanations. The study also included a qualitative analysis following a think-aloud protocol while participants performed tasks adapted to each stakeholder group. We find that providing stakeholders with real explanations does not significantly improve decision-making speed or accuracy. Our results showed a non-significant trend for the real explanations to outperform the random ones on perceived trust, usefulness, and transparency of the system for all stakeholder types. We conclude that stakeholders benefit more from interacting with explanations as decision support capable of providing healthy friction than as the persuasive tools they were previously assumed to be.
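The within-subject design described above lends itself to a paired comparison: each participant rates the system twice, once with real and once with random explanations. Below is a minimal sketch of how such paired subjective ratings could be analyzed. The data, sample values, and the choice of a Wilcoxon signed-rank test are illustrative assumptions, not the authors' actual analysis pipeline.

```python
# Minimal sketch of a paired (within-subject) comparison of subjective
# ratings under real vs. random explanations. All data here are
# hypothetical; the paper's actual analysis may differ.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_participants = 30  # matches the study's n=30

# Hypothetical 1-5 Likert ratings of perceived trust per participant.
trust_real = rng.integers(2, 6, size=n_participants)    # real explanations
trust_random = rng.integers(1, 5, size=n_participants)  # random explanations

# Wilcoxon signed-rank test: a common choice for paired ordinal data.
result = stats.wilcoxon(trust_real, trust_random)
print(f"W = {result.statistic:.1f}, p = {result.pvalue:.3f}")
```

A signed-rank test is sketched here rather than a paired t-test because Likert ratings are ordinal and need not be normally distributed; a non-significant p-value under such a test would be consistent with the "non-significant trend" the abstract reports.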
Related papers
- Towards a Signal Detection Based Measure for Assessing Information Quality of Explainable Recommender Systems [0.5371337604556311]
We develop an objective metric to evaluate Veracity: the information quality of explanations.
To assess the effectiveness of our proposed metric, we set up four cases with varying levels of information quality.
arXiv Detail & Related papers (2025-07-01T20:11:17Z)
- Information Bargaining: Bilateral Commitment in Bayesian Persuasion [60.3761154043329]
We introduce a unified framework and a well-structured solution concept for long-term persuasion.
This perspective makes explicit the common knowledge of the game structure and grants the receiver comparable commitment capabilities.
The framework is validated through a two-stage validation-and-inference paradigm.
arXiv Detail & Related papers (2025-06-06T08:42:34Z)
- OKRA: an Explainable, Heterogeneous, Multi-Stakeholder Job Recommender System [2.373992571236766]
We propose a novel explainable multi-stakeholder job recommender system using graph neural networks.
The proposed method is capable of providing both candidate- and company-side recommendations.
We find that OKRA performs substantially better than six baselines in terms of nDCG on two datasets.
arXiv Detail & Related papers (2025-03-17T14:12:51Z)
- Understanding Fairness in Recommender Systems: A Healthcare Perspective [0.18416014644193066]
This paper explores the public's comprehension of fairness in healthcare recommendations.
We conducted a survey where participants selected from four fairness metrics.
Results suggest that a one-size-fits-all approach to fairness may be insufficient.
arXiv Detail & Related papers (2024-09-05T19:59:42Z)
- Evaluating Human Alignment and Model Faithfulness of LLM Rationale [66.75309523854476]
We study how well large language models (LLMs) explain their generations through rationales.
We show that prompting-based methods are less "faithful" than attribution-based explanations.
arXiv Detail & Related papers (2024-06-28T20:06:30Z)
- Rethinking the Evaluation of Dialogue Systems: Effects of User Feedback on Crowdworkers and LLMs [57.16442740983528]
In ad-hoc retrieval, evaluation relies heavily on user actions, including implicit feedback.
The role of user feedback in annotators' assessment of turns in a conversational setting has been little studied.
We focus on how the evaluation of task-oriented dialogue systems (TDSs) is affected by considering user feedback, explicit or implicit, as provided through the follow-up utterance of the turn being evaluated.
arXiv Detail & Related papers (2024-04-19T16:45:50Z)
- Clarify When Necessary: Resolving Ambiguity Through Interaction with LMs [58.620269228776294]
We propose a task-agnostic framework for resolving ambiguity by asking users clarifying questions.
We evaluate systems across three NLP applications: question answering, machine translation and natural language inference.
We find that intent-sim is robust, demonstrating improvements across a wide range of NLP tasks and LMs.
arXiv Detail & Related papers (2023-11-16T00:18:50Z)
- Interactive Explanation with Varying Level of Details in an Explainable Scientific Literature Recommender System [0.5937476291232802]
In this paper, we adopt a user-centered, interactive explanation model that provides explanations at different levels of detail and empowers users to interact with, control, and personalize the explanations according to their needs and preferences.
We conducted a qualitative user study to investigate the impact of providing interactive explanations with varying levels of detail on users' perception of the explainable RS.
arXiv Detail & Related papers (2023-06-09T10:48:04Z)
- A Survey on Fairness-aware Recommender Systems [59.23208133653637]
We present concepts of fairness in different recommendation scenarios, comprehensively categorize current advances, and introduce typical methods to promote fairness in different stages of recommender systems.
Next, we delve into the significant influence that fairness-aware recommender systems exert on real-world industrial applications.
arXiv Detail & Related papers (2023-06-01T07:08:22Z)
- Justification vs. Transparency: Why and How Visual Explanations in a Scientific Literature Recommender System [0.0]
We identify relationships between Why and How explanation intelligibility types and the explanation goals of justification and transparency.
Our study shows that the choice of the explanation intelligibility types depends on the explanation goal and user type.
arXiv Detail & Related papers (2023-05-26T15:40:46Z)
- Assisting Human Decisions in Document Matching [52.79491990823573]
We devise a proxy matching task that allows us to evaluate which kinds of assistive information improve decision makers' performance.
We find that providing black-box model explanations reduces users' accuracy on the matching task.
On the other hand, custom methods that are designed to closely attend to some task-specific desiderata are found to be effective in improving user performance.
arXiv Detail & Related papers (2023-02-16T17:45:20Z)
- "There Is Not Enough Information": On the Effects of Explanations on Perceptions of Informational Fairness and Trustworthiness in Automated Decision-Making [0.0]
Automated decision systems (ADS) are increasingly used for consequential decision-making.
We conduct a human subject study to assess people's perceptions of informational fairness.
A comprehensive analysis of qualitative feedback sheds light on people's desiderata for explanations.
arXiv Detail & Related papers (2022-05-11T20:06:03Z)
- Fairness and Transparency in Recommendation: The Users' Perspective [14.830700792215849]
We discuss user perspectives of fairness-aware recommender systems.
We propose three features that could improve user understanding of and trust in fairness-aware recommender systems.
arXiv Detail & Related papers (2021-03-16T00:42:09Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)