Beyond Satisfaction: From Placebic to Actionable Explanations For Enhanced Understandability
- URL: http://arxiv.org/abs/2512.06591v1
- Date: Sat, 06 Dec 2025 23:06:18 GMT
- Title: Beyond Satisfaction: From Placebic to Actionable Explanations For Enhanced Understandability
- Authors: Joe Shymanski, Jacob Brue, Sandip Sen
- Abstract summary: This paper critiques the overreliance on user satisfaction metrics in evaluating the explainability of machine learning systems. We find that subjective surveys fail to capture whether explanations truly support users in building useful domain understanding. We propose that future evaluations of agent explanation capabilities should integrate objective task performance metrics alongside subjective assessments to more accurately measure explanation quality.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable AI (XAI) presents useful tools to facilitate transparency and trustworthiness in machine learning systems. However, current evaluations of system explainability often rely heavily on subjective user surveys, which may not adequately capture the effectiveness of explanations. This paper critiques the overreliance on user satisfaction metrics and explores whether these can differentiate between meaningful (actionable) and vacuous (placebic) explanations. In experiments involving optimal Social Security filing age selection tasks, participants used one of three protocols: no explanations, placebic explanations, and actionable explanations. Participants who received actionable explanations significantly outperformed the other groups in objective measures of their mental model, but users rated placebic and actionable explanations as equally satisfying. This suggests that subjective surveys alone fail to capture whether explanations truly support users in building useful domain understanding. We propose that future evaluations of agent explanation capabilities should integrate objective task performance metrics alongside subjective assessments to more accurately measure explanation quality. The code for this study can be found at https://github.com/Shymkis/social-security-explainer.
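The evaluation design described in the abstract, comparing an objective measure of participants' mental models with subjective satisfaction ratings across the three explanation protocols, can be illustrated with a short sketch. The snippet below is not the authors' code (that lives in the linked repository); it uses synthetic, hypothetical scores and a standard one-way ANOVA from SciPy purely to show how an objective metric and a satisfaction survey can be analyzed side by side.

```python
# Hypothetical illustration only: synthetic data, not the study's actual results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
groups = ["none", "placebic", "actionable"]

# Simulated objective mental-model scores (e.g., % correct filing-age judgments)
# and subjective satisfaction ratings (1-7 Likert), 30 participants per group.
objective = {
    "none":       rng.normal(55, 10, 30),
    "placebic":   rng.normal(57, 10, 30),
    "actionable": rng.normal(72, 10, 30),   # assumed benefit of actionable explanations
}
satisfaction = {
    "none":       rng.normal(3.5, 1.0, 30),
    "placebic":   rng.normal(5.5, 1.0, 30),  # placebic explanations can still "feel" helpful
    "actionable": rng.normal(5.6, 1.0, 30),
}

# One-way ANOVA across the three protocols for each measure.
f_obj, p_obj = stats.f_oneway(*(objective[g] for g in groups))
f_sat, p_sat = stats.f_oneway(*(satisfaction[g] for g in groups))

print(f"objective performance:   F={f_obj:.2f}, p={p_obj:.4f}")
print(f"subjective satisfaction: F={f_sat:.2f}, p={p_sat:.4f}")
# A clear gap on the objective measure alongside similar satisfaction for the
# placebic and actionable groups is the pattern the paper argues surveys alone miss.
```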
Related papers
- COMMUNITYNOTES: A Dataset for Exploring the Helpfulness of Fact-Checking Explanations [89.37527535663433]
We present a large-scale dataset of 104k posts with user-provided notes and helpfulness labels.
We propose a framework that automatically generates and improves reason definitions via automatic prompt optimization.
Our experiments show that the optimized definitions can improve both helpfulness and reason prediction.
arXiv Detail & Related papers (2025-10-28T05:28:47Z) - Predicting Satisfaction of Counterfactual Explanations from Human Ratings of Explanatory Qualities [0.873811641236639]
We analyze a dataset of counterfactual explanations that were evaluated by 206 human participants.
We find that feasibility and trust stand out as the strongest predictors of user satisfaction.
Other metrics explain 58% of the variance, highlighting the importance of additional explanatory qualities.
arXiv Detail & Related papers (2025-04-07T11:09:25Z) - Creating Healthy Friction: Determining Stakeholder Requirements of Job Recommendation Explanations [2.373992571236766]
We evaluate an explainable job recommender system using a realistic, task-based, mixed-design user study.
We find that providing stakeholders with real explanations does not significantly improve decision-making speed and accuracy.
arXiv Detail & Related papers (2024-09-24T11:03:17Z) - Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential for misunderstanding.
arXiv Detail & Related papers (2023-12-10T23:13:23Z) - Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z) - Complementary Explanations for Effective In-Context Learning [77.83124315634386]
Large language models (LLMs) have exhibited remarkable capabilities in learning from explanations in prompts.
This work aims to better understand the mechanisms by which explanations are used for in-context learning.
arXiv Detail & Related papers (2022-11-25T04:40:47Z) - How (Not) To Evaluate Explanation Quality [29.40729766120284]
We formulate desired characteristics of explanation quality that apply across tasks and domains.
We propose actionable guidelines to overcome obstacles that limit today's evaluation of explanation quality.
arXiv Detail & Related papers (2022-10-13T16:06:59Z) - Features of Explainability: How users understand counterfactual and causal explanations for categorical and continuous features in XAI [10.151828072611428]
Counterfactual explanations are increasingly used to address interpretability, recourse, and bias in AI decisions.
We tested the effects of counterfactual and causal explanations on the objective accuracy of users' predictions.
We also found that users understand explanations referring to categorical features more readily than those referring to continuous features.
arXiv Detail & Related papers (2022-04-21T15:01:09Z) - Evaluating Explanations: How much do explanations from the teacher aid students? [103.05037537415811]
We formalize the value of explanations using a student-teacher paradigm that measures the extent to which explanations improve student models in learning.
Unlike many prior proposals to evaluate explanations, our approach cannot be easily gamed, enabling principled, scalable, and automatic evaluation of attributions.
arXiv Detail & Related papers (2020-12-01T23:40:21Z) - Evaluations and Methods for Explanation through Robustness Analysis [117.7235152610957]
We establish a novel set of evaluation criteria for such feature-based explanations via robustness analysis.
We obtain new explanations that are loosely necessary and sufficient for a prediction.
We extend the explanation to extract the set of features that would move the current prediction to a target class (a rough sketch of this idea follows below).
arXiv Detail & Related papers (2020-05-31T05:52:05Z)
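For the robustness-analysis entry above, one hypothetical way to picture the general idea (not the cited paper's actual method) is to greedily mask features until the model's prediction flips; the masked features then form a small set that was necessary for the original prediction. The model, data, and `flip_set` helper below are illustrative assumptions only.

```python
# Illustrative sketch only: a greedy feature-masking heuristic in the spirit of
# robustness-based explanations, not the cited paper's algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * X[:, 3] - X[:, 7] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def flip_set(x, model, baseline=0.0):
    """Greedily mask features (set them to a baseline) until the prediction flips."""
    original = model.predict(x.reshape(1, -1))[0]
    masked, chosen = x.copy(), []
    for _ in range(len(x)):
        # Try masking each remaining feature; keep the one that most reduces
        # the model's confidence in the original class.
        probs = []
        for j in range(len(x)):
            if j in chosen:
                probs.append(np.inf)
                continue
            trial = masked.copy()
            trial[j] = baseline
            probs.append(model.predict_proba(trial.reshape(1, -1))[0][original])
        j = int(np.argmin(probs))
        masked[j] = baseline
        chosen.append(j)
        if model.predict(masked.reshape(1, -1))[0] != original:
            break
    return chosen

print("features masked to flip the prediction:", flip_set(X[0], model))
```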