Exploring Practitioner Perspectives On Training Data Attribution Explanations
- URL: http://arxiv.org/abs/2310.20477v2
- Date: Wed, 22 Nov 2023 09:57:54 GMT
- Title: Exploring Practitioner Perspectives On Training Data Attribution Explanations
- Authors: Elisa Nguyen, Evgenii Kortukov, Jean Y. Song, Seong Joon Oh
- Abstract summary: We interviewed 10 practitioners to understand the possible usability of training data attribution explanations.
We found that training data quality is often the most important factor for high model performance in practice.
We urge the community to focus on the utility of TDA techniques from the human-machine collaboration perspective.
- Score: 20.45528493625083
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable AI (XAI) aims to provide insight into opaque model reasoning to
humans and as such is an interdisciplinary field by nature. In this paper, we
interviewed 10 practitioners to understand the possible usability of training
data attribution (TDA) explanations and to explore the design space of such an
approach. We confirmed that training data quality is often the most important
factor for high model performance in practice and model developers mainly rely
on their own experience to curate data. End-users expect explanations to
enhance their interaction with the model; they do not necessarily prioritise
training data as a means of explanation but are open to it. Among our
participants, we found that TDA explanations are not well known and therefore
not used. We
urge the community to focus on the utility of TDA techniques from the
human-machine collaboration perspective and broaden the TDA evaluation to
reflect common use cases in practice.
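The paper discusses training data attribution (TDA) conceptually rather than algorithmically, so as a reader-oriented aside, the sketch below illustrates one common flavour of TDA explanation: ranking training examples by how strongly they influence a single test prediction via a gradient dot-product (TracIn-style, single checkpoint). The toy logistic-regression model, the synthetic data, and every name in the snippet are illustrative assumptions, not drawn from the paper or its study.

```python
# Minimal TDA sketch (illustrative only; not the paper's method).
# Idea: a TDA explanation answers "which training examples most influenced
# this prediction?" Influence is approximated here by the dot product of
# per-example loss gradients with the test-point gradient (TracIn-style).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 100 training points, 5 features, binary labels.
X_train = rng.normal(size=(100, 5))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(float)
x_test = rng.normal(size=5)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, x, y):
    # Gradient of the logistic loss for one example: (p - y) * x.
    return (sigmoid(x @ w) - y) * x

# Train a plain logistic regression with full-batch gradient descent.
w = np.zeros(5)
for _ in range(500):
    g = np.mean([grad(w, x, y) for x, y in zip(X_train, y_train)], axis=0)
    w -= 0.1 * g

# Influence score: gradient alignment between each training example and the
# test prediction (using the predicted label, since the true one is unknown).
y_pred = float(sigmoid(x_test @ w) > 0.5)
g_test = grad(w, x_test, y_pred)
influence = np.array([grad(w, x, y) @ g_test for x, y in zip(X_train, y_train)])

# The TDA "explanation" is the ranked list of most influential training examples.
for i in np.argsort(-np.abs(influence))[:5]:
    print(f"train example {i:3d}  influence {influence[i]:+.4f}  label {y_train[i]:.0f}")
```

In a practitioner-facing tool, the retrieved training examples themselves (rather than raw scores) would be surfaced to the user, which is the kind of interaction the interview study probes.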
Related papers
- User-centric evaluation of explainability of AI with and for humans: a comprehensive empirical study [5.775094401949666]
This study is situated in the field of Human-Centered Artificial Intelligence (HCAI).
It focuses on the results of a user-centered assessment of commonly used eXplainable Artificial Intelligence (XAI) algorithms.
arXiv Detail & Related papers (2024-10-21T12:32:39Z) - Distilling Privileged Multimodal Information for Expression Recognition using Optimal Transport [46.91791643660991]
Deep learning models for multimodal expression recognition have reached remarkable performance in controlled laboratory environments.
These models struggle in the wild because the modalities used during training may be unavailable or of poor quality at deployment time.
In practice, only a subset of the training-time modalities may be available at test time.
Learning with privileged information enables models to exploit data from additional modalities that are only available during training.
arXiv Detail & Related papers (2024-01-27T19:44:15Z) - Standardizing Your Training Process for Human Activity Recognition
Models: A Comprehensive Review in the Tunable Factors [4.199844472131922]
We provide an exhaustive review of contemporary deep learning research in the field of wearable human activity recognition (WHAR).
Our findings suggest that a major trend is the lack of detail provided by model training protocols.
With insights from the analyses, we define a novel integrated training procedure tailored to the WHAR model.
arXiv Detail & Related papers (2024-01-10T17:45:28Z) - Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z) - Explaining Explainability: Towards Deeper Actionable Insights into Deep
Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z) - Let's Go to the Alien Zoo: Introducing an Experimental Framework to
Study Usability of Counterfactual Explanations for Machine Learning [6.883906273999368]
Counterfactual explanations (CFEs) have gained traction as a psychologically grounded approach to generate post-hoc explanations.
We introduce the Alien Zoo, an engaging, web-based and game-inspired experimental framework.
As a proof of concept, we demonstrate the practical efficacy and feasibility of this approach in a user study.
arXiv Detail & Related papers (2022-05-06T17:57:05Z) - Causal Reasoning Meets Visual Representation Learning: A Prospective
Study [117.08431221482638]
A lack of interpretability, robustness, and out-of-distribution generalization is becoming a key challenge for existing visual models.
Inspired by the strong inference ability of human-level agents, recent years have witnessed great effort in developing causal reasoning paradigms.
This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussion, and bring to the forefront the urgency of developing novel causal reasoning methods.
arXiv Detail & Related papers (2022-04-26T02:22:28Z) - Explain, Edit, and Understand: Rethinking User Study Design for
Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews.
We observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control.
arXiv Detail & Related papers (2021-12-17T18:29:56Z) - Fair Representation Learning using Interpolation Enabled Disentanglement [9.043741281011304]
We propose a novel method to address two key issues: (a) Can we simultaneously learn fair disentangled representations while ensuring the utility of the learned representation for downstream tasks, and (b) Can we provide theoretical insights into when the proposed approach will be both fair and accurate.
To address the former, we propose the method FRIED, Fair Representation learning using Interpolation Enabled Disentanglement.
arXiv Detail & Related papers (2021-07-31T17:32:12Z) - Human Trajectory Forecasting in Crowds: A Deep Learning Perspective [89.4600982169]
We present an in-depth analysis of existing deep learning-based methods for modelling social interactions.
We propose two knowledge-based data-driven methods to effectively capture these social interactions.
We develop a large-scale interaction-centric benchmark, TrajNet++, a significant yet missing component in the field of human trajectory forecasting.
arXiv Detail & Related papers (2020-07-07T17:19:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.