Beyond Expertise and Roles: A Framework to Characterize the Stakeholders
of Interpretable Machine Learning and their Needs
- URL: http://arxiv.org/abs/2101.09824v1
- Date: Sun, 24 Jan 2021 23:21:21 GMT
- Title: Beyond Expertise and Roles: A Framework to Characterize the Stakeholders
of Interpretable Machine Learning and their Needs
- Authors: Harini Suresh, Steven R. Gomez, Kevin K. Nam, Arvind Satyanarayan
- Abstract summary: It is critical that diverse stakeholders can interrogate black-box automated systems and find information that is understandable, relevant, and useful to them.
This paper eschews prior expertise- and role-based categorizations of interpretability stakeholders in favor of a more granular framework.
We characterize stakeholders by their formal, instrumental, and personal knowledge and how it manifests in the contexts of machine learning, the data domain, and the general milieu.
- Score: 6.381046244250263
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To ensure accountability and mitigate harm, it is critical that diverse
stakeholders can interrogate black-box automated systems and find information
that is understandable, relevant, and useful to them. In this paper, we eschew
prior expertise- and role-based categorizations of interpretability
stakeholders in favor of a more granular framework that decouples stakeholders'
knowledge from their interpretability needs. We characterize stakeholders by
their formal, instrumental, and personal knowledge and how it manifests in the
contexts of machine learning, the data domain, and the general milieu. We
additionally distill a hierarchical typology of stakeholder needs that
distinguishes higher-level domain goals from lower-level interpretability
tasks. In assessing the descriptive, evaluative, and generative powers of our
framework, we find our more nuanced treatment of stakeholders reveals gaps and
opportunities in the interpretability literature, adds precision to the design
and comparison of user studies, and facilitates a more reflexive approach to
conducting this research.
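The framework itself is conceptual, but as a purely illustrative aid, the following minimal Python sketch shows one way its two axes (formal, instrumental, and personal knowledge across the machine-learning, data-domain, and general-milieu contexts) and the goal/task hierarchy of needs could be encoded. All class names, the 0-3 knowledge scale, and the loan-applicant example are assumptions made here for illustration, not constructs taken from the paper.

```python
# Hypothetical sketch of the stakeholder framework as data structures.
# Names, levels, and the example are assumptions, not from the paper.
from dataclasses import dataclass, field
from enum import Enum


class KnowledgeType(Enum):
    FORMAL = "formal"              # gained through structured education or training
    INSTRUMENTAL = "instrumental"  # gained by doing, e.g. using a tool in practice
    PERSONAL = "personal"          # gained through lived experience


class Context(Enum):
    MACHINE_LEARNING = "machine learning"
    DATA_DOMAIN = "data domain"
    MILIEU = "general milieu"


@dataclass
class StakeholderProfile:
    """Characterizes a stakeholder by knowledge (type x context), not by role label."""
    name: str
    # Each (knowledge type, context) pair maps to a coarse 0-3 level (assumed scale).
    knowledge: dict[tuple[KnowledgeType, Context], int] = field(default_factory=dict)


@dataclass
class InterpretabilityNeed:
    """Hierarchical need: a higher-level domain goal decomposed into lower-level tasks."""
    goal: str
    tasks: list[str]


# Example: a loan applicant characterized by what they know, independent of any
# coarse 'end user' role category.
applicant = StakeholderProfile(
    name="loan applicant",
    knowledge={
        (KnowledgeType.PERSONAL, Context.DATA_DOMAIN): 3,   # knows their own finances well
        (KnowledgeType.FORMAL, Context.MACHINE_LEARNING): 0,  # no formal ML training
    },
)

need = InterpretabilityNeed(
    goal="contest an automated loan decision",
    tasks=["identify influential features", "see what change would flip the outcome"],
)

print(applicant.name, need.goal, sep=" -> ")
```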
Related papers
- Iterative Utility Judgment Framework via LLMs Inspired by Relevance in Philosophy [66.95501113584541]
Utility and topical relevance are critical measures in information retrieval.
We propose an Iterative utiliTy judgmEnt fraMework to promote each step of the cycle of Retrieval-Augmented Generation.
arXiv Detail & Related papers (2024-06-17T07:52:42Z) - The Impossibility of Fair LLMs [59.424918263776284]
The need for fair AI is increasingly clear in the era of large language models (LLMs).
We review the technical frameworks that machine learning researchers have used to evaluate fairness.
We develop guidelines for the more realistic goal of achieving fairness in particular use cases.
arXiv Detail & Related papers (2024-05-28T04:36:15Z) - Perspectives on Large Language Models for Relevance Judgment [56.935731584323996]
When asked, large language models (LLMs) claim that they can assist with relevance judgments.
It is not clear whether automated judgments can reliably be used in evaluations of retrieval systems.
arXiv Detail & Related papers (2023-04-13T13:08:38Z) - Fair Representation Learning using Interpolation Enabled Disentanglement [9.043741281011304]
We propose a novel method to address two key issues: (a) Can we simultaneously learn fair disentangled representations while ensuring the utility of the learned representation for downstream tasks, and (b) Can we provide theoretical insights into when the proposed approach will be both fair and accurate.
To address the former, we propose the method FRIED, Fair Representation learning using Interpolation Enabled Disentanglement.
arXiv Detail & Related papers (2021-07-31T17:32:12Z) - Individual Explanations in Machine Learning Models: A Survey for
Practitioners [69.02688684221265]
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt them, as their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial number of methods for providing interpretable explanations of machine learning models.
arXiv Detail & Related papers (2021-04-09T01:46:34Z) - What Do We Want From Explainable Artificial Intelligence (XAI)? -- A
Stakeholder Perspective on XAI and a Conceptual Model Guiding
Interdisciplinary XAI Research [0.8707090176854576]
The main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems.
It often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata.
arXiv Detail & Related papers (2021-02-15T19:54:33Z) - Through the Data Management Lens: Experimental Analysis and Evaluation
of Fair Classification [75.49600684537117]
Data management research is showing an increasing presence and interest in topics related to data and algorithmic fairness.
We contribute a broad analysis of 13 fair classification approaches and additional variants, covering their correctness, fairness, efficiency, scalability, and stability.
Our analysis highlights novel insights on the impact of different metrics and high-level approach characteristics on different aspects of performance.
arXiv Detail & Related papers (2021-01-18T22:55:40Z) - Counterfactual Explanations for Machine Learning: A Review [5.908471365011942]
We review and categorize research on counterfactual explanations in machine learning.
Modern approaches to counterfactual explainability in machine learning draw connections to the established legal doctrine in many countries.
arXiv Detail & Related papers (2020-10-20T20:08:42Z) - Self-supervised Learning from a Multi-view Perspective [121.63655399591681]
We show that self-supervised representations can extract task-relevant information and discard task-irrelevant information.
Our theoretical framework paves the way to a larger space of self-supervised learning objective design.
arXiv Detail & Related papers (2020-06-10T00:21:35Z) - One Explanation Does Not Fit All: The Promise of Interactive
Explanations for Machine Learning Transparency [21.58324172085553]
We discuss the promises of Interactive Machine Learning for improved transparency of black-box systems.
We show how to personalise counterfactual explanations by interactively adjusting their conditional statements.
We argue that adjusting the explanation itself and its content is more important.
arXiv Detail & Related papers (2020-01-27T13:10:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.