Human Factors in Model Interpretability: Industry Practices, Challenges,
and Needs
- URL: http://arxiv.org/abs/2004.11440v2
- Date: Sat, 30 May 2020 12:10:43 GMT
- Title: Human Factors in Model Interpretability: Industry Practices, Challenges,
and Needs
- Authors: Sungsoo Ray Hong, Jessica Hullman, Enrico Bertini
- Abstract summary: We conduct interviews with industry practitioners to understand how they conceive of and design for interpretability while they plan, build, and use their models.
Based on our results, we differentiate interpretability roles, processes, goals and strategies as they exist within organizations making heavy use of ML models.
The characterization of interpretability work that emerges from our analysis suggests that model interpretability frequently involves cooperation and mental model comparison between people in different roles.
- Score: 28.645803845464915
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As the use of machine learning (ML) models in product development and
data-driven decision-making processes has become pervasive in many domains,
people's focus on building a well-performing model has increasingly shifted to
understanding how their model works. While scholarly interest in model
interpretability has grown rapidly in research communities like HCI, ML, and
beyond, little is known about how practitioners perceive and aim to provide
interpretability in the context of their existing workflows. This lack of
understanding of interpretability as practiced may prevent interpretability
research from addressing important needs, or lead to unrealistic solutions. To
bridge this gap, we conducted 22 semi-structured interviews with industry
practitioners to understand how they conceive of and design for
interpretability while they plan, build, and use their models. Based on a
qualitative analysis of our results, we differentiate interpretability roles,
processes, goals and strategies as they exist within organizations making heavy
use of ML models. The characterization of interpretability work that emerges
from our analysis suggests that model interpretability frequently involves
cooperation and mental model comparison between people in different roles,
often aimed at building trust not only between people and models but also
between people within the organization. We present implications for design that
discuss gaps between the interpretability challenges that practitioners face in
their practice and approaches proposed in the literature, highlighting possible
research directions that can better address real-world needs.
Related papers
- Data Analysis in the Era of Generative AI [56.44807642944589]
This paper explores the potential of AI-powered tools to reshape data analysis, focusing on design considerations and challenges.
We explore how the emergence of large language and multimodal models offers new opportunities to enhance various stages of data analysis workflow.
We then examine human-centered design principles that facilitate intuitive interactions, build user trust, and streamline the AI-assisted analysis workflow across multiple apps.
arXiv Detail & Related papers (2024-09-27T06:31:03Z)
- Towards a Unified Framework for Evaluating Explanations [0.6138671548064356]
We argue that explanations serve as mediators between models and stakeholders, whether for intrinsically interpretable models or opaque black-box models.
We illustrate these criteria, as well as specific evaluation methods, using examples from an ongoing study of an interpretable neural network for predicting a particular learner behavior.
arXiv Detail & Related papers (2024-05-22T21:49:28Z)
- The Essential Role of Causality in Foundation World Models for Embodied AI [102.75402420915965]
Embodied AI agents will require the ability to perform new tasks in many different real-world environments.
Current foundation models fail to accurately model physical interactions and are therefore insufficient for Embodied AI.
The study of causality lends itself to the construction of veridical world models.
arXiv Detail & Related papers (2024-02-06T17:15:33Z)
- Model-Agnostic Interpretation Framework in Machine Learning: A Comparative Study in NBA Sports [0.2937071029942259]
We propose an innovative framework to reconcile the trade-off between model performance and interpretability.
Our approach is centered around modular operations on high-dimensional data, which enable end-to-end processing while preserving interpretability.
We have extensively tested our framework and validated its superior efficacy in achieving a balance between computational efficiency and interpretability.
arXiv Detail & Related papers (2024-01-05T04:25:21Z)
- Foundational Models Defining a New Era in Vision: A Survey and Outlook [151.49434496615427]
Vision systems that can see and reason about the compositional nature of visual scenes are fundamental to understanding our world.
The models learned to bridge the gap between such modalities, coupled with large-scale training data, facilitate contextual reasoning, generalization, and prompt capabilities at test time.
The output of such models can be modified through human-provided prompts without retraining, e.g., segmenting a particular object by providing a bounding box, holding interactive dialogues by asking questions about an image or video scene, or steering a robot's behavior through language instructions.
arXiv Detail & Related papers (2023-07-25T17:59:18Z)
- Foundation Models for Decision Making: Problems, Methods, and Opportunities [124.79381732197649]
Foundation models pretrained on diverse data at scale have demonstrated extraordinary capabilities in a wide range of vision and language tasks.
New paradigms are emerging for training foundation models to interact with other agents and perform long-term reasoning.
Research at the intersection of foundation models and decision making holds tremendous promise for creating powerful new systems.
arXiv Detail & Related papers (2023-03-07T18:44:07Z)
- DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z)
- Individual Explanations in Machine Learning Models: A Survey for Practitioners [69.02688684221265]
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt them, as their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial number of methods for providing interpretable explanations of machine learning models.
arXiv Detail & Related papers (2021-04-09T01:46:34Z)
- Evaluating the Interpretability of Generative Models by Interactive Reconstruction [30.441247705313575]
We introduce a task to quantify the human-interpretability of generative model representations.
We find that performance on this task differentiates entangled and disentangled models much more reliably than baseline approaches.
arXiv Detail & Related papers (2021-02-02T02:38:14Z)