Interactive Model Cards: A Human-Centered Approach to Model Documentation
- URL: http://arxiv.org/abs/2205.02894v1
- Date: Thu, 5 May 2022 19:19:28 GMT
- Title: Interactive Model Cards: A Human-Centered Approach to Model Documentation
- Authors: Anamaria Crisan, Margaret Drouhard, Jesse Vig, Nazneen Rajani
- Abstract summary: Deep learning models for natural language processing are increasingly adopted and deployed by analysts without formal training in NLP or machine learning.
The documentation intended to convey the model's details and appropriate use is tailored primarily to individuals with ML or NLP expertise.
We conduct a design inquiry into interactive model cards, which augment traditionally static model cards with affordances for exploring model documentation and interacting with the models themselves.
- Score: 20.880991026743498
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning models for natural language processing (NLP) are increasingly
adopted and deployed by analysts without formal training in NLP or machine
learning (ML). However, the documentation intended to convey the model's
details and appropriate use is tailored primarily to individuals with ML or NLP
expertise. To address this gap, we conduct a design inquiry into interactive
model cards, which augment traditionally static model cards with affordances
for exploring model documentation and interacting with the models themselves.
Our investigation consists of an initial conceptual study with experts in ML,
NLP, and AI Ethics, followed by a separate evaluative study with non-expert
analysts who use ML models in their work. Using a semi-structured interview
format coupled with a think-aloud protocol, we collected feedback from a total
of 30 participants who engaged with different versions of standard and
interactive model cards. Through a thematic analysis of the collected data, we
identified several conceptual dimensions that summarize the strengths and
limitations of standard and interactive model cards, including: stakeholders;
design; guidance; understandability & interpretability; sensemaking &
skepticism; and trust & safety. Our findings demonstrate the importance of
carefully considered design and interactivity for orienting and supporting
non-expert analysts using deep learning models, along with a need for
consideration of broader sociotechnical contexts and organizational dynamics.
We have also identified design elements, such as language, visual cues, and
warnings, among others, that support interactivity and make non-interactive
content accessible. We summarize our findings as design guidelines and discuss
their implications for a human-centered approach towards AI/ML documentation.
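The core idea of pairing static documentation with live model interaction can be illustrated with a toy sketch. This is a hypothetical interface, not the authors' prototype; `ModelCard`, `run_interactive_card`, and `predict_fn` are illustrative names, and `predict_fn` stands in for any deployed model:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    # Static documentation fields, loosely following the standard model-card schema.
    name: str
    intended_use: str
    limitations: str
    warnings: list = field(default_factory=list)

def run_interactive_card(card, predict_fn):
    """Show the documentation, then let the analyst try the model directly,
    surfacing the card's warnings alongside every prediction."""
    print(f"Model: {card.name}")
    print(f"Intended use: {card.intended_use}")
    print(f"Limitations: {card.limitations}")
    while (text := input("Try an input (blank line to quit): ")):
        print("Prediction:", predict_fn(text))
        for warning in card.warnings:
            print("Note:", warning)

# Example with a trivial stand-in model:
card = ModelCard(
    name="toy-sentiment",
    intended_use="Exploratory sentiment labeling of short English texts.",
    limitations="Not validated for sarcasm, code-switching, or non-English input.",
    warnings=["Scores are uncalibrated; do not treat them as probabilities."],
)
# run_interactive_card(card, lambda t: "positive" if "good" in t.lower() else "negative")
```

The design point the sketch makes concrete is that warnings travel with each prediction rather than sitting in a separate static document.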
Related papers
- Data Analysis in the Era of Generative AI [56.44807642944589]
This paper explores the potential of AI-powered tools to reshape data analysis, focusing on design considerations and challenges.
We explore how the emergence of large language and multimodal models offers new opportunities to enhance various stages of data analysis workflow.
We then examine human-centered design principles that facilitate intuitive interactions, build user trust, and streamline the AI-assisted analysis workflow across multiple apps.
arXiv Detail & Related papers (2024-09-27T06:31:03Z) - Interactive Topic Models with Optimal Transport [75.26555710661908]
We present EdTM, an approach to label-name-supervised topic modeling.
EdTM casts topic modeling as an assignment problem while leveraging LM/LLM-based document-topic affinities.
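As a rough illustration of the assignment-via-optimal-transport idea, here is a minimal entropic Sinkhorn sketch. This is not the authors' implementation; the LM-derived document-topic affinity matrix is assumed precomputed, and uniform marginals are an assumption:

```python
import numpy as np

def sinkhorn_assign(affinity, reg=0.1, n_iters=200):
    """Soft-assign documents to topics by solving an entropic optimal-transport
    problem over an (n_docs x n_topics) affinity matrix."""
    K = np.exp(affinity / reg)             # Gibbs kernel; higher affinity = cheaper transport
    n_docs, n_topics = affinity.shape
    r = np.full(n_docs, 1.0 / n_docs)      # uniform mass over documents (assumed)
    c = np.full(n_topics, 1.0 / n_topics)  # uniform mass over topics (assumed)
    u = np.ones(n_docs)
    for _ in range(n_iters):               # Sinkhorn iterations: alternate row/column scaling
        v = c / (K.T @ u)
        u = r / (K @ v)
    plan = u[:, None] * K * v[None, :]     # transport plan = soft document-topic assignment
    return plan.argmax(axis=1)             # hard topic labels, if needed

# affinity could be, e.g., cosine similarity between document and
# topic-name embeddings from a language model (assumed given).
```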
arXiv Detail & Related papers (2024-06-28T13:57:27Z) - LLM Comparator: Visual Analytics for Side-by-Side Evaluation of Large
Language Models [31.426274932333264]
We present Comparator, a novel visual analytics tool for interactively analyzing results from automatic side-by-side evaluation.
The tool supports interactive workflows for users to understand when and why a model performs better or worse than a baseline model.
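The kind of slice-level aggregation such a tool surfaces can be sketched in a few lines. The rating schema below is hypothetical, not the tool's actual data model:

```python
from collections import Counter, defaultdict

def win_rates_by_slice(ratings):
    """Aggregate side-by-side judgments into per-slice win rates, the basic
    quantity behind 'when does model A beat model B' views."""
    counts = defaultdict(Counter)
    for r in ratings:  # r: {"slice": ..., "winner": "A" | "B" | "tie"} (assumed schema)
        counts[r["slice"]][r["winner"]] += 1
    return {s: {k: v / sum(c.values()) for k, v in c.items()} for s, c in counts.items()}

ratings = [
    {"slice": "coding", "winner": "A"},
    {"slice": "coding", "winner": "B"},
    {"slice": "summarization", "winner": "A"},
]
print(win_rates_by_slice(ratings))
# {'coding': {'A': 0.5, 'B': 0.5}, 'summarization': {'A': 1.0}}
```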
arXiv Detail & Related papers (2024-02-16T09:14:49Z) - Revisiting Self-supervised Learning of Speech Representation from a
Mutual Information Perspective [68.20531518525273]
We take a closer look at existing self-supervised methods for speech from an information-theoretic perspective.
We use linear probes to estimate the mutual information between the target information and learned representations.
We explore the potential of evaluating representations in a self-supervised fashion, where we estimate the mutual information between different parts of the data without using any labels.
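One common recipe consistent with this setup, sketched below under assumed variable names rather than the paper's exact estimator: a linear probe's cross-entropy upper-bounds the conditional entropy H(Y|Z), so H(Y) minus the probe's loss lower-bounds the mutual information I(Y; Z):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def probe_mi_lower_bound(Z_train, y_train, Z_test, y_test):
    """Estimate a lower bound on I(Y; Z) in nats via a linear probe:
    I(Y; Z) >= H(Y) - CE(probe), since the probe's held-out cross-entropy
    upper-bounds H(Y|Z). y_* are integer class labels; Z_* are representations."""
    probe = LogisticRegression(max_iter=1000).fit(Z_train, y_train)
    ce = log_loss(y_test, probe.predict_proba(Z_test), labels=probe.classes_)
    p = np.bincount(y_test) / len(y_test)
    h_y = -np.sum(p * np.log(p + 1e-12))  # marginal label entropy H(Y), in nats
    return h_y - ce
```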
arXiv Detail & Related papers (2024-01-16T21:13:22Z) - Interpreting Pretrained Language Models via Concept Bottlenecks [55.47515772358389]
Pretrained language models (PLMs) have made significant strides in various natural language processing tasks.
The lack of interpretability due to their "black-box" nature poses challenges for responsible implementation.
We propose a novel approach to interpreting PLMs by employing high-level, meaningful concepts that are easily understandable for humans.
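The architectural idea can be sketched as a bottleneck head on top of a PLM's pooled embedding. This is a minimal PyTorch sketch with illustrative names and dimensions, not the paper's exact model:

```python
import torch
import torch.nn as nn

class ConceptBottleneckHead(nn.Module):
    """Route all predictive signal through a small set of named, human-readable
    concepts, so each label decision can be traced back to concept activations."""
    def __init__(self, hidden_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        self.to_concepts = nn.Linear(hidden_dim, n_concepts)  # one unit per named concept
        self.to_label = nn.Linear(n_concepts, n_classes)      # label sees only the concepts

    def forward(self, pooled_embedding: torch.Tensor):
        concepts = torch.sigmoid(self.to_concepts(pooled_embedding))
        return concepts, self.to_label(concepts)

# head = ConceptBottleneckHead(hidden_dim=768, n_concepts=8, n_classes=2)
# concepts, logits = head(plm_pooled_output)  # plm_pooled_output: (batch, 768)
```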
arXiv Detail & Related papers (2023-11-08T20:41:18Z) - Eliciting Model Steering Interactions from Users via Data and Visual
Design Probes [8.45602005745865]
Domain experts increasingly use automated data science tools to incorporate machine learning (ML) models in their work but struggle to "codify" these models when they are incorrect.
For these experts, semantic interactions can provide an accessible avenue to guide and refine ML models without having to dive into their technical details.
This study examines how experts with a spectrum of ML expertise use semantic interactions to update a simple classification model.
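At its simplest, a semantic interaction reduces to folding an expert's correction back into the training data and refitting. The sketch below is deliberately minimal and hypothetical; the study's design probes cover much richer interactions:

```python
from sklearn.tree import DecisionTreeClassifier

def apply_corrections(model, X, y, corrections):
    """Fold expert relabels (example index -> corrected label) into the training
    set and refit: the bare-bones version of steering a model via interaction.
    y must be a mutable array of labels."""
    for idx, label in corrections.items():
        y[idx] = label
    return model.fit(X, y)

# model = apply_corrections(DecisionTreeClassifier(max_depth=3), X, y, {17: 1, 42: 0})
```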
arXiv Detail & Related papers (2023-10-12T20:34:02Z) - Foundational Models Defining a New Era in Vision: A Survey and Outlook [151.49434496615427]
Vision systems that see and reason about the compositional nature of visual scenes are fundamental to understanding our world.
Models learned to bridge such modalities, coupled with large-scale training data, facilitate contextual reasoning, generalization, and prompting capabilities at test time.
The output of such models can be modified through human-provided prompts without retraining, e.g., segmenting a particular object by providing a bounding box, holding interactive dialogues by asking questions about an image or video scene, or manipulating a robot's behavior through language instructions.
arXiv Detail & Related papers (2023-07-25T17:59:18Z) - Leveraging Explanations in Interactive Machine Learning: An Overview [10.284830265068793]
Explanations have gained an increasing level of interest in the AI and Machine Learning (ML) communities.
This paper presents an overview of research where explanations are combined with interactive capabilities.
arXiv Detail & Related papers (2022-07-29T07:46:11Z) - Are Metrics Enough? Guidelines for Communicating and Visualizing
Predictive Models to Subject Matter Experts [7.768301998812552]
We describe an iterative study conducted with both subject matter experts and data scientists to understand the gaps in communication.
We derive a set of communication guidelines that use visualization as a common medium for communicating the strengths and weaknesses of a model.
arXiv Detail & Related papers (2022-05-11T19:40:24Z) - DIME: Fine-grained Interpretations of Multimodal Models via Disentangled
Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
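The flavor of disentangling a multimodal prediction into per-modality and interaction contributions can be sketched with a simple baseline decomposition. This is illustrative only and not DIME's actual procedure, which builds on LIME-style local explanations; `f` and the baseline inputs are assumptions:

```python
def decompose_multimodal(f, text, image, text_base, image_base):
    """Split a scalar score f(text, image) into text-only, image-only, and
    interaction terms relative to baseline inputs (e.g., empty string, blank image)."""
    full = f(text, image)
    text_only = f(text, image_base)
    image_only = f(text_base, image)
    base = f(text_base, image_base)
    return {
        "text": text_only - base,
        "image": image_only - base,
        "interaction": full - text_only - image_only + base,  # what neither modality explains alone
    }
```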
arXiv Detail & Related papers (2022-03-03T20:52:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.