fAIlureNotes: Supporting Designers in Understanding the Limits of AI Models for Computer Vision Tasks
- URL: http://arxiv.org/abs/2302.11703v1
- Date: Wed, 22 Feb 2023 23:41:36 GMT
- Title: fAIlureNotes: Supporting Designers in Understanding the Limits of AI Models for Computer Vision Tasks
- Authors: Steven Moore, Q. Vera Liao, Hariharan Subramonyam
- Abstract summary: fAIlureNotes is a designer-centered failure exploration and analysis tool.
It supports designers in evaluating models and identifying failures across diverse user groups and scenarios.
- Score: 32.53515595703429
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To design with AI models, user experience (UX) designers must assess the fit
between the model and user needs. Based on user research, they need to
contextualize the model's behavior and potential failures within their
product-specific data instances and user scenarios. However, our formative
interviews with ten UX professionals revealed that such a proactive discovery
of model limitations is challenging and time-intensive. Furthermore, designers
often lack technical knowledge of AI and accessible exploration tools, which
challenges their understanding of model capabilities and limitations. In this
work, we introduce a failure-driven design approach to AI: a workflow that
encourages designers to explore model behavior and failure patterns early in
the design process. We implement fAIlureNotes, a designer-centered failure
exploration and analysis tool that supports designers in evaluating models and
identifying failures across diverse user groups and scenarios. Our
evaluation with UX practitioners shows that fAIlureNotes outperforms today's
interactive model cards in assessing context-specific model performance.
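The summary above describes fAIlureNotes only at the workflow level: designers run a vision model over product-specific data instances, grouped by user scenario, and record where it fails. As a rough illustration of that bookkeeping (not the paper's actual implementation), the minimal Python sketch below assumes a hypothetical FailureNote record and summarize_failures helper; the scenario names and labels are invented for the example.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class FailureNote:
    scenario: str      # user scenario, e.g. "low-light kitchen" (invented example)
    instance_id: str   # identifier of the product-specific data instance
    expected: str      # label the designer expected for this instance
    predicted: str     # label the vision model actually returned
    note: str = ""     # free-form designer annotation of the failure

def summarize_failures(notes):
    """Count expected-vs-predicted mismatches per scenario."""
    failures = defaultdict(int)
    for n in notes:
        if n.expected != n.predicted:
            failures[n.scenario] += 1
    return dict(failures)

# Hypothetical usage: in practice the predictions would come from the
# computer vision model being evaluated against user-research scenarios.
notes = [
    FailureNote("low-light kitchen", "img_001", "mug", "bowl",
                "confuses mugs with bowls in dim lighting"),
    FailureNote("low-light kitchen", "img_002", "mug", "mug"),
    FailureNote("crowded street", "img_107", "stroller", "suitcase",
                "misses strollers when partially occluded"),
]
print(summarize_failures(notes))  # {'low-light kitchen': 1, 'crowded street': 1}
```

Grouping failures by scenario in this way is what lets a designer compare context-specific performance across user groups, which is the comparison behind the evaluation against interactive model cards.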
Related papers
- Interactive Visual Assessment for Text-to-Image Generation Models [28.526897072724662]
We propose DyEval, a dynamic interactive visual assessment framework for generative models.
DyEval features an intuitive visual interface that enables users to interactively explore and analyze model behaviors.
Our framework provides valuable insights for improving generative models and has broad implications for advancing the reliability and capabilities of visual generation systems.
arXiv Detail & Related papers (2024-11-23T10:06:18Z)
- Sketch2Code: Evaluating Vision-Language Models for Interactive Web Design Prototyping [55.98643055756135]
We introduce Sketch2Code, a benchmark that evaluates state-of-the-art Vision Language Models (VLMs) on automating the conversion of rudimentary sketches into webpage prototypes.
We analyze ten commercial and open-source models, showing that Sketch2Code is challenging for existing VLMs.
A user study with UI/UX experts reveals a significant preference for proactive question-asking over passive feedback reception.
arXiv Detail & Related papers (2024-10-21T17:39:49Z)
- Data Analysis in the Era of Generative AI [56.44807642944589]
This paper explores the potential of AI-powered tools to reshape data analysis, focusing on design considerations and challenges.
We explore how the emergence of large language and multimodal models offers new opportunities to enhance various stages of data analysis workflow.
We then examine human-centered design principles that facilitate intuitive interactions, build user trust, and streamline the AI-assisted analysis workflow across multiple apps.
arXiv Detail & Related papers (2024-09-27T06:31:03Z)
- Aligning Vision Models with Human Aesthetics in Retrieval: Benchmarks and Algorithms [91.19304518033144]
We aim to align vision models with human aesthetic standards in a retrieval system.
We propose a preference-based reinforcement learning method that fine-tunes vision models to better align them with human aesthetics.
arXiv Detail & Related papers (2024-06-13T17:59:20Z)
- Content-Centric Prototyping of Generative AI Applications: Emerging Approaches and Challenges in Collaborative Software Teams [2.369736515233951]
Our work aims to understand how collaborative software teams set up and apply design guidelines and values, iteratively prototype prompts, and evaluate prompts to achieve desired outcomes.
Our findings reveal a content-centric prototyping approach in which teams begin with the content they want to generate, then identify specific attributes, constraints, and values, and explore methods to give users the ability to influence and interact with those attributes.
arXiv Detail & Related papers (2024-02-27T17:56:10Z)
- Designerly Understanding: Information Needs for Model Transparency to Support Design Ideation for AI-Powered User Experience [42.73738624139124]
Designers face hurdles understanding AI technologies, such as pre-trained language models, as design materials.
This limits their ability to ideate and make decisions about whether, where, and how to use AI.
Our study highlights the pivotal role that UX designers can play in Responsible AI.
arXiv Detail & Related papers (2023-02-21T02:06:24Z)
- Design Space Exploration and Explanation via Conditional Variational Autoencoders in Meta-model-based Conceptual Design of Pedestrian Bridges [52.77024349608834]
This paper provides a performance-driven design exploration framework to augment the human designer through a Conditional Variational Autoencoder (CVAE).
The CVAE is trained on 18,000 synthetically generated instances of a pedestrian bridge in Switzerland.
arXiv Detail & Related papers (2022-11-29T17:28:31Z)
- Interactive Model Cards: A Human-Centered Approach to Model Documentation [20.880991026743498]
Deep learning models for natural language processing are increasingly adopted and deployed by analysts without formal training in NLP or machine learning.
The documentation intended to convey the model's details and appropriate use is tailored primarily to individuals with ML or NLP expertise.
We conduct a design inquiry into interactive model cards, which augment traditionally static model cards with affordances for exploring model documentation and interacting with the models themselves.
arXiv Detail & Related papers (2022-05-05T19:19:28Z)
- Towards A Process Model for Co-Creating AI Experiences [16.767362787750418]
Thinking of technology as a design material is appealing to designers.
As a material, AI resists this approach because its properties emerge as part of the design process itself.
We investigate the co-creation process through a design study with 10 pairs of designers and engineers.
arXiv Detail & Related papers (2021-04-15T16:53:34Z)
- Rethinking Generalization of Neural Models: A Named Entity Recognition Case Study [81.11161697133095]
We take the NER task as a testbed to analyze the generalization behavior of existing models from different perspectives.
Experiments with in-depth analyses diagnose the bottleneck of existing neural NER models.
As a by-product of this paper, we have open-sourced a project that involves a comprehensive summary of recent NER papers.
arXiv Detail & Related papers (2020-01-12T04:33:53Z)