Designerly Understanding: Information Needs for Model Transparency to
Support Design Ideation for AI-Powered User Experience
- URL: http://arxiv.org/abs/2302.10395v1
- Date: Tue, 21 Feb 2023 02:06:24 GMT
- Title: Designerly Understanding: Information Needs for Model Transparency to
Support Design Ideation for AI-Powered User Experience
- Authors: Q. Vera Liao, Hariharan Subramonyam, Jennifer Wang, Jennifer Wortman
Vaughan
- Abstract summary: Designers face hurdles understanding AI technologies, such as pre-trained language models, as design materials.
This limits their ability to ideate and make decisions about whether, where, and how to use AI.
Our study highlights the pivotal role that UX designers can play in Responsible AI.
- Score: 42.73738624139124
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the widespread use of artificial intelligence (AI), designing user
experiences (UX) for AI-powered systems remains challenging. UX designers face
hurdles understanding AI technologies, such as pre-trained language models, as
design materials. This limits their ability to ideate and make decisions about
whether, where, and how to use AI. To address this problem, we bridge the
literature on AI design and AI transparency to explore whether and how
frameworks for transparent model reporting can support design ideation with
pre-trained models. By interviewing 23 UX practitioners, we find that
practitioners frequently work with pre-trained models, but lack support for
UX-led ideation. Through a scenario-based design task, we identify common goals
that designers seek model understanding for and pinpoint their model
transparency information needs. Our study highlights the pivotal role that UX
designers can play in Responsible AI and calls for supporting their
understanding of AI limitations through model transparency and interrogation.
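As a hedged illustration of the kind of transparent model reporting the abstract refers to, the sketch below models a minimal model-card-style report in Python. The class name, fields, and example values are all assumptions made for illustration; they are not a schema from the paper.

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Minimal model-card-style transparency report (illustrative fields only)."""
    model_name: str
    intended_uses: list[str]        # tasks the model was built and evaluated for
    training_data: str              # provenance of the training corpus
    performance: dict[str, float]   # metric name -> score on held-out data
    known_limitations: list[str]    # failure modes a designer should ideate around

# Hypothetical example: the kind of summary a designer might consult when
# deciding whether, where, and how to use a pre-trained model.
card = ModelCard(
    model_name="pretrained-sentiment-model",  # hypothetical model name
    intended_uses=["short-text sentiment classification"],
    training_data="English-language product reviews (public corpus)",
    performance={"accuracy": 0.91, "macro_f1": 0.89},  # illustrative values
    known_limitations=[
        "degrades on sarcasm and code-switched text",
        "not evaluated on domain-specific jargon",
    ],
)
print(card.known_limitations)
```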
Related papers
- Sketch2Code: Evaluating Vision-Language Models for Interactive Web Design Prototyping [55.98643055756135]
We introduce Sketch2Code, a benchmark that evaluates state-of-the-art Vision Language Models (VLMs) on automating the conversion of rudimentary sketches into webpage prototypes.
We analyze ten commercial and open-source models, showing that Sketch2Code is challenging for existing VLMs.
A user study with UI/UX experts reveals a significant preference for proactive question-asking over passive feedback reception.
arXiv Detail & Related papers (2024-10-21T17:39:49Z)
- fAIlureNotes: Supporting Designers in Understanding the Limits of AI Models for Computer Vision Tasks [32.53515595703429]
fAIlureNotes is a designer-centered failure exploration and analysis tool.
It supports designers in evaluating models and identifying failures across diverse user groups and scenarios.
arXiv Detail & Related papers (2023-02-22T23:41:36Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Enabling Automated Machine Learning for Model-Driven AI Engineering [60.09869520679979]
We propose a novel approach to enable Model-Driven Software Engineering and Model-Driven AI Engineering.
In particular, we support Automated ML, thus assisting software engineers without deep AI knowledge in developing AI-intensive systems.
arXiv Detail & Related papers (2022-03-06T10:12:56Z)
- Investigating Explainability of Generative AI for Code through Scenario-based Design [44.44517254181818]
Generative AI (GenAI) technologies are maturing and being applied to application domains such as software engineering.
We conduct 9 workshops with 43 software engineers in which real examples from state-of-the-art generative AI models were used to elicit users' explainability needs.
Our work explores explainability needs for GenAI for code and demonstrates how human-centered approaches can drive the technical development of XAI in novel domains.
arXiv Detail & Related papers (2022-02-10T08:52:39Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Towards A Process Model for Co-Creating AI Experiences [16.767362787750418]
Thinking of technology as a design material is appealing to designers.
As a material, AI resists this approach because its properties emerge as part of the design process itself.
We investigate the co-creation process through a design study with 10 pairs of designers and engineers.
arXiv Detail & Related papers (2021-04-15T16:53:34Z)
- Question-Driven Design Process for Explainable AI User Experiences [12.883597052015109]
Designers face the challenge of selecting the most suitable XAI techniques and translating them into UX solutions.
We propose a Question-Driven Design Process that grounds the user needs, choices of XAI techniques, design, and evaluation of XAI UX all in the user questions.
We provide a mapping guide between prototypical user questions and exemplars of XAI techniques, serving as boundary objects to support collaboration between designers and AI engineers.
arXiv Detail & Related papers (2021-04-08T02:51:36Z)
- Questioning the AI: Informing Design Practices for Explainable AI User Experiences [33.81809180549226]
A surge of interest in explainable AI (XAI) has led to a vast collection of algorithmic work on the topic.
We seek to identify gaps between the current XAI algorithmic work and practices to create explainable AI products.
We develop an algorithm-informed XAI question bank in which user needs for explainability are represented.
arXiv Detail & Related papers (2020-01-08T12:34:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.