PromptInfuser: How Tightly Coupling AI and UI Design Impacts Designers'
Workflows
- URL: http://arxiv.org/abs/2310.15435v1
- Date: Tue, 24 Oct 2023 01:04:27 GMT
- Title: PromptInfuser: How Tightly Coupling AI and UI Design Impacts Designers'
Workflows
- Authors: Savvas Petridis, Michael Terry, Carrie J. Cai
- Abstract summary: We investigate how coupling prompt and UI design affects designers' AI iteration.
Grounding this research, we developed PromptInfuser, a Figma plugin that enables users to create mockups.
In a study with 14 designers, we compare PromptInfuser to designers' current AI-prototyping workflow.
- Score: 23.386764579779538
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Prototyping AI applications is notoriously difficult. While large language
model (LLM) prompting has dramatically lowered the barriers to AI prototyping,
designers are still prototyping AI functionality and UI separately. We
investigate how coupling prompt and UI design affects designers' workflows.
Grounding this research, we developed PromptInfuser, a Figma plugin that
enables users to create semi-functional mockups by connecting UI elements to
the inputs and outputs of prompts. In a study with 14 designers, we compare
PromptInfuser to designers' current AI-prototyping workflow. PromptInfuser was
perceived to be significantly more useful for communicating product ideas, more
capable of producing prototypes that realistically represent the envisioned
artifact, more efficient for prototyping, and more helpful for anticipating UI
issues and technical constraints. PromptInfuser encouraged iteration over
prompt and UI together, which helped designers identify UI and prompt
incompatibilities and reflect upon their total solution. Together, these
findings inform future systems for prototyping AI applications.
Related papers
- Survey of User Interface Design and Interaction Techniques in Generative AI Applications [79.55963742878684]
We aim to create a compendium of different user-interaction patterns that can be used as a reference for designers and developers alike.
We also strive to lower the entry barrier for those attempting to learn more about the design of generative AI applications.
arXiv Detail & Related papers (2024-10-28T23:10:06Z)
- Sketch2Code: Evaluating Vision-Language Models for Interactive Web Design Prototyping [55.98643055756135]
We introduce Sketch2Code, a benchmark that evaluates state-of-the-art Vision Language Models (VLMs) on automating the conversion of rudimentary sketches into webpage prototypes.
We analyze ten commercial and open-source models, showing that Sketch2Code is challenging for existing VLMs.
A user study with UI/UX experts reveals a significant preference for proactive question-asking over passive feedback reception.
arXiv Detail & Related papers (2024-10-21T17:39:49Z)
- On AI-Inspired UI-Design [5.969881132928718]
We discuss three major complementary approaches for using Artificial Intelligence (AI) to help app designers create better, more diverse, and more creative UIs for mobile apps.
First, designers can prompt a Large Language Model (LLM) like GPT to directly generate and adjust one or multiple UIs.
Second, a Vision-Language Model (VLM) enables designers to effectively search a large screenshot dataset, e.g. from apps published in app stores.
Third, a Diffusion Model (DM) specifically designed to generate app UIs can serve as a source of inspirational images.
arXiv Detail & Related papers (2024-06-19T15:28:21Z)
- UIClip: A Data-driven Model for Assessing User Interface Design [20.66914084220734]
We develop a machine-learned model, UIClip, for assessing the design quality and visual relevance of a user interface.
We show how UIClip can facilitate downstream applications that rely on instantaneous assessment of UI design quality.
arXiv Detail & Related papers (2024-04-18T20:43:08Z)
- Farsight: Fostering Responsible AI Awareness During AI Application Prototyping [32.235398722593544]
We present Farsight, a novel in situ interactive tool that helps people identify potential harms from the AI applications they are prototyping.
Based on a user's prompt, Farsight highlights news articles about relevant AI incidents and allows users to explore and edit LLM-generated use cases, stakeholders, and harms.
We report design insights from a co-design study with 10 AI prototypers and findings from a user study with 42 AI prototypers.
arXiv Detail & Related papers (2024-02-23T14:38:05Z)
- The role of interface design on prompt-mediated creativity in Generative AI [0.0]
We analyze more than 145,000 prompts from two Generative AI platforms.
We find that users exhibit a tendency towards exploration of new topics over exploitation of concepts visited previously.
arXiv Detail & Related papers (2023-11-30T22:33:34Z)
- Interactive Text Generation [75.23894005664533]
We introduce a new Interactive Text Generation task that allows training generation models interactively without the costs of involving real users.
We train our interactive models using Imitation Learning, and our experiments against competitive non-interactive generation models show that models trained interactively are superior to their non-interactive counterparts.
arXiv Detail & Related papers (2023-03-02T01:57:17Z)
- Optimizing Prompts for Text-to-Image Generation [97.61295501273288]
Well-designed prompts can guide text-to-image models to generate amazing images.
However, performant prompts are often model-specific and misaligned with user input.
We propose prompt adaptation, a framework that automatically adapts original user input to model-preferred prompts.
arXiv Detail & Related papers (2022-12-19T16:50:41Z)
- Interactive and Visual Prompt Engineering for Ad-hoc Task Adaptation with Large Language Models [116.25562358482962]
State-of-the-art neural language models can be used to solve ad-hoc language tasks without the need for supervised training.
PromptIDE allows users to experiment with prompt variations, visualize prompt performance, and iteratively optimize prompts.
arXiv Detail & Related papers (2022-08-16T17:17:53Z)
- VINS: Visual Search for Mobile User Interface Design [66.28088601689069]
This paper introduces VINS, a visual search framework, that takes as input a UI image and retrieves visually similar design examples.
The framework achieves a mean Average Precision of 76.39% for UI detection and high performance in querying similar UI designs.
arXiv Detail & Related papers (2021-02-10T01:46:33Z)
- BlackBox Toolkit: Intelligent Assistance to UI Design [9.749560288448114]
We propose to modify the UI design process by assisting it with artificial intelligence (AI).
We propose to enable AI to perform repetitive tasks for the designer while allowing the designer to take command of the creative process.
arXiv Detail & Related papers (2020-04-04T14:50:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.