Spacewalker: Rapid UI Design Exploration Using Lightweight Markup
Enhancement and Crowd Genetic Programming
- URL: http://arxiv.org/abs/2102.09039v1
- Date: Wed, 17 Feb 2021 21:54:49 GMT
- Title: Spacewalker: Rapid UI Design Exploration Using Lightweight Markup
Enhancement and Crowd Genetic Programming
- Authors: Mingyuan Zhong, Gang Li, Yang Li
- Abstract summary: We present Spacewalker, a tool that allows designers to rapidly search a large design space for an optimal web UI.
Designers first annotate each attribute they want to explore in a typical HTML page, using a simple markup extension we designed.
Spacewalker then parses the annotated HTML specification, and intelligently generates and distributes various configurations of the web UI to crowd workers for evaluation.
- Score: 7.872888246498886
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: User interface design is a complex task that involves designers examining a
wide range of options. We present Spacewalker, a tool that allows designers to
rapidly search a large design space for an optimal web UI with integrated
support. Designers first annotate each attribute they want to explore in a
typical HTML page, using a simple markup extension we designed. Spacewalker
then parses the annotated HTML specification, and intelligently generates and
distributes various configurations of the web UI to crowd workers for
evaluation. We enhanced a genetic algorithm to accommodate crowd worker
responses from pairwise comparison of UI designs, which is crucial for
obtaining reliable feedback. Based on our experiments, Spacewalker allows
designers to effectively search a large design space of a UI, using the
language they are familiar with, and improve their design rapidly at a minimal
cost.
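
The abstract describes the pipeline only at a high level; the following is a minimal sketch of the two ideas it names, assuming a hypothetical `data-explore-*` annotation syntax and a placeholder `crowd_prefers` judgment, neither of which is taken from the paper. Each UI variant is one choice per annotated attribute, crowd workers compare variants in pairs, and a simple genetic loop uses the resulting win counts as fitness.

```python
# Hypothetical sketch of the Spacewalker idea (not the tool's actual markup or API):
# (1) a designer marks explorable attributes in HTML with an assumed `data-explore-*`
#     annotation, (2) each UI variant is a vector of choices over those attributes,
#     (3) crowd pairwise comparisons drive a basic genetic loop via win-count fitness.
import random
import re

ANNOTATED_HTML = """
<button style="background:{color}" data-explore-color="#1a73e8|#d93025|#188038"
        data-explore-label="Buy now|Add to cart|Purchase">{label}</button>
"""

def parse_options(html):
    # Collect each assumed data-explore-* annotation and its "|"-separated options.
    return {name: opts.split("|")
            for name, opts in re.findall(r'data-explore-(\w+)="([^"]+)"', html)}

def render(html, genome):
    # Strip the annotations and substitute one concrete choice per attribute.
    page = re.sub(r'\s*data-explore-\w+="[^"]+"', "", html)
    return page.format(**genome)

def crowd_prefers(variant_a, variant_b):
    # Placeholder for a crowd worker's pairwise judgment; random stub here.
    return random.random() < 0.5

def evolve(options, population=8, generations=5):
    names = list(options)
    pop = [{n: random.choice(options[n]) for n in names} for _ in range(population)]
    for _ in range(generations):
        wins = [0] * len(pop)
        for i in range(len(pop)):              # round-robin pairwise comparisons
            for j in range(i + 1, len(pop)):
                if crowd_prefers(pop[i], pop[j]):
                    wins[i] += 1
                else:
                    wins[j] += 1
        ranked = [g for _, g in sorted(zip(wins, pop), key=lambda p: -p[0])]
        parents = ranked[: population // 2]    # selection by win count
        children = []
        while len(children) < population - len(parents):
            a, b = random.sample(parents, 2)
            child = {n: random.choice([a[n], b[n]]) for n in names}  # crossover
            if random.random() < 0.2:                                # mutation
                n = random.choice(names)
                child[n] = random.choice(options[n])
            children.append(child)
        pop = parents + children
    return ranked[0]

if __name__ == "__main__":
    opts = parse_options(ANNOTATED_HTML)
    best = evolve(opts)
    print(render(ANNOTATED_HTML, best))
```

In a real deployment the `crowd_prefers` stub would be replaced by rendered variants distributed to crowd workers, and the win-count ranking could be replaced by a preference model such as Bradley-Terry; both choices are assumptions of this sketch, not details reported in the paper.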
Related papers
- Sketch2Code: Evaluating Vision-Language Models for Interactive Web Design Prototyping [55.98643055756135]
We introduce Sketch2Code, a benchmark that evaluates state-of-the-art Vision Language Models (VLMs) on automating the conversion of rudimentary sketches into webpage prototypes.
We analyze ten commercial and open-source models, showing that Sketch2Code is challenging for existing VLMs.
A user study with UI/UX experts reveals a significant preference for proactive question-asking over passive feedback reception.
arXiv Detail & Related papers (2024-10-21T17:39:49Z)
- Design Spaces and How Software Designers Use Them: a sampler [2.2674718030662713]
"Design spaces" are used to describe the spectrum of available design alternatives.
We show how design spaces can serve designers as lenses to reduce the overall space of possibilities.
arXiv Detail & Related papers (2024-07-26T04:19:28Z)
- Automatic Layout Planning for Visually-Rich Documents with Instruction-Following Models [81.6240188672294]
In graphic design, non-professional users often struggle to create visually appealing layouts due to limited skills and resources.
We introduce a novel multimodal instruction-following framework for layout planning, allowing users to easily arrange visual elements into tailored layouts.
Our method not only simplifies the design process for non-professionals but also surpasses the performance of few-shot GPT-4V models, with mIoU higher by 12% on Crello.
arXiv Detail & Related papers (2024-04-23T17:58:33Z)
- I-Design: Personalized LLM Interior Designer [57.00412237555167]
I-Design is a personalized interior designer that allows users to generate and visualize their design goals through natural language communication.
I-Design starts with a team of large language model agents that engage in dialogues and logical reasoning with one another.
The final design is then constructed in 3D by retrieving and integrating assets from an existing object database.
arXiv Detail & Related papers (2024-04-03T16:17:53Z)
- Compositional Generative Inverse Design [69.22782875567547]
Inverse design, where we seek to design input variables in order to optimize an underlying objective function, is an important problem.
We show that by optimizing over the learned energy function captured by the diffusion model, rather than directly over a learned surrogate model, we can avoid the adversarial examples that such direct optimization produces.
In an N-body interaction task and a challenging 2D multi-airfoil design task, we demonstrate that by composing the learned diffusion model at test time, our method allows us to design initial states and boundary shapes.
arXiv Detail & Related papers (2024-01-24T01:33:39Z)
- PromptInfuser: How Tightly Coupling AI and UI Design Impacts Designers' Workflows [23.386764579779538]
We investigate how coupling prompt and UI design affects designers' AI iteration.
To ground this research, we developed PromptInfuser, a Figma plugin that enables users to create mockups.
In a study with 14 designers, we compare PromptInfuser to designers' current AI-prototyping workflow.
arXiv Detail & Related papers (2023-10-24T01:04:27Z)
- Architext: Language-Driven Generative Architecture Design [1.393683063795544]
Architext enables design generation using only natural language prompts given as input to large-scale language models.
We conduct a thorough quantitative evaluation of Architext's downstream task performance, focusing on semantic accuracy and diversity for a number of pre-trained language models.
Architext models are able to learn the specific design task, generating valid residential layouts at a near 100% rate.
arXiv Detail & Related papers (2023-03-13T23:11:05Z)
- Preference-Learning Emitters for Mixed-Initiative Quality-Diversity Algorithms [0.6445605125467573]
In mixed-initiative co-creation tasks, it is important to provide multiple relevant suggestions to the designer.
We propose a general framework for preference-learning emitters (PLEs) and apply it to a procedural content generation task in the video game Space Engineers.
arXiv Detail & Related papers (2022-10-25T08:45:00Z)
- Investigating Positive and Negative Qualities of Human-in-the-Loop Optimization for Designing Interaction Techniques [55.492211642128446]
Designers reportedly struggle with design optimization tasks where they are asked to find a combination of design parameters that maximizes a given set of objectives.
Model-based computational design algorithms assist designers by generating design examples during design.
Black box methods for assistance, on the other hand, can work with any design problem.
arXiv Detail & Related papers (2022-04-15T20:40:43Z)
- VINS: Visual Search for Mobile User Interface Design [66.28088601689069]
This paper introduces VINS, a visual search framework that takes a UI image as input and retrieves visually similar design examples.
The framework achieves a mean Average Precision of 76.39% for UI detection and high performance in querying similar UI designs.
arXiv Detail & Related papers (2021-02-10T01:46:33Z)
- Scout: Rapid Exploration of Interface Layout Alternatives through High-Level Design Constraints [19.91735675022113]
Scout helps designers explore alternatives through mixed-initiative interaction with high-level constraints and design feedback.
Scout formalizes low-level spatial constraints that a solver uses to generate potential layouts.
In an evaluation with 18 interface designers, we found that Scout: (1) helps designers create more spatially diverse layouts with similar quality to those created with a baseline tool; and (2) can help designers avoid a linear design process and quickly ideate layouts they do not believe they would have thought of on their own.
arXiv Detail & Related papers (2020-01-15T16:49:26Z)