Creating User Interface Mock-ups from High-Level Text Descriptions with Deep-Learning Models
- URL: http://arxiv.org/abs/2110.07775v1
- Date: Thu, 14 Oct 2021 23:48:46 GMT
- Title: Creating User Interface Mock-ups from High-Level Text Descriptions with Deep-Learning Models
- Authors: Forrest Huang, Gang Li, Xin Zhou, John F. Canny, Yang Li
- Abstract summary: We introduce three deep-learning techniques to create low-fidelity UI mock-ups from a natural language phrase.
We quantitatively and qualitatively compare and contrast each method's ability to suggest coherent, diverse, and relevant UI design mock-ups.
- Score: 19.63933191791183
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The design process of user interfaces (UIs) often begins with articulating
high-level design goals. Translating these high-level design goals into
concrete design mock-ups, however, requires extensive effort and UI design
expertise. To facilitate this process for app designers and developers, we
introduce three deep-learning techniques to create low-fidelity UI mock-ups
from a natural language phrase that describes the high-level design goal (e.g.
"pop up displaying an image and other options"). In particular, we contribute
two retrieval-based methods and one generative method, as well as
pre-processing and post-processing techniques to ensure the quality of the
created UI mock-ups. We quantitatively and qualitatively compare and contrast
each method's ability to suggest coherent, diverse, and relevant UI design
mock-ups. We further evaluate these methods with 15 professional UI designers
and practitioners to understand each method's advantages and disadvantages. The
designers responded positively to the potential of these methods for assisting
the design process.
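As a rough illustration of how a retrieval-based method of the kind described above could work, the sketch below embeds the design phrase and a corpus of captioned UI screens into a shared vector space and returns the nearest screens. This is a hedged sketch, not the authors' implementation: the encoder is a stand-in, and every name (embed, suggest_mockups, the corpus entries) is an assumption for illustration.

```python
# Minimal sketch of retrieval-based mock-up suggestion (assumed design).
import numpy as np

def embed(texts):
    """Placeholder for any sentence encoder (e.g. a BERT-style model);
    returns one unit vector per text, deterministic per string."""
    vecs = []
    for t in texts:
        rng = np.random.default_rng(abs(hash(t)) % 2**32)
        v = rng.normal(size=64)
        vecs.append(v / np.linalg.norm(v))
    return np.array(vecs)

def suggest_mockups(phrase, corpus, k=3):
    """Return the k corpus screens whose captions are closest to the phrase."""
    query = embed([phrase])[0]
    captions = embed([s["caption"] for s in corpus])
    scores = captions @ query              # cosine similarity (unit vectors)
    return [corpus[i] for i in np.argsort(-scores)[:k]]

corpus = [
    {"caption": "pop up dialog with image and buttons", "layout": "dialog.json"},
    {"caption": "login screen with two text fields", "layout": "login.json"},
    {"caption": "settings list with toggle switches", "layout": "settings.json"},
]
print(suggest_mockups("pop up displaying an image and other options", corpus, k=1))
```

In a real system the placeholder encoder would be replaced by a trained text model, and the corpus captions by indexed UI screens; the retrieval loop itself stays this simple.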
Related papers
- MetaDesigner: Advancing Artistic Typography through AI-Driven, User-Centric, and Multilingual WordArt Synthesis [65.78359025027457]
MetaDesigner revolutionizes artistic typography by leveraging the strengths of Large Language Models (LLMs) to drive a design paradigm centered around user engagement.
A comprehensive feedback mechanism harnesses insights from multimodal models and user evaluations to refine and enhance the design process iteratively.
Empirical validations highlight MetaDesigner's capability to effectively serve diverse WordArt applications, consistently producing aesthetically appealing and context-sensitive results.
arXiv Detail & Related papers (2024-06-28T11:58:26Z)
- PosterLLaVa: Constructing a Unified Multi-modal Layout Generator with LLM [58.67882997399021]
Our research introduces a unified framework for automated graphic layout generation.
Our data-driven method employs structured text (JSON format) and visual instruction tuning to generate layouts.
We conducted extensive experiments and achieved state-of-the-art (SOTA) performance on public multi-modal layout generation benchmarks.
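The abstract does not reproduce PosterLLaVa's actual JSON schema, so the snippet below is only a plausible shape for the structured-text layout representation it describes, with a simple bounds check; all field names are assumptions.

```python
# Hypothetical JSON layout of the kind the abstract describes (assumed schema).
import json

layout = {
    "canvas": {"width": 1.0, "height": 1.0},
    "elements": [
        {"category": "title", "bbox": [0.10, 0.05, 0.80, 0.15]},  # x, y, w, h
        {"category": "image", "bbox": [0.10, 0.25, 0.80, 0.45]},
        {"category": "button", "bbox": [0.35, 0.78, 0.30, 0.10]},
    ],
}

def valid(layout):
    """Check that every box lies inside the unit canvas."""
    for e in layout["elements"]:
        x, y, w, h = e["bbox"]
        if not (0 <= x and 0 <= y and x + w <= 1 and y + h <= 1):
            return False
    return True

print(json.dumps(layout, indent=2))
print("valid:", valid(layout))
```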
arXiv Detail & Related papers (2024-06-05T03:05:52Z)
- Automatic Layout Planning for Visually-Rich Documents with Instruction-Following Models [81.6240188672294]
In graphic design, non-professional users often struggle to create visually appealing layouts due to limited skills and resources.
We introduce a novel multimodal instruction-following framework for layout planning, allowing users to easily arrange visual elements into tailored layouts.
Our method not only simplifies the design process for non-professionals but also surpasses the performance of few-shot GPT-4V models, achieving a 12% higher mIoU on Crello.
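The Crello comparison is reported in mIoU, the mean intersection-over-union between predicted and ground-truth element boxes. A minimal sketch of that metric follows, assuming (x, y, w, h) boxes already matched one-to-one; the paper's exact matching protocol may differ.

```python
# Mean IoU over matched layout elements (illustrative, assumed box format).
def iou(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))  # overlap width
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))  # overlap height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def mean_iou(pred, gold):
    return sum(iou(p, g) for p, g in zip(pred, gold)) / len(gold)

pred = [(0.10, 0.05, 0.80, 0.15), (0.10, 0.30, 0.80, 0.40)]
gold = [(0.12, 0.05, 0.78, 0.15), (0.10, 0.25, 0.80, 0.45)]
print(f"mIoU = {mean_iou(pred, gold):.3f}")
```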
arXiv Detail & Related papers (2024-04-23T17:58:33Z)
- UIClip: A Data-driven Model for Assessing User Interface Design [20.66914084220734]
We develop a machine-learned model, UIClip, for assessing the design quality and visual relevance of a user interface.
We show how UIClip can facilitate downstream applications that rely on instantaneous assessment of UI design quality.
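UIClip's released interface is not shown here; the sketch below only illustrates the general CLIP-style recipe the name suggests: embed a screenshot and a design description into a shared space and use their cosine similarity as a score. Both encoders are placeholders, and design_score is an invented name.

```python
# CLIP-style UI scoring sketch (assumed design, placeholder encoders).
import numpy as np

def encode_image(pixels):
    """Placeholder image encoder; stands in for a trained vision tower."""
    v = pixels.astype(np.float64).ravel()[:64]
    return v / (np.linalg.norm(v) + 1e-9)

def encode_text(desc):
    """Placeholder text encoder; stands in for a trained text tower."""
    rng = np.random.default_rng(abs(hash(desc)) % 2**32)
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)

def design_score(screenshot, description):
    """Cosine similarity between the two embeddings, in [-1, 1]."""
    return float(encode_image(screenshot) @ encode_text(description))

screenshot = np.full((128, 128, 3), 128, dtype=np.uint8)
print(design_score(screenshot, "a clean login screen with high contrast"))
```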
arXiv Detail & Related papers (2024-04-18T20:43:08Z)
- I-Design: Personalized LLM Interior Designer [57.00412237555167]
I-Design is a personalized interior designer that allows users to generate and visualize their design goals through natural language communication.
I-Design starts with a team of large language model agents that engage in dialogues and logical reasoning with one another.
The final design is then constructed in 3D by retrieving and integrating assets from an existing object database.
arXiv Detail & Related papers (2024-04-03T16:17:53Z)
- Compositional Generative Inverse Design [69.22782875567547]
Inverse design, where we seek input variables that optimize an underlying objective function, is an important problem.
We show that by optimizing over the learned energy function captured by the diffusion model, rather than backpropagating through a learned forward model, we can avoid the adversarial examples that such direct optimization tends to produce.
In an N-body interaction task and a challenging 2D multi-airfoil design task, we demonstrate that by composing the learned diffusion model at test time, our method allows us to design initial states and boundary shapes.
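A toy sketch of the compositional idea, with hand-made quadratic energies standing in for the learned diffusion-model energies: each constraint contributes one energy term, and test-time composition means descending on their sum. All functions here are invented for illustration.

```python
# Toy compositional energy-based design: descend on a sum of energies,
# not through a forward model. Real energies would come from trained models.
import numpy as np

def energy_target(x):        # prefers designs near a target specification
    return float(np.sum((x - np.array([1.0, -0.5])) ** 2))

def energy_feasible(x):      # prefers designs inside the unit ball
    return max(0.0, float(x @ x) - 1.0) ** 2

def composed(x):             # test-time composition: sum the energy terms
    return energy_target(x) + energy_feasible(x)

def grad(f, x, eps=1e-5):
    """Finite-difference gradient, standing in for autograd."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

x = np.zeros(2)              # initial design variables
for _ in range(500):         # gradient descent on the composed energy
    x = x - 0.05 * grad(composed, x)
print("designed x:", x, "energy:", composed(x))
```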
arXiv Detail & Related papers (2024-01-24T01:33:39Z)
- EGFE: End-to-end Grouping of Fragmented Elements in UI Designs with Multimodal Learning [10.885275494978478]
Grouping fragmented elements can greatly improve the readability and maintainability of the generated code.
Current methods employ a two-stage strategy that introduces hand-crafted rules to group fragmented elements.
We propose EGFE, a novel method for automatically grouping fragmented elements end-to-end via UI sequence prediction.
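As a hedged illustration of what sequence prediction buys here, the helper below decodes Begin/Inside/Outside tags over a serialized element sequence into fragment groups. The tag set and decoding rule are assumptions for illustration, not necessarily EGFE's.

```python
# Decode per-element B/I/O tags into groups of fragment indices (assumed tags).
def decode_groups(tags):
    groups, current = [], []
    for i, tag in enumerate(tags):
        if tag == "B":                 # start a new fragment group
            if current:
                groups.append(current)
            current = [i]
        elif tag == "I" and current:   # continue the open group
            current.append(i)
        else:                          # "O": element stands alone
            if current:
                groups.append(current)
                current = []
    if current:
        groups.append(current)
    return groups

# Six UI elements; elements 1-3 form one fragmented group, 4-5 another.
print(decode_groups(["O", "B", "I", "I", "B", "I"]))  # [[1, 2, 3], [4, 5]]
```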
arXiv Detail & Related papers (2023-09-18T15:28:12Z)
- Evaluation of Sketch-Based and Semantic-Based Modalities for Mockup Generation [15.838427479984926]
Design mockups are essential instruments for visualizing and testing design ideas.
We present and evaluate two different modalities for generating mockups: one based on hand-drawn sketches and one based on semantic descriptions.
Our results show that sketch-based generation was more intuitive and expressive, while semantic-based generative AI obtained better results in terms of quality and fidelity.
arXiv Detail & Related papers (2023-03-22T16:47:36Z)
- Investigating Positive and Negative Qualities of Human-in-the-Loop Optimization for Designing Interaction Techniques [55.492211642128446]
Designers reportedly struggle with design optimization tasks where they are asked to find a combination of design parameters that maximizes a given set of objectives.
Model-based computational design algorithms assist designers by generating design examples during the design process.
Black box methods for assistance, on the other hand, can work with any design problem.
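A minimal sketch of what "black box" means in this setting: the optimizer touches the design problem only through an objective query, so the same loop works for any parameterization. The objective and parameter names below are invented for illustration.

```python
# Black-box design optimization via random search (illustrative objective).
import random

def objective(params):
    """Stand-in for measured task performance of an interaction technique."""
    return -(params["gain"] - 2.0) ** 2 - (params["latency_ms"] - 30) ** 2 / 100

def random_search(bounds, budget=200):
    best, best_score = None, float("-inf")
    for _ in range(budget):
        cand = {k: random.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
        score = objective(cand)          # the only interface to the problem
        if score > best_score:
            best, best_score = cand, score
    return best

random.seed(0)
print(random_search({"gain": (0.5, 5.0), "latency_ms": (5, 100)}))
```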
arXiv Detail & Related papers (2022-04-15T20:40:43Z)
- Evaluating Mixed-Initiative Procedural Level Design Tools using a Triple-Blind Mixed-Method User Study [0.0]
A tool which generates levels using interactive evolutionary optimisation was designed for this study.
The tool identifies level design patterns in an initial hand-designed map and uses that information to drive an interactive optimisation algorithm.
A rigorous user study compared the experiences of designers using the mixed-initiative tool with those of designers given a tool that provided completely random level suggestions.
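A toy sketch of such an interactive-evolution loop, with a stand-in rating function in place of the designer's picks; the structure is assumed from the abstract, not taken from the tool's source.

```python
# Interactive evolutionary level design, with a scripted stand-in "designer".
import random

def rate(level):
    """Stand-in for the designer's judgement: prefer ~40% wall density."""
    return -abs(sum(level) / len(level) - 0.4)

def mutate(level, p=0.05):
    """Flip each cell (wall/floor) with small probability."""
    return [1 - c if random.random() < p else c for c in level]

random.seed(1)
population = [[random.randint(0, 1) for _ in range(100)] for _ in range(8)]
for generation in range(50):
    population.sort(key=rate, reverse=True)   # designer keeps the best maps
    parents = population[:4]
    population = parents + [mutate(random.choice(parents)) for _ in range(4)]
population.sort(key=rate, reverse=True)
print("best wall density:", sum(population[0]) / 100)
```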
arXiv Detail & Related papers (2020-05-15T11:40:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.